Raymarching explained interactively

Introduction

Raymarching is a 3D rendering technique popular in the demoscene and on websites like Shadertoy.
One of the impressive things about these demos is that they use no 3D geometry whatsoever. All the objects that make up the scene are generated in real time using mathematical functions.

In this interactive tutorial, we will learn how the raymarching algorithm works from the ground up and create our own raymarcher with shaders.

Signed distance function

Before diving into the raymarching algorithm, we need to talk about distances.

You may be familiar with the formula for the distance between two points. It is known as the Euclidean distance and is calculated using the Pythagorean theorem:

EuclideanDistance(P1,P2) = √((P1.x−P2.x)² + (P1.y−P2.y)²)

You can interact by dragging points around
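The formula above translates directly to code. Here is a minimal sketch in Python (the article's demos run in shaders, but the math is identical):

```python
import math

def euclidean_distance(p1, p2):
    """Distance between two 2D points via the Pythagorean theorem."""
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

print(euclidean_distance((0.0, 0.0), (3.0, 4.0)))  # -> 5.0
```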

Now, instead of two points, imagine we want to calculate the distance between a point and a circle. To do that, we first calculate the Euclidean distance between the point and the center of the circle, then subtract the radius of the circle:

DistanceToCircle(P,C,R) = EuclideanDistance(P,C)-R

Notice that when the point is inside the circle, the distance is negative—hence "signed" distance function. This fact will become useful later on.
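As a sketch, the circle's signed distance function is a one-liner. The sign falls out for free: points inside the circle are closer to the center than the radius, so the subtraction goes negative.

```python
import math

def sd_circle(p, center, radius):
    """Signed distance from point p to a circle: negative inside."""
    return math.hypot(p[0] - center[0], p[1] - center[1]) - radius

print(sd_circle((5.0, 0.0), (0.0, 0.0), 2.0))  # ->  3.0 (outside)
print(sd_circle((1.0, 0.0), (0.0, 0.0), 2.0))  # -> -1.0 (inside)
```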

The SDF of an object is a function that, given a point in space as input, outputs the distance from that point to the closest point of the object.

The SDF of a rectangle is a little bit more complicated; here is a very good explanation of how to derive it.

DistanceToRect(P,C,Size) = ||max(|P-C|-(Size/2),0)||
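In code, the rectangle formula looks like this (a Python sketch; variable names are mine). Note that this form returns 0 everywhere inside the rectangle; the fully signed version adds an extra term, min(max(q.x, q.y), 0), to get negative interior distances.

```python
import math

def sd_rect(p, center, size):
    """Distance to an axis-aligned rectangle, per ||max(|P-C| - Size/2, 0)||.

    qx/qy measure how far p sits beyond the rectangle's half-extents on
    each axis; clamping at 0 discards axes where p is within the extent.
    """
    qx = abs(p[0] - center[0]) - size[0] / 2
    qy = abs(p[1] - center[1]) - size[1] / 2
    return math.hypot(max(qx, 0.0), max(qy, 0.0))

print(sd_rect((4.0, 0.0), (0.0, 0.0), (2.0, 2.0)))  # -> 3.0
print(sd_rect((0.0, 0.0), (0.0, 0.0), (2.0, 2.0)))  # -> 0.0 (inside)
```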

If we have multiple shapes and we want to find the closest one to P, we take the minimum of all the shapes' SDFs.
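A sketch of that idea, with two circles chosen arbitrarily for illustration:

```python
import math

def scene_sdf(p):
    """Distance to the closest of several shapes: the min of their SDFs."""
    circle_a = math.hypot(p[0] - 3.0, p[1]) - 1.0   # circle at (3, 0), radius 1
    circle_b = math.hypot(p[0], p[1] - 5.0) - 2.0   # circle at (0, 5), radius 2
    return min(circle_a, circle_b)

print(scene_sdf((0.0, 0.0)))  # -> 2.0 (the first circle is closer)
```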

The Raymarching algorithm

The raymarching algorithm goes like this:

  1. Set the current point to the camera's origin;
  2. Calculate the distance from the current point to the closest object in the scene, using the minimum of the objects' SDFs;
  3. Move the current point along some direction (a ray) by the distance calculated in step 2;
  4. Repeat steps 2 and 3 until the minimum distance is smaller than some small threshold Δ, indicating that the ray has hit an object;
  5. Cap the number of iterations to avoid infinite loops in case no object is hit.
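The steps above can be sketched as a small Python loop (the names and default parameters are mine, not from the article). The key insight is that stepping by the scene's minimum distance is always safe: no object can be closer than that, so the ray can never overshoot a surface.

```python
import math

def raymarch(origin, direction, scene_sdf, eps=1e-3, max_dist=100.0, max_steps=64):
    """March a point along `direction` from `origin` (steps 1-5 above).

    Returns the distance travelled on a hit, or None if the ray escapes
    the scene or the iteration cap is reached.
    """
    t = 0.0                                       # step 1: start at the origin
    for _ in range(max_steps):                    # step 5: cap the iterations
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t)
        d = scene_sdf(p)                          # step 2: closest distance
        if d < eps:                               # step 4: close enough -> hit
            return t
        t += d                                    # step 3: move along the ray
        if t > max_dist:                          # ray escaped to infinity
            return None
    return None

circle = lambda p: math.hypot(p[0], p[1]) - 1.0   # unit circle at the origin
print(raymarch((-5.0, 0.0), (1.0, 0.0), circle))  # -> 4.0 (hits the circle)
print(raymarch((-5.0, 2.0), (1.0, 0.0), circle))  # -> None (misses)
```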

The idea is to run the above algorithm for each pixel on the screen. If the ray hits an object, we set the pixel to the object's color. Otherwise, we set the pixel to the background color.

You can see the iterations of the algorithm in the demo below. Click to start the animation. You can also change the camera's parameters, like the field of view or the number of rays we send.

Near plane:
Field of view:
Number of rays:
Iterations:

Going 3D

Until now, we have only seen how raymarching works in two dimensions. To do 3D rendering, we are going to use a fragment shader.

I'll skip explaining how to set up a fragment shader like this; you can either use Shadertoy or draw a fullscreen quad in a custom 3D engine.


First, we will display the screen space position of each pixel (multiplied by the aspect ratio). The horizontal position is represented by the red channel, and the vertical position by the green channel.

Just like we did in 2D, the idea is to send a ray for each pixel on the screen. The view frustum extends from the camera towards the corners of the screen.
By iteratively moving a point along the ray, we test whether we hit an object (the minimum distance falls below a small threshold) or the ray goes to infinity (the travelled distance exceeds a maximum).
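One common way to build the ray for a pixel is sketched below in Python. The conventions here are assumptions on my part (camera looking down −z, symmetric vertical field of view); the article's shader may use a different camera model.

```python
import math

def ray_direction(uv, fov_degrees=60.0):
    """Build a normalized view-space ray direction for a pixel.

    `uv` is the pixel's screen-space position, centred at 0 and already
    scaled by the aspect ratio. A wider field of view pushes the image
    plane closer to the camera, spreading rays further toward the corners.
    """
    z = -1.0 / math.tan(math.radians(fov_degrees) / 2.0)
    length = math.sqrt(uv[0] ** 2 + uv[1] ** 2 + z ** 2)
    return (uv[0] / length, uv[1] / length, z / length)

print(ray_direction((0.0, 0.0)))  # centre pixel looks straight down -z
```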

The SDF of a 3D sphere is basically the same as that of the 2D circle. In this shader, we display the depth (distance) of each point in a scene composed of a single sphere:
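In sketch form, the only change from the circle is the third coordinate:

```python
import math

def sd_sphere(p, center, radius):
    """Same formula as the 2D circle, extended to three coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, center))) - radius

print(sd_sphere((0.0, 3.0, 4.0), (0.0, 0.0, 0.0), 1.0))  # -> 4.0
```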

Here we add the SDF for a cube and a plane:
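Python sketches of those two SDFs follow. The box formula is the 3D analogue of the rectangle formula from earlier, here with the interior term included so distances go negative inside; the plane is simply the point's height above it. (These are standard formulas, not code taken from the article's shader.)

```python
import math

def sd_box(p, half_extents):
    """Signed distance to an axis-aligned box centred at the origin."""
    q = [abs(p[i]) - half_extents[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)  # negative when inside
    return outside + inside

def sd_plane(p, height=0.0):
    """Signed distance to a horizontal ground plane at y = height."""
    return p[1] - height

print(sd_box((3.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # ->  2.0
print(sd_box((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # -> -1.0 (inside)
print(sd_plane((0.0, 1.5, 0.0)))                 # ->  1.5
```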

If we return a numeric ID alongside the distance, we can choose a color for each different object. If the ray goes to infinity, we can draw a flat color for a basic sky:
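One way to sketch this is to have the scene function return a (distance, id) pair and take the minimum by distance. The shapes and IDs below are illustrative, not taken from the article's scene.

```python
import math

def scene(p):
    """Return (distance, material id); min by distance picks the closest object."""
    sphere = math.sqrt(p[0] ** 2 + (p[1] - 1.0) ** 2 + p[2] ** 2) - 1.0  # at (0,1,0)
    ground = p[1]                                                         # plane y = 0
    return min((sphere, 1), (ground, 2))  # id 1 = sphere, id 2 = ground

dist, mat_id = scene((0.0, 3.0, 0.0))
print(dist, mat_id)  # -> 1.0 1: the sphere is closest, so use its color
```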

And if we compute the normals and do some basic lighting calculations, we can add diffuse shading to our objects:
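A common way to get the normals (though not necessarily the article's exact method) is to approximate the gradient of the SDF with central differences, then feed the normal into a Lambertian diffuse term:

```python
import math

def estimate_normal(sdf, p, eps=1e-4):
    """Approximate the surface normal as the SDF's gradient (central differences)."""
    n = []
    for i in range(3):
        hi = list(p); hi[i] += eps
        lo = list(p); lo[i] -= eps
        n.append(sdf(hi) - sdf(lo))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def diffuse(normal, light_dir):
    """Lambertian shading term: clamp(dot(n, l), 0)."""
    return max(sum(a * b for a, b in zip(normal, light_dir)), 0.0)

unit_sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
n = estimate_normal(unit_sphere, [0.0, 1.0, 0.0])
print(n)                             # ~ (0, 1, 0): straight up at the sphere's top
print(diffuse(n, (0.0, 1.0, 0.0)))   # ~ 1.0: fully lit from directly above
```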

Further reading