A ray tracer is software that creates images resembling real-life scenes. The quality of
these images, and how good a ray tracer is at achieving this goal, depends entirely on the skill of the coder(s) behind it.
Writing a serious ray tracer (for a single
person) takes years. Writing a basic ray tracer can be done in weeks or less.
Ray tracing means writing software (most commonly in C/C++) that simulates images using mathematically
defined objects, cameras, lights etc., and having the computer show us what it all looks like after our code has had its say.
Achieving this is all about math, and in particular,
vectors. Vectors form the fundamentals of ray tracing.
We can't mimic the real world yet... not even close. Even with the incredible power our home computers boast today, we are
very, very, VERY far from anything that mimics real-life scenes.
A standard home computer is capable of about 60 GFLOPS (2016/2017) (Giga FLoating point Operations Per Second)...
or sixty BILLION floating point ops/sec.
That is a lot of computing power indeed, but still, in effect, impossibly far from enough to do real-time ray tracing of real scenes.
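To put that number into perspective, here is a quick back-of-the-envelope calculation (assuming a 1920x1080 screen at 60 frames/second; the numbers are illustrative, not a benchmark):

```cpp
#include <cassert>

// Back-of-the-envelope: how many floating point operations a machine
// with a given FLOPS budget can spend on each pixel, each frame.
long long FlopsPerPixel(long long flops, int width, int height, int fps)
{
    return flops / (static_cast<long long>(width) * height * fps);
}
```

Sixty GFLOPS at 1080p/60 works out to roughly 480 operations per pixel per frame. A handful of reflection bounces, shadow tests and lighting calculations would exhaust that budget instantly, which is why real-time ray tracing of real scenes is out of reach.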
Hollywood companies use thousands upon thousands of XEON-based multiprocessor servers in clusters to generate the 24 or 60 images/second they need. Add 3D movies to that.
A single 18-core XEON-based server is about $10,000. There's a reason Hollywood animated movies have a heavy price tag.
90+ percent of building a ray tracer is optimizing it and making it smarter, because ray tracing eats computer power for breakfast.
Ray tracing has given us "Ratatouille",
"Finding Nemo", "Ice Age", "Cars", "Planes" etc., and resulted in the first ever Academy Award for a Danish dude, Henrik Wann Jensen, for his groundbreaking new techniques: subsurface scattering (Gollum in "The Lord of the Rings") and photon mapping (an efficient
and accurate global illumination technique).
I don't know if he directly contributed to "The Martian" with Matt Damon, but he did a lot of the groundwork.
Henrik Wann Jensen, Academy Award winner!.. and a ray tracing coder at heart. (Gollum owes his realistic-looking skin to subsurface scattering.)
So. Bottom line: ray tracing tries to recreate real-life images using math.
Everything in ray tracing
is math: the camera, shapes, environment, movement, materials, lights etc.
Example of a real-life scene:
In your living room you have a lamp on your table top. It has a standard light bulb that emits light (photons in every direction).
So this light source emits an
endless amount of photons in a virtually infinite number of directions from where it is located.
We _could_ write a program that recreated this scene... but it would never
finish rendering.
It would need to follow every single photon (an infinite number) emitted by the light source, bounce the photon around in the scene, determine hits
on reflective or refractive shapes, and maybe... _maybe_ camera hits.
A photon would register only when it mathematically hit the eye (a single 3D point) of the camera _exactly_.
So, in other words, only an extremely small number of samples would ever meet the camera, and we would waste an enormous amount of computing power on nothing.
So... we need
another technique, and it is not that complex: Reverse (or backward) ray tracing:
The principle is rather simple: You fire a ray from the eye or camera, and gather the information
from whatever that ray hits, to finally shade the pixel you are currently rendering.
THIS is basically it. Send a ray into the scene, and find the color of whatever it hits...
including reflection and refraction.
So, let's stick with _backward_ ray tracing.
We shoot rays into the scene from our eye (camera), and our camera looks through all the pixels of our screen, pixel by pixel.
Let's say we have a monitor with a resolution of 1920x1080.
The camera would start shooting rays through the upper-left corner of the screen, (x, y) = (0, 0), continue to (x, y) = (1, 0), and so on.
Based on the 3D position of the
camera, and the definition of the camera plane, a ray vector is generated for each point and shot into the scene.
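As a sketch of what generating such an eye ray could look like, here is a simple pinhole camera at the origin, looking down the negative z axis with the camera plane at distance 1. The names and setup are illustrative assumptions, not the article's camera class:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector for this sketch (illustrative, not the article's classes)
struct Vec3 {
    double x, y, z;
    double Length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 Normalized() const { double l = Length(); return {x / l, y / l, z / l}; }
};

// For pixel (x, y), build a direction from the camera position through the
// corresponding point on the camera plane. Pixel centers are mapped to
// [-1, 1] on the plane, with y flipped so (0, 0) is the upper-left corner.
Vec3 GetEyeRayDir(int x, int y, int width, int height)
{
    double aspect = double(width) / double(height);
    double px = (2.0 * (x + 0.5) / width - 1.0) * aspect;
    double py = 1.0 - 2.0 * (y + 0.5) / height;
    return Vec3{px, py, -1.0}.Normalized();
}
```

A ray through the pixel nearest the screen center points almost straight down the negative z axis, and rays toward the corners fan outward, which is exactly the "camera looking through the screen" picture described above.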
From here on we loop through every pixel on the screen and shoot a ray from the camera through each one.
The most simple rendering routine,
iterating every pixel on your screen:

for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        ray = scene->camera->GetEyeRay(x, y);
        if (intersect(ray, &color))
            SetPixel(x, y, color);  // shade the pixel with the color we found
    }
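A self-contained toy version of that loop, with everything the snippet leaves out (the ColorRGB fields, the intersection test) replaced by illustrative stand-ins, can show the structure end to end:

```cpp
#include <cassert>

struct ColorRGB { double r, g, b; };

// Stub intersection: pretend every ray through the middle third of the
// screen hits something red. Purely illustrative, not a real intersect test.
bool Intersect(int x, int y, int width, int height, ColorRGB* color)
{
    if (x > width / 3 && x < 2 * width / 3 &&
        y > height / 3 && y < 2 * height / 3) {
        *color = {1.0, 0.0, 0.0};
        return true;
    }
    return false;
}

// The rendering loop from the article, with the pixel write replaced by a
// hit counter so the sketch has something observable to return.
int RenderAndCountHits(int width, int height)
{
    int hits = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            ColorRGB color;
            if (Intersect(x, y, width, height, &color))
                hits++;   // a real tracer would SetPixel(x, y, color) here
        }
    return hits;
}
```

The shape of the loop never changes; all the real work (and all the optimization the article talks about) hides inside the intersection test and the shading.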
In that example, ColorRGB is a simple class that holds the R, G and B values.
The Ray class is more complex, because I use it to also keep track of where the ray is (inside or outside an
object), its direction, where it came from (position and object and, possibly, which polygon), the normal of the last hit, etc.
Ray(Point_3D& origin, Vector_3D& dir)
{
    nofShapesInList = 0;
    closestIntersect = FLT_MAX;
    insideShape = false;
    rayDepth = 0;
}

// The direction of the ray
Vector_3D direction;

// The point from which the ray originated. This may be the camera
// or any object surface, eg. a mirror ray
Point_3D origin;

// The intersect is the 'final result' after determining the intersection
// (if any) that is closest to the origin point
Point_3D intersect;

// The normal vector at the surface we hit
Vector_3D normal;

// If the shape we hit is a mesh, this is the polygon in the mesh
Polygon_3D* polygon;

// closestIntersect is the 'final result' after determining
// the intersection (if any) that is closest to the origin point
double closestIntersect;

// Which object (if any, ie. not the camera) did the ray come from?
Shape* originShape;

// The object (closest) hit by the ray
Shape* hitShape;

// Specifies whether the ray currently travels inside an object,
// eg. a glass sphere
bool insideShape;

// 2D screen coords
double traceX, traceY;

// 2D coords with offset used for antialiasing
double offsetX, offsetY;

// Complete list of shapes in the scene
Shape** shapeList;

// Used to iterate over all the shapes in the scene
unsigned int nofShapesInList;

// Unused for now: keeps track of the depth the ray is at right now
int rayDepth;

// Lets us know where to look in the texture buffer for the hit object
int textureBufferX, textureBufferY;
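The rayDepth member hints at how recursion is normally bounded in a ray tracer: every reflected or refracted ray carries depth + 1, and tracing stops at some maximum depth. A minimal self-contained sketch of that idea (MAX_RAY_DEPTH and the function are illustrative, not the article's code):

```cpp
#include <cassert>

const int MAX_RAY_DEPTH = 5;   // illustrative cutoff, typically 3-10 bounces

// Skeleton of a recursive trace: recursion stops once the ray has
// bounced MAX_RAY_DEPTH times, and the depth reached is returned.
int TraceDepthDemo(int rayDepth)
{
    if (rayDepth >= MAX_RAY_DEPTH)
        return rayDepth;               // stop: too many bounces
    // ... a real tracer would intersect the scene and shade here,
    //     then, for a mirror or glass surface:
    return TraceDepthDemo(rayDepth + 1);  // follow the reflected ray
}
```

Without a cutoff like this, two facing mirrors would recurse forever; with it, the cost per pixel stays bounded.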
If you had enough computer power at
your disposal, it wouldn't be about anything other than vectors. Disappointingly, it's not that simple. We (not even Hollywood) do not have that kind of computer power. We have to be smart about it.
In ray tracing we talk about 'scenes'. "We render a scene". So what is a scene?
A scene is everything our camera sees through its lens... all the objects, dirt in the air, flies flying around, dirt on the table.
Like a scene in the theatre, everything is set up, and your eyes are the camera. Take a still picture with an actual camera, and it would capture the 'scene'.
That picture is what ray tracing is all about. Ray tracing attempts to recreate real-life 'scenes' using _math_!
A vector describes an origin point and a direction, that is, a
straight line that moves from one three-dimensional point to another... a line in 'space'.
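That idea can be written down directly: any point on such a line is origin + t * direction for some distance t along the ray. A minimal sketch, with illustrative type names rather than the article's Point_3D/Vector_3D classes:

```cpp
#include <cassert>

struct Point3 { double x, y, z; };
struct Dir3   { double x, y, z; };

// A ray as origin + direction: any point on the line is
//   P(t) = origin + t * dir,   with t >= 0
Point3 PointAt(const Point3& origin, const Dir3& dir, double t)
{
    return { origin.x + t * dir.x,
             origin.y + t * dir.y,
             origin.z + t * dir.z };
}
```

This parameterization is what every intersection test ultimately solves for: the smallest t at which P(t) lies on a shape's surface.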