Perhaps 90% of the development of a ray tracer is making it faster.

If you use no acceleration techniques at all, such as bounding volumes or caching the last shadowing object, rendering your scenes will be extremely slow.

You can improve rendering speed by a factor of many thousand by employing various acceleration techniques.

It's hard work, but it _is_ necessary.

The more complex your ray tracer becomes, the more time it takes to render each pixel in the image, and thus the scene as a whole.

Distributed ray tracing is used for e.g. depth of field, anti-aliasing, soft shadows, glossy surfaces, translucency and so on, and these techniques are extremely costly in terms of the number of rays cast.

For example, to create relatively accurate soft shadows it may easily be necessary to fire 500 or a thousand rays towards the area light, depending on the size of the light, the distances involved, and so on. For each ray we need to check whether any object is hit by it.
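A minimal sketch of that sampling loop, assuming a hypothetical `occluded` callback that stands in for the scene's intersection test (all names here are illustrative, not from any particular code base):

```cpp
#include <cstdlib>
#include <functional>

struct Vec3 { double x, y, z; };

// Estimate the visible fraction of a rectangular area light by firing
// 'samples' shadow rays at random points on its surface. 'occluded'
// is a stand-in for the real scene intersection query.
double softShadowFactor(const Vec3& point,                   // shaded point
                        const Vec3& lightCorner,             // one corner of the light
                        const Vec3& edgeU, const Vec3& edgeV,// the light's two edges
                        int samples,
                        const std::function<bool(const Vec3&, const Vec3&)>& occluded)
{
    int lit = 0;
    for (int i = 0; i < samples; ++i) {
        double u = (double)std::rand() / RAND_MAX;
        double v = (double)std::rand() / RAND_MAX;
        Vec3 sample = { lightCorner.x + u * edgeU.x + v * edgeV.x,
                        lightCorner.y + u * edgeU.y + v * edgeV.y,
                        lightCorner.z + u * edgeU.z + v * edgeV.z };
        if (!occluded(point, sample))
            ++lit;
    }
    return (double)lit / samples;   // 0 = fully shadowed, 1 = fully lit
}
```

Every one of those `occluded` calls is itself a scene traversal, which is exactly why the per-light sample count dominates the cost.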

This is also true for any objects that are hit by reflection or transmission rays.

If we use anti-aliasing, this (already huge) number is multiplied by the number of anti-aliasing rays we shoot.

It gets even worse if we use depth of field. Here, too, we need to fire a _lot_ of rays per pixel to get good quality... easily 1000 rays per pixel, and for each of these rays we need to do all those soft shadow rays again.

The number of rays we need to fire increases exponentially each time we introduce a new distributed ray tracing technique, because each technique multiplies the total.
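To make that multiplication concrete, here is a back-of-the-envelope calculation. The sample counts are purely illustrative: 16 anti-aliasing rays times 64 depth-of-field rays times 500 soft shadow rays is already over half a million rays for a single pixel.

```cpp
// Illustrative only: the per-pixel ray count when distributed
// techniques are combined. Each new technique multiplies the total.
long long raysPerPixel(long long aaSamples, long long dofSamples,
                       long long shadowSamples)
{
    return aaSamples * dofSamples * shadowSamples;
}
```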

How do we make it faster, then?

Here is a short, but far from complete list of acceleration techniques. I will discuss them in greater detail later on.

These techniques are independent of coding language.

  • AABBs (Axis Aligned Bounding Boxes)
  • Octrees (subdivision of AABBs)
  • Bounding Volume Hierarchy
  • Last shadowing object
  • Visible light sources
  • Visible objects
  • Material properties (casting/accepting shadows)
  • Anti-aliasing threshold
  • Soft shadow threshold
  • Depth of field threshold
  • Multithreading
  • Distributed multi-processing
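To give a flavour of the first item before we get to the details: the standard way to test a ray against an AABB is the "slab" test, sketched below (the function name and calling convention are my own). A cheap box test like this lets the tracer skip the expensive per-polygon intersections for whole groups of objects at once.

```cpp
#include <algorithm>
#include <cfloat>

// Classic "slab" ray/AABB intersection test. 'invDir' holds the
// precomputed reciprocals of the ray direction components.
bool rayHitsAABB(const double orig[3], const double invDir[3],
                 const double boxMin[3], const double boxMax[3])
{
    double tmin = 0.0, tmax = DBL_MAX;
    for (int axis = 0; axis < 3; ++axis) {
        double t1 = (boxMin[axis] - orig[axis]) * invDir[axis];
        double t2 = (boxMax[axis] - orig[axis]) * invDir[axis];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
    }
    return tmin <= tmax;   // true if the box is hit in front of the origin
}
```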

Code (language) specific (c/c++/c#/java):

  • Macros (#define) instead of functions/methods (c/c++ only)
  • Linked lists of pointers to objects instead of built-in lists (c/c++ only)
  • double d = x * x; instead of double d = pow(x, 2);
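The last item is worth a quick illustration. The `pow()` call goes through a general-purpose power routine, while a plain multiplication compiles down to a single instruction. (Modern compilers often optimize `pow(x, 2)` away anyway, so measure before relying on this.)

```cpp
#include <cmath>

// General-purpose library call: potentially much slower.
double lengthSquaredPow(double x, double y, double z)
{
    return std::pow(x, 2) + std::pow(y, 2) + std::pow(z, 2);
}

// Plain multiplications: one instruction each.
double lengthSquaredMul(double x, double y, double z)
{
    return x * x + y * y + z * z;
}
```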

So one way to make a ray tracer faster is to actually exploit modern hardware, where even household computers have multi-core architectures.

It is tempting to think that whatever program I write takes full advantage of the processing power I have in my computer... but it is not so.

If you do not explicitly take advantage of your PC's multithreading capabilities, you waste computing power... almost by a factor of the number of physical cores you have (depending on how effective your code is).

Why doesn't my program simply use all the cores I have at my disposal?

It sounds simple. It is not, I'm afraid.

Using multithreading is fantastic, and at first relatively simple, and you can gain speed-ups of almost a factor of the number of physical cores your system boasts.
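The simple case looks something like the sketch below: split the image into rows and hand each hardware thread a disjoint slice. `renderPixel` is a hypothetical stand-in for the real per-pixel ray tracing work.

```cpp
#include <thread>
#include <vector>

// Minimal sketch: each thread renders every n-th row, so no two
// threads ever write the same pixel and no locking is needed for
// the output buffer.
void renderImage(int width, int height, std::vector<double>& pixels,
                 double (*renderPixel)(int x, int y))
{
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;   // the query may fail; fall back to one thread

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            for (int y = (int)t; y < height; y += (int)n)
                for (int x = 0; x < width; ++x)
                    pixels[y * width + x] = renderPixel(x, y);
        });
    }
    for (auto& w : workers)
        w.join();
}
```

This works precisely because the threads only *read* shared data and write disjoint outputs; the trouble starts when that stops being true.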

If, however, you need to handle the same data in different threads almost simultaneously, you have a challenge.

In ray tracing, that challenge quickly becomes a serious problem, because every thread needs to access the same data as every other thread.

You likely have some kind of scene with objects that share a lot of information, e.g. pre-computed lists of polygons in a bounding volume, visible lights etc., which makes rendering your scene much, much faster.

For performance reasons, a bunch of different classes hold lists of other classes, which in turn hold lists of yet other types of objects.

An example could be an axis-aligned bounding box that has a list of member polygons. The polygons themselves point to their parent shapes (objects) and to the light sources visible in their normal direction, in the hope of reducing rendering time.
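A sketch of that kind of cross-linked layout (all type and member names are illustrative, not from any particular code base):

```cpp
#include <vector>

struct Light;
struct Shape;

// A polygon links back to its owner and caches which lights can
// possibly illuminate it.
struct Polygon {
    Shape* parent = nullptr;            // owning shape (object)
    std::vector<Light*> visibleLights;  // lights facing this polygon's normal
};

// A bounding box holds pointers into the shapes' polygon lists.
struct AABB {
    double min[3], max[3];
    std::vector<Polygon*> members;      // polygons inside this box
};

struct Shape {
    std::vector<Polygon> polygons;
};

struct Light {
    double x, y, z;
};
```

The upshot is that nearly everything is reachable from nearly everything else, which is great for single-threaded speed and exactly what makes shared access from many threads delicate.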


To be continued...