Rendering is a crucial process in any 3D animation or VFX project. Picking the right technology and anticipating trends requires a good understanding of the topic. We decided to write a series of blog articles that will introduce, bit by bit and with a growing difficulty level, all there is to know about rendering.
At the most basic level, rendering is about “simulating” light on a computer. In that regard, it can be considered a field of numerical simulation.
Let’s consider a system with a set of physical parameters we want to predict. Those parameters behave according to several physical laws, which are themselves described by equations.
Numerical simulation is about:
1- Define the parameters you want to predict
2- Identify the physics behind their behaviour, and write down the equations
3- Make assumptions about your use case to simplify the equations, so that they can be solved numerically (many equations still have no known analytical or numerical solution)
4- Implement an algorithm that solves those equations and computes the values of your system’s parameters that are relevant to you
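As a toy illustration of those four steps, here is a minimal sketch (all values are invented for the example): we want to predict the temperature of a cooling object (step 1), the physics is Newton’s law of cooling, dT/dt = -k(T - T_env) (step 2), we assume the cooling coefficient k is constant (step 3), and we solve the equation numerically with explicit Euler integration (step 4).

```python
# Toy numerical simulation: Newton's law of cooling, dT/dt = -k * (T - T_env).
# All coefficients are made up for illustration.

def simulate_cooling(T0, T_env, k, dt, steps):
    """Explicit Euler integration of the cooling equation."""
    T = T0
    for _ in range(steps):
        T += dt * (-k * (T - T_env))  # step 4: the numerical update rule
    return T

# A 90°C cup in a 20°C room, k = 0.1 per minute, 1-minute time steps, 1 hour.
final_T = simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=1.0, steps=60)
```

After an hour of simulated time, the temperature has almost relaxed to the room temperature, matching the analytical solution T_env + (T0 - T_env)·e^(-kt).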
How Are Equations Found?
Nowadays, thousands of physical laws are known, and even more equations are studied in labs across the planet.
Most of the time, the core idea is a conservation law. The most practical way is to describe energy and matter with quantities relevant to your field, and to start by simply stating that, in a small region of space, the variation of energy and matter equals the net flux into and out of that region.
From there, you re-write equivalent formulations, and you end up with an equation relevant to the situation at hand.
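To make the conservation idea concrete, here is a minimal sketch (grid size and coefficients are invented): heat in a 1D rod, where each cell’s energy changes only through the flux it exchanges with its neighbours. Because every flux leaves one cell and enters the next, total energy is conserved by construction.

```python
# 1D heat conduction built directly from a conservation law:
# change of energy in a cell = net flux in from its neighbours.
# Grid size, step count, and the diffusion coefficient are arbitrary toy values.

N, alpha = 50, 0.2   # cells, diffusion coefficient (stability needs alpha <= 0.5)
u = [0.0] * N
u[N // 2] = 100.0    # a hot spot in the middle of the rod

total_before = sum(u)
for _ in range(200):
    # flux[i] = energy flowing from cell i to cell i+1 during this step
    flux = [alpha * (u[i] - u[i + 1]) for i in range(N - 1)]
    for i in range(N - 1):
        u[i] -= flux[i]      # what leaves cell i...
        u[i + 1] += flux[i]  # ...enters cell i+1, so nothing is lost
total_after = sum(u)
```

The hot spot spreads out over time, but `total_before` and `total_after` agree: the conservation statement is baked into the update itself.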
Why Simplify Equations?
Once you have your equation, you’re not done yet. Very often, a particular equation is overkill for what you’re trying to solve. For example, the Navier-Stokes equations describe the behaviour of fluids. They are incredibly complex to solve in full, and they capture many phenomena (micro-turbulence, etc.) that may not be relevant to everyday applications.
For example, in aerospace, if you want to model the air flow around a wing, you can assume that air is a Newtonian fluid. Experiments show that this assumption is accurate, and on the theoretical side, including it in the Navier-Stokes equations simplifies them significantly.
To sum it up, you make reasonable assumptions that enable you to ignore phenomena that are not relevant to you, in order to simplify the equations you want to solve. As a result, you can solve them numerically.
In CGI, we are mostly concerned with light and all its physical phenomena. So basically, rendering is about solving equations that describe the behaviour of light.
Up until the mid-80s, many techniques existed to get decent results in computer graphics. People made a lot of assumptions and heavily restricted what they wanted to solve. This enabled researchers to produce images that could be rendered with the hardware available at the time.
In 1986, a now-famous paper was published by James Kajiya from Caltech, which laid out the Rendering Equation:
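In its modern hemisphere form (Kajiya’s original paper writes it as an integral over the other surfaces of the scene), the equation is usually stated as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

In words: the radiance $L_o$ leaving point $x$ in direction $\omega_o$ is the radiance emitted there, $L_e$, plus the radiance $L_i$ arriving from every direction $\omega_i$ of the hemisphere $\Omega$, weighted by the surface’s reflectance function $f_r$ (the BRDF) and by the cosine factor $\omega_i \cdot n$ with the surface normal $n$.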
James Kajiya’s paper says it all:
“The technique we present subsumes a wide variety of rendering algorithms and provides a unified context for viewing them as more or less accurate approximations to the solution of a single equation. That this should be so is not surprising once it is realized that all rendering methods attempt to model the same physical phenomenon, that of light scattering off various types of surfaces.
We mention that the idea behind the rendering equation is hardly new. […] However the form in which we present this equation is well suited for computer graphics, and we believe that this form has not appeared before.”
Before this breakthrough, people were investigating particular corner cases; once this general formulation was published, each existing method could be understood as a simplification of this one equation, adapted to a particular situation.
This mental shift was fundamental, in that it provided a unified framework to think about rendering and to understand the trade-offs made by each algorithm. Understanding those trade-offs is a vital step toward improving on what exists.
This equation belongs to a wider category of equations called “Fredholm equations of the second kind”. So far, scientists have found analytical solutions for several subcategories of these equations; unfortunately, the rendering equation is not one of them. One reason is that the equation is “infinitely recursive”: the energy carried by a ray depends on where and how the ray bounced before.
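A minimal sketch of that recursive structure (the scene is abstracted to two invented constants, an emission and a reflectance): the radiance seen along a ray depends on the radiance of the next bounce, which depends on the one after, and so on. In practice, renderers truncate the recursion at a maximum depth or with a statistical stopping rule such as Russian roulette.

```python
# Toy illustration of the "infinitely recursive" structure of the rendering
# equation: every bounce gathers some emitted light plus a reflected fraction
# of whatever the next bounce sees. EMITTED and REFLECTANCE are invented.

EMITTED, REFLECTANCE = 1.0, 0.5

def radiance(depth, max_depth=10):
    if depth == max_depth:  # truncate the otherwise infinite recursion
        return 0.0
    # a real renderer would sample a random incoming direction here
    return EMITTED + REFLECTANCE * radiance(depth + 1, max_depth)

print(radiance(0))  # → 1.998046875
```

With each extra allowed bounce, the result creeps toward the geometric-series limit 1 + 0.5 + 0.25 + … = 2, which is why deeper recursion adds ever-smaller contributions.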
The rendering equation includes an integral over the surface patches around the point of interest. The equation can be re-written as an integral over light paths. Solving it is then all about deciding which light paths you can ignore, or compute differently, to speed up the process. The integral is high-dimensional (the space of light paths is very large), and for such problems Monte Carlo methods are a good fit.
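A minimal sketch of why Monte Carlo suits high-dimensional integrals (the integrand is invented for the example): we estimate the integral over the 10-dimensional unit cube of the product x₁·x₂·…·x₁₀, whose exact value is (1/2)¹⁰, by averaging the integrand at random points. The cost per sample grows only linearly with the dimension, whereas grid-based quadrature blows up exponentially.

```python
import random

# Monte Carlo estimate of a 10-dimensional integral:
# the integral over [0,1]^10 of x1*x2*...*x10 equals (1/2)**10.
# A grid with just 10 points per axis would need 10**10 evaluations;
# Monte Carlo simply averages the integrand at random sample points.

random.seed(42)  # fixed seed so the run is reproducible
DIM, SAMPLES = 10, 200_000

def integrand(point):
    prod = 1.0
    for x in point:
        prod *= x
    return prod

estimate = sum(
    integrand([random.random() for _ in range(DIM)]) for _ in range(SAMPLES)
) / SAMPLES

exact = 0.5 ** DIM  # = 0.0009765625
```

The estimate lands within a small fraction of a percent of the exact value, and the error shrinks like 1/√N regardless of the dimension — the property that makes Monte Carlo attractive for the even larger space of light paths.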
Most renderers today (Arnold, etc.) use some form of Monte Carlo ray tracing. It is important to understand that Monte Carlo methods are statistical methods based on random sampling, here of light paths. This means a lot of random memory access, which in turn explains why many commercial renderers perform better on CPUs than on GPUs.
However, there are many ways to implement an algorithm, and some renderers (e.g. Octane Render) implement Monte Carlo ray tracing in a GPU-friendly way.
NOTE: I advise reading this excellent article from fxguide