Is GPU rendering coming of age?

Along with many others in the computer graphics and VFX industries, I’ve been watching GPU-based rendering mature with great interest. Since it was first pioneered for niche scientific applications such as Folding@home, the art of performing general-purpose calculations on a GPU has come a long way.

We now have mature, well-supported platforms such as NVIDIA’s proprietary CUDA and the vendor-neutral OpenCL standard, which make developing and supporting GPU-based applications much easier.

A bit of background

GPUs are primarily designed to process huge amounts of data in parallel; this is how they manage to draw all the pixels on a screen at once. The architecture is very different from that of CPUs, which traditionally excel at processing data in a linear fashion. Even modern multicore CPUs can typically only process a maximum of around 12 threads simultaneously, whereas modern GPUs can process hundreds.

So, if they can handle so many more threads, why don’t we just use GPUs for all computer processing? Although a GPU can process many more threads simultaneously than a CPU, each of those threads can only perform relatively simple calculations, which often means a single thread runs more slowly on a GPU than on a CPU. This isn’t an issue for software that can perform its processing in parallel: by sheer weight of numbers, a GPU can still arrive at the answer more quickly. However, not all software lends itself to this parallel architecture, so don’t hold your breath for a word processor or email client that runs on the GPU. Applications that perform a large number of relatively simple, repetitive calculations, such as ray tracing or video encoding, can, however, be sped up by orders of magnitude on a GPU.
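To make that contrast concrete, here’s a minimal Python sketch (purely illustrative, not taken from any real renderer) of the kind of “embarrassingly parallel” per-pixel work a GPU thrives on: each output value depends only on its own coordinates, so in principle every pixel could be computed by its own thread at the same time.

```python
# Illustrative sketch: per-pixel shading is "embarrassingly parallel".
# Each pixel's value depends only on its own (x, y) coordinates, so a
# GPU can assign one thread per pixel. Here the same pattern is simply
# expressed sequentially in plain Python.

WIDTH, HEIGHT = 4, 3

def shade(x, y):
    # Stand-in for a real shading calculation: a simple diagonal gradient.
    return (x + y) / (WIDTH + HEIGHT - 2)

# Sequential version -- what a single CPU core does, one pixel at a time:
image = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

# On a GPU, every (x, y) pair would be handed to its own thread and the
# whole grid computed at once, because no pixel needs any other pixel's
# result. A word processor, by contrast, has no such independent grid of
# work to hand out.
```

The same independence is what makes ray tracing and video encoding such good fits: each ray, or each block of a frame, can be processed without waiting on its neighbours.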

In reality, both CPU and GPU processing are needed for a balanced machine, with the GPU used as a co-processor to offload any suitable work and free up the CPU for other tasks. Even so, there are still challenges to overcome. Early GPU-based applications were notorious for making a machine unusable: they took up so many of the GPU’s resources that the screen would flicker or become unresponsive while the GPU was too busy to redraw it.

A new era…

NVIDIA have recently made a large leap in unlocking the potential of the GPU for the masses with their Maximus technology. Maximus allows you to combine an NVIDIA Quadro card with a Tesla co-processor, which is effectively a GPU without the extras needed to output to a screen. The Maximus system then intelligently assigns suitable work to these two resources. So – as an example – while the machine is in use, the Quadro will be used to draw the screen and the Tesla for any CUDA processing. If the machine is not in use, the might of both cards will be assigned to CUDA processing – maximising the processing power on offer while keeping the machine usable.

The first GPU-accelerated content creation applications tended to be standalone tools that required users to import work from their main application and work in an often unfamiliar interface. This was far from ideal, and any speed gains from GPU acceleration were potentially wiped out by the time taken to move data from one application to another. More recently, we’ve started to see GPU technology integrated into industry-standard applications such as V-Ray, 3ds Max, Photoshop, After Effects and Adobe Premiere.

Both V-Ray and 3ds Max now offer GPU-accelerated renderers, in the form of V-Ray RT and iray, that can potentially increase render speed by a factor of ten or more. The renders can be used as ActiveShade previews and, with the right GPUs, can give interactive results close to final quality on which to base lighting and texturing decisions. This can break the tedious iterate/test render/iterate cycle many 3D artists are used to. As RT and iray are closely related to their CPU-based counterparts, V-Ray and mental ray, the final renders can be passed off to one of those if unsupported features are required.

If a 10x speed boost still isn’t quick enough, our Cubix GPU-Xpander boxes can be used to add GPUs to your render farm. There isn’t enough space inside the typical render blade for a large GPU, so using an Xpander to add external GPUs is a great option. An Xpander can also be used to add up to four GPUs to a single workstation, which means you may be able to forgo the render farm altogether.

Now that we have GPU acceleration in industry-standard applications, plus technologies such as Maximus and the GPU-Xpander, I think the technology is ready to make a move into the mainstream. In fact, you may already have at least one part of the puzzle built into your application or workstation.

For more information on how to speed up your render workflow with GPU acceleration, call us on 03332 409 306 or email us. For all the latest news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.


