Linear workflow and gamma correction – part 3

In this, the third of what will now be four parts, I look at the manual method of setting up and working in a linear workspace with 3ds Max and VRay.

I’ll try not to repeat any of the points made in earlier articles, but it is important to reiterate that all inputs and outputs require some form of gamma correction.

The Input

The method I am recommending is to add a colour correction node to your bitmaps and colours, and then apply an inverse gamma curve to that by setting the RGB gamma to 0.4545. You should remember this value from the first article – if you don’t, I’d advise you to take another look.

[Image: gamma correction 1]

This will no doubt be a change to your existing workflow and, to start with, you will probably forget to apply this additional node when creating materials, but it really is the simplest and most flexible method.

It gives you absolute control over the amount of correction you are applying and allows you to make some materials darker or lighter depending on your preference, as well as to tweak the other options that the colour correction node offers.
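
If it helps to see the arithmetic behind that 0.4545 figure, here is a minimal Python sketch of what the colour correction node is doing to each texture value (just the maths, independent of 3ds Max’s own code; the 2.2 display gamma is the value assumed throughout this series).

```python
# Minimal sketch of the de-gamma step performed by the colour correction node.
# Assumes the usual 2.2 display gamma from part one; 1 / 2.2 = 0.4545.

DISPLAY_GAMMA = 2.2
INVERSE_GAMMA = 1.0 / DISPLAY_GAMMA   # ~0.4545, the value entered in the node

def degamma(encoded_value):
    """Convert a gamma-encoded texture value (0.0 to 1.0) back to linear light."""
    # Applying a gamma of 0.4545 is the same as raising the value to the power 2.2.
    return encoded_value ** (1.0 / INVERSE_GAMMA)

print(degamma(0.5))   # ~0.218 - a mid-grey JPEG pixel is much darker in linear light
print(degamma(1.0))   # 1.0 - pure white is unaffected
```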

The Output

As mentioned in part 2, there are slightly different workflows depending on what you are planning to do with the render once it leaves the 3D application. If you aren’t going to do any post-processing, you will need to bake the gamma correction into the final render. VRay does this with the Colour Mapping rollout in the render settings.

[Image: gamma correction 2]

Baking the gamma correction is also the method I choose when rendering out test scenes, as it gives instant feedback without the need to take the image into post. If you adopt this method, remember to revert to the default value of 1 when rendering out the final image.

There is, of course, a tool for this that can also help with previews: enable the VRay frame buffer in the render settings, return the colour mapping gamma to 1, and then toggle the sRGB button to apply the gamma correction.

[Image: gamma correction 3]

The correction is made after the image has been rendered, so there will be times when you turn it on to brighten up the image only to find that, because there wasn’t enough sampling in the darker areas, the result is noisy. This is the trade-off for sheer ease of use! Personally, I don’t use this method (for the above reason), but it is a very useful tool.
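
To make the two output routes above concrete, here is a rough Python sketch of the decision (again, just the maths rather than VRay’s own code; the function and parameter names are mine): bake the 2.2 correction in when nothing further is planned, and leave the render linear when it is headed into post.

```python
# Rough sketch of the two output routes described above (illustrative names only).

DISPLAY_GAMMA = 2.2

def encode_for_display(linear_value):
    """Apply the 2.2 gamma curve so a linear render looks correct on a monitor."""
    return linear_value ** (1.0 / DISPLAY_GAMMA)

def prepare_output(linear_value, going_to_post):
    if going_to_post:
        # Keep the data linear; the correction is applied at the end of the post chain.
        return linear_value
    # No post work planned: bake the correction into the render itself,
    # the equivalent of setting the colour mapping gamma to 2.2.
    return encode_for_display(linear_value)

print(prepare_output(0.218, going_to_post=False))   # ~0.5 - viewable straight away
print(prepare_output(0.218, going_to_post=True))    # 0.218 - stays linear for post
```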

By now you should be familiar with both the concept and the workflow involved in manually setting up a linear workspace with 3ds Max and VRay. It may be worth getting to grips with this now by testing it out on some of your old scenes and seeing the improvement for yourself.

Part four of the series is on its way. In the meantime, if you’d like to find out more, give the team a call on 03332 409 306 or email us. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

Maxon Cinema 4D plugin for After Effects

Maxon have recently made available the CS5-compatible plugin for Cinema 4D and After Effects.

The plugin features 64-bit native compatibility for Windows and Mac OS X, allowing Cinema 4D users to take full advantage of available hardware and operating system performance for improved rendering and workflow efficiency directly inside After Effects.

Plugins available here

Email us for more information or call our 3D team on 03332 409 306. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page. Visit our website.


Linear workflow and gamma correction – part 2

If you have read the first part of my gamma correction article, you should now understand exactly why this alteration to your workflow is necessary.

In this second part, I will look at the built-in linear workflow within Autodesk 3ds Max. It is the quickest method as it is controlled entirely by 3ds Max and requires no disruption to your existing workflow.

The test scene

We will be using the test scene in figure 1 for all of the examples. It is a very basic scene consisting of a box with a wooden floor, a camera, 3 VRay lights and a simple structure acting as the light fitting. All render, material and light settings will remain the same, unless otherwise stated.

Linear Workflow

The 3ds Max preference method

When the above scene is rendered with default 3ds Max settings, the result is figure 2.

[Figure 2]

As you can see, the image is very dark with almost no detail in the darker areas. Previously, most users would just try to compensate for the lack of light by increasing the intensity or even the quantity of the lights. As explained in part 1, there are obvious drawbacks to this. Figure 3 illustrates the effect of increasing the intensity of the lights.

[Figure 3]

Although you may have accomplished your goal of getting more light into the scene, you have also introduced some very small artifacting around the lid of the teapot as well as a severe hotspot on the back wall. If you cannot see this highly contrasted hotspot, try raising your chair ever so slightly. You will see that the gradient is very sharp and not at all realistic.

What you should be doing is gamma correcting both the input and output. The simplest method is to go to Customise > Preferences > Gamma and LUT, and select the settings shown in figure 4.

[Figure 4]

Once these preferences have been set, our rendered test scene looks like figure 5.

[Figure 5]

The benefits are there for you to see but, if you need any reminding, please refer back to the previous article, which lists the full benefits of working in a linear workspace in detail.

Because we aren’t doing any post-processing with this scene, it is perfectly acceptable to export a non-linear, gamma-corrected image. If you were intending to post-process the image, you would need to override the output in the ‘Save as’ dialogue box or, alternatively, disable the 2.2 output default in the preferences. You will also need to output in anything other than the JPEG format! We recommend half-float OpenEXR, 16-bit TIFF or PNG; anything else is either overkill or doesn’t contain enough image data.
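
As a rough illustration of why an 8-bit format falls short once the data is linear, the short Python snippet below (which assumes numpy is available) counts how many of the 256 available code values are left to describe the darkest tones of the image when stored linearly versus gamma-encoded.

```python
import numpy as np

# Why an 8-bit file struggles to hold a linear render: count how many 8-bit
# code values describe the darkest tones (linear values up to 0.05).

linear = np.linspace(0.0, 0.05, 10000)   # the deep shadows of the image

codes_linear = np.unique(np.round(linear * 255)).size                 # stored linearly
codes_gamma = np.unique(np.round((linear ** (1 / 2.2)) * 255)).size   # stored with 2.2 gamma

print(codes_linear)   # ~14 codes - severe banding in the shadows
print(codes_gamma)    # ~66 codes - far more shadow detail survives
```

Higher bit depths (or a half-float format) sidestep the problem entirely, which is why they are the sensible choice for linear output.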

I’m sure you will agree that this method is very simple and we hope you can see the benefits of making this change to your workflow. In the upcoming articles, we will cover the manual methods of both VRay and Mental Ray with 3ds Max.

To find out more, get in touch with the team on 03332 409 306 or email us. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page. Visit our website.

Linear workflow and gamma correction

What is it and why do I need to care?

Light intensity works linearly, whereas electronic displays do not. In the real world, two lights of the same intensity directed on the same spot will illuminate the area with twice the intensity of a single light. This can be expressed as a simple graph, as shown in figure 1 (below left).

[Figures 1 and 2]

This is perfectly sensible and is to be expected, but a problem arises when light is displayed on electronic equipment. If you double the voltage applied to a liquid crystal in an LCD, the intensity of the light emitted from that crystal doesn’t double as it would in nature, so the response isn’t linear. Because of this, computer monitors cannot display images and video without a certain amount of pre-processing.

Both hardware and software typically apply what is known as a gamma correction curve to images and video so that they can be displayed on monitors within a sensible colour range. This gamma correction curve typically has a value of 2.2 and can be seen in figure 2 (above right).

So, if the manufacturers of hardware already correct the deficiencies of the monitor, why do I need to care?

Because any manipulation of an image or colour you make on a computer is simply a change to an already gamma-corrected image, which is itself only a best guess. The result is images that aren’t physically accurate and are generally of poorer quality.

The linear workflow method tries to address this issue. The process involves applying an inverse gamma curve (de-gamma) to all input images and video so that the footage is converted back into a linear format and is then ready to be worked on and manipulated. This is shown in figure 3 (below left). The value of this curve is obtained by dividing the target gamma value of 1 by the current gamma value of 2.2; the inverse gamma curve therefore has a value of 1 / 2.2 = 0.4545.

[Figures 3 and 4]

The linear workflow for the 3D industry boils down to image input and output. As you now know, images will almost always need an inverse gamma curve applied to them when they are brought into a 3D application. This will ensure that you are working in a linear workspace. When the image has been rendered, post-processed and finalised, a gamma correction curve needs to be applied so that it can be displayed on computer monitors. This process is demonstrated in figure 4 (above right).
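
For anyone who prefers to see the whole loop written down, here is a small Python sketch of that round trip (the maths only, independent of any particular 3D package): de-gamma the input, do the lighting calculations in linear space, then gamma-correct the result for the monitor.

```python
# The linear workflow round trip in miniature (illustrative values only).

GAMMA = 2.2
INVERSE_GAMMA = 1.0 / GAMMA   # 0.4545...

def to_linear(encoded):
    """De-gamma an input texture or colour value so the maths can be done linearly."""
    return encoded ** GAMMA

def to_display(linear):
    """Gamma-correct a finished linear value so it displays correctly on a monitor."""
    return linear ** INVERSE_GAMMA

# Two equal lights on the same surface should give exactly twice the illumination.
texture = to_linear(0.5)        # a mid-grey texture is ~0.218 in linear light
one_light = texture * 1.0
two_lights = texture * 2.0      # in linear space, doubling the light doubles the value

print(to_display(min(one_light, 1.0)))    # ~0.5
print(to_display(min(two_lights, 1.0)))   # ~0.69 - brighter, without blowing out
```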

I’m actually quite satisfied with the images that I’m producing. Is it really worth all the trouble?

Yes! Below are just some of the benefits of working in a linear workspace.

  • You will spend less time tinkering with your images to get realistic results, as working in a linear workspace yields physically accurate results.
  • There should be no need to render out different channels and composite them later in post, as again the rendered image will be physically accurate. (It is also worth pointing out that any blending of layers together in post, such as Add or Multiply, is completely inaccurate and mathematically insane when not working in a linear workspace, because you are blending together layers that have themselves been ‘corrected’ – see the short sketch after this list.)
  • Effects, like fog, lights and motion blur, work better in a linear workspace.
  • Eliminates unrealistically strong reflections.
  • Smoother gradients in darker areas.
  • Less artifacting around specular highlights.
  • Fewer blown out or overexposed areas.
  • You can use lower intensity lights in your scene and push them further.
  • Smaller file sizes, as there is no need to add all the individual render channels.
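
To put a number on that blending point, the Python sketch below compares an Add blend performed on gamma-encoded values with the physically correct version performed in linear space (the 2.2 gamma and the pixel values are assumptions, purely for illustration).

```python
# How far out a simple Add blend is when performed on gamma-encoded pixels.

GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA

def to_display(v):
    return v ** (1.0 / GAMMA)

a, b = 0.4, 0.3   # two gamma-encoded layer values for the same pixel

# Incorrect: add the already-corrected values directly (a non-linear workflow).
naive = min(a + b, 1.0)

# Correct: de-gamma, add the light linearly, then re-encode for display.
correct = to_display(min(to_linear(a) + to_linear(b), 1.0))

print(naive)     # ~0.7
print(correct)   # ~0.49 - the naive blend is dramatically brighter than real light addition
```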


In parts two and three of the series, I will look at 3ds Max’s built-in gamma correction, 3ds Max and VRay, and 3ds Max and Mental Ray. In the meantime, if you want to find out more, give the team a call on 03332 409 306 or email us. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

GPU rendering: an update

Over the past few months, GPU rendering has taken a few steps forward and, whilst we are nowhere near where we need to be, it does look promising. So let’s review what has been happening.

3ds Max 2011 Quicksilver Renderer

3ds Max 2011 was released in April 2010 and took most of us by surprise when it was revealed that Autodesk had included with it the CPU/GPU Quicksilver renderer. As this was the first GPU-based renderer actually integrated into a mainstream 3D program, we were hopeful that it would provide the perfect solution to the ever-increasing problem of render times.

We did some tests on Quicksilver and concluded that it is far from perfect, but that it is looking promising and should improve with future releases and service packs.


VRay RT

It seems ages ago now that we saw the demo of VRay RT GPU at SIGGRAPH 2009. Since then, Chaos Group have released the nearly brilliant CPU version of RT for 3ds Max.

Whilst the CPU version is a massive step forward, it is clear that we are all still waiting for the fully-fledged GPU version. Our excitement grew when we saw the most recent video by Chaos Group (released 14th May), in which they say that VRay RT on the GPU is ‘practically a completed product’ and that a version for Maya will be available within about two months’ time.

They have also conducted tests which demonstrate the huge leap forward in quality and speed. Another interesting fact about RT is that it runs on OpenCL, meaning that it will run on graphics cards from both Nvidia and AMD.


iRay

The announcement of iRay, coupled with misleading documentation, has successfully managed to confuse most of the 3D industry that has been anticipating its release. Let’s try and clear things up.

Mental Images, the developer, stated in their documentation that ‘iRay is provided with mental ray from version 3.8 and RealityServer from version 3.0’, and then clarified this by saying ‘iRay-enabled products feature an iRay rendering mode’.

I suppose that this statement is true, in that mental ray, when bought as a standalone product, is iRay-enabled. The confusion seems to be that, whilst the 2011 releases of Autodesk products do ship with mental ray 3.8, they haven’t enabled the iRay rendering mode.

iRay remains something we are very much looking forward to, but it seems that we will have to wait the best part of a year before it is integrated into the Autodesk suite of products.

Unbiased rendering

iRay is an unbiased renderer, meaning that there are no settings for the renderer as such: you just import or create a model, set iRay running on it and watch as the image quality gets progressively better over time. Items such as materials and lighting can be changed with near-instant feedback.

Many people think that this kind of workflow will be restrictive, but we found quite the opposite. It feels liberating to just make edits to your materials and lights without having to worry about render settings and tricks to improve your image quality. All of that is left to iRay, which uses real-world physical properties to calculate its images and is extremely fast compared to traditional renderers.
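
That ‘progressively better over time’ behaviour is essentially a running average of noisy light samples. Below is a heavily simplified Python sketch of the idea (a toy illustration, not how iRay is actually implemented): each pass adds one random sample per pixel and the accumulated average converges on the true value.

```python
import random

# Toy sketch of progressive, unbiased rendering: each pass adds a noisy estimate
# of the true pixel value and the running average converges over time.

TRUE_PIXEL_VALUE = 0.35   # the value a fully converged render would settle on (assumed)

def noisy_sample():
    """One sample: correct on average, noisy individually."""
    return TRUE_PIXEL_VALUE + random.uniform(-0.3, 0.3)

accumulated = 0.0
for i in range(1, 1001):
    accumulated += noisy_sample()
    if i in (1, 10, 100, 1000):
        print(f"after {i:4d} passes: {accumulated / i:.3f}")   # noise shrinks as passes accumulate
```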

There seem to be two ways in which iRay and other similar renderers can fit into your existing pipeline. The first would be to use it as a production renderer. At present, this limits you to using only mental ray materials and, as iRay is not yet integrated into the Autodesk line-up, it adds an extra step to your workflow.

The other, more sensible, use would be on location with clients, so that you can get immediate feedback on colour schemes, materials and lighting.

For example, say you are an interior designer and have already modelled the set in 3ds Max. You can import the scene into iRay, light it and apply the materials. You could then take your laptop to the client and, right there and then, change anything they wished, eliminating the back-and-forth nature of finalising and perfecting a job to the client’s needs.

The rest is then up to you and your client. If they like the GPU-produced render, then fine, but if not, you could take all those tweaks made in front of the client and re-create them in 3ds Max, knowing that there won’t be any further changes necessary.

We are currently testing several other GPU-based renderers similar to iRay so watch this space for reviews.

For now, if you want to know more, you can get in touch with us on 03332 409 306 or email us. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

The future of rendering

A little background…

Since the invention of 3D rendering, the CPU has been responsible for rendering out images and animations from 3D and video applications, using graphics APIs such as OpenGL or DirectX to communicate with the graphics card.

With only a few exceptions, render engines use ‘bucket rendering’. This splits the image into buckets (squares), which are processed on the CPU until the full image is complete. These buckets can be distributed over a render farm to speed up rendering but, if you are working on a single workstation, the total number of active buckets depends on the number of CPU cores you have available.

[Image: 8-core CPU rendering buckets]

An example of an 8 core CPU rendering 8 buckets simultaneously.
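
As a rough sketch of how that mapping works, the Python below splits a frame into square buckets and hands them to a pool of worker processes, one per CPU core. The sizes and the stand-in render_bucket function are purely illustrative.

```python
from multiprocessing import Pool, cpu_count

# Rough sketch of bucket rendering: split the frame into squares and let each
# CPU core work on one bucket at a time. render_bucket is a stand-in for the real work.

WIDTH, HEIGHT, BUCKET = 640, 480, 64

def render_bucket(origin):
    x, y = origin
    # Stand-in: shade every pixel inside this bucket and return the tile.
    return (x, y, [[0.0] * BUCKET for _ in range(BUCKET)])

def bucket_origins(width, height, size):
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y)

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:   # e.g. 8 active buckets at a time on an 8-core CPU
        finished = pool.map(render_bucket, bucket_origins(WIDTH, HEIGHT, BUCKET))
    print(f"{len(finished)} buckets rendered across {cpu_count()} cores")
```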

Recent developments

Inspired by this limitation, manufacturers of both hardware and software have been working towards developing a method that will offload the job of rendering from the CPU to the GPU. Although a number of render engines are currently being developed to do this, almost all will utilise the CUDA technology of NVIDIA graphics cards.

As mentioned earlier, the total number of active buckets is determined by the number of cores available. If we consider an entry-level NVIDIA GeForce graphics card such as the 8800GT, which has 112 cores, you start to see how GPU rendering can and will have a massive impact on render times, as more buckets can be rendered simultaneously.

As you would expect, the better the graphics card, the faster the render. This means that the high-end GeForce and Quadro graphics cards could render up to 60 times faster than a standard quad core CPU.  This will improve and quicken the artist’s workflow by allowing the user to see the immediate effect on the scene after any alteration. This could be as simple as changing a light or material parameter, or introducing a new object. It can also allow the artist to pan and zoom the camera around the scene without the need to wait while the frame buffer re-renders. Instead, the viewport follows the user’s actions while working on the scene and automatically (and progressively) generates a photorealistic preview.

Whilst NVIDIA’s CUDA engine is clearly the leader in this field – effectively locking all GPU processing tasks to NVIDIA hardware – there are others on the horizon. Apple have been working with the Khronos Group on OpenCL, a standards-based method for general purpose GPU computing.

By democratising GPU processing, any program on Macintosh, Windows and Linux platforms will be able to compute 3D data on any graphics card, regardless of manufacturer. Not only is OpenCL a genuine competitor, it is likely to supersede CUDA as the API of choice, allowing programs such as Maxon’s Cinema4D and Autodesk’s Maya to render on the GPU.

Another worthy mention is Microsoft DirectX 11’s compute shader feature, which is shipping with Windows 7. This feature enables post-processing effects, such as depth of field and motion blur, to be carried out by the GPU. Although locked to the Windows platform, it can be used on both AMD and NVIDIA graphics cards.

Click here to see our selection of graphics cards or, for more information, please contact us or call 03332 409 309.

Rendering software

In order to use the GPU cores for rendering, we have had to wait for software companies to catch up with the developments at NVIDIA. There are two clear leaders in the race to get a fully supported GPU renderer on the shelves: Mental Ray’s iRay and Chaos Group’s V-Ray RT.

iRay will hopefully be available to all customers who upgrade to future releases of Mental Ray, either as a standalone renderer or from within applications that include the software (such as Autodesk 3ds Max, Autodesk Maya and Autodesk Softimage).

Although impressive, indoor scenes or scenes with a large amount of bounced light seem to take significantly longer than other images to fully render. Even after a few seconds the image looks like poor reception on a television and is not at all production quality. These results were obtained using four GPUs; what type we don’t know, but most likely it would have been a Tesla S1070 (a platform iRay was designed to run on).

Incredibly, those pioneers over at Mental Images have also found the time to develop mental mill and, in conjunction with NVIDIA, the RealityServer. mental mill enables artists to create shaders and graphs for GPU and CPU rendering through an intuitive GUI with realtime visual feedback. The NVIDIA RealityServer delivers the power of thousands of cores that allow for realtime rendering over the web, perfect for product designers and architects who can easily visualise their clients’ projects with a laptop or even an iPhone!

The NVIDIA RealityServer platform is a powerful combination of NVIDIA Tesla GPUs, RealityServer software and iRay. Later, we will consider the NVIDIA Tesla GPUs in more depth and explore how they too are shaping the future of GPU rendering.

The other viable option for realtime rendering is V-Ray RT. Whilst V-Ray RT is currently CPU based, Chaos Group have already developed it into a fully interactive GPU accelerated renderer, which will hopefully be available as a service pack upgrade this year. A beta version of this was showcased last year at the industry event SIGGRAPH and was considered the major highlight of the show.

V-Ray has long been at the forefront of photorealistic rendering and is well known for being the fastest and easiest to use. In contrast to the iRay demo, it appears that V-Ray RT will yield faster results whilst using mid- to high-range graphics cards. In the video, they use an NVIDIA GeForce GTX 285, which is available for just £399 exVAT. Once V-Ray RT goes fully GPU-based, users should expect renderings to be completed 10 to 20 times faster than with its CPU counterpart.

So which is better?



iRay

Pros:

  • Available as a future release of Mental Ray
  • Web interface
  • mental mill

Cons:

  • Very expensive hardware
  • Slower than V-Ray RT

V-Ray RT

Pros:

  • Faster than iRay
  • Cheaper hardware

Cons:

  • No web interface
  • No definite release date
  • CPU version currently does not support meshes

If money is no object and you require a method of interacting with your 3D scene over the web, perhaps whilst in front of clients, then iRay is for you.

However, if you are prepared to wait a bit for its release, GPU-based V-Ray RT will offer you quicker and cheaper results and will seamlessly fit into current workflow methods. It is worth mentioning that both solutions are scalable, meaning that you can add multiple graphics cards into a workstation or distribute the task over a network. Be aware that it is almost certain that each graphics card will need a 16x PCIe 2.0 slot to work fully, so check your motherboard before you upgrade.

The only other GPU rendering solution worth mentioning is OctaneRender, developed by Refractive Software. A limited feature demo is available for the Windows platform.

OctaneRender isn’t locked to a particular program: you simply import a Wavefront ‘obj’ file and then start applying shaders and adding lights to the scene whilst viewing your changes in realtime. The upside of this is that almost all 3D applications can export to it, but it does require a significant change in current workflow techniques and is unlikely to surpass the complex and now standard practices of Mental Ray and V-Ray.

NVIDIA Tesla technology

Right, you’ve heard us mention the Tesla a few times already, so it’s about time we explained why it is at the heart of this GPU revolution.

The Tesla S1070 is the world’s first 4 teraflop processor. This is achieved by using four 1 teraflop GPUs, each with 240 processor cores to give a total of 960 cores, all in 1U of rack space! This amount of cores will reduce render times from hours to minutes or even seconds.

Needless to say, there is also a workstation equivalent. The C1060 takes one of those 4GB GDDR3 1 teraflop GPUs used in the S1070 and uses a regular PCIe 2.0 bus so that it can be immediately implemented into existing workstations.

This breakthrough finally provides an affordable solution for individuals and small companies who can now have the processing power of 60 Quad core processors (which would previously take up the space of a small room!) located neatly alongside a regular graphics card used for video display.

So, together with a render engine such as V-Ray RT or iRay and a CUDA enabled graphics card, individuals will soon have access to realtime photorealistic rendering power at a fraction of the cost of a render farm. I’m sure you will agree this is a massive, game-changing development.

Back in the real world

Aside from all the facts and demos, if you ever needed proof that the burden of rendering has fallen on the shoulders of the GPU, then consider the hugely successful and brilliant film ‘Avatar’.

At last, film and special effects companies such as WETA now have the necessary hardware to produce stunningly beautiful and lushly detailed scenes with an extensive cast of virtual characters set in computer generated environments.

Of course this has been done before; in fact, the last breakthrough in this field was made on another of WETA’s creations, ‘Lord of the Rings’. However, those 3D effects were merged into real-world footage, whereas ‘Avatar’ is total fantasy: everything exists only in a 3D virtual model.

WETA were required, for the first time in the history of CG visual effects, to model, animate, shade, light and render billions rather than millions of polygons in a single scene. The computational power required to process the ‘Avatar’ shots was higher than anything they had attempted previously, so they turned to NVIDIA, masters of the GPU.

Step forward the Tesla S1070, which, along with new custom-designed software called PandaRay, allowed WETA to process their shots 25 times faster than any CPU-based server.

One scene in particular exemplifies the advantages of PandaRay and GPU-based servers. If you’ve got a copy, pay close attention to the shots where a huge flock of purple creatures and enemy helicopters are flying amongst tree-covered mountains. Those sorts of scenes were pre-computed in a day and a half, where previously it would have taken a week with traditional CPU-based servers.

[Images: Avatar 3D animation and rendering]

The increased rendering speed allowed for minute detail of vegetation and near perfect colour separation between distances, creating a more beautiful shot.

So as you can see, GPU computing is both the present and future of 3D rendering. If you would like any more information regarding CUDA-enabled graphics cards and servers, as well as rendering programs, please don’t hesitate to get in touch.

To find out more, get in touch with us on 03332 403 309 or email us.

Network Rendering III: Third-party management software

Last time, I looked at manufacturer-specific render farm management software. While this software can make a very good solution for many people, it doesn’t tell the full story. Many CG pipelines need to render images that have been created using software from several manufacturers. In order to effectively manage such pipelines, a third-party solution is needed that can queue and dispatch jobs to several software packages.

There are several packages that are capable of this and most of them act as remote program launchers with some kind of front-end queuing system. If you are planning on building a multi-package render farm, you will need a copy of each software package you intend to render with, along with all of its plug-ins, installed on each of your render nodes. Licensing for this varies between software packages, and most manufacturers offer a number of render-node licenses for free with each seat. Many render farm managers make use of the built-in network rendering functionality discussed in the last article. This helps them to get around any licensing issues and means you can avoid having to buy a fully licensed copy of your chosen software package(s) for each render node.

A few things to look for in a render farm manager are:

Queuing and priority – This should be present in any solution worth its salt – the more granular the better. On a large render farm, options to control queuing/priority on a per user basis can be very helpful. Some managers also have options for creating clusters of nodes that can then be assigned to a certain artist or department. This ensures that, on those nodes, the artist will always have priority.

Resource management – You may have a limited number of render node licenses for certain software packages or plug-ins. For example, you may have 20 render nodes but only 10 licenses for a certain plug-in. If your render manager tries to send frames using this plug-in to all 20 nodes, you may end up with certain elements not being rendered. Your chosen render manager needs to have some kind of method for managing these resources to avoid this happening.

Job dependence – Many render jobs will depend on other elements being completed first. You may, for example, have a final scene that uses externally created textures. If you were to submit the rendering of both the final scene and the baking of the texture to your farm at the same time, it may try to start rendering the final scene before the texture is baked. You need some way of telling the manager not to start rendering the final scene before the texture is baked.

In-app submission – Most artists will prefer to submit jobs from within their applications rather than using a render manager’s GUI. If you are planning on letting artists submit their jobs directly to the render farm, rather than through a render wrangler station, then it is worth checking that your chosen solution has submission plug-ins for the software you are using.
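
To make those features a little more concrete, here is a toy Python sketch of a dispatcher that respects priority, a limited licence pool and job dependencies. It is an illustration of the ideas above rather than a model of any particular render manager, and all of the names are invented.

```python
from dataclasses import dataclass, field

# Toy dispatcher illustrating priority, licence limits and job dependencies.
# Purely illustrative - not modelled on any real render manager.

@dataclass
class Job:
    name: str
    priority: int                  # higher runs first
    needs_plugin: bool = False     # consumes one of the limited plug-in licences
    depends_on: list = field(default_factory=list)

PLUGIN_LICENCES = 10

def dispatch(jobs, free_nodes, finished=()):
    """Pick which jobs may start now, given finished upstream jobs and free nodes."""
    licences_in_use, running = 0, []
    for job in sorted(jobs, key=lambda j: -j.priority):       # queuing and priority
        if len(running) >= free_nodes:
            break
        if any(dep not in finished for dep in job.depends_on):
            continue                                           # job dependence
        if job.needs_plugin and licences_in_use >= PLUGIN_LICENCES:
            continue                                           # resource management
        licences_in_use += job.needs_plugin
        running.append(job.name)
    return running

jobs = [
    Job("bake_texture", priority=5),
    Job("final_scene", priority=9, depends_on=["bake_texture"]),
    Job("preview", priority=2, needs_plugin=True),
]
print(dispatch(jobs, free_nodes=20))   # final_scene waits until bake_texture has finished
```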

The Future of Render Farms

Everyone seems to be talking about GPU-based computing at the moment and, with its large number of relatively simple calculations, CG rendering could lend itself very well to technologies such as CUDA or OpenCL. There are already software packages, such as iRay and StudioGPU, claiming tenfold speed increases when rendering on a GPU as opposed to the CPU. These packages are yet to be widely adopted but, considering such speed increases, the technology is bound to filter down into more mainstream packages as it matures. NVIDIA are already shipping Tesla GPU clusters consisting of several GPUs connected by high-speed links. In the future, we may see render farms built (at least partially) out of these clusters instead of traditional CPU-based servers.

If you are planning on building a render farm and would like some advice on the options available, give me a call on 03332 409 309 or email us. Visit us on Facebook or Twitter (@Jigsaw24video).

Experimenting with stereoscopy with Maya

As a long-time CINEMA 4D user, I was a little daunted at the prospect of learning to use Maya. But, as Autodesk’s application has built-in support for stereoscopic rendering and live stereoscopic previews (something only available in CINEMA 4D through plug-ins), my interest was piqued.

Creating stereoscopic content

All 3D software is technically capable of creating stereoscopic content: you just need to use two virtual cameras (one to represent each eye) and then finish the resulting content in the same way you would live action stereoscopic content. However, this approach creates a few problems. For one thing, animating two cameras as one to maintain the 3D effect often requires complex scripting to keep the cameras aligned and to achieve comfortable, working 3D. This problem is compounded by the fact that most software has no provision for previewing your work in stereoscopic 3D. A company called SVI do make a plug-in that will allow you to edit stereoscopic work within CINEMA 4D but, as Maya has this functionality built-in, I wanted to test it out.
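
For anyone curious what the bare-bones ‘two virtual cameras’ approach looks like, here is a minimal sketch using Maya’s Python commands module (maya.cmds). It only builds the naive parented pair described above, with names and the default separation chosen by me for illustration; Maya’s own stereo rigs, covered below, do far more.

```python
import maya.cmds as cmds

# Minimal "two cameras as one" setup: a left and right camera offset by the
# inter-ocular distance, parented under one group so they can be animated together.
# Illustration only - Maya's built-in stereo camera rigs are far more capable.

def make_naive_stereo_pair(interaxial=6.5):
    """Build a parented left/right camera pair; interaxial is in scene units."""
    left = cmds.rename(cmds.camera()[0], "naive_left")     # [0] is the transform node
    right = cmds.rename(cmds.camera()[0], "naive_right")

    # Offset each eye by half the separation from the rig's centre line.
    cmds.setAttr(left + ".translateX", -interaxial / 2.0)
    cmds.setAttr(right + ".translateX", interaxial / 2.0)

    # Animate this group, not the individual cameras, to keep the pair aligned.
    return cmds.group(left, right, name="naive_stereo_rig")
```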

Working in Maya

I decided very early on that learning to model within the application would take far too long so, after obtaining some demo content from my good friends at Autodesk, I set about learning the stereoscopic aspects of Maya.

[Image: Maya camera attributes]

The good news is that everything is very well integrated in the Maya program. Autodesk have used the built-in scripting language to create a range of stereoscopic camera rigs for you to use, and made it very simple to control all of the important stereoscopic parameters (see left). The rigs range from a simple three-camera rig (two of these cameras represent the viewer’s eyes and there is one in the centre for framing your shots) to more complex nine-camera stereo rigs. These more complicated rigs are useful for scenes with a lot of depth, such as outdoor scenes, as often setting the stereo parameters for objects in the foreground will break the stereo effect in the background, or vice versa. These rigs, combined with Maya’s render layers, allow you to use different stereo parameters on different objects in your scene, making it a very flexible solution.

When using these cameras, Maya can show a 3D preview directly in the viewport and supports anaglyph display (using inexpensive tinted glasses) for those without special displays, as well as options for more exotic displays, including horizontal interlaced, active shutter displays and checkerboard formats. This allows Maya to display an image on almost any 3D display out there. It’s worth bearing in mind that some of these displays require additional hardware, and you will certainly need a powerful graphics card to display a usable stereoscopic preview. We recommend NVIDIA’s Quadro range of graphics cards and can advise you on a 3D display for a range of budgets.

The camera rigs have several options for controlling the 3D effect. You can control the inter-ocular distance (the separation between the cameras) and the zero parallax plane, and there are options to mimic physical 3D rigs (such as parallel or off-axis configurations).
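
As a back-of-the-envelope illustration of how those parameters interact, the Python sketch below uses a common approximation for a parallel/off-axis rig: the on-screen parallax of a point is the inter-ocular distance scaled by how far the point sits from the zero parallax plane. The formula and the numbers are my own illustration rather than anything taken from Maya.

```python
# Approximate screen parallax for a parallel/off-axis stereo rig: objects on the
# zero parallax plane have none, distant objects approach the full inter-ocular
# separation, and closer objects go negative (they appear in front of the screen).

def screen_parallax(interaxial, zero_parallax_dist, object_dist):
    return interaxial * (1.0 - zero_parallax_dist / object_dist)

INTERAXIAL = 6.5      # cm, roughly human eye separation (assumed scene units)
ZERO_PLANE = 500.0    # cm, the distance at which objects sit "on the screen"

for distance in (250.0, 500.0, 2000.0, 1e9):
    parallax = screen_parallax(INTERAXIAL, ZERO_PLANE, distance)
    print(f"object at {distance:>12.0f} cm -> parallax {parallax:+.2f} cm")
```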

[Image: stereo volume in Maya]

This image illustrates the safe stereo volume (in blue) and the zero parallax plane (in red).

Maya will also show a visual representation of the zero parallax plane along with a comfortable viewing volume (think of this as a three-dimensional title guide). These features take a lot of the guesswork out of composing 3D images, and give you all the help you need to create comfortable 3D scenes.

Export options are also plentiful; Maya is able to directly export an anaglyph image (for posting to the web or printing out) or separate left and right streams (for post-processing or use with stereoscopic players).

In summary, although these options are available in other software through plug-ins or scripting, the fact that they are an integral part of Maya helps to make them a great solution for producing stereoscopic CG content. Being able to preview your work in realtime will also save you a huge amount of time.

To find out more about creating stereoscopic content in Maya (or CINEMA 4D), get in touch with us on 03332 409 309 or email us.

Network Rendering II: Management Software

Last time, I discussed the hardware requirements for a render farm and drew the conclusion that CPU power is still king for dedicated render machines. I will now take a look at some of the software management solutions that are available to manage all of that hardware.

Most popular rendering packages ship with a solution for managing network rendering. This section will look at some of those options:

NET Render – Maxon’s solution for rendering Cinema4D jobs across a network is NET Render. It will distribute the rendering of animations on a frame-by-frame basis or still images using the tiled camera. It can also be used to batch render multiple jobs from multiple machines. NET Render is available as a chargeable add-on to Cinema4D or is included with the XL (three client licenses) and Studio (unlimited client licenses) bundles.

NET Render will run on OS X, on Windows clients, or even a mixture of the two. It is relatively easy to set up and, because jobs are submitted through a web interface, they can theoretically be submitted from any internet-connected computer. To submit a job to NET Render, you have to open the interface and upload not only your scene file but also any associated assets, such as textures or externally referenced models, one by one to the NET Render server. While this ensures that all of the assets are in the right place, it can become tedious if you have many assets.

ScreamerNet – This represents NewTek’s solution for network rendering with LightWave. It is capable of distributing the rendering of an animation by having each node in your farm render complete frames. ScreamerNet ships with LightWave for no extra cost and can batch render jobs but only from a single machine. It is compatible with Windows or Mac machines.

ScreamerNet requires shared folders to be set up on your network for it to work properly, which means it cannot work in mixed environments. All render nodes should be running the same operating system as the machine that created the scene files. ScreamerNet gives a good speed advantage but it can be difficult and confusing to set up.

Aerender – Also known as the After Effects Render Engine, this is Adobe’s command line renderer for After Effects and can be used to set up an After Effects render farm. The render engine is included with every After Effects license and can be used to render multiple jobs from multiple machines. There is no queuing system; jobs are rendered on a first-come, first-served basis. Setting this up requires a watch folder to be shared out over the network, and the project and all associated assets must be copied there before rendering. This watch folder can make setting up cross-platform render farms difficult, although it is possible.
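
The watch-folder pattern is simple enough to sketch: a loop that polls a shared directory and renders new projects in the order they arrive. The Python below is a generic illustration with invented paths and a stubbed render call; it is not Adobe’s implementation.

```python
import time
from pathlib import Path

# Generic watch-folder loop: render projects first-come, first-served as they
# appear in a shared folder. The path and the render call are placeholders.

WATCH_FOLDER = Path("/mnt/render/watch")    # hypothetical shared network location
seen = set()

def render_project(project):
    print(f"rendering {project.name} ...")   # stand-in for launching the real renderer

while True:
    # Oldest first gives the first-come, first-served behaviour described above.
    for project in sorted(WATCH_FOLDER.glob("*.aep"), key=lambda p: p.stat().st_mtime):
        if project not in seen:
            seen.add(project)
            render_project(project)
    time.sleep(10)
```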

Backburner – Autodesk’s solution for network rendering supports several Autodesk products, including 3ds Max, Maya, Smoke for Mac, and Cleaner. Backburner can render multiple jobs from multiple machines and includes a facility for queuing and managing these jobs. It can even render jobs submitted from several different supported applications, provided those applications are running under the same operating system.

Backburner is supported on Windows, OS X, and Linux, but all render nodes must have the same operating system as the submitting workstations; mixed environments are not supported. Backburner is powerful, fairly easy to set up and expandable.

Mental Ray Satellite – Another Autodesk technology that allows distributed rendering. Mental Ray Satellite is designed to allow several machines to lend their CPU power to a designated workstation. Renders are started as if processing locally, and networked workstations help out with producing the final image(s), which is then displayed and saved on the creating workstation. Mental Ray Satellite works best when there is only a single workstation creating content on each set of render nodes. It is compatible with any Autodesk software making use of Mental Ray, and will run on Windows, OS X or Linux. Different packages ship with differing numbers of Mental Ray Satellite licenses, ranging from three to eight machines. This number can be extended by purchasing standalone Mental Ray licenses.

Next week, I will look at third party management software and make some predictions about the future of network rendering.

For more information about render farms or any of the products mentioned above, give me a call on 03332 409 309 or email us. Visit us on Facebook and Twitter (@Jigsaw24video).

Network Rendering I: What’s it all about?

Rendering a realistic image of a 3D scene is one of the most stressful things you can ask a computer to do. For complicated scenes, this can take hours or even days of processor time, leaving a computer effectively useless until the render is complete. This is simply unacceptable for most CG artists – many of them resort to leaving their machines on overnight to finish renders.

Network rendering is the process of having multiple networked machines collaborate on the same project, with the sole purpose of bringing the processing time down. These banks are often known as a render farm, where each machine will render a full frame of an animation or, in the case of a still image, a tile of that image. There are several ways to run a render farm and many of the popular 3D packages have their own management solution.

This is the first in a series of articles that will look at the options available for network rendering in order to help you make an informed decision on which one to use. I will start by looking at the hardware requirements for a render farm, along with the best workflows for creating a scene using network rendering. Future articles will look at render farm management software, including the tools that ship with popular content creation software as well as third-party solutions.


Currently, final renders are calculated using the CPU. While the field of GPU rendering is looking very promising for the future, it has yet to achieve widespread adoption. For this article, I will focus on CPU-based rendering as this is the industry standard. As I mentioned before, complex rendering will max out any CPU on the market for a significant amount of time. As these renders take so long, even a small increase in CPU speed can mean saving a few minutes or even hours on a single frame. If you consider that a typical animation will have 25-30 frames for every second, then increasing the speed of your cores can save quite a bit of time; increasing the number of cores can save even more. Most renderers are multi-threaded, so rendering scales almost linearly with the number of cores – going from 1 to 2 or 2 to 4 cores will equate to around 1.5 to 2 times the render speed.
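
To put that into rough numbers, here is a quick Python estimate of how total render time for a short animation falls as cores are added. All of the figures are assumptions for illustration, using 1.75x per doubling as a middle-ground value from the range above.

```python
# Back-of-the-envelope render time arithmetic (all numbers assumed for illustration).

FRAMES = 25 * 60 * 2             # a two-minute animation at 25 fps = 3000 frames
MINUTES_PER_FRAME_ONE_CORE = 20  # single-core render time for one frame
SPEEDUP_PER_DOUBLING = 1.75      # middle of the 1.5-2x range quoted above

def total_hours(cores):
    speedup, c = 1.0, 1
    while c < cores:             # apply the per-doubling speed-up for each doubling of cores
        c *= 2
        speedup *= SPEEDUP_PER_DOUBLING
    return FRAMES * MINUTES_PER_FRAME_ONE_CORE / speedup / 60.0

for cores in (1, 2, 4, 8, 64):
    print(f"{cores:3d} cores: ~{total_hours(cores):5.0f} hours")
```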

Bearing this in mind, building a render farm is all about getting as many fast cores as you can. These need to be backed up by a decent amount of RAM, as each machine will need to load the entire scene and any associated assets into memory to achieve the best performance. We recommend 1 to 4GB per CPU core depending on the type of software you are using and the complexity of your scenes. There is no requirement for any kind of graphics acceleration in a render farm machine and, typically, these machines are managed remotely, so they are not even hooked up to monitors during day-to-day use. This means that you can get away with very basic onboard graphics, as you will likely only use them during initial setup and troubleshooting of the machine.

That should have the heavy lifting covered. Another thing to consider is the networking hardware. For network rendering to work properly, the scene file and all its assets need to be stored on a network location that can be accessed by all of the nodes. Depending on the number of users and level of redundancy needed, this can be anything from a simple network attached hard drive to a full-blown RAID system. Quite a lot of data flows back and forth between render farm machines – the scene file and assets will be read from the file server and control messages will be sent between the worker machines, so a fast network is advisable. We recommend gigabit for general use or even something faster (like fibre channel) if you are rendering high definition video from After Effects or similar software.

That’s it for this installment. Next time we will look at software management solutions for all of this hardware.

You can find out more about network rendering, as well as all things 3D, by getting in touch with our experts on 03332 409 309 or by emailing us. Visit us on Facebook and Twitter (@Jigsaw24Video).