Demystifying 3D for students

Are you starting sixth form, college or university in September? If so, read on: this article will clear up some common misconceptions about the world of 3D modelling and offer sound advice for anyone just starting out.

The first piece of advice is that you should visit Autodesk’s student portal. Autodesk have very generously decided to offer their software free to students. You will need your student email address (one ending in .ac.uk) or a faculty member to sign up, but within a few minutes you can start downloading all your favourite software.

Once you have signed up, I would recommend creating a profile and posting work, as it’s a great way of learning new tricks, making contact with your peers and comparing your work with that of other students.

There are other resources that you can rely on to be informative and helpful, irrespective of your skill level. For example, forums such as our 3D site are there to advise on all aspects of the 3D workflow.

Anyway, once you have the free software, you’ll need to know how to get started. A good place to learn the basic interface is the Services and Support section of the Autodesk website. From there, you can select the application you want to start learning and can navigate to the video tutorials, read the documentation, get updates and much more.

So now you know how to get the software, you need to know what software to get; this can get confusing! Ultimately, it will largely be dependent on the type of course you are doing, so it may be worthwhile contacting your tutor and finding out in advance what you will be learning.

It is likely that your course will fall into one of five subjects: Engineering, Product Design, Built Environment, Multimedia (including animation) and Games Design. So that you can better understand the various applications and the fields in which they are used, we have given a brief summary of the major ones.

It is worth mentioning that most, if not all, of the non-Autodesk applications have free trials available on their respective websites, and they generally provide ample support to get started.

Final thought

Finally, remember not to try to master everything. There are so many applications with so many tools that no-one could possibly learn them all. I’d bet that even the most advanced users only know 40% of one individual application’s capabilities, so don’t despair if it takes months or even years to get to a decent standard. You will need to develop near god-like levels of patience, but if you stick with it, you will be rewarded.



If you want to find out more, give the team a call on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

 

Maya lighting tutorials

The other day I stumbled across these lighting tutorials which I thought I would share with you. They were published a few years ago and provide a brief introduction to the theories of lighting as well as how to practically implement them into Maya.

The tutorials are broken down into six separate sections that cover different types of lighting, such as moonlight, candlelight and underwater light. I really recommend reading them even if your weapon of choice isn’t Maya or Mental Ray.

Below are some examples.

[Example render: underwater lighting]

[Example render: twilight lighting]

Enjoy!

For more information on improving your 3D workflow, call our team on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

 

 

A shining example of 3ds Max, V-Ray and After Effects

Unless you have been living in a cave for the past seven months, you will have probably seen and admired Alex Roman’s short film, The Third and the Seventh. It is without doubt the best photo-realistic short film ever produced and has successfully managed to make almost everyone in the industry feel woefully inadequate!

Watch the masterclass in 3ds Max, V-Ray and After Effects here.

Be sure to check out the ‘making of’ videos as well.

To find out more, call our 3D team on 03332 409 309 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

Linear workflow and gamma correction – part 4

This is the final part of our series of linear workflow articles. Here, I will look at the manual method of working in a linear workspace with 3ds Max and Mental Ray.

The gamma correction of the bitmap inputs is handled in exactly the same way as in the VRay workflow: you simply add a colour correction node to the diffuse channel and use a gamma value of 0.4545.

As you might expect, the process of gamma correcting the image output is different for Mental Ray, but is thankfully very straightforward. Press F10 or go to Rendering > Render Setup and select Mental Ray as the renderer, then switch over to the Renderer tab and scroll down to the Camera Effects section. Click on the empty slot next to the Lens shader, and choose a Utility Gamma and Gain shader.

Instance this to the material editor and make the changes as shown in figure 1.

[Figure 1: the Utility Gamma and Gain shader settings]

This will bake the gamma correction into your output image, which is ideal for test renders or if you aren’t planning on doing any post-production work on the image. If you are, you will need to remember to return the gamma value to 1 when you are ready to start your final render.
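If it helps to see the maths rather than the shader settings, here is a minimal sketch (plain NumPy, not actual mental ray shader code) of the difference between the two output routes: baking the correction simply applies the 2.2 display gamma to the linear frame buffer, while leaving the gamma at 1 keeps the data linear for post.

```python
import numpy as np

# A stand-in for a linear (gamma 1.0) frame buffer straight out of the renderer.
linear_render = np.random.rand(4, 4, 3).astype(np.float32)

def bake_display_gamma(image, gamma=2.2):
    """Bake a display gamma into the image, roughly what the lens shader does
    when its gamma is set to 2.2 (handy for test renders)."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)

test_render = bake_display_gamma(linear_render)   # looks correct on screen straight away
final_render_for_post = linear_render             # gamma left at 1.0 for compositing
```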

As you can see, it is a very simple process and one that is very easy to implement. And that concludes this series of articles. I hope you have found them informative and interesting.

If you have any questions, call me on 03332 409 306 or email 3D@jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

Linear workflow and gamma correction – part 3

In this, the third of what will now be four parts, I look at the manual method of setting up and working in a linear workspace with 3ds Max and VRay.

I’ll try not to repeat any of the points made in earlier articles, but it is important to reiterate that all inputs and outputs require some form of gamma correction.

The Input

The method I am recommending is to add a colour correction node to your bitmaps and colours, and then apply an inverse gamma curve to that by setting the RGB gamma to 0.4545. You should remember this value from the first article – if you don’t, I’d advise you to take another look.

[Figure: a colour correction node with the RGB gamma set to 0.4545]

This will no doubt be a change to your existing workflow and, to start with, you will probably forget to apply this additional node when creating materials, but it really is the simplest and most flexible method.

It gives you absolute control over the amount of correction you are applying and allows you to make some materials darker or lighter depending on your preference, as well as tweaking the other options that the colour correction node offers.
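For anyone who prefers numbers to dialogue boxes, here is a minimal sketch (plain NumPy, not 3ds Max code) of what that 0.4545 gamma does to a bitmap: applying a gamma of 1/2.2 is the same as raising the stored values to the power 2.2, which returns the texture to linear light.

```python
import numpy as np

INVERSE_GAMMA = 1.0 / 2.2   # the 0.4545 value entered in the colour correction node

def degamma(bitmap):
    """Linearise a gamma-encoded texture: applying a gamma of 0.4545 is
    equivalent to raising the stored values to the power 2.2."""
    return np.clip(bitmap, 0.0, 1.0) ** (1.0 / INVERSE_GAMMA)

# A texel stored as 0.5 in the bitmap is actually much darker in linear light:
print(degamma(np.array([0.5])))   # ~0.218
```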

The Output

As mentioned in part 2, there are slightly different workflows depending on what you are planning to do with the render after the 3D application. If you aren’t going to do any post-processing, then you will need to bake the gamma correction into the final render. VRay does this with the Colour Mapping rollout in the Render settings.

[Figure: the VRay Colour Mapping rollout]

Baking this gamma correction is also the method I choose when rendering out test scenes, as it gives instant feedback without the need to get the image into post. If you adopt this method, you will need to remember to revert to the default of 1 when rendering out the final image.

There is, of course, a tool for this that can also help with previews. What you will need to do is enable the VRay frame buffer from the render settings, return the colour mapping gamma to 1 and then toggle the sRGB button to apply the gamma correction.

[Figure: the VRay frame buffer with the sRGB button enabled]

The correction is made after the image has been rendered, so there will be times when you turn it on to correct and brighten up the image and find that, because there wasn’t enough sampling in the darker areas, the result is noisy. This is the trade-off for sheer ease of use! Personally, I don’t use this method (for the above reason), but it is a very useful tool.
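A quick way to convince yourself of why that happens (a toy NumPy example, nothing to do with VRay’s own sampling) is to add the same small amount of noise to a dark pixel and a bright pixel in linear space and then apply the 2.2 display gamma: the dark values are stretched far more, so the same noise becomes much more visible.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.005, size=1000)    # identical noise level in linear space

dark = np.clip(0.02 + noise, 0.0, 1.0)       # a shadow area
bright = np.clip(0.50 + noise, 0.0, 1.0)     # a well-lit area

def to_display(x, gamma=2.2):
    return x ** (1.0 / gamma)

# Spread of the displayed values: the shadows end up far noisier on screen.
print(np.std(to_display(dark)))     # noticeably larger...
print(np.std(to_display(bright)))   # ...than this
```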

By now you should be familiar with both the concept and the workflow involved in manually setting up a linear workspace with 3ds Max and VRay. It may be worth getting to grips with this now by testing it out on some of your old scenes and seeing the improvement for yourself.

Part four of the series is on its way. In the meantime, if you’d like to find out more, give the team a call on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

Maxon Cinema 4D plugin for After Effects

Maxon have recently made available the CS5 compatible plugin for Cinema 4D and After Effects.

The plugin features native 64-bit compatibility for Windows and Mac OS X, allowing Cinema 4D users to take full advantage of available hardware and operating system performance for improved rendering and workflow efficiency directly inside After Effects.

The plugins are available here.

Email us for more information at 3D@Jigsaw24.com or call our 3D team on 03332 409 306. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page. Visit our website Jigsaw24.com.

 

Linear workflow and gamma correction – part 2

If you have read the first part of my gamma correction article, you should now understand exactly why this alteration to your workflow is necessary.

In this second part, I will look at the built-in linear workflow within Autodesk 3ds Max. It is the quickest method as it is controlled entirely by 3ds Max and requires no disruption to your existing workflow.

The test scene

We will be using the test scene in figure 1 for all of the examples. It is a very basic scene consisting of a box with a wooden floor, a camera, three VRay lights and a simple structure acting as the light fitting. All render, material and light settings will remain the same, unless otherwise stated.

Linear Workflow

The 3ds Max preference method

When the above scene is rendered with default 3ds Max settings, the result is figure 2.

[Figure 2: the test scene rendered with default settings]

As you can see, the image is very dark with almost no detail in the darker areas. Previously, most users would just try to compensate for the lack of light by either increasing the intensity or even the quantity of the lights. As explained in part 1, there are obvious drawbacks to this. Figure 3 illustrates the effect of increasing the intensity of the lights.

[Figure 3: the same scene with the light intensity increased]

Although you may have accomplished your goal of getting more light into the scene, you have also introduced some very small artifacting around the lid of the teapot as well as a severe hotspot on the back wall. If you cannot see this highly contrasted hotspot, try raising your chair ever so slightly. You will see that the gradient is very sharp and not at all realistic.

What you should be doing is gamma correcting both the input and output. The simplest method is to go to Customise > Preferences > Gamma and LUT, and select the settings shown in figure 4.

[Figure 4: the Gamma and LUT preference settings]

Once these preferences have been set, our rendered test scene looks like figure 5.

[Figure 5: the test scene rendered with gamma correction enabled]

The benefits are there for you to see but, if you need any reminding, please refer back to the previous article, which lists in detail the full benefits of working in a linear workspace.

Because we aren’t doing any post-processing with this scene, it is perfectly acceptable to export a non-linear, gamma-corrected image. If you were intending to post-process the image, you would need to override the output gamma in the ‘Save as’ dialogue box or, alternatively, disable the 2.2 output default in the preferences. You will also need to output in something other than the JPEG format! We recommend half-float OpenEXR, or 16-bit TIFF or PNG; anything else is either overkill or doesn’t contain enough image data.
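To see why an 8-bit format such as JPEG is a poor home for a linear render, here is a small sketch (plain NumPy, with the actual file formats left out) comparing how many distinct levels 8-bit and 16-bit storage leave in the deep shadows of a linear image – the region that gets stretched the most when the 2.2 gamma is applied on output.

```python
import numpy as np

# Linear values in deep shadow: the bottom 1% of the range stands in for
# the shadow detail that the 2.2 output gamma will stretch the most.
shadows = np.linspace(0.0, 0.01, 10_000)

levels_8bit = np.unique(np.round(shadows * 255)).size      # distinct codes available
levels_16bit = np.unique(np.round(shadows * 65535)).size

print(levels_8bit)    # a handful of steps -> visible banding after gamma correction
print(levels_16bit)   # hundreds of steps -> smooth gradients survive
```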

I’m sure you will agree that this method is very simple and we hope you can see the benefits of making this change to your workflow. In the upcoming articles, we will cover the manual methods of both VRay and Mental Ray with 3ds Max.

To find out more, get in touch with the team on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page. Visit our website Jigsaw24.com.

Linear workflow and gamma correction

What is it and why do I need to care?

Light intensity behaves linearly in the real world, whereas electronic displays do not. In the real world, two lights of the same intensity directed on the same spot will illuminate the area with twice the intensity of a single light. This can be expressed as a simple graph, as shown in figure 1 (below left).

[Figures 1 and 2: the linear response of light (left) and a 2.2 gamma correction curve (right)]

This is perfectly sensible and is to be expected, but a problem arises when light is displayed on electronic equipment. If you double the voltage applied to a liquid crystal in an LCD monitor, the intensity of the light emitted by that crystal doesn’t double as it would in nature, so the response isn’t linear. Because of this, computer monitors cannot display images and video accurately without a certain amount of pre-processing.

Both hardware and software typically apply what is known as a gamma correction curve to images and video so that they can be displayed on monitors within a sensible colour range. This gamma correction curve typically has a value of 2.2 and can be seen in figure 2 (above right).

So, if the manufacturers of hardware already correct the deficiencies of the monitor, why do I need to care?

Because any manipulation you make to an image or colour on a computer is simply a change to an already gamma-corrected image, which is itself only a best guess. The result is images that aren’t physically accurate and are generally of poorer quality.

The linear workflow method tries to address this issue. The process involves applying an inverse gamma curve (de-gamma) to all input images and video so that the footage is converted back into a linear format and is then ready to be worked on and manipulated. This is shown in figure 3 (below left). The value of this curve is obtained by dividing the target gamma value of 1 by the current gamma value of 2.2; therefore, the value of the inverse gamma curve is 1 / 2.2 = 0.4545.

[Figures 3 and 4: the inverse gamma (de-gamma) curve (left) and the full linear workflow (right)]

The linear workflow for the 3D industry boils down to image input and output. As you now know, images will almost always need an inverse gamma curve applied to them when they are brought into a 3D application. This will ensure that you are working in a linear workspace. When the image has been rendered, post-processed and finalised, a gamma correction curve needs to be applied so that it can be displayed on computer monitors. This process is demonstrated in figure 4 (above right).
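As a minimal sketch of that input/output round trip (plain NumPy standing in for whatever your 3D and compositing packages do internally, and a simple 2.2 power curve standing in for the full sRGB transfer function):

```python
import numpy as np

GAMMA = 2.2

def degamma(encoded):
    """Input stage: remove the 2.2 gamma from textures and footage -> linear light."""
    return np.clip(encoded, 0.0, 1.0) ** GAMMA

def regamma(linear):
    """Output stage: re-apply the 2.2 gamma so the result displays correctly."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)

texture = np.array([0.2, 0.5, 0.8])   # values as stored in an ordinary image file

linear = degamma(texture)             # work on this: lighting, blending, compositing...
linear_result = linear * 2.0          # e.g. doubling the light hitting the surface

display = regamma(linear_result)      # final gamma correction for the monitor
print(display)
```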

I’m actually quite satisfied with the images that I’m producing. Is it really worth all the trouble?

Yes! Below are just some of the benefits of working in a linear workspace.

  • You will spend less time tinkering with your images to get realistic results, as working in a linear workspace yields physically accurate output.
  • There should be no need to render out different channels and composite them later in post, as again the rendered image will be physically accurate. (It is also worth pointing out that any blending of layers in post, such as Add or Multiply, is completely inaccurate, and frankly mathematically insane, when you are not working in a linear workspace. This is because you are blending layers that have themselves already been ‘corrected’; see the small numerical example after this list.)
  • Effects, like fog, lights and motion blur, work better in a linear workspace.
  • Eliminates unrealistically strong reflections.
  • Smoother gradients in darker areas.
  • Less artifacting around specular highlights.
  • Fewer blown out or overexposed areas.
  • You can use lower intensity lights in your scene and push them further.
  • Smaller file sizes, as there is no need to add all the individual render channels.
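To illustrate the point about blending made in the list above, here is a toy calculation (NumPy, with a simple 2.2 power curve standing in for the display gamma). Adding two layers that have already been gamma corrected gives a very different answer from adding them in linear light and correcting once at the end, and only the linear version matches how light actually combines:

```python
import numpy as np

GAMMA = 2.2

def encode(x):   # linear light -> gamma-corrected display value
    return np.clip(x, 0.0, 1.0) ** (1.0 / GAMMA)

# Two render layers, each contributing 25% of full intensity in linear light.
a_linear, b_linear = 0.25, 0.25

# Physically correct: add the light in linear space, then gamma correct once.
correct = encode(a_linear + b_linear)                            # ~0.73

# What an 'Add' blend does to layers that were gamma corrected first:
naive = np.clip(encode(a_linear) + encode(b_linear), 0.0, 1.0)   # ~1.07, clips to 1.0

print(correct, naive)   # the naive result blows out to pure white
```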

 

In the rest of the series, I will look at 3ds Max’s built-in gamma correction, 3ds Max and VRay, and 3ds Max and Mental Ray. In the meantime, if you want to find out more, give the team a call on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

GPU rendering: an update

Over the past few months, GPU rendering has taken a few steps forward and, whilst we are nowhere near where we need to be, it does look promising. So let’s review what has been happening.

3ds Max 2011 Quicksilver Renderer

3ds Max 2011 was released in April 2010 and took most of us by surprise when it was revealed that Autodesk had included with it the CPU/GPU Quicksilver renderer. As this was the first GPU-based renderer actually integrated into a mainstream 3D program, we were hopeful that it would provide the perfect solution to the ever-increasing problem of render times.

We did some tests on Quicksilver and concluded that it is far from perfect, but that it is looking promising and should improve with future releases and service packs.

VRay RT

It seems ages ago now that we saw the demo of VRay RT GPU at SIGGRAPH 2009. Since then, Chaos Group have released the nearly brilliant CPU version of RT for 3ds Max.

Whilst the CPU version is a massive step forward, it is clear that we are all still waiting for the fully-fledged GPU version. Our excitement grew when we saw the most recent video by Chaos Group (released 14th May), in which they say that VRay RT on the GPU is ‘practically a completed product’ and that a version for Maya will be available within about two months’ time.

They have also conducted tests which demonstrate the huge leap forward in quality and speed. Another interesting fact about RT is that it runs on OpenCL, meaning that it will run on graphics cards from both NVIDIA and AMD.

iRay

The announcement of iRay, coupled with misleading documentation, has successfully managed to confuse most of the 3D industry that has been anticipating its release. Let’s try and clear things up.

Mental Images, the developer, stated in their documentation that ‘iRay is provided with mental ray from version 3.8 and RealityServer from version 3.0’ and then clarified this by saying ‘iRay-enabled products feature an iRay rendering mode’.

I suppose that this statement is true, in that mental ray bought as a standalone product is iRay enabled, but the confusion seems to be that, whilst the 2011 releases of Autodesk products do ship with mental ray 3.8, they haven’t enabled the iRay rendering mode.

iRay remains something we are very much looking forward to, but it seems that we will have to wait the best part of a year before it is integrated into the Autodesk suite of products.

Unbiased rendering

iRay is an unbiased renderer, meaning that there are no render settings as such: you just import or create a model, set iRay running on it and watch as the image quality gets progressively better over time. Items such as materials and lighting can be changed with near-instant feedback.

Many people think that this kind of workflow will be restrictive, but we found quite the opposite. It feels liberating to just make edits to your materials and lights without having to worry about using render settings and tricks to improve your image quality. All of that is left to iRay, which uses real-world physical properties to calculate its images and is extremely fast compared to traditional renderers.
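The ‘gets progressively better over time’ behaviour is progressive sampling: every pass adds another noisy, unbiased estimate of the image and the running average converges towards the clean result. A toy sketch of the idea (pure NumPy, obviously nothing like iRay’s actual light transport):

```python
import numpy as np

rng = np.random.default_rng(1)
true_image = np.full((64, 64), 0.3)           # the image the renderer is converging to

accumulated = np.zeros_like(true_image)
for n in range(1, 501):
    # Each pass is an unbiased but noisy estimate of the true image.
    sample = true_image + rng.normal(0.0, 0.2, true_image.shape)
    accumulated += sample
    preview = accumulated / n                 # the running average shown in the viewport

    if n in (1, 10, 100, 500):
        error = np.abs(preview - true_image).mean()
        print(f"pass {n:3d}: mean error {error:.4f}")   # error shrinks as passes pile up
```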

There seem to be two ways in which iRay and other similar renderers can fit into your existing pipeline. The first would be to use it as a production renderer. At present, this limits you to mental ray materials and, as iRay is not yet integrated into the Autodesk line-up, it adds an extra step to your workflow.

The other, more sensible, use would be on location with clients, so that you can get immediate feedback on colour schemes, materials and lighting.

For example, say you are an interior designer and have already modelled the set in 3ds Max. You can import the scene into iRay, light it and apply the materials, then take your laptop to the client and change anything they wish right there and then. This would eliminate the back-and-forth nature of finalising and perfecting a job to a client’s needs.

The rest is then up to you and your client. If they like the GPU-produced render, then fine, but if not, you could take all those tweaks made in front of the client and re-create them in 3ds Max, knowing that there won’t be any further changes necessary.

We are currently testing several other GPU-based renderers similar to iRay so watch this space for reviews.

For now, if you want to know more, you can get in touch with us on 03332 409 306 or email 3D@Jigsaw24.com. To receive the latest 3D news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

The future of rendering

A little background…

Since the invention of 3D rendering, the CPU has been responsible for rendering out images and animations from 3D and video applications, using graphics APIs such as OpenGL or DirectX to communicate with the graphics card.

With only a few exceptions, render engines use ‘bucket rendering’. This splits the image into buckets (squares), which are processed on the CPU until the full image is complete. These buckets can be distributed over a render farm to speed up rendering but, if you are working on a single workstation, the total number of active buckets depends on the number of CPU cores you have available.

[Figure: an 8-core CPU rendering 8 buckets simultaneously]
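To make the idea concrete, here is a minimal sketch of bucket rendering in Python (a placeholder per-pixel function stands in for a real renderer): the image is cut into square buckets and a process pool works through them, with the number of simultaneously active buckets bounded by the number of CPU cores.

```python
import numpy as np
from multiprocessing import Pool, cpu_count

WIDTH, HEIGHT, BUCKET = 256, 256, 64    # image size and bucket size in pixels

def render_bucket(origin):
    """Stand-in for a real renderer: shade one square bucket of the image."""
    x0, y0 = origin
    ys, xs = np.mgrid[y0:y0 + BUCKET, x0:x0 + BUCKET]
    return origin, np.sin(xs * 0.05) * np.cos(ys * 0.05) * 0.5 + 0.5   # fake shading

if __name__ == "__main__":
    buckets = [(x, y) for y in range(0, HEIGHT, BUCKET)
                      for x in range(0, WIDTH, BUCKET)]
    image = np.zeros((HEIGHT, WIDTH))

    # One worker per CPU core: only that many buckets are 'active' at any moment.
    with Pool(processes=cpu_count()) as pool:
        for (x0, y0), tile in pool.imap_unordered(render_bucket, buckets):
            image[y0:y0 + BUCKET, x0:x0 + BUCKET] = tile

    print(image.shape, image.min(), image.max())
```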

Recent developments

Spurred on by this limitation, manufacturers of both hardware and software have been working towards developing a method that will offload the job of rendering from the CPU to the GPU. Although there are a number of render engines currently being developed along these lines, almost all will utilise the CUDA technology of NVIDIA graphics cards.

As mentioned earlier, the total number of active buckets is determined by the number of cores available. If we consider an entry-level NVIDIA GeForce graphics card such as the 8800GT, which has 112 cores, you start to see how GPU rendering can and will have a massive impact on render times, as many more buckets can be rendered simultaneously.

As you would expect, the better the graphics card, the faster the render. This means that the high-end GeForce and Quadro graphics cards could render up to 60 times faster than a standard quad-core CPU. This will improve and quicken the artist’s workflow by allowing the user to see the immediate effect on the scene after any alteration. This could be as simple as changing a light or material parameter, or introducing a new object. It can also allow the artist to pan and zoom the camera around the scene without the need to wait while the frame buffer re-renders. Instead, the viewport follows the user’s actions while working on the scene and automatically (and progressively) generates a photorealistic preview.

Whilst NVIDIA’s CUDA engine is clearly the leader in this field – effectively locking all GPU processing tasks to NVIDIA hardware – there are others on the horizon. Apple have been working with the Khronos Group on OpenCL, a standards-based method for general purpose GPU computing.

Because it democratises GPU processing, any program on the Macintosh, Windows and Linux platforms will be able to compute 3D data on any graphics card, regardless of manufacturer. Not only is OpenCL a genuine competitor, it is likely to supersede CUDA as the API of choice, allowing programs such as Maxon’s Cinema 4D and Autodesk’s Maya to render on the GPU.

Another worthy mention is Microsoft DirectX 11’s compute shader feature, which is shipping with Windows 7. This feature enables post-processing effects, such as depth of field and motion blur, to be carried out by the GPU. Although locked to the Windows platform, it can be used on both AMD and NVIDIA graphics cards.

Click here to see our selection of graphics cards or, for more information, please contact us on 3d@Jigsaw24.com or call 03332 409 309.

Rendering software

In order to use the GPU cores for rendering, we have had to wait for software companies to catch up with the developments at NVIDIA. There are two clear leaders in the race to get a fully supported GPU renderer on the shelves: Mental Ray’s iRay and Chaos Group’s V-Ray RT.

iRay will hopefully be available to all customers who upgrade to future releases of Mental Ray, either as a standalone renderer or from within applications that include the software (such as Autodesk 3ds Max, Autodesk Maya and Autodesk Softimage).

Although impressive, indoor scenes or scenes with a large amount of bounced light seem to take significantly longer than other images to render fully. Even after a few seconds, the image looks like poor reception on a television and not at all production quality. These results were obtained using four GPUs; what type we don’t know, but most likely it would have been a Tesla S1070 (a platform iRay was designed to run on).

Incredibly, those pioneers over at Mental Images have also found the time to develop mental mill and, in conjunction with NVIDIA, the RealityServer. mental mill enables artists to create shaders and graphs for GPU and CPU rendering through an intuitive GUI with realtime visual feedback. The NVIDIA RealityServer delivers the power of thousands of cores that allow for realtime rendering over the web, perfect for product designers and architects who can easily visualise their clients’ projects with a laptop or even an iPhone!

The NVIDIA RealityServer platform is a powerful combination of NVIDIA Tesla GPUs, RealityServer software and iRay. Later, we will consider the NVIDIA Tesla GPUs in more depth and explore how they too are shaping the future of GPU rendering.

The other viable option for realtime rendering is V-Ray RT. Whilst V-Ray RT is currently CPU based, Chaos Group have already developed it into a fully interactive GPU accelerated renderer, which will hopefully be available as a service pack upgrade this year. A beta version of this was showcased last year at the industry event SIGGRAPH and was considered the major highlight of the show.

V-Ray has long been at the forefront of photorealistic rendering and is well known for being the fastest and easiest to use. In contrast to the iRay demo, it appears that V-Ray RT will yield faster results whilst using mid- to high-range graphics cards. In the video, they use an NVIDIA GeForce GTX 285, which is available for just £399 ex VAT. Once V-Ray RT goes fully GPU-based, users should expect renders to be completed 10 to 20 times faster than with its CPU counterpart.

So which is better?

iRay

Pros

  • Available as a future release of Mental Ray
  • Web interface
  • mental mill

Cons

  • Very expensive hardware
  • Slower than V-Ray RT

V-Ray RT

Pros

  • Faster than iRay
  • Cheaper hardware

Cons

  • No web interface
  • No definite release date
  • CPU version currently does not support meshes

If money is no object and you require a method of interacting with your 3D scene over the web, perhaps whilst in front of clients, then iRay is for you.

However, if you are prepared to wait a bit for its release, GPU-based V-Ray RT will offer you quicker and cheaper results and will fit seamlessly into current workflow methods. It is worth mentioning that both solutions are scalable, meaning that you can add multiple graphics cards to a workstation or distribute the task over a network. Be aware that it is almost certain that each graphics card will need a 16x PCIe 2.0 slot to work fully, so check your motherboard before you upgrade.

The only other GPU rendering solution worth mentioning is OctaneRender, developed by Refractive Software. A limited-feature demo is available for the Windows platform.

OctaneRender isn’t locked to a particular program: you simply import a Wavefront OBJ file and then start applying shaders and adding lights to the scene whilst viewing your changes in realtime. The upside of this is that almost all 3D applications can export to it, but it does require a significant change in current workflow techniques and is unlikely to surpass the complex and now standard practices of Mental Ray and V-Ray.
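As an aside, part of why OBJ makes such a convenient interchange format is that it is plain text: vertex positions on ‘v’ lines and faces on ‘f’ lines that index into them. A minimal, deliberately incomplete Python reader (it ignores normals, UVs, materials and groups) might look like this:

```python
def load_obj(path):
    """Read vertex positions and faces from a Wavefront OBJ file.
    Normals, texture coordinates, materials and groups are ignored."""
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":          # vertex position: v x y z
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":        # face: f v1 v2 v3 ... (1-based vertex indices)
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# Example usage, assuming you have exported scene.obj from your 3D application:
# verts, faces = load_obj("scene.obj")
# print(len(verts), "vertices,", len(faces), "faces")
```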

NVIDIA Tesla technology

Right, you’ve heard us mention the Tesla a few times already, so it’s about time we explain why it is at the heart of this GPU revolution.

The Tesla S1070 is the world’s first 4 teraflop processor. This is achieved by using four 1 teraflop GPUs, each with 240 processor cores, to give a total of 960 cores, all in 1U of rack space! This number of cores will reduce render times from hours to minutes or even seconds.

Needless to say, there is also a workstation equivalent. The C1060 takes one of those 4GB GDDR3, 1 teraflop GPUs used in the S1070 and puts it on a regular PCIe 2.0 bus so that it can be dropped straight into existing workstations.

This breakthrough finally provides an affordable solution for individuals and small companies, who can now have the processing power of 60 quad-core processors (which would previously have taken up the space of a small room!) located neatly alongside a regular graphics card used for video display.

So, together with a render engine such as V-Ray RT or iRay and a CUDA enabled graphics card, individuals will soon have access to realtime photorealistic rendering power at a fraction of the cost of a render farm. I’m sure you will agree this is a massive, game-changing development.

Back in the real world

Aside from all the facts and demos, if you ever needed proof that the burden of rendering has fallen on the shoulders of the GPU, then consider the hugely successful and brilliant film ‘Avatar’.

At last, film and special effects companies such as WETA now have the necessary hardware to produce stunningly beautiful and lushly detailed scenes with an extensive cast of virtual characters set in computer generated environments.

Of course this has been done before; in fact, the last breakthrough in this field was made on another of WETA’s creations, ‘Lord of the Rings’. However, those 3D effects were merged into real-world footage, whereas ‘Avatar’ is total fantasy: everything exists only in a 3D virtual model.

WETA were required, for the first time in the history of CG visual effects, to model, animate, shade, light and render billions rather than millions of polygons in a single scene. The computational power required to process the ‘Avatar’ shots was higher than anything they had attempted previously, so they turned to NVIDIA, masters of the GPU.

Step forward the Tesla S1070, which, along with new custom-designed software, PandaRay, allowed WETA to process their shots 25 times faster than any CPU-based server.

One scene in particular exemplifies the advantages of PandaRay and GPU-based servers. If you’ve got a copy, pay close attention to the shots where a huge flock of purple creatures and enemy helicopters are flying amongst tree-covered mountains. Those sorts of scenes were pre-computed in a day and a half, where previously they would have taken a week with traditional CPU-based servers.


The increased rendering speed allowed for minute detail of vegetation and near perfect colour separation between distances, creating a more beautiful shot.

So as you can see, GPU computing is both the present and future of 3D rendering. If you would like any more information regarding CUDA-enabled graphics cards and servers, as well as rendering programs, please don’t hesitate to get in touch.

To find out more, get in touch with us on 03332 403 309 or email 3D@Jigsaw24.com.