Are you using the right hardware for your Autodesk software?

Whether you’re sculpting in Mudbox, animating characters in Maya, whipping up pre-visualisations in 3ds Max or drafting like billy-o in AutoCAD LT, the basics of a good Autodesk workstation stay the same: stock up on RAM and pack in as many cores as possible. But with so many different software suites and qualified components out there, it can be difficult to work out which workstation is best for you. To help make things easier, here are our top tips for choosing Mac and PC workstations for your Autodesk software of choice…

For AutoCAD and AutoCAD LT for Mac users

We have good news: virtually any Mac will run AutoCAD or AutoCAD LT, from the beefiest of Mac Pros (ideal for handling big models quickly) to the smallest Mac mini (great for setting up freelancers with temporary desks, or if you want to take your setup with you to meet a client, as it’ll plug into any keyboard and display).

We know that a lot of users are sticking to their ageing Mac Pros in order to keep using NVIDIA Quadro 4000 or Quadro K5000 cards due to their higher fidelity, but the latest models have a huge amount to offer. With powerful 12-core CPUs on offer, the latest Mac Pro can help you create and navigate simulations far faster. The fact that the usual lumbering hard drive has been replaced by a fast, agile SSD means you’ll also be able to work with huge models far more efficiently.

If you’re really itching to customise your workstation, we’ll say it again: you can never have enough RAM. Get in touch with our team to find out how easy it is to pack your Mac with some extra memory.

For 3ds Max users

If you’re working in a field like games development, odds are you’re using 3ds Max or a 3ds Max-based Entertainment Creation Suite (if you’re not, you might want to drop us a line…). You’ll want plenty of processing power, so we’d recommend opting for a 16-core HP Z820 for maximum responsiveness, although a high-spec Z620 will do the trick if you’re budget-conscious. While the new Mac Pros look promising, we’re still waiting for Autodesk to qualify a configuration, so if you need an interim Mac workstation, go for a 27” quad-core i7 3.4GHz iMac with at least 8GB of RAM – preferably more.

If you invested in iMacs before the latest Mac Pro was announced and are wincing at the cost of replacing them, remember that you can use an iMac’s screen as a second display and harness its internals as part of your rendering setup, meaning that artists can continue working on their Mac Pro while their iMac takes care of rendering work, rather than sitting and watching the progress bar.

When it comes to graphics, you need to bear in mind that Autodesk recently rewrote 3ds Max’s viewport engine, moving it over to DirectX from OpenGL. This means you’ll get faster performance for your money using gaming cards than you will using traditionally professional cards – which is great news for your wallet, and means you can design your work on the same card your end user will be playing it on.

One good choice for working with Autodesk software is NVIDIA’s 6GB GeForce GTX Titan, as it has the kind of stamina you usually only see in pro cards, so it’s the least likely to melt under constant use. However, it’s not qualified yet and is also pretty expensive, so you might want to opt for Autodesk’s qualified card, the lower-spec 4GB GeForce GTX 680, which delivers a surprising amount of power for the price.

For Maya and Mudbox

For areas like graphics or post-production work, we’d typically recommend Autodesk Maya or a Maya-centric Entertainment Creation Suite (Autodesk’s Entertainment Creation Suite Ultimate gets you Maya, 3ds Max, MotionBuilder, Mudbox, Softimage and SketchBook Designer, so it’s a good option if you want to make sure you’re covered for every eventuality). The main difference between Maya and an application like 3ds Max is that you really need an NVIDIA Quadro card to get the best possible graphics performance. The Quadro drivers are optimised for Maya, and going for something like the ultra-powerful Quadro K5000 (or the K2000 if you’re kitting out an assist station) will give you the smoothest, most accurate viewport performance.

While we’re still waiting to hear how Autodesk plan to handle the dual GPU potential of the 2013 Mac Pro, if you need a Mac in the interim then your only real option is the top-spec 3.4GHz i7 iMac, with 8 or 16GB of RAM depending on the size of project you think you’ll need to handle (this can always be repurposed as a combined second display and render node if you decide to upgrade to a Mac Pro further down the line). For PC workstations, we’d recommend going no lower than an HP Z620 (ideally a Z820) with as many cores and as much RAM as you can pack in, as both will help you complete projects in the fastest possible time.

Want to know more? Give us a call on 03332 409 306 or email Autodesk@Jigsaw24.com. For all the latest news, follow us on Twitter or ‘Like’ us on Facebook.

NVIDIA’s Quadro K6000 GPU unveiled at SIGGRAPH 2013

NVIDIA have once again stolen everyone’s thunder at SIGGRAPH 2013 by releasing the Quadro K6000 GPU, apparently “the fastest and most capable GPU ever built”, as well as a new line of GPUs designed specifically for mobile workstations. Read on for the full press release, or take a look at the official NVIDIA Quadro K6000 spec sheet.

ANAHEIM, Calif.— SIGGRAPH — July 23, 2013— NVIDIA today unveiled the visual computing industry’s new flagship technology – the NVIDIA Quadro K6000 GPU, the fastest and most capable GPU ever built.

NVIDIA today also launched a new line of professional graphics GPUs for mobile workstations, delivering the highest levels of performance and graphics memory ever available on mobile platforms.

The Quadro K6000 GPU delivers five times higher compute performance and nearly double the graphics capability of its predecessor, the NVIDIA Quadro 6000 GPU, and features the world’s largest and fastest graphics memory.

Combining breakthrough performance and advanced capabilities in a power-efficient design, the Quadro K6000 GPU enables leading organisations such as Pixar, Nissan, Apache Corporation and the Weather Channel’s WSI division to tackle visualisation and analysis workloads of unprecedented size and scope.

Animation and Visual Effects – Pixar

“The Kepler features are key to our next generation of real-time lighting and geometry handling. We were thrilled to get an early look at the K6000. The added memory and other features allow our artists to see much more of the final scene in a real-time, interactive form, and allow many more artistic iterations.” – Guido Quaroni, Pixar vice president of Software R&D

Product Styling – Nissan

“With Quadro K6000’s 12 GB of memory, I am now able to load nearly complete vehicle models into RTT Deltagen and have stunning photorealism almost instantly. Instead of spending significant time simplifying the models to fit into previous hardware, we can now spend more time reviewing and iterating designs up front which helps avoid costly changes to tooling.” – Dennis Malone, associate engineer, Nissan North America

Energy Exploration – Apache

“Compared to the Quadro K5000, the Quadro K6000 tripled the performance when running jobs on Terraspark’s InsightEarth application. With jobs running in mere minutes, we can run more simulations and get better insight into where to drill. In this business, drilling in the wrong place is a multi-million dollar mistake, and the Quadro K6000 gives us the edge to make better decisions.” – Klaas Koster, manager, seismic interpretation, Apache Corporation

Unprecedented Performance

The Quadro K6000 GPU is based on the NVIDIA Kepler™ architecture – the world’s fastest, most efficient GPU architecture. Key performance features and capabilities include:

– 12GB ultra-fast GDDR5 graphics memory lets designers and animators model and render characters and scenes at unprecedented scale, complexity and richness

– 2,880 streaming multiprocessor (SMX) cores deliver faster visualisation and compute horsepower than previous-generation products

– Supports four simultaneous displays and up to 4K resolution with DisplayPort™ 1.2

– Ultra-low latency video I/O and support for large-scale visualisations

“The NVIDIA Quadro K6000 GPU is the highest performance, most capable GPU ever created for the professional graphics market,” said Ed Ellett, senior vice president, Professional Solutions Group at NVIDIA. “It will significantly change the game for animators, digital designers and engineers, enabling them to make the impossible possible.”

New Mobile Workstation GPUs

NVIDIA today also revealed a new flagship professional graphics GPU for workstation notebooks, the NVIDIA Quadro K5100M GPU. Delivering the highest levels of performance and graphics memory available on notebook platforms, the Quadro K5100M anchors a new line of workstation notebook graphics that includes the Quadro K4100M, K3100M, K2100M, K1100M, K610M, and K510M GPUs.

Quadro GPUs are designed, built and tested by NVIDIA to provide the superb reliability, compatibility and dependability that professionals require. They are certified and recommended by more than 150 leading software application providers worldwide.

Availability

The NVIDIA Quadro K6000 will be available beginning this fall from HP, Dell, Lenovo and other major workstation providers; from systems integrators, including BOXX Technologies and Supermicro; and from authorised distribution partners, including PNY Technologies in North America and Europe, ELSA and Ryoyo in Japan, and Leadtek in Asia Pacific.

The new Quadro mobile workstation graphics product line will also be available beginning this fall from major mobile workstation OEMs.

Want to know more about the latest from NVIDIA? Give us a call on 03332 409 306 or email sales@Jigsaw24.com. For all the latest news, follow us on Twitter or ‘Like’ us on Facebook.

NVIDIA’s Tesla K20 and Quadro K5000 to power Maximus 2.0

NVIDIA have announced that the second generation of their innovative Maximus platform will be up and running in December. Powered by NVIDIA’s new Kepler-based GPUs, the Quadro K5000 and the Tesla K20, Maximus promises faster, better graphics performance for anyone from mograph artists to prospective oil barons.

How does Maximus work?

Maximus technology allows a Tesla and a Quadro card to work in parallel, with one card crunching simulation and render numbers while the other drives the graphics, dividing the workload between the two cards and your CPU and resulting in faster graphics performance.

The new GPUs

Over to Jigsaw24 3D consultant and resident Maximus expert, Ben Kitching, to explain why we should be getting excited about the Tesla K20 and the Quadro K5000. “The new Kepler-based cards have up to 3000+ CUDA cores – that’s six times as many as the previous high-end cards like the Quadro 6000 and Tesla C2075. The new cards also have SMX and dynamic parallelism, two new technologies that allow them to make more efficient use of those cores,” he explains.

“On top of this, there is the pioneering GPU virtualisation, which brings the long-awaited dream of remote working to those needing to use high-performance apps like Autodesk Maya or the Adobe suites. Imagine being able to remote into your high-performance workstation from a MacBook Air and access your production data at full speed and quality, as if you were sat in front of it.”

Other key features of the Quadro K5000 include:

  • Bindless Textures, which give users the ability to reference over 1 million textures directly in memory while reducing CPU overhead.
  • FXAA/TXAA film-style anti-aliasing technologies for outstanding image quality.
  • Increased frame buffer capacity of 4GB, plus a next-generation PCIe 3.0 bus interconnect that accelerates data movement by 2x compared with PCIe 2.0.
  • An all-new display engine capable of driving up to four displays simultaneously with a single K5000.
  • DisplayPort 1.2 support for resolutions up to 3840×2160 at 60Hz.

The Tesla K20 is no slouch either, adding SMX streaming technology that promises to deliver up to three times as much performance per watt, dynamic parallelism and Hyper-Q technology (we should probably point out that all these stats came from NVIDIA, and we haven’t been able to verify them independently).

When can I have one?

The Quadro K5000 will be available as a standalone desktop GPU from October (we’re trying to wrangle a demo unit before then, so keep your eyes peeled for benchmarks). The Tesla K20 and qualified Maximus-capable workstations are set to follow in December.

Want to know more? Give us a call on 03332 409 306 or email sales@Jigsaw24.com. For the latest news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page. Visit our website at Jigsaw24.com.

The future of rendering

A little background…

Since the advent of 3D rendering, the CPU has been responsible for rendering out images and animations from 3D and video applications, with graphics APIs such as OpenGL or DirectX used to communicate with the graphics card for display.

With only a few exceptions, render engines use ‘bucket rendering’. This splits the image into buckets (squares), which are processed on the CPU until the full image is complete. These buckets can be distributed over a render farm to speed up rendering but, if you’re working on a single workstation, the total number of active buckets depends on the number of CPU cores you have available.

An example of an 8-core CPU rendering 8 buckets simultaneously.
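
To make that concrete, here’s a minimal sketch of bucket rendering in Python. The render_bucket() function is a hypothetical stand-in for a real engine’s per-tile work, and the size of the process pool plays the role of the core count – with eight workers, at most eight buckets are in flight at once.

```python
# A minimal sketch of bucket rendering: split the frame into square
# buckets and render them in parallel, one worker per CPU core.
from multiprocessing import Pool, cpu_count

WIDTH, HEIGHT, BUCKET = 1920, 1080, 64

def render_bucket(origin):
    x0, y0 = origin
    # A real engine would trace every pixel in this tile; we just
    # return the tile's bounds as a placeholder result.
    return (x0, y0, min(x0 + BUCKET, WIDTH), min(y0 + BUCKET, HEIGHT))

if __name__ == "__main__":
    buckets = [(x, y) for y in range(0, HEIGHT, BUCKET)
                      for x in range(0, WIDTH, BUCKET)]
    # The pool size caps how many buckets are active simultaneously,
    # exactly as the core count does in a real CPU renderer.
    with Pool(processes=cpu_count()) as pool:
        tiles = pool.map(render_bucket, buckets)
    print(f"Rendered {len(tiles)} buckets on {cpu_count()} cores")
```

Swap the pool of CPU workers for thousands of GPU threads and you have the essence of the GPU rendering approach discussed below.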

Recent developments

Spurred on by this limitation, manufacturers of both hardware and software have been working towards a method that offloads the job of rendering from the CPU to the GPU. Although a number of render engines are currently in development, almost all of them utilise the CUDA technology of NVIDIA graphics cards.

As mentioned earlier, the total number of active buckets is determined by the number of cores available. If we consider an entry-level NVIDIA GeForce graphics card such as the 8800GT, which has 112 cores, you start to see how GPU rendering can and will have a massive impact on render times, as far more buckets can be rendered simultaneously.

As you would expect, the better the graphics card, the faster the render. This means that high-end GeForce and Quadro graphics cards could render up to 60 times faster than a standard quad-core CPU. This speeds up the artist’s workflow by letting them see the immediate effect of any alteration to the scene – whether that’s changing a light or material parameter, or introducing a new object. It also allows the artist to pan and zoom the camera around the scene without waiting for the frame buffer to re-render. Instead, the viewport follows the user’s actions while they work on the scene and automatically (and progressively) generates a photorealistic preview.
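
That progressive behaviour is simple at heart: each pass adds one noisy sample per pixel and the viewport displays the running average, so the preview refines the longer the camera stays still. Here’s a minimal sketch, where sample_scene() is a hypothetical stand-in for a single GPU sampling pass:

```python
# A minimal sketch of progressive preview rendering.
import numpy as np

def sample_scene(shape, rng):
    # Stand-in for one noisy GPU sampling pass over the frame.
    return rng.random(shape, dtype=np.float32)

rng = np.random.default_rng(0)
accum = np.zeros((270, 480), dtype=np.float32)
for n in range(1, 65):
    accum += sample_scene(accum.shape, rng)
    preview = accum / n  # running average - this is what the viewport shows
    # Noise falls off roughly as 1/sqrt(n), so the preview visibly refines
    # each pass; moving the camera would simply reset accum and n.
print(f"Preview averaged over {n} passes, mean value {preview.mean():.3f}")
```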

Whilst NVIDIA’s CUDA engine is clearly the leader in this field – effectively locking all GPU processing tasks to NVIDIA hardware – there are others on the horizon. Apple have been working with the Khronos Group on OpenCL, a standards-based method for general purpose GPU computing.

By democratising GPU processing, OpenCL will let any program on Macintosh, Windows and Linux platforms compute 3D data on any graphics card, regardless of manufacturer. Not only is OpenCL a genuine competitor, it is likely to supersede CUDA as the API of choice, allowing programs such as Maxon’s Cinema 4D and Autodesk’s Maya to render on the GPU.
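
To give a flavour of that vendor neutrality, here’s a minimal sketch using the pyopencl bindings (our choice of binding is an assumption – any OpenCL wrapper would do). The kernel source is plain OpenCL C and runs unchanged on NVIDIA, AMD or Intel hardware:

```python
# A minimal OpenCL sketch: square an array on whatever OpenCL device
# the runtime exposes, regardless of the card's manufacturer.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()  # picks any available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel source is plain OpenCL C - identical on every vendor's card.
prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] * a[i];
}
""").build()

prg.square(queue, a.shape, None, a_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a * a)
```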

Another worthy mention is Microsoft DirectX 11’s compute shader feature, which is shipping with Windows 7. This feature enables post-processing effects, such as depth of field and motion blur, to be carried out by the GPU. Although locked to the Windows platform, it can be used on both AMD and NVIDIA graphics cards.

Click here to see our selection of graphics cards or, for more information, please contact us on sales@Jigsaw24.com or call 03332 409 309.

Rendering software

In order to use GPU cores for rendering, we have had to wait for software companies to catch up with the developments at NVIDIA. There are two clear leaders in the race to get a fully supported GPU renderer on the shelves: Mental Ray’s iRay and Chaos Group’s V-Ray RT.

iRay will hopefully be available to all customers who upgrade to future releases of Mental Ray, either as a standalone renderer or from within applications that include the software (such as Autodesk 3ds Max, Autodesk Maya and Autodesk Softimage).

Although impressive, indoor scenes or scenes with a large amount of bounced light seem to take significantly longer than other images to fully render. Even after a few seconds, the image looks like poor reception on a television and not at all production quality. These results were obtained using four GPUs; we don’t know exactly which, but most likely a Tesla S1070 (a platform iRay was designed to run on).

Incredibly, those pioneers over at Mental Images have also found the time to develop mental mill and, in conjunction with NVIDIA, the RealityServer. mental mill enables artists to create shaders and graphs for GPU and CPU rendering through an intuitive GUI with realtime visual feedback. The NVIDIA RealityServer delivers the power of thousands of cores that allow for realtime rendering over the web, perfect for product designers and architects who can easily visualise their clients’ projects with a laptop or even an iPhone!

The NVIDIA RealityServer platform is a powerful combination of NVIDIA Tesla GPUs, RealityServer software and iRay. Later, we will consider the NVIDIA Tesla GPUs in more depth and explore how they too are shaping the future of GPU rendering.

The other viable option for realtime rendering is V-Ray RT. Whilst V-Ray RT is currently CPU-based, Chaos Group have already developed it into a fully interactive GPU-accelerated renderer, which will hopefully be available as a service pack upgrade this year. A beta version was showcased last year at the industry event SIGGRAPH and was considered the major highlight of the show.

V-Ray has long been at the forefront of photorealistic rendering and is well known as one of the fastest and easiest to use. In contrast to the iRay demo, it appears that V-Ray RT will yield faster results whilst using mid- to high-range graphics cards. In the video, they use an NVIDIA GeForce GTX 285, which is available for just £399 ex VAT. Once V-Ray RT goes fully GPU-based, users should expect renderings to be completed 10 to 20 times faster than with the CPU version.

So which is better?

iRay

Pros

  • Available as a future release of Mental Ray
  • Web interface
  • mental mill

Cons

  • Very expensive hardware
  • Slower than V-Ray RT

V-Ray RT

Pros

  • Faster than iRay
  • Cheaper hardware

Cons

  • No web interface
  • No definite release date
  • CPU version currently does not support meshes

If money is no object and you require a method of interacting with your 3D scene over the web, perhaps whilst in front of clients, then iRay is for you.

However, if you are prepared to wait a bit for its release, GPU-based V-Ray RT will offer you quicker and cheaper results and will fit seamlessly into current workflow methods. It is worth mentioning that both solutions are scalable, meaning that you can add multiple graphics cards to a workstation or distribute the task over a network. Be aware that it is almost certain that each graphics card will need a 16x PCIe 2.0 slot to work fully, so check your motherboard before you upgrade.

The only other GPU rendering solution worth mentioning is OctaneRender, developed by Refractive Software. A limited feature demo is available for the Windows platform.

OctaneRender isn’t locked to a particular program: you simply import a Wavefront OBJ file, then start applying shaders and adding lights to the scene whilst viewing your changes in realtime. The upside of this is that almost all 3D applications can export to it, but it does require a significant change in current workflow techniques, and it is unlikely to surpass the complex and now standard practices of Mental Ray and V-Ray.

NVIDIA Tesla technology

Right, you’ve heard us mention the Tesla a few times already, so it’s about time we explained why it’s at the heart of this GPU revolution.

The Tesla S1070 is the world’s first 4 teraflop processor. This is achieved by using four 1 teraflop GPUs, each with 240 processor cores, giving a total of 960 cores – all in 1U of rack space! This number of cores will reduce render times from hours to minutes or even seconds.

Needless to say, there is also a workstation equivalent. The C1060 takes one of the 4GB GDDR3 1 teraflop GPUs used in the S1070 and puts it on a regular PCIe 2.0 bus, so it can be dropped straight into existing workstations.

This breakthrough finally provides an affordable solution for individuals and small companies, who can now have the processing power of 60 quad-core processors (which would previously have taken up the space of a small room!) sitting neatly alongside a regular graphics card used for video display.

So, together with a render engine such as V-Ray RT or iRay and a CUDA-enabled graphics card, individuals will soon have access to realtime photorealistic rendering power at a fraction of the cost of a render farm. I’m sure you’ll agree this is a massive, game-changing development.

Back in the real world

Aside from all the facts and demos, if you ever needed proof that the burden of rendering has fallen on the shoulders of the GPU, then consider the hugely successful and brilliant film ‘Avatar’.

At last, film and special effects companies such as WETA now have the necessary hardware to produce stunningly beautiful and lushly detailed scenes with an extensive cast of virtual characters set in computer-generated environments.

Of course, this has been done before; in fact, the last breakthrough in this field was made on another of WETA’s creations, ‘Lord of the Rings’. However, those 3D effects were merged into real-world footage, whereas ‘Avatar’ is total fantasy: everything exists only in a 3D virtual model.

WETA were required, for the first time in the history of CG visual effects, to model, animate, shade, light and render billions rather than millions of polygons in a single scene. The computational power required to process the ‘Avatar’ shots was higher than anything they had attempted previously, so they turned to NVIDIA, masters of the GPU.

Step forward the Tesla S1070, which, along with new custom-designed software, PandaRay, allowed WETA to process their shots 25 times faster than any CPU-based server.

One scene in particular exemplifies the advantages of PandaRay and GPU-based servers. If you’ve got a copy, pay close attention to the shots where a huge flock of purple creatures and enemy helicopters fly amongst tree-covered mountains. Those shots were pre-computed in a day and a half, where previously it would have taken a week with traditional CPU-based servers.

The increased rendering speed allowed for minute detail in the vegetation and near-perfect colour separation between distances, creating a more beautiful shot.

So, as you can see, GPU computing is both the present and the future of 3D rendering. If you would like any more information regarding CUDA-enabled graphics cards and servers, as well as rendering programs, please don’t hesitate to get in touch.

To find out more, get in touch with us on 03332 403 309 or email sales@Jigsaw24.com.