It’s time to get HDR-ready

Remember 2010, when we were all very excited about shooting in native 3D? Well, I think we can all agree that that trend is now dying a death, ceding its Cool Trend crown to High Dynamic Range (HDR) imagery. However, HDR is different. Rather than a hyped-up flash in the pan, it’s actually offering something filmmakers have been clamouring for – a return to filmic production values, without losing the agility of digital shooting.

So what exactly do you need to know before you wade into the world of HDR content production? Can you shoot it with your current kit? And what does it really mean for your images? We asked our production team to give us the lay of the land.

First, for the newcomers: what is HDR?

So, the human eye has a functional range of roughly 100,000 nits between the darkest and brightest light it can perceive detail in, and a modern camera has a similar range. Until now, however, image processing, transmission and display technologies have reduced this range, meaning bright and dark objects that were perfectly visible to the naked eye appeared clipped or burned in a captured image. You could expose for the highlights and lose detail in the shadows, or expose for the shadows and lose detail in the highlights, but there was no way to capture detail in both.

An HDR workflow preserves this full range from capture through transmission, all the way to final display, so your final image has the full dynamic range of the human eye, and therefore appears much more realistic and immersive, as shown in our illustration (alas, this will only work if you’re viewing this on an HDR-ready display). You’ll see more vivid colours, and more detail in shadows.

[Illustration: HDR]

But to give some sense of the scale of this change, the brightest possible pixel on an HDR display is about 40 times brighter than it used to be on an SDR display, and when you’re working with an HDR image in post, you can tweak brightness levels pixel by pixel.
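To put rough numbers on that claim (taking the usual 100 nits as SDR reference white, an assumption on our part), here’s the sum in Python:

```python
import math

# Rough arithmetic behind the "about 40 times brighter" claim, assuming
# SDR reference white sits at around 100 nits.
sdr_white = 100                       # nits
hdr_peak = 40 * sdr_white             # the article's "about 40 times"
extra_stops = math.log2(hdr_peak / sdr_white)

print(f"HDR peak ≈ {hdr_peak} nits")                          # ≈ 4000 nits
print(f"Extra highlight headroom ≈ {extra_stops:.1f} stops")  # ≈ 5.3 stops
```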

However, to get the full effect of HDR, you need more than a camera with a lot of latitude. For example, without support for a wide colour gamut, you won’t see as much colour variation in the newly visible section of your image. Support for high frame rates is also recommended, and you’ll need support for 10- or 12-bit capture too, depending on which version of HDR you’re working with.
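A quick way to see why those extra bits matter: the same set of code values has to be stretched over a far wider brightness range in HDR, so 8 bits runs out of steps and starts to band. A throwaway Python count:

```python
# Code values available per channel at common bit depths. HDR workflows
# lean on 10- or 12-bit signals because 256 steps can't cover the wider
# brightness range without visible banding.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits:>5} code values per channel")
```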

There are competing versions of HDR?

Yep. The current frontrunner is HDR10, as it’s been picked up by various gaming platforms. Also popular is the more detailed Dolby Vision. Dolby Vision is ‘scene referred’: its metadata varies from scene to scene, working with your display to adjust each image. By contrast, HDR10’s metadata is static, with one setting applied across the whole programme.

Most consumer displays rely on Hybrid Log Gamma (HLG), a transfer function that combines standard gamma with a log curve to create (wait for it) a hybrid, extending traditional gamma beyond the standard curve. Any TV can display HLG, because the lower part of the signal follows the standard gamma curve. TVs brighter than 100 nits (i.e. most LCDs) will then display progressively more highlight information until they reach their maximum brightness, at which point the image will clip.
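For the curious, the HLG curve itself is published in ITU-R BT.2100 and is simple enough to sketch in a few lines of Python. This is just an illustration of the camera-side encoding (the OETF), with scene light and signal both normalised to 0 to 1:

```python
import math

# BT.2100 Hybrid Log-Gamma OETF constants.
A = 0.17883277
B = 0.28466892          # 1 - 4*A
C = 0.55991073          # 0.5 - A*ln(4*A)

def hlg_oetf(scene_light):
    """Map normalised scene light (0..1) to the HLG signal (0..1).

    Below 1/12 of peak the curve is a square root (conventional gamma
    territory); above that, highlights are compressed with a log curve.
    """
    if scene_light <= 1.0 / 12.0:
        return math.sqrt(3.0 * scene_light)
    return A * math.log(12.0 * scene_light - B) + C

for e in (0.01, 1.0 / 12.0, 0.18, 0.5, 1.0):
    print(f"scene light {e:6.3f} -> signal {hlg_oetf(e):.3f}")
```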

Which of these is the one my smartphone camera can do?

Neither. The ‘HDR’ advertised on smartphones is really HDR-I (HDR imaging), which merges bracketed exposures and uses tone mapping to give the impression that you’re seeing a higher dynamic range than your screen can actually show. This is not the same as the true HDR you’ll be capturing on a pro camera for a production workflow.
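As a hand-wavy illustration of the difference, here’s the sort of tone mapping trick involved, using the classic Reinhard operator (our choice of example, not necessarily what any particular phone uses). It squashes an unbounded scene luminance into the 0 to 1 range an ordinary display can show, rather than preserving the real range end to end:

```python
# Reinhard global tone mapping: bright scene values are compressed so
# they fit a standard display, which is why the result merely *looks*
# high dynamic range.
def reinhard(luminance):
    return luminance / (1.0 + luminance)

for scene in (0.1, 1.0, 4.0, 16.0):   # relative scene luminance
    print(f"scene {scene:5.1f} -> display {reinhard(scene):.2f}")
```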

So what qualifies a camera as being capable of shooting real HDR?

There are several features that your camera needs to qualify as HDR-capable, but the main ones are:

– 10-bit capture to Log or RAW. As a minimum, your camera needs to support ProRes or DNx at 10-bit 4:2:2, but don’t feel like you have to stop there. The more bits the better, really.

– Plenty of latitude. Canon’s C300 MkII is being touted as having 15 stops, which is ideal, but the Sony FS7 and FS5 both have 14, and if you have a C500 in your arsenal, that still has a perfectly respectable 12 stops of dynamic range.

– S-Log3/C-Log3 capture capability; if you are shooting RAW and recording to Log over SDI, this needs to be 10-bit. 12-bit CinemaDNG capture is also good.

– Rec.2020 gamut support (there’s a quick sense-check of how much bigger that gamut is in the sketch below).
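On that last point, here’s a rough sense of how much bigger Rec.2020 is than good old Rec.709: plotting the published primaries on the CIE 1931 xy diagram and comparing the triangle areas (a simplification, but a useful one) gives roughly a 1.9x difference.

```python
# Compare the Rec.709 and Rec.2020 gamut triangles on the CIE 1931
# xy chromaticity diagram using their published R, G, B primaries.
def triangle_area(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

rec709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

ratio = triangle_area(rec2020) / triangle_area(rec709)
print(f"Rec.2020 covers about {ratio:.1f}x the xy area of Rec.709")  # ≈ 1.9x
```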

Your existing camera may already be able to record S-Log3 with the help of an external recorder. (The Atomos Flame and Inferno series are a good bet for this, as they incorporate high quality HDR-ready monitors so you can see your footage accurately on set.)

Which cameras are HDR-ready?

Several such cameras are on, or at least making their way to, the market, but as we mentioned earlier, our favourites among the current crop are Canon’s C500 and C300 MkII, Sony’s FS7 and FS5, and the Panasonic GH4 and GH5. All of these cameras output a RAW signal that can be recorded as ProRes or DNx with the help of an external recorder, and all offer Log gamma encoding.

Apart from a camera and maybe an external recorder, what else will I need?

In order to see what you’re doing with your HDR images in post, you will need a monitor that can support HDR. Currently, the simplest and most affordable are the Atomos Flame and Inferno ranges, which offer on-camera HDR monitoring combined with the ability to play back and edit your footage at full res, making a collaborative HDR workflow possible for everyone on set. If you’ve already invested in an Atomos Ninja Assassin, Blade HD, Flame, Shogun or Shogun Inferno, HDR support is available as a free upgrade, but as their screens only hit 500 nits, you won’t be able to see more than seven or eight stops of dynamic range; the newer monitors are 1500 nits and showcase 10 stops.

When it comes to post-production, we can’t in good conscience recommend grading on anything less than DaVinci Resolve. Its ability to power through high resolution, high frame rate files without slowing down or falling over is going to be essential if you’re tackling HDR, and it features the industry’s most advanced and sensitive HDR toolkit. The ability to grade a project for multiple colour spaces at the same time is also going to come in handy until you’re delivering HDR 4K all the time.

Will my current infrastructure be OK?

To be honest, that depends on how much 4K work you’ve done so far, and how many changes you’ve made to accommodate it. A 10-bit workflow, with its attendant file sizes and frame rates, means you’re going to want to be working on a 10Gb Ethernet network rather than standard 1GbE.
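If you want to sanity-check that for yourself, the worst-case (uncompressed) arithmetic looks like this. Real-world codecs such as ProRes come in well under these figures, but the gap between 1GbE and what UHD material demands is still obvious:

```python
# Uncompressed data rate for UHD 10-bit 4:2:2 at 25fps. 4:2:2 sampling
# averages 20 bits per pixel (10 bits luma plus 10 bits shared chroma).
width, height = 3840, 2160
bits_per_pixel = 20
fps = 25

gbps = width * height * bits_per_pixel * fps / 1e9
print(f"Uncompressed UHD 10-bit 4:2:2 at {fps}fps ≈ {gbps:.1f} Gbit/s")
# ≈ 4.1 Gbit/s: several times what a 1GbE link can carry, comfortable on 10GbE.
```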

You’ll also want to make sure you have plenty of high capacity storage, both at your facility and on set. One of the reasons we’re so keen on Atomos devices is that they’ve teamed up with G-Technology to develop the Master Caddy range. These high capacity SSDs slot into any compatible Atomos recorder to capture your footage, then can be removed and inserted into an adaptor that makes them compatible with G-DOCK and ev series storage from G-Technology, so there’s no need to invest in proprietary recording media that’ll only work with one of your cameras (you’ll get better speeds and capacities this way, too).

Want to know more? Give us a call on 03332 409 306 or email broadcast@Jigsaw24.com. For all the latest news, follow @Jigsaw24Video on Twitter or ‘Like’ us on Facebook.

NVIDIA’s 3D Vision glasses

We recently managed to get our hands on a couple of pairs of these glasses and the associated hardware to use in our demonstrations at BVE. Having wanted to try them for quite some time, I was excited to see them in action.

Seeing an image in 3D on screen requires each of your eyes to see a slightly different image, and there are several ways to achieve this. Most systems use passive glasses; these take the form of either coloured anaglyph glasses (which require no special display technology) or clear polarised glasses (which require a matching polarised display).

Regardless of the technology used, the theory is the same: the glasses and display work together to ensure that your left eye only sees the left image and your right eye only sees the right image. Your brain does the rest, fusing these two separate images into a 3D picture.

The NVIDIA glasses work on the same theory but achieve it in a slightly different way. They are based on active technology and are powered by a small battery. The glasses work wirelessly, although they are charged over USB. Each lens contains a liquid crystal shutter similar to the displays used in old calculators, and this shutter switches the lens from black to clear at a rate of 60Hz (60 times a second). While this is happening, the monitor flicks between the left and right images at 120Hz. The two are synced via an infrared emitter to ensure that when the left image is being shown the right eye is blanked out, and vice versa.

To make all of this work, you will need the following equipment:

  • The NVIDIA 3D Vision starter kit, containing a pair of glasses and the infrared sync emitter.
  • A compatible NVIDIA graphics card with a DIN connector for the sync emitter. A Quadro is needed for pro applications such as Maya; a GeForce is needed for games.
  • A display that is capable of displaying an image at 120Hz – the Samsung SyncMaster range is a good place to start.
  • Software that is capable of using active stereo. In games, this is taken care of by the NVIDIA driver. As for pro apps, any application that supports quad-buffered OpenGL will work (see the sketch below).
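To make that last point a little more concrete, here’s a minimal sketch of what quad-buffered stereo rendering looks like using PyOpenGL and GLUT. It will only open a stereo context on a card and driver that expose stereo pixel formats (a Quadro with the 3D Vision kit, in this case), and the eye offset value is purely illustrative:

```python
# A minimal sketch of quad-buffered stereo with PyOpenGL and GLUT. Only
# runs where the driver exposes stereo pixel formats; values are illustrative.
from OpenGL.GL import *
from OpenGL.GLU import gluPerspective
from OpenGL.GLUT import *

EYE_OFFSET = 0.03  # half the interocular distance, in scene units

def draw_eye(buffer, offset):
    glDrawBuffer(buffer)                      # GL_BACK_LEFT or GL_BACK_RIGHT
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45.0, 4.0 / 3.0, 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glTranslatef(offset, 0.0, -5.0)           # offset the scene to simulate this eye's viewpoint
    glutWireTeapot(1.0)

def display():
    draw_eye(GL_BACK_LEFT, EYE_OFFSET)        # left image to the left back buffer
    draw_eye(GL_BACK_RIGHT, -EYE_OFFSET)      # right image to the right back buffer
    glutSwapBuffers()                         # the driver presents both in sync with the glasses

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
glutCreateWindow(b"Quad-buffered stereo sketch")
glEnable(GL_DEPTH_TEST)
glutDisplayFunc(display)
glutMainLoop()
```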

So, enough of the technical stuff – what are these glasses like to use? I was lucky enough to test them extensively, using them both for gaming and within Autodesk’s Maya, and I was very impressed. I had expected to see some flickering of the picture as it switched between the left and right images but, with each eye being refreshed at nearly three times the frame rate required for smooth viewing, the picture was extremely smooth. The glasses do make the screen appear a little dimmer, but this is easily fixed by turning up the brightness.

Getting the glasses to work with my professional applications was also smooth, and just required enabling stereoscopic support in the NVIDIA control panel. It is even possible to display 3D output from two different programs at the same time.

In summary, these glasses are ideal if you want to preview and edit stereoscopic content in programs like Maya, or view stereoscopic movies. Imagine being able to show your 3D film or game in full colour, progressive 3D, or showing off your product or building designs to clients in full 3D. With most major modelling packages, including 3ds Max Design, Maya and CINEMA 4D, at least able to create stereoscopic content even if you can’t directly edit in 3D, these glasses offer a great way to present your work in an immersive way. Content can be exported from this software and played back using NVIDIA’s stereoscopic player, and you can even use them for a bit of gaming after work!

If you’re not sure about the best way to create or view stereoscopic content, give us a call on 03332 409 306 or email sales@Jigsaw24.com.

Experimenting with stereoscopy with Maya

As a long-time CINEMA 4D user, I was a little daunted at the prospect of learning to use Maya. But, as Autodesk’s application has built-in support for stereoscopic rendering and live stereoscopic previews (something only available in CINEMA 4D through plug-ins), my interest was piqued.

Creating stereoscopic content

All 3D software is technically capable of creating stereoscopic content: you just need two virtual cameras (one to represent each eye), and then you finish the resulting content in the same way you would live action stereoscopic footage. This approach creates a few problems, though. For one thing, animating two cameras as one to maintain the 3D effect often requires complex scripting to keep the cameras aligned and to achieve comfortable, working 3D. This is compounded by the fact that most software has no provision for previewing your work in stereoscopic 3D. A company called SVI does make a plug-in that allows you to edit stereoscopic work within CINEMA 4D but, as Maya has this functionality built in, I wanted to test it out.

Working in Maya

I decided very early on that learning to model within the application would take far too long so, after obtaining some demo content from my good friends at Autodesk, I set about learning the stereoscopic aspects of Maya.

[Image: Maya camera attributes]

The good news is that everything is very well integrated into Maya. Autodesk have used the built-in scripting language to create a range of stereoscopic camera rigs for you to use, and made it very simple to control all of the important stereoscopic parameters (see left). The rigs range from a simple three-camera rig (two of these cameras represent the viewer’s eyes, with one in the centre for framing your shots) to more complex nine-camera stereo rigs. These more complicated rigs are useful for scenes with a lot of depth, such as outdoor scenes, where setting the stereo parameters for objects in the foreground will often break the stereo effect in the background, or vice versa. These rigs, combined with Maya’s render layers, allow you to use different stereo parameters on different objects in your scene, making it a very flexible solution.

When using these cameras, Maya can show a 3D preview directly in the viewport. It supports anaglyph display (using inexpensive tinted glasses) for those without special displays, as well as options for more exotic hardware, including horizontal interlaced, active shutter and checkerboard formats. This allows Maya to display an image on almost any 3D display out there. It’s worth bearing in mind that some of these displays require additional hardware, and you will certainly need a powerful graphics card to display a usable stereoscopic preview. We recommend NVIDIA’s Quadro range of graphics cards and can advise you on a 3D display for a range of budgets.

The camera rigs have several options for controlling the 3D effect. You can control the inter-ocular distance (the separation between the cameras) and the zero parallax plane, and there are options to mimic physical 3D rigs (such as parallel or off-axis).
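The geometry behind those parameters is worth a quick sketch. Assuming a simple off-axis rig, the on-screen parallax of an object depends only on the interaxial separation and how far the object sits from the zero parallax plane; the variable names below are ours, not Maya attribute names:

```python
# A small geometric sketch of how interaxial separation and the zero
# parallax plane interact in an off-axis stereo rig (illustrative only).

def screen_parallax(interaxial, zero_parallax_dist, object_dist):
    """Horizontal parallax (in scene units, measured at the zero parallax
    plane) for an object at object_dist from the camera rig.

    Zero at the zero parallax plane (object sits on the screen),
    negative in front of it (pops out), positive behind it (recedes).
    """
    return interaxial * (1.0 - zero_parallax_dist / object_dist)

interaxial = 6.5        # cm, roughly human eye separation
zero_parallax = 300.0   # cm, so objects 3m away sit on the screen plane
for d in (150.0, 300.0, 600.0, 3000.0):
    p = screen_parallax(interaxial, zero_parallax, d)
    print(f"object at {d:>6.0f} cm -> parallax {p:+.2f} cm")
```

Notice how parallax creeps towards the full interaxial separation for very distant objects, which is exactly why settings tuned for the foreground can break the stereo effect in deep backgrounds.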

[Image: Stereo volume in Maya]

This image illustrates the safe stereo volume (in blue) and the zero parallax plane (in red).

Maya will also show a visual representation of the zero parallax plane along with a comfortable viewing volume (think of this as a three-dimensional title-safe guide). These features take a lot of the guesswork out of composing 3D images, and give you all the help you need to create comfortable 3D scenes.
Export options are also plentiful; Maya is able to directly export an anaglyph image (for posting to the web or printing out) or separate left and right streams (for post-processing or use with stereoscopic players).
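If you’ve rendered separate left and right streams, making a quick anaglyph for review is trivial outside Maya too. A minimal sketch with Pillow and NumPy (the file names are placeholders):

```python
# Build a red/cyan anaglyph from separately rendered left and right frames.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("shot_left.0001.png").convert("RGB"))
right = np.asarray(Image.open("shot_right.0001.png").convert("RGB"))

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]     # red channel from the left eye
anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right eye

Image.fromarray(anaglyph).save("shot_anaglyph.0001.png")
```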

In summary, although these options are available in other software through plug-ins or scripting, the fact that they are an integral part of Maya helps to make them a great solution for producing stereoscopic CG content. Being able to preview your work in realtime will also save you a huge amount of time.

To find out more about creating stereoscopic content in Maya (or CINEMA 4D), get in touch with us on 03332 409 309 or email sales@jigsaw24.com.

3D rendering – What’s more important: Memory, processor speed or access to storage?

At Jigsaw24 we’re frequently asked by customers, "When it comes to 3D rendering, what’s the most important facet of a render farm: memory, processor speed or access to storage?" Unfortunately, this question doesn’t have a simple answer, so the best way to explain is with an analogy:

In a restaurant’s takeaway delivery service, what is the key factor influencing our appraisal of the service? How quickly the chef prepares and cooks the food? The efficiency of the person taking orders over the phone? The time it takes the guy on his moped to deliver the food? How about the quality of the food?!

Obviously, there is no one clear answer. We haven’t considered the different types of food on offer, the distance it needs to be delivered, the number of inbound order calls, or even if the guy taking the calls has to write them by hand or has an automated electronic system. We don’t know how many grills the restaurant has, how long it takes to cook each dish, or if there is a pattern for ordering. It becomes very clear that the answer is usually “It depends,” and the conditions that it depends on are nothing if not dynamic.

To relate this to 3D rendering, simply replace “chef” with “processor”, the person taking orders with “memory” and the delivery guy with “shared storage”. Now you have a render farm scenario.

Just like the example of the takeaway restaurant, in order to get a specific answer to the question about your 3D render farm, your query itself needs to be much more specific.

The question should be "What’s the most important element in a render farm running ABC 3D application?", not simply "What’s the most important element in a 3D render farm?" After all, one chef could churn out edible dishes every few minutes while another could be doing no better than one every 10 minutes because he works more methodically and presents dishes of a higher standard.

The answer lies in recognising and measuring what the key bottlenecks are when performing your job:

–   How long does it take to render a given job unit (e.g. 10 frames) on a given processor with your 3D software application of choice?

–   What is the volume of data going into the image, and how long does it take to retrieve this data from the drive?

–   How reliant is the delivery of rendered frames on memory speed?

–   What is the time frame for delivering your frames to a designated drive?

Once these have been answered for one system (processor/memory/shared drive) we can start to think about the render farm as a whole. What starts happening to our render performance if we add more processors, more memory or faster storage? Or, in terms of our takeaway analogy, what happens if we add more chefs to cook the food, more people on the phones to take and prioritise orders, or more drivers to deliver it?

We have to be careful at this stage to keep all parts of the farm balanced. For example, you don’t want jobs coming in faster than the processors can handle them but, on the flip side, you don’t want jobs being processed so quickly that the storage can’t read and write frames fast enough.

If we look at this sensibly, we’ve already established roughly how long it will take each processor to get its next set of instructions, render 10 frames and then save them to a location. Using these figures, we can estimate capacities and rates, allowing us to work out the best way to spend our money.
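As a sketch of that kind of estimate (with entirely made-up numbers standing in for your own measurements):

```python
# A back-of-the-envelope capacity check for a render farm. The figures
# here are placeholders: plug in timings from the questions above.

render_secs_per_frame = 120.0        # measured on one node
frame_size_mb = 50.0                 # size of one rendered frame
storage_write_mb_s = 1000.0          # sustained write speed of shared storage

# Each node produces this much data per second, on average.
per_node_mb_s = frame_size_mb / render_secs_per_frame

# Nodes the storage can feed before writes become the bottleneck
# (assumes writes are spread evenly rather than arriving in bursts).
max_nodes = storage_write_mb_s / per_node_mb_s

print(f"Each node writes ~{per_node_mb_s:.2f} MB/s")
print(f"Storage saturates at roughly {max_nodes:.0f} nodes")
```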

Conclusion

Render farms are usually CPU bound, doing a large amount of processing relative to the data coming in and out, but fast I/O is still a good thing. A large cache can also be beneficial, depending on the size of the frames being rendered.

With data transfer rates at the speed they are, it’s unlikely that SAN speed will be the first bottleneck, but this is very dependent on the size of the render farm. For example, there will come a point where there are too many nodes rendering in parallel, resulting in the processors waiting on the storage. At that stage, things start to depend on how big each image is, how long it takes to render a frame, and whether frame completions are synchronised.

Visit Jigsaw24, or feel free to call 03332 409 306 or email sales@Jigsaw24.com with any CAD-related questions.