When does production become post?

The increasingly common combination of high shoot ratios and tight deadlines puts pressure on facilities to increase efficiencies across production and post. In an effort to reduce turnaround times and cut down the amount of time spent on non-billable activities, production and post teams are now working together from far earlier into the production schedule, with post houses sending staff to set to ensure footage is logged as soon after the shoot as possible, and DITs performing post-critical functions.

Manufacturers are keen to facilitate this collaboration – AJA seem to have kicked off the scramble to unite the two when they released the first Ki Pro back in 2009, and no one seems to have paused for breath since. But there are dozens of factors to consider when planning your workflow, from whether you’ll be handling HDR footage, to IP integration, to the impact of the incoming 5G connectivity standard. With that in mind, we’re taking a look at a few different points of contact between production and post, and how you can make sure your workflow at each one is mutually beneficial for both teams.

Shooting and monitoring

Accurate metadata can speed up post-production immensely, by making it far easier for artists to match the original scene conditions when compositing, compensate for issues with specific cameras or lenses when correcting footage, and more.

Zeiss are currently setting the standard for incredibly detailed metadata with their new eXtended Data lens, the CP.3 XD. As well as giving your DoP the precision, quality and other benefits of working with Zeiss glass, XD lenses generate a huge amount of metadata about each shot, covering not just settings like focal length and exposure, but the characteristics of the lens itself. In post, working from this metadata becomes a quicker, easier way to compensate for lens shading, or to correct for the different distortions of the individual lenses used in production. When compositing, it drastically cuts down the amount of trial and error (and therefore time) needed for artists to match on-set lighting conditions. This ultimately drives down the time and money needed for post, and so could even help buy you more time on set.
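To give a sense of how this kind of lens data is used downstream, here’s a minimal sketch of a lens shading (vignetting) correction driven by per-lens metadata. The field names and the simple radial falloff model are our own illustrative assumptions, not the actual eXtended Data format, which carries far more detailed correction data per frame.

```python
import numpy as np

# Hypothetical per-shot lens metadata (illustrative only - not the real
# Zeiss eXtended Data schema).
lens_metadata = {
    "focal_length_mm": 50,
    "t_stop": 2.1,
    "shading_falloff": 0.35,  # assumed brightness loss at the corners (0-1)
}

def correct_shading(frame, metadata):
    """Undo a simple radial vignette using the lens shading value.

    frame: float32 array of shape (height, width, channels) in linear light.
    """
    h, w = frame.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    cx, cy = (w - 1) / 2, (h - 1) / 2
    # Normalised distance from the optical centre (0 at centre, 1 at corners).
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / np.sqrt(cx ** 2 + cy ** 2)
    # Assumed model: brightness falls off quadratically towards the corners.
    falloff = 1.0 - metadata["shading_falloff"] * r ** 2
    return frame / falloff[..., None]

# corrected = correct_shading(linear_frame, lens_metadata)
```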

Monitor and recorder manufacturers Atomos have attempted to bring a similar spirit of cooperation to monitoring with their newly announced SUMO 19 HDR production/grading monitor, which can record dailies, proxies or 4Kp60 masters as needed. This means camera crews can see what they’ve captured in HDR, as it will appear to post teams, and be sure they’re happy with the shot as it appears, rather than having to guess based on a Rec. 709 image. The recording feature also means that dailies (or a low res proxy, if you have limited bandwidth or storage) can be sent to a post facility immediately, and assembly can begin far earlier than usual.

Solutions like this are making it easier for production and post crews to maintain a common vision of the project throughout, and reduce the time taken to create the final product without limiting either party’s options in the way that, say, Sony baking HLG into footage from some of its lower-end cameras does.

Logging and metadata

Loggers and ingest technicians are increasingly venturing out to log footage as close to set as possible. While data and asset management has been an intrinsic part of post for a long time, it’s now widely acknowledged that by focusing more on this on set, crews can increase the overall efficiency of the project, and drastically reduce the time needed to put everything together in post.

Asset management systems like axle Video are excellent – axle is particularly good if you’re new to this, as you can just point it at your file system and it will automatically index all media files, then update its database in real time as you add new footage. You can then share low res proxies through a web browser so that people can reject, trim and comment on clips; it’ll even integrate with NLEs so that editors can search new footage without leaving their editing application. It ships with a standard metadata schema, but you can customise this to the requirements of your shoot.

Avid’s MediaCentral | Asset Management option (formerly Avid Interplay MAM) performs a similar function, indexing media in a range of formats and allowing you to add custom metadata in order to make it easier to find. It even allows you to remotely access assets from multiple locations, so if crews at different locations both log footage, all of it will be available for review at the same time. Avid’s MediaCentral system also allows for a high degree of automation when it comes to things like ingest, logging, archiving and sharing footage, meaning you can achieve more in less time, and with a smaller team.

Cloud delivery

Once footage has been logged, it can be sent back to the post facility, or to a staging post if you’re in a remote location. As the available networks have become faster, cloud delivery has gained popularity, whether that’s ENG crews using in-camera FTP capabilities to send footage back to the newsroom, or crews on location leveraging file sharing services to deliver footage to post as quickly as possible. And with 5G set to make 100Mbps over-the-air file sharing a reality over the next few years, this option is only going to get more popular.

If you’re collecting or monitoring footage from drones, car-mounted cams and other inaccessible recorders, Soliton’s on-camera encoders and receivers are a great investment – they use a mixture of H.265 compression and proprietary RASCOW technology to ensure you see an HD live stream of your footage even in areas where 3G and 4G coverage is patchy, with delays as low as 240 ms.

For reliable file transfer, we’d recommend IBM’s Aspera service. While it’s pricier than WeTransfer, it uses end-to-end encryption to keep your footage secure and, unlike consumer services, doesn’t get slower the larger your files are. Another feature we’re particularly keen on is that it calculates the precise time a transfer will take on your current connection before it begins, so if it says a transfer will take seven hours, you can ring ahead and let your colleagues know when to expect the file with a fairly high degree of certainty.
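If you just want a rough figure before handing things over to any transfer tool, the back-of-envelope maths is simply file size over sustained throughput. Here’s a quick sketch (our own approximation, not how Aspera calculates it), which assumes you only sustain around 80% of your nominal link speed:

```python
def transfer_time_hours(file_size_gb, link_speed_mbps, efficiency=0.8):
    """Rough transfer-time estimate.

    file_size_gb: payload size in decimal gigabytes.
    link_speed_mbps: nominal link speed in megabits per second.
    efficiency: fraction of the nominal speed actually sustained
    (protocol overhead, contention) - 0.8 is just an assumption.
    """
    size_megabits = file_size_gb * 1000 * 8
    return size_megabits / (link_speed_mbps * efficiency) / 3600

# Example: 500GB of rushes over a sustained 100Mbps connection.
print(f"{transfer_time_hours(500, 100):.1f} hours")  # ~13.9 hours
```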

How does this all fit together?

We can help you develop workflows to maximise efficiency in production and post, and advise on ways to prepare your existing infrastructure for the future, or fold new releases into your existing workflow. As well as providing consultancy, workflow design and specialist hardware, we can provide ongoing support and maintenance for your core kit. To find out more, get in touch with the team on the details below.

If you want to know more, give us a call on 03332 409 306 or email broadcast@Jigsaw24.com. For all the latest news, follow @WeAreJigsaw24 on Twitter, or ‘Like’ us on Facebook.

HDR: Which format is which?

UHD or ‘Ultra High Definition’ television promises many things, among them high dynamic range (HDR), a wider colour gamut (ie getting closer to the huge range of colours that most people can see), higher frame rates (for super-smooth action, particularly in sport) and higher resolution (4K). Between them, they’re shaking up the TV technology landscape.

In HDR, there are three main standards you’ll have probably heard of. For delivery, the BBC and NHK have developed their Hybrid Log-Gamma system, HLG, while Dolby favour their own Dolby Vision (also known as Dolby PQ). Then, for more domestic delivery, there is also HDR 10.

The principle of using an alternate gamma so that you concentrate the bit-depth where you want the extra range is well established; our eyes do not perceive light the way cameras do. To recap, with a camera, when twice the number of photons hit the sensor, it receives twice the signal (a linear relationship). We, on the other hand, perceive twice the light as being only a fraction brighter — and this is increasingly true at higher light intensities (a nonlinear relationship).

Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describing the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive). This means gamma encoded images store greyscale more efficiently.
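To put some numbers on that, here’s a quick sketch comparing how many 10-bit code values land in each stop of a ten-stop range under straight linear encoding versus a simple power-law gamma (we’ve used 1/2.4 here purely as an illustration):

```python
# Code values per stop over a 10-stop range: linear vs a 1/2.4 power gamma.
CODE_VALUES = 1023  # full-range 10-bit

def codes_in_range(lo, hi, encode):
    """Number of code values between linear light levels lo and hi."""
    return (encode(hi) - encode(lo)) * CODE_VALUES

linear = lambda x: x
gamma = lambda x: x ** (1 / 2.4)

for stop in range(10):  # darkest stop first
    lo, hi = 2.0 ** (stop - 10), 2.0 ** (stop - 9)
    print(f"stop {stop + 1:2d}: "
          f"linear {codes_in_range(lo, hi, linear):6.1f}  "
          f"gamma {codes_in_range(lo, hi, gamma):6.1f}")

# The darkest stop gets roughly 1 code value with linear encoding but
# around 19 with the gamma applied, while the brightest stop drops from
# about 512 to about 257 - the bits go where our eyes can use them.
```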

ITU-R BT.2100

It does seem like all of the manufacturers will coalesce around BT.2100, which defines (amongst other things) how you handle specular highlights: those very bright parts of the picture that really add to the look of an image.

Specular highlights are typically defined as anything above 500 cd/m², which is much brighter than broadcast white! The idea is that in 10-bit HDR, the tenth bit of dynamic range (all values above 512) represents the highlights, while the other nine bits cover something akin to the usual video dynamic range.
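Taking HLG as the example, you can see that split in the transfer function itself: the BT.2100 HLG OETF maps a scene light level of 1/12 of peak to a signal of exactly 0.5, so in full-range 10-bit everything above roughly code value 512 is carrying highlight information. A minimal sketch:

```python
import math

# HLG OETF from ITU-R BT.2100: scene linear light E (0-1) to signal E' (0-1).
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(e):
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

print(hlg_oetf(1 / 12))  # 0.5 - around code value 512 in 10-bit
print(hlg_oetf(1.0))     # ~1.0 - peak signal
```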

Delivery formats

There are three delivery formats you need to consider.

HLG

HLG was developed by the BBC and their Japanese counterpart NHK. It is a scene-referred system, just like conventional television, and has been designed with the specific goal of making the transition to HDR easy on broadcasters and production crews – hence its compatibility with SDR, which means that broadcasters can continue to use their existing 10-bit SDI production installations (as with all video, levels are considered dimensionless).

HLG uses relative brightness values to dictate how an image is displayed – the display uses its knowledge of its own capabilities to interpret the relative, scene-referred information. This means that the image can be displayed on monitors with very different brightness capabilities without any impact on the artistic effect of the scene. Because it uses relative values, HLG does not need to carry metadata, and can be used with displays of differing brightness in a wide range of viewing environments.
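As an illustration of that display-side adaptation, BT.2100 gives HLG a system gamma that varies with the display’s nominal peak luminance (1.2 at 1,000 cd/m²). A rough sketch of the relationship as we read the spec:

```python
import math

def hlg_system_gamma(peak_luminance):
    """HLG system gamma for a given nominal display peak luminance (cd/m2).

    Uses the gamma = 1.2 + 0.42 * log10(Lw / 1000) relationship from
    ITU-R BT.2100, intended for displays roughly in the 400-2,000 cd/m2 range.
    """
    return 1.2 + 0.42 * math.log10(peak_luminance / 1000)

for lw in (500, 1000, 2000):
    print(lw, round(hlg_system_gamma(lw), 3))
# 500 -> ~1.074, 1000 -> 1.2, 2000 -> ~1.326: the same scene-referred signal
# is rendered slightly differently depending on the display's capability.
```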

HLG is supported in Rec. 2100 with a nominal peak luminance of 1,000 cd/m² (though the BBC have said this is an artificial cap imposed by the monitors they use, and the real figure is more like 4,000). It is also supported in HEVC.

Dolby Vision or Dolby PQ (Perceptual Quantiser)

Dolby Vision is the wider set of products that covers both digital cinema and video – Dolby PQ is the element we’re concerned with here. Unlike HLG, Dolby PQ is a display-referred system that uses absolute, dimensioned values for the light captured.

The metadata that travels in the SDI payload defines how video levels equate to light levels, and how they should be reproduced at the Dolby PQ display end. The display then reports back to the playback device via EDID to convey its maximum light output.

Dolby PQ supports a maximum brightness of 10,000 cd/m².
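Because the system is display-referred, the signal maps directly to an absolute light level. Here’s a sketch of the PQ EOTF as defined in SMPTE ST 2084 (and referenced by BT.2100), turning a 0–1 signal into a luminance in cd/m² up to that 10,000 cd/m² ceiling:

```python
# PQ (SMPTE ST 2084) EOTF: non-linear signal (0-1) to absolute luminance in cd/m2.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal):
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

for s in (0.0, 0.5, 0.75, 1.0):
    print(s, round(pq_eotf(s), 1))
# A signal of 1.0 corresponds to the full 10,000 cd/m2; a signal of about
# 0.75 sits near 1,000 cd/m2, which is where many current grading displays
# top out.
```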

HDR 10

HDR 10 is another format you’ll have heard of. It’s an open standard based on PQ, developed by device manufacturers, but it targets a lower video quality, mastered in 10-bit and at up to 1,000 cd/m² (compared to Dolby’s potential 10,000). It does use metadata, but in a far simpler form than Dolby Vision, specifying one luminance level for the entire programme rather than working frame by frame.

It’s not backwards compatible either (so it can’t be viewed on SDR displays), but because it’s far more affordable, it’s a popular standard for domestic delivery, particularly on systems where it’s easy to host an SDR version alongside the HDR one (so UHD Blu-rays, set-top boxes and streaming services like Amazon). Sony and Microsoft have also gone down this route for PlayStation 4 and Xbox One S. So while it’s big in home cinema, it ultimately has little relevance for the professional production environment.

Dynamic range

All of these formats offer greater dynamic range than the human eye (about 14 stops), SDR video (about six stops) and 10-bit professional SDR (about 10 stops). When HLG footage is displayed on a 1,000 cd/m² display with a bit depth of 10 bits per sample, it has a dynamic range of 200,000:1, or about 17.6 stops.
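As a quick check on that figure, converting a contrast ratio to stops is just a base-2 log, since each stop is a doubling of light:

```python
import math

def contrast_ratio_to_stops(ratio):
    """Each photographic stop is a doubling of light."""
    return math.log2(ratio)

print(contrast_ratio_to_stops(200_000))  # ~17.6, the HLG figure quoted above
```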

HLG also increases the dynamic range by leaving out the linear part of the conventional gamma curve used by Rec. 601 and Rec. 709. That linear section was there to limit camera noise in low light video, but is no longer needed with HDR cameras.

Dolby PQ has an even broader dynamic range of around 18 stops, but this is necessary for a display-referred system, as you need to be able to accommodate many different types of display and viewing conditions.

Which is right for you?

In a controlled environment like a movie theatre, Dolby Vision makes a lot of sense – it gives you very precise control, can take advantage of more advanced displays, and has a high degree of futureproofing thanks to that whopping 10,000 cd/m² upper limit. If you’re in a feature-focused facility with an existing Dolby workflow, rolling this system out to other parts of your pipeline is an obvious step.

However, if your bread and butter jobs come from the BBC and you know that many of your viewers will ultimately be watching your content in their living room, in the office at lunch or on their iPhone, aligning your setup with their standard seems sensible: you’ll be in line with a major customer, and the adaptive nature of HLG means it’s well suited to the variety of viewing environments you need to cater for when producing online or television content.

How can we help?

Well, we can make recommendations for acquisition, post-production and delivery of HDR content. We carry cameras, monitors and video interfaces appropriate to HDR workflows, and can often offer demo kit to test in customers’ workflows. To find out more, get in touch with the team on the details below.

If you want to know more, give us a call on 03332 409 306 or email broadcast@Jigsaw24.com. For all the latest news, follow @WeAreJigsaw24 on Twitter, or ‘Like’ us on Facebook.

IBC 2017: Sony’s Z450 gets new firmware

PXW-Z450 users, there’s new firmware coming your way. Granted, it’s not due until December, but in three short months you’ll be the proud owner of a camera with far better HDR, 4K and audio support. 

Sony are really pushing to get their lineup HDR-friendly, and the Z450 gains support for Hybrid Log-Gamma (HLG) and S-Log3 recording and output, as well as the BT.2020 colour space. You’ll also be able to record 4K HDR and HD SDR simultaneously to one card, and record HDR with BT.709 for an SDI output.

While you’re shooting in HLG or S-Log3, your viewfinder will display your image using the BT.709 colour space.

4K shooters will get Slow & Quick mode to play with (you can record at up to 60 fps in it), XAVC-L cache recording for 4K footage, and a 4K resolution Focus Mag option.

Updates on the audio side are largely practical, with improved support for Power Save Mode being the headline. It’ll be available on your assignable keys, will sync with power off, and can even be used to control the power level of some belt transmitters. You’ll also see the power save status in the viewfinder when the mode is enabled.

If you want to know more, give us a call on 03332 409 306 or email broadcast@Jigsaw24.com. For all the latest news, follow @WeAreJigsaw24 on Twitter, ‘Like’ us on Facebook or take a look at our IBC roundup.