Last summer, I spoke to Daryn Okada, ASC, about his cinematography on the short film Emma, a thriller directed by Howard Lukk that was designed to gauge the potential of high-dynamic-range imaging, or HDRI. I recently reconnected with Daryn to find out whether there had been any surprises in the postproduction and screening stages of the project, which premiered at SMPTE’s Hollywood Symposium in the fall. His answer was yes.
“We used a lot of single-source, modeled lighting on Emma because that was the mood of the piece,” says Daryn. “The capability for HDRI monitoring on the set didn’t exist yet, and we didn’t have a chance to do comprehensive tests. Projecting the edited piece in HDR, we were able to detect a lot of the makeup in the images. We felt the makeup needed to be more naturally blended. It’s not unlike the difference between HD and film, where the limited color gamut of HD requires adjustments to makeup. In this case, it wasn’t color gamut; we just had more texture between highlight and shadow.”
Another surprise had to do with lighting. “We had a couple of practicals that we wanted to appear hot, but they ended up being even brighter than we had anticipated,” says Daryn. “We could bring them down in post, but once we started messing with it too much, it looked like we were intentionally stepping on it.
“I think my conclusion is that cinematographers need to be able to control the HDR image and not always exploit the expanded dynamic range — the luminance range to be expanded has to be selective. For example, you can extend the highlights and let everything else fall into a more normal range that we’re accustomed to seeing. That way, you don’t have to constantly readdress everything.
“You don’t get that much of a benefit in the other areas, anyway,” he adds, “because it’s on the extreme ends that your eye perceives a deeper, more detailed sort of image. It seems to excite your eye more when you have a brighter bright and a darker dark.
“In my opinion, HDR has to be able to capture what you could do on film by shooting images with brighter contrast, but in a way that transforms the information your eye doesn’t need into a form that’s acceptable, not distracting,” he says. “Right now, we need to determine what the shape of that curve might be; we need to work with it and adapt it for different kinds of material. I don’t think it’s ‘one size fits all.’ Just like color timing and other elements of visual style, it must be adapted to the needs of the scene.”
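The selective approach Daryn describes — leave the normal range alone and roll off only the extremes — can be sketched as a simple tone curve. This is not the actual curve used on Emma (the film's pipeline isn't specified here); it's a minimal illustration, with the `knee` and `max_out` parameters chosen arbitrarily, of how highlights can be compressed toward a ceiling while mid-tones and shadows pass through untouched:

```python
import math

def rolloff(x, knee=0.8, max_out=4.0):
    """Pass scene values at or below `knee` through unchanged;
    compress values above it so the output asymptotically
    approaches `max_out` instead of clipping."""
    if x <= knee:
        return x  # the "normal" range we're accustomed to is untouched
    span = max_out - knee
    # Exponential shoulder: continuous in value and slope at the knee.
    return knee + span * (1.0 - math.exp(-(x - knee) / span))
```

Because the curve is the identity below the knee, only the brightest values — the practicals Daryn mentions, for instance — are reshaped, so the rest of the image doesn't have to be constantly readdressed.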
In other words, HDR is another imaging tool that, like any other, must be controlled and put in the service of the story and character. Daryn notes, “The old adage applies: You have to light it and shoot it right, with creative intent, for the correct result. It’s not a ‘fix it in post’ situation.”
He says there are several different ways to think about HDR.
“One issue is how to handle it in post, which I look at as the mastering process. The other is as a way to prepare for tomorrow’s display technologies — how do we protect the embedded details so that the images we archive can be repurposed in a cost-effective way? That’s a moving target. Right now, the display side is where the big challenges are. I think only an ACES-type workflow with HDR files has enough flexibility to deliver a rich, detailed image that’s fun to watch. That’s where the biggest shift needs to happen. The bandwidth of DPX files can’t even hold all the detail that’s in Kodak 5219 negative.”
Of course, post facilities would need to adapt to accommodate significantly greater bandwidth.
My first post about Daryn’s work on Emma prompted some questions from readers, including James L. Carter, ASC, who asked, “Isn’t HDR shooting multiple exposures per frame? What did I miss?”
Some HDR systems work that way, according to Daryn, especially still-photography systems. Two-camera rigs with a beamsplitter have been demonstrated by Technicolor. But for a single camera, multiple-exposure motion imaging can introduce temporal artifacts — slight differences between exposures — that cause problems. On Emma, HDR was essentially a way of processing and displaying the image that allowed the system to extract and deliver more of the information the sensor was able to gather.
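For readers unfamiliar with the still-photography technique Carter is asking about, the core idea is to bracket several exposures of a static scene and merge them, trusting each pixel most where it is neither crushed nor clipped. The sketch below is a generic illustration of that merge, not any system used on Emma; the hat-shaped weighting function and the list-of-frames representation are simplifying assumptions:

```python
def hat_weight(z):
    # Trust mid-tones most; distrust values near black or near clipping.
    return z if z <= 0.5 else 1.0 - z

def merge_exposures(frames, exposure_times):
    """Merge bracketed still exposures into a radiance estimate.
    `frames` is a list of equal-length lists of normalized [0, 1]
    pixel values, one list per exposure; `exposure_times` gives the
    shutter time for each frame."""
    merged = []
    for px in zip(*frames):  # same pixel across all exposures
        num = sum(hat_weight(z) * z / t for z, t in zip(px, exposure_times))
        den = sum(hat_weight(z) for z in px)
        # Fall back to the first frame if every exposure is unusable.
        merged.append(num / den if den > 0 else px[0] / exposure_times[0])
    return merged
```

The reason this doesn't transfer directly to motion imaging is visible in the loop: it assumes the same pixel sees the same scene point in every exposure, which moving subjects violate — hence the temporal artifacts Daryn mentions, and the single-exposure remapping approach taken on Emma.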
“For live action, we’re taking everything the sensor can give us and remapping it in such a way that we can use more of the dynamic range,” says Daryn. “It’s not truly an HDR image in the still-photography sense, but we’re taking advantage of higher-dynamic-range displays because that’s what was holding us back. You have to be able to work with a display that’s going to give you that range and more. At the moment, most flat panels have more dynamic range than the signal we’re providing to them, and the capabilities of available displays are changing every six months. Right now, we have no way of managing that additional range, and that’s the reason for this experiment.
“Because HDR delivers wider color gamut and luminance, these images can look more realistic without being manipulated or compressed, which sort of flattens everything out. You can display all the bright and darker values in a way that your eye sees as real.”