The American Society of Cinematographers

Loyalty • Progress • Artistry
October 2008
Body of Lies
Production Slate
Need for Speed
Post Focus
DVD Playback
ASC Close-Up
Grounding Gamers in the Real World

Video-game creators have long been wary of incorporating live-action footage into their games, fearing it might disconnect players from the fantasy in which they are immersed. “It’s a huge risk,” observes Mataio Gardi, development director at Electronic Arts (EA) Black Box. But a new version of the game Need for Speed might help change that — and might redefine the way game creators and filmmakers collaborate.

For Need for Speed: Undercover, slated for release this month, EA Black Box asked director Joseph Hodges and cinematographer Jeffrey Seckendorf to create 22 minutes of narrative footage, which was then used to set the look and style for the game. According to Gardi, the company decided its computer-generated footage appeared photo-realistic enough to intercut smoothly with live-action elements, but having good visual storytelling in the latter would be key. “We wanted to do this right and get a good director and cinematographer,” he says.

Hodges, the production designer on the television series 24 (AC Feb. ’04), had previously directed a smaller job for EA. For Need for Speed, he brought on producer Eric Mofford, a 24 collaborator, and Mofford tapped Seckendorf to shoot the project.

At first, Seckendorf turned the job down, but as Mofford explained how the footage would establish the game’s look, the cinematographer became intrigued. “Really, it’s a feature film wrapped in a game,” says Seckendorf. “They hired us because we’re filmmakers, not game-makers. They said, ‘Make the footage look as amazing as you can, and we’ll make the game look like that.’”

Hodges and Seckendorf had to shoot about 55 individual scenes and sequences — ranging from one-shot pieces to scenes as long as 90 seconds — that will appear after the players complete a level of the game. “I’ll never see the stuff because I’ll never get past Level One!” jokes Seckendorf.

Hodges, though, is an avid gamer, and “he really understood how the narrative elements had to fit into the game,” says Seckendorf. The director also knew he would have to impose a Hollywood-style hierarchy on the EA team. “The problem with [gaming] is it’s like walking into a schoolroom without a teacher, and all the kids are throwing paper around,” says Hodges. “There are no professional borders — everyone thinks he’s an art director, and everyone thinks he’s a writer. I had to say, ‘I’m the director, and this is how it’s going to be.’”

In the past, says Hodges, live-action game elements have usually consisted of actors standing in front of the camera and talking to the players, something he finds very uncomfortable to watch. “We changed the script for Need for Speed to give the performers something to do,” he explains. He tried to make the audience feel like part of the scene by having the actors talk with one another and occasionally glance at the camera, as if it were another character, and by moving the camera behind objects so viewers would feel as though they were on the set.

During prep, Seckendorf investigated various digital cameras, including Panavision’s Genesis, and he eventually decided on the Red One, partly because he and Hodges wanted to shoot with multiple cameras, something that was relatively affordable with the One. He chose the oldest set of Zeiss Standard Speed primes he could find, as well as two new Angenieux zooms, a 17-80mm and a 24-290mm.

The shoot took place entirely in a warehouse in downtown Los Angeles, where the crew moved between eight separate sets that Seckendorf, gaffer Larry Wallace, key grip Rocky Ford and other crew pre-lit with Maxi-Brutes bouncing into muslins overhead. The team shot about 24 pages in six days, with two cameras running most of the time.

Camera operator Dan Turrett says that going into the project, one of his concerns was the One’s viewing system, but he was pleasantly surprised. “Even though the camera doesn’t have an optical finder, the viewing system is quite excellent,” he says. “It has a color video monitor that’s the best I’ve ever seen.” Rather than requiring the operator to change the length of the viewfinder itself, as on an Arricam or a Panaflex, the One mounts its monitor on a sliding rail, so it could be positioned anywhere Turrett wanted it.

The One is relatively small and light, qualities that were very important on this project. “One of the concerns, since we were going to do a lot of handheld, was that a lot of [the HD] cameras were never designed to be put on the shoulder,” says Turrett, “but I was quite impressed with the One in that sense.”

Need for Speed is a racing game, so the cars in the live-action footage had to be well-lit. To bounce highlights into a car’s fenders, Seckendorf and Wallace used large, skinny muslins measuring about 4'x20'. They were able to determine where to put the light sources by using a laser pointer: from the camera’s perspective, Wallace would point at any area in the car that he and Seckendorf felt needed extra light, and the laser beam would bounce off the car and go to exactly where the light source needed to be.
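The laser trick works because glossy car paint reflects specularly: a beam aimed from the lens position bounces off the bodywork along the mirror direction, landing exactly where a light source must sit to read as a highlight from the camera. A minimal sketch of the underlying mirror-reflection vector math (the example directions are hypothetical, not from the article):

```python
import numpy as np

def reflect(d, n):
    """Specular (mirror) reflection of incoming direction d off a surface
    with normal n: r = d - 2(d.n)n."""
    n = n / np.linalg.norm(n)          # normalize the surface normal
    return d - 2 * np.dot(d, n) * n

# Hypothetical setup: a beam travels from the camera straight toward a
# fender point whose surface normal tilts 45 degrees up toward the lens.
incoming = np.array([0.0, 0.0, -1.0])               # beam heading away from camera
normal = np.array([0.0, 0.70710678, 0.70710678])    # 45-degree fender normal
print(reflect(incoming, normal))                     # direction toward the light's spot
```

With a 45-degree normal the beam reflects straight up, which is why a ceiling-rigged bounce source would light that highlight.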

One pivotal element of the design and lighting was a Translite backing of the city where the game is set. The backing is most prominent in a hotel-room set, but it turns up in almost every other set, too — barely visible through an open door, reflected in the shiny metal of a car, or behind frosted-glass panels, for example. It’s a unifying element that brings every scene into the yellow-gold palette that characterizes the entire game. “It’s those little things that help the players feel as though they’re not departing too far from the game,” notes Gardi.

Hodges recalls that someone initially suggested putting a greenscreen outside the hotel-room window. “I suggested a backing instead, because I always prefer doing things practically,” he says. “I got the EA art department to create a view looking out over the city, and then I played with it a lot in Photoshop.”

Although the backing was made of material designed to be lit from the front, Seckendorf found it looked more luminous and had lower contrast when lit from behind. To create a soft, directional keylight, he used a 20K shining down an 8'-long foamcore box (4' wide on each side and open at either end, with the black side of the foamcore facing in), sometimes adding a 1⁄4 grid in the middle of the box. He used China balls to fill the actors’ faces, moving the lights closer or farther away as needed.

Seckendorf lit exclusively with tungsten light, but the One has a daylight-balanced chip. “I’ve shot uncorrected daylight with tungsten film, but I heard about noise with this camera,” he notes. During prep, he ran a series of tests and found that although using a daylight filter cost him exposure, there was noticeably less noise. The best balance between noise and stop-loss came with an 82B filter, which cost about 2⁄3 of a stop of exposure.
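That 2⁄3-stop figure squares with standard filter-factor arithmetic: an 82B is commonly published with a filter factor of about 1.5, and stop loss is the base-2 logarithm of the factor. A quick check (the 1.5 factor is the standard published value, an assumption here rather than a figure from the article):

```python
import math

def stop_loss(filter_factor):
    """Exposure loss in stops for a given filter factor (a factor of 2 = 1 stop)."""
    return math.log2(filter_factor)

# An 82B cooling filter's commonly published factor of ~1.5 works out to
# roughly 2/3 of a stop, matching Seckendorf's figure.
print(round(stop_loss(1.5), 2))  # ~0.58 stops
```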

The production shot the equivalent of 90,000' of film, recording to drives rather than cards. The One records in a proprietary format called Redcode Raw (.r3d), and these data files stay intact, even when the cinematographer applies look-up tables to the footage. “It’s raw data,” says Seckendorf. “With the Genesis and cameras that go out to tape, the digital-imaging technician [DIT] is controlling the image. With the One, it’s more like a negative. The image is not being managed or set during the shoot, so for the DIT, it’s more a matter of data management. But it’s important to look at the raw files in Red software and make sure you can see detail. If you can’t, it’s not there.”
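For scale, the 90,000-foot comparison converts to runtime with standard film figures; a back-of-the-envelope sketch, assuming 4-perf 35mm (16 frames per foot) and the project’s 29.97-fps shooting rate:

```python
FRAMES_PER_FOOT = 16   # standard 4-perf 35mm
FEET_SHOT = 90_000     # the article's film-equivalent figure
FPS = 29.97            # the project's shooting rate

frames = FEET_SHOT * FRAMES_PER_FOOT
hours = frames / FPS / 3600
print(f"{frames} frames, roughly {hours:.1f} hours of material")
```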

He found the One’s imagery very close to the quality of Kodak Vision2 35mm film stock. “It’s like 35mm with a little less latitude at the top and bottom [of the exposure curve].” But, he notes, it is neither film nor HD. “It works as its own thing, its own medium. I’ve shot a million miles of film, and this is by far the best non-film camera I’ve ever touched.”

During production, all the camera data was immediately backed up onto two FireWire drives and shuttled to Plaster City Post, which handled the editorial and online work. At Plaster City, the original .r3d files were backed up onto LTO tapes and then transferred to the facility’s network. The .r3d files were then transcoded into Apple ProRes 2K using Final Cut Pro’s Log and Transfer.

After the editors spent three or four weeks doing offline editing using the ProRes files, the online began — and so did the problems. “It was a very difficult job,” says Michael Cioni, founder and director of operations at Plaster City. To meet EA’s needs, the project had been shot at 29.97 fps, and the offline editors worked using Final Cut Pro’s multi-cam capability. Unfortunately, those two approaches did not mesh well — the Auto Conform process did not function properly with a 29.97/multi-cam Final Cut Pro project, so the whole thing had to be hand-conformed, which took several days.

“Red is still a science experiment and a moving target,” says Cioni. “Even though we’ve done 75 Red projects, we still find problems. We report them to Red, Apple and Assimilate [makers of the color-correction system Scratch], and those companies are great about using our input and fixing the problems. They’re running to catch up and break our fall.”

Once the online and color-correction were finished, Plaster City provided EA with Seckendorf’s color-corrected version and with the raw, uncorrected DPX files. These were sent to the Vancouver visual-effects facility Anamorphic Productions Ltd., which used the color-corrected version as a baseline and took the uncorrected frames into Apple Shake to refine the look further.

Jason Toth, a visual-effects supervisor and CEO at Anamorphic, explains that his team rotoscoped every character in the footage, isolating each so its look and lighting could be manipulated individually. He notes, however, that EA didn’t want the images to look heavily processed, so the visual-effects artists tried to make every effect look as though it could have happened in a real camera — vignetting, a smudge on the lens, or camera shake, for example. “We’re playing with it but still using all the tools a cinematographer would use,” says Toth.

Gardi says he hopes the combination of CG material and live-action footage adds up to a seamless experience for the players. “We knew going in it was risky, and often, what we predict is not how the public reacts,” he says. “In gaming, we always aspire to the cinematography produced in Hollywood, and we’re still trying to find the right footsteps for converging the two worlds. The next five years are going to be very interesting.” 

