Audio mixing strategy, part 1

Modern nonlinear editors have good tools for mixing audio within the application, but often it makes more sense to send the mix to a DAW (digital audio workstation) application, like Pro Tools, Logic or Soundtrack Pro. Whether you stay within the NLE or mix elsewhere, you generally want to end up with a mixed track, as well as a set of “split track stems”. I’ll confine the discussion to stereo tracks, but understand that if you are working on a 5.1 surround project, the track complexity increases accordingly.

“Stems” means that you do a submix for each component of your composite mix. Typically you would produce stems for dialogue, sound effects and music – a “pre-mixed” stereo AIFF or WAVE file for each of these components. When you place these three stereo pairs onto a timeline, the six tracks at a zero level setting should sum to a finished stereo composite mix. By muting any of these pairs, you can derive other versions, such as an M&E (music+effects minus dialogue) or a D&E (dialogue+effects minus music) mix. Maintaining a “split-track, superless” master (without text/graphics and with audio stems) will give you maximum flexibility for future revisions, without starting from scratch.
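If the math of stems seems abstract, here’s a toy Python sketch of the idea (the file names are made up and the soundfile package is an assumed third-party dependency – this is only an illustration of the summing, not part of the actual workflow):

    # Toy illustration: three stereo stems summed at unity gain reproduce the
    # composite mix; dropping the dialogue pair yields an M&E version.
    # Stems are assumed to be the same length; watch for clipping in practice.
    import soundfile as sf

    dx, rate = sf.read("dialogue_stem.aif")   # each stem is a stereo pair
    fx, _ = sf.read("sfx_stem.aif")
    mx, _ = sf.read("music_stem.aif")

    composite = dx + fx + mx                  # all six tracks at a zero level setting
    m_and_e = fx + mx                         # mute the dialogue pair

    sf.write("composite_check.aif", composite, rate, format="AIFF")
    sf.write("m_and_e.aif", m_and_e, rate, format="AIFF")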

A recent project that I edited for the Yarra Valley winemakers was cut in Avid Media Composer 5, but mixed in Apple Soundtrack Pro. I could have mixed this in Media Composer, but I felt that a DAW would give me better control. Since I don’t have Pro Tools, Soundtrack Pro became the logical tool to use.

I’ve had no luck directly importing Avid AAF or OMF files into Soundtrack Pro, so I would recommend two options:

a)    Export an AAF and then use Automatic Duck Pro Import FCP to bring those tracks into Final Cut Pro. Then “send to” Soundtrack Pro for the mix.

b)   Export individual tracks as AIFF audio files. Import those directly into Soundtrack Pro or into FCP and then “send to” Soundtrack Pro.

For this spot, I used option B. First, I checker-boarded my dialogue and sound effects tracks in Media Composer and extended each clip ten frames to add handles. This way I had some extra media for better audio edits and cross fades as needed in Soundtrack Pro. Next, I exported individual tracks as AIFF files. These were then imported into Final Cut Pro, where I re-assembled my audio-only timeline. In FCP, I trimmed out the excess (blank portion) of each track to create individual clips again on these checker-boarded tracks. Finally, I sent this to Soundtrack Pro to create a new STP multi-track project.

Soundtrack Pro applies effects and filters onto a track rather than individual clips. Each track is analogous to a physical track on a multi-track audio recorder and a connected audio mixer; therefore, any processing must be applied to the entire track, rather than only a portion within that track. My spot was made up entirely of on-camera dialogue from winemakers in various locations and circumstances. For example, some of these were recorded on moving vehicles and needed some clean-up to be heard distinctly. So, the next thing to do was to create individual tracks for each speaking person.

In STP, I would add more tracks and move the specific clips up or down in the track layout, so that every time the same person spoke, that clip would appear on the same track. In doing so, I would re-establish the audio edits made in Media Composer, as well as clean up excess audio from my handles. DAWs offer the benefit of various cross fade slopes, so you can tailor the sound of your audio edits by the type of cross fade slope you pick for the incoming and outgoing media.
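To see why the slope matters, here’s a quick numpy sketch (purely illustrative – nothing Soundtrack Pro-specific): with unrelated material on either side of the edit, a linear cross fade lets the summed power dip about 3dB at the midpoint, while an equal-power slope holds it steady.

    # Compare a linear cross fade with an equal-power (sine/cosine) cross fade.
    import numpy as np

    t = np.linspace(0.0, 1.0, 48000)          # a one-second fade at 48kHz

    linear_out, linear_in = 1.0 - t, t                             # linear slopes
    eq_out, eq_in = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)   # equal-power slopes

    print((linear_out**2 + linear_in**2).min())   # 0.5 -> about a 3dB power dip mid-fade
    print((eq_out**2 + eq_in**2).min())           # 1.0 -> constant power across the fade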

The process of moving dialogue clips around to individual tracks is often referred to as “splitting out the dialogue”. It’s the first step that a feature film dialogue editor does when preparing the dialogue tracks for the mix. Now you can concentrate on each individual speaking part and adjust the track volume and add any processing that you feel is appropriate for that speaker. Typically I will use EQ and some noise reduction filters. I’ve become quite fond of the Focusrite Scarlett Suite and used these filters quite a bit on the Yarra Valley spot.

Soundtrack Pro’s mixer and track sheet panes are divided into tracks, busses, submixes and a master. I added three stereo submixes (for dialogue, sound effects/ambiances and music) and a master. Each individual track was assigned to one of these submixes. The output of the submixes passed through the master for the final mix output. Since I adjusted each individual track to sound good on its own, the submix tracks were used to balance the levels of these three components against each other. I also added a compressor on the submix for general sound quality, as well as a hard limiter on the master, set to -10dB, to regulate spikes.

By assigning individual dialogue, effects and music tracks to these three submixes, stems are created by default. Once the mix is done to your satisfaction, export a composite mix. Then mute two of the three submixes and export one of the stems. Repeat the process for the other two. Any effects that you’ve added to the master should be disabled whenever you export the stems, so that any overall limiting or processing is not applied to the stems. Once you’ve done this, you will have four stereo AIFF files – mix plus dialogue, sound effects and music stems.

I ended the Yarra Valley spot with a nine-way tag of winemakers and the logo. Seven of these winemakers each deliver a line, but it’s intended as a cacophony of sound rather than being distinguishable. I decided to build that in a separate project, so I could simply import it as a stereo element into the master project. All of the previous dialogue lines are centered as mono within a stereo mix, but I wanted to add some separation to all the voices in the tag.

To achieve this I took the seven voices and panned them to different positions within the stereo field. One voice is full left, one is full right, one is centered. The others are partially panned left or right at increments to fill up the stereo spectrum. I exported this tag as a stereo element, placed it at the right timecode location in my main mix and completed the export steps. Once done, the AIFF tracks for mix and stems were imported into Media Composer and aligned with the picture to complete the roundtrip.

Audio is a significant part of the editing experience. It’s something every editor should devote more time to, if only to learn the tools they already own. Doing so will give you a much better final product.

©2011 Oliver Peters

When it absolutely has to be there

A lot of the productions we post these days are delivered either electronically on the web or as DVDs (or Blu-rays). Bouncing a finished product to an FTP site is a pretty good method for getting short projects around the world, but often masters or longer DVDs still require shipping. For many of us, FedEx is a mainstay; however, if it has to get halfway around the world by the next day, then even FedEx falls short. This reminds me of a bumper sticker slogan for an imaginary Tardis Express: “When it absolutely, positively has to get there yesterday!” So, with apologies to Dr. Who, how do you make this happen?

I recently had to get an eight-minute presentation to a client in Australia. This was to be presented from DVD. Due to last-minute changes, there was no time for physical shipping – even if we could have gotten it there overnight (quite unlikely). I could, of course, post an MPEG2 and AC3 file or a disc image file, but the client at the other end would not have been savvy enough to take this into a DVD authoring program (like DVD Studio Pro or even Toast) and actually burn a final disc. The second wrinkle was that my master was edited in the NTSC world of 29.97fps. Although many Australians own multi-standard DVD players, there was no guarantee that this would be the case in our situation. After a bit of trial-and-error with the director, we settled on this approach and I pass it along. Take this as more of a helpful anecdote than a professional workflow, but in a pinch it can really save you.

Apple iDVD will take a QuickTime movie and automatically generate the necessary encoded DVD files. That’s not much of a surprise, but, of course, FTP’ing a ProRes master wouldn’t have been feasible, as the file size would still have been too large. It turns out, though, that iDVD will also do this from other QuickTime formats, including high-quality H.264 files. Our Australian client’s daughter understood how to use iDVD, so the director decided it would be a simple matter to talk her through downloading the file and burning the presentation disc.

The first step for me was to generate a 25fps master file from my 29.97 end product. Compressor can do this and I’ve discussed the process before in my posts about dealing with HDSLRs. First, I converted the 29.97 file to a 25fps ProRes file. Then I took the 25fps ProRes high-def video and converted it in Compressor to a 16×9 SD PAL file, using a high-quality H.264 setting (around 8Mbps). I bounced it up to my MobileMe account, “shared” the file and let the daughter generate the DVD in Australia using iDVD on her MacBook. Voila! Halfway around the world and no shipping truck in sight!
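For what it’s worth, here’s a rough latter-day equivalent of that Compressor step as a Python sketch, assuming ffmpeg with libx264 is installed (file names invented). Note that the simple fps filter just drops frames to get from 29.97 to 25, whereas Compressor’s frame controls do a much higher-quality rate conversion – treat this as a stand-in, not a substitute.

    # 29.97 HD master -> 25fps 16x9 SD PAL file, H.264 at roughly 8Mbps.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master_2997.mov",
        "-vf", "fps=25,scale=720:576",      # naive rate change plus PAL SD raster
        "-aspect", "16:9",
        "-c:v", "libx264", "-b:v", "8M",    # high-quality H.264 at ~8Mbps
        "-c:a", "aac", "-b:a", "192k",
        "pal_for_idvd.mov",
    ], check=True)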

A related situation happened to me in 2004 – the year several hurricanes crisscrossed through central Florida. The first of these was headed our way out of the Gulf in the middle of my editing a large corporate job. Initially the storm looked like Tampa would get a direct hit and then pass to the north of Orlando. It was Friday and everyone was battening down for the weekend, so I called my announcer to see what his plans were for getting voice-over tracks to me. “No problem. I am putting up friends from Tampa and once they get settled in, I’ll record the tracks and send them your way.” That seemed fine, since I didn’t need these until Monday.

Unfortunately the storm track changed – blowing in south of the Tampa area and straight through central Florida. The main local damage was power outages, due to many fallen trees throughout the city. Power returned relatively quickly at my house, but much of the area ultimately was without power for several weeks. However, the weekend progressed and I still hadn’t heard back from my announcer. By Sunday I finally got through to him on the cell phone.

“Were you able to record the tracks?” I asked.  “Oh yes,” he replied. “They are up on my FTP site.” What followed is a classic. “We lost power and, in fact, it’s still out. I waited until the neighbor’s generator was off for the evening and was able to record the tracks to my laptop using battery power. Then I drove around and found a Panera Bread location.” Panera Bread is a national restaurant/coffee shop chain that offers free wi-fi connectivity in most of its locations. He continued, “The restaurant was closed, but they must have had power as the wi-fi was still running. So, I sat in the parking lot and uploaded the files to my FTP site.”

So thanks to modern technology and the world of consumer connectivity, both of these clients were able to receive their products on schedule. That’s in spite of logistical difficulties that would have made this sort of thing impossible only a few short years ago. Time machine – or phone booth – anyone?

©2010 Oliver Peters

Connections – looking back at the future

Maybe it’s because of The Jetsons or the 1964 World’s Fair or Disney’s Tomorrowland, but it’s always fun to look back at our past views of the future. What did we get right? What is laughable today?

I had occasion to work on a more serious future-vision project back in the 90s for AT&T. Connections: AT&T’s Vision of the Future was a 1993 corporate image video that was produced as a short film by Century III, then the resident post house at Universal Studios Florida. I was reminded of this a few years ago when someone sent me a link to the Paleo-Future blog. It’s a fun site that talks about all sorts of futuristic concepts, like “where are our flying cars?” Connections became a topic of a series of posts, including links to all sections of the original video.

The genesis of the video was the need to showcase technology, which AT&T had in the lab, in an entertaining way. It was meant to demonstrate the type of daily impact this technology might have in everyday life a few short years down the road. The project was spearheaded by AT&T executive Henry Bassman, who brought the production to Century III. We were ideally suited for this effort, thanks to our post and effects pipeline in sci-fi/fantasy television production (The Adventures of Superboy, Super Force, Swamp Thing, etc.) and our experience in high-value, corporate image projects. Being on the lot gave us access to Universal’s soundstages and working on these series put us together with leading dramatic directors.

One of these directors was Bob Wiemer, who had worked on a number of the episodes at Universal as well as other shows (Star Trek: The Next Generation, SeaQuest, etc.). Bassman, Wiemer and other principals, including cinematographer Glenn Kershaw, ASC, together with the crew at Century III formed the production and post team behind Connections. It was filmed on 35mm and intended to have all the production value of any prime time TV show. I was the online editor and part of the visual effects team on this show.

The goal of Connections was to present a slice-of-life scenario approximately 20 years into the future. Throughout the course of telling the story, key technology was matter-of-factly used. We are not quite at the 20-year mark, but it’s interesting to see where things have actually gone. In the early 90s, many of the showcased technologies were either in their infancy or non-existent. The Internet was young, the Apple Newton was a model PDA and all TV sets were 4×3 CRTs. Looking back at this video, there’s a lot that you’ll recognize as common reality today and a few things you won’t.

Some that are spot-on include seat-back airplane TVs, monitors with a 16×9 aspect ratio, role-playing/collaborative video games and the use of PDAs in the form of iPhones, iPads and smart phones. In some cases, the technology is close, but didn’t quite evolve the way it was imagined – at least not yet. For example, Connections displayed the use of foldable screens on PDAs. Not here yet. It also showed the use of simultaneous translation, complete with image morphing for lipsync and accurate speech-to-text on screen. Only a small part of that’s a reality. Video gamers interact in many role-playing games, even online, but they have yet to reach the level of virtual reality presented.

Nearly all depicted personal electronic devices demonstrate multimedia convergence. PDAs and cell phones merged into a close representation of today’s iPhone or Droid phone. Home and office computers and televisions are networked systems that tie into the same computing and entertainment options. In one scene, the father is able to access the computer from the widescreen TV set in his bedroom.

One big area that has never made it into practice is the way interaction with the computer was presented. The futurists at AT&T believed that the primary future interface with a computer would be via speech. They felt that the operating system would be represented to us by a customizable, personalized avatar. This was based on their extrapolation from actual artificial intelligence research. Think of Jeeves on steroids. Or maybe Microsoft Bob. Well, maybe not. So far, the technology hasn’t gotten there and people don’t seem to want to adopt that type of solution.

The following are some examples of showcased technologies from Connections.

In the opening scene, the daughter (an anthropologist) is on a return flight from a trip to the Himalayas. She is on an in-flight 3-way call with her fiancé (in France) and a local artisan, who is making a custom rug for their wedding. This scene depicts videophone communications, 16×9 seat-back in-flight monitors with phone, movie and TV capabilities. Note the simultaneous translation with text, speech and image (lipsync) adjustment for all parties in the call.

The father (a city planner) is looking at a potential urban renewal site. He is using a foldable PDA with built-in camera and videophone. The software renders a CAD version of the possible new building to be constructed. His wife calls and appears on screen. Clearly we are very close to this technology today, when you look at iPhone 4, the iPad and Apple’s new FaceTime videophone application.

The son is playing a virtual reality, interactive role-playing game with two friends. Each player is rendered as a character within the game and displayed that way on the other players’ displays. Virtual reality gloves permit the player to interact with virtual objects on the screen. The game is interrupted by a message from mom, which causes the players to morph back into their normal appearance, while the game is on hold.

The mother appears in his visor as a pre-recorded reminder, letting him know it’s time to do his homework. The son exits the game. One of the friends morphs back into her vampire persona as the game resumes.

Mom and dad pick up the daughter at the airport. They go into a public phone area, which is an open-air booth, employing noise-cancelling technology for quiet and privacy in the air terminal. She activates and places the international call (voice identification) to introduce her new fiancé to her parents. This again depicts simultaneous translation and speech-to-text technology.

The mother (a medical professional) is consulting with a client (a teen athlete with a prosthetic leg) and the orthopedic surgeon. They are discussing possible changes to the design of the limb in a 3-way videophone conference call. Her display is the computer screen, which depicts the live feed of the callers, a CAD rendering of the limb design, along with the computer avatars from the doctor’s and her own computer. The avatars provide useful research information, as well as initiate the call at her voice request.

Mother and daughter are picking a wedding dress. The dress shop has the daughter’s electronic body measurements on file and can use these to display an accurate 3-sided, animated visual of how she will look in the various dress designs. She can interactively make design alterations, which are then instantly modified on screen from one look to the next.

In order to actually produce this shot, the actress was simultaneously filmed with three cameras in a black limbo set. These were synced in post and one wardrobe was morphed into another as a visual effect. By filming the one actress with three cameras, her motions in all three angles maintained perfect sync.

The father visits an advanced, experimental school where all students receive standardized instruction from an off-campus subject specialist. The in-classroom teacher provides personalized assistance to any students with questions. Each student has their own display system. Think of online learning mashed up with the iPad and One Laptop Per Child and you’ll get the idea.

I assembled a short video of excerpts from some of these scenes. Click the link to watch it or watch the full video at the Paleo-Future blog.

AT&T ultimately used the Connections short film in numerous venues to demonstrate how they felt their technology would change lives. The film debuted at the Smithsonian National Air and Space Museum as an opener for a showing of 2001: A Space Odyssey, commemorating its 25th anniversary re-release.

©2010 Oliver Peters

Easy Canon 5D post – Round II

RED’s Scarlet appears to be just around the corner and both Sony and Panasonic seem to be responding to the challenge of the upstart photo manufacturers. No matter what acronym you use – DSMC, HD-DSLR, HDSLR – these hybrid HD video / still photo cameras have grabbed everyone’s attention. 2010 may indeed be the year that hybrid digital SLR cameras hit their stride.

The Canon EOS 5D Mark II showed the possibilities in late 2008 when Vincent Laforet released Reverie, but like all of these new camera products, the big question was how to best handle the post. The 5D (so far) only shoots video at a true 30fps – lacking both the filmic 24fps rate and the video-friendly frame rates (29.97, 25 or 23.976). That oversight was corrected in Canon’s EOS 7D and EOS 1D Mark IV models and may soon be addressed by a firmware update to the 5D. Even so, the 5D has remained a preferred option because of its low light capabilities and full frame sensor. Photographers, videographers and filmmakers love the shallow depth-of-field, so a 24p-capable 5D is certainly on many wish lists.

Until the 5D gets a 24fps upgrade [EDIT: coming in March, download will be here] , folks in post will have to contend with the 30fps footage generated by the camera. Last year I wrote an article on how to post a 5D project, which covers a lot of the basics. I’ve since done more 5D projects and formed a number of opinions and workflow tips. I’ve picked up many of these from reading Philip Bloom and Bruce Sharpe (PluralEyes inventor) and at the end of this post, I’ll include a number of useful links.

My first observation on the several 5D projects I’ve posted is that you get the best results from these new cameras when you treat them like film. Use classical production methods – slow pans, steady hand-held work, tripods, dollies and record audio as double-system sound. Secondly, allow time for processing files and syncing sound before you expect to start editing. 35mm film shoots typically require a day or more between the production day and post for lab processing and film transfer. The equivalent is true for HDSLRs. Whether it’s RED or an HDSLR, you have to become the film lab and transfer house. Once you wrap your head around that concept, the workflow steps make a lot more sense.

I recently cut another Canon 5D Mark II job with Director/DP Toby Phillips. This was an internet commercial for the wine growers of the Yarra Valley region of Australia. Yarra Valley is to Australia what Napa Valley is to California. Coincidentally, it’s also the region ravaged by the horrific fires of 2009. In order to keep the production light, Toby’s crew was bare bones and nearly all images were shot under available light – including sodium vapor lighting in warehouse areas. The creative concept was intended to be tongue-in-cheek. Real workers discussed why their job was the most important role in winemaking. The playful interplay between worker comments and winery/vineyard footage rounds out this :60 commercial.

Production tips

Toby rigged his camera with a modified plate, rails and matte box from his existing film equipment. This includes Arri and Manfrotto parts modified by Element Technica. The 5D records passable sound on its own, but it really isn’t ideal for the best quality. To get around this, a Zoom H4n handheld recorder was used for double-system sound. The Zoom has XLR inputs for external mics, in addition to its built-in XY-pattern stereo mics. A Sennheiser shotgun was plugged into the Zoom, which in turn recorded uncompressed 16-bit/48kHz WAV files. The headphone output of the Zoom was connected to the 5D, so that the camera files always contained reference audio.

There are a number of important tips to note here. First, there’s an impedance mismatch in this connection and the 5D uses an AGC circuit to attenuate audio, so the camera file audio will be clipped. To avoid this, turn down the headphone output level to a very low volume. Second, because the audio is clipped, if you forget to press record on the Zoom, the 5D’s audio is NOT acceptable. Following the traditional approach, a slate with clapstick was used for every sound take. The Zoom records numbered, sequential files, so the crew also wrote the audio file number on the slate for each take. These two steps make it easy to identify the correct audio take and to sync audio and video later in post.

Post workflow / pre-processing

This production configuration isn’t too different than shooting with other tapeless video cameras, but post requires a unique workflow. Key steps include video format conversion, speed adjustment and syncing the sound.

Video conversion – The Canon EOS 5D Mark II records 40Mbps H.264 QuickTime movies in a 1920x1080p/30fps format. H.264 is not conducive to smooth editing in its native form. 5D files can be up to 4GB in length (about 12 minutes), but there is no clip-spanning provision, as in P2 or XDCAM. Where and when you convert the native H.264 camera files depends on your NLE. With Avid Media Composer, files are converted into Avid’s MXF format upon import. The import will be slow, since it’s also transcoding, but this is a one-step process. Unfortunately it ties up your NLE, so maybe in the future Avid’s MetaFuze or AMA will come to the rescue.

I cut with Apple Final Cut Pro, which does permit direct editing with the H.264 files, but you don’t really want to do that. I typically convert 5D files into Apple ProRes, using a batch setting in Compressor. You can use other codecs, of course, like DVCPRO HD, ProRes HQ, ProRes LT, etc. Philip Bloom likes to convert his files to the EX format using MPEG Streamclip. The reason for EX, according to him, is that the data rate is similar to the 5D files, so storage requirements don’t expand significantly.

The wine commercial had 127 camera files (2 hours 11 minutes of raw footage), which were converted to ProRes in about 4 hours on an 8-core Mac Pro. Storage needs increased from 40GB (H.264) to 142GB (ProRes). The nice part of this step (at least for FCP users) is that the conversion can be left as a batch to churn unattended. One word of caution, though. Compressor has a tendency to choke and crash when you throw tons of files at it, like 100+ camera files. So I usually do these conversions in groups of 20 or so files at a time.
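Command-line encoders don’t share Compressor’s indigestion, but if you want to script the same “groups of 20” discipline, a sketch along these lines would do it (the volume paths are made up and ffmpeg’s ProRes encoder is a stand-in for the Compressor setting):

    # Hypothetical batch transcode: convert H.264 camera files to ProRes in
    # groups of 20 so no single batch gets unwieldy.
    import subprocess
    from pathlib import Path

    clips = sorted(Path("/Volumes/CARD_01").glob("MVI_*.mov"))
    out_dir = Path("/Volumes/MEDIA/ProRes")
    out_dir.mkdir(parents=True, exist_ok=True)

    for i in range(0, len(clips), 20):
        for clip in clips[i:i + 20]:
            subprocess.run(
                ["ffmpeg", "-i", str(clip),
                 "-c:v", "prores", "-c:a", "pcm_s16le",
                 str(out_dir / clip.name)],
                check=True,
            )
        print(f"finished group {i // 20 + 1} of {(len(clips) + 19) // 20}")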

Video speed adjustment - The 5D files are a true 30fps and not the fractional video rate of 29.97fps. Avid will convert these files to the correct rate on import, if audio and video tracks have been separated. According to Michael Phillips of Avid (one of their workflow gurus), “If the MOV file is video-only, then I use the ‘ignoreQtrate true’ console command and get a frame-for-frame import, resulting in a .1% slow down.” This is analogous to what happens when film is transferred to video. In my testing, it was important to first strip off the audio track of the MOV in order for this to work. You can do this using QuickTime Player Pro 7.

Final Cut permits native 30fps editing, but then your files won’t play through standard video gear, like a KONA card. I suppose for an internet spot this wouldn’t matter, however we had other uses, so a speed adjustment would have to happen at some point. I could either convert to 29.97 first and be done with it – or I could cut at 30fps and convert the finished spot. I normally opt to convert the ProRes files to 29.97fps first. To do this I use the Cinema Tools “conform” feature. That’s a nearly instantaneous process, which only alters the file’s metadata. It tells media players to run the file at the fractional frame rate of 29.97fps instead of 30fps.

Audio speed adjustment - Changing the frame rate from 30 to 29.97 means the picture has been slowed by .1% and so audio must also undergo the same pulldown. If you use a location sound recorder capable of a 48.048kHz sample rate, then Avid Media Composer will automatically adjust the rate upon import back down to 48kHz and achieve the pulldown. In addition, there are various utilities that can “restamp” the metadata for the sample rate. A good choice is Sound Devices’ free Wave Agent. The Zoom recorder created 48kHz files, but these could be restamped as 47.952kHz by such a software utility. In the case of Media Composer, the software sees this on import and slows the file by .1% to achieve the desired 48kHz sample rate. Thus the audio is back in sync.

Final Cut Pro works differently than Media Composer, so your results may vary. FCP simply tries to maintain the same duration and thus would force a render in the timeline to convert the sample rate to 48kHz without altering the speed. Instead, I recommend that you render new versions of the audio files with the speed change applied before importing them into FCP. When I initially tried the restamp approach, I got sync drift. After posting this entry, I tried it again with Wave Agent and the results were dead-on in sync. The only issue is that you then have to render the audio in FCP to get the correct sample rate. I’m not a big fan of how FCP renders audio files, so I prefer to correct them prior to import into FCP. I have also had inconsistent results with FCP and how it handles sync with external audio files.

Because of these various concerns, I used Telestream Episode Pro and created an audio-only preset that included a speed change with a .999 value. I used this preset to batch-convert twenty 16-bit/48kHz WAV files from the Zoom recorder (1 hour 9 minutes of raw dialogue) into “pulled down” AIF files. This took about two minutes. Whichever approach you take, I urge you to do this only with copies of files. Some of these utilities use destructive processes, so you don’t want to change your originals.
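If you’d rather script the pulldown than build an encoder preset, the same .999 idea is only a few lines of Python – a sketch assuming the third-party soundfile and scipy packages, with made-up file names. It renders new files, so as noted above, run it only on copies.

    # Render a 0.1% "pulled down" copy of a 48kHz Zoom WAV as an AIFF.
    # Resampling by 1001/1000 at the same 48kHz rate makes the audio 0.1%
    # longer (and slower), matching picture conformed from 30 to 29.97fps.
    import soundfile as sf
    from scipy.signal import resample_poly

    def pulldown(in_wav, out_aif):
        data, rate = sf.read(in_wav)                  # e.g. 16-bit/48kHz WAV
        slowed = resample_poly(data, 1001, 1000, axis=0)
        sf.write(out_aif, slowed, rate, format="AIFF")

    pulldown("ZOOM0001_copy.wav", "ZOOM0001_2997.aif")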

(Note: For a better understanding of how BWF (broadcast wave files), QuickTime and Final Cut Pro interact, check out this product (BWF2XML) and description by Spherico.)

Syncing the dailies – After these conversion steps, the files are ready to import into FCP. Audio and video files are now in optimized formats that will match FCP’s native media settings. Next, you’ll have to sync the audio and video takes. If the crew used a clapstick, it’s easy to sync in either Avid or Final Cut using the standard group or multiclip routines.

For this wine spot, I used Singular Software’s PluralEyes to automatically sync all sound takes. PluralEyes was one of the highlights of NAB 2009 and is about as close to magic as any software can get. It analyzes audio waveforms to compare and align the reference camera audio against the separate audio files. This is why it’s critical to record even poor-quality reference audio to the camera in order to give PluralEyes something to analyze. Unfortunately for the Avid editor, PluralEyes only works with Final Cut and Sony Vegas Pro. It’s not a plug-in, but works on a timeline labeled “pluraleyes” in an open and saved FCP project.

Here are the steps:

a) Create a blank FCP timeline named “pluraleyes”.

b) Drag & drop all camera clips with dialogue (audio & video) onto the timeline (random order is OK).

c) Drag & drop all separate audio files onto the same timeline onto unused audio tracks (random order is OK).

d) Disable any redundant audio track (speeds up analysis).

e) Save the project, launch PluralEyes, start analysis/sync processing.

After a few minutes of processing, PluralEyes will automatically create a series of new FCP sequences – one for each sync take. The audio will be aligned so that the double-system sound files are now perfectly in sync with the camera audio.
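For the curious, the core idea behind waveform syncing is simple enough to sketch. This is only an illustration of the concept with plain numpy cross-correlation – not how PluralEyes itself is implemented – and it assumes you’ve already loaded the two recordings as mono arrays at the same sample rate.

    # Toy waveform sync: slide the Zoom audio against the camera's reference
    # audio and report the best-matching offset. Fine for a short region
    # around the slate; np.correlate is O(n*m) and real tools are far more robust.
    import numpy as np

    def find_offset(camera_audio, zoom_audio, rate=48000):
        cam = (camera_audio - camera_audio.mean()) / (camera_audio.std() + 1e-9)
        ext = (zoom_audio - zoom_audio.mean()) / (zoom_audio.std() + 1e-9)
        corr = np.correlate(ext, cam, mode="full")    # slide one against the other
        lag = int(corr.argmax()) - (len(cam) - 1)     # sample offset of the best match
        return lag / rate                             # offset in seconds (sign shows which leads)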

Post workflow / edit / mix / grade

Now that you have sync takes, you can pretty much edit any way you like. I picked the following tip up from Bloom’s blog. To make editing easier on the wine spot, I took these new sequences and renamed them according to the person who was speaking and which take it was. I exported the sequences as QuickTime reference movies (not self-contained) to a location on my media drives. I then re-imported these reference movies, in effect turning them into master clips with merged 5D video and Zoom audio. These became my source for all sync takes. Any b-roll shots came from the regular ProRes files.

The rest of the edit went normally. I’ve got my MacPro set up with two internal 1TB drives configured as a software RAID-0 for media files (2TB). No issues with cutting ProRes this way. I bounced the audio to Soundtrack Pro for the final mix. No real reason, other than to take advantage of some of the plug-ins to add a touch of “sparkle” to the dialogue.

I used Apple Color for the grade. If you follow my blog, you know that I could have tackled this easily with various plug-ins and stayed inside FCP, however, I do like the Color interface and toolset. This spot was ideally suited to go through a grading pass using Color. As it turned out, this step might have been a bit premature due to client revisions. In hindsight, using plug-ins might have been preferable. I thought the cut was locked, so proceeded with the correction in Color.

The first version of the spot was a faster paced cut (57 shots in :60), so the client requested a second version with a little more breathing room and a few alternate dialogue takes. This necessitated going back into the footage. Those familiar with Color know that it generates new media files when it renders color correction. This is required to “bake in” the color corrections. If you assign handles of a few seconds to each shot, you have some room to trim shots when you are back in Final Cut. This doesn’t help you with other footage.

I decided to step back to the sequence before “sending to” Color and cut a second, more-relaxed version (46 shots in :60). Although this meant starting a new Color project, I was aided by Color’s ability to store grades. I could save the settings for each of the shots in version one and apply these settings to the similar or same shot in version two, within the new Color project. Adjust keyframes, tweak a few settings, render and bingo! – the grade is done. With :02 handles on each shot, version one (57 shots) rendered in about 40 minutes and version two (46 shots) took about 30 minutes. Both as 1920×1080 ProRes (29.97fps) media. Of course, like many commercials, this wasn’t the end and a few more changes were made! The final version ended up being a combination of these two cuts.

(As an aside, Stu Maschwitz has done a nice post about Color Correcting Canon 7D Footage on his ProLost blog.)

Post-processing / 24fps conversion

This could have been the end of the post for the wine spot, but there’s one more step. A big reason people like these HDSLRs is because they provide a very cost-effective way of getting that elusive “film look”. One part of that look is the 24fps frame rate. Yes – some film is shot at 30fps for spots and TV shows – so technically the 5D’s 30p footage is just fine. But clients really do want that 24fps look.

You can convert these 5D files quite cleanly to 24fps. This is a process I picked up from Bloom and discussed in my previous Canon post.  Here are the steps:

a) Note the exact duration of the 29.97fps timeline.

b) Export a self-contained QuickTime movie of the finished 29.97 sequence.

c) Bring that exported file into Compressor and set up a ProRes-to-ProRes conversion. Use a frame rate of 24fps (it actually is 23.98, but Compressor labels it as 24).

d) Turn Frame Controls on, set Rate Conversion to Best and change Duration from 100% of source to the exact duration of the original 29.97 timeline.

Now let Compressor crunch for a while. My :60 spot took about 36 minutes to convert from 29.97 to 23.98. For good measure, I also take the finished file into Cinema Tools and conform it to 23.98, just in case it’s 24 and not 23.98. Then I import the file back into FCP, create a new 23.98 timeline and edit the converted clip onto it. If everything is done correctly, this media should match without any rendering needed. Then I’ll copy and paste the audio from the 29.97 timeline to the 23.98 timeline. This should be in sync.
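The reason step “d” matters is easier to see with the numbers (using :60 for round figures – the actual export was a bit longer, as noted below). The piece has to come out exactly as long as the 29.97 original so the audio you paste across stays in sync; only the frame count changes.

    # Back-of-the-envelope check of the 29.97 -> 23.98 conversion.
    src_fps = 30000 / 1001     # 29.97
    dst_fps = 24000 / 1001     # 23.976 ("24" in Compressor's menu)
    duration = 60.0            # seconds, pinned to the original timeline length

    print(round(duration * src_fps))   # ~1798 frames in the 29.97 master
    print(round(duration * dst_fps))   # ~1439 frames Compressor has to build via rate conversion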

A couple of additional pointers. Since I don’t want to have this conversion process get confused with titles and dissolves, I remove all graphics and make dissolves into cuts (with handles) in the 29.97 sequence, prior to export. I actually exported the wine spot timeline as 1:04 instead of :60. When I was back in the 23.98 timeline, I fixed these trims, added back the fades, dissolves and graphics in order to complete the sequence.

The second issue is speed changes. I sped up two shots, which actually passed through Color and this 24p conversion just fine – except for one problem. My 29.97 timeline was actually an interlaced timeline. This doesn’t matter for the camera files, as they are inherently progressive. However, any timeline effects, like speed changes, titles and transitions are processed with interlaced motion. This affected the two sped-up shots in the 24p conversion, resulting in interlace artifacts. The simple fix was to replace these with the normal-speed media and redo the speed change in the 23.98 timeline. No big deal, but something to be mindful of in the future.

Finally, although this conversion is very good, it isn’t perfect. Cuts do stay as clean cuts and slow action converts cleanly looking as if it were shot at 24fps. Fast motion, however, does introduce some artifacts. These mainly show up as blended frames in areas of fast activity or fast camera movement. It’s no big deal really, as it tends to add to the filmic look of the material – a bit like motion blur.

Remember that this is an OPTIONAL and SUBJECTIVE step. I personally think that 30p is a “sweet spot” for LCD and plasma screens. This is especially true for the web and computer displays. In the end, my client decided they liked the 30p image better, because it was crisper.

Click the image to see the video in HD on Vimeo.

Or here for the “Alternate Cut” at 30fps (no 24p conversion).

Additional tools

Since the media files the HDSLR cameras generate are an outgrowth of file creation at the consumer product level, there is very little metadata in them that an NLE would care about. No reel numbers, SMPTE timecode, edge numbers, etc. That’s good and bad. Good – in that the folder and file structure is quite simple and very malleable. Bad – in that you can have duplicate file names and there’s no ability to span clips. Think of it like a roll of 35mm negative. That would have about 11 minutes of capacity and new metadata is added when it’s transferred to video.

Since files are sequentially numbered on the memory card, once you start recording to the next card, it’s likely to have repeating file names. This is true both in the camera and on a recorder like the Zoom, simply because there is no reel (i.e. card) ID name or number. The good news is that you can easily change this without corrupting metadata – as you would with RED or P2 files – but it means you have to manually impose some sort of structure yourself.

R-Name - One utility that can help is R-Name. Unfortunately it may be out of development, but I still use version 3, which works with Snow Leopard. You might be able to find a download still lurking in the depths of the internet – or, if not, a similar utility or an Automator routine. R-Name lets you rename files (as the name implies), but you can also append prefix or suffix character strings to a file name. For example, a set of media files from a 5D may be named MVI_1073.mov through MVI_1200.mov and you’d like to add a prefix for Card 1. Simply create an R-Name batch that adds a prefix such as “C001_” to all these files. Run the batch and voila – your files are now named C001_MVI_1073.mov through C001_MVI_1200.mov. Follow this process for each card and it becomes a nice, fast way of organizing your media.
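If R-Name ever vanishes completely, the same prefix trick is only a few lines of Python (the folder path and prefix here are just examples):

    # Prepend a card ID to every camera file so names stay unique across cards,
    # e.g. MVI_1073.mov becomes C001_MVI_1073.mov.
    from pathlib import Path

    def add_card_prefix(folder, prefix):
        for clip in sorted(Path(folder).glob("MVI_*.mov")):
            clip.rename(clip.with_name(prefix + clip.name))

    add_card_prefix("/Volumes/MEDIA/Card01", "C001_")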

QtChange – If reel numbers and timecode are important for you to have, then check out VideoToolshed’s QtChange. This is a comprehensive QuickTime utility, which lets you alter several file parameters. Most importantly, you can add or change reel number and timecode values. Although this isn’t essential for you to cut in FCP, certain functions, like dupe detection, won’t work without an assigned reel number. There are several ways to alter this info in QtChange, but one of the ways it can work is to automatically use the date stamp of the file for the reel number and the time stamp as a starting timecode number. Files can be changed in a batch, but be careful as these are destructive changes. Developer Bouke Vahl has been making ongoing changes to the product and recently added Avid Log Exchange functionality.

MetaCheater – One deficiency of Avid Media Composer has been the inability to directly read all of the metadata from a QuickTime file. For instance, older versions of Media Composer and Symphony would not read QuickTime timecode. This has been corrected in the most recent versions; these apps now import the timecode, but still no reel number. In addition, the Canon cameras don’t generate timecode or reel numbers so you must add them if you need such information. You could use QtChange to add reel IDs and timecode, which Media Composer would import, but then there’s still the reel ID problem. MetaCheater is a simple way around this. This program extracts QuickTime metadata and creates an Avid Log Exchange file (ALE) with proper reel numbers and timecode values. Import the ALE file into Media Composer and then batch import the corresponding QuickTime movies. In this process, Media Composer uses the timecodes and reel numbers from the ALE instead of default values, with the result that your Avid bins properly reflect the reel and timecode information added to the 5D files. It would be just as if this media had been captured from a videotape source.

Here are a few comparisons of the color grading applied to these shots – original image, graded image and split screen views for each.

Addendum (Feb 2010)

After I initially wrote this article in January, I pulled it down for some tweaks. In the interim, I got busy for a few weeks until I could repost it. In that time, I was able to do some more testing with Avid Media Composer 4.0.5 on another Canon 5D spot. I am adding my observations here, since many of my readers are Avid cutters and want to know the best way to handle these files in Media Composer.

Unlike FCP, there’s no simple drag-and-drop method in Avid. If you elect to convert the files using an external encoding application, you still have to bring the files in through Avid’s import routines. This adds a step and effectively doubles the total time it takes to convert and import as compared with FCP. Another frustrating issue is that when you move from the native camera files into Avid, you have to move out of the QuickTime color and gamma architecture and into an MXF structure using Avid codecs.

In the Avid world, video files are treated using the rec. 601/709 colorspace (16-235 on an 8-bit scale) and computer files are assumed to be in RGB space (0-255). When you import or export files to and from Media Composer, you always need to check the proper setting – RGB or 601/709. Unfortunately (or fortunately depending on your POV), this is largely hidden from view in the QuickTime world. Furthermore, Canon really hasn’t provided documentation that I’m aware of regarding the colorspace that these cameras work in and how closely color scaling conforms to either RGB or rec. 709. The long and short of it is that when you move in and out of QuickTime, you are often fighting level and gamma changes to varying degrees.

I tried a number of different import and encoding methods with Media Composer. All of them work, but with various trade-offs. The easiest method is as I outlined earlier in this article – simply import the H.264 camera files into Media Composer. When you do that, select RGB color space. The import will run at approximately 3:1 to 4:1 (relative to the footage length) on a fast machine, depending on the target codec you choose, because the media is being transcoded during this import stage. I had the fastest encoding times using the Sony XDCAM-EX codec, which is now natively supported by Media Composer.

A second option is to use Apple Compressor (or another QuickTime encoder) to convert the camera files into QuickTime movies using an Avid DNxHD codec. This is the same approach as converting to Apple ProRes 422. Unfortunately, Avid still imposes a longer import time to get these files from QuickTime MOVs into the MXF media format. Although Compressor offers a choice between RGB and 709 when you select DNxHD, it doesn’t seem to make any difference in the appearance of the files. The files are converted to 709 color space and so should be imported into Avid with the import setting on 709. I hope that this import step will be eliminated at some point in the future, when and if Avid decides to support QuickTime files through its AMA feature.

The fastest current method was to use Episode Pro again. MXF is now supported in this encoder, so I was able to convert the H.264 files into MXF-wrapped XDCAM-EX files that were ready for Avid. The beauty of this is that the work can be done on an external machine in a batch and the import back into Media Composer is very fast. No transcoding is needed, as this just becomes a file copy. The EX codec looked clean and wasn’t too taxing on my Mac Pro. You also have the option of using XDCAM-HD and XDCAM HD 422 (50Mbps) codecs in the MXF file format. The only issue was that one of the media files appeared to be corrupt after encoding and had to be re-encoded. This might be an anomaly, but we ARE dealing with two long-GOP codecs in this process! Another benefit of this route is that no user interaction is required to determine color space settings.

Now to the level issues. In all of this back and forth – once I exported back out to QuickTime (ProRes 422 codec, using RGB setting on export) – no conversion identically matched the original camera files. When I compared versions, direct import of the files (H.264 into Avid) yielded slightly darker results. External conversion to DNxHD and then importing, yielded a slight gamma shift. Conversion/import via the MXF route appeared a bit lighter than the original. None of these were major differences, though. If you are going to color grade the final product anyway, it doesn’t really matter. I finally settled on a 2-step conversion workflow (described in my February 21 post) that yielded good results going from the 5D files into Media Composer and then to FCP.

As far as editing, syncing and grading, that is the same as with any other acquisition media. I used the same preparatory steps as outlined earlier (Cinema Tools conform to 29.97 and a .999 speed adjustment of the audio) – then converted and imported the video files. Inside Media Composer (1080p/29.97 project), everything synced and edited just as I expected.

Also in early February, Canon announced its EOS Movie Plugin-E1 for Final Cut Pro. Click here for the description. It’s supposed to be released in March and if I understand their description correctly, it allows you to import camera clips via FCP’s Log and Transfer module. During the import stage, files are transcoded to ProRes. Unfortunately there is no explanation of how frame rates are handled, so I presume the files are imported and remain at their original frame rate.

My conclusion after all of this is that both FCP and Media Composer are just fine for working with HDSLR projects. FCP seems a bit faster at the front, but in the end, you’re just traveling two different roads to get to the same destination.

I leave you with one last tidbit to ponder. Apple has just introduced Aperture 3, which includes HD video clip support in slideshows. I wonder how apps like Aperture, Lightroom and Photoshop (which already supports some video functions) will impact these HDSLR workflows in the future?

(UPDATE: If you got here through links from other blogs, make sure you read the updated Round III post as well.)

Useful Links

5DMk2 blog – 1001 Noisy Cameras

Assisted Editing

Philip Bloom

Canon Explorers of Light

Canon Filmmakers

Cinema5D

DSLR HD

DVinfo

DVXuser

Element Technica

FreshDV

Tyler Ginter

Vincent Laforet

ProLost

Red Rock Micro

Bruce Sharpe

Spherico

Peter Wiggins

Planet5D

Video Toolshed

Zacuto

©2010 Oliver Peters

Canon EOS 5D Mark II in the real world

A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough for Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and push it to retool the concepts for its much anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.

Frame from Reverie

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1 MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H.264 codec) at a data rate of about 40Mbps. 16:9 is wider than 3:2, so the file for the moving image is cropped on the top and bottom compared with a comparable still photo.
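A quick check of the geometry, assuming the 1080 video is derived from the full sensor width as described:

    still_w, still_h = 5616, 3744          # full-resolution still
    print(still_w / still_h)               # 1.5  -> 3:2
    print(1920 / 1080)                     # ~1.78 -> 16:9

    rows_for_16x9 = still_w / (16 / 9)     # ~3159 of the 3744 sensor rows fit a 16:9 frame
    print(still_h - rows_for_16x9)         # ~585 rows trimmed (top and bottom combined)
                                           # before the downsample to 1920x1080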

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, then the same colorimetry, exposure and balance will be applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data would not be “baked in” as it is with the video. Stills from the camera use the full resolution of this large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.

Frame from Reverie

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera creates a relatively small footprint compared to the typical video and film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make post syncing picture and sound a lot easier.

Example of the rolling shutter effects used for interesting results

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree. This includes the RED One. This image artifact arises because the sensor is not globally exposed at the same point in time, like exposing a frame of 35mm film. Instead, portions of the sensor are sequentially exposed. This means that fast motion of an image or the camera translates into the image appearing to wobble or skew. In the worst case, the object in the frame takes on a certain rubbery quality, hence the name the “jello” effect. It can also show up with strobes and flashes. For example, I’ve seen it on strobe light and gun shot footage from a Sony EX3. In this case, the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning motion of the camera or subject can cause it, but it’s also quite visible in just the normal shakiness of handheld shots. If you notice many of the short films on the web, the camera is almost always stationary, tripod-mounted or moving very slowly. In addition, lens stabilization circuitry can also exacerbate the appearance of these artifacts. Yet, in other instances, it helps reduce the severity.

Note the skew on the passing subway cars

High-end CMOS cameras are engineered in ways that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.

Frame from My Room video

But, how do you post it?

Like my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.

Frame from My Room video

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally in focus at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – and shooting handheld without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like a motion picture lens. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.

Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H264 codec, so any Mac or PC QuickTime-compatible application can deal with the files. They are a true 30fps, so you can choose to work natively in 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit with Media Composer, simply import the camera movies into a 29.97 project, using the RGB import settings and the result will be standard Avid media files. The camera shoots in progressive scan, so footage converted to 29.97 looks like that shot with any video camera in a 30p mode.

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H264 at 30fps), I decided to first convert the files out of H264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slomo quality as if the footage was shot in an overcranked setting. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step 1), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering the metadata information of that file. This tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound by making it slower and lowering the pitch. Our video was cut to a music track so that was no big deal; however, we did have one sync dialogue line. I decided to fix just the one line by using Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.
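
To put numbers on steps two and three: a 30-to-25fps conform plays the same frames at 25/30ths of the original speed – which is where the roughly 83% figure comes from – and audio that simply follows the conform drops in pitch by a bit more than three semitones. Here’s a quick sketch of that arithmetic in Python (illustrative only; Cinema Tools and Soundtrack Pro do the real work):

```python
import math

def conform(native_fps, target_fps, duration_s):
    """Metadata-only conform: the same frames, played back at a new rate."""
    speed = target_fps / native_fps            # playback speed ratio
    new_duration = duration_s / speed          # the clip gets longer as it slows
    pitch_shift = 12 * math.log2(speed)        # semitones, if audio follows the conform
    return speed, new_duration, pitch_shift

speed, dur, pitch = conform(30.0, 25.0, 60.0)  # a 60-second clip shot at 30fps
print(f"speed: {speed:.1%}")                   # ~83.3% of the original speed
print(f"new duration: {dur:.1f} s")            # 72.0 s
print(f"pitch shift: {pitch:.2f} semitones")   # about -3.16, which TimeStretch avoids
```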

Step four – stills. I didn’t want to deal with the stills in their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 come in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and also converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.
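
I used Photoshop actions for these conversions, but the same batch downsampling could just as easily be scripted. Below is a minimal sketch using the Pillow imaging library – the folder names are hypothetical, the target sizes are the ones from this job, and its black-and-white step is a plain desaturation rather than Photoshop CS4’s more flexible black-and-white adjustment.

```python
from pathlib import Path
from PIL import Image, ImageOps

def batch_resize(src_dir, dst_dir, target_size, to_grayscale=False):
    """Downsample JPEG stills and optionally convert them to black-and-white."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for jpg in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(jpg)
        img = img.resize(target_size, Image.LANCZOS)   # high-quality downsample
        if to_grayscale:
            img = ImageOps.grayscale(img)              # simple desaturation
        img.save(dst / jpg.name, quality=95)

# 5D Mark II stills: 5616x3744 down to 3500x2333
batch_resize("5d_stills", "5d_resized", (3500, 2333))
# G10 stills: 4416x3312 down to 3000x2250, converted to black-and-white
batch_resize("g10_stills", "g10_resized", (3000, 2250), to_grayscale=True)
```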

blg_canon5d_6

Frame from My Room video

Editing

Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention that most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization over a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Set-up for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.

blg_canon5d_7

Frame from My Room video

My final FCP sequence was graded in Apple Color. Not really because I had to, but rather to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry level professional photographer, so it tends to have more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast. Not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems – especially given the nature of this shoot, which was closer to a documentary than a commercial shoot. I could have easily graded this with my standard “witches brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I certainly didn’t see any major compression artifacts to speak of and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker who was serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to utilize the camera in a motion picture environment; and b) they would need to shoot carefully and adhere to set-ups that steer away from some of the problems.

blg_canon5d_9

What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it to a 23.98 fps timeline. This should match perfectly to a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
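
The duration math is easy to sanity-check. Conforming 30fps material to 23.98fps stretches it by a factor of roughly 1.25, and the Compressor retime in step 6 simply brings it back to the original running time. A rough sketch of that arithmetic, assuming a true 30.0fps source:

```python
def to_timecode(seconds):
    """Format seconds as m:ss for a quick readability check."""
    return f"{int(seconds // 60)}:{round(seconds % 60):02d}"

original = 1 * 60 + 44                  # the 1:44 sequence cut at 30fps
conformed = original * (30 / 23.976)    # the conform slows playback, lengthening the piece
print(to_timecode(conformed))                   # ~2:10
print(to_timecode(conformed * (23.976 / 30)))   # retimed back to 1:44 at 23.98fps
```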

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony, too. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters

RED Post – the Easy Way

blg_redpost_1

A commercial case study

Ever since the RED Digital Cinema Camera Company started to ship its innovative RED One camera, producers have been challenged with the best way to post produce its footage. Detractors have slammed RED for a supposed lack of post workflows. This is simply wrong, since there are a number of solid ways to post RED footage. The trouble is that there isn’t a single best way and the path you choose is different depending on your computing platform, NLE of choice and destination. Many of the RED proponents over-think the workflow and insist on full 4K, native camera raw post. In my experience that’s unnecessary for 99% of all projects, especially those destined for the web or TV screens.

blg_redpost_2

Camera RAW

The RED One records images using a Bayer-pattern color filter array CMOS sensor, meaning that the data recorded is based on the intensity of red, green or blue light at each sensor pixel location. Standard video cameras record images that permanently incorporate (or “bake in”) the colorimetry of the camera as the look of the final image. The RED One stores colorimetry data recorded in the field for white balance, color temperature, ISO rating, etc. only as a metadata software file that can be nondestructively manipulated or even discarded completely in post. Most high-end DSLR still cameras use the same approach and can record either a camera raw image or a JPEG or TIFF that would have camera colorimetry “baked” into the picture. Shooting camera raw stills with a DSLR requires an application like Apple Aperture or Adobe Photoshop Lightroom or other similar image processing tools to generate final, color-corrected images from the stills you have shot.

Likewise, camera raw images from RED One require electronic processing to turn the Bayer pattern information into RGB video. Most of the typical image processing circuitry used in a standard HD video camera isn’t part of RED One, so these processes have to be applied in post. The amount of computation required means that this won’t happen in real-time and applying this processing requires rendering to “bake” the “look” into a final media file. Think of it as the electronic equivalent of a 35mm film negative. The film negative out of the camera rarely looks like your results after lab developing and film-to-tape transfer (telecine). RED One simply shifts similar steps into the digital realm. The beauty of RED One is that these steps can be done at the desktop level if you have the patience. Converting RED One’s camera raw images into useable video files involves the application of de-Bayering, adding colorimetry information, cropping, scaling, noise reduction and image sharpening.
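
To make the de-Bayer step a bit more concrete, here’s a toy illustration in Python. It performs the crudest possible “half-resolution” reconstruction of an RGGB mosaic – each 2×2 block of sensor sites becomes one RGB pixel, with the two greens averaged. That’s conceptually in the same family as a half-resolution de-Bayer, but nothing like the quality of RED’s actual processing.

```python
import numpy as np

def debayer_half(mosaic):
    """Toy half-resolution de-Bayer of an RGGB mosaic.

    mosaic: 2-D array of raw sensor values laid out as
        R G R G ...
        G B G B ...
    Returns an (H/2, W/2, 3) RGB image.
    """
    r = mosaic[0::2, 0::2]                                # red sites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0   # average the two green sites
    b = mosaic[1::2, 1::2]                                # blue sites
    return np.dstack([r, g, b])

# A fake 4x4 mosaic, just to show the shapes involved
raw = np.arange(16, dtype=float).reshape(4, 4)
print(debayer_half(raw).shape)   # (2, 2, 3) -- half the resolution in each dimension
```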

blg_redpost_3

Native workflow

I am not a big believer in native RED workflows, unless you post with an expensive system, like Avid DS, Assimilate Scratch or Quantel. If you post with Apple Final Cut Studio, Adobe Creative Suite or Avid Media Composer, then the native workflow is largely a pain in the rear. “Native” means that you are working with some sort of reference or transcoded file during the creative editorial process. Because you are still dragging along 4K’s worth of data, playback tends to be sluggish at the exact point where an editor really wants to rock-n-roll. When you move to the online editing (finishing) phase, you have to go through extra steps to access the original, camera raw media files (.R3D) and do any necessary final conversions. When cutting natively, not all of the color metadata used with the file is recognized, so you may or may not see the DP’s intended “look” during the offline (creative) editing phase. For example, the application of curves values isn’t passed in the QuickTime reference file.

In some cases, such as visual effects shoots, native post is totally impractical. As the editor (or later the colorist), you may determine one color setting for the video files; but, the visual effects artist creates a different result, because he or she is also working natively with a set of camera raw files. You can easily end up in a situation where the effects shots don’t match the standard shots. Not only don’t they match, but it will be difficult to make them match, unless you go back to the camera raw information. This wouldn’t be possible with final, rendered effects shots. For these and many other reasons, I’m not keen on the native workflow and will discuss an alternative approach.

blg_redpost_12

Commercial post

I just wrapped up two national spots for Honda Generators with area production company, Florida Film & Tape. Brad Fuller (director/director of photography) shot with a RED One and I worked the gig as editor, post supervisor and colorist. The RED One can be set for various frame rates, aspect ratios and frame sizes and until recently, most folks have been shooting at 4096×2048 – a 2:1 aspect ratio. Early camera software builds had issues with 16×9, but that appears to have been fixed, so our footage was recorded as 4096×2304 at 23.98fps. That’s a 16×9 slice of the sensor using 4096 pixels (4K) as the horizontal dimension.

As an aside, there is plenty of discussion on the web about pixel dimensions versus resolution. Our images looked fine at 2K and HD because of the benefits of the oversampled starting point and downsampling that to a smaller size. When I actually extract a 4K TIFF for analysis and look at points of color detail, like the texture on an actor’s face or blades of grass, I see a general, subtle “softness” that I attribute to the REDcode wavelet compression. It’s comparable to the results you get from many standard digital still photo cameras when viewed at 1:1 pixels (a 100% view). I don’t feel that full-size 4K stills look as nice as images from a high-end Nikon or Canon DSLR for print work; but, that’s not my destination for this footage. I’m working in the TV world and our final spots were to be finished as HD (both 1080i and 720p) and NTSC (480i letterboxed). In that environment, the footage holds up quite well when compared with a 35mm film, F900 or VariCam commercial shoot.

The spots were shot on a stage and on location over the course of a week. The camera’s digital imaging tech (DIT) set up camera files on location and client, agency and director/DP worked out their looks based on the 720p video tap from the RED One to an HD video monitor. As with most tapeless media shoots, the media cards from the camera were copied to a set of two G-Tech FireWire drives as part of the on-set data wrangling routine. At this point all media was native .R3D and QuickTime reference files generated in-camera. The big advantage of the QuickTime reference files – and a part of the native workflow that IS quite helpful – is the fact that all folks on the set can review the footage. This allowed the client, agency and director to cull out the selected clips for use in editing. Think of it exactly like a film shoot. These are now your “circle” or “print” takes. Since I’m the “lab” in this scenario, it becomes very helpful to boil down the total of 250 clips shot to only 50 or so clips that I need to “process”.

blg_redpost_5

Processing

This approach is similar to a film shoot with a best-light transfer of dailies, final correction in post and no retransferring of the film. The Honda production wrapped on a Friday and I did my processing on Saturday in time for a Monday edit. This is where the various free and low-cost RED tools come into play. RED Digital Cinema offers several post applications as free downloads. In addition, a number of users have developed their own apps – some free, some for purchase. My first step was to select all the RED clips in Clipfinder. This is a free app that you can use to a) select and review all RED media files in a given volume or folder, b) add comments to individual files and c) control the batch rendering of selected files.

The key application for me is RED Alert. The RED One generates color metadata in-camera and RED Alert can be used to review and alter this metadata. It can also be used to export single TIFF, DPX or rendered, self-contained QuickTime media files, as well as to generate new QuickTime reference files. The beauty is that updating color metadata or generating new reference files is a nearly instantaneous process. Since I am functioning in the role of a colorist at this point, it is important that I communicate what I am doing with the DP and/or director to make sure I don’t undo a look painstakingly created during the shoot.

With all due respect to DPs and DITs everywhere, I’m skeptical that the look everyone liked on an HD monitor during the shoot is really the best setting to get an optimal result in post. There have been a number of evolving issues with RED One over successive camera builds. People have often ended up with less-pleasing results than they thought they were getting, simply because what they thought they were seeing on set wasn’t what was being recorded.

blg_redpost_71

Three factors affect this: Color Space, Output LUT and ISO settings. Since color settings are simply metadata and don’t actually affect the raw recording, these are all just different ways to interpret the image. Unfortunately that’s a double-edged sword, because each of these settings has a lot of options that drastically change how the image appears. They also affect what you see on location and, if adjusted incorrectly, can cause the DP to under- or overexpose the image. My approach in post is generally to ignore the in-camera data and create my own grade in RED Alert. On this job, I set Color Space to REDspace and the Output LUT (look-up table) to Rec 709. The latter is the color space for HD video. From what I can tell, REDspace is RED’s modified and punchier version of Rec 709. These settings essentially tell RED Alert to interpret the camera raw image with REDspace values and convert those to Rec 709. Remember that my destination is TV anyway, so ultimately Rec 709 is really all I’m going to be interested in at the end.
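
If the LUT terminology seems abstract, an output LUT is really just a function that maps the camera’s values into the target display space. For reference, the standard Rec 709 transfer curve looks like the sketch below – a textbook formula, not RED’s actual REDspace-to-709 conversion.

```python
import numpy as np

def rec709_oetf(linear):
    """Standard Rec 709 transfer function.

    Maps normalized linear light (0.0-1.0) to Rec 709 code values (0.0-1.0).
    """
    linear = np.asarray(linear, dtype=float)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

# 18% gray (a typical exposure reference) lands at roughly 41% in Rec 709
print(rec709_oetf(0.18))   # ~0.409
```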

Some folks recommend the Log settings, but I disagree. Log color settings are great for film and are a way of truncating a wider dynamic range into less space by “squeezing” the portion of the light values pertaining to highlights and shadows. The fallacy of this for TV – especially if you are working with FCP or Media Composer – is that these tools don’t employ log-to-linear image conversion, so there’s really no mathematically-accurate way to expand the actual values of this compressed dynamic range. Instead, I prefer to stay in Rec 709 and work with what I see in front of me.

ISO is another much-discussed setting. The RED One is nominally rated as ISO 320 (default). I really think it’s more like 200, because RED One doesn’t have the best low-light sensitivity. When you compare it with available-light shots from the Canon EOS 5D Mark II (for example, stills from Reverie), the Canon will blow away the RED One. The RED One images are especially noisy in the blue channel. You can bump up the ISO setting as high as 2000, but if you do this in camera (and don’t correct it in post), it really isn’t as pleasant as “pushing” film or even using a high-gain setting on an HD video camera.

On the other hand, there are some very nice examples of corrected low-light shots over at RedUser; however, additional post production filtering techniques were used to achieve these cleaner images. Clean-up in post is certainly no substitute for better lighting during the shoot. In reasonably well-lit evening shots, an ISO of 400 or 500 in RED Alert is still OK, but you do start to see noise in the darker areas of the image.

blg_redpost_6

Pre-grading

The rub in all of this, when working with RED Alert, is that you have no output to a video display or scopes by which to accurately judge the image. You see it on your computer display, which is notoriously inaccurate. That’s an RGB display set to goodness-knows-what gamma value!  The only valid analysis tool is RED Alert’s histogram – so learn to use it. Since I am working this process as a “pre-grade” with the intent of final color grading later, my focus is to create a good starting point – not the final look of the shot. This means I will adjust the image within a safe range. In the case of these Honda spots, I increased the contrast and saturation with the intent that my later grading would actually involve a reduction of saturation for the desired appearance. Since my main tool is the histogram, I basically “stretched” the dynamic range of the displayed image to both ends of the histogram scale without clipping highlights or crushing shadows. I rendered new media and didn’t use the QuickTime reference files for post, which allowed me to apply a slight S-curve to my images. RED Alert lets you save grading presets, so even though you can only view one clip at a time, you can save and load presets to be applied to other clips, such as several takes of the same set-up.
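
Conceptually, this pre-grade boils down to two operations: stretch the levels so the histogram fills the available range without clipping, then add a gentle S-curve for contrast. The sketch below shows the general shape of that idea on normalized pixel values – it is not RED Alert’s math, just an illustration of what a “safe” pre-grade does.

```python
import numpy as np

def pre_grade(pixels, s_strength=0.15):
    """Stretch levels to fill the histogram, then apply a mild S-curve."""
    lo, hi = float(pixels.min()), float(pixels.max())
    stretched = (pixels - lo) / (hi - lo)                         # fill the histogram end to end
    s_curve = stretched * stretched * (3.0 - 2.0 * stretched)     # smoothstep-style contrast
    return (1.0 - s_strength) * stretched + s_strength * s_curve  # blend in a gentle S-curve

# Flat, low-contrast values get spread out and given a touch more contrast
flat = np.linspace(0.2, 0.7, 11)
print(np.round(pre_grade(flat), 3))
```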

Clipfinder and RED Alert work beautifully together. You can simply click on a clip in Clipfinder and it will open in RED Alert. Tweak the color settings and you’re done. It’s just that simple, fast and easy. The bad news is that these tools are Mac Intel only. Nothing for Power PCs. If you are running Windows, then you have to rely on RED Cine for these same tasks. RED Cine is a stripped down version of Scratch and has a lot of power, but I don’t find it as fast or straightforward as the various Mac tools.

blg_redpost_81

Rendering media files

My premise is not to work within the native flow, so I still have to render media files that I’m going to use for the edit. There is no easy way around this, because the good/fast/cheap triad is in effect. (You can only pick two.) If you are doing this at the desktop level, you can either buy the most fire-breathing computer you can afford or you can wait the amount of time it takes to render. Period!

The Mac RED tools require Intel Macs, but my client owns a G5-based FCP suite. To work around this, I processed the RED files at another FCP facility nearby that was equipped with a quad-core Mac Pro. I rendered the files to ProResHQ, which the faster G5s can still play, even though this codec is optimized for Intels. In addition, our visual effects artist was using After Effects on a PC. His druthers were for uncompressed DPX image sequences, but once Apple released its QuickTime decoder for ProRes on Windows, he was able to work with the ProResHQ files without issue on his PC.

My Saturday was spent adjusting color on the 50 circle takes and then I let the Mac Pro render overnight. You can render media files in RED Alert, Clipfinder or RED Rushes (another free RED application), but all three are actually using RED Line – a command-line-driven rendering engine. Clipfinder and RED Rushes simply provide a front-end GUI and batch capabilities so the user doesn’t have to mess with the Mac command line controls. At this point, you set cropping, scaling and de-Bayer values. Choices made here become a trade-off between time and quality. Since I had a bit of time, I went with better quality settings, such as the “half-high” de-Bayer value. This gives you very good results in a downsampled 2K or HD image, but takes a little longer to render.

OK, so how much longer? My 50 clips equaled about 21 minutes of footage. This was 24fps (23.98) footage and rendering averaged about 1.2 to 1.5fps – about 16:1. Ultimately several hours, but not unreasonable for an overnight render on a quad-core. Certainly if I were working with one of the newest, maxed out, octo-core Intel Xeon “Nehalem” Mac Pros, then the rendering would be done in less time!
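
For anyone budgeting their own overnight render, the arithmetic is simply footage length × frame rate ÷ render speed. A quick sketch using this job’s numbers:

```python
footage_minutes = 21
fps = 23.976
frames = footage_minutes * 60 * fps          # roughly 30,200 frames to render

for render_fps in (1.2, 1.5):
    hours = frames / render_fps / 3600
    ratio = fps / render_fps                 # render-time-to-real-time ratio
    print(f"{render_fps} fps -> about {hours:.1f} hours ({ratio:.0f}:1)")
```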

On Sunday morning I checked the files and met with the director/DP to review the preliminary color grade. He was happy, so I was happy and could dupe a set of the files to hand off to the visual effects artist.

blg_redpost_41

The edit

I moved back to the client’s G5 suite with the ProResHQ media. As a back-up plan, I brought along my Macbook Pro laptop – an Intel machine – just in case I had to access any additional native .R3D files during the edit. Turns out I did. Not for the edit, but for some extra plate shots that the effects artist needed, which hadn’t been identified as circle takes. Whip out the laptop and a quick additional render. Like most tapeless media shoots, clips are generally short. My laptop rendered at a rate of about .8fps – not really that shabby compared to the tower. Rendering a few additional clips only took several minutes and then we were ready to rock.

I cut these spots on Apple Final Cut Pro, but understand that there’s nothing about this workflow that would have been significantly different using another NLE, especially Avid Media Composer. In that case, I would have simply rendered DNxHD files or image sequence files, instead of ProResHQ. Since I had rendered 1920×1080 ProResHQ files, I was cutting “offline” with finished-quality media. No real issues there, even on the G5. Our spots were very simple edits, so the main need was to work out the pacing and the right shots to lock picture and hand off clips for visual effects. All client review and approval was done long distance using Xprove. Once the client approved a cut, I sent an EDL to the visual effects artist, who had a duplicate drive full of the same ProResHQ media.

blg_redpost_9

Finishing and final grade

The two spots each used a distinctly different creative approach. One spot was a white limbo studio shoot. The camera follows our lead actor walking in the foreground and activity comes to life in the background as he passes by. The inspiration for the look was a Tim McGraw music video in which McGraw wears a white shirt that is blown out and slightly glowing. Spot number two is all location and was intended to have a look reminiscent of the western Days of Heaven. In that film the colors are quite muted. In the white limbo spot, the effects not only involved manipulating the activity in the background, but creating mattes and the bloom effect for our foreground talent. Ultimately the decision was made to have a totally different look to the color and luminance of our foreground actor and the background elements and background actors. That sequence ended up with five layers to create each scene.

Spot number two wasn’t as complex, but required a rig-removal in nearly every scene. With these heavy VFX components, it seems obvious to me that working with native RED camera files would have been totally impractical. The advantage to native, camera raw files in grading is supposed to be that you have a greater correction range than with standard HD files. I had already done most of that, though, in my RED Alert “pre-grade”. There was very little advantage in returning to the native files at this point.

blg_redpost_7a1

Another wrinkle in our job was the G5. In Apple’s current workflow, you only have direct native access to .R3D files in Apple Color. Most G5s didn’t have graphics display cards up to the task of working with ProResHQ high-def files and Color. I ran a few tests to see if that was even an option and Color just chugged! Instead, I did my final grades in FCP using Magic Bullet Colorista, which was more than capable for this grading. Furthermore, the white limbo spot required different grading on different video tracks and interactive adjustment of grading, opacity and blend modes. The background scene was graded with a lower luminance level and colors were desaturated and shifted to an overall blue tone. Our lead foreground actor was graded very bright with much higher saturation and natural color tones. In the end, it would have been hard to accomplish what I needed to do in Color anyway. FCP was actually the better environment in this case, but After Effects would have been the next best alternative.

blg_redpost_111

Framing

One big advantage to RED is the ability to work with oversized images. I rendered my files at 1920×1080, but I did have to reframe one of our hero product shots. In that case, I simply re-rendered the file as 2K (2048×1152) and positioned it inside FCP’s 1920×1080 timeline. Again, this was a quick render on the laptop to generate the 2K ProResHQ clip.

DPs should consider this as something that works to their advantage. When RED footage was commonly only shot at a 2:1 aspect ratio, there was some “bleed-room” factored in for repositioning within a 16×9 project. Since shooting in 16×9 now means a 1:1 relationship of the camera file to the edited frame, DPs might actually be best off to shoot with a slightly looser composition. This would allow the 4096×2304 file to be rendered to 2K (2048×1152) and then the final position would be adjusted in the NLE. Final Cut Pro, Quantel, Premiere Pro, Autodesk Smoke and Avid DS can all handle 2K files. I understand that DPs might be reticent about leaving the final framing to someone else, but the fact of the matter is that this happens with every film-to-tape transfer of a 35mm negative. It’s easily controlled through proper communication, the use of registration/framing charts on set and ALWAYS keeping the DP in the loop.
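
The repositioning headroom is easy to quantify. Rendering the full 4096×2304 frame as 2K and dropping it into a 1080p timeline leaves a modest amount of total slide room in each direction; the sketch below just does the subtraction.

```python
def reposition_headroom(render_w, render_h, timeline_w, timeline_h):
    """Total pixels of slide room when an oversized render sits in a smaller timeline."""
    return render_w - timeline_w, render_h - timeline_h

# 4096x2304 camera file rendered at 2K, placed in a 1920x1080 sequence
x_room, y_room = reposition_headroom(2048, 1152, 1920, 1080)
print(f"{x_room} px horizontal, {y_room} px vertical")   # 128 px and 72 px of total headroom
```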

Needless to say, most commercials still run as 4×3 on many TV stations and networks, so DPs should frame to protect for 4×3 cropping. This way “center-cut” conversions of HD masters retain the important part of the composition. Many shots composed for 16×9 will work fine in 4×3, but certain shots, like product shots, probably won’t. To avoid problems on the distribution end, compose your shots for both formats when possible and double shoot a shot when it’s not practical. The alternative is to only run letterboxed versions in standard def, but not every client has control of this down the line.

blg_redpost_13

Click to see the finished spots.

Final thoughts

The RED One is an innovative camera that has many converts on the production side. It doesn’t have to become a monster in post if you treat it like digital “film” and design an efficient workflow that accommodates processing, editing, VFX and grading. I believe the honeymoon is waning for RED (in a good way). Now serious users are leaving much of the unabashed enthusiasm behind and are getting down to brass tacks. They are learning how to use the camera and the post tools in the most efficient and productive manner. There are many solutions, but pick the one that’s best for you and stick to it.

Click here for additional RED-related posts on DigitalFilms.

Follow these links to some of the available RED resources:

Clipfinder
Crimson
Cineform
Rubber Monkey Software
R3D Data Manager
Imagine Products
RED’s free tools
MetaCheater
Assimilate
Avid
Quantel
Autodesk
Adobe

©2009 Oliver Peters

Scare Zone

blg_sz_1

Anatomy of posting a digital, indie feature film


This blog is named Digital Films and today’s post is definitely in keeping with that title. I wrapped up last year cutting an indie feature, called Scare Zone – a comedy/horror – or horror/comedy – film that is the brainchild of writer/director Jon Binkowski. Jon and I have worked on projects for years. His forte is creative design for theme parks and although he has written and directed a number of short films for park attractions, Scare Zone was his first full-length, dramatic feature film. The story takes place in a seasonal, Halloween-style, haunted house attraction. Our cast is an ensemble of young folks who’ve taken part-time jobs at the attraction for its short run; but, it turns out that someone is actually killing people at the Scare Zone.


Like all good low-budget films, Scare Zone benefited from good timing. Namely, that Jon was able to mount the production at Universal Studios Florida right after their annual Halloween Horror Nights park events. Some of the attraction sets are constructed in the soundstages and Scare Zone was able to take advantage of these during the window between the end of Halloween Horror Nights and the time when the sets were scheduled to be destroyed for another year. One key partner in this endeavor was area producer, Ben Kupfer, who produced and co-edited Scare Zone. As a low-budget film targeted for DVD distribution, Jon and Ben opted to shoot the film digitally, relying on a pair of Sony XDCAM EX cameras – an EX1 and an EX3 – for the look of this film.


blg_sz_2

Straight to the cut


Scare Zone started production in November and I’ll have to say that I have never worked on a film this fast before. I signed on as editor and colorist and started my first cut a week or so after taping commenced. Since this was a digital feature, we opted to cut this film natively (using the original compressed HD format from the camera) and not follow the more standard offline/online approach. Each day’s worth of shooting from the A and B cameras – housed on SxS cards from the EX cameras – was backed up to two Western Digital drives at the end of the day. Imagine Products’ ShotPut software was used, because this offered copy and verification to multiple drives and the ability to add some improved organization, such as adding tape name prefixes to files. Once files were backed up, the drives were sent over to the cutting room and media loaded to our storage array.


Although I’ve cut a lot of long form projects with Apple Final Cut Pro, this was actually the first dramatic feature (not including documentaries) that I’ve cut start-to-finish on FCP. Since I was cutting natively, we used the standard XDCAM import routine, which imports and rewraps the 1920×1080, 23.98fps, 35Mbps VBR MPEG2 files from the EX cameras into QuickTime media files. We also recorded double-system broadcast WAVE files for back-up audio, but only accessed these for a few lines – notably, when the footage was shot on GlideCam and the camera was untethered from any mics. In our native workflow, I was always cutting with final-quality footage and the quad-core Mac Pro had no trouble keeping up with the footage.


Both cameras accrued about 26 hours of combined raw footage. I finished my first cut in the equivalent of 15 working days. Basically, I was done in time for the wrap party! I have to point out that I’ve never cut a film this quickly and although FCP, tapeless media and/or native editing might have been a factor, I certainly have to extend kudos to an organized shoot, good directing and a talented cast. Jon did a lot of cutting in his head as he directed. As an editor, I prefer that a director not do too much of this, but I generally found that I had as much coverage as needed on Scare Zone. Our ensemble cast was on the money, which limited the number of takes. I rarely had more than four takes and most were good. This meant solid performances and good continuity, which lets a film like this almost cut itself. Since the storyline has to progress in a linear fashion over a 3-day period, there wasn’t a huge need to veer from the chronology of the original script.


blg_sz_4

Locking the picture


After turning in a solid first cut, Jon and Ben took a pass at it. Ben is also an experienced editor, so this gave them a chance to review my cut and modify it as needed. My mantra is to cut tighter, so a lot of their tweaks came in opening up some of the cuts. This was less of a stylistic difference and more because Jon’s pace was musically driven. Jon already had a score in his head, which required some more breath in certain scenes. In addition, a few ad hoc changes to the script had been made on the set. I was cutting in parallel to the production, so these changes needed to be incorporated, since they weren’t in my cut. In the end, after a few weeks of tweaking and some informal, “friends and family” focus screenings, the picture was locked – largely reflecting the structure of the first cut.


A locked picture meant we could move on to music, visual effects, sound design/mix and color grading. I tackled the latter. Working in a native form meant we could go straight to finishing – no “uprez” step required. I’ve had my ups and downs with Apple Color, but it was an ideal choice for Scare Zone. I split my timeline into 6 reels that were sent to Color for grading. Splitting up the timeline into reels of fewer than 200 edits is a general recommendation for long form projects sent to Color. Once in Color, I set up my project to render in ProResHQ. Color renders new media and these rendered clips with “baked in” color grading become the linked video files when you roundtrip back to FCP. Thus your final, graded FCP timeline will be linked to the Color renders and not the original camera files. This effectively gives you an additional level of redundancy, because you have duplicated the clips in the actual cut, in addition to the files imported from the camera.


blg_sz_3

Grading


The Sony EX cameras can be preset to a number of different Cine-Gamma style settings. At the front end, Ben and DP Mike Gluckman decided on a setting that was generally brighter and flatter than the desired, final look. This is a preset intended as an optimal starting point when post-production grading is to be used. It gives you the advantage of using the lower cost EX cameras in much the same way as you would use Sony’s expensive F900 or F23 CineAlta cameras. Unfortunately, there was no budget for film lens adapters or prime lenses, so standard video zooms were used. Nevertheless, the look was very filmic, given the rather tight quarters of our haunted attraction sets.


During grading, I generally brought levels down, making most of the film darker and less saturated. Typically, I would set a slight S-curve value in Color’s Primary In “room” as my first basic setting. This would increase the contrast of the picture and counteract the flatness of the Cine-Gamma setting used in camera. Effectively you are working to increase dynamic range in a way similar to film negative. This is a key issue, because the MPEG2 compression used by Sony in its XDCAM and EX cameras is not kind when you have to increase gain and gamma. By doing so, you tend to raise the noise floor and at times start to see the compression artifacts. If you start brighter and lower the “pedestal”, “lift” or “black” settings as you grade, you will end up with a better look and won’t run the risks caused by raising gamma. This is in keeping with the “expose to the right” philosophy that most digital shooters try to adhere to these days.
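
The reason lowering levels is kinder than raising them comes down to how a basic grade is applied. A common lift/gamma/gain formulation looks like the sketch below – a generic model, not necessarily Color’s exact internal math. Pulling lift and gain down compresses values you already have, while pushing gamma and gain up stretches the noisy, heavily compressed shadow values apart and makes the artifacts more visible.

```python
import numpy as np

def lift_gamma_gain(pixels, lift=0.0, gamma=1.0, gain=1.0):
    """Generic lift/gamma/gain grade on normalized (0.0-1.0) pixel values."""
    graded = np.clip(pixels * gain + lift, 0.0, 1.0)   # gain scales, lift offsets the blacks
    return np.power(graded, 1.0 / gamma)               # gamma bends the midtones

shadows = np.array([0.02, 0.05, 0.10])
# Darkening grade (what worked here): shadow values are pushed closer together
print(np.round(lift_gamma_gain(shadows, lift=-0.02, gamma=0.9, gain=0.95), 3))
# Brightening grade: the same noisy shadow values get spread apart and amplified
print(np.round(lift_gamma_gain(shadows, lift=0.02, gamma=1.3, gain=1.1), 3))
```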


The proof of the look for us was at the first official screening held in one of Universal’s digital HD theaters. Scare Zone was encoded to Blu-Ray and run on a 2K Christie projector for our cast, crew, families and investors. The grading done to the Panasonic reference monitor held up quite well on a large theater screen. In the end, Scare Zone went from the first day of shooting to this “premiere” in about 3 ½ months. This is easily 1/3 the time that most films take for the same processes. Like all films, the next phase is sales and distribution – often the hardest. For now though, everyone involved was very happy with the positive response enjoyed at these screenings. Mixing up horror and comedy can be quite dicey, but judging by the audience reactions, it seems to have been pulled off! In any case, Scare Zone is another example to show that digital production and desktop tools have come of age, when it comes to entertaining the traditional film audience.


Click here for more on Scare Zone.


© 2009 Oliver Peters