ARRI ALEXA post, part 5

A commercial case study

Upon my return from NAB, I dove straight into post on a set of regional commercials for Hy-Vee, a Midwest grocer. I’ve worked with this client, agency and director for a number of years, and all previous projects had been photographed on 35mm, transferred to Digital Betacam and finished in a common standard-definition post workflow. The new spots featured celebrity chef Curtis Stone and, instead of film, Director/DP Toby Phillips opted to shoot them with the ARRI ALEXA. This gave us the opportunity to cut and finish in HD. Although we mastered in 1080p/23.98, delivery formats included 720p versions for the web and cinema, along with reformatted 14×9 SD spots for broadcast.

The beauty of the ALEXA is that you can take the Apple ProRes QuickTime camera files straight into the edit without any transcoding delays. I was cutting these at TinMen, a local production company, on a fast 12-core Mac Pro connected to a Fibre Channel SAN, so there was no slowdown working with the ProRes 4444 files. Phillips shot with two ALEXAs and a Canon 5D, plus double-system sound. The only conversion involved was getting the 5D files into ProRes, using my standard workflow. The double-system sound was mainly a backup, since the audio was also tethered to the ALEXA, which records two tracks of high-quality sound.

On location, the data wrangler used the Pomfort Silverstack ARRI Set application to offload, back up and organize files from the SxS cards to hard drive. Silverstack lets you review and organize the footage and write a new XML file based on this organization. Since the week-long production covered several different spots, the hope was to organize files according to commercial and scene. In general, this concept worked, but I ran into problems with how Final Cut Pro reconnects media files. Copying the backed-up camera files to the SAN changes the file path, so FCP wouldn’t automatically relink the imported XML master clips to the corresponding media. Normally, once you reconnect the first file, the rest along a similar path will also relink. Unfortunately, using the Silverstack XML meant I had to restart the reconnect routine every few clips, since this new XML bridged information across various cards. Instead of using the Silverstack-generated XML, I decided to use the camera-generated XML files, which meant only going through the reconnect dialogue once per card.
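
If you’re comfortable with a little scripting, another workaround (not something I did on this job) is to rewrite the file paths inside the XML before importing it, so the master clips already point at the SAN. Here’s a minimal sketch, assuming the Silverstack export is standard FCP xmeml with pathurl elements; the volume names are made up.

```python
import xml.etree.ElementTree as ET

# Sketch: rewrite <pathurl> entries in an FCP (xmeml) XML so the master clips
# point at the copied SAN location instead of the original offload drive.
# Assumes standard xmeml; the volume names below are hypothetical.
OLD_ROOT = "file://localhost/Volumes/OffloadDrive/"   # path written on location
NEW_ROOT = "file://localhost/Volumes/SAN/Spots/"      # path after copying to the SAN

tree = ET.parse("silverstack_export.xml")
for pathurl in tree.iter("pathurl"):
    if pathurl.text and pathurl.text.startswith(OLD_ROOT):
        pathurl.text = pathurl.text.replace(OLD_ROOT, NEW_ROOT, 1)

# ElementTree drops the xmeml DOCTYPE line; re-add it if FCP complains on import.
tree.write("silverstack_export_relinked.xml", encoding="UTF-8", xml_declaration=True)
```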

It’s worth noting that the QuickTime files written by the ARRI ALEXA somehow differ from what FCP expects to see. When you import these files into FCP, you frequently run into two error prompts: the “media isn’t optimized” message and the “file attributes don’t match” message. Both of these are bogus and the QuickTime files work perfectly well in FCP, so when you encounter such messages, simply click “continue” and proceed.


Dealing with Log-C in the rough cut

As I’ve discussed in numerous posts, one of the mixed blessings of the camera is the Log-C profile. It’s ARRI’s unique way of squeezing a huge dynamic range into the ALEXA’s recorded signal, but it means editors need to understand how to deal with it. Since these spots wouldn’t go through the standard offline-online workflow, it was up to me as the editor to create the “dailies”. I’ve mentioned various approaches to LUTs (color look-up tables), but on this project I used the standard FCP color correction filter to change the image from its flat Log-C appearance to a more pleasing Rec 709 look. On this 12-core Mac Pro, ProRes 4444 clips (with an unrendered color correction filter applied) played smoothly and with full video quality on a ProRes HQ timeline. Since the client was aware of how much better the image would look after grading – and because in the past they had participated in film transfer and color correction sessions – seeing the flat Log-C image didn’t pose a problem.
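
For anyone curious about what that correction is doing mathematically, here’s a rough sketch of a Log-C to Rec 709 conversion in code. The decode constants are ARRI’s published LogC (EI 800) values as I understand them – treat them as an assumption and check ARRI’s white paper – and, to be clear, the FCP filter I used was just an eyeballed approximation of this curve, not a true LUT.

```python
import numpy as np

# ARRI LogC (v3, EI 800) decode parameters -- taken from ARRI's published
# documentation; treat them as an assumption and verify before production use.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc_to_linear(t):
    """Decode LogC code values (0-1) to scene-linear."""
    t = np.asarray(t, dtype=np.float64)
    return np.where(t > E * CUT + F, (10.0 ** ((t - D) / C) - B) / A, (t - F) / E)

def linear_to_rec709(x):
    """Apply the Rec 709 transfer curve to (clipped) scene-linear values."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x < 0.018, 4.5 * x, 1.099 * x ** 0.45 - 0.099)

# Example: a flat-looking LogC mid-grey (~0.39) decodes to roughly 18% linear,
# which the Rec 709 curve then lifts back to a normal-contrast display value.
print(linear_to_rec709(logc_to_linear(0.391)))
```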

From my standpoint, it was simply a matter of creating a basic setting and then quickly pasting that filter onto clips as I edited them into the timeline. One advantage of using the color correction filter instead of a proper LUT is that it allowed me to subjectively tweak a shot for the client without adding another filter. If a shot looked a little dark (compared with a “standard” setting), I would quickly brighten it as I went along. As with most commercial sessions, I would usually have several versions roughed in before the client really started to review anything. In reality, their exposure to the uncorrected images was less frequent than you might think. As such, the “apply the filter as you go” method works well in the spot editorial world.

Moving to finishing

New Hat colorist Bob Festa handled the final grading of these spots on a FilmLight Baselight system. There are a couple of ways to send media to a Baselight, but the decision was made to send DPX files corresponding to the cut sequence. Since I was sending a string of over ten commercials to be graded, I was concerned about the volume of raw footage to ship. There is a bug in the ALEXA/FCP process involving FCP’s Media Manager. When you media manage and trim the camera clips, many are not correctly written, resulting in partial clips with a “-v” suffix. If you media manage, but take the entire length of a clip, then FCP’s Media Manager seems to work correctly. To avoid sending too much footage, I only sent an assembled sequence with the entire series of spots strung out end-to-end. I extended all shots to add built-in handles and removed any of my filters, leaving the uncorrected shots with pad.

Final Cut Pro doesn’t export DPX files, but Premiere Pro does. So… a) I exported an XML from FCP, b) imported that into Premiere Pro, and c) exported the Premiere Pro timeline as DPX media. I also generated an EDL to serve as a “notch list”, which lined up with all the cuts and divided the long image sequence into a series of shots with edit points – ready to be color corrected.
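
As a side note, if you don’t have Premiere Pro handy, an image sequence can also be rendered from the assembled QuickTime with a command-line encoder. A hedged sketch, assuming ffmpeg is installed and that 16-bit RGB DPX is acceptable to the grading house (filenames are hypothetical):

```python
import os
import subprocess

# Sketch: render a DPX image sequence from the assembled ProRes string-out
# using ffmpeg. The filenames and the 16-bit RGB choice are illustrative --
# confirm the bit depth and colorimetry your colorist expects.
os.makedirs("dpx", exist_ok=True)
subprocess.run([
    "ffmpeg",
    "-i", "stringout_master.mov",   # the end-to-end sequence with handles
    "-pix_fmt", "rgb48le",          # 16-bit RGB DPX frames
    "-start_number", "0",
    "dpx/stringout_%07d.dpx",
], check=True)
```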

After a supervised color correction session at New Hat, the graded shots were rendered as a single uncompressed QuickTime movie. I imported that file and realigned the shots with my cuts (removing the handles), so I now had a set of spots with the final graded clips in place of the Log-C camera footage.

Of course, spot work always involves a few final revisions, and this project was no exception. After a round of agency and client reviews, we edited for a couple of days to revise a few spots and eliminate alternate versions before sending the spots to the audio mixing session. Most of these changes were simple trims that could be handled within the handle length I had on the graded footage. However, a few alternate takes were selected and, in some cases, I had to extend a shot beyond my handles. This meant that about a dozen shots (out of more than ten commercials) had to be regraded – a second round at New Hat. We skipped the DPX pass and instead sent an EDL and the raw footage as QuickTime ProRes 4444 camera files for only the revised clips. Festa was able to match his previous grades, render new QuickTimes of the revised shots and ship a hard drive back to us.

Click to view “brand introduction” commercial

Reformatting

Our finished masters were ProRes HQ 1920×1080 23.98fps files, but think of these only as intermediates. The actual spots that air are 4×3 NTSC. Phillips had framed his shots to protect for 4×3, but in order to preserve some of the wider visual aspect ratio, we decided to finish with a 14×9 framing. This means the 4×3 frame has a slight letterbox with small black bars at the top and bottom. Compared with the usual 4×3 center-cut, less of the left and right edges of the 16×9 HD frame is cropped off. I don’t like how FCP handles the addition of pulldown (to turn 23.98 into 29.97fps) and I’m not happy with its scaling quality when downconverting HD to SD. My go-to solution is to use After Effects as the conversion utility for the best results.

From Final Cut, I exported a self-contained, textless QuickTime movie (HD 23.98). This was placed into an After Effects 720 x 486 D1 composition and scaled to match a 14×9 framing within that comp. I rendered an uncompressed QuickTime file out of After Effects (29.97fps, field-rendered with added 3:2 pulldown). The last step was to bring this 720 x 486 file back into FCP, place it on an NTSC 525i timeline, add and reposition all graphics, and finish the masters.
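
For those who like to see the numbers, here’s a quick sketch of the geometry behind the 14×9 reformat. It’s arithmetic only, not an After Effects script, and it ignores the small correction for D1’s non-square pixels, so the exact values you dial into the comp will differ slightly.

```python
# Sketch: the arithmetic behind the 14x9 reformat (not an After Effects script).
# Source is a 1920x1080 16x9 master; target is a 720x486 NTSC D1 frame.
SRC_W, SRC_H = 1920, 1080
D1_W, D1_H = 720, 486

# A 14x9 window keeps 14/16 of the 16x9 width, so 1/16 is cropped from each side.
crop_per_side = SRC_W * (1 - 14 / 16) / 2
print(f"crop per side of the HD frame: {crop_per_side:.0f} px")        # 120 px

# Inside the 4x3 (i.e. 12x9) frame, a 14x9 image fills 12/14 of the height,
# leaving thin letterbox bars at the top and bottom.
active_lines = D1_H * 12 / 14
bar_lines = (D1_H - active_lines) / 2
print(f"active picture: {active_lines:.0f} lines, bars: {bar_lines:.0f} lines each")
```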

Most of these steps are not unusual if you do a lot of high-end spot work. In the past, 35mm spots would be rough cut from one-light “dailies”. Transfer facilities would then retransfer selects in supervised color correction sessions and an online shop would conform this new film transfer to the rough cut. Although many of the traditional offline-online approaches are changing, they aren’t going away completely. The tricks learned over the past 40 years of this workflow still have merit in the digital world and can provide for rich post solutions.

Sample images – click to see enlarged view

Log-C profile from camera

Nick Shaw Log-C to Rec 709 LUT (interface)

Nick Shaw Log-C to Rec 709 LUT (result)

Final image after Baselight grading

© 2011 Oliver Peters

ARRI ALEXA post, part 4

Local producers have started real productions with the ARRI ALEXA, so my work has moved from the theoretical to the practical. As an editor, working with footage from ALEXA is fun. The ProRes files are easily brought into FCP, Premiere Pro or Media Composer via import or AMA with little extra effort. The Rec 709 color profile looks great, but if the DP opts for the Log-C profile, grading is a snap. Log-C, as I wrote before, results in an image akin to a film scan of uncorrected 35mm negative. It’s easy to grade and you end up with gorgeous natural colors. There’s plenty of range to swing the image in different ways for many different looks.

Working with the Log-C profile in front of a client takes a bit of strategy, depending on the NLE you are using. Under the best of circumstances, you’d probably want to process the images first and work with offline-resolution editing clips (like Avid DNxHD36 or Apple ProRes Proxy) with a color correction LUT “baked” into the image. Much like one-light “dailies” for film-originated projects.

Many projects don’t allow for this amount of advance time, though, so editors often must deal with it as part of the edit session. This primarily applies to commercial and corporate work. Workflows for feature film and TV projects should follow a more traditional course, with prep time built into the front end, but that’s another blog post.

Strategies

There are LUT (color look-up table) filters for FCP, but unfortunately many of them struggle with real-time performance. The best performance comes from using native filters, even though they might not technically apply the correct curve. That’s OK, because most of the time you simply want a good-looking image for the client to see while you are doing the creative cut. Apple Final Cut Pro and Adobe Premiere Pro both require that you apply a filter to each clip on the timeline. This has an impact on your workflow, because you have to add filters as you go.

One good approach, which balances FCP performance with an accurate LUT, is the Log-C to Rec 709 plug-in developed by Nick Shaw at Antler Post. It not only corrects the profile, but adds other features, like “burn-in” displays. If you leave your FCP timeline’s RT setting in dynamic/dynamic, the unrendered clips with this filter applied will drop frames. Changing the setting to Full frame rate and/or High/Full will yield real-time playback at full video quality on a current Mac.

ARRI offers a web-based LUT Generator, which is accessible for free if you register at the ARRIDIGITAL site. You can create LUTs in various formats, but the one that has worked best for me is the Apple Color .mga version. This can be properly imported and applied in Apple Color. There it may be used simply for viewing, or optionally baked into the rendered files as part of the color correction.

You can also use Red Giant’s free Magic Bullet LUT Buddy. This filter can be used to create and/or read LUTs. Apply it to a clip in Final Cut Pro, Premiere Pro or After Effects, read in the .mga file and render. Lastly, the Adobe apps also include a Cineon conversion filter. Apply this in Premiere Pro or After Effects and tweak as needed. On a fast machine, Premiere Pro CS 5.5 plays clips with the Cineon converter applied in real time, without rendering.

Avid Media Composer and Adobe After Effects currently have the best routines, because you can add color correction to an upper layer and everything underneath is adjusted.

After Effects actually treats this as an “adjustment layer”, like in Photoshop, while Media Composer simply lets you add filters to a blank track – effectively doing the same thing as an adjustment layer. You still won’t see the source clip as a corrected image, but once it is placed on the timeline, the correction is applied and the image appears richer.

In the case of Avid Media Composer, this can also include filters other than its own color correction mode filters – for example, GenArts Sapphire or Magic Bullet Looks. Media Composer is able to play these files at full quality, even though they are unrendered, giving it a performance edge over FCP.

Cutting spots in Log-C

I recently cut a set of national spots for Florida Film & Tape (a local production company) on a late-model Apple dual-processor PowerMac G5, running FCP 6.0.6. It was equipped with a fast SCSI RAID and an AJA Kona card. That’s a perfectly good set-up for most SD and HD post. In fact, I’ve previously edited spots photographed on 35mm film and the RED One camera for the same client and same production company on this system. G5s were manufactured and sold before ProRes was ever released, but in spite of that, I was able to work with the 1920×1080 23.98fps ProRes 4444 files that were shot. I placed my selected clips on an uncompressed timeline and started cutting. The client had already seen a Rec 709 preview out of the camera, so he understood that the image would look fine after grading. Therefore, there was no need to cut with a corrected image. That was good, because adding any sort of color correction filter to a large amount of footage would have really impacted performance on this computer.

In order to make the edit as straightforward and efficient as possible, I first assembled a timeline of all the “circle takes” so the director (Brad Fuller) and the client could zero in on the best performances. Then I assembled these into spots and applied a basic color correction filter to establish an image closer to the final. At this point, I rendered the spots and started to fine-tune the edit, re-rendering the adjustments as I went along. This may sound more cumbersome than it was, since I was editing at online quality the entire time (uncompressed HD). Given the short turnaround time, this was actually the fastest way to work. The shoot and post (edit, grade, mix) were completed in three consecutive days!

Once the picture was locked, I proceeded to the last steps – color grading the spots and formatting versions for various air masters. I decided to grade these spots using the Magic Bullet Colorista (version 1) plug-in. There was no need to use Apple Color and Colorista works fine on the G5. I removed the basic filter I had applied to the clips for the edit and went to work with Colorista. It does a good job with the Log-C images, including adding several layers for custom color-correction masks. As flat as the starting images are, it’s amazing how far you can stretch contrast and increase saturation without objectionable noise or banding.

I’ll have more to write about ALEXA post in the coming weeks as I work on more of these projects. This camera has garnered buzz, thanks to a very filmic image and its ease in post. It’s an easy camera to deal with if your editing strategy is planned out.

©2011 Oliver Peters

Video sweetening

Color grading for mood, style and story

Video “sweetening” is both a science and an art. To my way of thinking, color correction is objective – evening out shot-to-shot consistency and adjusting for improper levels or color balance. Color grading is subjective – giving a movie, show or commercial a “look”. Grading ranges from the simple enhancement of what the director of photography gave you all the way to completely “relighting” a scene to radically alter the original image. Whenever you grade a project, the look you establish should always be in keeping with the story and the mood the director is trying to achieve. Color provides the subliminal cues that lead the audience deeper into the story.

Under the best of circumstances, the colorist is working as an extension of the director of photography and both are on the same page as the director. Frequently the DP will sit in on the grading session; however, there are many cases – especially in low budget projects – where the DP is no longer involved at that stage. In those circumstances, it is up to the colorist to properly guide the director to the final visual style.

I’ve pulled some examples from two digital films that I graded – The Touch (directed by Jimmy Huckaby) and Scare Zone (directed by Jon Binkowski). The former was shot with a Sony F900 and graded with Final Cut Pro’s internal and third-party tools. The latter used two Sony EX cameras and was graded in Apple Color.

The Touch

This is a faith-oriented film, based on a true story about personal redemption tied to the creation of a local church’s women’s center. The story opens as our lead character is arrested and goes through police station booking. Since this was a small indie film, a real police station was used. This meant the actual, ugly fluorescent lighting – no fancy, stylized police stations, like on CSI. Since this scene isn’t supposed to be pretty, the best way to grade it was to go with the flow: don’t fight the fluorescent look, but go grittier and more desaturated.


Once she’s released and picked up by her loser boyfriend, we are back outside in sunny Florida weather. Just stick with a nice exterior look.

Nearly at the bottom of her life, she’s in a hotel room on the verge of suicide. This was originally a very warm shot, thanks to the incandescents in the room. But I felt it should go cooler. It’s night – there’s a TV on casting bluish light on her – and in general, this is supposed to be a depressing scene. So we swung the shot cooler and again, more desaturated from the original.

The fledgling women’s center holds group counseling sessions in a living room environment. This should feel comfortable and inviting. Here we went warmer.

Our lead character is haunted by the evils of her past, including childhood molestation and a teen rape. This is shown in various flashback sequences marked by an obvious change in editorial treatment utilizing frenetic cutting and speed ramps – together with a different visual look. The flashbacks were graded differently using Magic Bullet Looks for a more stylized appearance, including highlight glows.

Our lead comes to her personal conversion through the church and again, the sanctuary should look warm, natural and inviting. Since the lens used on the F900 resulted in a very deep depth of field, we decided to enhance these wider shots using a tilt-and-shift lens effect in Magic Bullet Looks. The intent was to defocus the background slightly and draw the audience in towards our main character.

Scare Zone

As you’ve probably gathered, Scare Zone is a completely different sort of tale than The Touch. Scare Zone is a comedy-horror film based on a Halloween haunted house attraction, which I discussed in this earlier post. In this story, our ensemble cast are part-time employees who work as “scaractors” in the evening. But… they are being killed off by a real killer. Most of the action takes place in the attraction sets and gift shop, with a few excursions off property. As such, the lighting style was a mixed bag, showing the attraction with “work lights” only and with full “attraction lighting”. We also have scenes without lights, except for what is supposed to be moonlight or street lamp light coming through leaks from the exterior windows. And, of course, there’s the theatrical make-up.

This example shows one of the attraction scenes with work lights, as the slightly off-kilter manager explains their individual roles.


Here are several frames showing one of the actors in scenes with show lighting, work lights and at home.

These are several frames from the film’s attraction/action/montage segments showing scaractor activity under show lighting. In the last frame, one of our actresses gets attacked.

The gift shop has a more normal lighting appearance. Not as warm as the work light condition, but warmer than the attraction lighting. In order to soften the look of the Goth make-up on the close-ups of our lead actress, I used a very slight application of the FCP compound blur filter.

Naturally, as in any thriller, the audience is to be left guessing throughout most of the film about the identity of the real killer. In this scene one of the actresses is being followed by the possible killer. Or is he? It’s a dark part of the hallway in a “show lighting” scene. One of the little extras done here was to use two secondaries with vignettes to brighten each eye socket of the mask, so as to better see the whites of the character’s eyes.

A crowd of guests line up on the outside, waiting to get into the attraction. It’s supposed to look like a shopping mall parking lot at night with minimal exterior lighting.

And lastly, these frames are from some of the attack scenes during what is supposed to be pre-show or after-show lighting conditions. In the first frame, one of our actresses is being chased by the killer through the attraction hallways and appears to have been caught. Although the vignette was natural, I enhanced this shot to keep it from being so dark that you couldn’t make out the action. The last two frames show some unfortunate vandals who tried to trash the place overnight. This is supposed to be a “lights-off” scene, with the only light coming from outside through leaks – and their flashlights, of course. The last frame required the use of secondary correction to make the color of the stage blood appear more natural.

©2011 Oliver Peters

Audio mixing strategy, part 1

Modern nonlinear editors have good tools for mixing audio within the application, but often it makes more sense to send the mix to a DAW (digital audio workstation) application, like Pro Tools, Logic or Soundtrack Pro. Whether you stay within the NLE or mix elsewhere, you generally want to end up with a mixed track, as well as a set of “split track stems”. I’ll confine the discussion to stereo tracks, but understand that if you are working on a 5.1 surround project, the track complexity increases accordingly.

The concept of “stems” means that you will do a submix for components of your composite mix. Typically you would produce stems for dialogue, sound effects and music. This means a “pre-mixed” stereo AIFF or WAVE file for each of these components. When you place these three stereo pairs onto a timeline, the six tracks at a zero level setting should correctly sum to equal a finished stereo composite mix. By muting any of these pairs, you can derive other versions, such as an M&E (music+effects minus dialogue) or a D&E (dialogue+effects minus music) mix. Maintaining a “split-track, superless” master (without text/graphics and with audio stems) will give you maximum flexibility for future revisions, without starting from scratch.
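
The arithmetic behind stems is as simple as it sounds – at unity gain the submixes just add. A tiny conceptual sketch, using stand-in arrays instead of real stem files, shows how the mix, M&E and D&E versions relate:

```python
import numpy as np

# Conceptual sketch: stems placed on a timeline at zero (unity) level simply sum.
# The random arrays below are stand-ins for real dialogue/effects/music stems.
n = 48000 * 30                                   # 30 seconds at 48 kHz, stereo
dialogue, effects, music = (np.random.randn(n, 2) * 0.1 for _ in range(3))

full_mix = dialogue + effects + music            # composite stereo mix
m_and_e  = effects + music                       # mute dialogue -> M&E version
d_and_e  = dialogue + effects                    # mute music    -> D&E version

# The three stems at unity gain reconstruct the composite exactly.
assert np.allclose(dialogue + effects + music, full_mix)
```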

A recent project that I edited for the Yarra Valley winemakers was cut in Avid Media Composer 5, but mixed in Apple Soundtrack Pro. I could have mixed this in Media Composer, but I felt that a DAW would give me better control. Since I don’t have Pro Tools, Soundtrack Pro became the logical tool to use.

I’ve had no luck directly importing Avid AAF or OMF files into Soundtrack Pro, so I would recommend two options:

a) Export an AAF and then use Automatic Duck Pro Import FCP to bring those tracks into Final Cut Pro. Then “send to” Soundtrack Pro for the mix.

b) Export individual tracks as AIFF audio files. Import those directly into Soundtrack Pro or into FCP and then “send to” Soundtrack Pro.

For this spot, I used option B. First, I checker-boarded my dialogue and sound effects tracks in Media Composer and extended each clip ten frames to add handles. This way I had some extra media for better audio edits and cross fades as needed in Soundtrack Pro. Next, I exported individual tracks as AIFF files. These were then imported into Final Cut Pro, where I re-assembled my audio-only timeline. In FCP, I trimmed out the excess (blank portion) of each track to create individual clips again on these checker-boarded tracks. Finally, I sent this to Soundtrack Pro to create a new STP multi-track project.

Soundtrack Pro applies effects and filters to a track rather than to individual clips. Each track is analogous to a physical track on a multi-track audio recorder and a connected audio mixer; therefore, any processing must be applied to the entire track, rather than only a portion within that track. My spot was made up entirely of on-camera dialogue from winemakers in various locations and circumstances. For example, some of these were recorded on moving vehicles and needed some clean-up to be heard distinctly. So, the next step was to create individual tracks for each speaking person.

In STP, I would add more tracks and move the specific clips up or down in the track layout, so that every time the same person spoke, that clip would appear on the same track. In doing so, I would re-establish the audio edits made in Media Composer, as well as clean up excess audio from my handles. DAWs offer the benefit of various cross fade slopes, so you can tailor the sound of your audio edits by the type of cross fade slope you pick for the incoming and outgoing media.

The process of moving dialogue clips around to individual tracks is often referred to as “splitting out the dialogue”. It’s the first step that a feature film dialogue editor does when preparing the dialogue tracks for the mix. Now you can concentrate on each individual speaking part and adjust the track volume and add any processing that you feel is appropriate for that speaker. Typically I will use EQ and some noise reduction filters. I’ve become quite fond of the Focusrite Scarlett Suite and used these filters quite a bit on the Yarra Valley spot.

Soundtrack Pro’s mixer and track sheet panes are divided into tracks, busses, submixes and a master. I added three stereo submixes (for dialogue, sound effects/ambiances and music) and a master. Each individual track was assigned to one of these submixes. The output of the submixes passed through the master for the final mix output. Since I adjusted each individual track to sound good on its own, the submix tracks were used to balance the levels of these three components against each other. I also added a compressor on the submix for general sound quality, as well as a hard limiter on the master (set to -10dB) to regulate spikes.

By assigning individual dialogue, effects and music tracks to these three submixes, stems are created by default. Once the mix is done to your satisfaction, export a composite mix. Then mute two of the three submixes and export one of the stems. Repeat the process for the other two. Any effects that you’ve added to the master should be disabled whenever you export the stems, so that any overall limiting or processing is not applied to the stems. Once you’ve done this, you will have four stereo AIFF files – mix plus dialogue, sound effects and music stems.

I ended the Yarra Valley spot with a nine-way tag of winemakers and the logo. Seven of these winemakers each deliver a line, but it’s intended as a cacophony of sound rather than being distinguishable. I decided to build that in a separate project, so I could simply import it as a stereo element into the master project. All of the previous dialogue lines are centered as mono within a stereo mix, but I wanted to add some separation to all the voices in the tag.

To achieve this I took the seven voices and panned them to different positions within the stereo field. One voice is full left, one is full right, one is centered. The others are partially panned left or right at increments to fill up the stereo spectrum. I exported this tag as a stereo element, placed it at the right timecode location in my main mix and completed the export steps. Once done, the AIFF tracks for mix and stems were imported into Media Composer and aligned with the picture to complete the roundtrip.
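
For the curious, here’s a small sketch of how seven voices might be spread across a stereo field with a constant-power pan law. The positions and the pan law are illustrative only – Soundtrack Pro’s pan pots handle this internally and I simply set them by ear.

```python
import numpy as np

# Sketch: spreading seven mono voices across the stereo field with a
# constant-power pan law. Positions are illustrative, not the exact pan pot
# values used in the Soundtrack Pro session.
positions = np.linspace(-1.0, 1.0, 7)            # -1 = full left, +1 = full right

def pan_gains(pos):
    theta = (pos + 1.0) * np.pi / 4.0            # map [-1, 1] -> [0, pi/2]
    return np.cos(theta), np.sin(theta)          # (left gain, right gain)

for i, p in enumerate(positions, start=1):
    l, r = pan_gains(p)
    print(f"voice {i}: pan {p:+.2f}  L {l:.2f}  R {r:.2f}")
```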

Audio is a significant part of the editing experience. It’s something every editor should devote more time to, so they may learn the tools they already own. Doing so will give you a much better final product.

©2011 Oliver Peters

When it absolutely has to be there

A lot of the productions we post these days are delivered electronically – either on the web or as DVDs (or Blu-rays). Bouncing a finished product to an FTP site is a pretty good method for getting short projects around the world, but often masters or longer DVDs still require shipping. For many of us, FedEx is a mainstay; however, if it has to get halfway around the world by the next day, then even FedEx falls short. This reminds me of a bumper sticker slogan for an imaginary Tardis Express: “When it absolutely, positively has to get there yesterday!” So, with apologies to Dr. Who, how do you make this happen?

I recently had to get an eight-minute presentation to a client in Australia. This was to be presented from DVD. Due to last-minute changes, there was no time for physical shipping – even if we could have gotten it there overnight (quite unlikely). I could, of course, post an MPEG2 and AC3 file or a disc image file, but the client at the other end would not have been savvy enough to take these into a DVD authoring program (like DVD Studio Pro or even Toast) and actually burn a final disc. The second wrinkle was that my master was edited in the NTSC world of 29.97fps. Although many Australians own multi-standard DVD players, there was no guarantee that this would be the case in our situation. After a bit of trial-and-error with the director, we settled on this approach and I pass it along. Take it as more of a helpful anecdote than a professional workflow, but in a pinch it can really save you.

Apple iDVD will take a QuickTime movie and automatically generate the necessary encoded DVD files. That’s not much of a surprise, but, of course, FTP’ing a ProRes master wouldn’t have been feasible, as the file size would still have been too large. It turns out, though, that iDVD will also do this from other QuickTime formats, including high-quality H.264 files. Our Australian client’s daughter understood how to use iDVD, so the director decided it would be a simple matter to talk her through downloading the file and burning the presentation disc.

The first step for me was to generate a 25fps master file from my 29.97 end product. Compressor can do this and I’ve discussed the process before in my posts about dealing with HDSLRs. First, I converted the 29.97 file to a 25fps ProRes file. Then I took the 25fps ProRes high-def video and converted it in Compressor to a 16×9 SD PAL file, using a high-quality H.264 setting (around 8Mbps). Bounce it up to my MobileMe account, “share” the file and let the daughter generate the DVD in Australia using iDVD on her MacBook. Voila! Halfway around the world and no shipping truck in sight!

A related situation happened to me in 2004 – the year several hurricanes crisscrossed central Florida. The first of these was headed our way out of the Gulf while I was in the middle of editing a large corporate job. Initially it looked like Tampa would take a direct hit and the storm would then pass to the north of Orlando. It was Friday and everyone was battening down for the weekend, so I called my announcer to see what his plans were for getting voice-over tracks to me. “No problem. I am putting up friends from Tampa and once they get settled in, I’ll record the tracks and send them your way.” That seemed fine, since I didn’t need these until Monday.

Unfortunately the storm track changed – blowing in south of the Tampa area and straight through central Florida. The main local damage was power outages, due to many fallen trees throughout the city. Power returned relatively quickly at my house, but much of the area ultimately was without power for several weeks. However, the weekend progressed and I still hadn’t heard back from my announcer. By Sunday I finally got through to him on the cell phone.

“Were you able to record the tracks?” I asked.  “Oh yes,” he replied. “They are up on my FTP site.” What followed is a classic. “We lost power and, in fact, it’s still out. I waited until the neighbor’s generator was off for the evening and was able to record the tracks to my laptop using battery power. Then I drove around and found a Panera Bread location.” Panera Bread is a national restaurant/coffee shop chain that offers free wi-fi connectivity in most of its locations. He continued, “The restaurant was closed, but they must have had power as the wi-fi was still running. So, I sat in the parking lot and uploaded the files to my FTP site.”

So thanks to modern technology and the world of consumer connectivity, both of these clients were able to receive their products on schedule. That’s in spite of logistical difficulties that would have made this sort of thing impossible only a few short years ago. Time machine – or phone booth – anyone?

©2010 Oliver Peters

Connections – looking back at the future

Maybe it’s because of The Jetsons or the 1964 World’s Fair or Disney’s Tomorrowland, but it’s always fun to look back at our past views of the future. What did we get right? What is laughable today?

I had occasion to work on a more serious future-vision project back in the 90s for AT&T. Connections: AT&T’s Vision of the Future was a 1993 corporate image video that was produced as a short film by Century III, then the resident post house at Universal Studios Florida. I was reminded of this a few years ago when someone sent me a link to the Paleo-Future blog. It’s a fun site that talks about all sorts of futuristic concepts, like “where are our flying cars?” Connections became a topic of a series of posts, including links to all sections of the original video.

The genesis of the video was the need to showcase technology AT&T had in the lab, in an entertaining way. It was meant to demonstrate the type of daily impact this technology might have in everyday life a few short years down the road. The project was spearheaded by AT&T executive Henry Bassman, who brought the production to Century III. We were ideally suited for this effort, thanks to our post and effects pipeline in sci-fi/fantasy television production (The Adventures of Superboy, Super Force, Swamp Thing, etc.) and our experience in high-value corporate image projects. Being on the lot gave us access to Universal’s soundstages, and working on these series put us together with leading dramatic directors.

One of these directors was Bob Wiemer, who had worked on a number of the episodes at Universal as well as other shows (Star Trek: The Next Generation, SeaQuest, etc.). Bassman, Wiemer and other principals, including cinematographer Glenn Kershaw, ASC, together with the crew at Century III formed the production and post team behind Connections. It was filmed on 35mm and intended to have all the production value of any prime time TV show. I was the online editor and part of the visual effects team on this show.

The goal of Connections was to present a slice-of-life scenario approximately 20 years into the future. Throughout the course of telling the story, key technology was matter-of-factly used. We are not quite at the 20-year mark, but it’s interesting to see where things have actually gone. In the early 90s, many of the showcased technologies were either in their infancy or non-existent. The Internet was young, the Apple Newton was a model PDA and all TV sets were 4×3 CRTs. Looking back at this video, there’s a lot that you’ll recognize as common reality today and a few things you won’t.

Some that are spot-on include seat-back airplane TVs, monitors with a 16×9 aspect ratio, role-playing/collaborative video games, and the use of PDAs in the form of iPhones, iPads and smart phones. In some cases, the technology is close, but didn’t quite evolve the way it was imagined – at least not yet. For example, Connections displayed the use of foldable screens on PDAs. Not here yet. It also showed the use of simultaneous translation, complete with image morphing for lipsync and accurate speech-to-text on screen. Only a small part of that is a reality. Video gamers interact in many role-playing games, even online, but they have yet to reach the level of virtual reality presented.

Nearly all depicted personal electronic devices demonstrate multimedia convergence. PDAs and cell phones merged into a close representation of today’s iPhone or Droid phone. Home and office computers and televisions are networked systems that tie into the same computing and entertainment options. In one scene, the father is able to access the computer from the widescreen TV set in his bedroom.

One big area that has never made it into practice is the way interaction with the computer was presented. The futurists at AT&T believed that the primary future interface with a computer would be via speech. They felt that the operating system would be represented to us by a customizable, personalized avatar. This was based on their extrapolation from actual artificial intelligence research. Think of Jeeves on steroids. Or maybe Microsoft’s Bob.  Well, maybe not. So far, the technology hasn’t made it that far and people don’t seem to want to adopt that type of a solution.

The following are some examples of showcased technologies from Connections. Click on any frame for an enlarged view.

In the opening scene, the daughter (an anthropologist) is on a return flight from a trip to the Himalayas. She is on an in-flight 3-way call with her fiancé (in France) and a local artisan, who is making a custom rug for their wedding. This scene depicts videophone communications, 16×9 seat-back in-flight monitors with phone, movie and TV capabilities. Note the simultaneous translation with text, speech and image (lipsync) adjustment for all parties in the call.

The father (a city planner) is looking at a potential urban renewal site. He is using a foldable PDA with built-in camera and videophone. The software renders a CAD version of the possible new building to be constructed. His wife calls and appears on screen. Clearly we are very close to this technology today, when you look at iPhone 4, the iPad and Apple’s new FaceTime videophone application.

The son is playing a virtual reality, interactive role-playing game with two friends. Each player is rendered as a character within the game and displayed that way on the other players’ displays. Virtual reality gloves permit the player to interact with virtual objects on the screen. The game is interrupted by a message from mom, which causes the players to morph back into their normal appearance, while the game is on hold.

The mother appears in his visor as a pre-recorded reminder, letting him know it’s time to do his homework. The son exits the game. One of the friends morphs back into her vampire persona as the game resumes.

Mom and dad pick up the daughter at the airport. They go into a public phone area, which is an open-air booth, employing noise-cancelling technology for quiet and privacy in the air terminal. She activates and places the international call (voice identification) to introduce her new fiancé to her parents. This again depicts simultaneous translation and speech-to-text technology.

The mother (a medical professional) is consulting with a client (a teen athlete with a prosthetic leg) and the orthopedic surgeon. They are discussing possible changes to the design of the limb in a 3-way videophone conference call. Her display is the computer screen, which depicts the live feed of the callers, a CAD rendering of the limb design, along with the computer avatars from the doctor’s and her own computer. The avatars provide useful research information, as well as initiate the call at her voice request.

Mother and daughter are picking a wedding dress. The dress shop has the daughter’s electronic body measurements on file and can use these to display an accurate 3-sided, animated visual of how she will look in the various dress designs. She can interactively make design alterations, which are then instantly modified on screen from one look to the next.

In order to actually produce this shot, the actress was simultaneously filmed with three cameras in a black limbo set. These were synced in post and one wardrobe was morphed into another as a visual effect. By filming the one actress with three cameras, her motions in all three angles maintained perfect sync.

The father visits an advanced, experimental school where all students receive standardized instruction from an off-campus subject specialist. The in-classroom teacher provides personalized assistance to any students with questions. Each student has their own display system. Think of online learning mashed up with the iPad and One Laptop Per Child and you’ll get the idea.

I assembled a short video of excerpts from some of these scenes. Click the link to watch it or watch the full video at the Paleo-Future blog.

AT&T ultimately used the Connections short film in numerous venues to demonstrate how they felt their technology would change lives. The film debuted at the Smithsonian National Air and Space Museum as an opener for a showing of 2001: A Space Odyssey, commemorating its 25th anniversary re-release.

©2010 Oliver Peters

Easy Canon 5D post – Round II

RED’s Scarlet appears to be just around the corner and both Sony and Panasonic seem to be responding to the challenge of the upstart photo manufacturers. No matter what acronym you use – DSMC, HD-DSLR, HDSLR – these hybrid HD video / still photo cameras have grabbed everyone’s attention. 2010 may indeed be the year that hybrid digital SLR cameras hit their stride.

The Canon EOS 5D Mark II showed the possibilities in late 2008 when Vincent Laforet released Reverie, but like all of these new camera products, the big question was how to best handle the post. The 5D (so far) only shoots video at a true 30fps – lacking both the filmic 24fps rate and any of the video-friendly frame rates (29.97, 25 or 23.976). That oversight was addressed in Canon’s EOS 7D and EOS 1D Mark IV models and may soon be corrected by a firmware update to the 5D. Even so, the 5D has remained a preferred option, because of its low-light capabilities and full-frame sensor. Photographers, videographers and filmmakers love the shallow depth of field, so a 24p-capable 5D is certainly on many wish lists.


Until the 5D gets a 24fps upgrade [EDIT: coming in March, download will be here], folks in post will have to contend with the 30fps footage generated by the camera. Last year I wrote an article on how to post a 5D project, which covers a lot of the basics. I’ve since done more 5D projects and formed a number of opinions and workflow tips. I’ve picked up many of these from reading Philip Bloom and Bruce Sharpe (PluralEyes inventor), and at the end of this post I’ll include a number of useful links.

My first observation on the several 5D projects I’ve posted is that you get the best results from these new cameras when you treat them like film. Use classical production methods – slow pans, steady hand-held work, tripods, dollies and record audio as double-system sound. Secondly, allow time for processing files and syncing sound before you expect to start editing. 35mm film shoots typically require a day or more between the production day and post for lab processing and film transfer. The equivalent is true for HDSLRs. Whether it’s RED or an HDSLR, you have to become the film lab and transfer house. Once you wrap your head around that concept, the workflow steps make a lot more sense.


I recently cut another Canon 5D Mark II job with Director/DP Toby Phillips. This was an internet commercial for the wine growers of the Yarra Valley region of Australia. Yarra Valley is to Australia what Napa Valley is to California. Coincidentally, it’s also the region ravaged by the horrific fires of 2009. In order to keep the production light, Toby’s crew was bare bones and nearly all images were shot under available light – including sodium vapor lighting in warehouse areas. The creative concept was intended to be tongue-in-cheek: real workers discussed why their job was the most important role in winemaking. The playful interplay between worker comments and winery/vineyard footage rounds out this :60 commercial.

Production tips

Toby rigged his camera with a modified plate, rails and matte box from his existing film equipment, including Arri and Manfrotto parts modified by Element Technica. The 5D records passable sound on its own, but it really isn’t ideal when you want the best quality. To get around this, a Zoom H4n handheld recorder was used for double-system sound. The Zoom has XLR inputs for external mics, in addition to its built-in XY-pattern stereo mics. A Sennheiser shotgun was plugged into the Zoom, which in turn recorded uncompressed 16-bit/48kHz WAV files. The headphone output of the Zoom was connected to the 5D, so that the camera files always contained reference audio.

There are a number of important tips to note here. First, there’s an impedance mismatch in this connection and the 5D uses an AGC circuit to attenuate audio, so the camera file audio will be clipped. To avoid this, turn down the headphone output level to a very low volume. Second, because the audio is clipped, if you forget to press record on the Zoom, the 5D’s audio is NOT acceptable. Following the traditional approach, a slate with clapstick was used for every sound take. The Zoom records numbered, sequential files, so the crew also wrote the audio file number on the slate for each take. These two steps make it easy to identify the correct audio take and to sync audio and video later in post.

Post workflow / pre-processing

This production configuration isn’t too different from shooting with other tapeless video cameras, but post requires a unique workflow. Key steps include video format conversion, speed adjustment and syncing the sound.

Video conversion – The Canon EOS 5D Mark II records 40Mbps H.264 QuickTime movies in a 1920x1080p/30fps format. H.264 is not conducive to smooth editing in its native form. 5D files can be up to 4GB in length (about 12 minutes), but there is no clip-spanning provision, as in P2 or XDCAM. Where and when you convert the native H.264 camera files depends on your NLE. With Avid Media Composer, files are converted into Avid’s MXF format upon import. The import will be slow, since it’s also transcoding, but this is a one-step process. Unfortunately it ties up your NLE, so maybe in the future Avid’s MetaFuze or AMA will come to the rescue.

I cut with Apple Final Cut Pro, which does permit direct editing with the H.264 files, but you don’t really want to do that. I typically convert 5D files into Apple ProRes, using a batch setting in Compressor. You can use other codecs, of course, like DVCPRO HD, ProRes HQ, ProRes LT, etc. Philip Bloom likes to convert his files to the EX format using MPEG Streamclip. The reason for EX, according to him, is that the data rate is similar to the 5D files, so storage requirements don’t expand significantly.

The wine commercial had 127 camera files (2 hours 11 minutes of raw footage), which were converted to ProRes in about 4 hours on an 8-core Mac Pro. Storage needs increased from 40GB (H.264) to 142GB (ProRes). The nice part of this step (at least for FCP users) is that the conversion can be left as a batch to churn unattended. One word of caution, though: Compressor has a tendency to choke and crash when you throw tons of files at it, like 100+ camera files, so I usually do these conversions in groups of 20 or so files at a time.
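
For reference, the same kind of unattended batch can be scripted outside of Compressor. A minimal sketch, assuming ffmpeg (with its prores_ks encoder) is installed; the folder path and profile number are placeholders to adjust for your own media and target flavor of ProRes:

```python
import glob
import os
import subprocess

# Sketch of an unattended batch conversion (an alternative to the Compressor
# batch described above). The path, the prores_ks encoder choice and the
# profile number are assumptions -- adjust them for your install and codec.
for src in sorted(glob.glob("/Volumes/Media/5D_Card01/*.MOV")):
    dst = os.path.splitext(src)[0] + "_prores.mov"
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "2",   # 2 = standard ProRes 422
        "-c:a", "pcm_s16le",                      # keep the camera's reference audio
        dst,
    ], check=True)
```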

Video speed adjustment – The 5D files are a true 30fps and not the fractional video rate of 29.97fps. Avid will convert these files to the correct rate on import, if audio and video tracks have been separated. According to Michael Phillips of Avid (one of their workflow gurus), “If the MOV file is video-only, then I use the ‘ignoreQtrate true’ console command and get a frame-for-frame import, resulting in a .1% slow down.” This is analogous to what happens when film is transferred to video. In my testing, it was important to first strip off the audio track of the MOV in order for this to work. You can do this using QuickTime Player Pro 7.

Final Cut permits native 30fps editing, but then your files won’t play through standard video gear, like a KONA card. I suppose for an internet spot this wouldn’t matter, however we had other uses, so a speed adjustment would have to happen at some point. I could either convert to 29.97 first and be done with it – or I could cut at 30fps and convert the finished spot. I normally opt to convert the ProRes files to 29.97fps first. To do this I use the Cinema Tools “conform” feature. That’s a nearly instantaneous process, which only alters the file’s metadata. It tells media players to run the file at the fractional frame rate of 29.97fps instead of 30fps.
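
If the math helps, here’s what that conform amounts to. The frames themselves are untouched; only the playback rate stamp changes, so the clip simply runs 0.1% slower:

```python
from fractions import Fraction

# Sketch: what the Cinema Tools "conform" changes. The frames are untouched;
# only the playback rate is restamped, so the clip plays 0.1% slower.
true_30 = Fraction(30, 1)
ntsc_2997 = Fraction(30000, 1001)

slowdown = ntsc_2997 / true_30          # 1000/1001
print(float(slowdown))                  # 0.999000999...

# A 60-second clip at a true 30fps holds 1800 frames; restamped to 29.97,
# those same frames now take about 60.06 seconds to play.
print(1800 / float(ntsc_2997))          # ≈ 60.06 s
```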

Audio speed adjustment – Changing the frame rate from 30 to 29.97 means the picture has been slowed by .1%, so the audio must undergo the same pulldown. If you use a location sound recorder capable of a 48.048kHz sample rate, then Avid Media Composer will automatically adjust the rate upon import back down to 48kHz and achieve the pulldown. In addition, there are various utilities that can “restamp” the metadata for the sample rate. A good choice is Sound Devices’ free Wave Agent. The Zoom recorder created 48kHz files, but these could be restamped as 47.952kHz by such a software utility. In the case of Media Composer, the software sees this on import and slows the file by .1% while converting back to the desired 48kHz sample rate. Thus the audio is back in sync.

Final Cut Pro works differently from Media Composer, so your results may vary. FCP simply tries to maintain the same duration and thus would force a render in the timeline to convert the sample rate to 48kHz without altering the speed. Instead, I recommend that you render new versions of the audio with the speed change applied, before importing the files into FCP. When I initially tried the restamp approach, I got sync drift. After posting this entry, I tried it again with Wave Agent and the results were dead-on in sync. The only issue is that you then have to render the audio in FCP to get the correct sample rate. I’m not a big fan of how FCP renders audio files, so I prefer to correct them prior to import into FCP. I have also had inconsistent results with FCP and how it handles sync with external audio files.

Because of these various concerns, I used Telestream Episode Pro and created an audio-only preset that included a speed change with a .999 value. I used this preset to batch-convert twenty 16-bit/48kHz WAV files from the Zoom recorder (1 hour 9 minutes of raw dialogue) into “pulled down” AIF files. This took about two minutes. Whichever approach you take, I urge you to do this only with copies of the files. Some of these utilities use destructive processes, so you don’t want to change your originals.
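
The underlying numbers are the same regardless of which utility does the work – a quick summary of the pulldown arithmetic:

```python
# Sketch: the pulldown arithmetic common to all of these approaches.
PULLDOWN = 1000 / 1001                  # ≈ 0.999, the ".1% slowdown"

print(48000 * PULLDOWN)   # ≈ 47952 Hz -> restamp a 48kHz file as 47.952kHz
print(48048 * PULLDOWN)   # = 48000 Hz -> a 48.048kHz recording lands exactly at 48kHz
print(60.0 / PULLDOWN)    # a 60-second take now runs ≈ 60.06 s, matching the slowed picture
```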

(Note: For a better understanding of how BWF (broadcast wave files), QuickTime and Final Cut Pro interact, check out this product (BWF2XML) and description by Spherico.)

Syncing the dailies – After these conversion steps, the files are ready to import into FCP. Audio and video files are now in optimized formats that will match FCP’s native media settings. Next, you’ll have to sync the audio and video takes. If the crew used a clapstick, it’s easy to sync in either Avid or Final Cut using the standard group or multiclip routines.

For this wine spot, I used Singular Software’s PluralEyes to automatically sync all sound takes. PluralEyes was one of the highlights of NAB 2009 and is about as close to magic as any software can get. It analyzes audio waveforms to compare and align the reference camera audio against the separate audio files. This is why it’s critical to record even poor-quality reference audio to the camera in order to give PluralEyes something to analyze. Unfortunately for the Avid editor, PluralEyes only works with Final Cut and Sony Vegas Pro. It’s not a plug-in, but works on a timeline labeled “pluraleyes” in an open and saved FCP project.

Here are the steps:

a) Create a blank FCP timeline named “pluraleyes”.

b) Drag & drop all camera clips with dialogue (audio & video) onto the timeline (random order is OK).

c) Drag & drop all separate audio files onto the same timeline onto unused audio tracks (random order is OK).

d) Disable any redundant audio track (speeds up analysis).

e) Save the project, launch PluralEyes, start analysis/sync processing.

After a few minutes of processing, PluralEyes will automatically create a series of new FCP sequences – one for each sync take. The audio will be aligned so that the double-system sound files are now perfectly in sync with the camera audio.

Post workflow / edit / mix / grade

Now that you have sync takes, you can pretty much edit any way you like. I picked the following tip up from Bloom’s blog. To make editing easier on the wine spot, I took these new sequences and renamed them according to the person who was speaking and which take it was. I exported the sequences as QuickTime reference movies (not self-contained) to a location on my media drives. I then re-imported these reference movies, in effect turning them into master clips with merged 5D video and Zoom audio. These became my source for all sync takes. Any b-roll shots came from the regular ProRes files.

The rest of the edit went normally. I’ve got my Mac Pro set up with two internal 1TB drives configured as a software RAID-0 for media files (2TB). No issues with cutting ProRes this way. I bounced the audio to Soundtrack Pro for the final mix – no real reason, other than to take advantage of some of the plug-ins to add a touch of “sparkle” to the dialogue.

I used Apple Color for the grade. If you follow my blog, you know that I could have tackled this easily with various plug-ins and stayed inside FCP, however, I do like the Color interface and toolset. This spot was ideally suited to go through a grading pass using Color. As it turned out, this step might have been a bit premature due to client revisions. In hindsight, using plug-ins might have been preferable. I thought the cut was locked, so proceeded with the correction in Color.

The first version of the spot was a faster-paced cut (57 shots in :60), so the client requested a second version with a little more breathing room and a few alternate dialogue takes. This necessitated going back into the footage. Those familiar with Color know that it generates new media files when it renders color correction. This is required to “bake in” the color corrections. If you assign handles of a few seconds to each shot, you have some room to trim shots when you are back in Final Cut, but this doesn’t help you with other footage.

I decided to step back to the sequence before “sending to” Color and cut a second, more relaxed version (46 shots in :60). Although this meant starting a new Color project, I was aided by Color’s ability to store grades. I could save the settings for each of the shots in version one and apply these settings to the similar or same shot in version two, within the new Color project. Adjust keyframes, tweak a few settings, render and bingo! – the grade is done. With :02 handles on each shot, version one (57 shots) rendered in about 40 minutes and version two (46 shots) took about 30 minutes, both as 1920×1080 ProRes (29.97fps) media. Of course, like many commercials, this wasn’t the end and a few more changes were made! The final version ended up being a combination of these two cuts.

(As an aside, Stu Maschwitz has done a nice post about Color Correcting Canon 7D Footage on his ProLost blog.)

Post-processing / 24fps conversion

This could have been the end of the post for the wine spot, but there’s one more step. A big reason people like these HDSLRs is because they provide a very cost-effective way of getting that elusive “film look”. One part of that look is the 24fps frame rate. Yes – some film is shot at 30fps for spots and TV shows – so technically the 5D’s 30p footage is just fine. But clients really do want that 24fps look.

You can convert these 5D files quite cleanly to 24fps. This is a process I picked up from Bloom and discussed in my previous Canon post.  Here are the steps:

a) Note the exact duration of the 29.97fps timeline.

b) Export a self-contained QuickTime movie of the finished 29.97 sequence.

c) Bring that exported file into Compressor and set up a ProRes-to-ProRes conversion. Use a frame rate of 24fps (it actually is 23.98, but Compressor labels it as 24).

d) Turn Frame Controls on, set Rate Conversion to Best and change Duration from 100% of source to the exact duration of the original 29.97 timeline.

Now let Compressor crunch for a while. My :60 spot took about 36 minutes to convert from 29.97 to 23.98. For good measure, I also take the finished file into Cinema Tools and conform it to 23.98, just in case it comes out as true 24 and not 23.98. Then I import the file back into FCP, create a new 23.98 timeline and edit the clip into it. If everything has been done correctly, this media should match without any rendering needed. Then I copy and paste the audio from the 29.97 timeline into the 23.98 timeline. This should be in sync.
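Setting the Duration back to the original timeline length is what turns this into a true frame rate conversion rather than a retime. The arithmetic behind it looks like this (purely illustrative Python, using a :60 spot as the example):

# Frame math behind the 29.97 -> 23.98 conversion (illustrative only).
duration = 60.0                    # running time of the spot in seconds
src_fps = 30000.0 / 1001.0         # 29.97
dst_fps = 24000.0 / 1001.0         # 23.976, which Compressor labels 23.98 (or 24)
src_frames = duration * src_fps    # roughly 1798 frames in the exported master
dst_frames = duration * dst_fps    # roughly 1439 frames after conversion
print(round(src_frames), round(dst_frames), dst_fps / src_fps)   # ratio is exactly 0.8
# Because the duration is pinned to the original, Frame Controls has to build
# 4 new frames out of every 5 originals, which is where the blended frames on
# fast motion come from.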

A couple of additional pointers. Since I don't want the conversion process to get confused by titles and dissolves, I remove all graphics and turn dissolves into cuts (with handles) in the 29.97 sequence prior to export. I actually exported the wine spot timeline as 1:04 instead of :60. Once I was back in the 23.98 timeline, I fixed these trims and added back the fades, dissolves and graphics to complete the sequence.

The second issue is speed changes. I sped up two shots, which actually passed through Color and this 24p conversion just fine – except for one problem. My 29.97 timeline was actually an interlaced timeline. This doesn’t matter for the camera files, as they are inherently progressive. However, any timeline effects, like speed changes, titles and transitions are processed with interlaced motion. This affected the two sped-up shots in the 24p conversion, resulting in interlace artifacts. The simple fix was to replace these with the normal-speed media and redo the speed change in the 23.98 timeline. No big deal, but something to be mindful of in the future.

Finally, although this conversion is very good, it isn't perfect. Cuts stay as clean cuts, and slow action converts cleanly, looking as if it were shot at 24fps. Fast motion, however, does introduce some artifacts. These mainly show up as blended frames in areas of fast activity or fast camera movement. It's no big deal really, as it tends to add to the filmic look of the material – a bit like motion blur.

Remember that this is an OPTIONAL and SUBJECTIVE step. I personally think that 30p is a “sweet spot” for LCD and plasma screens. This is especially true for the web and computer displays. In the end, my client decided they liked the 30p image better, because it was crisper.

Click the image to see the video in HD on Vimeo.

Or here for the “Alternate Cut” at 30fps (no 24p conversion).

Additional tools

Since the media files these HDSLR cameras generate are an outgrowth of consumer-level file creation, there is very little metadata in them that an NLE would care about. No reel numbers, SMPTE timecode, edge numbers, etc. That's good and bad. Good – in that the folder and file structure is quite simple and very malleable. Bad – in that you can have duplicate file names and there's no ability to span clips. Think of it like a roll of 35mm negative, which holds about 11 minutes and only picks up metadata when it's transferred to video.

Since files are sequentially numbered on the memory card, once you start recording to the next card, you're likely to get repeating file names. This is true both in the camera and on a recorder like the Zoom, simply because there is no reel (i.e. card) ID name or number. The good news is that you can easily change this without corrupting metadata – as you would with RED or P2 – but it means you have to manually impose some sort of structure yourself.

R-Name – One utility that can help is R-Name. Unfortunately it may be out of development, but I still use version 3, which works with Snow Leopard. You might be able to find a download still lurking in the depths of the internet – or, if not, a similar utility or an Automator routine. R-Name lets you rename files (as the name implies), but you can also append a prefix or suffix character string to a file name. For example, a set of media files from a 5D may be named MVI_1073.mov through MVI_1200.mov and you'd like to add a prefix for Card 1. Simply create an R-Name batch that adds a prefix such as "C001_" to all these files. Run the batch and voila – your files are now named C001_MVI_1073.mov through C001_MVI_1200.mov. Follow this process for each card and it becomes a nice, fast way of organizing your media.
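If R-Name has vanished entirely, the same batch rename is only a few lines of scripting. Here's a minimal Python sketch (the folder path and "C001_" prefix are just examples) that does the same job as the R-Name batch described above:

# Minimal stand-in for the R-Name prefix batch (illustrative only).
# Prepends a card ID such as "C001_" to every QuickTime file in a folder.
import os, sys

def prefix_card(folder, prefix="C001_"):
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".mov") and not name.startswith(prefix):
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, prefix + name))

if __name__ == "__main__":
    # usage example: python prefix_card.py /Volumes/MEDIA/Card01 C001_
    prefix_card(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "C001_")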

QtChange – If reel numbers and timecode are important for you to have, then check out VideoToolshed's QtChange. This is a comprehensive QuickTime utility that lets you alter several file parameters. Most importantly, you can add or change reel number and timecode values. Although this isn't essential for cutting in FCP, certain functions, like dupe detection, won't work without an assigned reel number. There are several ways to alter this info in QtChange, but one option is to automatically use the file's date stamp for the reel number and its time stamp as the starting timecode. Files can be changed in a batch, but be careful, as these are destructive changes. Developer Bouke Vahl has been making ongoing changes to the product and recently added Avid Log Exchange functionality.
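To make the date-stamp/time-stamp idea concrete, here's a tiny Python sketch of the concept only. This is not QtChange code and it doesn't write anything back into the QuickTime file; it just shows how a reel name and a starting timecode could be derived from a file's timestamp:

# Deriving a reel name and starting timecode from a file's modification time
# (concept only -- not how QtChange is implemented).
import os, datetime

def reel_and_timecode(path):
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    reel = mtime.strftime("%y%m%d")        # e.g. the shoot date "100214" as the reel
    start_tc = "%02d:%02d:%02d:00" % (mtime.hour, mtime.minute, mtime.second)
    return reel, start_tc
    # e.g. reel_and_timecode("MVI_1073.mov") -> ("100214", "14:23:05:00")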

MetaCheater – One deficiency of Avid Media Composer has been its inability to directly read all of the metadata from a QuickTime file. For instance, older versions of Media Composer and Symphony would not read QuickTime timecode. This has been corrected in the most recent versions; these apps now import the timecode, but still no reel number. In addition, the Canon cameras don't generate timecode or reel numbers, so you must add them if you need that information. You could use QtChange to add reel IDs and timecode, and Media Composer would import the timecode, but the missing reel ID problem remains. MetaCheater is a simple way around this. The program extracts QuickTime metadata and creates an Avid Log Exchange (ALE) file with the proper reel numbers and timecode values. Import the ALE file into Media Composer and then batch import the corresponding QuickTime movies. In this process, Media Composer uses the timecodes and reel numbers from the ALE instead of default values, so your Avid bins properly reflect the reel and timecode information added to the 5D files. It would be just as if this media had been captured from a videotape source.
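For those curious what an ALE actually contains, it's just a tab-delimited text file. Here's a minimal, hypothetical Python sketch that writes one by hand; the clip names, reel and timecode values below are made up, and a real tool like MetaCheater pulls them from the QuickTime metadata instead:

# Minimal sketch of writing an Avid Log Exchange (ALE) file from a clip list.
# The clip data here is hypothetical and the column set is a bare minimum.
clips = [
    # (clip name, reel/tape, start timecode, end timecode)
    ("C001_MVI_1073", "C001", "14:23:05:00", "14:23:45:00"),
    ("C001_MVI_1074", "C001", "14:25:10:00", "14:25:32:00"),
]

with open("canon_5d.ale", "w") as ale:
    ale.write("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\nFPS\t29.97\n\n")
    ale.write("Column\nName\tTape\tStart\tEnd\n\n")
    ale.write("Data\n")
    for name, tape, start, end in clips:
        ale.write("\t".join([name, tape, start, end]) + "\n")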

Here are a few comparisons of the color grading applied to these shots. Click each of these images to see an enlarged view: for each example there is an original image, a graded image and a split screen.

Addendum (Feb 2010)

After I initially wrote this article in January, I pulled it down for some tweaks. In the interim, I got busy for a few weeks before I could repost it. In that time, I was able to do some more testing with Avid Media Composer 4.0.5 on another Canon 5D spot. I am adding my observations here, since many of my readers are Avid cutters and want to know the best way to handle these files in Media Composer.

Unlike FCP, there’s no simple drag-and-drop method in Avid. If you elect to convert the files using an external encoding application, you still have to bring the files in through Avid’s import routines. This adds a step and effectively doubles the total time it takes to convert and import as compared with FCP. Another frustrating issue is that when you move from the native camera files into Avid, you have to move out of the QuickTime color and gamma architecture and into an MXF structure using Avid codecs.

In the Avid world, video files are treated using the rec. 601/709 colorspace (16-235 on an 8-bit scale) and computer files are assumed to be in RGB space (0-255). When you import or export files to and from Media Composer, you always need to check the proper setting – RGB or 601/709. Unfortunately (or fortunately depending on your POV), this is largely hidden from view in the QuickTime world. Furthermore, Canon really hasn’t provided documentation that I’m aware of regarding the colorspace that these cameras work in and how closely color scaling conforms to either RGB or rec. 709. The long and short of it is that when you move in and out of QuickTime, you are often fighting level and gamma changes to varying degrees.
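The level scaling itself is simple arithmetic. This little Python sketch (illustrative only) shows the 8-bit mapping that is applied, or skipped, depending on whether you pick RGB or 601/709 on import and export:

# 8-bit full-range (RGB, 0-255) vs. video-range (601/709, 16-235) scaling.
def rgb_to_video(v):                   # 0-255  ->  16-235
    return round(16 + v * (235 - 16) / 255.0)

def video_to_rgb(v):                   # 16-235 ->  0-255
    return round((v - 16) * 255.0 / (235 - 16))

print(rgb_to_video(0), rgb_to_video(255))   # 16 235
print(video_to_rgb(16), video_to_rgb(235))  # 0 255
# Pick the wrong setting and this scaling gets applied twice (crushed blacks,
# clipped whites) or not at all (washed-out, low-contrast levels).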

I tried a number of different import and encoding methods with Media Composer. All of them work, but with various trade-offs. The easiest method is as I outlined earlier in this article – simply import the H.264 camera files into Media Composer. When you do that, select the RGB color space. The import runs at approximately 3:1 to 4:1 relative to the clip's running time on a fast machine, depending on the target codec you choose, because the media is being transcoded during this import stage. I had the fastest encoding times using the Sony XDCAM-EX codec, which is now natively supported by Media Composer.

A second option is to use Apple Compressor (or another QuickTime encoder) to convert the camera files into QuickTime movies using an Avid DNxHD codec. This is the same approach as converting to Apple ProRes 422. Unfortunately, Avid still imposes a longer import time to get these files from QuickTime MOVs into the MXF media format. Although Compressor offers a choice between RGB and 709 when you select DNxHD, it doesn’t seem to make any difference in the appearance of the files. The files are converted to 709 color space and so should be imported into Avid with the import setting on 709. I hope that this import step will be eliminated at some point in the future, when and if Avid decides to support QuickTime files through its AMA feature.

The fastest current method was to use Episode Pro again. MXF is now supported in this encoder, so I was able to convert the H.264 files into MXF-wrapped XDCAM-EX files that were ready for Avid. The beauty of this is that the work can be done on an external machine in a batch, and the import back into Media Composer is very fast. No transcoding is needed, as this just becomes a file copy. The EX codec looked clean and wasn't too taxing on my Mac Pro. You also have the option of using the XDCAM-HD and XDCAM HD 422 (50Mbps) codecs in the MXF file format. The only issue was that one of the media files appeared to be corrupt after encoding and had to be re-encoded. This might be an anomaly, but we ARE dealing with two long-GOP codecs in this process! Another benefit of this route is that no user interaction is required to determine color space settings.

Now to the level issues. In all of this back and forth – once I exported back out to QuickTime (ProRes 422 codec, using the RGB setting on export) – no conversion identically matched the original camera files. When I compared versions, direct import of the files (H.264 into Avid) yielded slightly darker results. External conversion to DNxHD and then importing yielded a slight gamma shift. Conversion/import via the MXF route appeared a bit lighter than the original. None of these were major differences, though, and if you are going to color grade the final product anyway, it doesn't really matter. I finally settled on a 2-step conversion workflow (described in my February 21 post) that yielded good results going from the 5D files into Media Composer and then to FCP.

As far as editing, syncing and grading go, it's the same as with any other acquisition media. I used the same preparatory steps as outlined earlier (a Cinema Tools conform to 29.97 and a .999 speed adjustment of the audio), then converted and imported the video files. Inside Media Composer (1080p/29.97 project), everything synced and edited just as I expected.
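For what it's worth, the .999 figure is just the frame rate ratio. A quick check (illustrative Python):

# Where the .999 audio speed adjustment comes from.
true_fps = 30.0                  # the 5D records true 30.000 fps
conform_fps = 30000.0 / 1001.0   # 29.97, the project rate after the conform
print(conform_fps / true_fps)    # 0.999000999..., i.e. slow the audio by about 0.1%
# Conforming the picture from 30.000 to 29.97 stretches its running time by a
# factor of 1001/1000, so the separate audio needs the same slowdown to stay in sync.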

Also in early February, Canon announced its EOS Movie Plugin-E1 for Final Cut Pro. Click here for the description. It’s supposed to be released in March and if I understand their description correctly, it allows you to import camera clips via FCP’s Log and Transfer module. During the import stage, files are transcoded to ProRes. Unfortunately there is no explanation of how frame rates are handled, so I presume the files are imported and remain at their original frame rate.

My conclusion after all of this is that both FCP and Media Composer are just fine for working with HDSLR projects. FCP seems a bit faster at the front, but in the end, you’re just traveling two different roads to get to the same destination.

I leave you with one last tidbit to ponder. Apple has just introduced Aperture 3, which includes HD video clip support in slideshows. I wonder how apps like Aperture, Lightroom and Photoshop (which already supports some video functions) will impact these HDSLR workflows in the future.

(UPDATE: If you got here through links from other blogs, make sure you read the updated Round III post as well.)

Useful Links

5DMk2 blog – 1001 Noisy Cameras

Assisted Editing

Philip Bloom

Canon Explorers of Light

Canon Filmmakers

Cinema5D

DSLR HD

DVinfo

DVXuser

Element Technica

FreshDV

Tyler Ginter

Vincent Laforet

ProLost

Red Rock Micro

Bruce Sharpe

Spherico

Peter Wiggins

Planet5D

Video Toolshed

Zacuto

©2010 Oliver Peters