Easy Canon 5D post – Round II

RED’s Scarlet appears to be just around the corner and both Sony and Panasonic seem to be responding to the challenge of the upstart photo manufacturers. No matter what acronym you use – DSMC, HD-DSLR, HDSLR – these hybrid HD video / still photo cameras have grabbed everyone’s attention. 2010 may indeed be the year that hybrid digital SLR cameras hit their stride.

The Canon EOS 5D Mark II showed the possibilities in late 2008 when Vincent Laforet released Reverie, but like all of these new camera products, the big question was how to best handle the post. The 5D (so far) only shoots video at a true 30fps – lacking both the filmic 24fps rate and the standard video frame rates (29.97, 25 or 23.976). That oversight was corrected in Canon’s EOS 7D and EOS 1D Mark IV models and may soon be addressed by a firmware update to the 5D. Even so, the 5D has remained a preferred option, because of its low-light capabilities and full-frame sensor. Photographers, videographers and filmmakers love the shallow depth-of-field, so a 24p-capable 5D is certainly on many wish lists.


Until the 5D gets a 24fps upgrade [EDIT: coming in March, download will be here], folks in post will have to contend with the 30fps footage generated by the camera. Last year I wrote an article on how to post a 5D project, which covers a lot of the basics. I’ve since done more 5D projects and formed a number of opinions and workflow tips. I’ve picked up many of these from reading Philip Bloom and Bruce Sharpe (PluralEyes inventor) and at the end of this post, I’ll include a number of useful links.

My first observation on the several 5D projects I’ve posted is that you get the best results from these new cameras when you treat them like film. Use classical production methods – slow pans, steady hand-held work, tripods, dollies and record audio as double-system sound. Secondly, allow time for processing files and syncing sound before you expect to start editing. 35mm film shoots typically require a day or more between the production day and post for lab processing and film transfer. The equivalent is true for HDSLRs. Whether it’s RED or an HDSLR, you have to become the film lab and transfer house. Once you wrap your head around that concept, the workflow steps make a lot more sense.


I recently cut another Canon 5D Mark II job with Director/DP Toby Phillips. This was an internet commercial for the wine growers of the Yarra Valley region of Australia. Yarra Valley is to Australia what Napa Valley is to California. Coincidentally, it’s also the region ravaged by the horrific fires of 2009. In order to keep the production light, Toby’s crew was bare bones and nearly all images were shot under available light – including sodium vapor lighting in warehouse areas. The creative concept was intended to be tongue-in-cheek: real workers discussed why their job was the most important role in winemaking. The playful interplay between worker comments and winery/vineyard footage rounds out this :60 commercial.

Production tips

Toby rigged his camera with a modified plate, rails and matte box from his existing film equipment – Arri and Manfrotto parts modified by Element Technica. The 5D records passable sound on its own, but it isn’t ideal when you want the best quality. To get around this, a Zoom H4n handheld recorder was used for double-system sound. The Zoom has XLR inputs for external mics, in addition to its built-in XY-pattern stereo mics. A Sennheiser shotgun was plugged into the Zoom, which in turn recorded uncompressed 16-bit/48kHz WAV files. The headphone output of the Zoom was connected to the 5D, so that the camera files always contained reference audio.

There are a number of important tips to note here. First, there’s an impedance mismatch in this connection, and the 5D’s AGC circuit acts on the incoming signal, so the camera-file audio will clip. To avoid this, turn the Zoom’s headphone output down to a very low level. Second, because this reference audio is compromised, the 5D’s own track is NOT an acceptable backup if you forget to press record on the Zoom. Following the traditional approach, a slate with clapstick was used for every sound take. The Zoom records numbered, sequential files, so the crew also wrote the audio file number on the slate for each take. These two steps make it easy to identify the correct audio take and to sync audio and video later in post.

Post workflow / pre-processing

This production configuration isn’t too different than shooting with other tapeless video cameras, but post requires a unique workflow. Key steps include video format conversion, speed adjustment and syncing the sound.

Video conversion – The Canon EOS 5D Mark II records 40Mbps H.264 QuickTime movies in a 1920x1080p/30fps format. H.264 is not conducive to smooth editing in its native form. 5D files can be up to 4GB in length (about 12 minutes), but there is no clip-spanning provision, as in P2 or XDCAM. Where and when you convert the native H.264 camera files depends on your NLE. With Avid Media Composer, files are converted into Avid’s MXF format upon import. The import will be slow, since it’s also transcoding, but this is a one-step process. Unfortunately it ties up your NLE, so maybe in the future Avid’s MetaFuze or AMA will come to the rescue.

I cut with Apple Final Cut Pro, which does permit direct editing with the H.264 files, but you don’t really want to do that. I typically convert 5D files into Apple ProRes, using a batch setting in Compressor. You can use other codecs, of course, like DVCPRO HD, ProRes HQ, ProRes LT, etc. Philip Bloom likes to convert his files to the EX format using MPEG Streamclip. The reason for EX, according to him, is that the data rate is similar to the 5D files, so storage requirements don’t expand significantly.

The wine commercial had 127 camera files (2 hours 11 minutes of raw footage), which were converted to ProRes in about 4 hours on an 8-core Mac Pro. Storage needs increased from 40GB (H.264) to 142GB (ProRes). The nice part of this step (at least for FCP users) is that the conversion can be left as a batch to churn unattended. One word of caution, though: Compressor has a tendency to choke and crash when you throw tons of files at it, like 100+ camera files. So I usually do these conversions in groups of 20 or so files at a time.
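If you’d rather not babysit those groups by hand, the batching itself is easy to script. Here’s a rough Python sketch of the idea – the “transcode” command below is only a placeholder for whatever encoder you actually drive (a Compressor droplet, an Episode job, etc.), so substitute your own call:

```python
import glob
import subprocess

def transcode_in_batches(folder, batch_size=20):
    """Hand the camera originals to an encoder in groups of ~20,
    since Compressor tends to choke when fed 100+ files at once."""
    files = sorted(glob.glob(folder + "/*.mov"))
    for i in range(0, len(files), batch_size):
        for src in files[i:i + batch_size]:
            dst = src.rsplit(".", 1)[0] + "_ProRes.mov"
            # "transcode" is a stand-in for your real encoder command.
            subprocess.run(["transcode", src, "-o", dst], check=True)

# transcode_in_batches("/Volumes/Media/Card01")
```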

Video speed adjustment – The 5D files are a true 30fps and not the fractional video rate of 29.97fps. Avid will convert these files to the correct rate on import, if audio and video tracks have been separated. According to Michael Phillips of Avid (one of their workflow gurus), “If the MOV file is video-only, then I use the ‘ignoreQtrate true’ console command and get a frame-for-frame import, resulting in a .1% slow down.” This is analogous to what happens when film is transferred to video. In my testing, it was important to first strip off the audio track of the MOV in order for this to work. You can do this using QuickTime Player Pro 7.

Final Cut permits native 30fps editing, but then your files won’t play through standard video gear, like a KONA card. I suppose for an internet spot this wouldn’t matter; however, we had other uses, so a speed adjustment would have to happen at some point. I could either convert to 29.97 first and be done with it – or cut at 30fps and convert the finished spot. I normally opt to convert the ProRes files to 29.97fps first. To do this I use the Cinema Tools “conform” feature. That’s a nearly instantaneous process, which only alters the file’s metadata – it tells media players to run the file at the fractional frame rate of 29.97fps instead of 30fps.
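For what it’s worth, the arithmetic behind the conform is trivial. This little Python check just shows the 0.1% stretch – it isn’t tied to any particular tool:

```python
# A "conform" from 30 to 29.97 only retags the playback rate: the same
# frames now play 0.1% slower, so the running time stretches slightly.
frames = 60 * 30                        # 1800 frames shot at a true 30 fps
new_duration = frames / (30000 / 1001)  # played back at 29.97 fps
print(round(new_duration, 3))           # 60.06 seconds instead of 60.0
```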

Audio speed adjustment – Changing the frame rate from 30 to 29.97 means the picture has been slowed by .1%, so the audio must undergo the same pulldown. If you use a location sound recorder capable of a 48.048kHz sample rate, then Avid Media Composer will automatically adjust the rate back down to 48kHz upon import and achieve the pulldown. In addition, there are various utilities that can “restamp” the sample-rate metadata. A good choice is Sound Devices’ free Wave Agent. The Zoom recorder creates 48kHz files, but these can be restamped as 47.952kHz (47,952Hz) by such a utility. Media Composer sees this on import and slows the file by .1% to achieve the desired 48kHz sample rate. Thus the audio is back in sync.
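If you’re curious what a “restamp” actually does, it simply rewrites the sample-rate fields in the file’s header and leaves the audio data untouched. The Python sketch below shows the idea, assuming a standard RIFF/WAVE chunk layout; on a real job use a proven utility like Wave Agent and – as I note below – only ever work on copies:

```python
import shutil
import struct

def restamp_wav_sample_rate(src, dst, new_rate=47952):
    """Copy a WAV/BWF file and rewrite the sample rate in its 'fmt ' chunk.
    Only header metadata changes - the samples are untouched - so players
    simply run the file 0.1% slower (48,000 -> 47,952 Hz)."""
    shutil.copyfile(src, dst)
    with open(dst, "r+b") as f:
        riff, _, wave_id = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave_id != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no 'fmt ' chunk found")
            chunk_id, size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                fmt_pos = f.tell()
                fmt = f.read(size)
                block_align = struct.unpack_from("<H", fmt, 12)[0]
                f.seek(fmt_pos + 4)                   # nSamplesPerSec field
                f.write(struct.pack("<I", new_rate))
                f.write(struct.pack("<I", new_rate * block_align))  # byte rate
                return
            f.seek(size + (size % 2), 1)              # chunks are word-aligned

# restamp_wav_sample_rate("ZOOM0001.WAV", "ZOOM0001_pd.WAV")
```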

Final Cut Pro works differently from Media Composer, so your results may vary. FCP simply tries to maintain the same duration, so a restamped file forces a render in the timeline to convert the sample rate back to 48kHz without altering the speed. Instead, I recommend rendering new versions of the audio files, with the speed change applied, before importing them into FCP. When I initially tried the restamp approach, I got sync drift. After posting this entry, I tried it again with Wave Agent and the results were dead-on in sync. The only issue is that you then have to render the audio in FCP to get the correct sample rate. I’m not a big fan of how FCP renders audio files, so I prefer to correct them prior to import. I’ve also had inconsistent results with how FCP handles sync with external audio files.

Because of these various concerns, I used Telestream Episode Pro and created an audio-only preset that included a speed change with a .999 value. I used this preset to batch-convert 20 16-bit/48kHz WAV files from the Zoom recorder (1 hour 9 minutes of raw dialogue) into “pulled down” AIF files. This took about two minutes. Whichever approach you take, I urge you to do this only with copies of files. Some of these various utilities use destructive processes, so you don’t want to change your originals.

(Note: For a better understanding of how BWF (broadcast wave files), QuickTime and Final Cut Pro interact, check out this product (BWF2XML) and description by Spherico.)

Syncing the dailies – After these conversion steps, the files are ready to import into FCP. Audio and video files are now in optimized formats that will match FCP’s native media settings. Next, you’ll have to sync the audio and video takes. If the crew used a clapstick, it’s easy to sync in either Avid or Final Cut using the standard group or multiclip routines.

For this wine spot, I used Singular Software’s PluralEyes to automatically sync all sound takes. PluralEyes was one of the highlights of NAB 2009 and is about as close to magic as any software can get. It analyzes audio waveforms to compare and align the reference camera audio against the separate audio files. This is why it’s critical to record even poor-quality reference audio to the camera in order to give PluralEyes something to analyze. Unfortunately for the Avid editor, PluralEyes only works with Final Cut and Sony Vegas Pro. It’s not a plug-in, but works on a timeline labeled “pluraleyes” in an open and saved FCP project.

Here are the steps:

a) Create a blank FCP timeline named “pluraleyes”.

b) Drag & drop all camera clips with dialogue (audio & video) onto the timeline (random order is OK).

c) Drag & drop all separate audio files onto the same timeline onto unused audio tracks (random order is OK).

d) Disable any redundant audio track (speeds up analysis).

e) Save the project, launch PluralEyes, start analysis/sync processing.

After a few minutes of processing, PluralEyes will automatically create a series of new FCP sequences – one for each sync take. The audio will be aligned so that the double-system sound files are now perfectly in sync with the camera audio.
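For the curious, the general idea behind this kind of auto-sync is waveform alignment: slide one track against the other until their envelopes line up best. The toy NumPy sketch below only illustrates that concept – it is not PluralEyes’ actual algorithm – and it assumes you already have both tracks as mono arrays at a common sample rate:

```python
import numpy as np

def estimate_offset(camera_audio, zoom_audio, sample_rate=48000):
    """Find the lag at which the two tracks' crude loudness envelopes
    correlate best, and return it as an offset in seconds."""
    step = 100                                   # downsample to ~480 Hz
    a = np.abs(np.asarray(camera_audio, dtype=float))[::step]
    b = np.abs(np.asarray(zoom_audio, dtype=float))[::step]
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    lag = int(corr.argmax()) - (len(b) - 1)      # lag in envelope samples
    return lag * step / sample_rate

# offset = estimate_offset(camera_track, zoom_track)
# print(f"Slide the Zoom audio by {offset:.3f} seconds to line it up.")
```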

Post workflow / edit / mix / grade

Now that you have sync takes, you can pretty much edit any way you like. I picked the following tip up from Bloom’s blog. To make editing easier on the wine spot, I took these new sequences and renamed them according to the person who was speaking and which take it was. I exported the sequences as QuickTime reference movies (not self-contained) to a location on my media drives. I then re-imported these reference movies, in effect turning them into master clips with merged 5D video and Zoom audio. These became my source for all sync takes. Any b-roll shots came from the regular ProRes files.

The rest of the edit went normally. I’ve got my Mac Pro set up with two internal 1TB drives configured as a software RAID-0 for media files (2TB). No issues with cutting ProRes this way. I bounced the audio to Soundtrack Pro for the final mix – no real reason, other than to take advantage of some of the plug-ins to add a touch of “sparkle” to the dialogue.

I used Apple Color for the grade. If you follow my blog, you know that I could have tackled this easily with various plug-ins and stayed inside FCP, however, I do like the Color interface and toolset. This spot was ideally suited to go through a grading pass using Color. As it turned out, this step might have been a bit premature due to client revisions. In hindsight, using plug-ins might have been preferable. I thought the cut was locked, so proceeded with the correction in Color.

The first version of the spot was a faster-paced cut (57 shots in :60), so the client requested a second version with a little more breathing room and a few alternate dialogue takes. This necessitated going back into the footage. Those familiar with Color know that it generates new media files when it renders color correction. This is required to “bake in” the corrections. If you assign handles of a few seconds to each shot, you have some room to trim shots when you are back in Final Cut, but that doesn’t help when a revision calls for footage that never went through Color.

I decided to step back to the sequence before “sending to” Color and cut a second, more-relaxed version (46 shots in :60). Although this meant starting a new Color project, I was aided by Color’s ability to store grades. I could save the settings for each of the shots in version one and apply these settings to the similar or same shot in version two, within the new Color project. Adjust keyframes, tweak a few settings, render and bingo! – the grade is done. With :02 handles on each shot, version one (57 shots) rendered in about 40 minutes and version two (46 shots) took about 30 minutes – both as 1920×1080 ProRes (29.97fps) media. Of course, like many commercials this wasn’t the end and a few more changes were made! The final version ended up being a combination of these two cuts.

(As an aside, Stu Maschwitz has done a nice post about Color Correcting Canon 7D Footage on his ProLost blog.)

Post-processing / 24fps conversion

This could have been the end of the post for the wine spot, but there’s one more step. A big reason people like these HDSLRs is because they provide a very cost-effective way of getting that elusive “film look”. One part of that look is the 24fps frame rate. Yes – some film is shot at 30fps for spots and TV shows – so technically the 5D’s 30p footage is just fine. But clients really do want that 24fps look.

You can convert these 5D files quite cleanly to 24fps. This is a process I picked up from Bloom and discussed in my previous Canon post.  Here are the steps:

a) Note the exact duration of the 29.97fps timeline.

b) Export a self-contained QuickTime movie of the finished 29.97 sequence.

c) Bring that exported file into Compressor and set up a ProRes-to-ProRes conversion. Use a frame rate of 24fps (it actually is 23.98, but Compressor labels it as 24).

d) Turn Frame Controls on, set Rate Conversion to Best and change Duration from 100% of source to the exact duration of the original 29.97 timeline.

Now let Compressor crunch for a while. My :60 spot took about 36 minutes to convert from 29.97 to 23.98. For good measure, I also take the finished file into Cinema Tools and conform it to 23.98, just in case it’s 24 and not 23.98. Then I import the file back into FCP, create a new 23.98 timeline and edit the converted clip into it. If everything is done correctly, this media should match without any rendering needed. Then I’ll copy and paste the audio from the 29.97 timeline to the 23.98 timeline. This should be in sync.
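The frame math also explains why this works – and why fast motion can blend, as discussed below. You keep the running time constant while mapping every five 29.97 frames onto four 23.98 frames:

```python
# Same running time, fewer frames: 30000/1001 fps in, 24000/1001 fps out.
duration = 60.0                               # the :60 spot
frames_in = duration * 30000 / 1001           # ~1798 frames at 29.97
frames_out = duration * 24000 / 1001          # ~1439 frames at 23.976
print(round(frames_in), round(frames_out), frames_in / frames_out)  # ratio 1.25
# Only one output frame in four lands exactly on a source frame; the other
# three fall between source frames and must be interpolated or blended,
# which is why fast action shows artifacts while slow action stays clean.
```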

A couple of additional pointers. Since I don’t want to have this conversion process get confused with titles and dissolves, I remove all graphics and make dissolves into cuts (with handles) in the 29.97 sequence, prior to export. I actually exported the wine spot timeline as 1:04 instead of :60. When I was back in the 23.98 timeline, I fixed these trims, added back the fades, dissolves and graphics in order to complete the sequence.

The second issue is speed changes. I sped up two shots, which actually passed through Color and this 24p conversion just fine – except for one problem. My 29.97 timeline was actually an interlaced timeline. This doesn’t matter for the camera files, as they are inherently progressive. However, any timeline effects, like speed changes, titles and transitions are processed with interlaced motion. This affected the two sped-up shots in the 24p conversion, resulting in interlace artifacts. The simple fix was to replace these with the normal-speed media and redo the speed change in the 23.98 timeline. No big deal, but something to be mindful of in the future.

Finally, although this conversion is very good, it isn’t perfect. Cuts stay as clean cuts, and slow action converts cleanly, looking as if it were shot at 24fps. Fast motion, however, does introduce some artifacts. These mainly show up as blended frames in areas of fast activity or fast camera movement. It’s no big deal really, as it tends to add to the filmic look of the material – a bit like motion blur.

Remember that this is an OPTIONAL and SUBJECTIVE step. I personally think that 30p is a “sweet spot” for LCD and plasma screens. This is especially true for the web and computer displays. In the end, my client decided they liked the 30p image better, because it was crisper.

(The finished spot is posted in HD on Vimeo, along with an “Alternate Cut” left at 30fps with no 24p conversion.)

Additional tools

Since the media files the HDSLR cameras generate are an outgrowth of consumer-level file creation, there is very little metadata in them that an NLE would care about. No reel numbers, SMPTE timecode, edge numbers, etc. That’s good and bad. Good – in that the folder and file structure is quite simple and very malleable. Bad – in that you can have duplicate file names and there’s no ability to span clips. Think of it like a roll of 35mm negative: it holds about 11 minutes and new metadata is added when it’s transferred to video.

Since files are sequentially numbered on the memory card, once you start recording to the next card, it’s likely to have repeating file names. This is true both in the camera and on a recorder like the Zoom, simply because there is no reel (i.e. card) ID name or number. The good news is that you can freely rename these files without corrupting any metadata – something you can’t do with RED or P2 – but it means you have to manually impose some sort of structure yourself.

R-Name – One utility that can help is R-Name. Unfortunately it may be out of development, but I still use version 3, which works with Snow Leopard. You might be able to find a download still lurking in the depths of the internet – or, if not, a similar utility or an Automator routine. R-Name lets you rename files (as the name implies), but you can also append prefix or suffix character strings to a file name. For example, a set of media files from a 5D may be named MVI_1073.mov through MVI_1200.mov and you’d like to add a prefix for Card 1. Simply create an R-Name batch that adds a prefix such as “C001_” to all these files. Run the batch and voila – your files are now named C001_MVI_1073.mov through C001_MVI_1200.mov. Follow this process for each card and it becomes a nice, fast way of organizing your media.
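If you can’t track down R-Name, the same prefixing is only a few lines of Python (or an Automator action). This is just a sketch that assumes the MVI_*.mov naming shown above – run it on copies first:

```python
import glob
import os

def prefix_card_files(folder, prefix):
    """Add a card-ID prefix to every 5D movie in a folder,
    e.g. MVI_1073.mov becomes C001_MVI_1073.mov."""
    for path in sorted(glob.glob(os.path.join(folder, "MVI_*.mov"))):
        head, name = os.path.split(path)
        if not name.startswith(prefix):          # safe to re-run
            os.rename(path, os.path.join(head, prefix + name))

# prefix_card_files("/Volumes/Media/Card01", "C001_")
# prefix_card_files("/Volumes/Media/Card02", "C002_")
```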

QtChange – If reel numbers and timecode are important for you to have, then check out VideoToolshed’s QtChange. This is a comprehensive QuickTime utility, which lets you alter several file parameters. Most importantly, you can add or change reel number and timecode values. Although this isn’t essential for you to cut in FCP, certain functions, like dupe detection, won’t work without an assigned reel number. There are several ways to alter this info in QtChange, but one of the ways it can work is to automatically use the date stamp of the file for the reel number and the time stamp as a starting timecode number. Files can be changed in a batch, but be careful as these are destructive changes. Developer Bouke Vahl has been making ongoing changes to the product and recently added Avid Log Exchange functionality.
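As a rough illustration of that date/time-stamp idea (this is not QtChange’s actual code), here’s how you might derive a reel ID and a start timecode from a clip’s file timestamps:

```python
import datetime
import os

def reel_from_date(path):
    """Use a clip's date stamp as a reel ID, e.g. '20100214'."""
    t = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    return t.strftime("%Y%m%d")

def start_tc_from_time(path):
    """Turn a clip's time stamp into a start timecode. Frames are zeroed,
    since the file system only stores whole seconds - illustrative only."""
    t = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    return f"{t.hour:02d}:{t.minute:02d}:{t.second:02d}:00"

# print(reel_from_date("MVI_1073.mov"), start_tc_from_time("MVI_1073.mov"))
```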

MetaCheater – One deficiency of Avid Media Composer has been the inability to directly read all of the metadata from a QuickTime file. For instance, older versions of Media Composer and Symphony would not read QuickTime timecode. This has been corrected in the most recent versions; these apps now import the timecode, but still no reel number. In addition, the Canon cameras don’t generate timecode or reel numbers, so you must add them if you need such information. You could use QtChange to add reel IDs and timecode, and Media Composer would import the timecode, but then there’s still the reel ID problem. MetaCheater is a simple way around this. This program extracts QuickTime metadata and creates an Avid Log Exchange (ALE) file with proper reel numbers and timecode values. Import the ALE file into Media Composer and then batch import the corresponding QuickTime movies. In this process, Media Composer uses the timecodes and reel numbers from the ALE instead of default values, with the result that your Avid bins properly reflect the reel and timecode information added to the 5D files. It would be just as if this media had been captured from a videotape source.
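For Avid editors who like to roll their own tools, an ALE is just a tab-delimited text file with Heading, Column and Data sections. The Python sketch below writes a minimal one in the spirit of MetaCheater; the headings and column names here are the commonly used ones, so compare the output against an ALE exported from your own Media Composer before relying on it:

```python
def write_ale(path, clips, fps="29.97", video_format="1080"):
    """Write a minimal Avid Log Exchange file. `clips` is a list of dicts
    with 'name', 'tape', 'start' and 'end' keys."""
    lines = [
        "Heading",
        "FIELD_DELIM\tTABS",
        f"VIDEO_FORMAT\t{video_format}",
        f"FPS\t{fps}",
        "",
        "Column",
        "Name\tTape\tStart\tEnd",
        "",
        "Data",
    ]
    for c in clips:
        lines.append(f"{c['name']}\t{c['tape']}\t{c['start']}\t{c['end']}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# write_ale("card01.ale", [{"name": "C001_MVI_1073", "tape": "C001",
#                           "start": "10:22:31:00", "end": "10:22:54:00"}])
```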

Here are a few comparisons of the color grading applied to these shots.

(Before-and-after grading comparisons for two shots: original image, graded image and a split-screen view of each.)

Addendum (Feb 2010)

After I initially wrote this article in January, I pulled it down for some tweaks. In the interim, I got busy for a few weeks before I could repost it. In that time, I was able to do some more testing with Avid Media Composer 4.0.5 on another Canon 5D spot. I’m adding my observations here, since many of my readers are Avid cutters and want to know the best way to handle these files in Media Composer.

Unlike FCP, there’s no simple drag-and-drop method in Avid. If you elect to convert the files using an external encoding application, you still have to bring the files in through Avid’s import routines. This adds a step and effectively doubles the total time it takes to convert and import as compared with FCP. Another frustrating issue is that when you move from the native camera files into Avid, you have to move out of the QuickTime color and gamma architecture and into an MXF structure using Avid codecs.

In the Avid world, video files are treated using the rec. 601/709 colorspace (16-235 on an 8-bit scale) and computer files are assumed to be in RGB space (0-255). When you import or export files to and from Media Composer, you always need to check the proper setting – RGB or 601/709. Unfortunately (or fortunately depending on your POV), this is largely hidden from view in the QuickTime world. Furthermore, Canon really hasn’t provided documentation that I’m aware of regarding the colorspace that these cameras work in and how closely color scaling conforms to either RGB or rec. 709. The long and short of it is that when you move in and out of QuickTime, you are often fighting level and gamma changes to varying degrees.
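To picture what’s at stake, the RGB-versus-601/709 choice is essentially a levels-scaling decision. A simplified sketch of the 8-bit math (ignoring gamma and chroma handling):

```python
def full_to_video_range(code):
    """Map an 8-bit full-range value (0-255) into video range (16-235)."""
    return 16 + code * 219 / 255

def video_to_full_range(code):
    """Map an 8-bit video-range value (16-235) back to full range (0-255)."""
    return (code - 16) * 255 / 219

# Pick the wrong import/export setting and one of these scalings is applied
# when it shouldn't be - e.g. black that should sit at 16 is lifted to ~30:
print(round(full_to_video_range(16)))   # 30
```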

I tried a number of different import and encoding methods with Media Composer. All of them work, but with various trade-offs. The easiest method is as I outlined earlier in this article – simply import the H.264 camera files into Media Composer. When you do that, select the RGB color space. The import runs at roughly 3:1 to 4:1 (relative to footage length) on a fast machine, depending on the target codec you choose, because the media is being transcoded during this import stage. I had the fastest encoding times using the Sony XDCAM-EX codec, which is now natively supported by Media Composer.

A second option is to use Apple Compressor (or another QuickTime encoder) to convert the camera files into QuickTime movies using an Avid DNxHD codec. This is the same approach as converting to Apple ProRes 422. Unfortunately, Avid still imposes a longer import time to get these files from QuickTime MOVs into the MXF media format. Although Compressor offers a choice between RGB and 709 when you select DNxHD, it doesn’t seem to make any difference in the appearance of the files. The files are converted to 709 color space and so should be imported into Avid with the import setting on 709. I hope that this import step will be eliminated at some point in the future, when and if Avid decides to support QuickTime files through its AMA feature.

The fastest current method was to use Episode Pro again. MXF is now supported in this encoder, so I was able to convert the H.264 files into MXF-wrapped XDCAM-EX files that were ready for Avid. The beauty of this approach is that the work can be done on an external machine as a batch, and the import back into Media Composer is very fast. No transcoding is needed, as this just becomes a file copy. The EX codec looked clean and wasn’t too taxing on my Mac Pro. You also have the option of using the XDCAM-HD and XDCAM HD 422 (50Mbps) codecs in the MXF file format. The only issue was that one of the media files appeared to be corrupt after encoding and had to be re-encoded. This might be an anomaly, but we ARE dealing with two long-GOP codecs in this process! Another benefit of this route is that no user interaction is required to determine color space settings.

Now to the level issues. In all of this back and forth – once I exported back out to QuickTime (ProRes 422 codec, using RGB setting on export) – no conversion identically matched the original camera files. When I compared versions, direct import of the files (H.264 into Avid) yielded slightly darker results. External conversion to DNxHD and then importing, yielded a slight gamma shift. Conversion/import via the MXF route appeared a bit lighter than the original. None of these were major differences, though. If you are going to color grade the final product anyway, it doesn’t really matter. I finally settled on a 2-step conversion workflow (described in my February 21 post) that yielded good results going from the 5D files into Media Composer and then to FCP.

As far as editing, syncing and grading, that is the same as with any other acquisition media. I used the same preparatory steps as outlined earlier (Cinema Tools conform to 29.97 and a .999 speed adjustment of the audio) – then converted and imported the video files. Inside Media Composer (1080p/29.97 project), everything synced and edited just as I expected.

Also in early February, Canon announced its EOS Movie Plugin-E1 for Final Cut Pro. Click here for the description. It’s supposed to be released in March and if I understand their description correctly, it allows you to import camera clips via FCP’s Log and Transfer module. During the import stage, files are transcoded to ProRes. Unfortunately there is no explanation of how frame rates are handled, so I presume the files are imported and remain at their original frame rate.

My conclusion after all of this is that both FCP and Media Composer are just fine for working with HDSLR projects. FCP seems a bit faster at the front, but in the end, you’re just traveling two different roads to get to the same destination.

I leave you with one last tidbit to ponder. Apple has just introduced Aperture 3, which includes HD video clip support in slideshows. I wonder how apps like Aperture, Lightroom and Photoshop (already supports some video functions) will impact these HDSLR workflows in the future?

(UPDATE: If you got here through links from other blogs, make sure you read the updated Round III post as well.)

Useful Links

5DMk2 blog – 1001 Noisy Cameras

Assisted Editing

Philip Bloom

Canon Explorers of Light

Canon Filmmakers

Cinema5D

DSLR HD

DVinfo

DVXuser

Element Technica

FreshDV

Tyler Ginter

Vincent Laforet

ProLost

Red Rock Micro

Bruce Sharpe

Spherico

Peter Wiggins

Planet5D

Video Toolshed

Zacuto

©2010 Oliver Peters