Easy Canon 5D post – Round III

The interest in HDSLR production and post shows no sign of waning. Although some of this information will seem redundant with earlier articles (here and here), I decided it was a good time to set down a working recipe of how I like to deal with these files. To some extent this is a “refresh” of the Round II article, given the things I’ve learned since then. The Canon cameras are the dominant choice, but that’s only true for today. Nikon is coming on strong with its D7000 and Panasonic has made a serious entry into the large-format-sensor video camera market with its Micro 4/3” AG-AF100. In six months, the post workflows might once again change.

To date, I have edited about 40 spots and short-form videos that were all shot using the Canon EOS 5D Mark II. Many of the early post issues, like the need to convert frame rates, are now behind us. This means fewer variables to consider. Here is a step-by-step strategy for working with HDSLR footage, specifically from Canon 5D/7D/1D HDSLR cameras.


Before doing anything with the camera files, it is IMPERATIVE that you clone the camera cards. This is your “negative” and you ALWAYS want to preserve it in its original and UNALTERED form. One application to consider for this purpose is Videotoolshed’s Offloader.

Once that’s out of the way, the first thing I do with files from a Canon 5D or 7D is convert them to the Apple ProRes codec. Yes, various NLEs can natively work with the camera’s H.264 movie files, but I still find this native performance to be sluggish. I prefer to organize these files outside of the NLE and get them into a codec that’s easy to deal with using just about any editing or compositing application. Generally I will use ProResLT; however, if there is a real quality concern, because the project may go through heavier post, then use standard ProRes or ProResHQ. Avid editors may choose to use an Avid DNxHD codec instead.

I have tried the various encoders, like Compressor or Grinder, but in the end have come back to MPEG Streamclip. I haven’t tried 5DtoRGB yet, because it is supposed to be a very slow conversion and most TV projects don’t warrant the added quality it may offer. I have also had unreliable results using the FCP Log and Transfer EOS plug-in. So, in my experience, MPEG Streamclip has not only been the fastest encoder, but will easily gobble a large batch without crashing and delivers equal quality to most other methods. 32GB CF cards will hold about 90-96 minutes of Canon video, so a shoot that generates 4-8 cards in a day means quite a lot of file conversion and you need to allow for that.

MPEG Streamclip allows you to initiate four processes in the batch at one time, which means that on a 4, 8 or 12-core Mac Pro, your conversion will be approximately real-time. The same conversion runs about 1.5x real-time (slower) using the EOS plug-in. The real strength of MPEG Streamclip is that it doesn’t require FCP, so data conversion can start on location on an available laptop, if you are really in that sort of rush.
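Those throughput numbers make it easy to budget the conversion time for a shoot. Here’s a back-of-envelope sketch; the helper function and example figures are mine, purely illustrative:

```python
def conversion_hours(cards, mins_per_card=90, speed_factor=1.0):
    """Rough transcode budget: total footage times the encoder's speed
    factor (1.0 = roughly real-time, as with MPEG Streamclip on a
    multi-core Mac Pro; 1.5 = the slower EOS plug-in)."""
    return cards * mins_per_card * speed_factor / 60.0

print(conversion_hours(6))           # a 6-card day at real-time: 9.0 hours
print(conversion_hours(6, 90, 1.5))  # the same day via the EOS plug-in: 13.5
```

Either way, that’s most of a working day of encoding, which is why starting the conversion on location can matter.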

Timecode and reel numbers

The Canon camera movie files contain little or no metadata, such as a timecode track. There is a THM file (thumbnail file) that contains a date/time stamp. The EOS plug-in, as well as some applications, use this to derive timecode that more-or-less corresponds to TOD (time-of-day) code. In theory, this means that consecutive clips should not have any timecode overlap, but unfortunately I have not found that to be universally true. In my workflow, I generally never use these THM files. My converted ProRes files end up in separate folders that simply contain the movie files and nothing else.

It is important to settle on a naming strategy for the cards. This designator will become the reel ID number, which will make it easy to trace back to the origin of the footage months later. You may use any scheme you like, but I recommend a simple abbreviation for location/day/camera/card. For example, if you shoot for several days in San Francisco with two cameras, then Day 1, Camera 1, Card 1 would be SF01A001 (cameras are designated as A, B, C, etc.); Day 1, Cam 2, Card 1 would be SF01B001; Day 2, Cam 1, Card 3 would be SF02A003 and so on. These card ID numbers are consistent with standard EDL conventions for numbering videotape reels. Create a folder for each card’s contents using this scheme and make sure the converted ProRes files end up in the corresponding folders.
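Because the scheme is purely mechanical, it’s trivial to generate card IDs in code. A minimal sketch (the `card_id` helper is hypothetical, not part of any tool mentioned here):

```python
def card_id(location, day, camera, card):
    """Build a reel ID like SF01A001: location abbreviation, two-digit
    day, camera letter (1 -> A, 2 -> B, ...), three-digit card number."""
    cam_letter = chr(ord("A") + camera - 1)
    return f"{location}{day:02d}{cam_letter}{card:03d}"

print(card_id("SF", 1, 1, 1))  # SF01A001
print(card_id("SF", 1, 2, 1))  # SF01B001
print(card_id("SF", 2, 1, 3))  # SF02A003
```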

I use QtChange to add timecode to the movie files. I will do this one folder at a time, using the folder name as the reel number. QtChange will embed the folder name (like SF01A001) into the file as the reel number when it writes the timecode track. I’m not a big fan of TOD code and, as I mentioned, the THM files have posed some problems. Instead, I’ll assign new timecode values in QtChange – typically a new hour digit to start each card. Card 1 starts at 1:00:00:00. Card 2 starts at 2:00:00:00 and so on. If Card 1 rolled over into the next hour digit, I might increment the next card’s starting value. So Card 2 might start at 2:30:00:00 or 3:00:00:00, just depending on the overall project. The objective is to avoid overlapping timecodes.
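That hour-per-card logic can be sketched as a small function, assuming you know roughly how long each card ran. `card_start_hours` is my own illustration, not a QtChange feature:

```python
import math

def card_start_hours(durations_hours):
    """Assign each card a start timecode on an hour boundary, skipping
    ahead extra hours whenever a card ran long, so no two cards'
    timecode ranges can overlap."""
    starts = []
    next_hour = 1                    # Card 1 starts at 01:00:00:00
    for dur in durations_hours:
        starts.append(f"{next_hour:02d}:00:00:00")
        next_hour += max(1, math.ceil(dur))
    return starts

# A 1.4-hour Card 2 pushes Card 3 out to hour 4:
print(card_start_hours([0.8, 1.4, 0.9]))
# ['01:00:00:00', '02:00:00:00', '04:00:00:00']
```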

Renaming files

I never change the names of the original H.264 camera files. Since I might need to get back to these files from the converted ProRes media at some point in the future, I will need to be able to match names, like MVI_9877.mov or MVI_1276.mov. This means that I won’t remove the movie file name from the ProRes files either, but it is quite helpful to append additional info to the file name. I use R-Name (a file renaming batch utility) to do this. For example, I might have a set of files that constitute daytime B-roll exterior shots in Boston. With R-Name, I’ll add “-Bos-Ext” after the file name and before the .mov extension.
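The same append-a-suffix rename is easy to script if you don’t have R-Name handy. A sketch, using a hypothetical `append_suffix` helper that defaults to a dry run so nothing is touched until you’re sure:

```python
from pathlib import Path

def append_suffix(folder, suffix, dry_run=True):
    """Insert a descriptive tag (e.g. '-Bos-Ext') between each camera
    file name and its .mov extension, preserving the original MVI_
    number for traceability. Returns (old, new) name pairs."""
    renames = []
    for clip in sorted(Path(folder).glob("*.mov")):
        target = clip.with_name(clip.stem + suffix + clip.suffix)
        renames.append((clip.name, target.name))
        if not dry_run:
            clip.rename(target)
    return renames

# append_suffix("/media/SF01A001", "-Bos-Ext")
# -> [('MVI_9877.mov', 'MVI_9877-Bos-Ext.mov'), ...]
```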

In the case of interview clips, I’ll manually append a name, like “-JSmith-1” after the movie name. By using this strategy, I am able to maintain the camera’s naming convention for an easy reference back to the original files, while still having a file that’s easy to recognize simply by its name.

Double-system sound

The best approach for capturing high-quality audio on an HDSLR shoot is to bring in a sound mixer and employ film-style, double-system sound techniques. Professional audio recorders, like a Zaxcom DEVA, record broadcast WAVE files, which will sync up just fine and hold sync through the length of the recording. Since the 5D/7D/1D cameras now record properly at 23.98, 29.97 or 25fps, no audio pulldown or speed adjustment should be required for sync.

If you don’t have the budget for this level of audio production, then a Zoom H4n (not the H4) or a Tascam DR-100 are viable options. Record the files at 48kHz sampling in a 16-bit or 24-bit WAVE format. NO MP3s. NO 44.1kHz.

The Zaxcom will have embedded timecode, but the consumer recorders won’t. This doesn’t really matter, because you should ALWAYS use a slate with a clapstick to provide a sync reference. If you use a recorder like a Zaxcom, then you should also use a slate with an LED timecode display. This makes it easy to find the right sound file. In the case of the Zoom, you should write the audio track number on the slate, so that it’s easy to locate the correct audio file in the absence of timecode.

You can sync up the audio manually in your NLE by lining up the clap on the track with the picture – or you can use an application like Singular Software’s PluralEyes. I recommend tethering the output of the audio recorder to the camera whenever possible. This gives you a guide track, which is required by PluralEyes. Ideally, this should have properly matched impedances so it’s usable as a back-up. It may be impractical to tether the camera, in which case, make sure to record reference audio with a camera mic. This may pose more problems for PluralEyes, but it’s better than nothing.
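Under the hood, waveform-sync tools like PluralEyes find the offset at which two recordings of the same event line up best. Here’s a toy version of the idea using simple cross-correlation; real products are far more sophisticated about noise, drift and long files:

```python
import numpy as np

def sync_offset(camera_audio, recorder_audio):
    """Return how many samples earlier the recorder started than the
    camera, by locating the peak of the cross-correlation between the
    recorder track and the camera's guide track."""
    corr = np.correlate(recorder_audio, camera_audio, mode="full")
    return int(np.argmax(corr)) - (len(camera_audio) - 1)

cam = np.array([0.0, 0.0, 1.0, 0.0])            # clap at sample 2
rec = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # same clap at sample 4
print(sync_offset(cam, rec))  # 2: trim 2 samples off the recorder to align
```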

Singular Software has recently introduced DualEyes as a standalone application for syncing double-system dailies.

Your edit system

As you can see, most of this work has been done before ever bringing the files into an NLE application. To date, all of my Canon projects have been cut in Final Cut and I continue to find it to be well-suited for these projects – thanks, in part, to this “pre-edit” file management. Once you’ve converted the files to ProRes or ProResLT, though, they can easily be brought into Premiere Pro CS5 or Media Composer 5. The added benefit is that the ProRes media will be considerably more responsive in all cases than the native H.264 camera files.

Although I would love to recommend editing directly via AMA in Media Composer 5, I’m not quite sure Avid is ready for that. In my own experience, Canon 5D/7D/1D files brought in using AMA as either H.264 or ProRes are displayed at the proper video levels. Unfortunately others have had a different experience, where their files come in with RGB values that exhibit level excursions into the superwhite and superblack regions. The issue I’ve personally encountered is that when I apply non-native Avid AVX effects, like Boris Continuum Complete, Illusion FX or Sapphire, the rendered files exhibit crushed shadow detail and a shifted gamma value. For some reason, the native Avid effects, like the original color effect, don’t cause the same problem. However, it hasn’t been consistent – that is, levels aren’t always crushed.

Recommendations for Avid Media Composer editors

If you are an Avid editor using Media Composer 5, then I have the following recommendations for when you are working with H.264 or ProRes files. If you import the file via AMA and the levels are correct (black = 16, peak white = 235), then transcode the selected cut to DNxHD media before adding any effects and you should be fine. On the other hand, if AMA yields incorrect levels (black = 0, peak white = 255), then avoid AMA. Import “the old-fashioned way” and set the import option for the incoming file as having RGB levels. Avid has been made aware of these problems, so this behavior may be fixed in some future patch.

There is a very good alternative for Avid Media Composer editors using MPEG Streamclip for conversion. Instead of converting the files to one of the ProRes codecs, convert them to Avid DNxHD (using 709 levels), which is also available under the QuickTime options. I have found that these files link well to AMA and, at least on my system, display correct video levels. If you opt to import these the “old” way (non-AMA), the files will come in as a “fast import”. In this process, the QuickTime files are copied and rewrapped as MXF media, without any additional transcoding time.

“Off-speed” files, like “overcranked” 60fps clips from a Canon 7D, can be converted to a different frame rate (like 23.98, 25 or 29.97) using the “conform” function of Apple Cinema Tools. This would be done prior to transcoding with MPEG Streamclip.
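The conform trick is just a change of playback rate, so the resulting slow-motion factor is straightforward arithmetic. This helper is illustrative, not part of Cinema Tools:

```python
def conform_speed(shot_fps, conform_fps):
    """Playback speed and duration stretch when a clip is conformed,
    i.e. retimed by relabeling its frame rate with no frames added
    or dropped."""
    speed = conform_fps / shot_fps    # fraction of real-time speed
    stretch = shot_fps / conform_fps  # how much longer the clip plays
    return speed, stretch

speed, stretch = conform_speed(59.94, 23.976)
print(f"{speed:.0%} speed, {stretch:.2f}x longer")  # 40% speed, 2.50x longer
```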

Avid doesn’t use the embedded reel number from a QuickTime file in its reel number column. If this is important for your workflow, then you may have to manually modify files after they have been imported into Media Composer or generate an ALE file (QtChange or MetaCheater) prior to import. That’s why a simple mnemonic, like SF01A001 is helpful.

Although this workflow may seem a bit convoluted to some, I love the freedom of being able to control my media in this way. I’m not locked into fixed metadata formats like P2. This freedom makes it easier to move files through different applications without being wedded to a single NLE.

Here are some more options for Canon HDSLR post from another article written for Videography magazine.

©2010 Oliver Peters

Grind those EOS files!

I have a love/hate relationship with Apple Compressor and am always on the lookout for better encoding tools. Part of our new file-based world is the regular need to process/convert/transcode native acquisition formats. This is especially true of the latest crop of HDSLRs, like the Canon EOS 5D Mark II and its various siblings. A new tool in this process is Magic Bullet Grinder from Red Giant Software. Here’s a nice description by developer Stu Maschwitz as well as another review by fellow editor and blogger, Scott Simmons.

I’ve already pointed out some workflows for getting the Canon H.264 files into an editable format in a previous post. Although Avid Media Composer 5, Adobe Premiere Pro CS5 and Apple Final Cut Pro natively support editing with the camera files – and although there’s already a Canon EOS Log and Transfer plug-in for FCP – I still prefer to convert and organize these files outside of my host NLE. Even with the newest tools, native editing is clunky on a large project and the FCP plug-in precludes any external organization, since the files have to stay in the camera’s folder structure with their .thm files.

Magic Bullet Grinder offers a simple, one-step batch conversion utility that combines several functions that otherwise require separate applications in other workflows. Grinder can batch-convert a set of HDSLR files, add timecode and simultaneously create proxy editing files with burn-in. In addition, it will upscale 720p files to 1080p. Lastly, it can conform frame-rates to 23.976fps. This is helpful if you want to shoot 720p/60 with the intent of overcranking (displayed as slow motion at 24fps).

The main format files are converted to either the original format (with added timecode), ProRes, ProRes 4444 or two quality levels of PhotoJPEG. Proxies are either ProRes Proxy or PhotoJPEG, with the option of several frame size settings. In addition, proxy files can have a burn-in with various details, such as frame numbers, timecode, file name + timecode or file name + frame numbers. Proxy generation is optional, but it’s ideal for offline/online editing workflows or if you simply need to generate low-bandwidth files for client review.

Grinder’s performance is based on the number of cores. It sends one file to each core, so in theory, eight files would be simultaneously processed on an 8-core machine. Speed and completion time will vary, of course, with the number, length and type of files and whether or not you are generating proxies. I ran a head-to-head test (main format only, no proxy files) on my 8-core MacPro with MPEG Streamclip and Compressor, using 16 H.264 Canon 5D files (about 1.55GB of media or 5 minutes of footage). Grinder took 12 minutes, Compressor 11 minutes and MPEG Streamclip 6 minutes. Of course, neither Compressor nor MPEG Streamclip would be able to handle all of the other functions – at least not within the same, simplified process. The conversion quality of Magic Bullet Grinder was quite good, but like MPEG Streamclip, it appears that ProRes files are generated with the QuickTime “automatic gamma correction” set to “none”. As such, the Compressor-converted files appeared somewhat lighter than those from either Grinder or MPEG Streamclip.

This is a really good effort for a 1.0 product, but in playing with it, I’ve discovered it has a lot of uses outside of HDSLR footage. That’s tantalizing and brings to mind some potential suggestions as well as issues with the way that the product currently works. First of all, I was able to convert other files, such as existing ProRes media. In this case, I would be interested in using it to ONLY generate proxy files with a burn-in. The trouble now is that I have to generate both a new main file (which isn’t needed) as well as the proxy. It would be nice to have a “proxy-only” mode.

The second issue is that timecode is always newly generated from the user entry field. Grinder doesn’t read and/or use an existing QuickTime timecode track, so you can’t use it to generate a proxy with a burn-in that matches existing timecode. In fact, if your source file has a valid timecode track, Grinder generates a second timecode track on the converted main file, which confuses both FCP and QuickTime Player 7. Grinder also doesn’t generate a reel number, which is vital data used by many NLEs in their media management.

I would love to see other format options. For instance, I like ProResLT as a good format for these Canon files. It’s clean and consumes less space, but isn’t a choice with Grinder. Lastly, there are the conform options. When Grinder conforms 30p and 60p files to 24p (23.976), it’s merely doing the same as Apple Cinema Tools by rewriting the QuickTime playback rate metadata. The file isn’t converted, but simply told to play more slowly. As such, it would be great to have more options, such as 30fps to 29.97fps for the pre-firmware-update Canon 5D files. Or conform to 25fps for PAL countries.

I’ve seen people comment that it’s a shame it won’t convert GoPro camera files. In fact it does! Files with the .mp4 extension are seen as an unsupported format. Simply change the file extension from .mp4 to .mov and drop it into Grinder. Voila! Ready to convert.

At $49 Magic Bullet Grinder is a great, little utility that can come in handy in many different ways. At 1.0, I hope it grows to add some of the ideas I’ve suggested, but even with the current features, it makes life easier in so many different ways.

©2010 Oliver Peters

Solutions to Improve FCP’s Media Management

Media management has long been considered Apple Final Cut Pro’s Achilles’ Heel. In reality, FCP has gotten better in this regard and does a pretty decent job of linking project master clips to media. The shortcomings of FCP media management become apparent when projects are moved around among different edit systems, hard drives and editors. I’ve started to dabble with a few different applications that improve on FCP’s native abilities. I’ll bring these to you on an irregular basis, once I get a chance to do a bit more testing.

The first of these is FcpReconnect from VideoToolShed. This is the brainchild of Bouke Vahl, a Dutch editor and software developer. FcpReconnect may be used in a number of different ways, but in general, works by linking files based on matching reel numbers and timecode. For FCP editors, it provides an excellent solution to projects that use an offline-online edit workflow. Since reel number and timecode are the key, you are less subject to FCP’s need to have file names that completely match. For most workflows, there are two basic ways of using FcpReconnect: a) consolidation and relink or b) relink via XML.

Method A – Consolidate and Relink

As a test, I started with footage from a recent Canon EOS 5D Mark II project. The native camera files are 1920×1080 H.264, 30fps and have no reel numbers or timecode. As I described in a previous post, I converted the media to Apple ProResLT in Compressor, conformed the files to 29.97fps in Cinema Tools and added reel numbers and timecode using QtChange – another handy application from VideoToolShed.

To test FcpReconnect, I used Compressor again to convert the hi-res ProResLT “master” files into DV anamorphic “proxy” files for offline editing. The DV files have the same reel number and timecode, but aren’t an exact file name match, as they had a “DV” suffix appended to the clip name.

I created an FCP edit project (NTSC DV anamorphic) and assembled a basic edit sequence using the DV proxy files. In this example, the DV clips are independent from the hi-res files, which would be the workflow if I decided to do an offline edit on my laptop or gave the files to another editor to cut segments for me. Only the DV files would be the sources in this edit.

Once the edit is done, the next step is to use FCP’s Media Manager to create an offline project. Set the target format to match the hi-res media (1920×1080/30p ProResLT) and set short handle lengths. This creates a new project, with only the clips that were used in the cut. The media for these hi-res clips will show up as “offline”, of course. Next, export a Batch List of this new sequence.

Open FcpReconnect and first make sure you have selected the right timecode standard in the Set Up pulldown menu. VideoToolShed is in The Netherlands, so the default at first launch will be PAL. Once you’ve set this, select the target media folder (the hi-res HD files) and select the Batch List that you just exported. Once it finds all of the matches, you have a few options.

For this test, I chose to use clip names and copy/trim self-contained media of the selected files. This is the equivalent of Avid’s “consolidate” feature.

The clips that are used in the edited sequence are copied to a new folder with a duration equal to the edited length on the timeline, plus the handles. It also renames the media files to match the clip names used in the sequence.

Return to FCP and reconnect the media (currently offline) of the hi-res sequence to the newly consolidated files. Typically, once the first file is located, the others will be automatically found. You will get an FCP dialogue box, because the new media attributes will not completely match the expected attributes. This is normal. Simply click “continue” and you’ll be OK.

Let me caution that I would still avoid wildly renaming clips inside the FCP browser. The Canon files are sequentially numbered movie files. I tried some tests in which I completely renamed these files. For example, “MVI_2061-DV” might have been renamed to “Richard CU”. Most of the time this worked fine, but I did have a few clips that would not relink. My recommendation is still to use other columns in FCP or at least to leave the number as part of the new clip name. This will make it easier if you must manually locate a few files. I had no such problems in the tests where I left the master clip name the same as its corresponding media file name.

Method B – Relink via XML

An alternate method is to skip the consolidation step. After all, if you already have the hi-res media on your drives, you might not want to copy these files again. In Method B, you’d start the same way with hi-res and proxy files. Edit the proxy project and then use FCP Media Manager to create a new offline project matching the hi-res format. Export a Batch List AND an XML file from this new offline sequence.

In FcpReconnect, pick the target (hi-res) media folder and the Batch List. Instead of copying media, open the XML file. FcpReconnect analyzes the XML against the Batch List and the target media folder and generates a new XML.

Open this new XML file in Final Cut Pro and select “create new project”. The result will be a new FCP project containing one sequence, which is linked to the hi-res media. If you have done this properly, the sequence settings should match the target HD format (ProResLT in my example).

You can make sure the sequence clips are linked to the right media by checking the media path in “item settings”.

In addition, you can also verify frame-accuracy by placing the proxy edit sequence over the hi-res edit sequence and making sure everything lines up. My tests were all accurate.

VideoToolShed’s FcpReconnect is one of a number of applications being developed to fill in the gaps of Final Cut Pro’s media management. It’s clear to see that with a little care, it doesn’t take much to make FCP a far more robust NLE.

©2010 Oliver Peters

CoreMelt Lock and Load X

CoreMelt offers a number of GPU-accelerated plug-in sets for Final Cut Pro, Final Cut Express, Motion and After Effects. One powerful collection is CoreMelt Complete V2 (“v-twin”), which is currently up to 200 filters, transitions and generators. However, a very cool, separate filter is their Lock & Load X stabilization plug-in for Final Cut Pro. The original Lock & Load filter was updated to Lock & Load X (a free upgrade for L&L users) shortly after NAB2010 and gained a significant new feature – Rolling Shutter Reduction.

Rolling shutter artifacts – the so-called “jello-cam” effect – have been the bane of CMOS-sensor cameras, most notably the HD-capable DSLR still cameras. The short answer for why this happens is that objects in the frame move during the time interval between the data being read out from the top of the sensor and from the bottom. The visual manifestation is skewing or a wobble to the image on fast horizontal motion or shaky handheld shots. CoreMelt’s Lock & Load X is designed to be used for both standard image stabilization, as well as reduction of these artifacts.

Final Cut Pro already includes a very good, built-in stabilization filter in the form of Smoothcam – a technology inherited from Shake. So why buy Lock & Load X when you already own Smoothcam? Two answers: speed and rolling shutter artifact reduction. Generally, Lock & Load X is faster than Smoothcam, although this isn’t always a given. CoreMelt claims up to 12 times faster than Smoothcam, but that’s relative. One important factor is the length of the clip. When Smoothcam analyzes a clip to apply stabilization, it must process the entire media clip, regardless of how long a clip was cut into the sequence. If the media clip is five minutes long, then Smoothcam processes all five minutes. Fortunately, this can proceed as a background function.

In contrast, Lock & Load X only analyzes the length of the clip that is actually in the sequence. If you only used ten seconds out of the five minutes, then Lock & Load X only processes those ten seconds. In this example, processing times between Smoothcam and Lock & Load X would be dramatically different. On the other hand, if you used the complete length of the clip, then processing times for the two might be similar. I’m not exactly sure whether Lock & Load X uses the same type of GPU-acceleration as the V2 filters, so I don’t know whether these processing times change with the card you have in your machine. I’m running a stock NVIDIA GeForce 120 in my Mac Pro, so it could be that an ATI or NVIDIA FX4800 card might show even better results with Lock & Load X. I don’t know the answer to that one, but in any case, processing a 1920 x 1080 ProResLT clip that was several seconds long took less than a minute for both stabilization and rolling shutter reduction.

When you compare the stabilized results between Smoothcam and Lock & Load X, you’ll generally prefer the latter. Most of the time the filter doesn’t zoom in quite as far and if you leave some movement in the image (such as with handheld shots), the “float” of the image feels more natural. However, there are exceptions. I tested one clip with a hard vertical adjustment by the cameraman. At that point, Smoothcam looked more natural than Lock & Load X, which introduced a slight rotation in correcting that portion of the clip. Another difference is real-time performance. On my machine, Smoothcam left me with a green render bar and Lock & Load X was orange. In FCP terms, this means that the unrendered Smoothcam clip played without degraded performance, while the Lock & Load X clip dropped frames. Once rendered, there’s no difference, of course, and render times were similar between the two. Again, this result might differ with another display card.

Rolling shutter artifact reduction is not unique to Lock & Load X, but as far as I know, is currently only available in one other, more expensive filter from The Foundry. In CoreMelt’s implementation, you must select the shutter coefficient, which is based on certain camera profiles supplied by CoreMelt with the filter. If you are working with Canon EOS 5D Mark II or EOS 7D footage, simply pick the camera, run the tracking analysis and you are done. You can choose to stabilize, reduce rolling shutter artifacts or both. In many cases, rolling shutter reduction is very subtle, so you might not see a massive change in the image. Sometimes, the filter simply corrects minor vertical distortions in the frame.

One application I find quite useful is with handheld shots that are intended to look like Steadicam shots. Lock & Load X does a nice job of steadying these shots without losing the natural “float” that you want to keep in the image. The “before” version might look decent, but when you compare the “after” version, it is definitely the preferable image. In order for Lock & Load X to do its magic, it has to blow-up the image slightly, so that the picture fills out to the edges of the frame. This is true of any stabilization filter, including Smoothcam. Lock & Load X does this expansion within the filter and doesn’t change motion tab size values. The filter includes a “smart zoom” feature – intelligently resizing the image throughout the clip so that the least amount of blow-up is performed at any time. For a subtle stabilization, like the handheld shot example, Lock & Load X will typically zoom the image between 7% and 12% throughout the length of the clip. Thanks to the processing used, the quality of the rendered clip will be better than if you had zoomed in 10% in FCP’s motion tab.
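The blow-up requirement follows from simple geometry: a centered zoom of factor z gives you (z - 1) * width / 2 spare pixels on each side, which must cover the largest shift the stabilizer applies. A sketch of that arithmetic (my own, not CoreMelt’s implementation):

```python
def min_zoom(max_dx, max_dy, width=1920, height=1080):
    """Smallest centered zoom that keeps the frame edges covered when
    a stabilizer shifts the image by up to (max_dx, max_dy) pixels:
    each side gains (z - 1) * dimension / 2 pixels of headroom."""
    return max(1 + 2 * max_dx / width, 1 + 2 * max_dy / height)

# Shifts of about 67 x 38 pixels on a 1080p frame need roughly a 7% blow-up:
print(f"{min_zoom(67, 38) - 1:.0%}")  # 7%
```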

CoreMelt’s Lock & Load X is a specialized filter. When you have the need for this function, it’s hard to beat. Clearly a new selling point is rolling shutter artifact reduction. Pro video cameras aren’t immune to the effect, however, since even a Sony EX uses a CMOS chip. But it’s a big factor for the HDSLRs. These cameras will continue to be the hot ticket for a while, so Lock & Load X becomes an indispensable tool for editors posting a lot of Canon and Nikon projects.

©2010 Oliver Peters

Canon 5D Avid FCP roundtrip

No, this isn’t the 5D workflow article that you’ve been waiting for. That’s still coming in another couple of weeks. In the meantime, I’ve started on another Canon 5D commercial. This time I’m cutting the project in Avid Media Composer instead of Final Cut Pro. There are a number of reasons, including some recent stability issues I’ve had with FCP. In addition, the creative treatment calls for some nice speed ramp effects. Avid’s FluidMotion is simply a much better slomo technology than anything in Final Cut. So this time, Media Composer is the right tool for the job.

In order to make sure that video levels match what I’m used to with FCP, I’ve been doing some testing of how to roundtrip files back to Final Cut. Ultimately these are web spots, so I want to make sure what I do in Media Composer matches what I do in Final Cut. When I finish editing the spot, there may be a reason to continue in FCP – such as to use Color for grading. That’s another reason to be very sure the images match, regardless of the NLE used.

That’s the dilemma. Avid has always treated video as Rec. 601/709, which means that black and white equal 16 and 235 on a scale of 0-255. This allows headroom and footroom for superwhites and “blacker than black” shadow areas. FCP doesn’t really honor this scale and seems to internally use adjusted levels of 0-235 (my guess), so it makes it tricky whenever you convert clips in and out of QuickTime. Not every QuickTime conversion is equal and you may get level, gamma, saturation and hue shifts depending on where and how the conversion is done and which codec is used.

One visible sign of this difference is how each UI displays images. An image in a Media Composer window will tend to look “flatter” on the computer display, i.e. less contrast, than the exact same image in a Final Cut window. That really doesn’t matter for most video. If you compare the Avid output through one of Avid’s DX units with FCP’s output through a Kona card, both would look the same on a broadcast monitor and scopes. In the case of these 5D spots, though, the web is the target. I have to make sure the process is as transparent as possible, since there is no I/O hardware between the NLE and the final product.

When you import a QuickTime file into Avid Media Composer you must decide whether the file’s video levels are mapped as RGB (a full 0-255 range) or 601/709 (a scaled 16-235 range). Computer files, like a Photoshop graphic, are almost always RGB. The movie files generated by the Canon EOS 5D Mark II conform to a full RGB range, so set the color level mapping to RGB when importing these files into Media Composer. This tells Media Composer that the range of levels is 0-255 and must be rescaled to 16-235 upon import, when an Avid media file is created. I had both the original H.264 and converted ProRes versions of these files available. Both matched each other, so the resulting levels inside Avid Media Composer were the same whether I picked the H.264 or ProRes file. During the import stage, these were transcoded to the DNxHD145 codec for editing within a 1080p/29.97 project.
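The level scaling itself is simple linear math. Here’s a sketch of what the RGB import/export mapping does to luma code values; this is my arithmetic for illustration, not Avid’s actual pipeline, which works in Y'CbCr internally and handles chroma separately:

```python
def rgb_to_video(v):
    """Map a full-range 0-255 code value into video range 16-235,
    as happens on import with RGB level mapping selected."""
    return round(16 + v * 219 / 255)

def video_to_rgb(y):
    """The inverse mapping, applied when exporting with RGB levels."""
    return round((y - 16) * 255 / 219)

print(rgb_to_video(0), rgb_to_video(255))   # 16 235
print(video_to_rgb(16), video_to_rgb(235))  # 0 255
```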

At this point you’d edit the same as with any other project. When done you would export a finished file for web conversion. This was the critical stage in my testing, because I wanted to be sure that I could export a file that matched any FCP version. Obviously, if you are going to color grade the footage, it’s less of an issue, since the image is going to look different than the original anyway. My main concern was to assure that the roundtrip would be as transparent as possible. In theory, the easiest approach would be to simply export a QuickTime file with a target codec (like ProRes) and be done with it. It turns out that this isn’t actually as transparent as you’d expect, presumably because of how Avid is interacting with QuickTime to write a non-Avid QuickTime codec.

The better solution takes a couple of steps, but the results are worth it. First of all, you must export from Media Composer with RGB mapping. The 16-235 levels are thus rescaled back out to 0-255 in order to match your computer display. To get the closest overall level match, you should use the Avid 1:1 codec, not one of the Apple uncompressed or ProRes codecs. You aren’t done yet. The Avid codec does display within FCP, but when I attempted to render it on an FCP timeline, the result was just digital hash. The workaround is to do a second conversion in QuickTime 7. Open the Avid 1:1 exported file in QuickTime Pro 7 and export that file again using the Apple ProRes codec.

When I brought the “round-tripped” ProRes file into FCP and split-screened it with the same clip in H.264 (from the camera) or ProRes (first generation conversion of the camera file), there was very little difference between the two clips – either visually or on the waveform. With this knowledge in hand, I’m now ready and comfortable in cutting the spot in Media Composer and won’t feel like I will make any compromise in image quality.

Here’s a recap of the steps:

  1. Import the 5D files into Avid Media Composer
  2. Use RGB mapping
  3. Cut normally
  4. Export an Avid 1:1 QuickTime movie
  5. Use RGB mapping
  6. Open file in QuickTime 7
  7. Export as Apple ProRes
  8. Import into Apple Final Cut Pro and continue working

© 2010 Oliver Peters