Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where the digital scan of a film frame at 4K is considered full resolution and a 2K scan half resolution. In the proper use of the term, 4K refers only to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference padded with black pixels.
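
To make the “container” idea concrete, here’s a quick back-of-the-envelope sketch of how an active picture fits inside the 4096 x 2160 frame. It’s illustrative only; the official DCI dimensions round slightly differently (scope 4K is defined as 4096 x 1716 and flat 4K as 3996 x 2160), but the proportions are the same.

```python
# Rough sketch: fit a film aspect ratio inside the 4K DCP container (4096 x 2160)
# and see how much of the frame ends up as black padding. Illustrative only -
# official DCI sizes are rounded slightly differently.

CONTAINER_W, CONTAINER_H = 4096, 2160

def active_image(aspect):
    """Largest width x height of the given aspect ratio that fits the container."""
    w, h = CONTAINER_W, round(CONTAINER_W / aspect)
    if h > CONTAINER_H:                         # too tall, so height becomes the limit
        w, h = round(CONTAINER_H * aspect), CONTAINER_H
    return w, h

for name, aspect in [("2.39:1 scope", 2.39), ("1.85:1 flat", 1.85)]:
    w, h = active_image(aspect)
    padding = 100 * (1 - (w * h) / (CONTAINER_W * CONTAINER_H))
    print(f"{name}: {w} x {h} active picture, ~{padding:.0f}% of the container is black")
```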

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution have quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops of dynamic range and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture, focal length and subject distance. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. But because larger sensors require longer focal lengths to cover the same field of view as standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie) called Möbius.

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Easy Canon 5D post – Round III

The interest in HDSLR production and post shows no sign of waning. Although some of this information will seem redundant with earlier articles (here and here), I decided it was a good time to set down a working recipe of how I like to deal with these files. To some extent this is a “refresh” of the Round II article, given the things I’ve learned since then. The Canon cameras are the dominant choice, but that’s for today. Nikon is coming on strong with its D7000 and Panasonic has made a serious entry into the large-format-sensor video camera market with its Micro Four Thirds AG-AF100. In six months, the post workflows might once again change.

To date, I have edited about 40 spots and short-form videos that were all shot using the Canon EOS 5D Mark II. Many of the early post issues, like the need to convert frame rates, are now behind us. This means fewer variables to consider. Here is a step-by-step strategy for working with HDSLR footage, specifically from the Canon 5D/7D/1D cameras.

Conversion

Before doing anything with the camera files, it is IMPERATIVE that you clone the camera cards. This is your “negative” and you ALWAYS want to preserve it in its original and UNALTERED form. One application to consider for this purpose is Videotoolshed’s Offloader.

Once that’s out of the way, the first thing I do with files from a Canon 5D or 7D is convert them to the Apple ProRes codec. Yes, various NLEs can natively work with the camera’s H.264 movie files, but I still find this native performance to be sluggish. I prefer to organize these files outside of the NLE and get them into a codec that’s easy to deal with using just about any editing or compositing application. Generally I will use ProResLT; however, if there is a real quality concern – because the project may go through heavier post – then use standard ProRes or ProResHQ. Avid editors may choose to use an Avid DNxHD codec instead.

I have tried the various encoders, like Compressor or Grinder, but in the end have come back to MPEG Streamclip. I haven’t tried 5DtoRGB yet, because it is supposed to be a very slow conversion and most TV projects don’t warrant the added quality it may offer. I have also had unreliable results using the FCP Log and Transfer EOS plug-in. So, in my experience, MPEG Streamclip has not only been the fastest encoder, but will easily gobble a large batch without crashing and delivers equal quality to most other methods. 32GB CF cards will hold about 90-96 minutes of Canon video, so a shoot that generates 4-8 cards in a day means quite a lot of file conversion and you need to allow for that.
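
As a rough sanity check on those capacity numbers, the math works out if you assume an average H.264 data rate of around 5.5 MB per second – an approximation, not a published spec:

```python
# Back-of-the-envelope check on CF card capacity, assuming the 5D's H.264
# files average roughly 5.5 MB/sec (about 44 Mbit/sec) - an approximation,
# not a published spec.

CARD_GB = 32
MB_PER_SEC = 5.5

minutes = (CARD_GB * 1000) / MB_PER_SEC / 60
print(f"A {CARD_GB}GB card holds roughly {minutes:.0f} minutes of footage")
# -> about 97 minutes, in line with the 90-96 minutes seen in practice
```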

MPEG Streamclip allows you to initiate four processes in the batch at one time, which means that on a 4, 8 or 12-core Mac Pro, your conversion will be approximately real-time. The same conversion runs about 1.5x real-time (slower) using the EOS plug-in. The real strength of MPEG Streamclip is that it doesn’t require FCP, so data conversion can start on location on an available laptop, if you are really in that sort of rush.

Timecode and reel numbers

The Canon camera movie files contain little or no metadata, such as a timecode track. There is a THM file (thumbnail file) that contains a date/time stamp. The EOS plug-in, as well as some applications, uses this to derive timecode that more-or-less corresponds to TOD (time-of-day) code. In theory, this means that consecutive clips should not have any timecode overlap, but unfortunately I have not found that to be universally true. In my workflow, I generally never use these THM files. My converted ProRes files end up in separate folders that simply contain the movie files and nothing else.

It is important to settle on a naming strategy for the cards. This designator will become the reel ID number, which will make it easy to trace back to the origin of the footage months later. You may use any scheme you like, but I recommend a simple abbreviation for location/day/camera/card. For example, if you shoot for several days in San Francisco with two cameras, then Day 1, Camera 1, Card 1 would be SF01A001 (cameras are designated as A, B, C, etc.); Day 1, Cam 2, Card 1 would be SF01B001; Day 2, Cam 1, Card 3 would be SF02A003 and so on. These card ID numbers are consistent with standard EDL conventions for numbering videotape reels. Create a folder for each card’s contents using this scheme and make sure the converted ProRes files end up in the corresponding folders.

I use QtChange to add timecode to the movie files. I will do this one folder at a time, using the folder name as the reel number. QtChange will embed the folder name (like SF01A001) into the file as the reel number when it writes the timecode track. I’m not a big fan of TOD code and, as I mentioned, the THM files have posed some problems. Instead, I’ll assign new timecode values in QtChange – typically a new hour digit to start each card. Card 1 starts at 1:00:00:00. Card 2 starts at 2:00:00:00 and so on. If Card 1 rolled over into the next hour digit, I might increment the next card’s starting value. So Card 2 might start at 2:30:00:00 or 3:00:00:00, just depending on the overall project. The objective is to avoid overlapping timecodes.
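
Here’s a minimal sketch of that reel ID and starting-timecode convention. The helper functions are purely illustrative – QtChange does the real work – but they show how the scheme maps out:

```python
# Illustration of the reel ID and per-card starting timecode convention
# described above. QtChange does the actual embedding; this just shows how a
# location/day/camera/card scheme maps to reel names and start hours.

def reel_id(location, day, camera, card):
    """e.g. reel_id('SF', 1, 'A', 1) -> 'SF01A001'"""
    return f"{location}{day:02d}{camera}{card:03d}"

def start_timecode(card):
    """Give each card its own hour so timecodes don't overlap: card 1 -> 01:00:00:00."""
    return f"{card:02d}:00:00:00"

for day, cam, card in [(1, "A", 1), (1, "B", 1), (2, "A", 3)]:
    print(reel_id("SF", day, cam, card), "starts at", start_timecode(card))
```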

Renaming files

I never change the names of the original H.264 camera files. Since I might need to get back to these files from the converted ProRes media at some point in the future, I will need to be able to match names, like MVI_9877.mov or MVI_1276.mov. This means that I won’t remove the movie file name from the ProRes files either, but it is quite helpful to append additional info to the file name. I use R-Name (a file renaming batch utility) to do this. For example, I might have a set of files that constitute daytime B-roll exterior shots in Boston. With R-Name, I’ll add “-Bos-Ext” after the file name and before the .mov extension.

In the case of interview clips, I’ll manually append a name, like “-JSmith-1” after the movie name. By using this strategy, I am able to maintain the camera’s naming convention for an easy reference back to the original files, while still having a file that’s easy to recognize simply by its name.
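
If you’d rather script it than click through a renaming utility, the same append-a-suffix idea takes only a few lines. This is a sketch with hypothetical paths and suffixes, not a replacement for R-Name:

```python
# Sketch of the append-a-suffix rename: keep the camera's original MVI_xxxx
# name so files can always be traced back, but add a readable tag before the
# .mov extension. The folder path and suffix here are hypothetical examples.

from pathlib import Path

def append_suffix(folder, suffix):
    for clip in Path(folder).glob("*.mov"):
        clip.rename(clip.with_name(clip.stem + suffix + clip.suffix))
        # MVI_9877.mov -> MVI_9877-Bos-Ext.mov

# append_suffix("/Volumes/Media/SF01A001", "-Bos-Ext")
```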

Double-system sound

The best approach for capturing high-quality audio on an HDSLR shoot is to bring in a sound mixer and employ film-style, double-system sound techniques. Professional audio recorders, like a Zaxcom DEVA, record broadcast WAVE files, which will sync up just fine and hold sync through the length of the recording. Since the 5D/7D/1D cameras now record properly at 23.98, 29.97 or 25fps, no audio pulldown or speed adjustment should be required for sync.

If you don’t have the budget for this level of audio production, then a Zoom H4n (not the H4) or a Tascam DR-100 are viable options. Record the files at 48kHz sampling in a 16-bit or 24-bit WAVE format. NO MP3s. NO 44.1kHz.

The Zaxcom will have embedded timecode, but the consumer recorders won’t. This doesn’t really matter, because you should ALWAYS use a slate with a clapstick to provide a sync reference. If you use a recorder like a Zaxcom, then you should also use a slate with an LED timecode display. This makes it easy to find the right sound file. In the case of the Zoom, you should write the audio track number on the slate, so that it’s easy to locate the correct audio file in the absence of timecode.

You can sync up the audio manually in your NLE by lining up the clap on the track with the picture – or you can use an application like Singular Software’s PluralEyes. I recommend tethering the output of the audio recorder to the camera whenever possible. This gives you a guide track, which is required by PluralEyes. Ideally, this should have properly matched impedances so it’s useable as a back-up. It may be impractical to tether the camera, in which case, make sure to record reference audio with a camera mic. This may pose more problems for PluralEyes, but it’s better than nothing.

Singular Software has recently introduced DualEyes as a standalone application for syncing double-system dailies.

Your edit system

As you can see, most of this work has been done before ever bringing the files into an NLE application. To date, all of my Canon projects have been cut in Final Cut and I continue to find it to be well-suited for these projects – thanks, in part, to this “pre-edit” file management. Once you’ve converted the files to ProRes or ProResLT, though, they can easily be brought into Premiere Pro CS5 or Media Composer 5. The added benefit is that the ProRes media will be considerably more responsive in all cases than the native H.264 camera files.

Although I would love to recommend editing directly via AMA in Media Composer 5, I’m not quite sure Avid is ready for that. In my own experience, Canon 5D/7D/1D files brought in using AMA as either H.264 or ProRes are displayed at the proper video levels. Unfortunately others have had a different experience, where their files come in with RGB values that exhibit level excursions into the superwhite and superblack regions. The issue I’ve personally encountered is that when I apply non-native Avid AVX effects, like Boris Continuum Complete, Illusion FX or Sapphire, the rendered files exhibit crushed shadow detail and a shifted gamma value. For some reason, the native Avid effects, like the original color effect, don’t cause the same problem. However, it hasn’t been consistent – that is, levels aren’t always crushed.

Recommendations for Avid Media Composer editors

If you are an Avid editor using Media Composer 5, then I have the following recommendations for when you are working with H.264 or ProRes files. If you import the file via AMA and the levels are correct (black = 16, peak white = 235), then transcode the selected cut to DNxHD media before adding any effects and you should be fine. On the other hand, if AMA yields incorrect levels (black = 0, peak white = 255), then avoid AMA. Import “the old-fashioned way” and set the import option for the incoming file as having RGB levels. Avid has been made aware of these problems, so this behavior may be fixed in some future patch.

There is a very good alternative for Avid Media Composer editors using MPEG Streamclip for conversion. Instead of converting the files to one of the ProRes codecs, convert them to Avid DNxHD (using 709 levels), which is also available under the QuickTime options. I have found that these files link well to AMA and, at least on my system, display correct video levels. If you opt to import these the “old” way (non-AMA), the files will come in as a “fast import”. In this process, the QuickTime files are copied and rewrapped as MXF media, without any additional transcoding time.

“Off-speed” files, like “overcranked” 60fps clips from a Canon 7D, can be converted to a different frame rate (like 23.98, 25 or 29.97) using the “conform” function of Apple Cinema Tools. This would be done prior to transcoding with MPEG Streamclip.
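
Since the conform simply retags the playback rate, the resulting slow-motion factor is just the ratio of the two frame rates:

```python
# A conform only retags the playback rate, so the slow-motion factor is just
# the ratio of the shooting rate to the conformed rate.

shot_fps = 59.94       # "overcranked" 60p material
conform_fps = 23.976   # target editing rate

factor = shot_fps / conform_fps
print(f"Plays back at {100 / factor:.0f}% speed ({factor:.1f}x slow motion)")
# -> 40% speed, i.e. 2.5x slow motion
```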

Avid doesn’t use the embedded reel number from a QuickTime file in its reel number column. If this is important for your workflow, then you may have to manually modify files after they have been imported into Media Composer or generate an ALE file (QtChange or MetaCheater) prior to import. That’s why a simple mnemonic, like SF01A001, is helpful.

Although this workflow may seem a bit convoluted to some, I love the freedom of being able to control my media in this way. I’m not locked into fixed metadata formats like P2. This freedom makes it easier to move files through different applications without being wedded to a single NLE.

Here are some more options for Canon HDSLR post from another article written for Videography magazine.

©2010 Oliver Peters

Grind those EOS files!

I have a love/hate relationship with Apple Compressor and am always on the lookout for better encoding tools. Part of our new file-based world is the regular need to process/convert/transcode native acquisition formats. This is especially true of the latest crop of HDSLRs, like the Canon EOS 5D Mark II and its various siblings. A new tool in this process is Magic Bullet Grinder from Red Giant Software. Here’s a nice description by developer Stu Maschwitz as well as another review by fellow editor and blogger, Scott Simmons.

I’ve already pointed out some workflows for getting the Canon H.264 files into an editable format in a previous post. Although Avid Media Composer 5, Adobe Premiere Pro CS5 and Apple Final Cut Pro natively support editing with the camera files – and although there’s already a Canon EOS Log and Transfer plug-in for FCP – I still prefer to convert and organize these files outside of my host NLE. Even with the newest tools, native editing is clunky on a large project and the FCP plug-in precludes any external organization, since the files have to stay in the camera’s folder structure with their .thm files.

Magic Bullet Grinder offers a simple, one-step batch conversion utility that combines several functions that otherwise require separate applications in other workflows. Grinder can batch-convert a set of HDSLR files, add timecode and simultaneously create proxy editing files with burn-in. In addition, it will upscale 720p files to 1080p. Lastly, it can conform frame-rates to 23.976fps. This is helpful if you want to shoot 720p/60 with the intent of overcranking (displayed as slow motion at 24fps).

The main format files are converted to either the original format (with added timecode), ProRes, ProRes 4444 or two quality levels of PhotoJPEG. Proxies are either ProRes Proxy or PhotoJPEG, with the option of several frame size settings. In addition, proxy files can have a burn-in with various details, such as frame numbers, timecode, file name + timecode or file name + frame numbers. Proxy generation is optional, but it’s ideal for offline/online editing workflows or if you simply need to generate low-bandwidth files for client review.

Grinder’s performance is based on the number of cores. It sends one file to each core, so in theory, eight files would be simultaneously processed on an 8-core machine. Speed and completion time will vary, of course, with the number, length and type of files and whether or not you are generating proxies. I ran a head-to-head test (main format only, no proxy files) on my 8-core MacPro with MPEG Streamclip and Compressor, using 16 H.264 Canon 5D files (about 1.55GB of media or 5 minutes of footage). Grinder took 12 minutes, Compressor 11 minutes and MPEG Streamclip 6 minutes. Of course, neither Compressor nor MPEG Streamclip would be able to handle all of the other functions – at least not within the same, simplified process. The conversion quality of Magic Bullet Grinder was quite good, but like MPEG Streamclip, it appears that ProRes files are generated with the QuickTime “automatic gamma correction” set to “none”. As such, the Compressor-converted files appeared somewhat lighter than those from either Grinder or MPEG Streamclip.

This is a really good effort for a 1.0 product, but in playing with it, I’ve discovered it has a lot of uses outside of HDSLR footage. That’s tantalizing and brings to mind some potential suggestions as well as issues with the way that the product currently works. First of all, I was able to convert other files, such as existing ProRes media. In this case, I would be interested in using it to ONLY generate proxy files with a burn-in. The trouble now is that I have to generate both a new main file (which isn’t needed) and the proxy. It would be nice to have a “proxy-only” mode.

The second issue is that timecode is always newly generated from the user entry field. Grinder doesn’t read and/or use an existing QuickTime timecode track, so you can’t use it to generate a proxy with a burn-in that matches existing timecode. In fact, if your source file has a valid timecode track, Grinder generates a second timecode track on the converted main file, which confuses both FCP and QuickTime Player 7. Grinder also doesn’t generate a reel number, which is vital data used by many NLEs in their media management.

I would love to see other format options. For instance, I like ProResLT as a good format for these Canon files. It’s clean and consumes less space, but isn’t a choice with Grinder. Lastly, there are the conform options. When Grinder conforms 30p and 60p files to 24p (23.976), it’s merely doing the same as Apple Cinema Tools by rewriting the QuickTime playback rate metadata. The file isn’t converted, but simply told to play more slowly. As such, it would be great to have more options, such as 30fps to 29.97fps for the pre-firmware-update Canon 5D files. Or conform to 25fps for PAL countries.

I’ve seen people comment that it’s a shame it won’t convert GoPro camera files. In fact it does! Files with the .mp4 extension are seen as an unsupported format. Simply change the file extension from .mp4 to .mov and drop it into Grinder. Voila! Ready to convert.
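
If you have a card full of clips, the extension swap is easy to batch. A quick sketch – nothing is transcoded or rewrapped, only the extension changes:

```python
# Batch version of the .mp4 -> .mov extension trick for GoPro clips.
# Nothing is transcoded or rewrapped - only the file name changes.

from pathlib import Path

def mp4_to_mov(folder):
    for clip in Path(folder).glob("*.mp4"):
        clip.rename(clip.with_suffix(".mov"))

# mp4_to_mov("/Volumes/GOPRO/100GOPRO")  # hypothetical card path
```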

At $49 Magic Bullet Grinder is a great, little utility that can come in handy in many different ways. At 1.0, I hope it grows to add some of the ideas I’ve suggested, but even with the current features, it makes life easier in so many different ways.

©2010 Oliver Peters

Solutions to Improve FCP’s Media Management

Media management has long been considered Apple Final Cut Pro’s Achilles’ Heel. In reality, FCP has gotten better in this regard and does a pretty decent job of linking project master clips to media. The shortcomings of FCP media management become apparent when projects are moved around among different edit systems, hard drives and editors. I’ve started to dabble with a few different applications that improve on FCP’s native abilities. I’ll bring these to you on an irregular basis, once I get a chance to do a bit more testing.

The first of these is FcpReconnect from VideoToolShed. This is the brainchild of Bouke Vahl, a Dutch editor and software developer. FcpReconnect may be used in a number of different ways, but in general, works by linking files based on matching reel numbers and timecode. For FCP editors, it provides an excellent solution to projects that use an offline-online edit workflow. Since reel number and timecode are the key, you are less subject to FCP’s need to have file names that completely match. For most workflows, there are two basic ways of using FcpReconnect: a) consolidation and relink or b) relink via XML.
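
The underlying idea is easy to picture, even if FcpReconnect’s actual implementation is more involved: a proxy clip matches the hi-res file that carries the same reel number and whose timecode range covers the clip. Here’s a simplified sketch of that matching logic (not VideoToolShed’s code):

```python
# Simplified illustration of relinking by reel number plus timecode - the
# principle FcpReconnect works on, not VideoToolShed's actual code. A proxy
# clip matches any hi-res file with the same reel whose timecode range
# contains the clip's start.

def to_frames(tc, fps=30):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def find_match(clip, masters):
    for m in masters:
        if (m["reel"] == clip["reel"]
                and to_frames(m["start"]) <= to_frames(clip["start"]) < to_frames(m["end"])):
            return m
    return None

masters = [{"reel": "SF01A001", "start": "01:00:00:00",
            "end": "01:04:30:00", "file": "MVI_9877.mov"}]
print(find_match({"reel": "SF01A001", "start": "01:02:10:15"}, masters))
```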

Method A – Consolidate and Relink

As a test, I started with footage from a recent Canon EOS 5D Mark II project. The native camera files are 1920×1080 H.264, 30fps and have no reel numbers or timecode. As I described in a previous post, I converted the media to Apple ProResLT in Compressor, conformed the files to 29.97fps in Cinema Tools and added reel numbers and timecode using QtChange – another handy application from VideoToolShed.

To test FcpReconnect, I used Compressor again to convert the hi-res ProResLT “master” files into DV anamorphic “proxy” files for offline editing. The DV files have the same reel number and timecode, but aren’t an exact file name match, as they had a “DV” suffix appended to the clip name.

I created an FCP edit project (NTSC DV anamorphic) and assembled a basic edit sequence using the DV proxy files. In this example, the DV clips are independent from the hi-res files, which would be the workflow if I decided to do an offline edit on my laptop or gave the files to another editor to cut segments for me. Only the DV files would be the sources in this edit.

Once the edit is done, the next step is to use FCP’s Media Manager to create an offline project. Set the target format to match the hi-res media (1920×1080/30p ProResLT) and set short handle lengths. This creates a new project, with only the clips that were used in the cut. The media for these hi-res clips will show up as “offline”, of course. Next, export a Batch List of this new sequence.

Open FcpReconnect and first make sure you have selected the right timecode standard in the Set Up pulldown menu. VideoToolShed is in The Netherlands, so the default at first launch will be PAL. Once you’ve set this, select the target media folder (the hi-res HD files) and select the Batch List that you just exported. Once it finds all of the matches, you have a few options.

For this test, I chose to use clip names and copy/trim self-contained media of the selected files. This is the equivalent of Avid’s “consolidate” feature.

The clips that are used in the edited sequence are copied to a new folder with a duration equal to the edited length on the timeline, plus the handles. It also renames the media files to match the clip names used in the sequence.

Return to FCP and reconnect the media (currently offline) of the hi-res sequence to the newly consolidated files. Typically, once the first file is located, the others will be automatically found. You will get an FCP dialogue box, because the new media attributes will not completely match the expected attributes. This is normal. Simply click “continue” and you’ll be OK.

Let me caution that I would still avoid wildly renaming clips inside the FCP browser. The Canon files are sequentially numbered movie files. I tried some tests in which I completely renamed these files. For example, “MVI_2061-DV” might have been renamed to “Richard CU”. Most of the time this worked fine, but I did have a few clips that would not relink. My recommendation is still to use other columns in FCP or at least to leave the number as part of the new clip name. This will make it easier if you must manually locate a few files. I had no such problems in the tests where I left the master clip name the same as its corresponding media file name.

Method B – Relink via XML

An alternate method is to skip the consolidation step. After all, if you already have the hi-res media on your drives, you might not want to copy these files again. In Method B, you’d start the same way with hi-res and proxy files. Edit the proxy project and then use FCP Media Manager to create a new offline project matching the hi-res format. Export a Batch List AND an XML file from this new offline sequence.

In FcpReconnect, pick the target (hi-res) media folder and the Batch List. Instead of copying media, open the XML file. FcpReconnect analyzes the XML against the Batch List and the target media folder and generates a new XML.

Open this new XML file in Final Cut Pro and select “create new project”. The result will be a new FCP project containing one sequence, which is linked to the hi-res media. If you have done this properly, the sequence settings should match the target HD format (ProResLT in my example).

You can make sure the sequence clips are linked to the right media by checking the media path in “item settings”.

In addition, you can also verify frame-accuracy by placing the proxy edit sequence over the hi-res edit sequence and making sure everything lines up. My tests were all accurate.

VideoToolShed’s FcpReconnect is one of a number of applications being developed to fill in the gaps of Final Cut Pro’s media management. It’s clear to see that with a little care, it doesn’t take much to make FCP a far more robust NLE.

©2010 Oliver Peters

CoreMelt Lock and Load X

CoreMelt offers a number of GPU-accelerated plug-in sets for Final Cut Pro, Final Cut Express, Motion and After Effects. One powerful collection is CoreMelt Complete V2 (“v-twin”), which is currently up to 200 filters, transitions and generators. However, a very cool, separate filter is their Lock & Load X stabilization plug-in for Final Cut Pro. The original Lock & Load filter was updated to Lock & Load X (a free upgrade for L&L users) shortly after NAB2010 and gained a significant new feature – Rolling Shutter Reduction.

Rolling shutter artifacts – the so-called “jello-cam” effect – have been the bane of CMOS-sensor cameras, most notably the HD-capable DSLR still cameras. The short answer for why this happens is that objects in the frame move during the interval between the data being read from the top of the sensor and from the bottom. The visual manifestation is skewing or a wobble to the image on fast horizontal motion or shaky handheld shots. CoreMelt’s Lock & Load X is designed to be used both for standard image stabilization and for reduction of these artifacts.

Final Cut Pro already includes a very good, built-in stabilization filter in the form of Smoothcam – a technology inherited from Shake. So why buy Lock & Load X when you already own Smoothcam? Two answers: speed and rolling shutter artifact reduction. Generally, Lock & Load X is faster than Smoothcam, although this isn’t always a given. CoreMelt claims up to 12 times faster than Smoothcam, but that’s relative. One important factor is the length of the clip. When Smoothcam analyzes a clip to apply stabilization, it must process the entire media clip, regardless of how long of a clip was cut into the sequence. If the media clip is five minutes long, then Smoothcam processes all five minutes. Fortunately, this can proceed as a background function.

In contrast, Lock & Load X only analyzes the length of the clip that is actually in the sequence. If you only used ten seconds out of the five minutes, then Lock & Load X only processes those ten seconds. In this example, processing times between Smoothcam and Lock & Load X would be dramatically different. On the other hand, if you used the complete length of the clip, then processing times for the two might be similar. I’m not exactly sure whether Lock & Load X uses the same type of GPU-acceleration as the V2 filters, so I don’t know whether these processing times change with the card you have in your machine. I’m running a stock NVIDIA GeForce 120 in my Mac Pro, so it could be that an ATI or NVIDIA FX4800 card might show even better results with Lock & Load X. I don’t know the answer to that one, but in any case, processing a 1920 x 1080 ProResLT clip that was several seconds long took less than a minute for both stabilization and rolling shutter reduction.

When you compare the stabilized results between Smoothcam and Lock & Load X, you’ll generally prefer the latter. Most of the time the filter doesn’t zoom in quite as far and if you leave some movement in the image (such as with handheld shots), the “float” of the image feels more natural. However, there are exceptions. I tested one clip with a hard vertical adjustment by the cameraman. At that point, Smoothcam looked more natural than Lock & Load X, which introduced a slight rotation in correcting that portion of the clip. Another difference is real-time performance. On my machine, Smoothcam left me with a green render bar and Lock & Load X was orange. In FCP terms, this means that the unrendered Smoothcam clip played without degraded performance, while the Lock & Load X clip dropped frames. Once rendered, there’s no difference, of course, and render times were similar between the two. Again, this result might differ with another display card.

Rolling shutter artifact reduction is not unique to Lock & Load X, but as far as I know, is currently only available in one other, more expensive filter from The Foundry. In CoreMelt’s implementation, you must select the shutter coefficient, which is based on certain camera profiles supplied by CoreMelt with the filter. If you are working with Canon EOS 5D Mark II or EOS 7D footage, simply pick the camera, run the tracking analysis and you are done. You can choose to stabilize, reduce rolling shutter artifacts or both. In many cases, rolling shutter reduction is very subtle, so you might not see a massive change in the image. Sometimes, the filter simply corrects minor vertical distortions in the frame.

One application I find quite useful is with handheld shots that are intended to look like Steadicam shots. Lock & Load X does a nice job of steadying these shots without losing the natural “float” that you want to keep in the image. The “before” version might look decent, but when you compare the “after” version, it is definitely the preferable image. In order for Lock & Load X to do its magic, it has to blow up the image slightly, so that the picture fills out to the edges of the frame. This is true of any stabilization filter, including Smoothcam. Lock & Load X does this expansion within the filter and doesn’t change motion tab size values. The filter includes a “smart zoom” feature – intelligently resizing the image throughout the clip so that the least amount of blow-up is performed at any time. For a subtle stabilization, like the handheld shot example, Lock & Load X will typically zoom the image between 7% and 12% throughout the length of the clip. Thanks to the processing used, the quality of the rendered clip will be better than if you had zoomed in 10% in FCP’s motion tab.

CoreMelt’s Lock & Load X is a specialized filter. When you have the need for this function, it’s hard to beat. Clearly a new selling point is rolling shutter artifact reduction. Pro video cameras aren’t immune to the effect, however, since even a Sony EX uses a CMOS chip. But it’s a big factor for the HDSLRs. These cameras will continue to be the hot ticket for a while, so Lock & Load X becomes an indispensable tool for editors posting a lot of Canon and Nikon projects.

©2010 Oliver Peters

Random Impressions – NAB 2010

I always enjoy the show – partly for the new toys – but also to hook up once again, face-to-face, with many friends in the business. I’m back home now and have had a day to decompress and make a few observations about the NAB convention.

First off, this was an extremely strong show for post. Tons of new versions of many of your favorite NLEs, color grading tools and other items. Second, the attendance was good. A bit more than last year – so still a “down” year compared with peaks of a few years ago. Yet, I felt the floor density was higher than the 2009 vs. 2010 numbers indicated. Thursday was still well-attended and not the ghost town I would have expected. So, on the purely subjective metric of how crowded the floor felt, I would have to say that daily averages were much better than 2009.

If you want more specific product knowledge about what was on the floor, check out the various NAB reports at Videography, DV, TV Technology, Studio Daily, Post and Pro Video Coalition. I would encourage you to check out DV’s “(Almost) Live From the NAB Show Blog” – Part 1 and Part 2. The following thoughts fall under opinion and observation, so I’m bound to skip a lot of the details that you might really want to know.

Apple

It never ceases to amaze me when I see blog posts and forum comments that seem to expect Apple to pop up out of nowhere at the show with some amazing new version of Final Cut Studio. Have these folks been under a rock? Apple swore off trade shows several years ago and there’s no indication this policy has changed. They were never on the 2009 or 2010 exhibitor’s list and you can’t plan a 1500-3000 seat “user event” at an area ballroom without word getting out. So, I have no idea why people persist in this fantasy game.

The short term scenario is that it is unlikely that there’ll be a feature-laden new version of FCP/FCS any time soon. Maybe an incremental update like the “new” Final Cut Studio from last year, but I wouldn’t expect that until a few months down the road at the earliest. Or maybe not until 2011. Even if that doesn’t happen or even if the release strikes many as lackluster instead of awesome, it won’t change the breakdown of NLEs to any great degree. If you work with FCP today, you are getting the job done and probably relatively happy with the product. I don’t foresee any change in the product that would greatly alter that situation.

The more important news – as it pertains to NAB – is that Apple is doing a good job of attracting a number of new partners to its core technologies. Autodesk’s Smoke for Mac OS X is a good example, but they are just one member of the 300-plus developer community that constitutes the Final Cut ecosystem. A number of folks, such as ARRI, have licensed the ProRes codec, which is a pretty good endorsement of image quality, as well as workflow.

Avid

Certain versions tend to become milestones for a company’s software. I believe Media Composer 5 will be one of those. Avid renumbered versions with the release of Adrenaline several years ago, so this version 5 is really more like version 17. Numbers notwithstanding, other milestones for Media Composer had been the old version 5.x and version 7.x and I believe this newest release (targeted for June) will have just as much impact for Avid editors.

Media Composer 5 goes a long way towards keeping Avid editors in the fold and may even get some Avid-to-FCP “switchers” to come back. It adds limited 3rd party i/o hardware support, wider codec support (including RED and QuickTime through AMA), Pro Tools-style audio features and more FCP-like timeline editing functions. I highly doubt that it will really get any FCP diehards to convert, but it might pique the interest of those selecting their first high-end NLE. Down the road, I’ll have a proper review when it’s ready for actual use.

In addition to Media Composer 5, Avid also previewed its “editing in the cloud” concept. This is largely based on work already done by Maximum Throughput, which had been acquired by Avid. The demo looked pretty fluid, but I think it’s probably a number of years off. That’s OK as this was merely a technology preview; however, it does have relevance to large enterprises. The same concepts developed for editing over the internet clearly apply to editing on an internal companywide LAN or WAN system.

The direction that Avid seems to be taking here – along with its expansion of Interplay into a family of asset management products – sets them up to make the Professional Services department into an IBM-style corporate consultation service and profit center. In other words, if you are a large company or TV network and want to implement the “cloud” editing concept along with the necessary asset management tools, it’s going to take a knowledgeable organization to do that for you. Avid naturally has such expertise and is poised to leverage its internal assets into billable services. The small editing boutique may not have any interest in that concept, but if it makes Avid a stronger company overall, then I’m all for it.

Adobe Creative Suite 5

CS5 is just about here. It’s 64-bit and uses the Mercury Playback Engine. But will Premiere Pro really pick up steam as an NLE of choice? Like Media Composer, expect a real review in the coming months. I’ve used Premiere Pro in the past on paying gigs and didn’t have the sort of issues I see people complain about. These were smaller projects, so I didn’t hit some of the problems that have plagued Premiere Pro, which mainly relate to scalability. Although it’s not touted in the CS5 press info, it does appear to me that Adobe has done a lot of tweaking under the hood. This is related to the changes for 64-bit, so I really expect Premiere Pro CS5 to be a far better product than previous versions.

Whether that’s true or not is going to depend on your particular system. For example, much has been written about the Mercury Playback Engine. This is an optimization for the CUDA technology of specific high-end NVIDIA graphics cards. If you don’t have one of these cards installed, Premiere Pro shifts into software emulation. In some cases, it will be a big difference and in other cases it won’t. There’s lots of native codec and format support, but not all camera codecs are equal. Some are CPU-intensive, some GPU-intensive and others require fast disk arrays. If your system is optimized for DVCPRO HD, for example (older CPU, but fast arrays), you won’t see outstanding results with AVC-Intra, which is processor-intensive, requiring the newest, fastest CPUs.

There’s plenty in the other apps to sell editors on the CS5 Production Premium bundle, even if they never touch Premiere Pro. On the other hand, Premiere Pro CS5 is still pretty powerful, so editors without a vested interest in Avid, Apple or something else, will probably be quite happy with it.

One format to rule them all

With apologies to J. R. R. Tolkien, the hopes of a single media format seem to have been totally shattered at this NAB. When MXF and AAF were originally bounced around, the hope was for a common media and metadata format that could be used from camera to NLE to server without the need for translation, transcoding or any other sort of conversion.

I think that idea is toast, thanks to the camera manufacturers, who – along with impatient users – have pushed NLE developers to natively support just about every new camera format and codec imaginable. Since the software can handle it, we see NLEs evolving into a more browser-style format. This is the basis for how Premiere Pro and Final Cut Pro are structured. It is now becoming a model that others are embracing. Avid has AMA (a plug-in API for camera manufacturers), but you also see “soft import” in the Autodesk systems and “soft mount” in Quantel. All variations of the same theme. In fact, Apple is the “odd man out” in this scenario, forcing everything into QuickTime before FCP can work with it.

The three advanced formats that seem to have the broadest support today are Avid DNxHD, Apple ProRes and Panasonic AVC-Intra. To a lesser extent you can add AVCHD, Sony XDCAM (various flavors) and DV/DVCPRO/DV50/DVCPRO HD.

Stereo 3D

Just when we thought we had this HD thing figured out, the electronics manufacturers are pushing us into stereo 3D. There was plenty of 3D on the floor, but bear in mind that there are very few in the production community pushing to do this. It’s driven almost entirely by display manufacturers and studios looking to cash in on 3D theater distribution. I think we are headed for a 3D bubble that will eventually drop back into a niche, albeit a large niche for some.

Whether 3D is big or not doesn’t matter. It’s here now and something many of us will have to deal with, so you might as well start figuring things out. The industry is at the starting point and a lot is in flux. First off – the terminology. Walking around the floor there were references to Stereo 3D, S3D, Stereoscopic and so on. Or what about marketing slogans like Panasonic’s “from camera to couch”? Or Sony’s “make.believe”? Hmm… Did the marketing people really think that one through? New crew positions will evolve. Are you a “stereographer”? Or should you be called a “stereoscopist”?

I watched a lot of stereo 3D demos and I generally didn’t like most of them. Too much of 3D looks like a visual effect and not the way my eyes see reality. It also affects the creative direction. For instance, the clip of a Kenny Chesney 3D concert film, which was edited in a typical, fast-paced, rock-n-roll-style of cutting, was harder to adjust to than the nice slow camera moves from the Masters golf coverage.

I also observed that most 3D shots have an extremely deep depth-of-field – even more so in 3D than if you just looked at the same shot in 2D. Shallow depth-of-field, like the gorgeous shots from the HDSLRs that everyone loves, doesn’t seem to work in 3D. I tended to pay attention to objects in the background, instead of the foreground, which I would presume is the opposite of what a director would have wanted. Many of the 3D shots felt like multi-planed pieces of animation. I have heard this referred to as “density zones” and it seems to be an anomaly of 3D shots. A lot of these shots simply had the effect of a moving version of the vintage View-Masters of the past.

Obviously a lot of companies will try to produce 3D content from archival 2D masters. To answer that need JVC showed a real-time 2D-to-3D converter, which was able to take standard programs and adjust shots on-the-fly using a set of sophisticated algorithms. This creates some interesting artifacts. First off, you have to interpolate the information so that alternating fields become left and right eye views. Viewing the result shows visible scanlines on an HD display. That seems to be a common problem with current 3D displays.

Second, there are errors in the 3D. Some of the computation is based on colors, which means that occasionally some objects are incorrectly placed due to their color. That part of an object (like a shirt or certain colors in a flag) will appear at a different point in Z-space compared to the rest of the object to which it is attached. My guess is that casual viewers will almost never see these things and therefore such products will be quite successful.

My whole take on this is that we simply don’t see real life the way that stereo 3D films force us to see. Many folks will disagree with me on this, including a number of scientists, but I feel that people largely view life in 2D. Your eyes converge on an object and focus (both physically and mentally) on that object. Other things are on the periphery, so you are aware of them, but not focused on them. When you want to look at something else, you change your attention and change your focus, much like a pan or tilt with a rack focus. By the same token, we don’t see the sort of extreme shallow depth-of-field caused by some lenses, but that somehow feels more natural. These issues may evolve as stereo 3D evolves, but for me, the most natural images were those that were closest to 2D. If that’s the case, then you have to conclude, “What’s the point?”

Disruptive technology

Blackmagic Design definitely generated the buzz this year. They bought the ailing DaVinci Systems company last year and promptly told everyone in the media that they had no intention of selling cheaper versions of these flagship systems. We now know that wasn’t true. It turns out that Blackmagic has once again been true to form – as everyone had initially thought – and brought a brand new Mac version of DaVinci Resolve to NAB at a very low price.

Upon acquiring DaVinci, Blackmagic decided to “end-of-life” all hardware products (like the DaVinci 2K), end all support contracts and focus on rebuilding the company around its flagship software products – Resolve (grading) and Revival (film restoration). They redesigned the signature DaVinci control surfaces to better fit into Blackmagic Design’s manufacturing pipeline. You can now purchase Resolve in three configurations: software-only Mac ($1K), software (Mac) with panels ($30K) or a Linux version with panels ($50K). Add to this the computer, high-end graphics cards and drives.

The software-only version will work with a panel like the Tangent Wave, so it will allow a user to create a color grading room with the “name brand” product at a ridiculously low price. This has plenty of folks on various forums pretty steamed. I suspect there will be three types of DaVinci products.

Customer A is the existing facility that upgrades from an older DaVinci to Resolve 7.0. These people will build a high-end room using a cluster of Linux towers. That’s not cheap, but will still cost far less than in the past.

Customer B will be the facility that wants to set up a less powerful “assist” station. It may also be the entrepreneurial colorist who decides to set up his own home system – either to branch out on his own – or to be able to work from home to avoid the commute.

Customer C – the one that scares most folks – is the shop that sets up a bare bones grading room around Resolve, just so they can say that they have a DaVinci room. There are obvious performance differences between Resolve on a Mac and a full-featured, real-time 2K-capable-and-more DaVinci suite, so the fear is that some folks will represent one as being the other.

No matter what, that’s the same argument made when FCP came out and also when Color arrived. Grant Petty (Blackmagic Design’s founder) has always been about empowering people by lowering the cost of entry. This is just another step in that journey. I think the real question will be whether owners who have set up Apple Color rooms will convert these to DaVinci. Color is good, but DaVinci has the brand recognition and there are plenty of experienced DaVinci colorists around. At an extra $1K for software, this might be an easy transition. Likewise for Avid shops. Media Composer’s and Symphony’s color correction tools are pretty long-in-the-tooth and those owners are looking for options. DaVinci makes a lot more sense for these shops than investing in the Final Cut Studio approach. Hard to tell at this point.

Digital cameras

RED had its RED Day event. I was registered, but blew it off. Too much other stuff to see and quite frankly, I have little or no interest in being teased by cameras that are yet to come (late or if ever). In my world, HDSLRs have far greater impact than RED One or Epic. Judging by the number of Canons and Nikons I saw being used on the floor for video coverage and podcasts, I’d have to say the rest of the world shares that experience.

The real news is that RED is no longer the only game if you want a digital cinematography camera. Sure there’s Sony and Panasonic, but more importantly there’s ARRI with the Alexa and Aaton with the Penelope-∆ (Delta). Both companies have a strong film pedigree and these new cameras coming this year and in 2011 will offer some options that will interest DPs. The Penelope is the odder of the two in that it’s a hybrid film/digital camera using two interchangeable magazines – one for film and another that’s a digital back. It uses an optical viewfinder, so the sensor is attached to the digital magazine in precisely the same location as the film loop in the film magazine. This leaves it exposed when you swap magazines, but the folks at Aaton don’t see this as an issue, aside from occasional, simple cleaning. In reality, you probably won’t be swapping back and forth between film and digital on the same production.

In my opinion, where RED has gone wrong has been in placing resolution over workflow. No matter how smooth, native or fast current RED post workflow is, they will have a hard time shaking the common “slam” that their workflow is slow, hard or expensive. ARRI and Aaton offer somewhat lower resolution than RED, but they record both camera RAW and direct-to-edit formats. The Alexa records in ARRI RAW as well as ProRes, while Aaton uses DNxHD (for now) as its compressed file format. This means that the camera generates a file that is ready to edit in Avid or FCP straight from the shoot. If you are working in TV, that may be all you need. If you are doing a feature film, it becomes an offline editing format. The camera RAW file is preserved as a “digital negative”, which would be used for color grading and finishing. ARRI RAW is already supported by a number of systems, including Avid (with Metafuze) and Assimilate Scratch.

Pure magic

Last year I was “wowed” by Singular Software’s PluralEyes. This year it was GET from AV3 Software. GET is a phonetic search tool based on the same Nexidia technology that is licensed to Avid for Media Composer’s ScriptSync feature. Think of GET as Spotlight for speech. GET operates as a standalone application that can be used in conjunction with Final Cut Pro. It shouldn’t be thought of as just a plug-in.

The process is simple. First, index the media files that are to be reviewed. This only needs to happen once and the company claims that files can be indexed 200 times faster than real time. (ScriptSync’s indexing is extremely fast.) Once files are indexed, enter the search term into the GET search field and all the possible choices are located. Adjusting the accuracy up or down will increase or decrease the number of matching clips.

You can also do searches using multiple parameters, such as a search term plus a date or a reel number. Since the algorithms are phonetic, correct spelling is less important, as long as it sounds the same. GET includes its own player and clips imported into FCP will have markers at the matching points within the master clip. The shipping version of the product (in a few months) will also subclip the matching segments.

Other snapshots

There are a few other interesting things to mention.

CatDV from Square Box Systems has come along nicely. Many of my FCP friends have looked at this and characterize it as “what Final Cut Server should have been.” Check it out.

I ran into Boris Yamnitsky (Boris FX founder) at the show and he was more than happy to show me some of their upcoming release. Boris FX wasn’t officially exhibiting this year, but they are starting to roll out BCC 7, starting with the After Effects version (ready for CS5). It will include a number of key new features, like particles. What really caught my eye, though, was a color correction filter that combined functionality from both Colorista and Color. It’s a single layer color correction filter with 3 color wheels, but the twist is that you can apply masks with both inside and outside grades – all within the same instance of the filter.

Lastly, Lightworks is back. Well, it never actually left – just changed hands a few times. This placed it with EditShare after they acquired Geevs Broadcast last year. Rather than bang it out with the “A” NLE vendors, EditShare has opted to release it as open source and see what the development community can do for the product. It already has a small, loyal following among film editors and has a few, unmatched touches for collaborative editing. For instance, two editors can work on exactly the same sequence (not copies). One editor at a time has “record” control. As one makes changes, the other can see these updated on his own timeline!

See, I told you it was a fun year.

©2010 Oliver Peters

Canon 5D Avid FCP roundtrip

No, this isn’t the 5D workflow article that you’ve been waiting for. That’s still coming in another couple of weeks. In the meantime, I’ve started on another Canon 5D commercial. This time I’m cutting the project in Avid Media Composer instead of Final Cut Pro. There are a number of reasons, including some recent stability issues I’ve had with FCP. In addition, the creative treatment calls for some nice speed ramp effects. Avid’s FluidMotion is simply a much better slomo technology than anything in Final Cut. So this time, Media Composer is the right tool for the job.

In order to make sure that video levels match what I’m used to with FCP, I’ve been doing some testing of how to roundtrip files back to Final Cut. Ultimately these are web spots, so I want to make sure what I do in Media Composer matches what I do in Final Cut. When I finish editing the spot, there may be a reason to continue in FCP – such as to use Color for grading. That’s another reason to be very sure the images match, regardless of the NLE used.

That’s the dilemma. Avid has always treated video as Rec. 601/709, which means that black and white equal 16 and 235 on a scale of 0-255. This allows headroom and footroom for superwhites and “blacker than black” shadow areas. FCP doesn’t really honor this scale and seems to internally use adjusted levels of 0-235 (my guess), so it makes it tricky whenever you convert clips in and out of QuickTime. Not every QuickTime conversion is equal and you may get level, gamma, saturation and hue shifts depending on where and how the conversion is done and which codec is used.

One visible sign of this difference is how each UI displays images. An image in a Media Composer window will tend to look “flatter” on the computer display, i.e. less contrast, than the exact same image in a Final Cut window. That really doesn’t matter for most video. If you compare the Avid output through one of Avid’s DX units with FCP’s output through a Kona card, both would look the same on a broadcast monitor and scopes. In the case of these 5D spots, though, the web is the target. I have to make sure the process is as transparent as possible, since there is no I/O hardware between the NLE and the final product.

When you import a QuickTime file into Avid Media Composer you must decide whether the file’s video levels are mapped as RGB (a full 0-255 range) or 601/709 (a scaled 16-235 range). Computer files, like a Photoshop graphic, are almost always RGB. The movie files generated by the Canon EOS 5D Mark II conform to a full RGB range, so set the color level mapping to RGB when importing these files into Media Composer. This tells Media Composer that the range of levels is 0-255 and must be rescaled to 16-235 upon import, when an Avid media file is created. I had both the original H.264 and converted ProRes versions of these files available. Both matched each other, so the resulting levels inside Avid Media Composer were the same whether I picked the H.264 or ProRes file. During the import stage, these were transcoded to the DNxHD145 codec for editing within a 1080p/29.97 project.
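
The scaling itself is simple: full-range RGB (0-255) is squeezed into the video range (16-235) on import and stretched back out on an RGB-mapped export. Here’s a rough sketch of the math, ignoring the rounding and clipping that the real codecs perform:

```python
# The RGB <-> 601/709 level mapping Media Composer applies, in rough form.
# Real codecs work in YCbCr with their own rounding and clipping; this only
# shows the 8-bit scale factor involved.

def rgb_to_video(v):    # import with RGB mapping: 0-255 squeezed into 16-235
    return 16 + v * (235 - 16) / 255

def video_to_rgb(v):    # export with RGB mapping: 16-235 stretched back to 0-255
    return (v - 16) * 255 / (235 - 16)

for level in (0, 128, 255):
    scaled = rgb_to_video(level)
    print(f"RGB {level:3d} -> video {scaled:5.1f} -> back to RGB {video_to_rgb(scaled):5.1f}")
```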

At this point you’d edit the same as with any other project. When done you would export a finished file for web conversion. This was the critical stage in my testing, because I wanted to be sure that I could export a file that matched any FCP version. Obviously, if you are going to color grade the footage, it’s less of an issue, since the image is going to look different than the original anyway. My main concern was to assure that the roundtrip would be as transparent as possible. In theory, the easiest approach would be to simply export a QuickTime file with a target codec (like ProRes) and be done with it. It turns out that this isn’t actually as transparent as you’d expect, presumably because of how Avid is interacting with QuickTime to write a non-Avid QuickTime codec.

The better solution takes a couple of steps, but the results are worth it. First of all, you must export from Media Composer with RGB mapping. The 16-235 levels are thus rescaled back out to 0-255 in order to match your computer display. To get the closest overall level match, you should use the Avid 1:1 codec, not one of the Apple uncompressed or ProRes codecs. You aren’t done yet. The Avid codec does display within FCP, but when I attempted to render it on an FCP timeline, the result was just digital hash. The workaround is to do a second conversion in QuickTime 7. Open the Avid 1:1 exported file in QuickTime Pro 7 and export that file again using the Apple ProRes codec.

When I brought the “round-tripped” ProRes file into FCP and split-screened it with the same clip in H.264 (from the camera) or ProRes (first generation conversion of the camera file), there was very little difference between the two clips – either visually or on the waveform. With this knowledge in hand, I’m now ready and comfortable in cutting the spot in Media Composer and won’t feel like I will make any compromise in image quality.

Here’s a recap of the steps:

  1. Import the 5D files into Avid Media Composer
  2. Use RGB mapping
  3. Cut normally
  4. Export an Avid 1:1 QuickTime movie
  5. Use RGB mapping
  6. Open file in QuickTime 7
  7. Export as Apple ProRes
  8. Import into Apple Final Cut Pro and continue working

© 2010 Oliver Peters