The Social Network

Who would have thought that the online world of social media would make an interesting movie? That’s exactly what David Fincher set out to do in The Social Network, the story of how Harvard undergrad Mark Zuckerberg (played by Jesse Eisenberg) became the youngest billionaire in history – thanks to a little start-up called Facebook. The Aaron Sorkin script is based on the book The Accidental Billionaires by Ben Mezrich.

This was a return engagement for a number of Fincher’s crew, including cinematographer Jeff Cronenweth (Fight Club) and editors Angus Wall (Panic Room, Zodiac, The Curious Case of Benjamin Button) and Kirk Baxter (The Curious Case of Benjamin Button). The past two films – shot with the Grass Valley Viper camera – raised the bar for an all-digital production and post production workflow. The Social Network does that again, as the first released studio picture shot with a RED ONE camera equipped with the upgraded Mysterium-X sensor. As in the past films, the editorial team used Apple Final Cut Pro connected to an Apple Xsan shared storage system as their weapon of choice.

Angus Wall explained the workflow, “From our standpoint as editors, it was a very easy film to work on. Tyler Nelson and Alex Olivares, the assistant editors, handled all the data management and file conversions at David’s production offices. They converted the native RED camera files to Apple ProRes 422 (LT) for us. After that, it was pretty much the same for us as on Button or Zodiac, except that this time we were working with 1920×1080 images, which was great.”

When I suggested that Benjamin Button must have been more of a challenge as an effects film, Kirk Baxter quickly pointed out the similarities. “There are about 1,000 effects shots in The Social Network. It has a lot of digital matte paintings, but there was also face replacement much like in Button. In this film, there are two characters who are twins, but in fact the actors aren’t. So a similar process was used to turn one of the actors into the twin of the other. Although the story isn’t driven by the same sort of visual effect, like the aging technique that was a dramatic device in Benjamin Button, it still has a lot of effects work.”

For a fresh feel, Fincher tapped Nine Inch Nails’ Trent Reznor and Atticus Ross for the score. Angus explained, “This worked out extremely well. Trent and Atticus were hired relatively early in the process.  Since they were working in tandem with the cutting, we were able to drop in a lot of near-final tracks instead of using temp music. This was great, because we had about 30 of their tracks to work with, all of which were actually intended for this film. That’s much better than the norm, where you scour your iTunes library to find some workable music to put under scenes.”

David Fincher shot approximately 280 hours of footage, recording all of the scenes with two and sometimes three RED cameras. The production schedule spanned from September to March, with a pick-up scene shot in July. Baxter and Wall worked out of Rock Paper Scissors (Wall’s LA editorial company) during shooting, staying up with the production during the first assembly process. Once production wrapped, editing was moved to Fincher’s production company offices. The two editors split up the scenes between themselves during the fine-cutting of the film.

Baxter explained, “David is a busy guy, so he doesn’t constantly sit over your shoulder while you’re editing. If Angus or I started out on a complex scene during the assembly, we usually stayed with it throughout post, since we were already familiar with all the footage. David would bounce between our two cutting rooms reviewing and offering his notes. He’s a very good director for an editor, because he knows exactly what he wants. He’s not an ‘I’ll know it when I see it’ type of guy. But he doesn’t overwhelm you with information, either. At the beginning, he’ll set a general direction of what he’s looking for to get you started. Then deeper into the fine-cut, he’ll start tuning his approach and giving you more detailed comments.”

Most first assemblies are long and then the editor has to do major surgery to get the movie to the desired length. This wasn’t an issue with The Social Network. Wall explained, “The script was around 160 pages, so we were concerned that the first assembly was going to be correspondingly long. Our target was to keep the film under two hours. From the start, Kirk and I cut the scenes very tightly, using faster performances and generally keeping the pace of the film high. When the first assembly was completed, we were at a length of 1 hour 55 minutes – actually a minute shorter than the final version. Unlike most films, we were able to relax the pace and put some air back into the performances during the fine cut.”

Shooting with the RED ONE cameras introduced workflow changes for this film. Tyler Nelson (first assistant editor) handled the data management, creation of dailies and the final conforming of files to be sent to Light Iron Digital for the digital intermediate. Nelson explained, “I’m very particular about how the files get handled, and so I maintained control throughout the process. I was using two workstations with RED Rocket accelerator cards running ROCKETcine-X software to process our dailies. I would generate ProRes 422 (LT) QuickTimes for Angus and Kirk. However, when it came to delivering visual effects elements and our final conform, we needed a bit more control, so I used a script that I wrote in FileMaker Pro to reference our codebook and pull our online media.”

Nelson continued, “When I received the locked cut, I generated an EDL for each video track and then used my FileMaker Pro script to parse the EDL to drive the transcode of the RED files into 4K DPX image sequences. I used these same EDLs to import each reel into After Effects CS5 to assemble our final conform. The footage was shot in 4K [4096×2048]. David framed his shots with a 2.40 matte, but with a twist. We added an extra 4% padding on all sides so that if we wanted to reposition the frame north, south, east or west, we had a bit more image to work with. Effectively we had 3932×1638 pixels to use. The final images were exported as 2K [2048×1024] DPX sequences for Light Iron’s DI.” This extra padding on the edges of the frame came in handy, because Nelson also stabilized a number of shots. SynthEyes was used to generate tracking data for use in After Effects for this stabilization.
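Nelson’s FileMaker Pro tooling isn’t public, but the general shape of an EDL-driven conform pull is easy to sketch. Here’s a rough Python illustration of the idea; the codebook layout and the REDline flag names are my assumptions for the sketch, not his actual script.

```python
# Hypothetical sketch of an EDL-driven conform pull. The codebook
# columns and the REDline flags below are illustrative assumptions.
import csv
import re
import subprocess

# A CMX3600 video event: number, reel, track, transition, source in/out
EVENT = re.compile(r"^\d+\s+(\S+)\s+V\s+C\s+(\d\d:\d\d:\d\d:\d\d)\s+(\d\d:\d\d:\d\d:\d\d)")

def load_codebook(path):
    """Map reel names in the EDL back to their .R3D source files."""
    with open(path, newline="") as f:
        return {row["reel"]: row["r3d_path"] for row in csv.DictReader(f)}

def pull_reel(edl_path, codebook_path):
    codebook = load_codebook(codebook_path)
    for line in open(edl_path):
        match = EVENT.match(line)
        if not match:
            continue  # skip the title, comments and audio-only events
        reel, src_in, src_out = match.groups()
        # Flag names are made up for illustration -- check the REDline docs.
        subprocess.run(["REDline", "--i", codebook[reel],
                        "--in", src_in, "--out", src_out,
                        "--format", "dpx", "--res", "4K",
                        "--colorSpace", "REDcolor", "--gamma", "REDlog"],
                       check=True)

pull_reel("reel_01.edl", "codebook.csv")
```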

Early testing with various DI processes allowed the team to settle on the optimum RED settings to use in REDline (RED’s command line-driven software rendering engine). All files were delivered using the REDcolor (color space) and REDlog (gamma) values, which provided the most latitude to Light Iron’s colorist, Ian Vertovec. Light Iron CEO and DI supervisor, Michael Cioni explained, “Working with the full-range (flat) DPX files gives us nearly as much malleable range as with the native R3D raw files. Although it’s nice to grade in raw – because you have additional control to change color temperature or ISO values – that really isn’t practical in a film like this, with over 1,000 visual effects. You don’t want a lot of different vendors applying their own image conversions to the files and then later be unable to match the different shots at the DI stage. Log-like DPX files behave much like scanned film negative and fit nicely into the existing pipelines.”

Cioni continued, “Ian graded the files using one of our Quantel Pablos. Since much of the look of the film was eloquently established on set, the grading came naturally to nearly every scene. The Social Network will really show off the expanded latitude and low-noise characteristics of RED’s M-X sensor. The scenes in this movie really live in the shadows. This film will deliver to audiences significantly more detail in images below 10 IRE as compared to typical digital cinema sensitivity. Although the majority of the first release will be seen as film prints, the future of all movies is digital, so the priority was given to the look of the digital master, rather than the other way around.” Technicolor handled the film-out recording for release prints, including digital-to-film color transforms from the DSM (Digital Source Master). The film’s final output is cropped for a 2.40:1 release format.

The technology angle of The Social Network is fascinating, but I wondered if there were any creative challenges for the editors. Kirk Baxter pointed out, “It was very well scripted and directed, so not a lot of story-telling issues had to be resolved in the edit. In fact, there were a number of scenes that were great fun to put together. For example, there’s an early scene about some of the legal depositions. It takes place in two different boardrooms at different times and locations, but the scene is intercut as if it is one continuous conversation. David gave us lots of coverage, so it was a real joy to solve the puzzle, matching eyelines and so on.”

Angus Wall added, “This is a movie about the birth of a major online power, but what happens on the computer is a very minor part. For us, it was more important to concentrate on the drama and emotions of the characters and that’s what makes this a timeless story. It’s utterly contemporary… but a little bit Shakespearean, too. It’s about people participating in something that’s bigger than themselves, something that will change all of their lives in one way or another.”

UPDATE: Here are several nice pieces from Adobe, Post magazine (here and here) and the Motion Picture Editors Guild that also go into more detail about the post workflow.

Written for Videography magazine (NewBay Media LLC).

©2010 Oliver Peters

Avid DS shines for Metric

With all of the Media Composer 5 news, it might be easy to miss Avid’s latest update for the flagship system, Avid DS. Version 10.3.1 (see addendum below), released in mid-July, is a small point release that introduced two huge features – improved stereoscopic 3D control and support for RED Digital Cinema’s new “color science” and the Mysterium-X sensor. The new RED capabilities are showcased in the “All Yours” music video by the band Metric. It’s the official music video for The Twilight Saga: Eclipse, which featured the track under the end credits.

I spoke with Dermot Shane, a Vancouver-based VFX/DI supervisor who specializes in using Avid DS. Shane was working with 10.3.1 (in beta) when he got the call to handle finishing for “All Yours” (directed by Brantley Gutierrez). According to Shane, “The schedule on this was very tight and changes were being made up until the last minute. That’s because the video integrates clips from the movie and there had been a few last minute changes to the cut. In fact, we ended up getting one of these clips FTP’ed to us just in time for the deadline!” The production company for Metric shot the music video scenes using a RED One with the updated Mysterium-X sensor, which offers improved dynamic range. The newest RED software also improves how the camera raw files are converted into color information. These latest RED software updates have been integrated into the RED SDK used in Avid DS 10.3.1.

Shane described the workflow on this project. “The production company had cut the offline edit on [Apple] Final Cut Pro and provided us with an EDL. Avid DS can take this EDL and relink to the original R3D camera files, which gives me direct access to the raw data from the camera files by way of RED’s SDK. It’s an easy matter to scale the images for HD and to alter any of the looks of the images, based on changes that the director might want. Because these changes are made from the camera raw files, color grading is far cleaner than if I only had a flat image to start from. Once this is adjusted, I can cache the media into the DS and everything is real-time. On this project, the caches were working in 10-bit YUV high-def, and the master was rendered directly from the RED MX files. I probably changed the color information on all but three of the 162 clips in the music video.”

The new RED Mysterium-X support came in handy on this project. Shane continued, “The new sensor is much more sensitive and Avid DS 10.3.1 let me take advantage of this. For instance, I could create three versions of a clip all linked to the same R3D file. In each of these versions, I would create a different color setting using the RED source setting controls inside Avid DS.  One clip might be adjusted for the best shadow detail, another for the midrange and a third to preserve the highlights. These would then be composited into a single shot using the standard DS keyers and masks. The final image almost looks like a high dynamic range image. This is something you can’t do through standard grading techniques when the camera image has a ‘baked in’ look. It really shows the advantage of working with camera raw files.”
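Conceptually, the blend Shane describes is a form of exposure fusion. Here’s a minimal NumPy sketch of the idea, assuming three float RGB decodes of the same clip; the DS keyers and masks are, of course, far more controllable than these simple luma masks.

```python
# Conceptual sketch of blending three decodes of the same R3D clip --
# one protecting shadows, one mids, one highlights -- with luma masks.
# This is my simplification, not how the Avid DS keyers actually work.
import numpy as np

def luma(rgb):
    return rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights

def fuse(shadow_decode, mid_decode, highlight_decode):
    """Inputs: float RGB arrays (0-1) from three RED source settings."""
    y = luma(mid_decode)
    hi = np.clip((y - 0.7) / 0.3, 0.0, 1.0)[..., None]  # bright regions
    lo = np.clip((0.3 - y) / 0.3, 0.0, 1.0)[..., None]  # dark regions
    mid = 1.0 - hi - lo
    return shadow_decode * lo + mid_decode * mid + highlight_decode * hi
```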

And what is the best thing about this new Avid DS release? “Stability,” answered Shane. “We worked around the clock for three or four days without a hiccup. That’s hard to sell people on up front, but it really matters when you are in a crunch. On this project, we literally finished about 20 minutes before the deadline. My client really appreciated the integrated environment that DS offers. Their previous projects had gone from Final Cut to a Smoke finish and a Lustre grade. These are very capable Autodesk finishing systems, but Avid DS is a complete finishing solution. You can do editing, effects and color grading all in one workstation. This makes it a lot better for the client, especially when last minute changes are made during the color correction pass.”

Stereo 3D tools have been enhanced in DS 10.3.1. Convergence tools now allow independent adjustment of 3D content for each eye. There is also real-time playback of stereoscopic containers and effects. Although “All Yours” wasn’t a stereo 3D project, I asked Shane about the new 3D tools. He replied, “So far I’ve only had a chance to do some testing with the new tools. In previous versions, I would have to go out to [The Foundry’s] Nuke and use Ocula for stereo 3D work. Our DS has the Furnace plug-in set, which includes some stereoscopic tools. With Avid DS 10.3.1, I can complete one eye, apply the same grading to the other eye, adjust the convergence and then use one of the Furnace plug-ins to tweak the minor grading differences between the left and right eye views.”

Addendum: This article was originally written prior to the 2010 IBC exhibition in Amsterdam. At that conference, Avid announced the release of Avid DS 10.5, which will be offered both as a full-featured software-only version and as a turnkey solution. The software version will sell for under $10K and comes bundled with a copy of Avid Media Composer 5. Some of the features in DS 10.5 – available for the first time in a software version – include full 2K playback and RED Rocket accelerator support. In addition, the software has been ported to the Windows 7 64-bit OS, making it one of the most powerful editing/VFX/grading solutions for the PC platform.

Written for Videography and DV magazines (NewBay Media LLC)

©2010 Oliver Peters

RED Post – the Easy Way III

If you’ve read some of my past articles about RED, you know I’m not a huge fan of “native” editing using the camera raw files as source clips. I find that an offline/online workflow is still the smoothest way to edit RED projects, while retaining access to the raw color data during the finishing process. Previously I discussed an easy workflow for Apple Final Cut Pro and Color users, but this isn’t the only solution. As you know, Avid Media Composer 5 and Adobe Premiere Pro CS5 have both integrated support for RED’s camera raw files. In this post, I’m going to discuss a couple of ways to use these tools in a non-native fashion.

Option A:  Avid Media Composer 5 offline-online RED workflow

Thanks to AMA and RED’s camera SDK, Media Composer 5 offers access to RED’s .R3D files. You can import camera files and adjust the source color settings from within the NLE’s interface. You can either edit directly from these files or transcode them to Avid media for a smoother and faster editing experience. Here is a short step-by-step explanation of a Media Composer-based workflow.

Step 1. Access/import RED .R3D files via AMA (Avid Media Access). Camera clips will open inside Media Composer bins, complete with camera metadata.

Step 2. If you want to change the levels/gamma/exposure/balance of the file by altering the camera raw data, then open the Source Settings for each clip and adjust the video.

Step 3. Adjust the clip framing by opening the bin Reformat column and set the option for each clip (center cut, letterboxed, etc.). Remember that your RED clips may have a 2:1 aspect ratio, but your Avid sequence will be either HD 16:9 or SD 16:9 / 4:3.

Step 4. Set the Media Creation render tab to a video resolution of DNxHD36 with a Debayer quality of “quarter”. Since the objective is a good rough cut – not “finishing” – this quality setting is more than adequate for editing and screening your creative edits.

Step 5. Transcode all source clips. This process runs at close to real-time on a fast machine. When transcoding is done, close all AMA bins and do not use them during the edit. You’ll edit with the transcoded media only.

Step 6. Edit as normal until you get an approved, “locked” picture.

Step 7. Now it’s time to switch to “finishing”. Move or hide all Avid media (the transcoded DNxHD36 clips) by taking them out of the Avid MediaFiles/MXF/1 folder(s) on your media hard drive(s). You could also delete them, but it’s safer not to do that unless you really have to. Best to simply move them into a relabeled folder. Once you’ve done this, your edited sequence will appear with all media off-line.

Step 8. Open the AMA bins (with the .R3D files) and relink the edited sequence to the AMA clips. Make sure the “Allow relinking of imported/AMA clips by Source File name” is NOT checked in the Relink dialogue window. When relinking is completed, the sequence will be repopulated with AMA media, which will be the native, camera raw .R3D files. If you want to change the raw color data at this point, you will need to change each source clip and then refresh the sequence to update the color for clips that appear within the timeline.

Step 9. Change the Media Creation settings to a higher video resolution (such as DNxHD 175 X) and a Debayer quality of “full”.

Step 10. Consolidate/transcode your sequence. This will create new Avid media clips at full quality that are only the length of the clips as they appear in the cut, plus handles. Since a transcode using a “full” Debayer setting will be EXTREMELY SLOW, make sure you set very short handle lengths. (Note: If you have a Red Rocket card installed, Avid supports hardware-assisted rendering to accelerate the transcoding of RED media.)

Step 11. Finish all effects and color grading within the NLE as you normally would.

Option B:  Apple FCP / Automatic Duck / Adobe CS5 workflow

You might be asking, why not just edit in Final Cut Pro or Premiere Pro? The hitch is that Final Cut doesn’t support 4K files, and Premiere Pro has a good native workflow for RED files, but not a good offline-online one. FCP users clearly outnumber Premiere Pro users among professional film and video editors; however, both After Effects and Premiere Pro offer some interesting finishing options. In fact, a number of feature films have used both for all or part of the finishing process. A combination of Apple and Adobe tools creates some interesting scenarios for RED post. (Note: Automatic Duck Pro Import AE 5.0 is required.)

Step 1. Ingest your RED .R3D clips into Final Cut Pro using Log and Transfer. Set the preferences to use ProRes Proxy (NOT “native”). Set the color to “as shot”. This requires that the RED plug-in for FCS has been installed. (Refer to the previous article for a more in-depth explanation of this first step.) Please note that it is important to do this with the R3D files and not to start by simply dragging the in-camera-generated H, M or P QuickTime reference files into the FCP browser. Many RED users erroneously consider these to be “proxy” edit files. They are not. They are reference files at different resolutions/sizes that are linked to the R3D files and do not work correctly in this process.

Step 2. Edit normally in FCP until the cut is “locked”.

Step 3. Export an XML of your Final Cut sequence. I prefer using Automatic Duck’s free XML exporter and have had more reliable results with it, but the built-in FCP XML exporter will also work.

Step 4. Launch Adobe After Effects CS5. (Pro Import AE 5 works with CS3 and CS4, too, but you need to use an Adobe CS version compatible with native RED files.) Import the XML file using Pro Import AE 5. Make sure your Automatic Duck preferences are set to “Replace proxy footage with .R3D files.” The result will be an After Effects timeline with settings that match the Final Cut Pro sequence settings, except that all the clips will now be linked to the original camera files.

Step 5. Since the ProRes Proxy files were most likely 2K files, and the newly relinked camera files are the original 4K size, you will need to reset the scale value of each clip in the composition. This reframes the shots to fit inside the 2K frame, just as they did in FCP. Or you can creatively reframe the shots, since you have all the “bleed” of the full 4K frame. Alternatively, you can change the After Effects composition setting to match the 4K size.
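The arithmetic behind Step 5 is straightforward. Here’s a quick sanity check, assuming 4K 2:1 sources in a 2K 2:1 comp:

```python
# Step 5 arithmetic, assuming 4K 2:1 sources in a 2K 2:1 comp.
src_w, src_h = 4096, 2048
comp_w, comp_h = 2048, 1024

scale = 100.0 * comp_w / src_w   # 50% restores the original FCP framing
slack_x = (src_w - comp_w) // 2  # at 100% scale: 1024 px of "bleed" on
slack_y = (src_h - comp_h) // 2  # each side, 512 px top and bottom
print(scale, slack_x, slack_y)   # 50.0 1024 512
```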

At this point you could completely finish the project in After Effects, and there are a number of folks who would advocate that. From my point of view, After Effects is a compositing tool, rather than a DI or editing application. With the changes in Premiere Pro CS5, my druthers would be to get the media into that application. I’m only using After Effects as a conduit between Final Cut Pro and Premiere Pro in this process.

You could go from After Effects to Premiere Pro via Adobe’s Dynamic Linking, but I’d rather not. That simply nests the After Effects composition as a single clip on the Premiere Pro timeline. I want the shots available as individual timeline clips, so follow these steps.

Step 6. Launch a new Premiere Pro CS5 project and select a new sequence setting from one of the RED presets, such as a 4K timeline.

Step 7. Highlight all of the .R3D clips in the After Effects composition and Copy.

Step 8. Switch to the Premiere Pro sequence window and Paste. All of the RED clips will now fill up the Premiere Pro sequence. At this point you should have a native 4K sequence with .R3D camera raw media. Corresponding master clips will show up in the Premiere Pro project window.

Step 9. To change the camera raw color settings of the .R3D files, open a clip from the project window and alter its source settings. These changes will automatically update that clip on the timeline.

Step 10. Finish effects and color grading as desired. If you are using this process with the intent of sending files to a DI house for film finishing, then your settings and any grading should be very neutral to allow for maximum latitude at the next stage.

Step 11. Export media. A big selling point of Premiere Pro CS5 to RED users is that it allows you to export DPX image sequences, in addition to all of the standard media options. DPX is the preferred format of most high-end DI solutions, like Quantel Pablo, Autodesk Lustre, etc. Premiere Pro CS5 is one of the few desktop solutions that enables an export of full-resolution 4K DPX files from the edited timeline.

OK, I’ve given you a lot to chew on. In three articles on RED post, I’ve covered quite a few ways to finish RED-acquired projects. Don’t get overwhelmed. Remember that you don’t have to use them all. Simply pick the one that’s best for you and have fun.

©2010 Oliver Peters

RED Post – the Easy Way II

The RED camera company has succeeded in shaking up the industry and getting all other camera manufacturers to rethink what a digital cinema camera should be. This year, the ARRI Alexa presents the first serious challenge by another system designed around a camera raw workflow. Although RED maintains a resolution advantage, which will increase with the forthcoming Epic, there are many other reasons producers might opt for an Alexa, a Panavision Genesis, a Panasonic VariCam/3700/2700/3000 or a Sony F23/F35/F900/F800.

One of the strategic errors that I feel RED made was to emphasize resolution over workflow. By doing so, their innovative approach was tagged early on by detractors as difficult and time-consuming. It’s actually rather straightforward, with a lot of versatility, and can be adapted to many different production needs. Unfortunately, no matter how easy it has become today, RED will continue to battle this perception issue. This is exacerbated by RED itself, which has never provided good documentation for its products, especially the post production tools – a byproduct of the “perpetual beta” mode in which the company operates.

Native vs. non-native

I haven’t been a big fan of dealing with the camera raw files during editing, opting instead to pre-grade/render/export the camera files first into an edit-friendly format. If you search through the RedUser forum, you’ll find plenty of posts pointing out that the preferred feature film workflow is to export flat-looking DPX files for conforming and grading in DI systems like daVinci, Pablo and Lustre. This is a common workflow for DI and digital acquisition. I’ve demonstrated some of the latitude such a flat image can offer, even though it isn’t camera raw any longer.

Apple and Assimilate were early adopters when it came to accessing RED’s raw color data. Since then, RED has developed an SDK that allows many other NLE manufacturers to access the raw data through this spec. Now others, like Avid and Adobe, can open and manipulate RED files based on the camera raw data. This gives editors wide latitude over how the image can look, without being stuck with a “baked in” camera image as a starting point. It’s like editing from transferred film, yet having access to the original negative in the NLE. I’ve recently reviewed Avid Media Composer 5 and Adobe Premiere Pro CS5 and spent some time testing this out. Both do a very good job with native RED files, but my conclusion is still that an offline/online editing methodology works best for complex, long-form productions.

FCP’s Log and Transfer

Last year, I edited 90% of my projects with Final Cut Pro, so I’ve decided to revisit Apple’s “native” RED workflow with a fresh eye. FCP does not let you work directly with the actual .R3D camera files. Instead, RED files are imported via FCP’s Log and Transfer module. Here you have two options: a) import as native REDCODE (the .R3D file is copied and rewrapped with a QuickTime container); or b) import/transcode to an edit-friendly codec, like one of the ProRes codecs. During Log and Transfer, you may select one of several colorimetry presets or “as shot”. Once imported into FCP, you can’t access the source settings (as in Media Composer or Premiere Pro). Instead, the workflow is designed around Apple Color, where the tools are provided to once again access the camera raw color data.

A lot of RED’s appeal comes from the fact that the camera records 4K images. 4K refers to a frame size of 4096×2048 pixels (2:1 aspect ratio). The RED One camera is capable of various frame sizes, but 4K appeals to indie filmmakers as some sort of Holy Grail. That’s in spite of the fact that most feature film DI is done at 2K sizes and some films are even posted using HD video (1920×1080) as an intermediate step. Avid Media Composer 5 limits you to an HD frame, while Adobe Premiere Pro CS5 and After Effects CS5 will let you work at 4K. FCP doesn’t allow 4K, so the effective workaround is to downsample the 4K RED images to 2K (2048×1024). FCP and Color deal with this image size quite effectively and i/o hardware like the AJA KONA3 includes presets for 2K images. I like the idea of 4K at the camera, but I’m perfectly okay with 2K and HD in post.

Size and debayering

The downsample issue is confusing, because it affects image size and debayering – the process that turns raw data into RGB video. Unfortunately, RED hasn’t provided clear information as to what is really happening. The rule of thumb is that 2K images are downsampled as 1:1, while larger images use a 2:1 ratio. Since you have no control over the debayering settings in either Final Cut or Color, the belief expressed by some users is that RED’s own post tools, like REDCINE-X, yield better image quality. I haven’t seen anything that’s an issue in my own testing and some of the threads at RedUser would indicate that the results are comparable in head-to-head testing. You’ll have to judge for yourself.

If you are planning to post via this workflow, then it’s important to think about the right image size before production starts. If you shoot at 4K 2:1 (4096×2048), the resulting 2K 2:1 image (2048×1024) in FCP will either have to be center-cut (a blow-up with some cropping on the edges) to fit an HD (1920×1080) frame – or it will have to be displayed with a letterbox mask.
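Here is that framing math spelled out – my own numbers, assuming simple uniform scaling:

```python
# Framing arithmetic for a 2K 2:1 master delivered as 16:9 HD.
src_w, src_h = 2048, 1024
hd_w, hd_h = 1920, 1080

# Center-cut: blow up until the height fills 1080, crop the sides.
cut_scale = hd_h / src_h             # 1.0547 (about a 5% blow-up)
cropped = cut_scale * src_w - hd_w   # ~240 px lost, ~120 per side

# Letterbox: shrink until the width fits 1920, mask top and bottom.
lb_scale = hd_w / src_w              # 0.9375
bar = (hd_h - lb_scale * src_h) / 2  # 60 px of black above and below

print(round(cropped), round(bar))    # 240 60
```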

Color scales the 2K image in the Geometry room as it renders. Since the majority of producers using this workflow are mainly interested in a proper HD image (1920×1080), I would recommend that the original footage be recorded in either 4K 16:9 (4096×2304) or 4K HD 16:9 (3840×2160), aka “quad HD”. The former gives you a little wiggle room for minor reframing, while the latter is an even multiple and will provide the most accurate downsampled image.

RED step-by-step with Final Cut Studio

Let’s take a look at the recommended Apple Final Cut Studio/RED workflow using an offline/online approach and camera raw files. Experienced RED owners who use FCP will be very familiar with this workflow. It’s also clearly described in RED’s FCP whitepaper. On the other hand, if you are about to approach your first RED project and have some trepidation about post, then this is for you. I’ll assume that you didn’t plunk down five grand for a RED Rocket accelerator card and don’t have the budget for a high-end finishing facility using Assimilate Scratch, Quantel Pablo, Avid DS or similar tools. In short, you are looking for the best way to leverage Apple Final Cut Studio and get the most out of your RED files.

Step 1: Download and install the RED Final Cut Studio Installer. This adds the QuickTime codec and the support modules for Final Cut Pro and Color. (The whitepaper is also included in this download.)

Step 2: Copy the RED camera files to your local hard drive array for editing. Back up the files to other archive media and store them in a secure location. (Avoid any illegal characters – like slashes, number signs, etc. – when you label folders.)

Step 3: Start a new FCP project. Use FCP’s Log and Transfer module to import the RED camera files. Set the L&T preferences to a target format of ProRes Proxy. Apply a color preset, like “daylight” if desired or leave “as shot”. This preset will be applied globally to all clips imported in this session.

Step 4: Edit your sequences as you normally would do. If you need to apply certain “looks” to satisfy the producer or client, use the FCP color correction tools for a temporary adjustment. Remember that this is offline editing. The goal is a good rough cut and ultimately an approved, “locked” picture cut.

Step 5: Once the cut is “locked”, use FCP’s Media Manager to generate a version of the final sequence for finishing. Run Media Manager and “create offline” to generate a new FCP project. Set the desired target sequence settings – most likely ProRes HQ or ProRes 4444 (1920×1080 24p 48kHz). Set handle lengths as desired.

Step 6: Open the new media-managed FCP project. Open the Log and Transfer tool. Change the L&T preferences to “native” and “as shot”. Select the master clips (media is currently off-line) and batch capture. The corresponding portions of these RED clips will now be re-imported as native files.

Step 7: Select the final sequence and “Send to Color”. Remember that all of the Color compatibility considerations still apply. Long sequences should be first broken down into shorter sequences. Speed ramps should be “baked in”. In short, do all the usual pre-flight preparation required by the FCP-Color roundtrip.

Step 8: Thanks to the RED Installer, Color has now gained a RED tab in the Primary In room. Camera raw adjustments include gamma, colorspace, temperature, tint, gains, ISO and more. This is similar to making camera raw adjustments to digital still photos in Photoshop. All clips with the native REDCODE codec can be modified by these settings. These changes are on a clip-by-clip basis, but you can copy-and-paste or drag the Primary In settings from one clip to multiple clips.

The rest of the color grading steps follow standard Color operation. Adjust the Geometry settings as desired, render and send back to FCP. There are no raw OLPF (optical low-pass filtering) controls for detail enhancement or sharpening within the RED tab. If you feel that the image is slightly soft, then apply some sharpening within the Color FX room.

It doesn’t really make a lot of difference whether you follow this approach or prep the files first and never return to the native .R3D files. Both methods work and result in great images. It really boils down to what works for you. The process isn’t as hard as people make it out to be. Jump in, test a bit first and then you’re ready to rock!

©2010 Oliver Peters

Adobe Premiere Pro CS5

Adobe is shipping its much-anticipated Creative Suite 5. The video applications are available either as single products or bundled in the Master Collection or Production Premium suite. Most video editors will be interested in the latter, because it includes Premiere Pro, OnLocation, Encore, After Effects, Photoshop Extended, Illustrator, Adobe Media Encoder, Soundbooth, Flash Catalyst and Flash Professional.

The big story is native 64-bit operation for all of the applications, which requires a 64-bit OS (Windows Vista, Windows 7 or Mac OS X “Snow Leopard”) running on a processor that supports 64-bit operation. The upside of this is much better performance, but the downside is that you’ll have to upgrade all of your plug-ins to 64-bit versions.

Concentration on performance

Adobe really homed in on performance. I’m running a late-2009 8-core (2.26GHz) Apple Mac Pro with 12GB RAM. The change from CS4 to CS5 provided noticeably faster launch times and, in general, more responsiveness in all of the Adobe applications, but in particular, Premiere Pro.

There have been quite a few “under-the-hood” workflow improvements, but the general editing features have not significantly changed. If you liked Premiere Pro before, then you’ll really love CS5. If you weren’t a fan, then improved performance and the easy integration of RED and HDSLR footage might sway you. I’ve never had any real stability issues with Premiere Pro, but one complaint you often hear is that it doesn’t scale well to large, complex projects. I haven’t tackled a large job with CS5 yet, so I can’t say, but overall, the application “feels” much more solid to me than previous releases.

Accelerated effects

The highlights are the Mercury Playback Engine, more native file and camera support and accelerated effects. According to Karl Soule (Adobe Technical Evangelist, Dynamic Media), “The Mercury Playback Engine is made up of a number of different technologies that use the latest hardware in computers. The three main technologies are 64-bit native code, multicore optimization and GPU acceleration. 64-bit code means that Premiere can access more RAM than before and can process larger numbers much faster. Multicore optimization means that Premiere Pro will take full advantage of all cores in multicore CPUs, splitting processor threads so that the load is balanced and distributed evenly. GPU acceleration uses both OpenGL technology for display playback and [NVIDIA’s] CUDA-accelerated effects and filters for color correction, chromakeying and more.”

Sean Kilbride (NVIDIA Technical Marketing Manager) continues, “By moving core visual processing tasks in the Mercury Playback Engine to CUDA, the [Adobe] team was able to create highly efficient GPU accelerated functions with performance gains of up to 70 times.” Adobe has certified several CUDA-enabled NVIDIA graphics cards, including the Quadro FX 5800/4800/3800 series and the GeForce GTX 285.

Since the Mercury Playback Engine is more than just GPU-based hardware acceleration, you’ll see the benefits of increased performance even with other cards. Karl Soule points out that, “On my 17-inch MacBook Pro laptop, I can edit clips from my Canon DSLR camera natively, without any need to transcode the footage ahead of time. I can also play back somewhere between five to seven layers of formats like AVC-Intra with no problem.”

The Mercury Playback Engine is designed to accelerate certain effects (like color correction, the Ultra keyer or picture-in-picture layers) and formats (like RED or HDV) and, in general, delivers more composited layers in real-time. As part of this redesign, the available Premiere Pro effects are marked with icons to let you know which offer hardware acceleration, 32-bit and/or YUV processing. I was able to test CS5 using both my stock GeForce 120 card and a Quadro FX 4800 loaned by NVIDIA for this review. Clearly the FX 4800 offers superior performance, but it wasn’t shabby with the GeForce, either. For example, if most of your work consists of “cuts-and-dissolves” projects shot on P2, then you’ll be very happy with a standard card.

Real-time

Premiere Pro CS5 now hosts many new, native formats, so you may typically see a yellow or red line over a timeline, but rendering isn’t a “given”.  A red render bar indicates a section that probably must be rendered to play back in real time at full frame rate. A yellow render bar indicates that it may not need to be rendered. If you are exporting to tape, you will need to render these sections, however, in most cases these sections will play smoothly enough to not interrupt your creative flow during editing.

Premiere Pro launches a version of Adobe Media Encoder when you choose to export the sequence to a deliverable file. It’s a full-featured encoder capable of compressing to a variety of formats for masters, web, BD/DVD and more. Mercury Playback kicks in here as well, because all rendering and encoding from Premiere Pro takes advantage of GPU-acceleration whenever possible. Depending on the format and the effects used, rendering with a CUDA-enabled card will be faster than one without this architecture. In order to maintain maximum quality, Premiere Pro CS5 encodes exported files by accessing the original source media. You have the option to use render files as part of the export, but generally these are considered temporary preview files.

A potpourri of formats

Some of the native formats handled by Premiere Pro CS5 include AVC-Intra, H.264, Apple ProRes and REDCODE camera raw. These formats all play smoothly under the right system requirements and Premiere Pro includes a number of corresponding project presets. (Some of these won’t be accessible in a trial mode.) Premiere Pro’s newfound performance doesn’t negate the need for a fast drive array, especially with native RED files.

I tested all of these formats with both the FX 4800 and the GeForce card. All played at least one stream in real-time on either card, but quality varied with the type of media. Premiere Pro throttles performance through its display resolution settings – typically full, half, quarter, etc. The FX 4800 clearly excelled with native RED 4K, playing more smoothly and at a higher resolution setting than the GeForce.

RED is a special case, of course, because thanks to the RED SDK, CS5 adds native control over the RAW colorimetry settings. You can actually edit a 4K sequence in Premiere Pro CS5! In fact, it’s less taxing to work in native 4K than to place the 4K media on an HD timeline, since less scaling is involved this way. Although you can work with native RED raw – and Premiere Pro handles it well – I wouldn’t really want to edit a project this way. First, going through the SDK doesn’t give you access to the curves control that you have in RED’s own software. Second, it’s still a bit touchy. I had problems playing this media with either card in a full screen mode. Lastly, you can change the raw settings by opening and adjusting the source settings for the file, but then it is very slow to update the look within the Premiere Pro project. For RED, I’d still opt for an offline-online editing workflow.

Adobe has been working closely with the BBC to tightly integrate Premiere Pro with P2 media and metadata. AVC-Intra performance was especially impressive. This is a computationally-intensive codec, but even though I was playing from a striped pair (RAID-0) of FireWire 800 drives, 1080p/23.98 files (100Mbps AVC-I) played and scrubbed as if they were DV. Hybrid DSLRs like the Canon EOS 5D Mark II are hot, and Adobe has taken that into account with CS5. H.264 files from a Canon 5D or 7D play quite smoothly in Premiere Pro CS5, so even Final Cut Pro fans may find themselves using Premiere Pro as the first choice when working with these projects.

Premiere Pro’s Media Browser is a handy feature that lets you find and review native-format camera files on your drives. Navigate to P2, XDCAM or RED media folders on your hard drive. It uses the format-specific folder/file hierarchy to hide the extraneous metadata and proxy folders that are associated with that specific format.

Pushing the Mercury

I put the NVIDIA Quadro FX 4800 through its paces. I was easily able to build up eight layers of native RED media on an HD timeline, complete with accelerated color correction effects and 2D picture-in-picture layering. The timeline stayed yellow as long as I was in the GPU-hardware-accelerated render setting. Remember, these are native 4K RED camera raw files, so there’s a ton of scaling happening! Since I was only playing the media from my FireWire 800 stripe, clearly the drives couldn’t keep up for long playback, but it did work and would have been better with a beefy drive array. As a general rule, when I could play native RED files at half-resolution with the Quadro card, the GeForce would have to run the same file at quarter-resolution to get acceptable playback.

A more realistic experiment was six layers of Apple ProResLT (with effects on each layer). This played fine in full screen at half resolution using the FX 4800, but started to drop frames at full resolution. Another variation was a single ProResLT layer with four filters (fast color corrector, Gaussian blur, noise and brightness/contrast), which played fine in full resolution as a full screen image. The same clip had to be dropped to half-resolution with the GeForce card.

As an example of how well the FX 4800 handled AVC-Intra, I built up nine layers of a two-minute long 1080p/23.98 clip. This played at full-resolution without dropping frames for the full length of the clip. Only when I added an accelerated effect to each of the nine layers did it start to drop frames, requiring me to drop to half-resolution for error-free playback.

Some bumps

One of the big selling points Adobe offers Final Cut and Avid editors is to use Premiere Pro as a conduit to get into After Effects. Once inside Premiere Pro, Adobe’s Dynamic Link offers superb integration with After Effects. Like CS4, Premiere Pro CS5 can import XML and AAF files. In actual practice, I haven’t had good luck with this. I’ve never been able to successfully bring in a Media Composer sequence and my success with Final Cut XML files has been spotty.

I was able to successfully import an FCP sequence only after I stripped out all effects filters, but then still had odd audio sync issues. The timeline clips were linked to ProResLT and AIFF files that were originally converted files from a Canon 5D camera and Zoom handheld audio recorder. Picture clips were perfectly positioned, but audio sync seemed to come from different sections of the audio files. Inexplicably, when I opened this same Premiere Pro project a day later, the sequence was perfectly in sync. The third day – back to random sync. My suspicion is that the double-system sound files from the Zoom might be the issue here. (EDIT: I did a little more digging and it seems that there is a known issue with Premiere Pro CS5 and AIFF files. Convert the audio files to WAVE format and it appears to fix this problem.)

Premiere Pro writes cache files for each piece of media, including database files and waveform caches. Adobe Media Encoder, Premiere Pro, Encore and Soundbooth share a common media cache database, so each of these applications can read from and write to the same set of cached media files. Premiere Pro also “conforms” all non-standard audio files to uncompressed 48kHz. This includes any compressed audio, like MP3 files, or audio with other sample rates. In the case of the handful of files I’ve been using for these tests, Premiere Pro has already consumed 1.5GB of space for conformed audio. This is by merely linking to files that already exist elsewhere on my hard drives. These files had 44.1kHz audio, requiring Premiere Pro to write new 48kHz audio files, which are used in the project. Generally 10GB of free space will be adequate for cache files and preview render files.
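A back-of-the-envelope calculation puts that 1.5GB figure in perspective. This sketch assumes the conformed files are uncompressed 32-bit float samples at 48kHz – my guess, since Adobe doesn’t document the exact on-disk format:

```python
# Rough size estimate for Premiere Pro's conformed audio, assuming
# uncompressed 32-bit float samples at 48 kHz (an assumption; Adobe
# doesn't document the exact on-disk format).
BYTES_PER_CHANNEL_SEC = 48_000 * 4

def conform_size_gb(minutes, channels=2):
    return minutes * 60 * channels * BYTES_PER_CHANNEL_SEC / 1e9

print(f"{conform_size_gb(60):.2f} GB")  # ~1.38 GB per hour of stereo
```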

Conclusion

I’ve barely scratched the surface, but you can see there’s a lot in Adobe Creative Suite 5. Aside from my few nitpicks, this is a very healthy upgrade that provides a number of feature enhancements, but truly delivers on the side of performance. Premiere Pro’s Mercury Playback Engine contains over 30 image processing effects, which take advantage of the NVIDIA GPU’s CUDA processing power, but you’ll enjoy a significant performance upgrade even with a non-CUDA graphics card.

If you’re choosing a nonlinear editor without any preconceived notions, then clearly Adobe is an outstanding choice on either a Windows or Mac workstation or laptop. In addition, vendors like AJA, Blackmagic Design and Matrox either already provide CS5-compatible hardware support with their i/o products or will later this year. Even if you’re happy with another NLE, you’ll find plenty of reasons to pick up CS5 Production Premium and add it to your toolkit.

Written for Videography and DV magazines (NewBay Media LLC).

©2010 Oliver Peters

Avid Media Composer 5

Avid is on a roll in 2010, highlighted by the purchase of Euphonix and the release of Media Composer version 5, its signature creative editing application. The company has been on an accelerated development pace for the Media Composer/NewsCutter/Symphony editing family, with recent releases adding such innovative features as AMA (Avid Media Access), Stereo 3D editing tools and Frame Rate Mix and Match. New features in version 5 (approximately the seventeenth generation of Media Composer) encompass expanded AMA support, the ability to work in RGB color space, in-context timeline editing tools and a redesigned audio framework. This is also the first Media Composer product to formally support third-party monitoring hardware.

AMA (Avid Media Access) expands

AMA is a plug-in API for camera manufacturers that lets Media Composer systems natively open and edit various acquisition formats, without the need to first transcode these files into MXF media. Earlier versions supported Panasonic P2, Sony XDCAM and Ikegami GFCAM media, but AMA in version 5 has become an even more open environment, supporting more native formats than most of the competition. New support has been added for Canon’s XF format and RED camera raw files. The biggest news, however, is that Avid has taken the initiative to natively support QuickTime media. This is vitally important, as Apple’s ProRes codec has been adopted for acquisition on several devices, including the AJA Ki Pro and the new ARRI Alexa digital camera. This openness extends to the H.264 files recorded by HD-capable DSLRs, like the Canon EOS 5D/7D/1D hybrid cameras.

ProRes, RED and H.264 editing was the first thing I tested. Avid’s recommended workflow is to use AMA as a way to cull selects before transcoding the media into the MXF format; however, the performance and stability indicate that it may be viable to stay in AMA for an entire project. To access AMA, you must link to an AMA volume, which can be a drive, folder or subfolder on your system. Unlike simply dragging a folder to your project window, Media Composer’s AMA imports all the camera metadata, where available, into a full-fledged Avid bin. The key difference is that the media is linked outside of Avid’s normal media databases. AMA-linked files are highlighted in yellow in the bin.

RED files come in through the RED SDK, so editors can manipulate the raw color metadata. As with other implementations of this SDK, the data access isn’t as deep as with RED’s own software (for example, no curves). Avid fits these files to an HD frame size at fixed parameters, so there is no adjustable control to scale or crop RED’s 4K images. Avid has added a new source-side reformat setting, so you have the option to use RED’s 2:1 aspect files with either a letterboxed or a center-cut framing inside a 16:9 HD frame.

On my 8-core Mac Pro (12GB of RAM, stock GeForce card), RED files played adequately at a draft (yellow) or medium (yellow-green) video quality setting, but not well at full quality (full green).

My conclusion – as with other native RED implementations – is that you wouldn’t want to edit a complex project using the camera raw files. I still contend that RED projects are best handled in a traditional offline/online editing workflow. Part of the Avid/RED story, however, is support for the RED Rocket card, a custom accelerator board. I couldn’t test that without a card, but Media Composer 5 is supposed to access this hardware (if installed) to vastly speed up transcoding RED files into MXF media.

Other formats provided a more pleasing experience. H.264 Canon files edited and played well, but were a bit clunky when scrubbing through the media files. Far more impressive was working with Apple ProRes media. Scrubbing, playing and editing these files was nearly as fluid as with Avid DNxHD media. It really does mean that you could record with an AJA Ki Pro, open the drive in Media Composer and start editing.

Next, I performed a basic layer test (five tracks – one background and four PIP layers). This dropped frames with five layers of ProRes, but had no problems playing in real-time when I used DNxHD media. This same layer test in Apple Final Cut Pro using the ProRes files performed as well as DNxHD media in Media Composer. The bottom line is that Media Composer 5 now handles ProRes files quite well, but you’ll still get a performance edge with media that is native to Avid.

Timeline enhancements

The biggest changes for veteran Avid editors are a new Smart Tool mode with drag-and-drop capabilities and a new audio track framework. Smart Tool offers contextual timeline editing functionality, with behavior more like Final Cut, Vegas Pro or Premiere Pro. When you hover over portions of a timeline track, Media Composer automatically enables certain segment editing modes. When you get close to a cut, a trim tool is automatically enabled. It’s easier to perform direct edits within the timeline without first entering a special mode. This behavior is optional, controlled by a new Smart Tool palette, thus giving editors two styles of working.

The audio side of Media Composer 5 has gained significant features from its Pro Tools sibling. I’ve harped on this for years, so kudos to the Avid designers and engineers involved in this effort. Media Composer now has both stereo and mono audio tracks and adds real-time, track-based audio plug-ins.

In fact, two Pro Tools plug-in formats are supported for the best of both worlds. Audiosuite filters can be applied to and rendered with individual clips (as before). Now real-time RTAS filters can also be applied to an entire audio track, as is common in DAW software, like Pro Tools. Up to five RTAS plug-ins can be applied per track. Real-time performance is based on the horsepower of your machine, so applying a handful of RTAS filters should be no problem, but if you had 16 tracks, each with five filters applied, it could be a different matter.

Avid currently only qualifies the RTAS filters that ship with Media Composer 5. It’s up to the third-party developers to qualify their own RTAS filters for Media Composer. I encountered existing RTAS versions of my BIAS plug-ins that had been installed as part of a past BIAS Peak Pro installation. These were in an existing plug-ins folder (in the application support files) and showed up in the Media Composer 5 effects palette. Unfortunately they didn’t work correctly. My point is that you might unknowingly have some existing RTAS plug-ins installed on your system from other unrelated audio software. These filters may not be fully compliant and should be removed.

Third-party hardware

Avid Media Composer editors have been screaming for I/O options outside of Avid’s proprietary hardware solutions. Media Composer 5 opens that door ever so slightly with the qualification of Matrox’s MXO2 Mini as a monitoring solution. You can still operate with Avid hardware, including Mojo, Mojo SDI, Adrenaline, Mojo DX and Nitris DX, but the Mini addresses the needs of file-based workflows, where tape ingest is of little importance. Naturally, users hope this will be broadened to include full support of all Matrox, Blackmagic Design, AJA and MOTU products – not to mention the newly acquired Euphonix Artist Series controllers. For now, the Mini is a good first step.

I tested Media Composer 5 on my MacBook Pro with a Matrox MXO2 Mini and the system worked as advertised. The same Matrox MXO2 Mac drivers (1.9.2 or higher) work for both Apple Final Cut Pro and Avid Media Composer 5. Be sure to install (or re-install) the MXO2 software after Media Composer has been installed. If done correctly, a button on the timeline toolbar toggles between 1394 and MXO2. Select MXO2 and video output passes through the Mini. The MXO2 Mini features HDMI and analog (composite, S or component) output in both SD and HD formats. It will also up-, down- and cross-convert the video, but Media Composer 5 presently doesn’t support ingest through the Mini.

Avid’s own solutions, like the Avid Nitris DX hardware, do provide some performance boosts, thanks to hardware scaling of thin raster formats, like HDV and DVCPRO HD, along with hardware decoding of the DNxHD codecs. You will need a Nitris DX to take full advantage of Media Composer 5’s support of the HD RGB color space. If you are working with HDCAM-SR 4:4:4, a dual-link SDI connection (available on Nitris DX) is required for ingest.

Final thoughts

Avid provides both Mac OS and Windows installers with the same purchase, but OS requirements have tightened. Media Composer 5 will run on Windows XP (SP3, 32-bit), Vista (SP2, 64-bit) or Windows 7 (64-bit). Mac users must upgrade to “Snow Leopard” Mac OS 10.6.3. The boxed, retail version of Media Composer 5 ($2495) includes Production Suite (Avid DVD, Avid FX, BCC filters, Sorenson Squeeze and SmartSound Sonicfire Pro), worth $3800 MSRP if purchased individually. The download version of Media Composer 5 is less ($2295), with the option to purchase Production Suite separately ($295). Avid doesn’t market Media Composer 5 as a “studio”, “suite” or “collection”, but the total package offers more than just an editing tool.

Media Composer 5 is a powerful upgrade to an industry-standard NLE, but I hope that new attention will be paid to the interface on the next revision cycle. The Smart Tool palette was placed into the timeline window with no ability to hide or move it. The custom color options have been streamlined, but you can no longer alter button colors, shapes or highlights. Avid has added a UI brightness slider (similar to Adobe’s applications), but you don’t get lighter text on a darker background until the darkest setting. A medium grey background leaves you with dark text that’s hard to read. Interface design is important to many editors and customization has been a high point for Avid, so I was disappointed to see these changes.

There are plenty of other small and large improvements that I haven’t covered: film-based metadata enhancements, support for AVCHD import, capture to the DVCPRO HD or XDCAM HD50 codecs over HD-SDI and e-mail notification of completed renders, just for starters. It’s clear that Media Composer 5 is a milestone software release for Avid. From native camera formats to RED and QuickTime support through AMA to unique editing tools, like Stereo 3D – this is a powerhouse post production solution. Avid editors will love it, but even those using competing tools are bound to look at Media Composer 5 with renewed interest. There is simply no other NLE that packs in as many creative features as Avid Media Composer 5.

Written for Videography and DV magazines (NewBay Media LLC).

©2010 Oliver Peters

Canon EOS 5D Mark II in the real world

A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough for Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and to retool the concepts behind its much anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.

[Image: Frame from Reverie]

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1 MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H264 codec) with a data rate of about 40Mbps. 16:9 is wider than 3:2, so the file for the moving image is cropped on the top and bottom compared with a comparable still photo.
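
A quick bit of arithmetic (mine, not Canon’s documentation) shows how much of the still frame the video crop gives up:

    # How much of the 3:2 still frame survives the 16:9 video crop.
    # Numbers come from the camera specs quoted above.
    still_w, still_h = 5616, 3744            # full-resolution 3:2 still

    cropped_h = still_w / (16 / 9)           # keep full width, crop height
    lost_rows = still_h - cropped_h          # rows trimmed top and bottom

    print(f"16:9 region: {still_w} x {cropped_h:.0f}")     # 5616 x 3159
    print(f"rows lost versus the still: {lost_rows:.0f}")  # 585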

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, the same colorimetry, exposure and balance are applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data is not “baked in” as it is with the video. Stills from the camera use the full resolution of the large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.

[Image: Frame from Reverie]

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera has a relatively small footprint compared to the typical video or film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video-friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.
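
The roughly 12-minute figure follows from the numbers already mentioned – here’s the back-of-the-envelope version (my arithmetic, using the ~40Mbps rate cited earlier):

    # FAT32's 4GB file cap divided by the ~40Mbps video data rate.
    file_limit_bits = 4 * 1024**3 * 8    # 4GB expressed in bits
    video_rate_bps = 40_000_000          # ~40Mbps H264 video

    seconds = file_limit_bits / video_rate_bps
    print(f"~{seconds / 60:.1f} minutes")    # ~14.3 min of pure video;
    # audio, container overhead and bit rate spikes bring the practical
    # limit down to the roughly 12 minutes quoted above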

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make post syncing picture and sound a lot easier.

[Image: Example of the rolling shutter effect used for interesting results]

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree. This includes the RED One. This image artifact arises because the sensor is not globally exposed at a single point in time, the way a frame of 35mm film is. Instead, portions of the sensor are exposed sequentially. This means that fast motion of the subject or the camera translates into the image appearing to wobble or skew. In the worst case, an object in the frame takes on a certain rubbery quality – hence the “jello” name. It can also show up with strobes and flashes. For example, I’ve seen it on strobe light and gunshot footage from a Sony EX3, where the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning of the camera or subject can cause it, but it’s also quite visible in the normal shakiness of handheld shots. Watch many of the short films on the web and you’ll notice the camera is almost always stationary, tripod-mounted or moving very slowly. In addition, lens stabilization circuitry can exacerbate the appearance of these artifacts – yet in other instances it helps reduce their severity.

[Image: Note the skew on the passing subway cars]

High-end CMOS cameras are engineered so that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.

[Image: Frame from My Room video]

But, how do you post it?

Like my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.

[Image: Frame from My Room video]

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally in focus at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – and shooting handheld without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like a motion picture lens. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.

Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H264 codec, so any Mac or PC QuickTime-compatible application can deal with the files. They are a true 30fps, so you can choose to work natively in 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit with Media Composer, simply import the camera movies into a 29.97 project using the RGB import settings, and the result will be standard Avid media files. The camera shoots in progressive scan, so footage converted to 29.97 looks like footage shot with any video camera in a 30p mode.
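
Just how minor is that 30-to-29.97 change? Running the numbers:

    import math

    # The 30 -> 29.97 conform plays the same frames 0.1% slower.
    speed = 29.97 / 30                        # 0.9990 speed factor
    cents = 1200 * math.log2(speed)           # resulting audio pitch shift

    print(f"speed: {speed:.4f}")              # 0.9990
    print(f"pitch shift: {cents:.2f} cents")  # about -1.7 cents; a full
                                              # semitone is 100 cents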

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H264 at 30fps), I decided to first convert the files out of H264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.
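
Compressor handled our batch, but the same idea can be scripted. Here’s a hypothetical ffmpeg-based version (my sketch – the folder names are made up and ffmpeg was not part of our actual workflow):

    import pathlib
    import subprocess

    # Batch-transcode the camera's H264 movies to ProRes 422 for editing.
    src = pathlib.Path("/Volumes/Footage/5D_cards")   # hypothetical paths
    dst = pathlib.Path("/Volumes/Footage/ProRes")
    dst.mkdir(exist_ok=True)

    for clip in sorted(src.glob("MVI_*.mov")):
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores",        # ProRes 422 video
            "-c:a", "pcm_s16le",     # uncompressed audio
            str(dst / clip.name),
        ], check=True)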

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slomo quality as if the footage was shot in an overcranked setting. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step 1), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering the metadata information of that file. This tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.
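
A conform is pure metadata arithmetic, which is why 150 clips took almost no time. For our 30-to-25 change the timing works out like this:

    # Effect of conforming a 30fps clip to 25fps playback. Only the
    # playback rate in the file's metadata changes; frames are untouched.
    fps_shot, fps_play = 30, 25

    speed = fps_play / fps_shot             # ~0.833: plays at 83% speed
    duration_scale = fps_shot / fps_play    # 1.2: clips run 20% longer

    clip_sec = 60                           # a 60-second clip as shot...
    print(f"{clip_sec * duration_scale:.0f} s at 25fps, "
          f"speed {speed:.1%}")             # 72 s, speed 83.3%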

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound by making it slower and lowering the pitch. Our video was cut to a music track so that was no big deal; however, we did have one sync dialogue line. I decided to fix just the one line by using Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.
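
Soundtrack Pro’s TimeStretch did the job here. For the curious, the same pitch-preserving stretch can be sketched with the librosa library (my stand-in example with a made-up file name, not the actual STP workflow):

    import librosa
    import soundfile as sf

    # Slow the sync dialogue to match the 25fps conform (25/30 = ~83%)
    # while keeping the original pitch.
    y, sr = librosa.load("dialogue_30fps.wav", sr=None)  # hypothetical file

    stretched = librosa.effects.time_stretch(y, rate=25 / 30)

    sf.write("dialogue_25fps.wav", stretched, sr)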

Step four – stills. I didn’t want to deal with the stills at their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 came in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.
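
Photoshop actions did this for us, but any batch tool would work. A comparable sketch with the Python Pillow library (folder names are hypothetical):

    from pathlib import Path
    from PIL import Image

    # Downsample the G10 stills and convert them to black-and-white.
    src, dst = Path("G10_stills"), Path("G10_bw")
    dst.mkdir(exist_ok=True)

    for jpg in sorted(src.glob("*.JPG")):
        img = Image.open(jpg)
        img = img.resize((3000, 2250), Image.LANCZOS)  # 4:3, as above
        img = img.convert("L")   # plain desaturation; Photoshop's
                                 # black-and-white tool mixes channels
                                 # for slightly more pleasing results
        img.save(dst / jpg.name, quality=95)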

[Image: Frame from My Room video]

Editing

Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization over a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as with any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.
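
One simple precaution (my own habit, not a camera requirement) is to tag each clip with an ID for its source card during the copy, since the MVI_#### numbering will eventually repeat across cards:

    import shutil
    from pathlib import Path

    # Copy clips off a card, prefixing each with a card ID to keep
    # file names unique across the whole project.
    def ingest(card_dir: str, card_id: str, dest: str) -> None:
        out = Path(dest)
        out.mkdir(parents=True, exist_ok=True)
        for clip in sorted(Path(card_dir).glob("MVI_*.mov")):
            shutil.copy2(clip, out / f"{card_id}_{clip.name}")

    ingest("/Volumes/CF_CARD", "C001", "/Volumes/Footage/day01")  # hypothetical paths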

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Set-up for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.
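
The time-of-day idea is simple enough to sketch – this is my rough guess at the approach, not Glue Tools’ actual code:

    import datetime
    from pathlib import Path

    # Derive a unique start timecode from a clip's time-of-day stamp.
    def start_timecode(clip: str, fps: int = 30) -> str:
        t = datetime.datetime.fromtimestamp(Path(clip).stat().st_mtime)
        ff = int(t.microsecond * fps / 1_000_000)  # fraction of a second -> frames
        return f"{t.hour:02}:{t.minute:02}:{t.second:02}:{ff:02}"

    print(start_timecode("MVI_0240.mov"))   # e.g. 14:32:07:18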

[Image: Frame from My Room video]

My final FCP sequence was graded in Apple Color – not because I had to, but to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry-level professional photographer, so it tends to offer more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast – not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems – especially given the nature of this shoot, which was closer to a documentary than a commercial. I could have easily graded this with my standard “witches brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I certainly didn’t see any major compression artifacts, and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to use the camera in a motion picture environment; and b) they would need to shoot carefully and stick to set-ups that steer away from some of these problems.

What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it into a 23.98fps timeline. This should sync perfectly with a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.
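
For editors outside the FCP ecosystem, the net effect of steps 3 through 6 – a motion-compensated 30-to-23.98 conversion at constant overall duration – could presumably be approximated with ffmpeg’s minterpolate filter. This is an assumption on my part; I haven’t tested this path:

    import subprocess

    # Hypothetical ffmpeg equivalent of the Cinema Tools/Compressor round
    # trip: motion-compensated rate conversion that leaves the duration
    # (and therefore audio sync) untouched.
    subprocess.run([
        "ffmpeg", "-i", "cut_30fps.mov",
        "-vf", "minterpolate=fps=24000/1001:mi_mode=mci",
        "-c:v", "prores",    # re-encode picture; expect long render times
        "-c:a", "copy",      # audio passes through unchanged
        "cut_2398.mov",
    ], check=True)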

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
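
Checking the arithmetic on those durations:

    # Sanity check of the numbers above.
    orig = 104                       # the 1:44 cut at 30fps, in seconds
    conformed = orig * 30 / 23.976   # step 3 stretches it to ~130 s (2:10)

    retime = orig / conformed        # step 6 retimes to ~79.9% of source
    print(f"{conformed:.0f} s conformed; retime to {retime:.1%}")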

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters