More 4K


I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.

A lot of arguments have been made that 4K cameras using a Bayer-style color-filter pattern on a single CMOS sensor don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and it’s why Sony uses an 8K sensor (F65) to deliver a 4K image.
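
To make that split concrete, here’s a quick sketch (a generic RGGB Bayer tile, not any particular camera’s actual filter layout) that counts the photosite distribution across a nominal 3840 x 2160 sensor:

```python
import numpy as np

# Generic 2x2 RGGB Bayer tile repeated across a nominal 3840 x 2160 sensor.
# Purely illustrative; real sensor designs vary.
tile = np.array([["R", "G"],
                 ["G", "B"]])
sensor = np.tile(tile, (2160 // 2, 3840 // 2))

for channel in ("R", "G", "B"):
    share = (sensor == channel).sum() / sensor.size
    print(channel, f"{share:.0%}")      # R 25%, G 50%, B 25%

# Green drives luminance detail, yet each row holds only 1920 green photosites,
# so there is no 1:1 relationship between green samples and the 3840 width.
print((sensor[0] == "G").sum())         # 1920
```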

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, then workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what, if the blow up isn’t that extreme, it may not look much different than the crop.
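
Here’s a minimal sketch of that scaling-order gotcha using Pillow (the file names are placeholders and real NLE pipelines are more involved): the same 2x punch-in is built once from an already-downscaled HD conform and once as a 1:1 crop from the native UHD frame.

```python
from PIL import Image

# Illustration only: placeholder file names, and NLE scalers are more complex.
src = Image.open("uhd_frame.png")            # assumed 3840 x 2160 source frame

# The trap: the clip is conformed (downscaled) to the 1080 timeline first,
# so the "punch-in" is a 200% blow-up of already-downscaled pixels.
hd_conform = src.resize((1920, 1080), Image.LANCZOS)
from_conform = hd_conform.resize((3840, 2160), Image.LANCZOS) \
                         .crop((960, 540, 2880, 1620))

# The intended result: crop 1:1 into the native UHD frame. Same framing,
# but every pixel comes straight from the 4K original.
from_native = src.crop((960, 540, 2880, 1620))

from_conform.save("punch_in_from_conform.png")
from_native.save("punch_in_from_native.png")
```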

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about these 4K-to-HD comparisons, but the results really depend on the scaler you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better it does, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.
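
As a rough illustration of “not all scaling is equal,” the sketch below upscales one HD frame to UHD with several standard Pillow resampling filters. These are generic stand-ins, not the proprietary algorithms in Resolve or Instant 4K, and the file name is a placeholder.

```python
from PIL import Image

# Upscale the same HD frame to UHD with different generic resampling filters.
# Placeholder file name; Resolve and Instant 4K use their own algorithms.
hd = Image.open("hd_frame.png")              # assumed 1920 x 1080 source frame

for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("bicubic", Image.BICUBIC),
                     ("lanczos", Image.LANCZOS)]:
    hd.resize((3840, 2160), method).save(f"uhd_{name}.png")

# Viewed 1:1, the bicubic and lanczos results hold edges noticeably better
# than nearest or bilinear; the trade-off is a longer render, as noted above.
```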

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes4444 file, a converted 1080 ProRes4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you superior results to the original (crisper hair); but, a blow-up suffers when you are using a worse codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time, it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not, if it’s a stereo3D film. Yet, much of this looks pretty good, doesn’t it?

Everything I talked about, regarding blowing up HD by up to 120% or more, still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this, because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a 1.2 factor, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than to do it in-camera.
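
A quick back-of-the-envelope check of that open-gate math (the open-gate width below is an approximation of “about 3.4K,” not a quoted spec):

```python
# Rough numbers only; the open-gate width is an approximation, not a spec.
open_gate = 3414        # roughly "3.4K" photosites exposed across the sensor
cropped = 3200          # lightly cropped to "3.2K"
uhd = 3840              # UltraHD recording width

print(open_gate - cropped)   # ~214 photosites trimmed by the crop
print(uhd / cropped)         # 1.2 -> the in-camera upscale factor
```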

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.


©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUT). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage to a straight Rec709 conversion or to a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both the QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F-3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use it with images that were already encoded with vibrant Rec709 colors.
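
For reference, the slope/offset/power math behind those controls is easy to sketch. Here’s a minimal example in normalized 0-1 RGB, omitting saturation and the log-C to Rec709 conversion that surround it in the tool’s signal flow:

```python
import numpy as np

# Minimal ASC CDL-style per-channel math: out = (in * slope + offset) ** power.
# Saturation and the log-C -> Rec709 stage are intentionally left out here.
def apply_cdl(rgb, slope, offset, power):
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb * slope + offset               # slope ~ gain, offset ~ lift
    out = np.clip(out, 0.0, None)            # keep values non-negative for **
    return out ** power                      # power ~ gamma

# Example: warm the image slightly and lift the blacks a touch.
print(apply_cdl([0.18, 0.18, 0.18],          # middle gray in
                slope=[1.05, 1.00, 0.95],
                offset=[0.01, 0.01, 0.01],
                power=[1.00, 1.00, 1.00]))
```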

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
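
If you’re curious what the exported file actually contains, a .cube LUT is plain text. Here’s a hedged sketch that writes a trivial identity 3D LUT, assuming the Resolve/IRIDAS .cube convention (a LUT_3D_SIZE header, red index varying fastest); a real exported look would carry corrected values instead:

```python
# Writes an identity 3D LUT, assuming the Resolve/IRIDAS .cube layout:
# a LUT_3D_SIZE header followed by R G B triplets, red index varying fastest.
size = 17
with open("identity.cube", "w") as f:
    f.write('TITLE "identity"\n')
    f.write(f"LUT_3D_SIZE {size}\n")
    for b in range(size):
        for g in range(size):
            for r in range(size):
                f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")
```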

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.


©2014 Oliver Peters

NAB 2014 Thoughts

Whodathunkit? More NLEs, new cameras from new vendors and even a new film scanner! I’ve been back from NAB for a little over a week and needed to get caught up on work while decompressing. The following are some thoughts in broad strokes.

Avid Connect. My trip started early with the Avid Connect customer event. This was a corporate gathering with over 1,000 paid attendees. Avid execs and managers outlined the corporate vision of Avid Everywhere in presentations that were head-and-shoulders better than any executive presentations Avid has given in years. For many who attended, it was to see if there was still life in Avid. I think the general response was receptive and positive. Avid Everywhere is basically a realignment of existing and future products around a platform concept. That has more impact if you own Avid storage or asset management software. Less so, if you only own a seat of Media Composer or ProTools. No new software features were announced, but new pricing models were introduced, with options to purchase or rent individual seats of the software – or to rent floating licenses in larger quantities.

4K. As predicted, 4K was all over the show. However, when you talked to vendors and users, there was little clear direction about actual mastering in 4K. It is starting to be a requirement in some circles, like delivering to Netflix, for example; but for most users 4K stops at acquisition. There is interest for archival reasons, as well as for reframing shots when the master is HD or 2K.

Cameras. New cameras from Blackmagic Design. Not much of a surprise there. One is the bigger, ENG-style URSA, which is Blackmagic’s solution to all of the add-ons people use with smaller HDSLR-sized cameras. The biggest feature is a 10” flip-out LCD monitor. AJA was the real surprise with its own 4K Cion camera. Think KiPro Quad with a camera built around it. Several DPs I spoke with weren’t that thrilled about either camera, because of size or balance. A camera that did get everyone jazzed was Sony’s A7s, one of their new Alpha series HDSLRs. It’s 4K-capable when recorded via HDMI to an external device. The images were outstanding. Of course, 4K wasn’t everywhere. Notably not at ARRI. The news there is the Amira, a sibling to the Alexa. Both share the same sensor design, with the Amira designed as a documentary camera. I’m sure it will be a hit, in spite of being a 2K camera.

Mac Pro. The new Mac Pro was all over the show in numerous booths. Various companies showed housings and add-ons to mount the Mac Pro for various applications. Lots of Thunderbolt products on display to address expandability for this unit, as well as Apple laptops and eventually PCs that will use Thunderbolt technology. The folks at FCPworks showed a nice DIT table/cart designed to hold a Mac Pro, keyboard, monitoring and other on-set essentials.

FCP X. Speaking of FCP X, the best place to check it out was at the off-site demo suite that FCPworks was running during the show. The suite demonstrated a number of FCP X-based workflows using third-party utilities, shared storage from Quantum and more. FCP X was in various booths on the NAB show floor, but to me it seemed limited to partner companies, like AJA. I thought the occurrences of FCP X in other booths were overshadowed by Premiere Pro CC sightings. No new FCP X feature announcements or even hints were made by Apple in any private meetings.

NLEs. The state of nonlinear editing is in more flux than ever. FCP X seems to be picking up a little steam, as is Premiere Pro. Yet, still no clear market leader across all sectors. Autodesk announced Smoke 2015, which will be the last version you can buy. Following Adobe’s lead, this year they shift to a rental model for their products. Smoke 2015 diverges more from the Flame UI model with more timeline-based effects than Smoke 2013. Lightworks for the Mac was demoed at the EditShare booth, which will make it another new option for Mac editors. Nothing new yet out of Avid, except some rebranding – Media Composer is now Media Composer | Software and Sphere is now Media Composer | Cloud. Expect new features to be rolled in by the end of this year. The biggest new player is Blackmagic Design, who has expanded the DaVinci Resolve software into a full-fledged NLE. With a cosmetic resemblance to FCP X, it caused many to dub it “the NLE that Final Cut Pro 8 should have been”. Whether that’s on the mark or just irrational exuberance has yet to be determined. Suffice it to say that Blackmagic is serious about making it a powerful editor, which for now is targeted at finishing.

Death of i/o cards. I’ve seen little mention of this, but it seems to me that dedicated PCIe video capture cards are a thing of the past. KONA and Decklink cards are really just there to support legacy products. They have less relevance in the file-based world. Most of the focus these days is on monitoring, which can be easily (and more cheaply) handled by HDMI or small Thunderbolt devices. If you looked at AJA and Matrox, for example, most of the target for PCIe cards is now to supply the OEM market. AJA supplies Quantel with their 4K i/o cards. The emphasis for direct customers is on smaller output-only products, mini-converters or self-contained format converters.

Film. If you were making a custom 35mm film scanner – get out of the business, because you are now competing against Blackmagic Design! Their new film scanner is based on technology acquired through the purchase of Cintel a few months ago. Now Blackmagic has introduced a sleek 35mm scanner capable of up to 30fps with UltraHD images. It’s $30K and connects to a Mac Pro via Thunderbolt 2. Simple operation and easy software (plus Resolve) will likely rekindle the interest at a number of facilities for the film transfer business. That will be especially true at sites with a large archive of film.

Social. Naturally NAB wouldn’t be the fun it is without the opportunity to meet up with friends from all over the world. That’s part of what I get out of it. For others it’s the extra training through classes at Post Production World. The SuperMeet is a must for many editors. The Avid Connect gala featured entertainment by the legendary Nile Rodgers and his band Chic. Nearly two hours of non-stop funk/dance/disco. Quite enjoyable regardless of your musical taste. So, another year in Vegas – and not quite the ho-hum event that many had thought it would be!

Click here for more analysis at Digital Video’s website.

©2014 Oliver Peters


The NLE that wouldn’t die II


With echoes of Monty Python in the background, two years on, Final Cut Pro 7 and Final Cut Studio are still widely in use. As I noted in my post from last November, I still see facilities with firmly entrenched and mature FCP “legacy” workflows that haven’t moved to another NLE yet. Some were ready to move to Adobe until they learned subscription was the only choice going forward. Others maintain a fanboy’s faith in Apple that the next version will somehow fix all the things they dislike about Final Cut Pro X. Others simply haven’t found the alternative solutions compelling enough to shift.

I’ve been cutting all manner of projects in FCP X since the beginning and am currently using it on a feature film. I augment it in lots of ways with plug-ins and utilities, so I’m about as deep into FCP X workflows as anyone out there. Yet, there are very few projects in which I don’t touch some aspect of Final Cut Studio to help get the job done. Some fueled by need, some by personal preference. Here are some ways that Studio can still work for you as a suite of applications to fill in the gaps.

DVD creation

There are no more version updates to Apple’s (or Adobe’s) DVD creation tools. FCP X and Compressor can author simple “one-off” discs using their export/share/batch functions. However, if you need a more advanced, authored DVD with branched menus and assets, DVD Studio Pro (as well as Adobe Encore CS6) is still a very viable tool, assuming you already own Final Cut Studio. For me, the need to do this has been reduced, but not completely gone.

Batch export

Final Cut Pro X has no batch export function for source clips. This is something I find immensely helpful. For example, many editorial houses specify that their production company client supply edit-friendly “dailies” – especially when final color correction and finishing will be done by another facility or artist/editor/colorist. This is a throwback to film workflows and is most often the case with RED and ALEXA productions. Certainly a lot of the same processes can be done with DaVinci Resolve, but it’s simply faster and easier with FCP 7.

In the case of ALEXA, a lot of editors prefer to do their offline edit with LUT-corrected, Rec 709 images, instead of the flat, Log-C ProRes 4444 files that come straight from the camera. With FCP 7, simply import the camera files, add a LUT filter like the one from Nick Shaw (Antler Post), enable TC burn-in if you like and run a batch export in the codec of your choice. When I do this, I usually end up with a set of Rec 709 color, ProResLT files with burn-in that I can use to edit with. Since the file name, reel ID and timecode are identical to the camera masters, I can easily edit with the “dailies” and then relink to the camera masters for color correction and finishing. This works well in Adobe Premiere Pro CC, Apple FCP 7 and even FCP X.
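
Just to illustrate what that dailies pass is doing, the same idea can be sketched outside an NLE. This is not the FCP 7 workflow described above, simply a hedged example using ffmpeg from Python (placeholder folder names and LUT file; timecode burn-in is omitted; -profile:v 1 selects ProRes LT with the prores_ks encoder):

```python
import pathlib
import subprocess

# Hedged sketch of a LUT-corrected ProRes LT dailies pass, offered only as an
# illustration of the idea, not the FCP 7 batch export described above.
# The folder names and the Rec 709 LUT file are placeholders.
LUT = "alexa_logc_to_rec709.cube"

for clip in sorted(pathlib.Path("camera_masters").glob("*.mov")):
    out = pathlib.Path("dailies") / clip.name    # keep file names matching
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", f"lut3d={LUT}",                   # bake in the viewing LUT
        "-c:v", "prores_ks", "-profile:v", "1",  # ProRes LT
        "-c:a", "copy",
        str(out),
    ], check=True)

# Matching file names (plus carrying reel ID and timecode through) are what
# make the later relink back to the camera masters painless.
```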

Timecode and reel IDs

When I work with files from the various HDSLRs, I prefer to convert them to ProRes (or DNxHD), add timecode and reel ID info. In my eyes, this makes the file professional video media that’s much more easily dealt with throughout the rest of the post pipeline. I have a specific routine for doing this, but when some of these steps fail, due to some file error, I find that FCP 7 is a good back-up utility. From inside FCP 7, you can easily add reel IDs and also modify or add timecode. This metadata is embedded into the actual media file and readable by other applications.

Log and Transfer

Yes, I know that you can import and optimize (transcode) camera files in FCP X. I just don’t like the way it does it. The FCP 7 Log and Transfer module allows the editor to set several naming preferences upon ingest. This includes custom names and reel IDs. That metadata is then embedded directly into the QuickTime movie created by the Log and Transfer module. FCP X doesn’t embed name and ID changes into the media file, but rather into its own database. Subsequently this information is not transportable by simply reading the media file within another application. As a result, when I work with media from a C300, for example, my first step is still Log and Transfer in FCP 7, before I start editing in FCP X.

Conform and reverse telecine

A lot of cameras offer the ability to shoot at higher frame rates with the intent of playing this at a slower frame rate for a slow motion effect – “overcranking” in film terms. Advanced cameras like the ALEXA, RED One, EPIC and Canon C300 write a timebase reference into the file that tells the NLE that a file recorded at 60fps is to be played at 23.98fps. This is not true of HDSLRs, like a Canon 5D, 7D or a GoPro. You have to tell the NLE what to do. FCP X only does this through its Retime effect, which means you are telling the file to be played as slomo, thus requiring a render.

I prefer to use Cinema Tools to “conform” the file. This alters the file header information of the QuickTime file, so that any application will play it at the conformed, rather than recorded frame rate. The process is nearly instant and when imported into FCP X, the application simply plays it at the slower speed – no rendering required. Just like with an ALEXA or RED.
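
For illustration only, here’s a rough analogue of that conform step outside Cinema Tools: rewrite the clip’s timing without re-rendering any frames. The hedged sketch below uses ffmpeg’s -itsscale input option with stream copy (placeholder file names; audio is dropped because it would no longer sync). It is not what Cinema Tools does internally, but it produces a similar “play slower without rendering” result.

```python
import subprocess

# Hedged illustration of a frame-rate conform without re-encoding: stretch the
# input timestamps of a 59.94 fps HDSLR clip so it plays back at 23.976 fps.
# Placeholder file names; audio is dropped since it would no longer be in sync.
scale = 59.94 / 23.976            # = 2.5 -> each frame is shown 2.5x longer

subprocess.run([
    "ffmpeg",
    "-itsscale", str(scale),      # input option: rescale the timestamps
    "-i", "gopro_5994.mov",
    "-c:v", "copy", "-an",        # copy the video frames untouched, no audio
    "conformed_2398.mov",
], check=True)
```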

Another function of Cinema Tools is reverse telecine. If a camera file was recorded with built-in “pulldown” – sometimes called 24-over-60 – additional redundant video fields are added to the file. You want to remove these if you are editing in a native 24p project. Cinema Tools will let you do this and in the process render a new, 24p-native file.

Color correction

I really like the built-in and third-party color correction tools for Final Cut Pro X. I also like Blackmagic Design’s DaVinci Resolve, but there are times when Apple Color is still the best tool for the job. I prefer its user interface to Resolve, especially when working with dual displays and if you use an AJA capture/monitoring product, Resolve is a non-starter. For me, Color is the best choice when I get a color correction project from outside where the editor used FCP 7 to cut. I’ve also done some jobs in X and then gone to Color via Xto7 and then FCP 7. It may sound a little convoluted, but is pretty painless and the results speak for themselves.

Audio mixing

I do minimal mixing in X. It’s fine for simple mixes, but for me, a track-based application is the only way to go. I do have X2Pro Audio Convert, but many of the out-of-house ProTools mixers I work with prefer to receive OMFs rather than AAFs. This means going to FCP 7 first and then generating an OMF from within FCP 7. This has the added advantage that I can proof the timeline for errors first. That’s something you can’t do if you are generating an AAF without any way to open and inspect it. FCP X has a tendency to include many clips that are muted and usually out of your way inside X. By going to FCP 7 first, you have a chance to clean up the timeline before the mixer gets it.

Any complex projects that I mix myself are done in Adobe Audition or Soundtrack Pro. I can get to Audition via the XML route – or I can go to Soundtrack Pro through XML and FCP 7 with its “send to” function. Either application works for me and most of my third-party plug-ins show up in each. Plus they both have a healthy set of their own built-in filters. When I’m done, simply export the mix (and/or stems) and import the track back into FCP X to marry it to the picture.

Project trimming

Final Cut Pro X has no media management function.  You can copy/move/aggregate all of the media from a single Project (timeline) into a new Event, but these files are the source clips at full length. There is no ability to create a new project with trimmed or consolidated media. That’s when source files from a timeline are shortened to only include the portion that was cut into the sequence, plus user-defined “handles” (an extra few frames or seconds at the beginning and end of the clip). Trimmed, media-managed projects are often required when sending your edited sequence to an outside color correction facility. It’s also a great way to archive the “unflattened” final sequence of your production, while still leaving some wiggle room for future trimming adjustments. The sequence is editable and you still have the ability to slip, slide or change cuts by a few frames.

I ran into this problem the other day, where I needed to take a production home for further work. It was a series of commercials cut in FCP X, from which I had recut four spots as director’s cuts. The edit was locked, but I wanted to finish the mix and grade at home. No problem, I thought. Simply duplicate the project with “used media”, create the new Event and “organize” (copies media into the new Event folder). I could live with the fact that the media was full length, but there was one rub. Since I had originally edited the series of commercials using Compound Clips for selected takes, the duping process brought over all of these Compounds – even though none was actually used in the edit of the four director’s cuts. This would have resulted in copying nearly two-thirds of the total source media. I could not remove the Compounds from the copied Event without also removing them from the original, which I didn’t want to do.

The solution was to send the sequence of four spots to FCP 7 and then media manage that timeline into a trimmed project. The difference was 12GB of trimmed source clips instead of HUNDREDS of GB. At home, I then sent the audio to Soundtrack Pro for a mix and the picture back to FCP X for color correction. Connect the mix back to the primary storyline in FCP X and call it done!

I realize that some of this may sound a bit complex to some readers, but professional workflows are all about having a good toolkit and knowing how to use it. FCP X is a great tool for productions that can work within its walls, but if you still own Final Cut Studio, there are a lot more options at your disposal. Why not continue to use them?

©2013 Oliver Peters

The East

Director Zal Batmanglij’s The East caught the buzz at Sundance and SXSW. It was produced by Scott Free Productions with Fox Searchlight Pictures. Not bad for the young director’s sophomore outing. The film takes its name from The East, a group of eco-terrorists and anarchists led by Benji, who is played by Alexander Skarsgard (True Blood). The group engages in “jams” – their term for activist attacks on corporations, which they tape and put out on the web. Sarah, played by Brit Marling (Arbitrage), is a corporate espionage specialist who is hired to infiltrate the group. In that process, she comes to sympathize with the group’s ideals, if not its violent tactics. She finds herself both questioning her allegiances and falling in love with Benji. Marling also co-wrote the screenplay with Batmanglij.

In addition to a thriller plot, the film’s production also had some interesting twists along the way to completion. First, it was shot with an ARRI ALEXA, but unlike most films that use the ALEXA, the recording was done as ProRes4444 to the onboard SxS cards, instead of ARRIRAW to an external recorder. That will make it one of the few films to date in mainstream release to do so. ProRes dailies were converted into color-adjusted Avid DNxHD media for editing.

Second, the film went through a change of editors due to prior commitments. After the production wrapped and a first assembly of the film was completed, Andrew Weisblum (Moonrise Kingdom, Black Swan) joined the team to cut the film. Weisblum’s availability was limited to four months, though, since he was already committed to editing Darren Aronofsky’s Noah. At that stage, Bill Pankow (The Black Dahlia, Carlito’s Way) picked up for Weisblum and carried the film through to completion.

Andrew Weisblum explained, “When I saw the assembly of The East, I really felt like there was a good story, but I had already committed to cut Noah. I wasn’t quite sure how much could be done in the four months that I had, but left the film at what we all thought was a cut that was close to completion. It was about 80% done and we’d had an initial preview. Bill [Pankow] was a friend, so I asked if he would pick it up for me there, assuming that the rest would be mainly just a matter of tightening up the film. But it turned out to be more involved than that.”

Bill Pankow continued, “I came on board June of last year and took the picture through to the locked cut and the mix in November. After that first screening, everyone felt that the ending needed some work. The final scene between the main characters wasn’t working in the way Zal and Brit had originally expected. They decided to change some things to serve the drama better and to resolve the relationship of the main characters. This required shooting additional footage, as well as reworking some of the other scenes. At that point we took a short hiatus while Zal and Brit rewrote and reshot the last scene. Then another preview and we were able to lock the cut.”

Like nearly all films, The East took on a life of its own in the cutting room. According to Weisblum, “The film changed in the edit from the script. Some of what I did in the cut was to bring in more tension and mystery in the beginning to get us to the group [The East] more quickly. We also simplified a number of story points. Nothing really radical – although it might have felt like that at the time – but just removing tangents that distracted from the main story.” Pankow added, “We didn’t have any length constraints from Fox, so we were able to optimize each scene. Towards the end of the film, there were places that needed extra ‘moments’ to accentuate some of the emotion of what the Sarah character was feeling. In a few cases, this meant re-using shots that might have appeared earlier. In addition to changing the last scene, a few other areas were adjusted. One or two scenes were extended, which in some cases replaced other scenes.”

Since the activists document their activities with video cameras, The East incorporates a number of point-of-view shots taken with low-res cameras. Rather than create these as visual effects shots, low-res cameras were used for the actual photography of that footage. Some video effects were added in the edit and some through the visual effects company. Weisblum has worked as a VFX editor (The Fountain, Chicago), so creating temporary visual effects is second nature. He said, “I usually do a number of things either in the Avid or using [Adobe] After Effects. These are the typical ‘split screen’ effects where takes are mixed to offset the timing of the performances. In this film, there was one scene where two characters [Tim and Sarah] are having a conversation on the bed. I wanted to use a take where Tim is sitting up, but of course, he’s partially covered by Sarah. This took a bit more effort, because I had to rotoscope part of one shot into the other, since the actors were overlapping each other. I’ll do these things whenever I can, so that the film plays in as finished a manner as possible during screening. It also gives the visual effects team a really good roadmap to follow.”

Bill Pankow has worked as an editor or assistant on over forty features and brings some perspective to modern editing. He said, “I started editing digitally on Lightworks, but then moved to Avid. At the time, Lightworks didn’t keep up and Avid gave you more effects and titling tools, which let editors produce a more polished cut. On this film the set-up included two Avid Media Composer systems connected to shared storage. I typically like to work with two assistants when I can. My first assistant will add temporary sound effects and clean up the dialogue, while the second assistant handles the daily business and paperwork of the cutting room. Because assistants tend to have their own specialties these days, it’s harder for assistants to learn how to edit. I try to make a point of involving my assistants in watching the dailies, reviewing a scene when it’s cut and so on. This way they have a chance to learn and can someday move into the editor’s chair themselves.”

Both editors agree that working on The East was a very positive experience. Weisblum said, “Before starting, I had a little concern for how it would be working with Zal and Brit, especially since Brit was the lead actress, but also co-writer and producer. However, it was very helpful to have her involved, as she really helped me to understand the intentions of the character. It turned out to be a great collaboration.” Pankow concluded, “I enjoyed the team, but more so, I liked the fact that this film resonates emotionally, as well as politically, with the current times. I was very happy to be able to work on it.”

Originally written for Digital Video magazine

©2013 Oliver Peters

Phil Spector

Phil Spector became famous as a music industry icon. The legendary producer, who originated the “wall of sound” production technique of densely-layered arrangements, worked with a wide range of acts, including the Ronettes, the Righteous Brothers and the Beatles. Unfortunately, fame can also have its infamous side. Spector abruptly came back into public notice through the circumstances of the 2003 death of actress Lana Clarkson and his subsequent criminal trials, culminating in a 2009 conviction for second-degree murder.

The story of his first murder trial and the relationship between Spector (Al Pacino) and defense attorney Linda Kenney Baden (Helen Mirren) form the basis for the new film by HBO Films. Phil Spector, which is executive produced by Barry Levinson (Rain Man), was directed by celebrated screenwriter/director David Mamet (The Unit, The Shield, Hannibal, Wag the Dog). Rather than treat it as a biopic or news story, Mamet chose to take a fictionalized approach that chronicles Spector’s legal troubles as a fall from grace.

One key member of the production team was editor Barbara Tulliver (Too Big to Fail, Lady in the Water, Signs), who has previously collaborated with Mamet. She started as a film editor working on commercials in New York, but quickly transitioned into features. According to Tulliver, “I assisted on David’s first two films and then cut my first feature as an editor with him, so we have established a relationship. I also cut Too Big to Fail for HBO and brought a lot of the same editorial crew for this one, so it was like a big family.”

As with most television schedules, Phil Spector was shot and completed in a time frame and with a budget more akin to a well-funded independent feature, rather than a typical studio film. Tulliver explained, “Our schedule to complete this film was between that of a standard TV project and a feature. If a studio film has six weeks to complete a mix, a film like this would have three. The steps are the same, but the schedule is shrunk. I was cutting during the thirty-day production phase, so I had a cut ready for David a week after he wrapped. HBO likes to see things early, so David had his initial cut done after five weeks, instead of the typical ten-week time frame. Like any studio, HBO will give us notes, but they are very respectful of the filmmakers, which is why they can attract the caliber of talent that they do for these films. At that point we went into a bit of a hold, because David wanted some additional photography and that took a while until HBO approved it.”

The production itself was handled like a film shoot using ARRI Alexa cameras in a single-camera style. An on-set DIT generated the dailies used for the edit. Although you wouldn’t consider this a visual effects film, it still had its share of shots. Tulliver said, “There were a lot of comps that are meat-and-potatoes effects these days. For instance, the film was shot in New York, so in scenes when Spector arrives at the courthouse in Los Angeles, the visual effects department had to build up all of the exteriors to look like LA. There are a number of TV and computer screens, which were all completed in post. Plus a certain amount of frame clean-ups, like removing unwanted elements from a shot.”

Mamet wrote a very lean screenplay, so the length of the cut didn’t present any creative challenges for Tulliver. She continued, “David’s scripts are beautifully crafted, so there was no need to re-arrange scenes. We might have deleted one scene. David makes decisions quickly and doesn’t overshoot. Like any director, he is open to changes in performance; but, the actors have such respect for his script, that there isn’t a lot of embellishment that might pose editing challenges in another film. Naturally with a cast like this, the performances were all good. The main challenge we had was to find ways to integrate Spector’s songs into the story. How to use the music to open up scenes in the film and add montages. This meant all of the songs had to be cleared. We were largely successful, except with John Lennon’s Imagine, where Yoko Ono had the final say. Although she was open to our using the song, ultimately she and David couldn’t agree to how it would be integrated creatively into the film.”

Phil Spector was cut digitally on an Avid Media Composer. Like many feature editors, Barbara Tulliver started her career cutting film. She said, “I’m one of the last editors to embrace digital editing. I went into it kicking and screaming, but so did the directors I was working with at the time. When I finally moved over to Avid, they were pretty well established as the dominant nonlinear edit system for films. I do miss some things about editing on film, though. There’s a tactile sense of the film that’s almost romantic. Because it takes longer to make changes, film editing is more reflective. You talk about it more and often in the course of these discussions, you discover better solutions than if you simply tried a lot of variations. In the film days, you talked about the dramatic and emotional impact of these options. This is still the case, but one has to be more vigilant about making that happen – as opposed to just re-cutting a scene twenty different ways, because it is easy and fast – and then not know what you are looking at anymore.”

“Today, I cut the same way I did when I was cutting film. I like to lay out my cut as a road map. I’ll build it rough to get a sense of the whole scene, rather than finesse each single cut as I go. After I’ve built the scene that way, I’ll go back and tweak and trim to fine-tune the cut. Digital editing for me is not all about the bells-and-whistles. I don’t use some of the Avid features, like multi-camera editing or Script Sync. While these are great features, some are labor-intensive to prepare. When you have a minimal crew without a lot of assistants, I prefer to work in a more straightforward fashion.”

Tulliver concluded with this thought, “Although I may be nostalgic about the days of film editing, it would be a complete nightmare to go back to that. In fact, several years ago one director was interested in trying it, so I investigated what it would take. It’s hard to find the gear anymore and when you do, it hasn’t been properly maintained, because no one has been using it. Not to mention finding mag stripe and other materials that you would need. The list of people and labs that actually know how to handle a complete film project is getting smaller each year, so going back would just about be impossible. While film might not be dead as a production medium, it has passed that point in post.”

Originally written for Digital Video magazine.

©2013 Oliver Peters

Zero Dark Thirty

Few films have the potential to be as politically charged as Zero Dark Thirty. Director Kathryn Bigelow (The Hurt Locker, K-19: The Widowmaker) and producer/writer Mark Boal (The Hurt Locker, In the Valley of Elah) have evaded those minefields by focusing on the relentless CIA detective work that led to the finding and killing of Osama bin Laden by US Navy SEALs. Shot and edited in a cinema verite style, Zero Dark Thirty is more of a suspenseful thriller than an action-adventure movie. It seeks to tell a raw, powerful story that’s faithful to the facts without politicizing the events.

The original concept started before the raid on bin Laden’s compound occurred. It was to be about the hunt, but not finding him, after a decade of searching. The SEAL raid changed the direction of the film; but, Bigelow and Boal still felt that the story to be told was in the work done on the ground by intelligence operatives that led to the raid. Zero Dark Thirty is based on the perspective of CIA operative Maya (Jessica Chastain), whose job it is to find terrorists. The Maya character is based on a real person.

Zero Dark Thirty was filmed digitally, using ARRI Alexa cameras. This aided Kathryn Bigelow’s style of shooting by eliminating the limitation of the length of film mags. Most scenes were shot with four cameras and some as many as six or seven at once. The equivalent of 1.8 million feet of film (about 320 hours) was recorded. The production ramped up in India with veteran film editor Dylan Tichenor (Lawless, There Will Be Blood) on board from the beginning.

According to Tichenor, “I was originally going to be on location for a short time with Kathryn and Mark and then return to the States to cut. We were getting about seven hours of footage a day and I like to watch everything. When they asked me to stay on for the entire India shoot, we set up a cutting room in Chandigarh, added assistants and Avids to stay up to camera while I was there. Then I rejoined my team in the States when the production moved to Jordan. A parallel cutting room had been set up in Los Angeles, where the same footage was loaded. There, the assistants could also help pull selects from my notes, to make going through the footage and preparing to cut more manageable.”

William Goldenberg (Argo, Transformers: Dark of the Moon) joined the team as the second editor in June, after wrapping up Argo. Goldenberg continued, “This film had a short post schedule and there was a lot of footage, so they asked me to help out. I started right after they filmed the Osama bin Laden raid scene, which was one of the last locations to be shot and the first part of the film that I edited. The assembled film without the raid was about three hours long. There was forty hours of material just for the raid and this took about three weeks to a month to cut. After I finished that, Dylan and I divided up the workload to refine and hone scenes, with each making adjustments on the other’s cuts. It’s very helpful to have a second pair of eyes in this situation, bouncing ideas back and forth.”

As an Alexa-based production, the team in India, Jordan and London included a three-man digital lab. Tichenor explained, “This film was recorded using ARRIRAW. With digital features in the past, my editorial team has been tasked to handle the digital dailies workload, too. This means the editors are also responsible for dealing with the color space workflow issues and that would have been too much to deal with on this film. So, the production set up a three-person team with a Codex Digilab and Colorfront software in another hotel room to process the ARRIRAW files. These were turned into color-corrected Avid DNxHD media for us and a duplicate set of files for the assistants in LA.” Director of photography Greig Fraser (Snow White and the Huntsman, Killing Them Softly) was able to check in on the digilab team and tweak the one-light color correction, as well as get Tichenor’s input for additional shots and coverage he might need to help tell the story.

Tichenor continued, “Kathryn likes to set up scenes and then capture the action with numerous cameras – almost like it’s a documentary. Then she’ll repeat that process several times for each scene. Four to seven cameras keep rolling all day, so there’s a lot of footage. Plus the camera operators are very good about picking up extra shots and b-roll, even though they aren’t an official second unit team. There are a lot of ways to tell the story and Kathryn gave us – the editors – a lot of freedom to build these scenes. The objective is to have a feeling of ‘you are there’ and I think that comes across in this film. Kathryn picks people she trusts and then lets them do their job. That’s great for an editor, but you really feel the responsibility, because it’s your decisions that will end up on the screen.”

Music for the film was also handled in an unusual manner. According to Goldenberg, “On most films a composer is contracted, you turn the locked picture over to him and he scores to that cut. Zero Dark Thirty didn’t start with a decision on a composer. Like most films, Dylan and I tried different pieces of temp music under some of the scenes that needed music. Of all the music we tried, the work of Alexandre Desplat (Argo, Moonrise Kingdom) fit the best. Kathryn and Mark showed Alexandre a cut to see if he might be interested. He loved it and found time in his schedule to score the film. Right away he wrote seven pieces that he felt were right. We cut those in to fit the scene lengths, which he then used as a template for his final score. It was a very collaborative process.”

Company 3 handled the digital intermediate mastering. Goldenberg explained, “The nighttime raid scene has a very unique look. It was very dark, as shot. In fact, we had to turn off all the lights in the cutting room to even see an image on the Avid monitors. Company 3 got involved early on by color timing about ten minutes of that footage, because we were eager and excited to see what the sequence could look like when it was color timed. When it came to the final DI, the film really took on another layer of richness. We’d been looking at the one-light images so long that it actually took a few screenings to enjoy the image that we’d been missing until then.”

Both Tichenor and Goldenberg have been cutting on Avid Media Composers for years, but this film didn’t tax the capabilities of the system. Tichenor said, “This isn’t an effects-heavy film. Some parts of the stealth helicopters are CG, but in the Avid, we mainly used effects for some monitor inserts, stabilization and split screens.” Goldenberg added, “One thing we both do is build our audio tracks as LCR [left, center, right channel] instead of the usual stereo. It takes a bit more work to build a dedicated center channel, but screenings sound much better.”

Avid has very good multicamera routines, so I questioned whether these were of value with the number of cameras being used. Tichenor replied, “We grouped clips, of course, but not actual multicam. You can switch cameras easily with a grouped clip. I actually did try for one second on a scene to see if I could use the multicam split screen camera display for watching dailies, but no, there was too much going on.” Goldenberg added, “There are some scenes that – although they were using multiple cameras – the operators would be shooting completely different things. For instance, actors in a car with one camera and other cameras grabbing local flavor and street life. So multicam or group clips were less useful in those cases.”

The film’s post schedule took about four months from the first full assembly until the final mix. Goldenberg said, “I don’t think you can say the cut was ever completely locked until the final mix, since we made minor adjustments even up to the end; but, there was a point at one of the internal screenings where we all knew the structure was in place. That was a big milestone, because from there, it was just a matter of tightening and honing. The story felt right.” Tichenor explained, “This movie actually came together surprisingly well in the time frame we had. Given the amount of footage, it’s the sort of film that could easily have been in post for two years. Fortunately with this script and team, it all came together. The scenes balanced out nicely and it has a good structure.”

For additional stories:

DV’s coverage of Zero Dark Thirty’s cinematography

An interview with William Goldenberg about Argo

FXGuide talks about the visual effects created for the film.

New York Times articles (here and here) about Zero Dark Thirty

Avid interview with William Goldenberg.

DP/30 interview with sound and picture editors on ZDT.

Originally written for DV magazine / Creative Planet Network

©2012, 2013 Oliver Peters