The Zero Theorem

Few filmmakers are as gifted as Terry Gilliam when it comes to setting a story inside a dystopian future. The Monty Python alum, who brought us Brazil and Twelve Monkeys, to name just two, is back with his newest, The Zero Theorem. It’s the story of Qohen Leth – played by Christoph Waltz (Django Unchained, Water for Elephants, Inglourious Basterds) – an eccentric computer programmer who has been tasked by his corporate employer with solving the Zero Theorem. This is a calculation that, if solved, might prove that the meaning of life is nothingness.

The story is set in a futuristic London, but carries many of Gilliam’s hallmarks, like a retro approach to the design of technology. Qohen works out of his home, a rundown former church. Part of the story takes Qohen into worlds of virtual reality, where he frequently interacts with Bainsley (Melanie Thierry), a webcam stripper whom he met at a party, but who may have been sent by his employer, Mancom, to distract him. The Zero Theorem is very reminiscent of Brazil and, in concept, also of The Prisoner, the 1960s-era television series. Gilliam explores themes of isolation versus loneliness, the pointlessness of using mathematical modeling to derive meaning, and privacy.

I recently had a Skype chat with Mick Audsley, who edited the film last year. Audsley is London-based, but is currently in Iceland, nearing completion of a director’s cut of the feature film Everest. This was his third Gilliam film, having previously edited Twelve Monkeys and The Imaginarium of Doctor Parnassus. Audsley explained, “I knew Terry before Twelve Monkeys and have always had a lot of admiration for him. This is my third film with Terry, as well as a short, and he’s an extraordinarily interesting director to work with. He still thinks in a graphic way, since he is both literally and figuratively an artist. He can do all of our jobs better than we can, but really values the input from other collaborators. It’s a bit like playing in a band, where everyone feeds off the input of the other band members.”

The long path to production

The film’s screenwriter, Pat Rushin, teaches creative writing at the University of Central Florida in Orlando. He originally submitted the script for The Zero Theorem to the television series Project Greenlight, where it made the top 250. The script ended up with the Zanuck Company and was offered to Gilliam in 2008, but initially other projects got in the way. It was revived in June 2012 with Gilliam at the helm. The script was very ambitious for a limited budget of under $10 million, so production took place in Romania over a 37-day period. In spite of the cost challenges, the film was shot on 35mm and includes 250 visual effects shots.

Audsley continued, “Nicola [Pecorini, director of photography] shot a number of tests with film, RED and ARRI ALEXA cameras. The decision was made to use film. It allowed him the latitude to place lights outside of the chapel set – Qohen’s home – and have light coming in through the windows to light up the interior. Kodak’s lab in Bucharest handled the processing and transfer and then sent Avid MXF files to London, where I was editing. Terry and the crew were able to view dailies in Romania and then we discussed these over the phone. Viewing dailies is a rarity these days with digitally-shot films and something I really miss. Seeing the dailies with the full company provides clarity, but I’m afraid it’s dying out as part of the filmmaking process.”

While editing in parallel with the production, Audsley didn’t upload any in-progress cuts for Gilliam to review. He said, “It’s hard for the director to concentrate on the edit while he’s still in production. As long as the coverage is there, it’s fine. Certainly Terry and Nicola have a supreme understanding of film grammar, so that’s not a problem. Terry knows to get those extra little shots that will make the edit better. So, I was editing largely on my own and had a first cut within about ten days of the production wrap. When Terry arrived in London, we first went over the film in twenty-minute reels. That took us about two to three weeks. Then we went through the whole film as one piece to get a sense of how it worked as a film.”

Making a cinematic story

As with most films, the “final draft” of the script occurs in the cutting room. Audsley continued, “The film as a written screenplay was very fluid, but when we viewed it as a completed film, it felt too linear and needed to be more cinematic – more out of order. We thought that it might be best to move the sentences around in a more interesting way. We did that quite easily and quickly. Thus, we took the strength of the writing and realized it in cinematic language. That’s one of the big benefits of the modern digital editing tools. The real film is about the relationship between Bainsley and Qohen and less about the world they inhabit. The challenge as filmmakers in the cutting room is to find that truth.”

Working with visual effects presents its own editorial challenge. “As an editor, you have to evaluate the weight and importance of the plate – the base element for a visual effect – before committing to the effect. From the point-of-view of cost, you can’t keep undoing shots that have teams of artists working on them. You have to ensure that the timing is exactly right before turning over the elements for visual effects development. The biggest, single visual challenge is making Terry’s world, which is visually very rich. In the first reel, we see a futuristic London, with moving billboards. These shots were very complex and required a lot of temp effects that I layered up in the timeline. It’s one of the more complex sequences I’ve built in the Avid, with both visual and audio elements interacting. You have to decide how much you can digest and that’s an open conversation with the director and effects artists.”

The post schedule lasted about twenty weeks, ending with a mix in June 2013. Part of that time was tied up in waiting for the completion of visual effects. Since there was no budget for official audience screenings, the editorial team was not tasked with creating temp mixes and preview versions before finishing the film. Audsley said, “The first cut was not overly long. Terry is good in his planning. One big change that we made during the edit was to the film’s ending. As written, Qohen ends up in the real world for a nice, tidy ending. We opted to end the film earlier, for a more ambiguous ending. In the final cut the film ends while he’s still in a virtual reality world. It provides a more cerebral versus practical ending for the viewer.”

Cutting style 

Audsley characterizes his cutting style as “old school”. He explained, “I come from a Moviola background, so I like to leave my cut as bare as possible, with few temp sound effects or music cues. I’ll only add what’s needed to help you understand the story. Since we weren’t obliged on this film to do temp mixes for screenings, I was able to keep the cut sparse. This lets you really focus on the cut and know if the film is working or not. If it does, then sound effects and music will only make it better. Often a rough cut will have temp music and people have trouble figuring out why a film isn’t working. The music may mask an issue or, in fact, it might simply be that the wrong temp music was used. On The Zero Theorem, George Fenton, our composer, gave us representative pieces late in the process that he’d written for scenes.” Andre Jacquemin was the sound designer who worked in parallel to Audsley’s cut and the two developed an interactive process. Audsley explained, “Sometimes sound would need to breathe more, so I’d open a scene up a bit. We had a nice back-and-forth in how we worked.”

Audsley edited the film using Avid Media Composer version 5 connected to an Avid Unity shared storage system. This linked him to another Avid workstation run by his first assistant editor, Pani Ahmadi-Moore. He’s since upgraded to version 7 software and Avid ISIS shared storage. Audsley said, “I work the Avid pretty much like I worked when I used the Moviola and cut on film. Footage is grouped into bins for each scene. As I edit, I cut the film into reels and then use version numbers as I duplicate sequences to make changes. I keep a daily handwritten log about what’s done each day. The trick is to be fastidious and organized. Pani handles the preparation and asset management so that I can concentrate on the edit.”

Audsley continued, “Terry’s films are very much a family type of business. It’s a family of people who know each other. Terry is supremely in control of his films, but he’s also secure in sharing with his filmmaking family. We are open to discuss all aspects of the film. The cutting room has to be a safe place for a director, but it’s the hub of all the post activity, so everyone has to feel free about voicing their opinions.”

Much of what the editor does proceeds in isolation. The Zero Theorem provided a certain ironic resonance for Audsley, who commented, “At the start, we see a guy sitting naked in front of a computer. His life is harnessed in manipulating something on screen, and that is something I can relate to as a film editor! I think it’s very much a document of our time, about the notion that in this world of communication, there’s a strong aspect of isolation. All the communication in the world does not necessarily connect you spiritually.” The Zero Theorem is scheduled to open for limited US distribution in September.

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Cold In July

Jim Mickle started his career, like so many others, as a freelance editor in New York, working on commercials and corporate videos. Bitten by the filmmaking bug, Mickle has gone on to successfully direct four indie feature films, including his latest, Cold in July. Like his previous film, We Are What We Are, it had a successful premiere at the Sundance Film Festival.

Cold In July, which is based on a novel by Joe R. Lansdale, is a noir crime drama set in 1980s East Texas. It stars Michael C. Hall (Dexter), Sam Shepard (Out of the Furnace, Killing Them Softly) and Don Johnson (Django Unchained, Miami Vice). Awakened in the middle of the night, small-town family man Richard Dane (Hall) kills a burglar in his house. Dane soon fears for his family’s safety when the burglar’s ex-con father, Ben (Shepard), comes to town, bent on revenge. However, the story takes a twist into a world of corruption and violence. Add Jim Bob (Johnson) to this mix, as a pig-farming private eye, and you have an interesting trio of characters.

According to Jim Mickle, Cold In July was on a fast-track schedule once it finally got underway. The script was optioned in 2007, but production didn’t start until 2013. This included eight weeks of pre-production beginning in May and principal photography starting in July (for five weeks) with a wrap in September. The picture was “locked” shortly after Thanksgiving. Along with Mickle, John Paul Horstmann (Killing Them Softly) shared editing duties.

I asked Mickle how it was to work with another editor. He explained, “I edited my last three films by myself, but with this schedule, post was wedged between promoting We Are What We Are and the Sundance deadline. I really didn’t have time to walk away from it and view it with fresh eyes. I decided to bring John Paul on board to help. This was the first time I’ve worked with another editor. John Paul was cutting while I was shooting and edited the initial assembly, which was finished about a week before the Sundance submission deadline. I got involved in the edit about mid-October. At that point, we went back to tighten and smooth out the film. We would each work on scenes and then switch and take a pass at each other’s work.”

Mickle continued, “The version that we submitted to Sundance was two-and-a-half hours long. John Paul and I spent about three weeks polishing and were ready to get feedback from the outside. We held a screening for 20 to 25 people and afterwards asked questions about whether the plot points were coherent to them. It’s always good for me, as the director, to see the film with an audience. You get to see it fresh – with new eyes – and that helps you to trim and condense sections of the film. For example, in the early versions of the script, it generally felt like the middle section of the film lost tension. So, we had added a sub-plot element into the script to build up the mystery. This was a car of agents tailing our hero that we could always reuse, as needed. When we held the screening, it felt like that stuff was completely unnecessary and simply put on top of the rest of the film. The next day we sliced it all out, which cut 10 minutes out of the film. Then it finally felt like everything clicked.”

The director-editor relationship always presents an interesting dynamic, since the editor can be objective in cutting out material that may have cost the director a lot of time and effort on set to capture. Normally, the editor has no emotional investment in production of the footage. So how did Jim Mickle, as the editor, treat his own work as the director? Mickle answered, “As an editor, I’m more ruthless on myself as the director. John Paul was less quick to give up on scenes than I was. There are things I didn’t think twice about losing if they didn’t work, but he’d stay late to fix things and often have a solution the next day. I shoot with plenty of coverage these days, so I’ll build a scene and then rework it. I love the edit. It’s the first time you really feel comfortable and can craft the story. On the set, things happen so quickly that you always have to be reactive – working and thinking on your feet.”

Although Mickle had edited We Are What We Are with Adobe Premiere Pro, the decision was made to shift back to Apple Final Cut Pro 7 for the edit of Cold In July. Mickle explained, “As a freelance editor in New York, I was very comfortable with Final Cut, but I’m also an After Effects user. When doing a lot of visual effects, it really feels tedious to go back and forth between Final Cut and After Effects. The previous film was shot with RED cameras and I used a raw workflow in post, cutting natively with Premiere Pro. I really loved the experience – working with raw files and Dynamic Link between Premiere and After Effects. When we hired John Paul as the primary editor on the film, we opted to go back to Final Cut, because that is what he is most comfortable with. That would get the job done in the most expedient fashion, since he was handling the bulk of the editing.”

“We shot with RED cameras again, but the footage was transcoded to ProRes for the edit. I did find the process to be frustrating, though, because I really like the fluidness of using the raw files in Premiere. I like the editing process to live and breathe and not be delineated. Having access to the raw files lets me tweak the color correction, which helps me to get an idea of how a scene is shaping up. I get the composer involved early, so we have a lot of the real music in place as a guide while we edit. This way, your cutting style – and the post process in general – are more interactive. In any case, the ProRes files were only used to get us to the locked cut. Our final DI was handled by Light Iron in New York and they conformed the film from the original RED files for a 2K finish.”

The final screening with mix, color correction and all visual effects occurred just before Sundance. There the producers struck a distribution deal with IFC Films. Cold In July started its domestic release in May of this year.

Originally written for Digital Video magazine/CreativePlanetNetwork.

©2014 Oliver Peters

Filmmaking Pointers

If you want to be a good indie filmmaker, you have to understand some of the basic principles of telling interesting visual stories and driving the audience’s emotions. These six ideas transcend individual components of filmmaking, like cinematography or editing. Rather, they are concepts that every budding director should understand and weave into the entire structure of how a film is approached.

1. Get into the story quickly. Films are not books and don’t always need a lengthy backstory to establish characters and plot. Films are a journey and it’s best to get the characters on that road as soon as possible. Most scripts are structured as three-act plays, so with a typical 90-100 minute running time, you should be through act one at roughly one third of the way into the film. If not, you’ll lose the interest of the audience. If you are 20 minutes into the film and you are still establishing the history of the characters without having advanced the story, then look for places to start cutting.

Sometimes this isn’t easy to tell and an extended start may indeed work well, because it does advance the story. One example is There Will Be Blood. The first reel is a tour de force of editing, in which editor Dylan Tichenor builds a largely dialogue-free montage that quickly takes the audience through the first part of Daniel Plainview’s (Daniel Day-Lewis) history in order to bring the audience up to the film’s present day. It’s absolutely instrumental to the rest of the film.

2. Parallel story lines. A parallel story structure is a great device to show the audience what’s happening to different characters at different locations, but at more or less the same time. With most scripts, parallel actions are designed to eventually converge as related or often unrelated characters ultimately end up in the same place for a shared plot. An interesting take on this is Cloud Atlas, in which an ensemble cast plays different characters spread across six different eras and locations – past, present and future.

The editing style pulled off by Alexander Berner is quite a bit different than traditional parallel story editing. A set of characters might start a scene in one era. Halfway through the scene – through some type of abrupt cut, such as walking through a door – the characters, location and eras shift to somewhere else. However, the story and the editing are such that you clearly understand how the story continues for the first half of that scene, as well as how it led into the second half. This is all without explicitly shooting those parts of each scene. Scene A/era A informs your understanding of scene B/era B and vice versa.

3. Understand camera movement. When a camera zooms, moves or is used in a shaky, handheld manner, this elicits certain emotions from the audience. As a director or DP, you need to understand when each style is appropriate and when it can be overdone. Zooming into a close-up while an actor delivers a line should be done intentionally. It tells the audience, “Listen up. This is important.” If you shoot handheld footage, like most of the Bourne series, it drives a level of documentary-style, frenetic action that should be in keeping with the concept.

The TV series NYPD Blue is credited with introducing TV audiences to the “shaky-cam” style of camera work. Many pros thought it was overdone, with movement often being introduced in an unmotivated fashion. Yet, the original Law & Order series also made extensive use of handheld photography. As this was more in keeping with a subtle documentary style, few complained about its use on that show.

4. Color palettes and art direction. Many new filmmakers often feel that you can get any look you want through color grading. The reality is that it all starts with art direction. Grading should enhance what’s there, not manufacture something that isn’t. To get that “orange & teal” look, you need to have a set and wardrobe that has some greens and blues in it. To get a warm, earthy look, you need a set and wardrobe with browns and reds.

This even extends to black & white films. To get the right contrast and tonal values in black & white, you often have to use set/wardrobe color choices that are not ideal in a color world. That’s because different colors carry differing luminance and midrange values, which becomes very obvious once you eliminate the color information from the picture. Make sure you take that into account if you plan to produce a black & white film.
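
To see how different those gray values can be, here’s a quick sketch using the standard Rec. 709 luma weights (an illustration of the principle, not any particular film pipeline):

```python
# Rec. 709 luma weights show why colors of similar apparent intensity
# can land on very different gray values in black & white.
def luma(r, g, b):
    """Return the Rec. 709 luma (0.0-1.0) of an RGB triple."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)),
                  ("blue", (0, 0, 1)), ("teal", (0, 0.5, 0.5))]:
    print(f"pure {name}: {luma(*rgb):.2f}")

# Pure green renders at roughly 72% gray, while pure blue lands near
# 7% -- wardrobe picks that read alike in color can diverge in B&W.
```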

5. Score versus sound design. Music should enhance and underscore a film, but it does not have to be wall-to-wall. Some films, like American Hustle and The Wolf of Wall Street, are driven by a score of popular tunes. Others are composed with an original score. However, often the “score” consists of sound design elements and simple musical drones designed to heighten tension and otherwise manipulate emotion. The absence of score in a scene can achieve the same effect. Sound effects elements with stark simplicity may have more impact on the audience than music. Learn when to use one or the other or both. Often less is more.

6. Don’t tell too much story. Not every film requires extensive exposition. As I said at the top, a film is not a book. Visual cues are as important as the spoken word and will often tell the audience a lot more in shorthand than pages and pages of script. The audience is interested in the journey your film’s characters are on and frequently needs very little backstory to get an understanding of the characters. Don’t shy away from shooting enough of that sort of detail, but also don’t be afraid to cut it out when it becomes superfluous.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached – the point at which their clients demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite a lot different than working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.
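
As a quick sanity check on those numbers, here’s the reframing headroom that a 4096 x 2304 acquisition leaves for each deliverable (a back-of-the-envelope sketch, not tied to any particular camera):

```python
# Reframing headroom when acquiring at 4096 x 2304 (16:9) and
# delivering either a DCI 4K or a UHD master.
ACQ_W, ACQ_H = 4096, 2304
deliverables = {"DCI 4K": (4096, 2160), "UHD": (3840, 2160)}

for name, (w, h) in deliverables.items():
    print(f"{name}: {ACQ_W - w} px horizontal, {ACQ_H - h} px vertical "
          "of repositioning room")

# DCI 4K: 0 px horizontal, 144 px vertical of repositioning room
# UHD: 256 px horizontal, 144 px vertical of repositioning room
```

In other words, the taller acquisition frame buys you vertical slack for both masters and generous horizontal slack for the UHD version.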

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is theoretically only 2x better. That’s because resolution is measured along the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

That brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs like Adobe Premiere Pro CC and Apple Final Cut Pro X make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps the quality highest by avoiding further decompression/recompression cycles, as well as inconsistencies among the various debayering methods used.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted this to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
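
Those image-sequence numbers are easy to sanity check, since uncompressed frame sizes are simple arithmetic. Here’s a rough sketch based on this article’s 4096 x 2304 example (real DPX/TIFF files add small headers, but the pixel payload dominates):

```python
# Back-of-the-envelope storage math for a 4096 x 2304, 23.976fps
# uncompressed image sequence.
W, H, FPS = 4096, 2304, 23.976

# 10-bit DPX packs one RGB pixel into a 32-bit word -> 4 bytes/pixel.
# 8-bit TIFF stores 3 bytes per RGB pixel.
formats = {"DPX (10-bit)": W * H * 4, "TIFF (8-bit)": W * H * 3}

for label, frame_bytes in formats.items():
    per_min_gb = frame_bytes * FPS * 60 / 1e9
    print(f"{label}: {frame_bytes / 1e6:.0f} MB/frame, "
          f"{per_min_gb:.0f} GB/min, "
          f"{per_min_gb * 95 / 1000:.1f} TB per 95-minute feature")

# DPX (10-bit): 38 MB/frame, 54 GB/min, 5.2 TB per 95-minute feature
# TIFF (8-bit): 28 MB/frame, 41 GB/min, 3.9 TB per 95-minute feature
```

The per-frame figures land right on the 38MB and 28MB noted above, and the per-minute numbers make it obvious why image-sequence pipelines demand serious drive arrays.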

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x that frame area, so using the ProRes HQ example above, a typical feature would be about 30x that 3-minute clip. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes 4444 – never mind uncompressed – your storage requirements just increased by an additional 16-38TB. Mind you, this is all as 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
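
Working backwards from those record times gives a feel for the on-set data burden. Here’s a rough estimate based on the figures above (the 300 MB/sec copy speed is a hypothetical example):

```python
# Two 512GB SSDs hold about 62 minutes of Canon 4K raw at 24fps.
capacity_gb = 2 * 512
record_min = 62

rate_gb_per_min = capacity_gb / record_min
print(f"~{rate_gb_per_min:.1f} GB/min "
      f"(~{rate_gb_per_min * 1000 / 60:.0f} MB/sec)")

# Offloading a full 1TB load at a sustained 300 MB/sec:
offload_min = capacity_gb * 1000 / 300 / 60
print(f"~{offload_min:.0f} minutes to copy one pair of SSDs")
```

That works out to roughly 16.5 GB per minute of footage and nearly an hour per copy – per destination – which is why on-set media management becomes a real bottleneck.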

Naturally there are ways to make 4K post efficient and less painful than it otherwise needs to be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Pablo Rios and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage, either to a straight Rec709 conversion or with a custom look applied. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use this with images that were already encoded with vibrant Rec709 colors.
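
For the curious, the math behind those three controls is the standard ASC CDL transform, which is compact enough to sketch in a few lines. This is the generic CDL formula – ARRI doesn’t publish the tool’s internals – so treat it as an illustration rather than the Amira Color Tool’s exact implementation:

```python
# Generic ASC CDL: out = (in * slope + offset) ** power, per channel,
# followed by a saturation adjustment around Rec709 luma.
def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    graded = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)   # slope (gain), offset (lift), clamp
        graded.append(v ** p)     # power (gamma)
    r, g, b = graded
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(luma + saturation * (c - luma) for c in graded)

# Example: a gentle gain boost, slight lift, mild gamma tweak and a
# touch of desaturation applied to one pixel value.
print(apply_cdl((0.40, 0.42, 0.38),
                slope=(1.1, 1.1, 1.1),
                offset=(0.02, 0.02, 0.02),
                power=(0.95, 0.95, 0.95),
                saturation=0.9))
```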

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
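
If you’re curious what’s inside the exported file, the .cube format is plain text and simple to generate or inspect. Here’s a minimal sketch that writes an identity LUT (one that leaves the image unchanged); a real export would bake the Rec709 conversion and look into the table values:

```python
# Write an identity 3D LUT in the plain-text .cube format.
SIZE = 17  # points per axis; 17, 33 and 65 are common grid sizes

with open("identity.cube", "w") as f:
    f.write('TITLE "Identity"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    # By convention, the red axis varies fastest and blue slowest.
    for b in range(SIZE):
        for g in range(SIZE):
            for r in range(SIZE):
                f.write(f"{r / (SIZE - 1):.6f} "
                        f"{g / (SIZE - 1):.6f} "
                        f"{b / (SIZE - 1):.6f}\n")
```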

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

Using FCP X with Adobe CC

While the “battle” rages on between the proponents of using either Apple Final Cut Pro X or Adobe Premiere Pro CC as the main edit axe, there is less disagreement about the other Adobe applications. Certainly many users like Motion, Aperture and Logic, but it’s pretty clear that most editors favor Adobe solutions over the alternatives. I have encountered very few power users of Motion, as compared with After Effects wizards, and few graphic designers can get by without touching Illustrator or Photoshop. This post isn’t intended to change anyone’s opinion, but rather to offer a few pointers on how to productively use some of the Adobe Creative Cloud (or CS6) applications to complement your FCP X workflows.

Photoshop

For many editors, Adobe Photoshop is the title tool of choice. FCP X has some nice text tools, but Photoshop is significantly better – especially for logo creation. When you import a layered Photoshop file into FCP X, it comes in as a special layered graphics file. Layers can be adjusted, animated or disabled when you “open in timeline”. Photoshop layer effects, like a drop shadow, glow or emboss, do not show up correctly inside FCP X. If you drop the imported Photoshop file onto the timeline, it becomes a self-contained title clip. Although you cannot “open in editor” to modify the file, there is a workaround.

To re-edit the Photoshop file in Adobe Photoshop, select the clip in FCP X and “reveal in Finder”. From the Finder window open the file in Photoshop. Now you can make any changes you like. Once saved, the changes are updated in FCP X. There is one caveat that I’ve noticed. All changes that you make have to be made within the existing layers. New, additional layers do not update back inside FCP X. However, if you created layer effects and then merge that layer to bake in the effects, the update is successful in FCP X and the effects become visible.

This process is very imperfect because of FCP X’s interpretation of the Photoshop files. For example, layer alignment that matches in Photoshop may be misaligned in FCP X. All layers must have some content. You cannot create blank layers and later add content into them. When you do this, the updates will not be recognized in FCP X.

Audition

Sound mixing is still a weak link in Final Cut Pro X. All mixing is clip-based, without a proper mixing pane like most other NLEs have. There are methods (X2Pro Audio Convert) to send the timeline audio to Pro Tools, but many editors don’t use Pro Tools. Likewise, sending an FCPXML to Logic X works better than before, but why buy an extra application if you already own Adobe Audition? I tested a few options, like using X2Pro to get an AAF into Premiere Pro and then into Audition, but none of this worked. What does work is using XML.

First, duplicate the sequence and work from the copy for safety. Review your edited sequence in FCP X and detach/delete any unused audio elements, such as muted audio associated with connected clips that are used as video-only B-roll. Next, break apart any compound clips. I recommend detaching the desired audio, but that’s optional. Now export an FCPXML for that sequence. Open the FCPXML in the Xto7 application and save the audio tracks as a new XML file.

Launch Audition and import the new XML file. This will populate your multitrack mixing window with the sequence and clips. At this stage, all clips that were inside FCP X Libraries will be offline. Select these clips and use the “link media” command. The good news is that the dialogue window will allow you to see inside the Library file and let you navigate to the correct file. Unfortunately, the correct name match will not be bolded. Since these files are typically date/time-stamped, make sure to read the names carefully when you select the first clip. The rest will relink automatically. Note that level changes and fades that were made in FCP X do not come across into Audition.

Now you can mix the session. When done, export a stereo (or other) mixed master file. Import that into FCP X and attach as a connected clip to the head of your sequence. Make sure to delete, disable (make “invisible”) or mute all previous audio.

After Effects

For many editors, Adobe After Effects is the finishing tool of choice – not just for graphics and effects, but also color correction and other embellishments. Thanks to the free ClipExporter application, it’s easy to go from FCP X to After Effects.

Similar to the Audition step, I recommend detaching/deleting all audio. Some folks like to have audio inside After Effects, but most of the time it’s in the way for me. Break apart all compound clips. You might as well remove any FCP X titles and effects filters/transitions, since these don’t translate into After Effects. Lastly, I recommend selecting all connected clips and using the “overwrite to storyline” command. This will place everything onto the primary storyline and result in a straightforward cascade of layers once inside After Effects.

Export an FCPXML file for the sequence. Open ClipExporter and select the AE conversion tab. Import the FCPXML file. An important feature is that ClipExporter supports FCP X’s retiming function, but only for AE exports. Now run ClipExporter and save the resultant After Effects script file.

Launch Adobe After Effects and from the File/Scripts pulldown menu, select the saved script file created by ClipExporter. The script will run and load the clips and your sequence as a new composition. Each individual shot is stashed into its own mini-composition and these are then placed into a stack of layers for the timeline of the main AE composition. Should you need to trim/slip the media for a shot, all available media can be accessed and adjusted within the shot’s individual mini-comp. If a shot has been retimed in FCP X, those adjustments also appear in the mini-comp and not in the main composition.

Build your effects and render a flattened file with everything baked in. Import that file into FCP X and add it as a connected clip to the top of your sequence. Disable all other video clips.

©2014 Oliver Peters

Why film editors love Avid Media Composer

The editing of feature films is a small niche of the overall market for editing software, yet companies continue to highlight features edited with their software as a form of aspirational marketing to attract new users. Avid Technology has had plenty of competition since the start of the company, but the majority of mainstream feature films are still edited using Avid Media Composer software. Lightworks and Final Cut Pro “legacy” have their champions (soon to be joined by FCP X and Premiere Pro CC), but Media Composer has held the lead – at least in North America – as the preferred software for feature film editors.

Detractors of Avid like to characterize these film editors as luddites who are resistant to change. They like to suggest that the interface is stodgy and rigid and just not modern enough. I would suggest that change for change’s sake is not always a good thing. Originally Final Cut Pro got a foothold, because it did well with file-based workflows and was very cheap compared to turnkey workstations running Media Composer or Film Composer. Those days are long gone, so trying to make the argument based on cost alone doesn’t go very far.

Editing speed is gained through familiarity and muscle memory. When you hire a top-notch feature editor, you aren’t hiring them for their software prowess. Instead, you are hiring them for their mind, ideas and creativity. As such, there is no benefit to one of these editors in changing to another piece of software, just because it’s the cool kid on the block. Most know how to manipulate the software tools so well that thinking about what to do in the interface just disappears.

Change is attractive to new users, with no preconceived preferences. FCP X acolytes like to say how much easier it is to teach new users FCP X than a track-based system, like FCP 7, Premiere Pro or Media Composer. As someone who’s taught film student editing workshops, my opinion is that it simply isn’t true. It’s all in what you teach, how you teach it and what you expect them to accomplish. In fact, I’ve had many who are eager to learn Media Composer, precisely because they know that it continues to be the “gold standard” for feature film editing software.

There are some concrete reasons why film editors prefer Media Composer. For many, it’s because Avid was their first NLE and it felt logical to them. For others, it’s because Avid has historically incorporated a lot of user input into the product. Here are a dozen factors that I believe keep the equation in favor of Avid Media Composer.

1. Film metadata – At the start of the NLE era, Avid was an offline editing system, designed to do the creative cut electronically. The actual final cut for release was done by physically conforming (cutting) camera negative to match the rough cut. Avid built in tools to cut at 24fps and to track the metadata back to film for frame-accurate lists that went to the lab and the negative cutter. Although negative cutting is all but dead today, this core tracking of metadata benefits modern versions of Media Composer and is still applicable to file-based workflows.

2. Trimming – Avid editors rave about the trim mode in Media Composer. It continues to be the best there is and has been augmented by Smart Tools for FCP-style contextual timeline editing. Many editors spend a lot of time trimming and nothing matches Media Composer.

3. Logical layout – When Avid started out, they sought the direct input from many working editors and this helped the interface evolve into something totally logical. For example, the keyboard position of JKL (transport controls) or mark/clear/go-to in/out is based on hand positions when placed on the keyboard and not an arbitrary choice by a software designer. If you look at the default keyboard map in Media Composer, there are fewer layers than the other apps. I would argue that Media Composer’s inherent design makes more layers unnecessary. In fact, more layers become more confusing.

4. Script integration – Early on, Avid’s designers looked at how an actual written script might be used within the software. This is completely different than simply attaching copied-and-pasted text to a clip. With Media Composer, you can set up the bin with the actual script pages and link clips to the text of the dialogue. This is included with the base software as a manual process, but if you want to automate the linking, then the optional ScriptSync add-on will lighten the load. A second dialogue-driven option, PhraseFind, is great for documentary filmmakers. Some editors never use these features, but those that do wouldn’t want to work any other way.

5. Built-in effect tools – The editorial team on most features gets involved in creating temporary visual effects. These are placeholders and style ideas meant to help the director and others visualize the effects. Sometimes these are editorial tricks, like an invisible split screen to combine different takes. The actual, final effect is done by the visual effects compositors. Avid’s internal tools, however, allow a talented film editor or assistant editor to temp in an effect at a very high quality level. While Media Composer is certainly not a finishing tool equal to Avid DS (now EOL’ed) or Autodesk Smoke, the internal tools surpass all other desktop offline editors. FCP X requires third-party plug-ins or Motion 5 and Premiere Pro CC requires After Effects. With Media Composer, you have built-in rotosplining, tracking, one of the better keyers, stabilization and more. All without leaving the primary editing interface.

6. Surround mixing – Often film editors will build their rough cut with LCR (left-center-right) or full 5.1 surround panning. This helps to give a better idea of the theatrical mix and preps a sequence for early screenings with a preview or focus group audience. Other systems let you work in surround, too, but none as easily as with Media Composer, assuming you have the right i/o hardware.

7. Project sharing – You simply cannot share the EXACT SAME project file among simultaneous, collaborative users with any other editing application in the same way as you can with Media Composer and Avid’s Unity or ISIS shared storage networks. Not every user needs that and there are certainly functional alternatives for FCP and Premiere Pro, as well as Media Composer. For film editors, the beauty of the Avid approach is that everyone on the team can be looking at the exact same project. When changes are made to a sequence for a scene and the associated bin is saved, that updated info ripples to everyone else’s view. Large films may have as many as 15 to 20 connected users, once you tally editors, assistants and visual effects editors. This function is hard to duplicate with any alternative software.

8. Cross-platform and easy authorization – Media Composer runs under both Mac OS X and Windows on a wide range of machines. This makes it easy for editors on location to shift between a desktop workstation and a laptop, which may be of differing OS platforms. In the past, software licensing was via a USB license key (dongle), but newer versions use software authorization to activate the application. The software may be installed on any number of machines, with one active and authorized at any given time. De-activation and re-activation only takes a few seconds if you are connected to the Internet.

9. Portability of projects and media – Thanks to Avid’s solid media management with internal media databases, it’s easy to move drives between machines with no linking issues. Keep a common and updated project file on two machines and you can easily move a media drive back and forth between them. The software will instantly find all the correct media when Media Composer is launched. In addition, Avid has held one of the best track records for project interchange among older and newer versions.

10. Interoperability with lists – Feature film workflows are all about “playing well with others”. This means industry-standard list formats, like EDLs, AAFs and OMFs. I wish Media Composer would also natively read and write XMLs, but that’s a moving target and generally not as widely accepted in the facilities that do studio-level work. The other standards are all there and built into the tools. So sending lists to a colorist or audio editor/mixer requires no special third-party software.

11. Flexible media architecture – Avid has moved forward from the days when it only handled proprietary Avid media formats. Thanks to AMA, many native camera file formats and QuickTime codecs are supported. Through a licensing deal with Apple, even ProRes is natively supported, including writing ProRes MXF files on Apple workstations. This gives Media Composer wider support for professional codecs than nearly every other editing application. On top of that, you still have Avid’s own DNxHD, one of the best compression schemes currently in professional film and video use.

12. Robust – In most cases, Media Composer is a rock-solid application, with minimal hiccups and crashes. Avid editors have become very used to reliability and will definitely pipe up when that doesn’t happen. Generally Avid editors do not experience the sorts of RAM leaks that seem to plague other editing software.

For the sake of full disclosure, I am a member of one of the advisory councils that are part of the Avid Customer Association. Obviously, you might feel that this taints what I’ve written above. It does not.

I’ve edited with Avid software since the early 90s, but I’ve edited for years with other applications, too. Most of the last decade leading up to Apple’s launch of Final Cut Pro X was spent on FCP “legacy”. The last couple of years have been spent trying to work the kinks out of FCP X. I’ve cut feature films on Media Composer, FCP 4-7, FCP X and even a Sony BVE-9100. I take a critical view towards all of them and go with what is best for the project.

Even though I don’t use many of the Avid-specific features mentioned above, like ScriptSync, I do see the strengths and why other film editors wouldn’t want to use anything else. My main goal here was to answer the question I hear so often, which is, “Why do they still use Avid?” I hope I’ve been able to offer a few answers.

For some more thoughts, take a look at these videos about DigitalFilm Tree’s transition from FCP to Media Composer and Alan Bell’s approach to using Avid products for cutting films like “The Hunger Games: Catching Fire”.

©2014 Oliver Peters