Cold In July

Jim Mickle started his career as a freelance editor in New York, working on commercials and corporate videos, like so many others. Bitten by the filmmaking bug, Mickle has gone on to successfully direct four indie feature films, including his latest, Cold in July. Like his previous film, We Are What We Are, it premiered successfully at the Sundance Film Festival.

Cold In July, which is based on a novel by Joe R. Lansdale, is a noir crime drama set in 1980s East Texas. It stars Michael C. Hall (Dexter), Sam Shepard (Out of the Furnace, Killing Them Softly) and Don Johnson (Django Unchained, Miami Vice). Awakened in the middle of the night, small town family man Richard Dane (Hall) kills a burglar in his house. Dane soon fears for his family’s safety when the burglar’s ex-con father, Ben (Shepard), comes to town, bent on revenge. However, the story takes a twist into a world of corruption and violence. Add Jim Bob (Johnson) to this mix, as a pig-farming private eye, and you have an interesting trio of characters.

According to Jim Mickle, Cold In July was on a fast-track schedule. The script was optioned in 2007, but production didn’t start until 2013. This included eight weeks of pre-production beginning in May and principal photography starting in July (for five weeks) with a wrap in September. The picture was “locked” shortly after Thanksgiving. Along with Mickle, John Paul Horstmann (Killing Them Softly) shared editing duties.

I asked Mickle how it was to work with another editor. He explained, “I edited my last three films by myself, but with this schedule, post was wedged between promoting We Are What We Are and the Sundance deadline. I really didn’t have time to walk away from it and view it with fresh eyes. I decided to bring John Paul on board to help. This was the first time I’ve worked with another editor. John Paul was cutting while I was shooting and edited the initial assembly, which was finished about a week before the Sundance submission deadline. I got involved in the edit about mid-October. At that point, we went back to tighten and smooth out the film. We would each work on scenes and then switch and take a pass at each other’s work.”

Mickle continued, “The version that we submitted to Sundance was two-and-a-half hours long. John Paul and I spent about three weeks polishing and were ready to get feedback from the outside. We held a screening for 20 to 25 people and afterwards asked questions about whether the plot points were coherent to them. It’s always good for me, as the director, to see the film with an audience. You get to see it fresh – with new eyes – and that helps you to trim and condense sections of the film. For example, in the early versions of the script, it generally felt like the middle section of the film lost tension. So, we had added a sub-plot element into the script to build up the mystery. This was a car of agents tailing our hero that we could always reuse, as needed. When we held the screening, it felt like that stuff was completely unnecessary and simply put on top of the rest of the film. The next day we sliced it all out, which cut 10 minutes out of the film. Then it finally felt like everything clicked.”

The director-editor relationship always presents an interesting dynamic, since the editor can be objective in cutting out material that may have cost the director a lot of time and effort on set to capture. Normally, the editor has no emotional investment in the production of the footage. So, how did Jim Mickle, as the editor, treat his own work as the director? Mickle answered, “As an editor, I’m more ruthless on myself as the director. John Paul was less quick to give up on scenes than I. There are things I didn’t think twice about losing if they didn’t work, but he’d stay late to fix things and often have a solution the next day. I shoot with plenty of coverage these days, so I’ll build a scene and then rework it. I love the edit. It’s the first time you really feel comfortable and can craft the story. On the set, things happen so quickly, that you always have to be reactive – working and thinking on your feet.”

Although Mickle had edited We Are What We Are with Adobe Premiere Pro, the decision was made to shift back to Apple Final Cut Pro 7 for the edit of Cold In July. Mickle explained, “As a freelance editor in New York, I was very comfortable with Final Cut, but I’m also an After Effects user. When doing a lot of visual effects, it really feels tedious to go back and forth between Final Cut and After Effects. The previous film was shot with RED cameras and I used a raw workflow in post, cutting natively with Premiere Pro. I really loved the experience – working with raw files and Dynamic Link between Premiere and After Effects. When we hired John Paul as the primary editor on the film, we opted to go back to Final Cut, because that is what he is most comfortable with. That would get the job done in the most expedient fashion, since he was handling the bulk of the editing.”

“We shot with RED cameras again, but the footage was transcoded to ProRes for the edit. I did find the process to be frustrating, though, because I really like the fluidness of using the raw files in Premiere. I like the editing process to live and breathe and not be delineated. Having access to the raw files lets me tweak the color correction, which helps me to get an idea of how a scene is shaping up. I get the composer involved early, so we have a lot of the real music in place as a guide while we edit. This way, your cutting style – and the post process in general – are more interactive. In any case, the ProRes files were only used to get us to the locked cut. Our final DI was handled by Light Iron in New York and they conformed the film from the original RED files for a 2K finish.”

The final screening with mix, color correction and all visual effects occurred just before Sundance. There the producers struck a distribution deal with IFC Films. Cold In July started its domestic release in May of this year.

Originally written for Digital Video magazine/CreativePlanetNetwork.

©2014 Oliver Peters

Filmmaking Pointers

If you want to be a good indie filmmaker, you have to understand some of the basic principles of telling interesting visual stories and driving the audience’s emotions. These six ideas transcend individual components of filmmaking, like cinematography or editing. Rather, they are concepts that every budding director should understand and weave into the entire structure of how a film is approached.

1. Get into the story quickly. Films are not books and don’t always need a lengthy backstory to establish characters and plot. Films are a journey and it’s best to get the characters on that road as soon as possible. Most scripts are structured as three-act plays, so with a typical 90-100 minute running time, you should be through act one at roughly one third of the way into the film. If not, you’ll lose the interest of the audience. If you are 20 minutes into the film and you are still establishing the history of the characters without having advanced the story, then look for places to start cutting.

Sometimes this isn’t easy to tell and an extended start may indeed work well, because it does advance the story. One example is There Will Be Blood. The first reel is a tour de force of editing, in which editor Dylan Tichenor builds a largely dialogue-free montage that quickly takes the audience through the first part of Daniel Plainview’s (Daniel Day-Lewis) history in order to bring the audience up to the film’s present day. It’s absolutely instrumental to the rest of the film.

2. Parallel story lines. A parallel story structure is a great device to show the audience what’s happening to different characters at different locations, but at more or less the same time. With most scripts, parallel actions are designed to eventually converge as related or often unrelated characters ultimately end up in the same place for a shared plot. An interesting take on this is Cloud Atlas, in which an ensemble cast plays different characters spread across six different eras and locations – past, present and future.

The editing style pulled off by Alexander Berner is quite a bit different than traditional parallel story editing. A set of characters might start a scene in one era. Halfway through the scene – through some type of abrupt cut, such as walking through a door – the characters, location and eras shift to somewhere else. However, the story and the editing are such that you clearly understand how the story continues for the first half of that scene, as well as how it led into the second half. This is all without explicitly shooting those parts of each scene. Scene A/era A informs your understanding of scene B/era B and vice versa.

3. Understand camera movement. When a camera zooms, moves or is used in a shaky, handheld manner, this elicits certain emotions from the audience. As a director or DP, you need to understand when each style is appropriate and when it can be overdone. Zooming into a close-up while an actor delivers a line should be done intentionally. It tells the audience, “Listen up. This is important.” If you shoot handheld footage, like most of the Bourne series, it drives a level of documentary-style, frenetic action that should be in keeping with the concept.

The TV series NYPD Blue is credited with introducing TV audiences to the “shaky-cam” style of camera work. Many pros thought it was overdone, with movement often being introduced in an unmotivated fashion. Yet, the original Law & Order series also made extensive use of handheld photography. As this was more in keeping with a subtle documentary style, few complained about its use on that show.

4. Color palettes and art direction. Many new filmmakers feel that they can get any look they want through color grading. The reality is that it all starts with art direction. Grading should enhance what’s there, not manufacture something that isn’t. To get that “orange & teal” look, you need to have a set and wardrobe that has some greens and blues in it. To get a warm, earthy look, you need a set and wardrobe with browns and reds.

This even extends to black & white films. To get the right contrast and tonal values in black & white, you often have to use set/wardrobe color choices that are not ideal in a color world. That’s because different colors carry differing luminance and midrange values, which becomes very obvious once you eliminate the color information from the picture. Make sure you take that into account if you plan to produce a black & white film.

5. Score versus sound design. Music should enhance and underscore a film, but it does not have to be wall-to-wall. Some films, like American Hustle and The Wolf of Wall Street, are driven by a score of popular tunes. Others are composed with an original score. However, often the “score” consists of sound design elements and simple musical drones designed to heighten tension and otherwise manipulate emotion. The absence of score in a scene can achieve the same effect. Sound effects elements with stark simplicity may have more impact on the audience than music. Learn when to use one or the other or both. Often less is more.

6. Don’t tell too much story. Not every film requires extensive exposition. As I said at the top, a film is not a book. Visual cues are as important as the spoken word and will often tell the audience a lot more in shorthand than pages and pages of script. The audience is interested in the journey your film’s characters are on and frequently needs very little backstory to get an understanding of the characters. Don’t shy away from shooting enough of that sort of detail, but also don’t be afraid to cut it out when it becomes superfluous.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached – when their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite a lot different than working in a complete 4K post-production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.
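If the various “4K” frame sizes get confusing, the arithmetic is easy to check for yourself. Here’s a quick Python sketch of my own (an illustration, not part of any camera or delivery spec) that tabulates the dimensions discussed above:

```python
# Common "4K" frame sizes and their aspect ratios - plain Python, no libraries.
formats = {
    "DCI 4K (cinema)":     (4096, 2160),  # 1.9:1 projection container
    "UHD (consumer 4K)":   (3840, 2160),  # 16:9, exactly 2x HD in each dimension
    "4K 16:9 acquisition": (4096, 2304),  # reframing room for either deliverable
    "HD":                  (1920, 1080),
}

for name, (w, h) in formats.items():
    print(f"{name:22s} {w} x {h}   aspect {w / h:.2f}:1   {w * h / 1e6:.1f} Mpixels")

# UHD is an even multiple of HD: 2x in each dimension, 4x the pixel count.
print((3840 * 2160) / (1920 * 1080))  # -> 4.0
```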

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a factor of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

This brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs, and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps the quality highest by avoiding further decompression/recompression cycles, as well as variations among the debayering methods used.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted this to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
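Those per-frame numbers square with first-principles arithmetic, assuming the usual DPX packing of three 10-bit channels into one 32-bit word per pixel and three bytes per pixel for 8-bit TIFF. A quick check:

```python
# Frame sizes from first principles for a 4096 x 2304 image.
w, h = 4096, 2304
print(w * h * 4 / 1e6)  # 37.7 -> the ~38MB 10-bit DPX frame (32 bits/pixel)
print(w * h * 3 / 1e6)  # 28.3 -> the ~28MB 8-bit TIFF frame (24 bits/pixel)
```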

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the ProRes HQ example above, 30x that 3 min. clip would give us the count for a typical feature. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera-original raw form. If this same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.
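To play with these numbers yourself, here’s a rough storage estimator built only from the measured clip sizes above. The per-minute rates are simply those measurements divided by three, so treat the output as ballpark figures, not quotes:

```python
# Ballpark storage estimates, scaled from the measured ~3-minute clip above.
CLIP_MINUTES = 3
measured_gb = {
    "Redcode raw":         7,
    "ProRes 4444":         27,
    "ProRes HQ (UHD)":     16,
    "Uncompressed 10-bit": 116,
    "DPX (10-bit)":        173,
    "TIFF (8-bit)":        130,
}

def feature_tb(fmt, minutes=95, fps=23.976):
    """Scale the measured rate to a run time; data rate scales linearly with fps."""
    return measured_gb[fmt] / CLIP_MINUTES * minutes * (fps / 23.976) / 1000

for fmt in measured_gb:
    print(f"{fmt:20s} {feature_tb(fmt):5.2f} TB @ 24p"
          f"   {feature_tb(fmt, fps=59.94):5.2f} TB @ 60p")
```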

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.
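Some back-of-the-envelope math shows why: real-time playback demands sustained throughput of frame size times frame rate, and creative editing usually needs more than one stream:

```python
# Sustained throughput needed for real-time uncompressed playback.
dpx_frame_mb = 37.7               # 10-bit 4096 x 2304 DPX frame, from above
print(dpx_frame_mb * 23.976)      # ~904 MB/s for a single stream of 24p
print(dpx_frame_mb * 23.976 * 2)  # ~1.8 GB/s for two simultaneous streams
```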

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
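The data rates explain the offload problem, too. A quick sketch using the Odyssey numbers above, plus some assumed (not benchmarked) sustained copy speeds for illustration:

```python
# 1TB (two 512GB SSDs) filled in 62 minutes implies this sustained write rate:
print(1024 * 1000 / (62 * 60))  # ~275 MB/s from the camera

# Offload time per TB at assumed sustained copy speeds (illustrative only):
for label, mb_s in [("single spinning disk (~120 MB/s)", 120),
                    ("USB 3.0 (~300 MB/s)", 300),
                    ("Thunderbolt RAID (~600 MB/s)", 600)]:
    print(f"{label}: {1e6 / mb_s / 60:.0f} minutes per TB")
```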

Naturally there are ways to make 4K post efficient and less painful than it otherwise might be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Pablo Rios and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUT). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage with either a straight Rec709 conversion or a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use it with images that were already encoded with vibrant Rec709 colors.
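For the curious, the slope/offset/power math is the standard ASC CDL formula and is simple enough to sketch. This is a minimal Python illustration of my own (numpy assumed), not code from the ARRI tool:

```python
import numpy as np

def apply_cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1)):
    """ASC CDL per channel: out = clamp(in * slope + offset) ** power.
    rgb is a float array shaped (..., 3), nominally 0-1. Only negatives are
    clamped here, mirroring the tool's apparent floating-point behavior."""
    out = np.asarray(rgb, dtype=np.float64) * slope + offset
    return np.clip(out, 0.0, None) ** power

# Slope acts like gain, offset like lift, power like gamma:
grey = np.array([0.18, 0.18, 0.18])
print(apply_cdl(grey, slope=(1.2, 1.0, 0.9)))  # warm the mids slightly
print(apply_cdl(grey, power=(0.8, 0.8, 0.8)))  # brighten, gamma-style
```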

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
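If you’re curious what actually lands in the exported .cube file, it’s plain text: a short header, then one RGB row for every grid point, with red varying fastest (the common IRIDAS/Resolve convention). Here’s a hypothetical sketch that bakes a simple warm look into a small LUT; real exports typically use 33 or 65 points per axis:

```python
SIZE = 17  # LUT points per axis; real exports are usually 33 or 65

def look(r, g, b):
    """Stand-in for a real Rec709 + CDL conversion - just a mild warm gain."""
    return min(r * 1.1, 1.0), g, min(b * 0.95, 1.0)

with open("my_look.cube", "w") as f:
    f.write('TITLE "example look"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for b in range(SIZE):            # blue varies slowest
        for g in range(SIZE):
            for r in range(SIZE):    # red varies fastest
                rr, gg, bb = look(r / (SIZE - 1), g / (SIZE - 1), b / (SIZE - 1))
                f.write(f"{rr:.6f} {gg:.6f} {bb:.6f}\n")
```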

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

Using FCP X with Adobe CC

While the “battle” rages on between the proponents of using either Apple Final Cut Pro X or Adobe Premiere Pro CC as the main edit axe, there is less disagreement about the other Adobe applications. Certainly many users like Motion, Aperture and Logic, but it’s pretty clear that most editors favor Adobe solutions over others. I have encountered very few power users of Motion, as compared with After Effects wizards – and few graphic designers who can get by without touching Illustrator or Photoshop. This post isn’t intended to change anyone’s opinion, but rather to offer a few pointers on how to productively use some of the Adobe Creative Cloud (or CS6) applications to complement your FCP X workflows.

Photoshop

For many editors, Adobe Photoshop is the title tool of choice. FCP X has some nice text tools, but Photoshop is significantly better – especially for logo creation. When you import a layered Photoshop file into FCP X, it comes in as a special layered graphics file. Layers can be adjusted, animated or disabled when you “open in timeline”. Photoshop layer effects, like a drop shadow, glow or emboss, do not show up correctly inside FCP X. If you drop the imported Photoshop file onto the timeline, it becomes a self-contained title clip. Although you cannot “open in editor” to modify the file, there is a workaround.

To re-edit the Photoshop file in Adobe Photoshop, select the clip in FCP X and “reveal in Finder”. From the Finder window open the file in Photoshop. Now you can make any changes you like. Once saved, the changes are updated in FCP X. There is one caveat that I’ve noticed. All changes that you make have to be made within the existing layers. New, additional layers do not update back inside FCP X. However, if you created layer effects and then merge that layer to bake in the effects, the update is successful in FCP X and the effects become visible.

This process is very imperfect because of FCP X’s interpretation of the Photoshop files. For example, layer alignment that matches in Photoshop may be misaligned in FCP X. All layers must have some content. You cannot create blank layers and later add content into them. If you do, the updates will not be recognized in FCP X.

Audition

Sound mixing is still a weak link in Final Cut Pro X. All mixing is clip-based without a proper mixing pane, like most other NLEs have. There are methods (X2Pro Audio Convert) to send the timeline audio to Pro Tools, but many editors don’t use Pro Tools. Likewise sending an FCPXML to Logic X works better than before, but why buy an extra application if you already own Adobe Audition? I tested a few options, like using X2Pro to get an AAF into Premiere Pro and then into Audition, but none of this worked. What does work is using XML.

First, duplicate the sequence and work from the copy for safety. Review your edited sequence in FCP X and detach/delete any unused audio elements, such as muted audio associated with connected clips that are used as video-only B-roll. Next, break apart any compound clips. I recommend detaching the desired audio, but that’s optional. Now export an FCPXML for that sequence. Open the FCPXML in the Xto7 application and save the audio tracks as a new XML file.

Launch Audition and import the new XML file. This will populate your multitrack mixing window with the sequence and clips. At this stage, all clips that were inside FCP X Libraries will be offline. Select these clips and use the “link media” command. The good news is that the dialogue window will allow you to see inside the Library file and let you navigate to the correct file. Unfortunately, the correct name match will not be bolded. Since these files are typically date/time-stamped, make sure to read the names carefully when you select the first clip. The rest will relink automatically. Note that level changes and fades that were made in FCP X do not come across into Audition.

Now you can mix the session. When done, export a stereo (or other) mixed master file. Import that into FCP X and attach as a connected clip to the head of your sequence. Make sure to delete, disable (make “invisible”) or mute all previous audio.

After Effects

For many editors, Adobe After Effects is the finishing tool of choice – not just for graphics and effects, but also color correction and other embellishments. Thanks to the free ClipExporter application, it’s easy to go from FCP X to After Effects.

Similar to the Audition step, I recommend detaching/deleting all audio. Some folks like to have audio inside After Effects, but most of the time it’s in the way for me. Break apart all compound clips. You might as well remove any FCP X titles and effects filters/transitions, since these don’t translate into After Effects. Lastly, I recommend selecting all connected clips and using the “overwrite to storyline” command. This will place everything onto the primary storyline and result in a straightforward cascade of layers once inside After Effects.

Export an FCPXML file for the sequence. Open ClipExporter and select the AE conversion tab. Import the FCPXML file. An important feature is that ClipExporter supports FCP X’s retiming function, but only for AE exports. Now run ClipExporter and save the resultant After Effects script file.

Launch Adobe After Effects and from the File/Scripts pulldown menu, select the saved script file created by ClipExporter. The script will run and load the clips and your sequence as a new composition. Each individual shot is stashed into its own mini-composition and these are then placed into a stack of layers for the timeline of the main AE composition. Should you need to trim/slip the media for a shot, all available media can be accessed and adjusted within the shot’s individual mini-comp. If a shot has been retimed in FCP X, those adjustments also appear in the mini-comp and not in the main composition.

Build your effects and render a flattened file with everything baked in. Import that file into FCP X and add it as a connected clip to the top of your sequence. Disable all other video clips.

©2014 Oliver Peters