NLE Tips – Week 1


Avid Media Composer Pointers

Getting better results out of your editing experience means learning a few useful tricks. For the next few posts, I’ll offer some suggestions intended to improve your efficiency with several popular editing applications. This first post covers three quick tips for Avid Media Composer.

Film strips

One of the features of Apple’s FCP X that I really like is the way the selected clip is displayed when the “event” browser (bin) is set to the list view. The selected clip is shown at the top of the browser window as a film strip covering the length of that clip. This makes it very easy to look at the strip and identify at a glance that the shot starts as a wide and zooms to a close-up. The Avid frame view won’t give you such information without scrubbing. But did you know there’s a similar film strip solution in Media Composer?

Most editors are used to double-clicking a clip in a bin to load it into the source viewer. For many, it’s a habit that ignores another approach. When selecting a clip in a bin, simply hit the enter key to load it into the viewer. No need to click or double-click. That’s the first step in this tip.

The Avid timeline window always loads two timelines – the edited sequence and the source. You can toggle between source and edit timelines with a keystroke. The timeline window can also be set to display a “film” video track. When doing so, you get a film strip view of the entire timeline. When you view the source side of the timeline window, the result is a film strip display of the entire source clip. By leaving the timeline window toggled to the source view with the film track enabled, you can quickly go through your bin selections using the enter key and checking out the clip in this film strip display. This will give you a fast way to review your footage with minimal scrubbing and clicking.

The Find menu

When you call up the Media Composer Find menu (cmd-F on a Mac), you get several search options, including PhraseFind, if you’ve purchased that option and have indexed the audio files. Find works with more than PhraseFind, though. It can search for clips across all bins, but it also allows you to search for any text in locators (markers). If you’ve placed locators in your sequence and labeled these with text info, simply type the text into the Find menu search field, click the Find button and your play head will jump to that locator in the timeline.

Master bus

With Media Composer 7, Avid has added a master bus to the audio mixer panel. Aside from controlling overall levels, this bus will also accept real-time audio plug-ins from Media Composer’s standard set (RTAS) or from compatible third-party audio filters. I’ll often add a basic compressor/limiter to my mixes and the new master bus gives me an ideal place for it.

Some additional Media Composer tips here and here.

If you are serious about your Media Composer chops, here are three great books that will help you up your game.

Avid Uncut: Workflows, Tips, and Techniques from Hollywood Pros (Steve Hullfish)

Avid Agility: Working Faster and More Intuitively with Avid Media Composer, Third Edition (Steven Cohen)

Avid Media Composer 6.x Cookbook (Ben Hershleder)


©2014 Oliver Peters

FCP X Screen Layouts


One of the things I really liked about Final Cut Pro “legacy” was the ability to create and customize numerous screen layouts. By rearranging its collection of tabbed and floating windows, it was easy to design and save task-specific, personalized layouts of the user interface. When I edit, I prefer to work on dual-display workstations, so I can lay out my tools with plenty of screen real estate. This usually means source bins and clips on one screen and the viewers and timeline on the other.

This level of interface customization is one of the features that I miss in Final Cut Pro X. Apple’s basic design for FCP X is intended to optimize it for single-display use, especially iMacs and MacBook Pros. The user interface for FCP X is more static than FCP “legacy” – using fly-out panels instead of movable, floating, tabbed or docking windows. Nevertheless, if you have a dual-screen set-up, there are actually quite a few variations that the interface enables. A nice feature is that some of the show/hide toggles can be mapped to the keyboard. For now, you can’t save configurations, but it is reasonably quick to open, close and swap interface elements.


One interesting concept is that you can access the various open FCP X Libraries using Mission Control. It’s not always foolproof and I haven’t found it all that useful, but it is possible.


In a typical two-display workstation with the main menu on the right display, you can open the viewers or the event browser on the secondary display – here, the left one, with the events set to display on the secondary monitor. The event browser includes a panel that displays the libraries, events, keyword collections and smart collections. A button in the lower left corner of the interface lets you hide this panel. Doing so sets the browser focus only on the clips for the selected location and thus reduces clutter.


The event browser can be set to display clips as skimmable, filmstrip thumbnails or as a list of clips. In the list view, the selected clip is displayed as a single filmstrip across the top of the event browser. The viewer can be set to be a single, unified viewer that toggles between clips and the timeline. Alternatively, a second event viewer can be opened for a traditional 2-up source/record display.


You may also choose to display the viewers on the secondary display, which leaves the timeline and events on the main display. Video scopes are tied to the viewers and can be displayed in a horizontal (next to the image) or vertical (under the image) position.


Some plug-ins use on-screen controls. One such filter is Hawaiki Color – a color grading tool. Its OSC may be displayed around the image or fullscreen as an image overlay. With the viewer on the secondary screen and scopes enabled, the editor maintains focus only on one screen while color correcting shots.


The timeline display offers several clip height options. The smallest is the “chiclet” view. The timeline clips can be expanded with other views that emphasize more of the picture information or more of the audio waveforms. In addition, video animation can be revealed for a clip. This will display keyframes for in-timeline adjustments.


I recently discovered that which monitor is treated as the primary and which as the secondary display can be swapped. Simply drag the main window by the top header bar to the other monitor. As you do, the window on the secondary display automatically shifts to the opposite monitor. Then, click the green plus symbol at the top corner of the window to have it properly fill the screen. This demonstrates that the fullscreen viewer window can be shifted onto either screen.


In this final example, the viewer/timeline and event browser (on the secondary display) are shifted from one screen to the next.

©2014 Oliver Peters

Comparing Final Cut Pro X, Media Composer and Premiere Pro CC


The editing world includes a number of software options, such as Autodesk Smoke, Grass Valley EDIUS, Lightworks, Media 100, Sony Vegas and Quantel. The lion’s share of editing, though, is done on three platforms: Apple Final Cut Pro, Avid Media Composer or Adobe Premiere Pro. For the last two years many users have been holding onto legacy systems, wondering when the dust would settle and which editing tool would become dominant again. By the end of 2013, these three companies had released significant updates that give users a good idea of their future direction and have many zeroing in on a selection.


Differing business models

Adobe, Apple and Avid have three distinctly different approaches. Adobe and Avid offer cross-platform solutions, while Final Cut Pro X only works on Apple hardware. Adobe offers most of its content creation software only through a Creative Cloud subscription. Individual users have access to all creative applications for $49.99 a month (not including promotional deals), but when they quit subscribing, the applications cease to function after a grace period. Users may install the software on as many computers as they like (Mac or PC), but only two can be activated at any time.

Apple’s software sells through the Mac App Store. Final Cut Pro X is $299.99 with another $49.99 each for Motion and Compressor. Individual users may install and use these applications on any Mac computers they own, but enterprise users are supposed to purchase volume licenses to cover one installation per computer. With the release of FCP X 10.1, it appears that Apple is offering updates at no charge, meaning that once you buy Final Cut, you never pay for updates. Whether that continues as the official Apple policy from here on is unknown. FCP X uses a special version of XML for timeline interchange with other applications, so if you need to send material via EDL, OMF or AAF – or even interchange with previous versions of Final Cut Pro – you will need to augment FCP X with a variety of third-party utilities.

Avid Media Composer remains the only one of the three that follows a traditional software ownership model. You purchase, download and install the software and activate the license. You may install it on numerous Macs and PCs, but only one at a time can be activated. The software bundle runs $999 and includes Media Composer, several Avid utilities, Sorenson Squeeze, Avid FX from BorisFX and Avid DVD by Sonic. You can expand your system with three extra software options: Symphony (advanced color correction), ScriptSync (automated audio-to-script alignment) and PhraseFind (a dialogue search tool). The Symphony option also includes the Boris Continuum Complete filters.

Thanks to Avid’s installation and activation process, Media Composer is the most transportable of the three. Simply carry Mac and Windows installers on a USB key along with your activation codes. It’s as simple as installing the software and activating the license, as long as any other installations have been de-activated prior to that. While technically the FCP X application could be moved between machines, it requires that the new machine be authorized as part of a valid Apple ID account. This is often frowned upon in corporate environments. Similarly, you can activate a new machine as one of yours on a Creative Cloud account (as long as you’ve signed out on the other machines), but the software must be downloaded again to this local machine. No USB key installers here.


Dealing with formats

All three applications are good at handling a variety of source media codecs, frame rates and sizes. In some cases, like RED camera files, plug-ins need to be installed and kept current. Both Apple and Avid will directly handle some camera formats without conversion, but each uses a preferred codec – ProRes for Final Cut Pro X and DNxHD for Media Composer. If you want the most fluid editing experience, then transcode to an optimized codec within the application.

Adobe hasn’t developed its own mezzanine codec. In fact, Premiere Pro CC has no built-in transcoding tools, leaving that instead to Adobe Prelude or Adobe Media Encoder. By design, the editor imports files in their native format without transcoding or rewrapping and works with those directly in the sequence. A mix of various formats, frame rates, codecs and sizes doesn’t always play as smoothly on a single timeline as would optimized media, like DNxHD or ProRes; but, my experience is that of these three, Premiere Pro CC handles such a mix the best.

Most of us work with HD (or even SD) deliverables, but higher resolutions (2K, UHD, 4K) are around the corner. All three NLEs handle bigger-than-HD formats as source media without much difficulty. I’ve tested the latest RED EPIC Dragon 6K camera files in all three applications and they handle the format well. Both Adobe and Apple can output bigger sequence sizes, too, such as 2K and 4K. For now, Avid Media Composer is still limited to HD (1920 x 1080 maximum) sequences and output sizes. Here are some key features of the most recent updates.


Adobe Premiere Pro CC (version 7.2.1)

The current build of Premiere Pro CC was released towards the end of 2013. Adobe has been enhancing editing features with each new update, but two big selling points of this version are Adobe Anywhere integration and Direct Link between Premiere Pro CC and SpeedGrade CC. Anywhere requires a shared server for collaborative workflows, so it isn’t applicable to the majority of users, who have no Anywhere infrastructure in place. Nevertheless, this update adds the client-side integration, so those who do can connect, sign in and work.

Of more interest is Direct Link, which sends the complete Premiere Pro CC timeline into SpeedGrade CC for color correction. Since you are working directly with the Premiere Pro timeline, SpeedGrade functions with a subset of its usual controls. Operations, like conforming media to an EDL, are inactive. Direct Link facilitates the use of various compressed codecs that SpeedGrade wouldn’t normally handle by itself, since this is being taken care of by Premiere Pro’s media engine. When you’ve completed color correction, the saved timeline is sent back to Premiere Pro. Each clip has an applied Lumetri filter that contains grading information from SpeedGrade. The roundtrip is achieved without any intermediate rendering.

This solution is a good first effort, but I find that the response of SpeedGrade’s controls via Direct Link is noticeably slower than working directly in a SpeedGrade project. That must be a result of Premiere Pro working in the background. Clips in Premiere Pro with applied Lumetri effects also require more resources to play well and rendering definitely helps. The color roundtrip results were good in my tests, with the exception of any clips that used a filter layer with a LUT. These displayed with bizarre colors back in Premiere Pro.

You can’t talk about Premiere Pro without addressing Creative Cloud. I still view this as a “work in progress”. For instance, you are supposed to be able to sync files between your local drive and the Cloud, much like Dropbox. Even though everything is current on my Mac Pro, that tab in the Creative Cloud application still says “coming soon”. Others report that it’s working for them.


Apple Final Cut Pro X (version 10.1)

This update is the tipping point for many FCP 7 users. Enough updates have been released over the past two-plus years to address many of the concerns professional editors have expressed. 10.1 requires an operating system update to Mavericks (10.9 or later) and has three marquee items – a revised media structure, optimization for 4K and overall better performance. It is clear that Apple is not about to change the inherent design of FCP X. This means no tracks and no changes to the magnetic timeline. As with any update, there are plenty of small tweaks, including enhanced retiming, audio fades on individual channels, improved split edits and a new InertiaCam stabilization algorithm.

The most obvious change is the move from separate Events and Projects folders to unified Libraries, similar to Aperture. Think of a Library as the equivalent to a Final Cut Pro 7 or Premiere Pro CC project file, containing all data for clips and sequences associated with a production. An FCP X Library as viewed in the Finder is a bundled file, which can be opened using the “show package contents” Finder command. This reveals internal folders and files for Events, Projects and aliases linked to external media files. Imported files that are optionally copied into a Library are also contained there, as are rendered and transcoded files. The Libraries no longer need to live at the root of a hard drive and can be created for individual productions. Editors may open and close any or all of the Libraries needed for an edit session.

FCP X’s performance was optimized for Mavericks, the new Mac Pro and dual GPU processing. By design, this means improved 4K throughput, including native 4K support for ProRes, Sony XAVC and REDCODE camera raw media files. This performance boost has also filtered down to older machines. 10.1 brought better performance with 1080p ProRes and even 5K RED files to my 2009 Mac Pro. Clearly Apple wants FCP X to be a showcase for the power of the new Mac Pro, but you’ll get benefits from this update, even if you aren’t ready to leap to new hardware.

Along with Final Cut Pro X 10.1, Apple also released updates to Motion and Compressor. The Motion update was necessary to integrate the new FxPlug3 architecture, which enables developers to add custom interface controls. Compressor saw the biggest change, with a complete overhaul of the interface in line with the look of FCP X.


Avid Media Composer (version 7.0.3)

The biggest feature of Media Composer 7.0.3 is optimization for new operating systems. It is qualified for Windows 8.1 and Mac OS X 10.8.5, 10.9 and 10.9.1. There are a number of interface changes, including separate audio and video effects palette tabs and a changed appearance for the background processing indicator icons. 24fps sound timecode is now supported, the responsiveness of the Avid Artist Color Controller has been improved and the ability to export a simplified AAF file has been added.

Transcode choices gain a set of H.264 proxy file codecs. These had been used in other Avid news and broadcast tools, but are now extended into Media Composer. Support for RED was updated to handle the RED Dragon format. With the earlier introduction of 7.0, Avid added background transcoding services and FrameFlex – Avid’s solution for bigger-than-HD files. FrameFlex enables resizing and pan/scan/zoom control within that file’s native resolution. Media Composer also accepts mixed frame rates within a single timeline, by applying Motion Adapters to any clip that doesn’t match the frame rate of the project. 7.0.3 improves control over the frame blending method to give the editor a better choice between temporal or spatial smoothness.

There is no clear winner among these three. If you are on Windows, then the choice is between Adobe and Avid. If you need 4K output today, Apple or Adobe are your best options. All three handle a wide range of popular camera formats well – especially RED. If you like tracks – go Avid or Adobe. If you want the best application for the new Mac Pro, that will clearly be Apple Final Cut Pro X. These are all great tools, capable of any level of post production – be it commercial, corporate, web, broadcast entertainment or feature films. If you’ve been on the fence for two years, now is the time to decide, because there are no bad tools – only preferences.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Lone Survivor


The Iraq and Afghanistan wars have already spawned a number of outstanding films, but one that is bound to set the bar higher is the recently-released Lone Survivor, starring Mark Wahlberg as US Navy SEAL Marcus Luttrell. In early screenings, a number of critics have already compared the film favorably with Saving Private Ryan and Black Hawk Down.

Skip ahead a paragraph if you are concerned about spoilers. The story is based on Luttrell’s best-selling memoir of the same name. It focuses on the failed 2005 Operation Red Wings, in which a four-man SEAL team that included Luttrell was sent after an Al Qaeda-aligned Taliban leader. Their position was discovered by local shepherds and the SEALs had to decide whether to let them go or kill them. After a unit discussion, the locals were released, which presumably compromised the SEAL unit’s position. The team found itself surrounded and outnumbered, and a firefight ensued. A helicopter sent to extract the team was shot down, resulting in the deaths of sixteen men from SEAL and Army special operations units. In the fight, Luttrell’s three teammates were also killed. The story continues with the unusual circumstances that led to his survival through the help of local tribesmen and his subsequent rescue. The team leader, Lt. Michael P. Murphy, received a posthumous Congressional Medal of Honor for his actions on that day.

Making this film was entrusted to writer/director Peter Berg (Battleship, Hancock, The Kingdom, Friday Night Lights). For the edit, Berg tapped Colby Parker, Jr., who has cut seven films with Berg. Parker works on a mix of films, music videos and commercials. In fact, he first met Berg doing a Limp Bizkit music video. For commercials, Parker works out of LA’s Rock Paper Scissors editorial company, also home to Academy Award-winning editors, Angus Wall and Kirk Baxter (The Girl with the Dragon Tattoo, The Social Network).

I recently spoke with Parker about his experiences on Lone Survivor. Parker explained, “While we were working on Hancock, Peter brought in Marcus and introduced him. We were going to go full speed into Lone Survivor, but then Battleship came up first, so that had to be put on the back burner. After Battleship, it was back to Lone Survivor, but Peter had to find independent financing for it. He had to work really hard to make it happen. Peter has a great affection for three things – his son, football and the military. His father was a Navy historian, so this was a passion project for him.”

Marcus Luttrell was instrumental in keeping the film technically accurate. Parker continues, “Marcus was involved in approving the locations, as well as the edit. In a typical war movie, you see a lot of yelling in a battle as commands are issued back and forth. That’s completely different from how the SEALs operate. They are very disciplined units and each member knows what each person’s role is and where they should be. Communication is often silent through signals and there’s a lot of flanking. The SEALs call it ‘water through trees’. The SEALs tend to shoot sparingly and then wait for a response, so the enemy will reveal their position. I had to recut some scenes to minimize the yelling that wasn’t correctly portrayed.”

Lone Survivor involved a 44-day shoot in New Mexico, where the mountains were a sufficient substitute for Afghanistan. Tobias Schliessler (The Fifth Estate, Hancock) was the director of photography, working with RED cameras. According to Parker, they watched a lot of other war films for the right frame of mind. He explained, “As a reference for how the environment should look, the guideline was the documentary Restrepo about the Afghanistan war. This was his basis for sky, lighting and terrain.”

Editing took about six months. Parker said, “Peter likes to shoot with three cameras all the time, so there’s a lot of coverage. I edit while they are shooting, but I wasn’t on location. I like to blast through the footage to keep up with the camera. This way I can let Peter know if any extra coverage is needed. Often I’ll get word to the 1st AD and he’ll sneak in extra shots if the schedule permits. Although I will have a first assembly when the production wraps, Peter will never sit through a complete viewing of that. He works in a very linear manner, so as we start to view a scene, if there’s something that bothers him, we’ll stop and address it. My first cut was about two-and-a-half hours and the final length came in at two hours.”

Parker continued, “There were a number of scenes that paced well when we intercut them, rather than let them play as written in a linear fashion. For instance, we wanted to let the mission briefing scene play normally. This is where the SEAL team is briefed on their target. That scene was followed by a scene of the target beheading a local. However, we realized that an actual briefing is very technical and rote – so intercutting these scenes helped keep the audience engaged.” In a film that is intended to accurately portray the frenetic events and chaos of a battle, continuity becomes a challenge for the editor. Parker explained, “In some of the key scenes, the cameras would be on the stars for their takes and then would be turned around to cover the side of the scene showing the Taliban. It was always an issue of matching the energy.”

“I purposefully wanted to make the battlefield clear for the audience. I didn’t want it to be a messy confusing battle. I wanted the audience to experience exactly what the SEALs felt, which was the Taliban closing in on them. I slowed down the pacing so the audience could really track the scene. I’ve had people tell me after screenings that they appreciated the way the first battle is presented, because they’re never lost or confused. In the key scene, where the SEAL team is debating about what to do with the goat herders, there was a lot of improvisation and a lot of coverage. There was so much strong footage that it was overwhelming. I ended up transcribing every line of dialogue to index cards, then I would lay them on the floor and edit the scene together with these cards.”

Another struggle was how much violence to show. Parker continued, “During the battle, there are scenes with long falls and jumps down the mountainside as the SEALs are looking for cover. These were very brutal visually and I had to be conscious of whether I was getting desensitized to the brutality and needed to dial it back some. One scene that I fought hard to keep in the way I’d cut it was when Marcus breaks his leg. There’s a bone sticking out through the skin and he has to push it back in. Some folks thought that showing this was just too much, because it was too gruesome. That’s obviously extremely painful, but it’s accurate to what happened and tells a lot about what sort of people become SEALs. I’m glad it stayed in.”

Visual effects played a large role in making Lone Survivor. Image Engine Design (Elysium, Zero Dark Thirty, District 9) in Vancouver handled the majority of effects. The Chinook helicopter crash sequence was completed by ILM. Parker said, “There were a lot of practical visual effects done on location, but these were augmented by Image Engine. The crew did trek up into the upper mountains in New Mexico into some difficult places, so that created a realistic starting point. Muzzle flashes were added or enhanced and mountains were added to some backgrounds. The sets of the villages were only one or two huts and then Image Engine built everything around those. Same for the SEAL base. There were only a few real buildings and from that, they built out a larger base.”

Sound was also a key part of the experience. Parker explained, “Wylie Stateman (Django Unchained, Inglourious Basterds) was the supervising sound editor and working with him was very inspiring. He uses a lot of foley, rather than canned effects, and was able to build up a whole sound design ‘language’ for the environment of each scene. It was very collaborative. Wylie and I would discuss our ideas and massage edits to make the sound design more effective.”

The editorial department was set up with four Avid Media Composer systems connected via Unity shared storage. Parker is a big proponent of Avid. He said, “I strictly cut on Avid, but I like some of the improvements they made, thanks to the pressure put on them by Final Cut Pro. This includes some of the timeline-based editing changes, like the ability to copy and paste within the timeline.” The final DI and color grading was handled by Company 3 in Los Angeles.

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters


Color Concepts and Terminology


It’s time to dive into some of the terms and concepts that brought you modern color correction software. First of all – color grading versus color correction. Many use these terms to identify different processes, such as technical shot matching versus giving a shot a subjective “look”. I do this too, but the truth of the matter is that they are the same and interchangeable. Grading tends to be a more European way of naming the process, but it is the same as color correction.

All of our concepts stem from the film lab process known as color timing. Originally this described the person who knew how long to leave the negative in the chemical bath to achieve the desired result (the “timer”). Once the industry figured out how to manipulate color in the negative-to-positive printing process, the “color timer” was the person who controlled the color analyzer and who dialed in degrees of density and red/blue/green coloration. The Dale Grahn Color iPad application will give you a good understanding of this process. Alexis Van Hurkman also covers it in his “Color Correction Handbook”.

Electronic video color correction started with early color cameras and telecine (film-to-tape transfer or “film chain”) devices. These were based on red/blue/green color systems, where the video engineer (or “video shader”) would balance out the three components, along with exposure and black level (shadows). He or she would adjust the signal of the pick-up systems, including tubes, CCDs and photoelectric cells.

RCA added circuitry onto their cameras called a chroma proc, which divided the color spectrum according to the six divisions of the vectorscope – red, blue, green, cyan, magenta and yellow. The chroma proc let the operator shift the saturation and/or hue of each one of these six slices. For instance, you could desaturate the reds within the image. Early color correction modules for film-to-tape transfer systems adopted this same circuitry. The “primary” controls manipulated the actual pick-up devices, while the “secondary” controls were downstream in the signal chain and let you further fine tune the color according to this simple, six-vector division.


Early color correction systems were built to transfer color film to air or to videotape. They were part machine control and part color corrector. Modern color correction for post production came to be because of three key advances: memory storage, scene detection and signal decoding.

Memory storage. Once you could store and recall color correction settings, it was easy to go back and forth between camera angles or shots and apply a different setting to each. Or you could create several looks and preview those for the client. The addition of this technology was the basis for a seminal patent lawsuit, known as the Rainbow patent suit, as the battle raged over who first developed this technology.

Scene detection. Film transfer systems had to play in real-time to be recorded to videotape, which meant that shot changes had to trigger the change from one color correction setting to the next. Early systems did this via the operator manually marking an edit point (called “notching”), via an EDL (edit decision list) or through automatic scene detection circuitry. This was important for the real-time transfer of edited content, including film prints, cut negative and eventually videotape programs.

Signal decoding. The ability of color correction systems to decode a composite or component analog (and later digital) signal through added hardware, shifted color correction from camera shading and film transfer to being another general post production tool at a post facility. The addition of a signal decoder board in a DaVinci unit split the input signal into RGB parts and enabled the colorist to enhance the correction of an already-edited master using the “secondary” signal electronics of the system. This enabled “tape-to-tape” color correction of edited masters. Thanks to scene detection or an EDL, color correction could be shot-to-shot and frame-accurate, when played back in real-time for its re-encoded, corrected output back to a second videotape master.

Eventually the tools used in hardware-based, tape-to-tape color correction systems became standard. Quantel and Avid led the way by being first to incorporate these features into their nonlinear editing software.

_________________________________

Color correction software tends to break up its controls into primary and secondary functions. As you can see from the earlier explanations, there’s really no reason to do that, since we are no longer controlling the pick-up devices within a camera or telecine. Nevertheless, it’s terminology we seem to be comfortable with. Often secondary controls enable masking and keys to isolate color – not because it has to be that way, but because DaVinci added these features into their secondary control set. In modern correction tools, any function could happen on any layer, node, room, etc.

The core language for color manipulation still boils down to the simple controls exemplified by the Dale Grahn app. A signal can be made brighter, darker, more or less “dense” (contrast) and have its colorimetry shifted by adding or subtracting red, blue or green – for the overall image or in the highlight, midrange or shadow portions of the image. This basic approach can be controlled through sliders, knobs, color wheels and other user interfaces. Different software applications and plug-ins get to the same point through different means, so I’ll cover a few approaches here. Bear in mind that, since some of these represent somewhat different color science and math, the examples I present might not yield exactly the same results. Many controls are equivalent in their effect, though not necessarily identical in how they affect the image.

A common misconception is that shadow/mid/highlight controls on a 3-way color corrector will evenly divide the waveform into three discrete ranges. In fact, these are very large, overlapping ranges that interact with each other. If you shift a shadow luminance control up, it doesn’t typically just expand or compress the lower third of the waveform. Although some correctors act this way, most tend to shift the whole waveform up or down. If you change the color balance of the midrange, this color change will also affect shadows and highlights. The following is a quick explanation of some of the popular color control models.

Contrast/pivot/temperature/tint

Contrast and temperature controls have recently become more popular and are considered a more photographic approach to correction. When you adjust contrast, the image levels expand or stretch as viewed on a waveform. Highlights get brighter and shadows deepen. This contrast expansion centers on a pivot point, which by default is at the center of the signal. If you change the pivot slider you are shifting the center point of this contrast expansion. In one direction, this means the contrast control will stretch the range below the pivot point more than above it. Shift the pivot slider in the other direction for the opposite effect.

Color temperature and tint (also called magenta) controls balance the red/blue/green signal channels in relationship to each other. If you slide a color temperature control while watching an RGB parade display on a waveform, you’ll note that adjustments shift the red and blue channels up or down in the opposite direction to each other, while leaving green unaffected. When you adjust the tint (or magenta) slider, you are adjusting the green channel. As you raise or lower the green, both the red and blue channels move together in a compensating direction.
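
To make this concrete, here is a minimal numpy sketch of how these two pairs of controls are commonly implemented. The function names and the tint compensation factor are illustrative assumptions on my part – every grading application uses its own exact math – but the behavior matches what you’d see on the waveform.

```python
import numpy as np

def contrast_pivot(rgb, contrast=1.0, pivot=0.5):
    # Stretch values away from (or squeeze them toward) the pivot point.
    # contrast > 1.0 expands the range; contrast < 1.0 flattens it.
    return (rgb - pivot) * contrast + pivot

def temperature_tint(rgb, temp=0.0, tint=0.0):
    # temp shifts red and blue in opposite directions, leaving green alone.
    # tint moves green, while red and blue compensate together.
    # The 0.5 compensation factor is an assumed value for illustration.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out = np.empty_like(rgb)
    out[..., 0] = r + temp - tint * 0.5
    out[..., 1] = g + tint
    out[..., 2] = b - temp - tint * 0.5
    return out

# Expand contrast around a slightly lowered pivot, then warm the image a touch.
px = np.array([[0.2, 0.4, 0.6]])
px = temperature_tint(contrast_pivot(px, contrast=1.2, pivot=0.45), temp=0.02)
```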

Slope/offset/power

The SOP model is used for CDL (color decision list) values and breaks down the signal according to luma (master), red, green and blue, expressed in the form of plus or minus values for slope, offset and power. Scratch Play’s color adjustments are a good example of the SOP model in action. Slope is equivalent to gain. Picture the waveform as a diagonal line from dark to light. As you rotate this imaginary line, the higher part becomes taller, which represents brightness values. Think of the slope concept as this rotating line. As such, its results are comparable to a contrast control.

The offset control shifts the entire signal up or down, similar to other shadow or lift controls. The power control alters gamma. As you adjust power, the gamma signal is curved in a positive or negative direction, effectively making the midrange tones lighter or darker.
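
Unlike most grading controls, the SOP math is published as part of the ASC CDL specification, so it can be sketched directly. Here is a minimal per-channel version; note that real implementations differ in where (or whether) they clamp out-of-range values.

```python
import numpy as np

def cdl_sop(rgb, slope=1.0, offset=0.0, power=1.0):
    # ASC CDL per-channel transfer: out = (in * slope + offset) ** power
    x = rgb * slope + offset       # slope "rotates" the line, offset shifts it
    x = np.clip(x, 0.0, 1.0)       # clamp before the power step
    return x ** power              # power bends the midtones (gamma-like)

# slope=1, offset=0, power=1 is the neutral setting that leaves pixels untouched.
gray = np.array([[0.18, 0.18, 0.18]])
graded = cdl_sop(gray, slope=1.1, offset=-0.02, power=1.2)
```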

Lift/gamma/gain

The LGG model is the common method used for most 3-way color wheel-style correctors. It effectively works in a similar manner to contrast and SOP, except that the placement of controls makes more sense to most casual users. Gain, as the name implies, increases the signal, effectively expanding the overall values and making highlights brighter. Lift shifts the entire signal higher or lower. Changing a lift control to darken shadows will also have some effect on the overall image. Gamma bends the curve and effectively makes the midrange values lighter or darker.
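
As a sketch, one common LGG formulation looks like the following – treat it as representative rather than definitive, since each application’s exact math differs.

```python
import numpy as np

def lift_gamma_gain(rgb, lift=0.0, gamma=1.0, gain=1.0):
    # gain scales the whole signal (bright values move the most),
    # lift offsets the whole signal (as noted above, darkening shadows
    # with lift also pulls down the rest of the image),
    # gamma bends the midtones between the black and white points.
    x = np.clip(rgb * gain + lift, 0.0, 1.0)
    return x ** (1.0 / gamma)      # gamma > 1.0 lightens the midrange

px = np.array([[0.1, 0.5, 0.9]])
print(lift_gamma_gain(px, lift=0.05, gamma=1.1, gain=1.05))
```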

Luma ranges

The portions of the signal altered by highlight/shadow/midrange controls (whether SOP, LGG or another model) overlap. If you change the color balance for the midrange tones, you will also contaminate shadows and highlights with this color shift. The extent of the portion that is affected is governed by a luma range control. Many color correction applications do not give you control over shifting the crossover points of these luma ranges. Some that do include Avid Symphony, Synthetic Aperture Color Finesse and Adobe SpeedGrade. Each offers curves or sliders to reduce or expand the area controlled by each luma range, effectively tightening or widening the overlap or crossover between the ranges.
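
The idea can be sketched as three overlapping weight curves that always sum to one. The smoothstep crossover below is purely illustrative – each application defines its own curves – but it shows why widening the softness widens the overlap, and why a midtone move bleeds into the neighboring ranges.

```python
import numpy as np

def smoothstep(e0, e1, x):
    t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def luma_weights(y, low=0.33, high=0.66, softness=0.2):
    # Three overlapping weights over luma that sum to 1.0. A "midrange"
    # correction scaled by mids still has some effect wherever shadows
    # or highlights haven't fully taken over -- the crossover overlap.
    shadows = 1.0 - smoothstep(low - softness, low + softness, y)
    highlights = smoothstep(high - softness, high + softness, y)
    mids = 1.0 - shadows - highlights
    return shadows, mids, highlights

# Raising 'softness' stretches the crossover regions between the ranges.
s, m, h = luma_weights(np.linspace(0.0, 1.0, 5))
```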

DaVinci Resolve includes a similar function within its log-style color wheels panel. It uses range adjustments that can limit the area affected by the balance and saturation controls. Similar results may be achieved by using HSL keyers or qualifiers that include softening controls.

Channels or printer lights

Video signals are made up of red, blue and green channel information. It is not uncommon for properly-balanced digital cameras to still maintain a green color cast to the overall image, especially if log-profile recording was used. Here, it’s best to simply balance the overall channels first to neutralize the image, rather than attempt to do this through color wheel adjustments. Some software uses actual channel controls, so it’s easy to make a base-level adjustment to the output or mix of a channel. If your software uses printer lights, you can achieve the same results. Printer lights harken back to lab color timing, using point values that equate to color analysis values. Regardless, dialing in a plus or minus red/blue/green printer light value effectively gives you the same results as altering the output value of a specific color channel.
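
Conceptually, a printer light change is just a per-channel exposure offset. In this sketch the point-to-exposure step is an assumed placeholder value, not a published standard – real timing systems each had their own calibration – but it shows why plus or minus points behave like channel adjustments.

```python
import numpy as np

def printer_lights(rgb, red_pts=0, green_pts=0, blue_pts=0, step=0.025):
    # Each point nudges one channel's exposure by a fixed log-space step.
    # 'step' is an illustrative assumption, not a standard constant.
    points = np.array([red_pts, green_pts, blue_pts], dtype=float)
    return rgb * (2.0 ** (points * step))   # per-channel exposure scaling

# A few plus-red points warm a neutral gray slightly.
warmer = printer_lights(np.array([[0.4, 0.4, 0.4]]), red_pts=4)
```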

This is just a short post to go over some of the more confusing terminology found in modern color correction software. Many applications tend to blend the color science models, so as you apply the points mentioned here to your favorite tool, you may see somewhat different results. Hopefully I’ve gotten you in the ballpark, so you’ll understand what’s happening the next time you twirl a knob.

©2014 Oliver Peters

The Wolf of Wall Street


Few directors have Martin Scorsese’s talent for telling entertaining stories about the seamier side of life. He has a unique ability to get us to understand and often be seduced by the people who live outside of the accepted norms. That’s an approach he’s used with great success in films like Taxi Driver, Goodfellas, Gangs of New York and others. Following this path is Scorsese’s newest, The Wolf of Wall Street, based on the memoir of stock broker Jordan Belfort.

Belfort founded the brokerage firm Stratton Oakmont in the 1990s, which eventually devolved into an operation based on swindling investors. The memoir chronicles Belfort’s excursions into excesses and debauchery that eventually led to his downfall and federal prosecution for securities fraud and money laundering. He served three years in federal prison and was sentenced to pay $110 million in restitution after cooperating with the FBI. The film adaptation was written by Terence Winter (Boardwalk Empire, Sopranos), who himself spent some time working in a tamer environment at Merrill Lynch during law school. Leonardo DiCaprio stars as Belfort, along with Jonah Hill and Matthew McConaughey as fellow brokers. (Note: Due to the damage caused by the real Belfort and Stratton Oakmont to its investors, the release of the film is not without its critics. Click here, here and here for some reactions.)

I recently spoke with Thelma Schoonmaker, film editor for The Wolf of Wall Street. Schoonmaker has been a long-time collaborator with Martin Scorsese, most recently having edited Hugo. I asked her how it was to go from such an artistic and technically complex film, like Hugo, to something as over-the-top as The Wolf of Wall Street. She explained, “When I encounter people outside of this industry and they learn I had some connection with Hugo, they make a point of telling me how much they loved that film. It really touched them. The Wolf of Wall Street is a completely different type of film, of course.”

“I enjoyed working on it, because of its unique humor, which no one but Scorsese expected. It’s highly entertaining. Every day I’d get these fantastically funny scenes in dailies. It’s more of an improvisational film, like Raging Bull, Casino or Goodfellas. We haven’t done one of those in a while and I enjoyed getting back to that form. I suppose I like the challenge, because of the documentary background that Marty and I have from our early careers. Continuity doesn’t always match from take to take, which is what makes the editing great fun, but also hard. You have to find a dramatic shape for the improvised scenes, just as you do in a documentary.”

Schoonmaker continued, “The scenes and dialogue are certainly scripted and Scorsese tells the actors that they need to start ‘here’ and end up ‘there’. But then, ‘have fun with the part in the middle’. As an editor, you have to make it work, because sometimes the actors go off on wonderful tangents that weren’t in the script. The cast surrounding Belfort and his business partner, Donnie Azoff (played by Jonah Hill), very quickly got into creating the group of brokers who bought into the method Belfort used to snag investors into questionable stock sales. They are portrayed as not necessarily the smartest folks and Belfort used that to manipulate them and become their leader. This is fertile ground for comedy and everyone dove into their parts with incredible gusto – willing to do anything to create the excess that pervaded Belfort’s company. They also worked together perfectly as an ensemble – creating jealousies between themselves for the film.”

The Wolf of Wall Street is in many ways a black comedy. Schoonmaker addressed the challenges of working with material that portrayed some pretty despicable behavior. “Improvisation changed the nature of this film. You could watch the actors say the most despicable things in a take and then they’d crack up afterwards. I asked Leo at one point how he could even say some of the lines with a straight face! Some of it is pretty bizarre, like talking about how to create a dwarf-tossing contest, which Belfort organized as morale boosting for his office parties. Or offering a woman $10,000 to shave her head. And this was actually done in dead seriousness, just for sport.”

In order to get the audience to follow the story, you can’t avoid explaining the technical intricacies of the stock market. Schoonmaker explained, “Belfort started out selling penny stocks. Typically these have a fifty percent profit compared with blue chip stocks that might only have a one percent profit margin. Normally poorer investors buy penny stocks, but Belfort got his brokers to transfer those sales techniques to richer clients, who were first sold a mix of blue chip and penny stocks. From there, he started to manipulate the penny stocks for his own gain, ultimately leading to his downfall. We had to get some of that information across, without getting too technical. Just enough – so the audience could follow the story. Not everything is explained and there are interesting jumps forward. Leo fills in a lot of this information with his voice-overs. These gave Leo’s character additional flavor, reinforcing his greed and callousness because of the writing. A few times Scorsese would have Leo break the fourth wall by talking directly to the audience to explain a concept.”

The Wolf of Wall Street started production in 2012 for a six-month-long shoot and completed post in November 2013. It was shot primarily on 35mm film, with additional visual effects and low-light material recorded on an ARRI ALEXA. The negative was scanned and delivered as digital files for editing on a Lightworks system.

Schoonmaker discussed the technical aspects. “[Director of Photography] Rodrigo Prieto did extensive testing of both film and digital cameras before the production. Scorsese had shot Hugo with the ALEXA, and was prepared to shoot digitally, but he kept finding he liked the look of the film tests best. Rob Legato was our visual effects supervisor and second unit director again. This isn’t an effects film, of course, but there are a lot of window composites and set extensions. There were also a lot of effects needed for the helicopter shots and the scenes on the yacht. Rob was a great collaborator, as always.

“Scott Brock, my associate editor, helped me with the temp sound mixes on the Lightworks and Red Charyszyn was my assistant handling the complex visual effects communication with Rob. They both did a great job.” Scott Brock added some clarification on their set-up. According to Brock, “The lab delivered the usual Avid MXF media to us on shuttle drives, which we copied to our EditShare Xstream server. We used two Avids and three Lightworks for Wolf, all of which were networked to the Xstream server. We would use one of the Avids to put the media into Avid-style folders, then our three Lightworks could link to that media for editing.”

Schoonmaker continued, “I started cutting right at the beginning of production. As usual, screening dailies with Scorsese was critical, for he talks to me constantly about what he has shot. From that and my own feelings, I start to edit. This was a big shoot with a very large cast of extras playing the brokers in the brokerage bullpens. These extras were very well-trained and very believable, I think. You really feel immersed in the world of high-pressure selling. The first cut of the film came in long, but still played well and was very entertaining. Ultimately we cut about an hour out to get to the final length of just under three hours with titles.”

“The main ‘rewriting of the scenes’ that we did in the edit was because of the improvisations and the occasional need for different transitions. We had to get the balance right between the injected humor and the scripted scenes. The center of the film is the big turning point. Belfort turns a potentially damaging blow to an IPO that the company is offering into a triumph, as he whips up his brokers to a fever pitch. We knew we had to get to that earlier than in the first cut. Scorsese didn’t want to simply do a ‘rise and fall’ film. It’s about the characters and the excesses that they found themselves caught up in and how that became so intoxicating.”

An unusual aspect of The Wolf of Wall Street is the lack of a traditional score. Schoonmaker said, “Marty has a great gift for putting music to film. He chose unexpected pre-recorded pieces to reflect the intensity and craziness of Belfort’s world. Robbie Robertson wrote an original song for the end titles, but the rest of the film relies completely on existing songs, rather than score. It’s not intended to be period-accurate, but rather music that Scorsese feels is right for the scene. He listens to [SiriusXM] The Loft while he’s shaving in the morning and often a song he hears will just strike him as perfect. That’s where he got a lot of his musical inspiration for Wolf.”

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters

LUTs and FCP X


LUTs, or color look-up tables, are a method of converting images from one color space or gamma profile into another. A LUT is usually a mathematically correct transform of one set of color and level values into another. For most editors and colorists, LUTs are commonly associated with the log profiles that are increasingly used with various digital cameras, like an ARRI ALEXA, RED One, RED Epic or Blackmagic Design Cinema Camera.

The concept gets confusing, because there are various types of LUTs and they can be inserted into different stages of the pipeline. There are display LUTs, used to convert the viewing color space, such as from Rec. 709 (video) into P3 (digital cinema projection). These can be installed into hardware conversion boxes, monitors and within software grading applications. There are camera LUTs, which are used to convert gamma profiles, such as from log-C to Rec. 709. And finally, there are creative LUTs used for aesthetic purposes, like film stock emulation.
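
Under the hood, applying a LUT is simply a table lookup with interpolation between stored entries. Here is a sketch of the 1D per-channel case – camera conversion LUTs are usually 3D cubes, but the interpolation idea is the same in three dimensions. The table and function are illustrative, not any product’s actual code.

```python
import numpy as np

def apply_lut_1d(rgb, lut):
    # 'lut' is an (N, 3) table mapping N equally spaced input levels in
    # [0, 1] to output levels -- e.g. a log-to-Rec. 709 transfer curve.
    n = lut.shape[0]
    grid = np.linspace(0.0, 1.0, n)
    out = np.empty_like(rgb)
    for c in range(3):  # interpolate each channel through its column
        out[..., c] = np.interp(rgb[..., c], grid, lut[:, c])
    return out

# A trivial two-point identity LUT leaves the image unchanged.
identity = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
img = np.array([[0.25, 0.5, 0.75]])
assert np.allclose(apply_lut_1d(img, identity), img)
```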

One of the really sweet parts of Apple Final Cut Pro X is that it offers a vastly improved color pipeline that ties in closely to the underpinnings of the OS, such as ColorSync. This offers developers opportunities over FCP “legacy” and, quite frankly, over many other competitors. Built into the code is the ability to recognize certain camera metadata, if the camera manufacturer chooses to take advantage of Apple’s SDK. ARRI, Sony and RED are among those that have done so. For example, when you import ARRI ALEXA footage that was recorded with a log-C gamma profile, a metadata flag in the file toggles on log processing automatically within FCP X. Instead of seeing the flat log-C image, you see one that has already been converted, on-the-fly, into Rec. 709 color space.

This built-in log processing comes with some caveats, though. The capability is only enabled with files recorded on ALEXA cameras with more recent firmware. It cannot be manually applied to older log-C footage, nor to any other log-encoded video file. It can only be toggled on or off without any adjustments. Finally, because this is done via under-the-hood ColorSync profile changes, it happens prior to the point any filters or color correction can be applied within FCP X itself.

A different approach has been developed by colorist Denver Riddle, known for his Color Grading Central website, products and tutorials. His new product, LUT Utility, is designed to provide FCP X editors with a better way of using LUTs for both corrective and creative color transforms. The plug-in installs into both Final Cut Pro X and Motion 5 and comes with a number of built-in LUTs for various cameras, such as the ALEXA, Blackmagic and even the Cinestyle profiles used with the Canon HDSLRs. Simply drop the filter onto a clip and select the LUT from the pulldown menu in the FCP X inspector pane. As a filter, you can freely apply any LUT selection, regardless of camera – plus, you can adjust the strength of the LUT via a slider. It can work within a series of filters applied to the same clip and can be placed upstream or downstream of any other filters, as well as within an adjustment layer (blank title effect). You can also stack multiple instances of the LUT with different settings on the same clip for creative effect.

The best part of LUT Utility is that you aren’t limited to the built-in LUTs. When you install the plug-in, a LUT Utility pane is added to System Preferences. In that pane, you can add additional LUTs sold by Color Grading Central or that you have created yourself. (External LUT files can be directly accessed within the filter when working in Motion 5.) One such package is the set of Osiris Digital Film Emulation LUTs developed jointly by Riddle and visionCOLOR. These are a set of nine film LUTs designed to mimic the looks of various film stocks. Each has two settings designed for either log or Rec. 709 video. For example, you can take an ALEXA log-C file and apply two instances of LUT Utility. Set the first filter to use the log-C-to-Rec.709 LUT. Then in the second filter, pick one of the film LUTs, but use the Rec. 709 version of it. Or, you could apply one instance of the LUT Utility filter and simply pick the same film LUT, but instead, select its log version. Both work, but will give you slightly different looks. Using the filter’s amount slider, it’s easy to fine tune the intensity of the effect.
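
The strength slider behaves like a crossfade between the untouched image and the LUT result, and stacking two instances is just function composition. The blend math below is my assumption – the plug-in’s internals aren’t published – but it is the usual way an “amount” control is built.

```python
import numpy as np

def apply_with_strength(rgb, lut_fn, strength=1.0):
    # Blend between the original pixels and the LUT output.
    return (1.0 - strength) * rgb + strength * lut_fn(rgb)

# Stand-in curves -- placeholders for real LUT tables, not actual products.
log_to_rec709 = lambda x: np.clip(x, 0.0, 1.0) ** 0.6
film_emulation = lambda x: np.clip(x * 1.05 - 0.02, 0.0, 1.0)

img = np.array([[0.3, 0.5, 0.7]])
step1 = apply_with_strength(img, log_to_rec709, strength=1.0)
step2 = apply_with_strength(step1, film_emulation, strength=0.6)  # dialed back
```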

LUT Utility is applied as a filter, which means you can still add other color correction filters before or after it. Applying a filter, like Hawaiki Color, prior to a log conversion LUT, means that you would be adjusting color values of the log image, before converting it into Rec. 709. If you add such a filter after the LUT, then you are grading the already-converted image. Each position will give you different results, but most of this is handled gracefully, thanks to FCP X’s floating-point processing. Finally, you can also apply the LUT as a filter and then do additional corrections downstream of the filter by using the built-in Color Board tools.

I found these LUTs easy to install and use. They appear to be pretty lightweight in how they affect FCP X playback performance. I’m running a 2009 Mac Pro with a new Mavericks installation. I can apply one or more instances of the LUT Utility filter and my unrendered ProRes media plays in real-time. With the widespread use of log and log-style gamma profiles, this is one of the handiest filter sets to have if you are a heavy FCP X user. Not only are most of the common cameras covered, but the Osiris LUTs add a nice creative edge that you won’t find at this price point in competitive products. If you use FCP X for color correction and finishing, then it’s really an essential tool.

©2014 Oliver Peters