The On Camera Interview


Many projects are based on first person accounts using the technique of the on camera interview. This approach is used in documentaries, news specials, corporate image presentations, training, commercials, and more. I’ve edited a lot of these, especially for commercials, where a satisfied customer might give a testimonial that gets cut into a five-ish minute piece for the web or a DVD and then various commercial lengths (:10, :15, :30, :60, 2 min.). The production approach and editing techniques are no different in this application than if you are working on a documentary.

The interviewer

The interview is going to be no better than the quality of the interviewer asking the off camera (and unheard) questions. Asking good questions in the right manner will yield successful results. Obviously the interviewer needs to be friendly enough to establish a rapport with the subject. People get nervous on camera, so the interviewer needs to get them relaxed. Then they can comfortably answer the questions and tell the story in their own words. The interviewer should structure the questions in a way that the totality of the responses tells a story. Think in terms of story arc and strive to elicit good beginning and ending statements.

Some key points to remember. First, make sure you get the person to rephrase the question as part of their answer, since the audience won’t hear the interviewer. This makes their answer a self-contained statement. Second, let them talk. Don’t interject or jump on the end of the answer, since this will make editing more difficult.

Sometimes in a commercial situation, you have a client or consultant on set, who wants to make sure the interviewee hits all the marketing copy points. Before you get started, you’ll need to have an understanding with the client that the interviewee’s answers will often have to be less than perfect. The interviewees aren’t experienced spokespersons. The more you press them to phrase the answer in the exact way that fits the marketing points or to correctly name a complex product or service in every response, the more stilted their speaking style will become. Remember, you are going for naturalness, honesty and emotion.

The basics

As you design the interview set, think of it as staging a portrait. Be mindful of the background, the lighting, and the framing. Depending on the subject matter, you may want a matching background. For example, a doctor’s interview might look best in a lab or in the medical office with complex surgical gear in the background. An interview with an average person is going to look more natural in a neutral environment, like their living room.

You will want to separate the interview subject from the background and this can be achieved through lighting, lens selection, and color scheme. For example, a blonde woman in a peach-colored dress will stand out quite nicely against a darker blue-green background. A lot of folks like the shallow depth-of-field and bokeh effect achieved by a full-frame Canon 5D camera with the right lens. This is a great look, but you can achieve it with most other cameras and lenses, too. In most cases, your video will be seen in the 16:9 HD format, so an off-center framing is desirable. If the person is looking camera left, then they should be on the right side of the frame. Looking camera right, then they should be on the left side.

Don’t forget something as basic as the type of chair they are sitting in. You don’t want a chair that rocks, rolls, leans back, or swivels. Some interviews take a long time and subjects that have a tendency to move around in a chair become very distracting – not to mention noisy – in the interview, if that chair moves with them. And of course, make sure the chair itself doesn’t creak.

Camera position

The most common interview design you see is one where the subject is looking slightly off camera, interacting with the interviewer, who is sitting to the left or the right of the camera. You do not want to instruct them to look into the camera lens while you are sitting next to the camera, because most people’s eyes will dart between the interviewer and the camera when they attempt this. It’s unnatural.

The one caveat is that if the camera and interviewer are far enough away from the interview subject – and the interviewer is also the camera operator – then it will appear as if the interviewee is actually looking into the lens. Because the interviewer and the camera are so close to each other, the subject appears to be looking at the lens when in fact he or she is really just looking at the interviewer.

If you want them looking straight into the lens, then one solution is to set up a system whereby the subject can naturally interact with the lens. This is the style documentarian Errol Morris has used in a rig that he dubbed the Interrotron. Essentially it’s a system of two teleprompters. The interviewer and subject can be in the same studio, although separated in distance – or even in other rooms. The two-way mirror of each teleprompter projects one person’s image to the other. While looking directly at the interviewer in the teleprompter’s mirror, the interviewee is actually looking directly into the lens. This feels natural, because they are still looking right at the person.

Most producers won’t go to that length, and in fact the emotion of speaking directly to the audience may or may not be appropriate for your piece. Whether you use Morris’ solution or not, the single-camera approach makes it harder to avoid jump cuts. Morris actually embraces and uses these; however, most producers and editors prefer to cover them in some way. Covering the edit with a b-roll shot is a common solution, but another is to “punch in” on the frame by blowing up the shot digitally by 15-30% at the cut. Now the cut looks like you used a tighter lens. This is where 4K-resolution cameras come in handy if you are finishing in 2K or HD.
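
To see why higher-resolution acquisition helps, here’s a quick back-of-the-envelope sketch (purely illustrative Python) of how much punch-in room a source format leaves before the blow-up requires upscaling:

```python
# Rough sketch: a punch-in crops the frame and rescales the crop to the
# delivery size, so the crop must still contain at least as many pixels
# across as the delivery format to avoid upscaling.

def max_punch_in(source_width, delivery_width):
    """Maximum punch-in percentage with no upscaling."""
    return (source_width / delivery_width - 1.0) * 100.0

print(max_punch_in(1920, 1920))  # 0.0   - an HD source punched into HD always upscales a bit
print(max_punch_in(3840, 1920))  # 100.0 - a UHD source finishing in HD allows up to a 2x "tighter lens"
```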

With the advent of lower-cost cameras, like the various DSLR models, it’s quite common to produce these interviews as two-camera shoots. Both cameras may be positioned to the left or the right of the interviewer, or one may be placed on either side. There really is no right or wrong approach. I’ve done a few where the A-camera is right next to the interviewer, but the B-camera is almost 90 degrees to the side. I’ve even seen it where the B-camera exposes the whole set, including the crew, the other camera, and the lights. This gives the other angle an almost voyeuristic quality. When two cameras are used, each should have a different framing, so a cut between the cameras doesn’t look like a jump cut. The A-camera might have a medium framing including most of the person’s torso and head, while the B-camera’s framing might be a tight close-up of their face.

While it’s nice to have two matched cameras and lens sets, this is not essential. For example, if you end up with two totally mismatched cameras out of necessity – like an Alexa and a GoPro or a C300 and an iPhone – make the best of it. Do something radical with the B-camera to give your piece a mixed media feel. For example, your A-camera could have a nice grade to it, but the B-camera could be black-and-white with pronounced film grain. Sometimes you just have to embrace these differences and call it a style!


When you are there to get an interview, be mindful to also get additional b-roll footage for cutaway shots that the editor can use – tools of the trade, the environment, the interview subject at work, etc. Some interviews are conducted in a manner other than sitting down. For example, a cheesemaker might take you through the storage room and show off different rounds of cheese. Such walking-talking interviews might make up the complete interview or they might be simple pieces used to punctuate a sit-down interview. Remember, if you have the time, get as much coverage as you can!

Audio and sync

It’s best to use two microphones on all interviews – a lavaliere on the person and a shotgun mic just out of the camera frame. I usually prefer the sound of the shotgun, because it’s more open; but depending on how noisy the environment is, the lav may be the better channel to use. Recording both is good protection. Not all cameras have great sound systems, so you might consider using an external audio recorder. Make sure you patch each mic into separate channels of the camera and/or external recorder, so that they are NOT summed.
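
If you do end up with both mics recorded to discrete channels of a single file, splitting them back out for the mix is straightforward. Here’s a minimal Python sketch using only the standard library; the file names are hypothetical and it assumes 16-bit PCM audio:

```python
import wave

# Split a two-channel interview recording (lav on ch. 1, shotgun on ch. 2)
# into separate mono WAVs, so the channels stay discrete and unsummed.
with wave.open("interview.wav", "rb") as src:
    assert src.getnchannels() == 2 and src.getsampwidth() == 2
    rate = src.getframerate()
    frames = src.readframes(src.getnframes())

# Interleaved 16-bit stereo bytes: [ch1, ch1, ch2, ch2, ch1, ch1, ...]
lav = b"".join(frames[i:i + 2] for i in range(0, len(frames), 4))
shotgun = b"".join(frames[i + 2:i + 4] for i in range(0, len(frames), 4))

for name, data in (("lav.wav", lav), ("shotgun.wav", shotgun)):
    with wave.open(name, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(data)
```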

Wherever you record, make sure all sources receive audio. It would be ideal to feed the same mics to all cameras and recorders, but that’s not always possible. In that case, make sure that each camera is at least using an onboard camera mic. The reason to do this is for sync. The two best ways to establish sync are common timecode and a slate with a clapstick – ideally both. Absent either of those, some editing applications (as well as a tool like PluralEyes) can analyze the audio waveforms and automatically sync clips based on matching sound. Worst case, the editor can manually sync clips by marking common aural or visual cues.
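
The idea behind waveform-based sync is simple cross-correlation: slide one recording against the other and keep the offset where they match best. A minimal Python/NumPy sketch of the concept (assuming mono arrays at the same sample rate, not how any particular product implements it):

```python
import numpy as np

def sync_offset(camera_audio, recorder_audio, sample_rate):
    """Find the time shift (in seconds) that best aligns two recordings.

    The sign of the result tells you which clip leads. Real tools do the
    same thing via FFT, since np.correlate is slow on long recordings.
    """
    corr = np.correlate(camera_audio, recorder_audio, mode="full")
    lag = np.argmax(corr) - (len(recorder_audio) - 1)
    return lag / sample_rate
```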

Depending on the camera model, you may have recording formats that don’t span media files and automatically start a new clip every 4GB (about every 12 minutes with some formats). The interviewer should be mindful of these limits. If possible, all cameras should be started together and re-slated at the beginning of each new clip.
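
The 12-minute figure follows directly from the file-size math. As a sanity check – the 45 Mbps data rate below is an assumption, roughly typical of compressed HD camera formats, so plug in your own camera’s rate:

```python
# How long until a 4GB file limit forces a new clip?
limit_bits = 4 * 1024**3 * 8        # 4 GB expressed in bits
bitrate = 45_000_000                # assumed ~45 Mbps video + audio
minutes = limit_bits / bitrate / 60
print(f"{minutes:.1f} minutes per clip")  # about 12.7 minutes
```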

Editing workflow

Most popular nonlinear editing applications (NLEs) include great features that make editing on camera interviews reasonably easy. To end up with a solid five minute piece, you’ll probably need about an hour of recorded interview material (per camera angle). When you cut out the interviewer’s questions, the little bit of chit chat at the beginning, and the repeats or false starts that an interviewee may have, you are generally left with about thirty minutes of useable responses. That’s a 6:1 ratio.

Your goal as an editor is to be a storyteller through the soundbites you select and the order into which you arrange them – to have the subject seamlessly tell their story without the aid of an on camera host or voice-over narrator. To keep the project organized, use NLE tools like favorites, markers, and notes, along with human tools like written transcripts and your own notes.

This is the standard order of things for me:

Sync sources and create multi-cam sequences or multi-cam clips depending on the NLE.

Pass 1 – create a sequence with all clips synced up and organized into a single timeline.

Pass 2 – clean up the interview and remove all interviewer questions.

Pass 3 – whittle down the responses into a sequence of selected answers.

Pass 4 – rearrange the soundbites to best tell the story.

Pass 5 – cut between cameras if this is a multi-camera production.

Pass 6 – clean up the final order by editing out extra words, pauses, and verbal gaffs.

Pass 7 – color correct clips, mix audio, add b-roll shots.

As I go through this process, I am focused on creating a good “radio cut” first. In other words, how does the story sound if you aren’t watching the picture? Once I’m happy with this, I can worry about color correction, b-roll, etc. When building a piece that includes multiple interviewees, you’ll need to pay attention to several other factors. These include getting a good mix of diversity – ethnic, gender, job classification. You might want to check with the client first as to whether each and every person interviewed needs to be used in the video. Clearly some people are going to be duds, so it’s best to know up front whether you’ll need to go through the effort of finding a passable soundbite in those cases.

There are other concerns when re-ordering clips among multiple people. Arranging the order of clips so that you can cut between alternating left and right-framed shots makes the cutting flow better. Some interviewees come across better than others; however, make sure not to lean totally on their responses. When you get multiple, similar responses, pick the best one, but if possible spread around who you pick in order to get the widest mix of respondents. As you tell the story, pay attention to how one soundbite might naturally lead into another – or how one person’s statement can complete another’s thoughts. It’s those serendipitous moments that you are looking for in Pass 4. It’s what should take the most creative time in your edit.

Philosophy of the cut

In any interview, the editor is making editorial selections that alter reality. Some broadcasters have guidelines as to what is and isn’t permissible, due to ethical concerns. The most common editorial technique in play is the “Frankenbite”. That’s where an edit is made to truncate a statement or combine two statements into one. Usually this is done because the answer went off into a tangent and that portion isn’t relevant. By removing the extraneous material and creating the “Frankenbite” you are actually staying true to the intent of the answer. For me, that’s the key. As long as your edit is honest and doesn’t twist the intent of what was said, then I personally don’t have a problem with doing it. That’s part of the art in all of this.

It’s for these reasons, though, that directors like Morris leave the jump cuts in. This lets the audience know an edit was made. Personally, I’d rather see a smooth piece without jump cuts and that’s where a two camera shoot is helpful. Cutting between two camera angles can make the edit feel seamless, even though the person’s expression or body position might not truly match on both sides of the cut. As long as the inflection is right, the audience will accept it. Occasionally I’ll use a dissolve, white flash or blur dissolve between sections, but most of the time I stick with cuts. The transitions seem like a crutch to me, so I use them only when there is a complete change of thought that I can’t bridge with an appropriate soundbite or b-roll shot.

The toughest interview edit tends to be when you want to clean things up, like a repeated word, a stutter, or the inevitable “ums” and “ahs”. Fixing these by cutting between cameras normally results in a short camera cut back and forth. At this point, the editing becomes a distraction. Sometimes you can cheat these jump cuts by staying on the same camera angle and using a short dissolve or one of the morphing transitions offered by Avid, Adobe, or MotionVFX (for FCPX). These vary in their success depending on how much a person has moved their body and head or changed expressions at the edit point. If their position is largely unchanged, the morph can look flawless. The more the change, the more awkward the resulting transition can be. The alternative is to cover the edit with a cutaway b-roll shot, but that’s often not desirable if this happens the first time we see the person. Sometimes you just have to live with it and leave these imperfections alone.

Telling the story through sight and sound is what an editor does. Working with on camera interviews is often the closest an editor comes to being the writer, as well. But remember that mixing and matching soundbites can present nearly infinite possibilities. Don’t get caught in the trap, as so many do, of never finishing. Bring it to a point where the story is well-told and then move on. If the entire production is approached with some of these thoughts in mind, the end result can indeed be memorable.

©2015 Oliver Peters

PDFviewer for Premiere Pro


Small developers often create the coolest tools for editing. Such is the case with Primal Cuts and their PDFviewer extension for Premiere Pro CC. Ever find yourself shuffling between paper scripts and storyboards, while trying to edit? Or juggling between different apps on-screen to view electronic versions, while going back-and-forth to your NLE? That’s what PDFviewer solves for you.

Adobe has created a feature called extensions, which allows a developer to create a custom, dockable panel to perform certain functions right inside the application’s interface. TypeMonkey is one example of this for After Effects. The same interface feature is also available in Premiere Pro. Extensions developed for Adobe applications also have the benefit of being cross-platform compatible.

PDFviewer is an extension designed for Adobe Premiere Pro CC. Once installed, it’s accessible from the extensions pulldown menu. When you select it, PDFviewer opens as a floating interface panel that can then be docked anywhere in the interface. If you dock it, make sure to do so in all of your workspaces and save those configurations. That way, if you have a file open, it will stay open as you jump between different layouts.

Any PDF file can be opened in PDFviewer, including scripts, storyboards, and other documents. If you work in scripted long-form productions, then check if the script supervisor is using ScriptE Systems products. These are ideal for generating numerous electronic versions of common filming documents, including shot logs and lined scripts. However, any PDF works, including manually scanned PDFs of handwritten reports and lined scripts. Simply open up the lined script in the PDFviewer panel and now you have it right there within Premiere Pro. It’s not exactly the same as Avid’s Script Integration tools in Media Composer, but it’s the next best thing to it.

PDFviewer lets you open multiple PDFs by clicking the “+” icon and adding another file. Multiple PDFs are accessible as tabs across the top of the PDFviewer window. It also includes a “hand” tool to easily scroll and pan within larger documents. Search is another great feature, which is perfect for working with transcripts. Search terms will be highlighted throughout the document. You can also copy-and-paste text from within PDFviewer to any metadata field in Premiere Pro.

Primal Cuts’ PDFviewer is a straightforward tool that every Premiere Pro editor will find to be a handy addition to their toolkit. At $10, the price is hard to pass up, simply based on the convenience of not shuffling more paper on your desk.

©2015 Oliver Peters

Fear the Walking Dead


When the AMC cable network decided to amp up the zombie genre with The Walking Dead series, it resulted in a huge hit. Building upon that success, they’ve created a new series that could be viewed as a companion story, albeit without any overlapping characters. Fear the Walking Dead is a new, six-episode series that starts season one on August 23. The story takes place across the country in Los Angeles and chronologically just before the outbreak in the original series. The Walking Dead was based on Robert Kirkman’s graphic novels of the same name and he has been involved in both versions as executive producer.

Unlike the original series, which was shot on 16mm film, Fear the Walking Dead is being shot digitally with ARRI ALEXA cameras and anamorphic lenses. That’s in an effort to separate the two visual styles, while maintaining a cinematic quality to the new series. I recently spoke with Tad Dennis, the editor of two of the six episodes in season one, about the production.

Tad Dennis started his editing career as an assistant editor on reality TV shows. He says, “I started in reality TV and then got the bump-up to full-time editing (Extreme Makeover: Home Edition, America’s Next Top Model, The Voice). However, I realized my passion was elsewhere and made the shift to scripted television. I started there again as an assistant and then was bumped back up to editing (Fairly Legal, Manhattan, Parenthood). Both types of shows really do have a different workflow, so when I shifted to scripted TV, it was good to start back as an assistant. That let me be very grounded in the process.”

Creating a new show with a shared concept

Dennis started with these thoughts on the new show, “We think of this series as more of a companion show to the other and not necessarily a spin-off or prequel. The producers went with different cameras and lenses for a singular visual aesthetic, which affects the style. In trying to make it more ‘cinematic’, I tend to linger on wider shots and make more selective use of tight facial close-ups. However, the material really has to dictate the cut.”

Three editors and three assistant editors work on the Fear the Walking Dead series, with each editor/assistant team cutting two of the six shows of season one. They are all working on Avid Media Composer systems connected to an Avid ISIS shared storage solution. Scenes were shot in both Vancouver and in Los Angeles, but the editing teams were based in Los Angeles. ALEXA camera media was sent to Encore Vancouver and Encore Hollywood, depending on the shooting location. Encore staff synced sound and provided the editors with Avid DNxHD editorial media. The final color correction, conform, and finishing was also handled at Encore Hollywood.

Dennis described how post on this show differed from other network shows he’s worked on in the past. He says, “With this series, everything was shot and locked for the whole season by the first airdate. On other series, the first few shows will be locked, but then for the rest of the season, it’s a regular schedule of locking a new show each week until the end of the season. This first season was shot in two chunks for all six episodes – the Vancouver settings and then the Los Angeles scenes. We posted everything for the Vancouver scenes and left holes for the LA parts. The shows went all the way through director cuts, producer cuts, and network notes with these missing sections. Then when the LA portions came in, those scenes were edited and incorporated. This process was driven by the schedule. Although we didn’t have the pressure of a weekly airdate, the schedule was definitely tight.” Each of the editors had approximately three to four days to complete their cut of an episode after receiving the last footage. Then the directors got another four days for a director’s cut.

Often films and television shows go through adjustments as they move from script to actual production and ultimately the edit. Dennis feels this is more true of the first few shows in a new series than with an established series. He explains, “With a new series, you are still trying to establish the style. Often you’ll rethink things in the edit. As I went through the scenes, performances that were coming across as too ‘light’ had to be given more ‘weight’. In our story, the world is falling apart and we wanted every character to feel that all the way throughout the show. If a performance didn’t convey a sense of that, then I’d make changes in the takes used or mix takes, where picture might be better on one and audio better on the other.”

Structure and polish in post

In spite of the tight schedule, the editors still had to deal with a wealth of footage. Typical of most hour-long dramas, Fear the Walking Dead is shot with two or three cameras. For very specific moments, the director would have some of the footage shot at 48fps. In those cases, where cameras ran at different speeds, Dennis would treat these as separate clips. When cameras ran at the same speed (for example, at 24fps for sync sound), such as in dialogue scenes, Susan Vinci (assistant editor) would group the clips as multicam clips. He explains, “The director really determines the quality of the coverage. I’d often get really necessary options on both cameras that weren’t duplicated otherwise. So for these shows, it helped. Typically this meant three to four hours of raw footage each day. My routine is to first review the multicam clips in a split view. This gives me a sense of what the coverage is that I have for the scene. Then I’ll go back and review each take separately to judge performance.”

Dennis feels that sound is critical to his creative editing process. He continues, “Sound is very important to the world of Fear the Walking Dead. Certain characters have a soundscape that’s always associated with them and these decisions are all driven by editorial. The producers want to hear a rough cut that’s as close to airable as possible, so I spend a lot of time with sound design. Given the tight schedule on this show, I would hand off a lot of this to my long-time assistant, Susan. The sound design that we do in the edit becomes a template for our sound designer. He takes that, plus our spotting notes, and replaces, improves, and enhances the work we’ve done. The show’s music composer also supplied us with a temp library of past music he’d composed for other productions. We were able to use these as part of our template. Of course, he would provide the final score customized to the episode. This score would be based on our template, the feelings of the director, and of course the composer’s own input for what best suited each show.”

Dennis is an unabashed Avid Media Composer proponent. He says, “Over the past few years, the manufacturers have pushed to consolidate many tools from different applications. Avid has added a number of Pro Tools features into Media Composer and that’s been really good for editors. There are many tools I rely on, such as those audio tools. I use the AudioSuite and RTAS filters in all of my editing. I like dialogue to sound as it would in a live environment, so I’ll use the reverb filters. In some cases, I’ll pitch-shift audio a bit lower. Other tools I’ll use include speed-ramping and invisible split-screens, but the trim tool is what defines the system for me. When I’m refining a cut, the trim tool is like playing a precise instrument, not just using a piece of software.”

Dennis offered these parting suggestions for young editors starting out. “If you want to work in film and television editing, learn Media Composer inside and out. The dominant tool might be Final Cut or Premiere Pro in some markets, but here in Hollywood, it’s largely Avid. Spend as much time as possible learning the system, because it’s the most in-demand tool for our craft.”

Originally written for Digital Video magazine / CreativePlanetNetwork

©2015 Oliver Peters

Automatic Duck Redux


Automatic Duck invented timeline translations between applications. Necessity is the mother of invention, leading Wes Plate, an Avid Media Composer editor who tackled compositing in Adobe After Effects, to team with his programmer father, Harry. The goal was to design a tool to get Avid timelines into After Effects compositions. Automatic Duck grew from this beginning to create a series of translation products that let editors seamlessly move timelines between a number of different hosts, including Media Composer, Pro Tools, After Effects, and Apple Final Cut Pro “classic”.

Four years ago, Adobe licensed the IP for the original Automatic Duck Pro Import products and brought the father-and-son team on board to develop tools for Adobe. Now they are back on their own and have decided to reboot Automatic Duck, which has been mothballed for the past four years. Seeing an opportunity in Apple’s Final Cut Pro X, the company has developed Ximport AE, a timeline translation tool to bring Final Cut Pro X projects (edited sequences) into After Effects. The team is no stranger to Final Cut Pro X’s new FCPXML format, since it was the first developer to create a companion utility that translated Final Cut Pro X 10.0 projects into Pro Tools sessions.

Knowing the market

First, let’s define the market. Who is Automatic Duck Ximport AE for? Editors who do most of their heavy lifting in Media Composer, Final Cut, or Premiere Pro might not see the attraction. On the flip side, though, there are quite a few editors for whom After Effects is the tool of choice for all effects and even finishing. For this group, the NLE is where they spend the least amount of time. They use an editing application for shot selection and assembly and then go straight to After Effects for everything else.

If you are a motion graphics designer who relies on After Effects, then your occasional need for an NLE might be best served by FCP X. The interface is fast and easy to master, compared with more traditional track-based edit software. Finally, if you are a dedicated FCP X editor, you no longer have a “send to Motion” function as in the old Final Cut Studio. This means you can’t send more than a single shot to Motion for treatment. Besides, After Effects may still be your preferred motion graphics application. Take all of these points into consideration and you’ll see that there’s a clear need to get a project from FCP X into After Effects – the industry’s dominant motion graphics application.

How it works

Automatic Duck Ximport AE is designed as a plug-in that’s installed into After Effects, including CS6 up through the current CC2015 version (and beyond). There are several other competing translation tools on the market, which convert between flavors of XML or from FCPXML into AE Scripts. Automatic Duck is the only one that integrates directly into the After Effects import menu. Ximport AE cuts out one middle step in the process and should provide for a more complete translation from FCP X into After Effects.

I’ve been beta testing the product for a few months and it certainly hits the mark for serious users. The steps are simple. Just cut your sequence in Final Cut Pro X and then export an FCPXML for that project (sequence). When you open After Effects, select File > Import > Automatic Duck Ximport AE. This opens a dialogue box with a few settings and it’s where you navigate to the correct FCPXML file. Settings include whether to let your clips cascade up or down in the After Effects timeline, as well as an option to create pre-comps from Final Cut’s secondary storylines. The question mark icon also launches the user guide.

In the timelines I’ve tested, the translation is quite good. Compound clips are packaged as pre-comps. The active angle of Multicam clips and the selected pick of Audition clips are translated; alternate angles aren’t. Generally, transform, crop, opacity, and blend functions are supported, as are audio and video keyframes. A number of third party filters are accurately translated between applications, assuming that the same filter is installed into each host. At launch, these include selected plug-ins from Boris FX, Digital Anarchy, Noise Industries/FxFactory, PHYX, Red Giant, and Yanobox. Check the user guide for a detailed list with specific filters.

Some caveats

It’s worth noting, however, that just about none of the built-in FCP X filters are translated into an equivalent filter in After Effects. For example, the Color Board metadata is included in the FCPXML, but there’s no way to read that info on the After Effects side. This is true even when there are filters that appear to be the same. For example, both hosts include a native Gaussian blur filter, yet that doesn’t get translated. On the other hand, if you apply a Flipped filter in FCP X, it will be correctly translated into the -100 transform scale value in After Effects. So again, read the user guide and do a little experimentation to see what works and what doesn’t in your projects. Whenever an effect is not supported, a note is made in the companion HTML file created at import. A marker is also placed on that clip in the After Effects timeline, naming the missing plug-in.

I tested a number of supported third-party products, staying mainly within the Red Giant family. Translation was good between the Magic Bullet tools, but not without issue. For example, Universe ToonIt Expressionist Noise was available in both hosts, yet the effect was not applied in the After Effects composition. That’s because at the time I tested this using a beta build, that specific Universe filter had not been included. This has since been corrected. Other effects, like Looks, Colorista III, Mojo, Universe Glow, and others worked flawlessly. According to Wes Plate, the plug-in has been architected in a way to easily add support for new effects plug-ins. The bottom line is that if you stay within the supported features, you will get the richest translation experience from FCP X into After Effects that’s currently available in the market.

Automatic Duck Media Copy 4.0

Along with Ximport AE, the company will also introduce Automatic Duck Media Copy 4.0. The original Media Copy grew out of the need to collect, copy, and move sequences and their associated media. The original version worked for Avid Media Composer and Apple Final Cut Pro “classic” sequences. It would read either the AAF or XML file and copy all associated media, plus the timeline edit info. This new folder could then be moved to another system for more editing or used as a back-up archive. Media Copy 4.0 has been updated to add FCPXML support. As before, it collects media and timeline files for use elsewhere. It does not trim or transcode the media, but you have the choice to copy media all into a single folder or to maintain a folder hierarchy matching the original paths within the newly created location. Media Copy works well as a standalone application or as a companion to Ximport AE. It supports Avid Media Composer, Final Cut Pro X, and Final Cut Pro 6/7.

With the reboot of Automatic Duck, they’ve decided to partner with Red Giant Software to provide marketing, sales, and customer support. Red Giant will offer Automatic Duck Ximport AE for $199 and Media Copy 4.0 for $99. If you still have need for Automatic Duck’s legacy products, the company is posting them again on their own website for free, with an optional “donate” button. These include Pro Import FCP, Pro Export FCP (for FCP 7 users), and Pro Import AE (for importing AAF and XML into AE CS 5.5 or earlier).

Regardless of which NLE you use, I’ve found Media Copy to be an essential tool, whether or not you work with effects or motion graphics. It’s great to see Automatic Duck update it, as well as launch their next great product, Ximport AE. Adobe After Effects will continue to be the ubiquitous compositing and motion graphics choice for most editors, so this marriage between Final Cut Pro X and After Effects makes great sense.

For more, here’s a good interview with Wes Plate at Red Shark News.

©2015 Oliver Peters

Blackmagic Design DaVinci Resolve 12

The industry has been eager to check out Blackmagic Design’s DaVinci Resolve 12. This “first look” is based on the initial build of the Resolve 12 public beta. A number of functions have not yet been enabled, so expect to see some changes in the product by the time you read this.

As with any public beta, the point is to get feedback and reap the benefit of crowdsourced quality testing, so be careful about using it on real jobs. That being said, so far I’ve found the public beta builds to be reasonably stable. I’ve had a chance to test the application on several different machines, including two 2009-2010 Mac Pro towers and a new 15” Retina MacBook Pro. Testing included a Sapphire 7950 and an Nvidia Quadro 4000 GPU, as well as the built-in Nvidia card on the laptop.

Blackmagic is no longer using the “Lite” name to identify the free version. The branding is now DaVinci Resolve 12 (free) and DaVinci Resolve 12 Studio ($995). The free version includes the majority of features and is limited to an output no larger than UltraHD 4K. The paid version adds advanced features, including stereoscopic functions, networked collaboration between users, multiple GPU support, and the ability to output at larger than UltraHD 4K frame sizes.

Blackmagic Design hardware products are required to output an analog or digital signal to an external video monitor or tape deck. If you are comfortable making color judgements based on the viewer image, then no hardware is required for operation and rendering. You can also hot rod your system with the DaVinci Control Surface ($29,995) or a number of supported third-party surfaces that are less costly.

Refreshing the user interface

DaVinci Resolve 12 ushers in a fresh user interface. Previous versions mimicked the style of Apple Final Cut Pro X, but the new UI is flatter with thinner fonts. It takes on the trendy design aesthetic employed in Windows 8/10 and Mac OS X/iOS. The background colors are a lighter grey with a faint blue cast to them. Although pleasing, I find that last part strange for a color correction application, where a true grey is considered the norm.

The interface has been optimized for single and dual-monitor systems, as well as higher-density displays, like Apple’s Retina. Resolve 12 is divided into four modes or pages: Media, Edit, Color, and Deliver. Software control panels can be opened or closed as needed, including videoscopes, media storage locations, mixers, audio meters, inspector, effects, and more. There are some interesting options to control whether or not a panel or window runs the full horizontal or vertical length of your display. However, there is no way to create a custom workspace by docking panels in different places and then saving that as your personal layout. Interface colors also can’t be personalized.

As before, timelines support sources with mixed formats and frame rates; however, the base timeline setting must match that of the project. This means you cannot have a 720p/59.94 and a 1080i/29.97 timeline within the same project. You can’t have multiple timelines open, but it’s easy to access different timelines in the same project quickly. You can also cut one timeline into another as a nested sequence. Such nests (as well as compound clips) can be decomposed in the timeline, leaving the original source clips to work with.

Resolve 12 no longer includes a separate section in the UI for timelines, as these are placed together with the source media in the Media Pool. One simple solution is to create a Bin for your edits and manually drag the timelines you’ve created into that Bin. Another option is to filter timelines into a Smart Bin by including some common element in the name. For example, you could append “seq” (for sequence) to the end of the name of each timeline. Set your filtering criteria to names that contain “seq” and then timelines will automatically show up in the Smart Bin that you’ve created for timelines.

Editing with Resolve 12

As a nonlinear editing application (NLE), Resolve 12 is an interesting mash-up of several other NLEs, including Premiere Pro, FCP 7 and FCP X. There are new features clearly intended for editors, including multi-camera editing. You can now organize clips and timelines into custom bins, add metadata, assign sortable color flags and other metadata values, and automatically filter clips into Smart Bins. You can sync grouped clips (double-system sound) and multi-camera clips using in-points, timecode, or audio waveforms. The multi-cam editing routine is similar to other NLEs, where you drop a multi-cam clip onto your main timeline and then cut between camera angles.

Blackmagic placed a lot of attention on timeline trim functions. It’s now possible to do some very elaborate asymmetrical trims of multiple clips. Slip/slide trimming and split audio is all very easy and fluid. There is no trim window, so on-the-fly JKL trimming – a la Media Composer – isn’t possible. When you trim via the mouse or keyboard, you get a 2-up preview in the viewer and a 4-up display when slipping and sliding clips. You can access a curve editor in the timeline for transitions, which lets you control the transition acceleration. When you select source clips in the list view mode of the browser, you get a skimmable filmstrip of the selected clip, much like in FCP X.

Video effects are still based on OpenFX, so any third-party filters and transitions that offer OFX host support (FilmConvert, BorisFX, NewBlueFX, etc.) will show up in either the Edit page effects palette or the Color page, depending on whether the filter is something that requires a color correction node in order to be applied. Blackmagic also includes its own toolbox of effects and transitions, including the new Smoothcut transition. This is a morphing dissolve designed to smooth jump cuts between edited soundbites from on-camera interviews. It is similar to Adobe’s Morph Cut or Avid’s FluidMorph, but seems to rely more heavily on GPU processing. Therefore, you don’t have to wait until a lengthy analysis pass is completed before you can review the results. As with all of these effects, real-world results vary with how closely the alignment is on both sides of the cut. It tends to work best with a duration of two to four frames.

Audio went through big changes in Resolve 12 to improve performance and to add features. VST and AU plug-ins are supported. Any that are installed on your system will show up in the audio effects palette. Effects can be applied to clips or tracks and there’s automation-style track mixing. The way audio tracks are implemented seems confusing to me – especially audio track patching. Tracks can be mono, stereo, 5.1, or adaptive, but there’s no indication in the timeline window as to what type of track it is. When you edit a multi-cam clip to the timeline and the source audio contains several channels, then it is no longer possible to break those clips apart or access individual channels from the timeline. Both Adobe and Apple use similar methods, but each with a better implementation. Like in Premiere Pro, it is best to start out by properly setting the source audio channel configuration in the clip properties menu for each clip. You can access this in the Media page.

Other improvements

DaVinci Resolve 12 is not only about editing. Since Resolve is used a lot as a DIT tool to generate dailies, there’s a new capability in the Media page to apply color space changes and camera LUTs to a group of clips. If you shot log-encoded footage and apply a Rec709 LUT on the Media page, you’ll now see the corrected color throughout. The downside is that such LUTs are not visible on the Color page and can’t be removed in any of the color adjustment nodes.

The new blue and greenscreen 3D keyer is accessible on the Color page. It yields high-quality results and is aided by new, matte finesse controls, plus Resolve’s great masking and tracking capabilities. There’s also improved ACES support, better shot-matching between clips, and more.

Resolve 12 uses a central database to house all project files. This makes it harder to move files between users than with other NLEs. Previous versions let you export Resolve projects to move them to other systems, but now Resolve 12 adds copy, move, transcode, relink, and consolidate functions. Support for FCPXML (for projects offline-edited using FCP X) has been updated to the newest version of this format.

There had been a bug in how Resolve wrote FCPXML files, so going back into FCP X from Resolve exhibited relinking issues. This only occurred when importing on a different machine than where the files were generated. This bug appears to have been fixed in version 3 of the public beta build.

To include another tool for editors, Blackmagic added an AAF export to Pro Tools feature. I don’t have Pro Tools, so I wasn’t able to test the Pro Tools export properly. All audio clips are exported in .MXF format, which means many applications can’t play the audio. For example, when I imported the AAF into Apple Logic Pro X, the track sheet was blank. I have been able to send audio from Final Cut Pro X into Logic Pro X using X2Pro Audio Convert to create an AAF.


Real-time media performance is critical to a good editing experience. Resolve 12 is optimized for hardware using the PCIe 3.0 bus, which supports greater bandwidth. Older Mac Pro towers or Windows computers that use PCIe 2.0 are going to be challenged when loaded with PCIe cards. You see this mainly in the Edit page, because more things are going on in the interface on that page. Windows users with the newest hardware and Mac users who own new “trash can” Mac Pros will most likely have a better editing experience than owners of legacy machines.

I experienced choppy video being displayed in the viewer of the Edit page, even though output through the Decklink was fine. Ironically, viewer and video output were smooth on the other pages. After consulting with Blackmagic, the following recommendations gave me the performance I would expect out of an NLE: run in the single-screen layout, close the audio mixer panel, close the audio meters, and/or switch the video monitoring setting to 8-bit. Of these, the mixer suggestion made the biggest difference. The ability to create on-the-fly, low-resolution proxies for editing wasn’t enabled with the first few builds of the public beta. It was turned on in build three. This gives you similar results to that of other NLEs running in a half-resolution, quarter-resolution or “dynamic real-time” mode.

One common mistake that I see users make, when I read some of the internet forum posts, is that they load up the timeline clips with color correction nodes and still expect real-time editing performance. Physics hasn’t changed. Adding effects and color correction to clips is going to negatively impact playback. As a general rule, get all of your editing done first and then save your color correction until last. You’ll be a lot happier.

Final thoughts

Once the official Resolve 12 release rolls out, we’ll see where it finds a place as an editor. This release won’t sway editors who are currently happy with one of the other popular NLEs to switch to Resolve 12 as their main axe. However, I suspect it will increasingly become the finishing tool of choice – probably edging out Autodesk Smoke over time. Now that the editing tools and performance are there, it becomes the ideal application for final edit revisions, grading, and mastering. It can already combine lists and media from a range of creative editing systems.

The other element in this equation is Fusion, the node-based compositing application they picked up from EyeOn. There’s already a plug-in connecting it to NLE timelines, something Avid editors have enjoyed. With a bit more development time, I could clearly see some integration between Resolve and Fusion. That might be why “Studio” is now part of the name change. Hmmm…

When Resolve 11 came out, it, too, was touted as an editor. My critical assessment was that it was a grading tool that could be used as an editor, but you wouldn’t want to. With Resolve 12, Blackmagic has produced an application that is both grading tool and an editor. I could easily see myself using it as my secondary NLE. There is certainly great synergy between Final Cut Pro X and Resolve. Why not have both in your arsenal?

The enticement of a free editing application to many new users is hard to resist. Not to mention that it is cross-platform and unfettered by a software subscription business model. Clearly the development pace by Blackmagic Design since they acquired the product has been impressive. This makes me believe that Resolve will find a new audience willing to use it as their primary creative tool for start-to-finish post production.

Click here for a look back at Resolve 11, which will give you an additional insight into some of Resolve’s feature set.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Final Cut Pro X Organizing Tips


Every nonlinear editing application is a database; however, Apple took this concept to a higher level when it launched Final Cut Pro X. One of its true strengths is making the organization of a mountain of media easier than ever. To get the best experience, an editor should approach the session holistically and not simply rely on FCP X to do all the heavy lifting.

At the start of every new production, I set up and work within a specific folder structure. You can use an application like Post Haste to create a folder layout, pick up some templates online, like those from FDPTraining, or simply create your own template. No matter how you get there, the main folder for that production should include subfolders for camera files, audio, graphics, other media, production documents, and projects. This last folder would include your FCP X library, as well as others, like any After Effects or Motion projects. The objective is to end up with everything that you accrue for this production in a single master folder that you can archive upon completion.
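
If you’d rather script the template than use an app, a few lines will do it. Here’s a minimal Python sketch; the subfolder names simply mirror the layout described above and can be adjusted to taste:

```python
import os

SUBFOLDERS = [
    "camera_files",
    "audio",
    "graphics",
    "other_media",
    "documents",
    "projects",   # FCP X Library, After Effects / Motion projects, etc.
]

def create_production(root, name):
    """Create a master folder for a new production with standard subfolders."""
    base = os.path.join(root, name)
    for sub in SUBFOLDERS:
        os.makedirs(os.path.join(base, sub), exist_ok=True)
    return base

# Example (hypothetical path): create_production("/Volumes/Media", "acme_testimonials")
```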

FCP X Libraries

It helps to understand the Final Cut Pro X Library structure. The Library is what would otherwise be called a project file, but in FCP X terminology an edited sequence is referred to as a Project, while the main session/production file is the Library. Unlike previous versions and other NLEs, the Library is not a closed, binary data file. It is a package file that you can open and peruse, by right-clicking the Library icon and using the “show package contents” command. In there you will find various binary files (labeled CurrentVersion.fcpevent) along with a number of media folders. This structure is similar to the way Avid Media Composer project folders are organized on a hard drive. Since FCP X allows you to store imported, proxy, transcoded, and rendered media within the Library package, the media folders can be filled with actual media used for this production. When you pick this option your Library will grow quite large, but is completely under the application’s control, thus making the media management quite robust.

Another option is to leave media files in their place. When this is selected the Library package’s media folders will contain aliases or shortcut links to the actual media files. These media files are located in one or more folders on your hard drive. In this case, your Library file will stay small and is easier to transfer between systems, since the actual audio and video files are externally located. I suggest spreading things out. For example, I’ll create my Library on one drive, the location of the autosaved back-up files on another, and my media on a third. This has the advantage of no single point of failure. If the Library files are located on a drive that is backed up via Time Machine or some other system-wide “cloud” back-up utility, you have even more redundancy and protection.

Following this practice, I typically do not place the Library file in the projects folder for the production, unless this is a RAID-5 (or better) drive array. If I don’t save it there during actual editing, then it is imperative to copy the Library into the project folder for archiving. The rub is that the package contains aliases, which certain software – particularly LTO back-up software – does not like. My recommendation is to create a compressed archive (.zip) file for every project file (FCP X Library, AE project, Premiere Pro project, etc.) prior to the final archiving of that production. This will prevent conflicts caused by these aliases.
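
This zipping step is easy to automate. A minimal Python sketch (the library path below is hypothetical):

```python
import os
import shutil

# Wrap an FCP X Library package in a .zip before sending the production
# to LTO, so aliases inside the package don't trip up back-up software.
library = "/Volumes/Media/acme_testimonials/projects/acme.fcpbundle"
shutil.make_archive(
    library, "zip",
    root_dir=os.path.dirname(library),
    base_dir=os.path.basename(library),
)
# Creates acme.fcpbundle.zip alongside the original Library.
```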

If you have set up a method of organization that saves Libraries into different folders for each production, it is still possible to have a single folder, which shows you all the Libraries on your drives. To do this, create a Smart Folder in the Finder and set up the criteria to filter for FCP X Libraries. Any Library will automatically be filtered into this folder with a shortcut. Clicking on any of these files will launch FCP X and open to that Library.

Getting started

The first level of organization is getting everything into the appropriate folders on the hard drive. Camera files are usually organized by shoot date/location, camera, and card/reel/roll. Mac OS lets you label files with color-coded Finder tags, which enables another level of organization for the editor. As an example, you might have three different on-camera speakers in a production. You could label clips for each with a colored tag. Another example might be to label all “circle takes” picked by the director in the field with a tag.

The next step is to create a new FCP X Library. This is the equivalent of the FCP 7 project file. Typically you would use a single Library for an entire production, however, FCP X permits you to work with multiple open Libraries, just like you could have multiple projects open in FCP 7. In order to set up all external folder locations within FCP X, highlight the Library name and then in the Inspector panel choose “modify settings” for the storage locations listed in the Library Properties panel. Here you can designate whether media goes directly into the internal folders of the Library package or to other target folders that you assign. This step is similar to setting the Capture Scratch locations in FCP 7.

How to organize clips within FCP X

Final Cut Pro X organizes master source clips on three levels – Events, Keyword Collections, and Smart Collections. These are roughly equivalent to Bins in other NLEs, but don’t completely work in the same fashion. When clips are imported, they will go into a specific Event, which is closest in function to a Bin. It’s best to keep the number of Events low, since Keyword Collections work within an Event and not across multiple Events. I normally create individual Events for edited sequences, camera footage, audio, graphics, and a few more categories. Clips within an Event can be grouped in the browser display in different ways, such as by import date. This can be useful when you want to quickly find the last few files imported in a production that spans many days. Most of the time I set grouping and sorting to “none”.

To organize clips within an Event, use Keywords. Setting a Keyword for a clip – or a range within a clip – is comparable to creating subclips in FCP 7. When you add a Keyword, that clip or range will automatically be sorted into a Keyword Collection with a matching name. Keywords can be assigned to keyboard hot keys, which creates a very quick way to go through every clip and assign it into a variety of Keyword Collections. Clips can be assigned to more than one Collection. Again, this is equivalent to creating subclips and placing them into separate Bins.

On one set of commercials featuring company employees, I created Keyword Collections for each person, department, shoot date, store location, employees, managers, and general b-roll footage. This made it easy to derive spots that featured a diverse range of speakers. It also made it easy to locate a specific clip that the director or client might ask for, based on “I think Mary said that” or “It was shot at the Kansas City store”. Keyword Collections can be placed into folders. Collections for people by name went into one folder, Collections by location into another, and so on.

The beauty of Final Cut Pro X is that it works in tandem with any organization you’ve done in the Finder. If you spent the time to move clips into specific folders or you assigned color-coded Finder tags, then this information can be used when importing clips into FCP X. The import dialogue gives you the option to “leave files in place” and to use Finder folders and tags to automatically create corresponding Keyword Collections. Camera files that were organized into camera/date/card folders will automatically be placed into Keyword Collections that are organized in the same fashion. If you assigned clips with Mary, John, and Joe to have red, blue, and green tags for each person, then you’ll end up with those clips automatically placed into Keyword Collections named red, blue, and green. Once imported, simply rename the red, blue, and green Collections to Mary, John, and Joe.


The third level of clip organization is Smart Collections. Use these to automatically filter clips based on the criteria that you set. With the release of FCP X version 10.2, Smart Collections have been moved from the Event level (10.1.4 or earlier) to the Library level – meaning that filtering can occur across multiple Events within the Library. By default, new Libraries are created with several preset Smart Collections that can be used, deleted, or modified. Here’s an example of how to use these. When you sync double-system sound clips or multiple cameras, new grouped clips are created – Synchronized Clips and Multicam Clips. These will appear in the Event along with all the other source files, which can be unwieldy. To focus strictly on these new grouped clips, create a Smart Collection with the criteria set by type to include these two categories. Then, as new grouped clips are created, they will automatically be filtered into this Smart Collection, thus reducing clutter for the editor.

Playing nice with others

Final Cut Pro X was designed around a new paradigm, so it tends to live in its own world. Most professional editors need a higher level of interoperability with other applications and with outside vendors. For these functions, you'll need to turn to third-party applications from a handful of developers that have focused on workflow productivity utilities for FCP X. These include Intelligent Assistance/Assisted Editing, XMiL, Spherico, Marquis Broadcast, and Thomas Szabo. Their utilities make it possible to go between FCP X and the outside world, through list formats like AAF, EDL, and XML.

Final Cut's only form of decision list exchange is FCPXML, a data format distinctly different from other forms of XML. Apple Logic Pro X, Blackmagic Design DaVinci Resolve and Autodesk Smoke can read it. Everything else requires a translated file, and that's where these independent developers come in. Once you use an application like XtoCC (formerly Xto7) from Intelligent Assistance to convert FCPXML into XML for an edited sequence, other possibilities open up. The translated XML file can now be brought into Adobe Premiere Pro or FCP 7. Or you can use other tools designed for FCP 7. For instance, I needed to generate a print-out of markers with comments and thumbnail images from a film, in order to hand off notes to the visual effects company. By bringing a converted XML file into Digital Heaven's Final Print – originally designed with only the older Final Cut in mind – the task became doable.
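
The upside of FCPXML being plain XML is that even without a dedicated utility, useful data can be pulled from it directly. As a hedged sketch (the element and attribute names follow typical sequence exports and should be verified against your own files), here is how a simple marker list might be extracted in Python, including a conversion of FCPXML's rational time values into seconds:

```python
# Sketch: dump markers (position, name, note) from an exported sequence.
# FCPXML stores times as rational values like "7200/2400s".
# Element/attribute names are assumptions based on typical exports.
import xml.etree.ElementTree as ET
from fractions import Fraction

def fcpx_seconds(t):
    t = t.rstrip("s")
    return float(Fraction(t)) if "/" in t else float(t)

tree = ET.parse("Sequence.fcpxml")  # hypothetical file name

for marker in tree.iter("marker"):
    print(fcpx_seconds(marker.get("start")), marker.get("value"), marker.get("note", ""))
```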

Thomas Szabo has concentrated on some of the media functions that are still lacking in FCP X. Need to get to After Effects or Nuke? The ClipExporter and ClipExporter2 applications fit the bill. His newest tool is PrimariesExporter. This utility uses FCPXML to enable batch exports of clips from a timeline, a series of single-frame exports based on markers, or a list of clip metadata. Intelligent Assistance offers the widest set of tools for FCP X, including Producer's Best Friend, which lets editors create a range of reports needed on most major jobs, delivered in spreadsheet format.

Understanding the thought processes behind FCP X and learning to use its powerful relational database will get you through complex projects in record time. Better yet, it gives you the confidence to know that no editorial stone was left unturned. For more information on advanced workflows and organization with Final Cut Pro X, check out FCPworks, MacBreak Studio (hosted by Pixel Corps), Larry Jordan, and Ripple Training.

For those who want to know more about the nuts and bolts of the post production workflow for feature films, check out Mike Matzdorff's "Final Cut Pro X: Pro Workflow", an iBook that offers a step-by-step advanced guide based on the lessons learned on Focus.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

A Deeper Dive into Lumetri Color


With the introduction of Premiere Pro CC 2015, Adobe altered how color correction can be handled within its editing application. The addition of the Lumetri Color effect puts a very powerful and intuitive color correction tool at the editor’s fingertips. I touched on some of its capabilities with SpeedGrade look files in a previous post, but now I’d like to dive into a deeper explanation of the features of Lumetri Color.

Previously in Premiere Pro CC 2014, the Lumetri effect was the conduit between grades in SpeedGrade and Premiere Pro. When you sent a sequence to SpeedGrade CC via Direct Link, the correction done there would show up back in Premiere Pro CC as a self-contained Lumetri effect applied to the clip or an adjustment layer. You could add more effects to the clip, but not edit the Lumetri effect itself in Premiere Pro. If you bounced back into SpeedGrade, then you had further edit control to change the settings from the earlier SpeedGrade session.

Now in Premiere Pro CC 2015, that previous method has been altered. When a Lumetri Color effect is added in the Premiere Pro CC timeline, it is no longer editable once you send the sequence to SpeedGrade CC via Direct Link. Any grading added in SpeedGrade is in addition to the Lumetri Color effect. When you go back to Premiere Pro, those corrections show up as a SpeedGrade Custom group at the bottom of the Lumetri Color effect stack. It is a separate, self-contained, uneditable correction applied to the clip; it can only be disabled, if desired. In other words, Lumetri Color adjustments made in Premiere Pro are separate and apart from any color corrections done in SpeedGrade.

You can apply a Lumetri Color effect in two ways. The first, traditional way is to drag-and-drop the filter from the Effects palette (Color Correction folder) onto the clip or adjustment layer. The new CC 2015 way is to select the Color workspace, which automatically reveals the Lumetri Color panel and the new, real-time Lumetri scopes. If you change any setting in the panel, it immediately applies a Lumetri Color effect to that clip. Color corrections can be made either in the Lumetri Color panel or in the standard Effect Controls panel. If you don't like the Lumetri Color effect or panel, you can still use the other color correction filters, like the Three-Way Color Corrector, Luma Curve, etc. These options have not been removed.

Master Clip Effects

Since CC 2014, Premiere Pro has enabled Master Clip effects. These are source-side settings and any change made as a Master Clip effect will affect all instances of that clip throughout the timeline. This is important with camera raw files, like CinemaDNG or REDCODE raw, because there are color metadata adjustments that can be made at the point where the raw image is encoded into RGB video. This is in addition to any color corrections made in the Lumetri Color panel, another filter, or in SpeedGrade. Previously these controls were accessed as a right-click contextual menu option called Source Settings.

With CC 2015, source settings adjustments have been moved to the Effect Controls panel. At the top of the panel you'll see the clip name appear twice – once as the master clip (left) and once in the sequence (right). The sequence portion has all the usual controls, like motion, opacity, time remapping, and any applied filters. The master clip portion shows all the source color controls. In the case of RED files, you'll find the full range of RED controls made available through their SDK. For CinemaDNG files, such as those from Blackmagic cameras, the options are limited to exposure, temperature and tint. You should make any necessary camera raw adjustments to these clips here, before applying Lumetri Color effects.

In addition to raw adjustments, Lumetri Color effects can also be applied as Master Clip effects and/or as timeline effects. The Lumetri Color panel also displays the clip name twice – master clip (left) and sequence clip (right). Generally you are going to make your corrections to sequence clips; however, some common settings, like adding a Log-to-Rec709 LUT, might be best done as a Master Clip effect. Just understand that adjustments in the Lumetri Color panel can be applied to either or both sides, but that Master Clip effects will automatically ripple to other instances of that same clip elsewhere on the timeline. When you make changes to the sequence side (right), you are only altering that one location on the timeline.
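
One way to picture the ripple behavior is as a shared reference: every timeline instance points back at the same master record, so a master-side change shows up everywhere. Here is a conceptual Python sketch of that idea (my own model, not Adobe's internals):

```python
# Conceptual model (not Adobe's internals) of why a Master Clip effect
# ripples: every timeline instance references the same master record.
master = {"name": "A001_C003", "master_effects": ["Log-to-Rec709 LUT"]}
cut_1 = {"source": master, "clip_effects": []}
cut_2 = {"source": master, "clip_effects": ["Lumetri: warm grade"]}

master["master_effects"].append("Lumetri: exposure +0.5")  # master-side change

for cut in (cut_1, cut_2):
    # Effective stack = shared master effects + this instance's own effects.
    print(cut["source"]["master_effects"] + cut["clip_effects"])
```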

The Lumetri Color Panel

The Lumetri Color panel is organized as a stack of five control groups – Basic Correction, Creative, Curves, Color Wheels and Vignette. The controls within each group are revealed when you click on that section. You can enable or disable a group, but you can't change the order of the stack, which flows from Basic out through Vignette. This control method and the types of controls offered are very similar to Adobe Lightroom's Develop page. Its control groups include Basic, Tone Curve, HSL/Color/B&W, Split Toning, Detail, Lens Corrections, Effects and Camera Calibration. There are more groups in Lightroom simply because there are more image attributes available to be adjusted within a still photograph.

Basic Correction 

The Basic Correction group is where you'll perform the majority of your primary color grading. It includes a pulldown for input LUTs (camera-specific color transforms), white balance, tone and saturation. White balance adjusts temperature and tint. Moving the temperature slider increases red while decreasing blue (or the reverse), with minimal change to green. Sliding tint alters red and blue together versus green.
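
A rough mental model makes this concrete. The following Python sketch is purely conceptual (it is not Adobe's actual math), but it captures the channel relationships just described:

```python
# Conceptual model only -- not Adobe's math. RGB values run 0.0-1.0.
def white_balance(r, g, b, temperature=0.0, tint=0.0):
    # Temperature trades red against blue; green is mostly untouched.
    r += temperature
    b -= temperature
    # Tint moves red and blue together, against green.
    r += tint
    b += tint
    g -= tint
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(r), clamp(g), clamp(b)

print(white_balance(0.5, 0.5, 0.5, temperature=0.1))  # warmer: more red, less blue
```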

Tone gives you control over the luminance of the image with sliders for exposure, contrast, highlights, shadows, whites and blacks. The white and black controls move the top and bottom ends of the image up or down toward the clip points, while the highlight and shadow sliders adjust the upper and lower portions of the image within the parameters set by the white and black sliders. The highlight and shadow sliders are what you use to reveal more or less detail within the bright or dark areas of the image.
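
Here too, a conceptual sketch (again, not Adobe's implementation) helps picture the division of labor: blacks and whites move the endpoints, while shadows and highlights redistribute detail between them:

```python
# Conceptual model only, not Adobe's math. Luminance y runs 0.0-1.0.
def tone(y, blacks=0.0, whites=0.0, shadows=0.0, highlights=0.0):
    lo, hi = blacks, 1.0 + whites                    # endpoints move toward clip points
    y = lo + y * (hi - lo)                           # remap luminance into the new range
    if y < 0.5:
        y += shadows * 4 * y * (0.5 - y)             # shadow detail; endpoints stay pinned
    else:
        y += highlights * 4 * (y - 0.5) * (1.0 - y)  # highlight detail; endpoints stay pinned
    return max(0.0, min(1.0, y))

print(tone(0.25, shadows=0.4))  # 0.35 -- more shadow detail, black point unchanged
```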

Creative

The Creative group is where stylistic adjustments are made, including the addition of creative "looks" (.look or .cube LUTs). There are sliders for the intensity of the LUT, plus adjustment controls for a faded film effect, sharpening, vibrance and saturation. Finally, there are shadow and highlight tint controls with a balance slider to change the crossover threshold between them.

The faded film slider raises the black level you've established for the image, creating elevated blacks without opening up any shadow detail. If you slide the control further to the right, it will also compress the highlights, thus creating an overall flatter image. The sharpen slider blurs or enhances detail in the image. Saturation uniformly increases the intensity of all chroma. Vibrance is a smart tool that increases the saturation of the more muted colors and has less effect on the already-intense colors. The highlight and shadow tint controls shift the color balance of those portions of the image towards any area on the color wheel. The tint balance slider changes how much of the image is considered to be the shadow or highlight range. For example, if you move the slider all the way to the left, then all of the image is affected by the highlight tint wheel only.
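
To make that behavior concrete, here are conceptual sketches of the faded film and vibrance sliders. The scaling is illustrative only, not Adobe's math:

```python
# Conceptual sketches with illustrative scaling -- not Adobe's math.
def faded_film(y, amount):
    lo = 0.2 * amount                        # black level rises, hiding no new detail
    hi = 1.0 - 0.2 * max(0.0, amount - 0.5)  # pushed far enough, highlights compress
    return lo + y * (hi - lo)

def vibrance(sat, amount):
    # Muted colors get a larger relative boost than already-intense ones.
    return min(1.0, sat * (1.0 + amount * (1.0 - sat)))

print(vibrance(0.2, 0.5), vibrance(0.9, 0.5))  # 0.28 vs 0.945
```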

Curves

The Curves group includes both standard RGB curves and a color wheel for control of the hue/saturation curve. The RGB curves offer four dots – white (overall control), plus red, green and blue for individual control over each of the R, G or B curves. The hue/sat curve is really a vector-based secondary color control and is akin to Lightroom's HSL group. However, in the Lumetri Color panel a wheel control is used.

If you select one of the six color vector dots under the hue/sat curve wheel, then three control points are added along the circular curve. The center point is the color chosen and the points to the left and right establish a boundary. Pull the center point up or down to increase or decrease the saturation of that curve. Pulling the point left or right doesn't change the hue of that color. Compared with the way other color correction tools operate, the wheel works as a "hue vs. sat" curve and not as a "hue vs. hue" curve. If I select red, I can increase or decrease the intensity of red, but pulling the control point towards orange or magenta doesn't shift the red within the image itself towards that hue. You can also select one or more points along the curve, without selecting a vector color first, and make more extensive adjustments to the image.
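
To make the distinction concrete, here is a conceptual Python sketch (not Adobe's implementation) of a saturation gain centered on a chosen hue, falling off toward the two boundary points, while the hue itself is left untouched:

```python
# Conceptual "hue vs. sat" curve -- not Adobe's implementation.
import colorsys

def hue_vs_sat(r, g, b, center_hue=0.0, width=1/12, gain=0.5):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Circular distance between this pixel's hue and the selected vector.
    d = min(abs(h - center_hue), 1.0 - abs(h - center_hue))
    if d < width:  # inside the two boundary points
        s = min(1.0, s * (1.0 + gain * (1.0 - d / width)))
    return colorsys.hsv_to_rgb(h, s, v)  # the hue itself is never shifted

print(hue_vs_sat(0.8, 0.2, 0.2))  # red gets more intense, hue unchanged
```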

Color Wheels 

Color Wheels is the next control group and it functions as a standard three-way corrector would. There are luma sliders and a color wheel each for shadows, midtones and highlights. Moving a color wheel control effectively adds a color wash to that portion of the image, rather than shifting the color balance. If you shift a wheel towards blue, the blue portion of the parade display on a scope is increased, but red and green are not lowered in a corresponding fashion. Therefore, these wheels act as secondary color controls, which explains why Adobe placed them further down in the stack.
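
That behavior is easy to model. In the conceptual sketch below (illustrative numbers, not Adobe's math), a wash raises one channel only, whereas a true balance shift would lower the other channels to compensate:

```python
# Conceptual contrast between a color wash and a balance shift -- not Adobe's math.
def wash_toward_blue(r, g, b, amount):
    return r, g, b + amount  # blue rises on the parade; R and G hold

def balance_toward_blue(r, g, b, amount):
    return r - amount / 2, g - amount / 2, b + amount  # others drop in compensation

print(wash_toward_blue(0.5, 0.5, 0.5, 0.2))     # (0.5, 0.5, 0.7)
print(balance_toward_blue(0.5, 0.5, 0.5, 0.2))  # (0.4, 0.4, 0.7)
```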

Vignette

The last group is Vignette and it works in much the same fashion as the Post-Crop Vignetting control in Lightroom. There are sliders for amount, midpoint, roundness and feather. In general, it acts more like a photographic vignette or one that results from a lens artifact – and less like the masks you typically add in creative grading for vignette effects. Moving the amount slider controls the lightness or darkness of the vignette (yes, you can have a white vignette), but it only changes the outer edges of the frame. You cannot invert the effect. Midpoint moves the vignette edge farther into or out of the frame. Roundness adjusts the aspect ratio of the vignette and feather controls the softness of its edge.
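
For those who like to see the geometry, here is a hedged numpy sketch of how those four sliders could map onto a centered mask. The parameter scaling is illustrative and not Adobe's actual implementation:

```python
# Sketch of a centered vignette mask -- illustrative scaling, not Adobe's math.
import numpy as np

def vignette_mask(w, h, amount=-0.5, midpoint=0.5, roundness=0.0, feather=0.5):
    y, x = np.mgrid[0:h, 0:w]
    nx = (x / w - 0.5) * 2.0 * (1.0 + roundness)  # roundness skews the aspect ratio
    ny = (y / h - 0.5) * 2.0
    r = np.sqrt(nx**2 + ny**2)                    # distance from dead center
    inner = midpoint                              # midpoint sets where the edge begins
    outer = midpoint + max(feather, 1e-6)         # feather widens the falloff
    t = np.clip((r - inner) / (outer - inner), 0.0, 1.0)
    return 1.0 + amount * t                       # amount < 0 darkens, > 0 lightens edges

mask = vignette_mask(1920, 1080)
print(mask[540, 960], mask[0, 0])  # 1.0 at center, 0.5 in the corner
```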

There is no position control to move the vignette away from dead center. While the vignette group is useful for “pinching in the edges of the frame” (as a DP friend of mine is fond of saying), it’s less useful for directing the viewer’s attention. That’s the “power windows” approach, which I often use in tools like Resolve, Color, or SpeedGrade. There are other ways to achieve that inside of Premiere Pro, but just not self-contained within a single instance of the Lumetri Color effect.

It’s clear that Adobe has added a very deep toolset within this single effect and its corresponding control panel. For most color correction sessions, you can pretty well get everything done using just Lumetri Color. I believe most editors prefer to use a comprehensive grading tool that allows them to stay within the confines of the editing application. Lumetri Color within Premiere Pro CC 2015 brings that wish to reality without the need for roundtrips or third-party color correction filters.

©2015 Oliver Peters