The On Camera Interview


Many projects are based on first person accounts using the technique of the on camera interview. This approach is used in documentaries, news specials, corporate image presentations, training, commercials, and more. I’ve edited a lot of these, especially for commercials, where a satisfied customer might give a testimonial that gets cut into a five-ish minute piece for the web or a DVD and then various commercial lengths (:10, :15, :30, :60, 2 min.). The production approach and editing techniques are no different in this application than if you are working on a documentary.

The interviewer

The interview is going to be no better than the quality of the interviewer asking the off camera (and unheard) questions. Asking good questions in the right manner will yield successful results. Obviously the interviewer needs to be friendly enough to establish a rapport with the subject. People get nervous on camera, so the interviewer needs to get them relaxed. Then they can comfortably answer the questions and tell the story in their own words. The interviewer should structure the questions in a way that the totality of the responses tells a story. Think in terms of story arc and strive to elicit good beginning and ending statements.

Some key points to remember. First, make sure you get the person to rephrase the question as part of their answer, since the audience won’t hear the interviewer. This makes their answer a self-contained statement. Second, let them talk. Don’t interject or jump on the end of the answer, since this will make editing more difficult.

Sometimes in a commercial situation, you have a client or consultant on set, who wants to make sure the interviewee hits all the marketing copy points. Before you get started, you’ll need to have an understanding with the client that the interviewee’s answers will often have to be less than perfect. The interviewees aren’t experienced spokespersons. The more you press them to phrase the answer in the exact way that fits the marketing points or to correctly name a complex product or service in every response, the more stilted their speaking style will become. Remember, you are going for naturalness, honesty and emotion.

The basics

As you design the interview set, think of it as staging a portrait. Be mindful of the background, the lighting, and the framing. Depending on the subject matter, you may want a matching background. For example, a doctor’s interview might look best in a lab or in the medical office with complex surgical gear in the background. An interview with an average person is going to look more natural in a neutral environment, like their living room.

You will want to separate the interview subject from the background and this can be achieved through lighting, lens selection, and color scheme. For example, a blonde woman in a peach-colored dress will stand out quite nicely against a darker blue-green background. A lot of folks like the shallow depth-of-field and bokeh effect achieved by a full-frame Canon 5D camera with the right lens. This is a great look, but you can achieve it with most other cameras and lenses, too. In most cases, your video will be seen in the 16:9 HD format, so an off-center framing is desirable. If the person is looking camera left, then they should be on the right side of the frame. Looking camera right, then they should be on the left side.

Don’t forget something as basic as the type of chair they are sitting in. You don’t want a chair that rocks, rolls, leans back, or swivels. Some interviews run long, and a subject who tends to move around becomes very distracting – not to mention noisy – if the chair moves with them. And of course, make sure the chair itself doesn’t creak.

Camera position

The most common interview design you see is where the subject is looking slightly off camera, as they are interacting with the interviewer, who is sitting to the left or the right of the camera. You do not want to instruct them to look into the camera lens while you are sitting next to the camera, because most people will dart between the interviewer and the camera when they attempt this. It’s unnatural.

The one caveat: if the camera and interviewer are far enough away from the interview subject – and the interviewer is also the camera operator – then it will appear as if the interviewee is actually looking into the lens. Because the interviewer and the camera are so close to each other, the subject appears to be looking at the lens when in fact he or she is really just looking at the interviewer.

If you want them looking straight into the lens, then one solution is to set up a system whereby the subject can naturally interact with the lens. This is the style documentarian Errol Morris has used in a rig that he dubbed the Interrotron. Essentially it’s a system of two teleprompters. The interviewer and subject can be in the same studio, although separated in distance – or even in other rooms. The two-way mirror of the teleprompter is projecting each person to the other. While looking directly at the interviewer in the teleprompter’s mirror, the interviewee is actually looking directly into the lens. This feels natural, because they are still looking right at the person.

Most producers won’t go to that length, and in fact the emotion of speaking directly to the audience may or may not be appropriate for your piece. Whether you use Morris’ solution or not, the single camera approach makes it harder to avoid jump cuts. Morris actually embraces and uses these; however, most producers and editors prefer to cover them in some way. Covering the edit with a b-roll shot is a common solution, but another is to “punch in” on the frame by blowing up the shot digitally by 15-30% at the cut. Now the cut looks like you used a tighter lens. This is where 4K resolution cameras come in handy if you are finishing in 2K or HD.
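The punch-in headroom is simple arithmetic: a digital blow-up stays at or above native resolution as long as the scale factor doesn’t exceed the source-to-timeline width ratio. A quick back-of-envelope check, not tied to any particular camera or NLE:

```python
# How far can you "punch in" before upscaling? The limit is set by the
# ratio of the source frame width to the timeline frame width.

def max_punch_in(source_width, timeline_width):
    """Maximum blow-up, as a percent, with no upscaling."""
    return (source_width / timeline_width - 1) * 100

print(max_punch_in(3840, 1920))  # 100.0 -> UHD source in an HD timeline
print(max_punch_in(1920, 1920))  # 0.0 -> HD in HD: any punch-in upscales
```

So a 15-30% punch-in from a 4K source finished in HD never drops below native resolution, which is why the trick holds up.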

With the advent of lower-cost cameras, like the various DSLR models, it’s quite common to produce these interviews as two camera shoots. Cameras may be positioned together on one side of the interviewer or split with one on each side. There really is no right or wrong approach. I’ve done a few where the A-camera is right next to the interviewer, but the B-camera is almost 90-degrees to the side. I’ve even seen it where the B-camera exposes the whole set, including the crew, the other camera, and the lights. This gives the other angle almost a voyeuristic quality. When two cameras are used, each should have a different framing, so a cut between the cameras doesn’t look like a jump cut. The A-camera might have a medium framing including most of the person’s torso and head, while the B-camera’s framing might be a tight close-up of their face.

While it’s nice to have two matched cameras and lens sets, this is not essential. For example, if you end up with two totally mismatched cameras out of necessity – like an Alexa and a GoPro or a C300 and an iPhone – make the best of it. Do something radical with the B-camera to give your piece a mixed media feel. For example, your A-camera could have a nice grade to it, but the B-camera could be black-and-white with pronounced film grain. Sometimes you just have to embrace these differences and call it a style!


When you are there to get an interview, be mindful to also get additional b-roll footage for cutaway shots that the editor can use. Tools of the trade, the environment, the interview subject at work, etc. Some interviews are conducted in a manner other than sitting down. For example, a cheesemaker might take you through the storage room and show off different rounds of cheese. Such walking-talking interviews might make up the complete interview or they might be simple pieces used to punctuate a sit-down interview. Remember, if you have the time, get as much coverage as you can!

Audio and sync

It’s best to use two microphones on all interviews – a lavaliere on the person and a shotgun mic just out of the camera frame. I usually prefer the sound of the shotgun, because it’s more open; but depending on how noisy the environment is, the lav may be the better channel to use. Recording both is good protection. Not all cameras have great sound systems, so you might consider using an external audio recorder. Make sure you patch each mic into separate channels of the camera and/or external recorder, so that they are NOT summed.

Wherever you record, make sure all sources receive audio. It would be ideal to feed the same mics to all cameras and recorders, but that’s not always possible. In that case, make sure that each camera is at least using an onboard camera mic. The reason to do this is for sync. The two best ways to establish sync are common timecode and a slate with a clapstick. Ideally both. Absent either of those, some editing applications (as well as a tool like PluralEyes) can analyze the audio waveform and automatically sync clips based on matching sound. Worst case, the editor can manually sync clips by marking common aural or visual cues.
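For the curious, the waveform-matching approach can be sketched in a few lines: slide one track against the other and keep the offset where the two waveforms agree best. This is only a toy illustration of the idea behind such tools, not how any particular product is implemented:

```python
# Toy sketch of waveform-based syncing: score every candidate offset by
# cross-correlation and keep the best one.

def best_offset(ref, other, max_shift):
    """Shift of `other` (in samples) that best aligns it with `ref`;
    positive means `other` lags behind `ref`."""
    def score(shift):
        return sum(ref[i] * other[i + shift]
                   for i in range(len(ref))
                   if 0 <= i + shift < len(other))
    return max(range(-max_shift, max_shift + 1), key=score)

# Fake audio: the camera track is the lav track delayed by 3 samples.
lav = [0, 1, 4, 2, -3, 5, 1, 0, 2, -1, 3, 0]
cam = [0, 0, 0] + lav[:-3]
print(best_offset(lav, cam, 5))  # 3
```

Real tools work on long stretches of actual audio and cope with noise, drift, and gaps, but the core idea is the same.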

Depending on the camera model, you may have recording formats that don’t span media files and automatically start a new clip every 4GB (about every 12 minutes with some formats). The interviewer should be mindful of these limits. If possible, all cameras should be started together and re-slated at the beginning of each new clip.
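The “about every 12 minutes” figure is just bitrate arithmetic, which you can run for whatever format you shoot. The bitrates below are illustrative examples, not tied to a specific camera:

```python
# Minutes of recording in a 4 GB file chunk, given the format's bitrate.
# Uses decimal gigabytes; real cameras vary, so treat this as a rough guide.

def minutes_per_chunk(chunk_gb, megabits_per_sec):
    bits = chunk_gb * 1_000_000_000 * 8
    return bits / (megabits_per_sec * 1_000_000) / 60

print(round(minutes_per_chunk(4, 45), 1))   # ~11.9 min at 45 Mbps
print(round(minutes_per_chunk(4, 100), 1))  # ~5.3 min at 100 Mbps
```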

Editing workflow

Most popular nonlinear editing applications (NLEs) include great features that make editing on camera interviews reasonably easy. To end up with a solid five minute piece, you’ll probably need about an hour of recorded interview material (per camera angle). When you cut out the interviewer’s questions, the little bit of chit chat at the beginning, and the repeats or false starts that an interviewee may have, you are generally left with about thirty minutes of usable responses. That’s a 6:1 ratio.

Your job as an editor is to be a storyteller through the soundbites you select and the order in which you arrange them. The goal is to have the subject seamlessly tell their story without the aid of an on camera host or voice-over narrator. To aid the editing process, use NLE tools like favorites, markers, and notes, along with human tools like written transcripts and your own notes to keep the project organized.

This is the standard order of things for me:

Sync sources and create multi-cam sequences or multi-cam clips depending on the NLE.

Pass 1 – create a sequence with all clips synced up and organized into a single timeline.

Pass 2 – clean up the interview and remove all interviewer questions.

Pass 3 – whittle down the responses into a sequence of selected answers.

Pass 4 – rearrange the soundbites to best tell the story.

Pass 5 – cut between cameras if this is a multi-camera production.

Pass 6 – clean up the final order by editing out extra words, pauses, and verbal gaffes.

Pass 7 – color correct clips, mix audio, add b-roll shots.

As I go through this process, I am focused on creating a good “radio cut” first. In other words, how does the story sound if you aren’t watching the picture? Once I’m happy with this, I can worry about color correction, b-roll, etc. When building a piece that includes multiple interviewees, you’ll need to pay attention to several other factors. These include getting a good mix of diversity – ethnic, gender, job classification. You might want to check with the client first as to whether each and every person interviewed needs to be used in the video. Clearly some people are going to be duds, so it’s best to know up front whether or not you’ll need to go through the effort of finding a passable soundbite in those cases.

There are other concerns when re-ordering clips among multiple people. Arranging the order of clips so that you can cut between alternating left and right-framed shots makes the cutting flow better. Some interviewees come across better than others; however, make sure not to lean totally on those responses. When you get multiple, similar responses, pick the best one, but if possible spread around who you pick in order to get the widest mix of respondents. As you tell the story, pay attention to how one soundbite might naturally lead into another – or how one person’s statement can complete another’s thoughts. It’s those serendipitous moments that you are looking for in Pass 4. This is what should take the most creative time in your edit.

Philosophy of the cut

In any interview, the editor is making editorial selections that alter reality. Some broadcasters have guidelines as to what is and isn’t permissible, due to ethical concerns. The most common editorial technique in play is the “Frankenbite”. That’s where an edit is made to truncate a statement or combine two statements into one. Usually this is done because the answer went off on a tangent and that portion isn’t relevant. By removing the extraneous material and creating the “Frankenbite” you are actually staying true to the intent of the answer. For me, that’s the key. As long as your edit is honest and doesn’t twist the intent of what was said, then I personally don’t have a problem with doing it. That’s part of the art in all of this.

It’s for these reasons, though, that directors like Morris leave the jump cuts in. This lets the audience know an edit was made. Personally, I’d rather see a smooth piece without jump cuts, and that’s where a two camera shoot is helpful. Cutting between two camera angles can make the edit feel seamless, even though the person’s expression or body position might not truly match on both sides of the cut. As long as the inflection is right, the audience will accept it. Occasionally I’ll use a dissolve, white flash or blur dissolve between sections, but most of the time I stick with cuts. The transitions seem like a crutch to me, so I use them only when there is a complete change of thought that I can’t bridge with an appropriate soundbite or b-roll shot.

The toughest interview edit tends to be when you want to clean things up, like a repeated word, a stutter, or the inevitable “ums” and “ahs”. Fixing these by cutting between cameras normally results in a short camera cut back and forth. At this point, the editing becomes a distraction. Sometimes you can cheat these jump cuts by staying on the same camera angle and using a short dissolve or one of the morphing transitions offered by Avid, Adobe, or MotionVFX (for FCPX). These vary in their success depending on how much a person has moved their body and head or changed expressions at the edit point. If their position is largely unchanged, the morph can look flawless. The more the change, the more awkward the resulting transition can be. The alternative is to cover the edit with a cutaway b-roll shot, but that’s often not desirable if this happens the first time we see the person. Sometimes you just have to live with it and leave these imperfections alone.

Telling the story through sight and sound is what an editor does. Working with on camera interviews is often the closest an editor comes to being the writer, as well. If the entire production is approached with some of these thoughts in mind, the end result can indeed be memorable.

©2015 Oliver Peters

Final Cut Pro X Organizing Tips


Every nonlinear editing application is a database; however, Apple took this concept to a higher level when it launched Final Cut Pro X. One of its true strengths is making the organization of a mountain of media easier than ever. To get the best experience, an editor should approach the session holistically and not simply rely on FCP X to do all the heavy lifting.

At the start of every new production, I set up and work within a specific folder structure. You can use an application like Post Haste to create a folder layout, pick up some templates online, like those from FDPTraining, or simply create your own template. No matter how you get there, the main folder for that production should include subfolders for camera files, audio, graphics, other media, production documents, and projects. This last folder would include your FCP X library, as well as others, like any After Effects or Motion projects. The objective is to end up with everything that you accrue for this production in a single master folder that you can archive upon completion.
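If you don’t use a dedicated tool, even a few lines of scripting can stamp out the same template every time. Here’s a minimal sketch using Python’s standard library; the subfolder names mirror the layout described above and are, of course, yours to change:

```python
# A minimal stand-in for a folder-template tool like Post Haste:
# create a production's master folder with its standard subfolders.

import tempfile
from pathlib import Path

SUBFOLDERS = ["Camera Files", "Audio", "Graphics", "Other Media",
              "Production Documents", "Projects"]

def create_production(root, name):
    """Create the master folder for a production, with all subfolders."""
    master = Path(root) / name
    for sub in SUBFOLDERS:
        (master / sub).mkdir(parents=True, exist_ok=True)
    return master

# Demo against a throwaway location (point `root` at your media drive).
root = tempfile.mkdtemp()
master = create_production(root, "Client_Spot_2015")
print(sorted(p.name for p in master.iterdir()))
```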

FCP X Libraries

It helps to understand the Final Cut Pro X Library structure. The Library is what would otherwise be called a project file, but in FCP X terminology an edited sequence is referred to as a Project, while the main session/production file is the Library. Unlike previous versions and other NLEs, the Library is not a closed, binary data file. It is a package file that you can open and peruse by right-clicking the Library icon and using the “show package contents” command. In there you will find various binary files (labeled CurrentVersion.fcpevent) along with a number of media folders. This structure is similar to the way Avid Media Composer project folders are organized on a hard drive. Since FCP X allows you to store imported, proxy, transcoded, and rendered media within the Library package, the media folders can be filled with the actual media used for this production. When you pick this option your Library will grow quite large, but everything stays completely under the application’s control, making the media management quite robust.

Another option is to leave media files in their place. When this is selected, the Library package’s media folders will contain aliases or shortcut links to the actual media files. These media files are located in one or more folders on your hard drive. In this case, your Library file will stay small and is easier to transfer between systems, since the actual audio and video files are externally located. I suggest spreading things out. For example, I’ll create my Library on one drive, the location of the autosaved back-up files on another, and my media on a third. This has the advantage of no single point of failure. If the Library files are located on a drive that is backed up via Time Machine or some other system-wide “cloud” back-up utility, you have even more redundancy and protection.

Following this practice, I typically do not place the Library file in the projects folder for the production, unless this is a RAID-5 (or better) drive array. If I don’t save it there during actual editing, then it is imperative to copy the Library into the project folder for archiving. The rub is that the package contains aliases, which certain software – particularly LTO back-up software – does not like. My recommendation is to create a compressed archive (.zip) file for every project file (FCP X Library, AE project, Premiere Pro project, etc.) prior to the final archiving of that production. This will prevent conflicts caused by these aliases.
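Creating those .zip archives can also be scripted. The sketch below zips a package folder (or a flat project file) using Python’s standard library; the “Spot.fcpbundle” demo package is fabricated for the example:

```python
# Pre-zip project files so alias-filled packages (like FCP X Libraries)
# survive back-up software that chokes on aliases.

import tempfile
import zipfile
from pathlib import Path

def zip_item(item):
    """Zip a file, or a package folder and its contents, to <item>.zip."""
    item = Path(item)
    archive = Path(str(item) + ".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        if item.is_dir():
            for path in sorted(item.rglob("*")):
                zf.write(path, path.relative_to(item.parent))
        else:
            zf.write(item, item.name)
    return archive

# Demo: build a throwaway "Library" package and archive it.
root = Path(tempfile.mkdtemp())
lib = root / "Spot.fcpbundle"
lib.mkdir()
(lib / "CurrentVersion.fcpevent").write_text("demo")
print(zip_item(lib).name)  # Spot.fcpbundle.zip
```

Note that zipping resolves what the back-up software would otherwise trip over: the archive is one ordinary file with no aliases exposed.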

If you have set up a method of organization that saves Libraries into different folders for each production, it is still possible to have a single folder, which shows you all the Libraries on your drives. To do this, create a Smart Folder in the Finder and set up the criteria to filter for FCP X Libraries. Any Library will automatically be filtered into this folder with a shortcut. Clicking on any of these files will launch FCP X and open to that Library.
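If you prefer a script to a Finder Smart Folder, the same “every Library on the drive” view can be generated by scanning for the Library package’s .fcpbundle extension. A minimal sketch (the demo folder tree is fabricated):

```python
# Scan folders for FCP X Library packages by their .fcpbundle extension,
# a scripted alternative to the Finder Smart Folder.

import tempfile
from pathlib import Path

def find_libraries(*roots):
    """Return every .fcpbundle package found under the given folders."""
    return sorted((p for root in roots
                     for p in Path(root).rglob("*.fcpbundle")),
                  key=lambda p: p.name)

# Demo against a throwaway folder tree:
root = Path(tempfile.mkdtemp())
(root / "Jobs" / "SpotA.fcpbundle").mkdir(parents=True)
(root / "Jobs" / "Archive" / "SpotB.fcpbundle").mkdir(parents=True)
print([p.name for p in find_libraries(root)])
```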

Getting started

The first level of organization is getting everything into the appropriate folders on the hard drive. Camera files are usually organized by shoot date/location, camera, and card/reel/roll. Mac OS lets you label files with color-coded Finder tags, which enables another level of organization for the editor. As an example, you might have three different on-camera speakers in a production. You could label clips for each with a colored tag. Another example might be to label all “circle takes” picked by the director in the field with a tag.

The next step is to create a new FCP X Library. This is the equivalent of the FCP 7 project file. Typically you would use a single Library for an entire production; however, FCP X permits you to work with multiple open Libraries, just like you could have multiple projects open in FCP 7. In order to set up all external folder locations within FCP X, highlight the Library name and then in the Inspector panel choose “modify settings” for the storage locations listed in the Library Properties panel. Here you can designate whether media goes directly into the internal folders of the Library package or to other target folders that you assign. This step is similar to setting the Capture Scratch locations in FCP 7.

How to organize clips within FCP X

Final Cut Pro X organizes master source clips on three levels – Events, Keyword Collections, and Smart Collections. These are equivalent to Bins in other NLEs, but don’t work in quite the same fashion. When clips are imported, they will go into a specific Event, which is closest in function to a Bin. It’s best to keep the number of Events low, since Keyword Collections work within an Event and not across multiple Events. I normally create individual Events for edited sequences, camera footage, audio, graphics, and a few more categories. Clips within an Event can be grouped in the browser display in different ways, such as by import date. This can be useful when you want to quickly find the last few files imported in a production that spans many days. Most of the time I set grouping and sorting to “none”.

To organize clips within an Event, use Keywords. Setting a Keyword for a clip – or a range within a clip – is comparable to creating subclips in FCP 7. When you add a Keyword, that clip or range will automatically be sorted into a Keyword Collection with a matching name. Keywords can be assigned to keyboard hot keys, which creates a very quick way to go through every clip and assign it into a variety of Keyword Collections. Clips can be assigned to more than one Collection. Again, this is equivalent to creating subclips and placing them into separate Bins.

On one set of commercials featuring company employees, I created Keyword Collections for each person, department, shoot date, store location, employees, managers, and general b-roll footage. This made it easy to derive spots that featured a diverse range of speakers. It also made it easy to locate a specific clip that the director or client might ask for, based on “I think Mary said that” or “It was shot at the Kansas City store”. Keyword Collections can be placed into folders. Collections for people by name went into one folder, Collections by location into another, and so on.

The beauty of Final Cut Pro X is that it works in tandem with any organization you’ve done in the Finder. If you spent the time to move clips into specific folders or you assigned color-coded Finder tags, then this information can be used when importing clips into FCP X. The import dialogue gives you the option to “leave files in place” and to use Finder folders and tags to automatically create corresponding Keyword Collections. Camera files that were organized into camera/date/card folders will automatically be placed into Keyword Collections that are organized in the same fashion. If you assigned clips with Mary, John, and Joe to have red, blue, and green tags for each person, then you’ll end up with those clips automatically placed into Keyword Collections named red, blue, and green. Once imported, simply rename the red, blue, and green Collections to Mary, John, and Joe.


The third level of clip organization is Smart Collections. Use these to automatically filter clips based on the criteria that you set. With the release of FCP X version 10.2, Smart Collections have been moved from the Event level (10.1.4 or earlier) to the Library level – meaning that filtering can occur across multiple Events within the Library. By default, new Libraries are created with several preset Smart Collections that can be used, deleted, or modified. Here’s an example of how to use these. When you sync double-system sound clips or multiple cameras, new grouped clips are created – Synchronized Clips and Multicam Clips. These will appear in the Event along with all the other source files, which can be unwieldy. To focus strictly on these new grouped clips, create a Smart Collection with the criteria set by type to include these two categories. Then, as new grouped clips are created, they will automatically be filtered into this Smart Collection, thus reducing clutter for the editor.

Playing nice with others

Final Cut Pro X was designed around a new paradigm, so it tends to live in its own world. Most professional editors have the need for a higher level of interoperability with other applications and with outside vendors. To aid in these functions, you’ll need to turn to third party applications from a handful of vendors that have focused on workflow productivity utilities for FCP X. These include Intelligent Assistance/Assisted Editing, XMiL, Spherico, Marquis Broadcast, and Thomas Szabo. Their utilities make it possible to go between FCP X and the outside world, through list formats like AAF, EDL, and XML.

Final Cut’s only form of decision list exchange is FCPXML, which is a distinctly different data format than other forms of XML. Apple Logic Pro X, Blackmagic Design DaVinci Resolve, and Autodesk Smoke can read it. Everything else requires a translated file and that’s where these independent developers come in. Once you use an application like XtoCC (formerly Xto7) from Intelligent Assistance to convert FCPXML to XML for an edited sequence, other possibilities are opened up. The translated XML file can now be brought into Adobe Premiere Pro or FCP 7. Or you can use other tools designed for FCP 7. For instance, I needed to generate a print-out of markers with comments and thumbnail images from a film, in order to hand off notes to the visual effects company. By bringing a converted XML file into Digital Heaven’s Final Print – originally designed with only the older Final Cut in mind – this became a doable task.
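Because FCPXML is ordinary XML, simple reports like that marker list can also be scraped with a short script. The fragment below is a pared-down, hypothetical slice of FCPXML; real files carry much more structure, but markers really do use rational-number times like “24/25s”:

```python
# Pull a quick marker report out of an FCPXML fragment. The SAMPLE string
# is a simplified, made-up excerpt for illustration.

import xml.etree.ElementTree as ET
from fractions import Fraction

SAMPLE = """<fcpxml version="1.5">
  <clip name="Scene 12" offset="0s" duration="100/25s">
    <marker start="24/25s" duration="1/25s" value="VFX: replace sky"/>
    <marker start="60/25s" duration="1/25s" value="VFX: wire removal"/>
  </clip>
</fcpxml>"""

def seconds(value):
    """Convert an FCPXML rational time string like '24/25s' to seconds."""
    return float(Fraction(value.rstrip("s")))

root = ET.fromstring(SAMPLE)
for marker in root.iter("marker"):
    print(f'{seconds(marker.get("start")):6.2f}s  {marker.get("value")}')
```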

Thomas Szabo has concentrated on some of the media functions that are still lacking with FCP X. Need to get to After Effects or Nuke? The ClipExporter and ClipExporter2 applications fit the bill. His newest tool is PrimariesExporter. This utility uses FCPXML to enable batch exports of clips from a timeline, a series of single-frame exports based on markers, or a list of clip metadata. Intelligent Assistance offers the widest set of tools for FCP X, including Producer’s Best Friend. This tool enables editors to create a range of reports needed on most major jobs. It delivers them in spreadsheet format.

Understanding the thought processes behind FCP X and learning to use its powerful relational database will get you through complex projects in record time. Better yet, it gives you the confidence to know that no editorial stone was left unturned. For more information on advanced workflows and organization with Final Cut Pro X, check out FCPworks, MacBreak Studio (hosted by Pixel Corps), Larry Jordan, and Ripple Training.

For those that want to know more about the nuts and bolts of the post production workflow for feature films, check out Mike Matzdorff’s “Final Cut Pro X: Pro Workflow”, an iBook that’s a step-by-step advanced guide based on the lessons learned on Focus.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Building FCP X Effects – Update


A few weeks ago I built and posted a small FCP X color correction effect using the Motion template process. While I have no intention of digging deeper into plug-in design, it’s an interesting experiment into understanding how you can use the power of Motion and Final Cut Pro X to develop custom effects, transitions, and generators. In this process, I’ve done a bit of tweaking, created a few more effects, and gotten a better understanding of how it all works. If you download the updated effects, there are a total of three filters (Motion templates) – a color corrector, a levels filter, and a DVE.

Color science

In going through this exercise, a few things have been brought to my attention. First of all, filters are not totally transparent. If you apply my color correction filter, you’ll see slight changes in the videoscopes even when each tab is at its default. This doesn’t really matter since you are applying a correction anyway; but if it annoys you, then simply uncheck the item you aren’t using, like brightness or contrast.

Secondly, the exact same filter in FCP X may or may not use the same color science as the Motion version, even though they are called the same thing. Specifically, this is the case with the Hue/Saturation filter. My template uses the one from Motion, of course. The FCP X Hue/Sat filter uses a color model in which saturation is held constant and luma (a composite of RGB) varies. The Motion version holds luma constant and allows saturation to vary.

The quickest way to test this is with a solid red generator. Apply the FCP X Hue/Sat filter and rotate the hue control. Set the scopes to display an RGB parade, vectorscope, and the waveform set to luma. As you rotate the hue around the dial, you’ll notice that the color dot stays neatly in the boxes of the vectorscope and moves in a straight, diagonal line from vector to vector. The RGB parade will show a perfect combination of red, blue, and green values to achieve the correct RMBCGY coordinates. However, the waveform luma levels will move up and down with large changes.

Now compare this to the hue control in the Hue/Sat filter included in my template. This is from Motion. As you rotate the hue control around the dial, the color dot moves in what seems to be an erratic fashion around the vectorscope; but the luma display changes very little. If you apply this same test to real footage, instead of a generated background color, you’ll get perceptually better results with Motion’s Hue/Sat filter than with the FCP X version. In most cases, either approach is acceptable, since for the purposes of color correction, you will likely only move the dial a few degrees left or right from the default of zero. Hue changes in color grading should be very subtle.
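You can illustrate why the two models behave differently with a few lines of code: rotating hue at constant saturation in an HSV-style model lands on colors with very different Rec. 709 luma, which is the swing you see on the waveform. This models the general behavior only, not Final Cut’s or Motion’s exact color math:

```python
# Rotate the hue of pure red at constant saturation/value and watch
# Rec. 709 luma swing. Illustrative only; not Apple's actual color science.

import colorsys

def luma(r, g, b):
    """Rec. 709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)  # pure red
for shift in (0, 120, 240):
    r, g, b = colorsys.hsv_to_rgb((h + shift / 360) % 1.0, s, v)
    print(f"hue +{shift:3d} deg  luma {luma(r, g, b):.3f}")
```

For pure red the three lines print luma values of roughly 0.213 (red), 0.715 (green), and 0.072 (blue) — exactly the kind of level movement the waveform shows with the FCP X filter.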


Expanding filter features

After I built this first Motion template, I decided to poke around some more inside Motion to see if it offered other filters that had value for color correction. And as a matter of fact, it does. Motion includes a very nice Levels filter. It includes sliders for RGB as a group, as well as individual settings for red, green, and blue. Each group is broken down into sliders for black in/out, white in/out, and gamma. Then there’s an overall mix value. That’s a total of 21 sliders, not counting opacity, which I didn’t publish in my template. Therefore, you have fairly extensive control over grading using only the Levels filter.
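Motion doesn’t document its internal math, but levels filters generally follow the same per-channel mapping: normalize between the black and white input points, apply gamma, then rescale to the output range. A generic sketch, not necessarily Motion’s exact implementation:

```python
# The standard per-channel levels formula: input range -> gamma -> output range.

def levels(x, black_in=0.0, white_in=1.0, gamma=1.0,
           black_out=0.0, white_out=1.0):
    """Map a 0-1 channel value through a levels adjustment."""
    x = min(max((x - black_in) / (white_in - black_in), 0.0), 1.0)
    x = x ** (1.0 / gamma)  # gamma > 1 lifts the midtones
    return black_out + x * (white_out - black_out)

print(levels(0.5))                               # 0.5 (defaults pass through)
print(round(levels(0.5, gamma=2.0), 3))          # 0.707 (midtones lifted)
print(round(levels(0.5, black_in=0.1, white_in=0.9), 3))  # 0.5
```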

I thought about building it into the earlier Oliver Color filter I had created, but ran into some obvious design issues. When you build these effects, it’s important to think through the order of clicking publish on the parameters that you want to appear inside of FCP X. This sequence will determine where these values appear in the stack of controls in the FCP X inspector. In other words, even though I placed this Levels filter ahead of Color Balance within Motion, the fact that I clicked publish after these other values had already been published meant that these new controls would be placed at the bottom of my stack once displayed in FCP X. The way to correct this is to first unpublish everything and then select publish for each parameter in the order that you want it to appear.

A huge interface design concern is just how cluttered you do or don’t want your effect controls to be inside of FCP X. This was a key design issue when FCP X was created. You’ll notice that Apple’s built-in FCP X effects have a minimalist approach to the number of sliders available for each filter. Adding Levels into my Color filter template meant adding 21 more sliders to an interface that already combined a number of parameters for each of the other components. Going through this exercise makes it clear why Apple took the design approach they did and why other developers have resorted to various workarounds, such as floating controls, HUDs, and other solutions. The decision for me was simply to create a separate Oliver Levels filter that could be used separately, as needed.


More value from color presets 

An interesting discovery I made was how Color Board presets can be used in FCP X 10.2. When you choose a preset from the Color Board’s pulldown menu, you can access these settings as you always have. The downside is that you can’t preview a setting like you can other effects in the effects palette. You have to apply a preset from the Color Board to see what it will look like with your image.

FCP X 10.2 adds the ability to save filter presets. Since color correction using the Color Board has now been turned into a standard filter, you can save color presets as effects presets. This means that if you have a number of Color Board presets (the built-in FCP X settings, mine, or any custom ones you’ve created), you can simply apply the color preset and then save that color correction filter setting as a new effects preset. When you do this you get a choice of what category to save it into. You can create your own, such as My Color Presets. These presets will then show up in that category inside the effects palette. When you skim over the preset icon, your image will be previewed with that color correction value applied.

Although these presets appear in the same palette as other Motion templates, the effects presets themselves are stored in a different place. They are located in the OS X user library under Application Support/ProApps/Effects Presets. For example, I created 40 Color Board presets that can all be turned into Effects Presets visible within the Effects palette. I’m not going to post them that way, but if you feel ambitious, I would invite you to download the Color Board presets and make your own effects presets out of them.
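
If you want to confirm what’s in that folder on your own machine, a small Python snippet can list it. The path below matches the location described above; treat it as an assumption and verify it against your own install:

```python
from pathlib import Path

# Effects presets live in the user library, not inside the Motion Templates
# folder where the templates themselves are stored.
presets_dir = (Path.home() / "Library" / "Application Support"
               / "ProApps" / "Effects Presets")

def list_presets(folder=presets_dir):
    """Return preset file names if the folder exists, else an empty list."""
    return sorted(p.name for p in folder.glob("*")) if folder.exists() else []
```

Knowing the two separate locations matters when you move a system or share a setup: copying Motion Templates alone will not carry your effects presets along.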

All of this is a great way to experiment and see how you can use the resources Apple has provided to personalize a system tailored to your own post needs.

Click here to download the Motion template effects.

Click here to download updated and additional Motion template effects (FCP X 10.2.1 or later).

Click here to download the Color Board presets.

For some additional resources for free plug-ins, check out Ripple Training, Alex4D and FxFactory.

©2015 Oliver Peters

Lumetri plus SpeedGrade Looks


Last year I created a series of Looks presets that are designed to work with SpeedGrade CC. These use Adobe’s .look format, which is a self-contained container format that includes SpeedGrade color correction layers and built-in effects. Although I specifically designed these for use with SpeedGrade, I received numerous inquiries as to how they could be used directly within Premiere Pro. There have been solutions, but finally with the release of Premiere Pro CC 2015, this has become very easy. (Look for a full review of Premiere Pro CC 2015 in a future post.) Click any image for an expanded view.

One of the top features of the CC 2015 release is the new Lumetri Color panel for Premiere Pro. When you select the Color workspace, the Premiere Pro interface will automatically display the Lumetri Color panel along with new, real-time videoscopes. This new panel provides extensive color correction features in a single panel (controls are also available in the Effects Control panel). It is based on a layer design that is similar to the Lightroom adjustment controls.

The top control of the panel lets you select either the source clip (left name) or that one instance on the timeline (right name). If you select the source clip, then any correction is applied as a master clip effect. This correction will ripple to any other instances of that source on the timeline. If you select the timeline clip, then corrections only affect that one spot on the timeline. Key, for the purposes of this article, is the fact that the Lumetri Color panel includes two entry points for LUTs, using either the .cube or .look format. Adobe supplies a set of Adobe and LookLabs (SpeedLooks) LUTs. You can access built-in or third-party files from either the Basic or the Creative tab of the Lumetri Color panel.

If you want to use any custom Look file – such as the free ones that I built or a purchased set, like SpeedLooks – simply choose browse from the pulldown menu and navigate to the location on your hard drive containing the file that you want to use. Sometimes this will require two LUTs. For example, SpeedLooks are based on corrections to a default log format optimized for LookLabs products. This means you’ll need to apply one of their camera patches to move the camera color into their unified log format. On the other hand, my Looks are based on a standard image, so you may or may not need an additional LUT. If you have ARRI Alexa footage recorded with a log-C gamma profile, then you’ll want to add Adobe’s default Log-C-to-Rec709 LUT, along with the Look file. In both examples, you would add the camera LUT in the Basic tab, since this is where the correction pipeline starts. Camera LUTs should be applied to the source, so that they work as master clip effects.
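
Conceptually, the Basic-then-Creative pipeline is just function composition: the technical camera LUT runs first, then the look on top of the normalized image. A hedged Python sketch with made-up curves standing in for real LUT data:

```python
def camera_lut(x):
    """Hypothetical log-to-Rec709 curve: lifts a flat log image.
    Stand-in math, not an actual camera patch."""
    return min(1.0, x ** 0.6)

def creative_look(x):
    """Hypothetical creative LUT: a gentle smoothstep contrast curve."""
    return x * x * (3.0 - 2.0 * x)

def grade(x):
    """Camera LUT first (Basic tab / source), then the look (Creative tab)."""
    return creative_look(camera_lut(x))
```

Reversing the order gives a different image, which is why applying a creative look to un-normalized log footage rarely matches the look’s intended result.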

The next step is to apply your creative “look”, which might be a film emulation LUT or some other type of subjective look. This is applied through the pulldown in the Creative tab. Usually it’s best to apply this as a timeline effect. Simply select a built-in option or browse to other choices on your hard drive. In the case of my SpeedGrade Looks, pick the one you like based on the style you are after. Since the .look format can contain SpeedGrade’s built-in effect filters and vignettes, these will be included when applied in the Lumetri panel as part of a single LUT file.

As with any LUT, not all settings work ideally with your own footage. This means you MUST adjust the other settings in the Lumetri Color panel to get the results you want. A creative LUT is only a starting point and never the final look. As you look through the various controls on the tabs, you’ll see a plethora of grading tools for exposure, contrast, color balance, curves, vignettes, and more. Tweak to your heart’s content and you’ll get some outstanding results without ever leaving the Premiere Pro environment.

Click here to download a .zip archive of the free SpeedGrade Looks file.

©2015 Oliver Peters

DaVinci Resolve – 10 Tips to Improve Your Skills


Blackmagic Design’s DaVinci Resolve is one of the pre-eminent color correction applications – all the more amazing that it’s so accessible to any user. Even the free Lite version does nearly everything you’d want from any color grading software. If you have an understanding of how to use a 3-way color correction filter and you comprehend procedural nodes as a method of stacking corrections, then it’s easy to get proficient with Resolve, given a bit of serious seat time. The following tips are designed to help you get a little more comfortable with the nuances of Resolve. (Click on the images below for enhanced views.)

Primary sliders. Resolve gives you two ways to adjust primary color correction – color wheels and sliders. Most people gravitate to the wheels control panel, but the sliders panel is often faster and more precise. Adjustments made in either control will show up in the other. If you adjust color balance using the sliders, while monitoring the RGB parade display and/or the histogram on the video scopes, then it’s very easy to dial in perfect black and white balance for a shot. If the blue shadow portion looks too high on the RGB parade display, it means that the shadows of the image will look bluish. Simply move the blue lift slider lower to push the shadows closer to a true black. An added benefit of this panel is that the controls respond to a mouse with a scroll wheel, which is great if you don’t have access to a control surface. Hover the mouse over the slider that you want to adjust and spin the wheel up or down to make your correction.
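
The lift move described here is easy to reason about numerically. This is generic lift math as a Python sketch, not DaVinci’s exact internal formula: lift offsets the dark end of a channel while leaving white nearly untouched.

```python
def apply_lift(x, lift):
    """Offset the shadows of one channel; input and result stay in 0..1.
    Positive lift raises blacks, negative lift pulls them down."""
    return min(1.0, max(0.0, x + lift * (1.0 - x)))

# A bluish shadow pixel: the blue channel sits above red/green near black.
shadow_blue = 0.08
fixed = apply_lift(shadow_blue, -0.08)   # lowering blue lift darkens it
```

Because the offset is scaled by `(1.0 - x)`, a full-white pixel is unaffected, which matches what you see on the parade: the top of the blue trace stays put while the bottom drops.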

Gang/ungang curves. Given the propensity of cameras to record with log gamma profiles, you often find the need to apply an s-shaped luma curve during color correction. This shifts the low and high ranges of the image to expand the signal back to full levels, while retaining a “filmic” quality to that image. In the custom curves panel you’ll encounter a typical layout of four curves for luma and RGB. The default is for these to be ganged together. Adjust one and they all change. However, this means you are jacking around chroma levels when you might simply want to alter luma. Therefore, make sure to disable ganging before you start. Then adjust the luma curve. Only adjust the R, G or B curves if it’s beneficial to your look.
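
In code terms, ungrouping the curves means reshaping only the luma channel. A minimal Python illustration using a smoothstep-style S-curve (my own example, not Resolve’s curve math):

```python
def s_curve(y):
    """Smoothstep-style S-curve: darkens shadows, brightens highlights,
    leaves the 0.0 and 1.0 endpoints pinned."""
    return y * y * (3.0 - 2.0 * y)

def grade_ycbcr(y, cb, cr):
    """Ungang the curves: only luma (Y) is reshaped; Cb/Cr pass through
    untouched, so chroma is not dragged around by the contrast move."""
    return s_curve(y), cb, cr
```

With the curves ganged, the same S-shape would be applied to R, G and B independently, which also boosts saturation as a side effect; ungang first if that isn’t what you want.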

Hue/sat curves. If you toggle the curves pulldown menu, you’ll notice a number of other options, like hue vs. hue, hue vs. sat, and so on. These curve options let you grab a specific color and adjust its hue, saturation or brightness, without changing the tone of the entire image. When you sample a color, you end up with three points along the curve – the pin for the selected color and a range boundary pin on either side of that color. These boundary points determine the envelope of your selection. In other words, how broad a range of hues you want to affect for the selected color. Think of it as a comparable function to an audio EQ.

It is possible to select multiple points along the curve. Let’s say you want to lower the saturation of both bright yellows and bright blues within the frame. Choose the hue vs. sat curve and select points for both yellow and blue. Pulling these points down will lower the saturation of each of these colors using a single panel.

The hue vs. hue curve is beneficial for skin tones. A film that I’m currently grading features a Korean lead actress. Her skin tones normally skew towards yellow or green in many shots. The Caucasian and African American actors in the same shots appear with “normal” skin tones. By selecting the color that matches her flesh tones on the curve, I am able to shift the hues towards a value that is more in keeping with pleasing flesh tone colors. When used in combination with a mask, it’s possible to isolate this correction to just her part of the frame, so as not to affect the coloration of the other actors within the same shot.

Tracking/stabilization. Most folks know that Resolve has one of the best and fastest trackers of any application. Say you add an oval mask to someone’s face, so that you can brighten up just that isolated area. As the person moves within the shot, you have to adjust the mask to follow their face. This is where Resolve’s cloud-point tracker is a lifesaver. It’s fast and most of the time stays locked to the subject. The tracking window also enables stabilization. Use the pulldown menu to toggle from tracking to stabilization. This is a two-step process – first analyze and then stabilize. You can dial in an amount of smoothness if you want to retain some of the camera drift for a more natural appearance to the shot.

Blurs/masks/tracking. Resolve (including the free version) enables blurring of the image. This can be used in conjunction with a mask and with tracking, if you need to blur and track an object, like logos that need to be obscured in non-scripted TV shows. Using a blur with a vignette mask lets you create a dreamy effect. This is all possible without resorting to third-party filters or plug-ins.

Scene detection/slicing. There are three ways to get a show into Resolve: a) edit from scratch in Resolve; b) roundtrip from another NLE using FCPXML, XML, AAF or an EDL; or c) export a flattened media file of your timeline from another NLE and import that master file into Resolve. This process is similar to when masters were output to tape, which in turn were graded in a DaVinci “tape-to-tape” color correction session. Resolve has the ability to analyze the file and determine edit points with reasonable accuracy. It will break up the files into individual master clips within your media pool. Unfortunately, these are viewed in the timeline as individual media clips with boundaries, thus making trimming difficult.

My preference is to place the clip onto a new timeline and then manually add splices at all edit points and dissolves. Since Resolve includes editing capabilities, you can trim, alter or add points in case of error or missed edits. This can be aided by importing a matching, blank XML or EDL and placing it onto a higher track, which then lets you quickly identify all edit points that you’ll need to create.

Add dissolves. In the example above, how do you handle video dissolves that exist in the master file? The solution (in the Resolve timeline) is to add an edit point at the midpoint of the dissolve that’s embedded within the media file. Next, add a new dissolve equal to the length of the existing dissolve in the video. This way, color correction for one shot will naturally dissolve to the color correction of the second shot. In effect, you aren’t dissolving video sources – only color correction values. This technique may also be used within a single shot if you have correction changes inside that shot, although in that case adding correction keyframes in the Color page is normally a better solution. This might be the case if you are trying to counteract level changes within the shot, such as an in-camera iris change.
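
The arithmetic of this trick is simple but easy to fumble when working quickly. As a sketch (frame numbers are hypothetical):

```python
def dissolve_slice(dissolve_start, dissolve_len):
    """For a dissolve baked into the media starting at frame
    dissolve_start and lasting dissolve_len frames, return the frame
    to cut on and the length of the new grade-only dissolve to add,
    centered on that cut."""
    cut_frame = dissolve_start + dissolve_len // 2
    return cut_frame, dissolve_len
```

So for a 24-frame dissolve starting at frame 100, you slice at frame 112 and add a 24-frame dissolve centered there; the grade crossfade then tracks the picture crossfade already in the file.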

Node strategy. Resolve allows you to store complex grades for shots – which will include as many nodes as required to build the look – in a single memory register. You can build up each adjustment in multiple nodes to create the look you desire, store it and then apply that grade to other shots in a single step. This is very useful; however, I tend to work a bit differently when going through a scene in a dramatic project.

I generally go through the scene in multiple “passes”. For instance, I’ll quickly go through each shot with a single node to properly balance the color and make the shots reasonably consistent with each other. Next, I’ll go back through and add a second node (no adjustment yet) for each shot. Once that’s done, I’ll go back to the head of the scene and in that second node make the correction to establish a look. I can now use a standard copy command (cmd-C on the Mac) to store those values for that single node. When I go to the next shot, the second node is already selected, so then I simply paste (cmd-V on the Mac) those values. Let’s say the scene is a two-person dialogue scene using two singles. Angle A is a slightly different color than Angle B. Set the second node adjustment for Angle A, copy, and then paste to each Angle A shot (leapfrogging the Angle B shots). Then repeat for the Angle B shots.

Lastly, I might want to add a vignette. Go back through the scene and add a third, blank node for each shot. Create the vignette in node three of the first shot, then copy and paste into each of the others. I can still adjust the darkness, softness and position of the vignette at each shot, as needed. It’s a bit of an assembly line process, but I find it’s a quick way to go through a scene and build up adjustments without getting fixated on a single shot. At any point, I can review the whole scene and get a better feel for the result of my corrections in the context of the entire scene.

LUTs. Resolve enables the application of technical and creative LUTs (color look-up tables). While I find their use limited and feel they should be applied selectively, it’s possible to add your own to the palette. Any .cube LUT file – whether you found it, bought it, or created your own – can be added to Resolve’s library of LUTs. On the Mac, the Resolve LUT folder is found in Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT.
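
The .cube files mentioned above are plain text, which makes them easy to inspect before dropping them into that folder. Here’s a minimal Python reader, a sketch that handles only the LUT_3D_SIZE keyword and data lines; real files may also carry TITLE, DOMAIN_MIN/DOMAIN_MAX and other keywords:

```python
def parse_cube(text):
    """Return (size, entries) from .cube-formatted text, where entries
    is a list of (r, g, b) float triples in file order."""
    size, entries = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line.upper().startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] in "+-.":
            r, g, b = (float(v) for v in line.split()[:3])
            entries.append((r, g, b))
    return size, entries

# A tiny identity cube for demonstration (real LUTs are 17/33/65 points).
sample = """# identity 2x2x2 cube
LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
```

A quick sanity check like this catches truncated downloads: a valid 3D cube must contain exactly size³ data lines.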

Export with audio. You can export a single finished timeline or individual clips using the Deliver page. At the time of this post, Resolve 12 has yet to be released, but hopefully it will completely fix the audio export issues I’ve encountered. In my experience using Resolve 11 with RED camera files, it has not been possible to accurately export a complete timeline and have the audio stay in sync. I haven’t found this to be the case with other camera formats, though. So if you are exporting a single master file, expect the potential need to bring the picture into another application or NLE in order to marry it with your final mix. Resolve 11 and earlier are not really geared for audio – something which Resolve 12 promises to fix. I’ll have a review of Resolve 12 at some point in the future.

Hopefully these tips will give you a deeper dive into Resolve. For serious training, here are some resources to check out:

Color Grading Central




Mixing Light

Ripple Training

Tao of Color

©2015 Oliver Peters

The FCP X – RED – Resolve Dance II


Last October I wrote about the roundtrip workflow surrounding Final Cut Pro X and Resolve, particularly as it relates to working with RED camera files. This month I’ve been color grading a small, indie feature film shot with RED One cameras at 4K resolution. The timeline is 1080p. During the course of grading the film in DaVinci Resolve 11, I’ve encountered a number of issues in the roundtrip process. Here are some workflow steps that I’ve found to be successful.

Step 1 – For the edit, transcode the RED files into 1080p Apple ProRes Proxy QuickTime movies baking in camera color metadata and added burn-in data for clip name and timecode. Use either REDCINE-X Pro or DaVinci Resolve for the transcode.

Step 2 – Import the proxies and double-system audio (if used) into FCP X and sync within the application or use Sync-N-Link X. Ideally all cameras should record reference audio and timecode should match between the cameras and the sound recorder. Slates should also be used as a fall-back measure.

Step 3 – Edit in FCP X until you lock the cut. Prepare a duplicate sequence (Project) for grading. In that sequence, strip off (detach and remove) all audio. As an option, you can create a mix-down track for reference and attach it as a connected clip. Flatten the timeline down to the Primary Storyline wherever possible, so that Resolve only sees this as one track of video. Compound clips should be broken apart, and effects filters and titles removed. Audition clips should be finalized, but multicam clips are OK. Export an FCPXML (version 1.4 “previous”) list. You should also export a self-contained reference version of the sequence, which can be used to check the conform in Resolve.

Step 4 – Launch Resolve and make sure that the master project settings match that of your FCP X sequence. If it’s supposed to be 1920×1080 at 23.976 (23.98) fps, then make sure that’s set correctly. Resolve defaults to a frame rate of 24.0fps and that won’t work. Locate all of your camera original source media (RED camera files in this example) and add them to your media bin in the Media page. Import the FCPXML (1.4), but disable the setting to automatically load the media files in the import dialogue box. The FCPXML file will load and will relink to the RED files without issue if everything has gone correctly. The timeline may have a few clip conflicts, so look for the little indicator on the clip corner in the Edit window timeline. If there’s a clip conflict, you’ll be presented with several choices. Pick the correct one and that will correct the conflict.
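
The 23.976 vs. 24.0 distinction is exact, not approximate: FCPXML expresses it as the rational frame duration 1001/24000 of a second. A quick Python check (illustrative only; the string matches the format of FCPXML’s frameDuration attribute):

```python
from fractions import Fraction

def fps_from_frame_duration(duration):
    """Convert an FCPXML-style frame duration such as '1001/24000s'
    into frames per second as a float."""
    num, den = duration.rstrip("s").split("/")
    return float(1 / Fraction(int(num), int(den)))

fps = fps_from_frame_duration("1001/24000s")   # ~23.976, not 24.0
```

If Resolve’s project is left at its 24.0 default while the FCPXML says 1001/24000, every edit point after the first minute or so will land on the wrong frame, which is why this setting has to match before the import.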

Step 5 – At this point, you should verify that the files have conformed correctly by comparing against a self-contained reference file. Compound clips can still be altered in Resolve by using the Decompose function in the timeline. This will break apart the nested compound clips onto separate video tracks. In general, reframing done in the edit will translate, as will image rotation; however, flips and flops won’t. To flip and flop an image in FCP X requires a negative X or Y scale value (unless you used a filter), which Resolve cannot achieve. When you run across these in Resolve, reset the scale value for that clip to normal in the Edit page inspector. Then in the Color page use the horizontal or vertical flip functions that are part of the resizing controls. Once this is all straight, you can grade.

Step 6 option A – When grading is done, shift to the Deliver page. If your project is largely cuts-and-dissolves and you don’t anticipate further trimming or slipping of edit points in your NLE, then I would recommend exporting the timeline as a self-contained master file. You should do a complete quality check of the exported media file to make sure there were no hiccups in the render. This file can then be brought back into any NLE and combined with the final mixed track to create the actual show master. In this case, there is no roundtrip procedure needed to get back into the NLE.

Step 6 option B – If you anticipate additional editing of the graded files – or you used transitions or other effects that are unique to your NLE – then you’ll need to use the roundtrip “return” solution. In the Deliver page, select the Final Cut Pro easy set-up roundtrip. This will render each clip as an individual file at the source or timeline resolution with a user-selected handle length added to the head and tail of each clip. Resolve will also write a corresponding FCPXML file (version 1.4). This file will retain the original transitions. For example, if you used FCP X’s light noise transition, it will show up as a dissolve in Resolve’s timeline. When you go back to FCP X, it will retain the proper transition information in the list, so you’ll get back the light noise transition effect.

Resolve generates this list with the assumption that the media files were rendered at source resolution and not timeline resolution. Therefore, even if your clips are now 1920×1080, the FCPXML represents these as 4K. When you import this new FCPXML back into FCP X, a spatial conform will be applied to “fit” the files into the 1920×1080 raster space of the timeline. Change this to “none” and the 1080 media files will be blown up to 4K. You can choose to simply live with this – leave it set to “fit” and render the files again on FCP X’s output – or follow the next step for a workaround.

Step 7 – Create a new Resolve project, making sure the frame rate and timeline format are correct, such as 1920×1080 at 23.976fps. Load the new media files that were exported from Resolve into the media pool. Now import the FCPXML that Resolve has generated (uncheck the selection to automatically import media files and uncheck sizing information). The media will now be conformed to the timeline. From the Edit page, export another FCPXML 1.4 for that timeline (no additional rendering is required). This FCPXML will be updated to match the media file info for the new files – namely size, track configuration, and frame rate.

At this stage, you will encounter a second serious flaw in the FCP X/Resolve/FCP X roundtrip process. Resolve 11 does not write a proper FCPXML file and leaves out certain critical asset information. You will encounter this if you move the media and lists between different machines, but not if all of the work is being done on a single workstation. The result will be a timeline that loads into FCP X with black clips (not the red “missing” icon). When you attempt to reconnect the media, FCP X will fail to relink and will issue an “incompatible files” error message. To fix the problem, either the colorist must have FCP X installed on the Resolve system or the editor must have Resolve 11 installed on the FCP X system. This last step is the one remaining workaround.

Step 8 option A – If FCP X is installed on the Resolve machine, import the FCPXML into FCP X and reconnect the media generated by Resolve. Then re-export a new FCPXML from FCP X. This new list and media can be moved to any other system. You can move the FCP X Library successfully, as well.

Step 8 option B – If Resolve is installed on the FCP X machine, then follow Step 7. The new FCPXML that you create there will load into FCP X, since you are on the same system.

That’s the state of things right now. Maybe some of these flaws will be fixed with Resolve 12, but I don’t know at this point. The FCPXML list format involves a bit of voodoo at times and this is one of those cases. The good news is that Resolve is very solid when it comes to relinking, which will save you. Good luck!

©2015 Oliver Peters

Building a Free FCP X Color Correction Filter

One nice aspect of the symbiotic relationship between Final Cut Pro X and Motion is that Motion can be used to create effects, transitions, titles and generators for use in FCP X. These are Motion Templates and they form the basis for the creation of nearly all third-party effects filters, both paid and free. This means that if you learn a bit about Motion, you can create your own custom effects or make modifications to the existing ones supplied with FCP X. This has become very easy to do in the newest versions (FCP X 10.2.1 and Motion 5.2.1).

I decided to build a color correction filter that covered most of the standard adjustments you need with the usual types of footage. There are certainly a number of really good color correction/grading filters already on the market for FCP X. Apple’s own Color Board works well and with 10.2 has been broken out as a normal effects filter. However, a lot of folks don’t like its tab/puck/swatch interface and would still rather work with sliders or color wheels. So as an experiment, I built my own color correction filter for use with FCP X – and you may download it here and use it for free as well.

Let me point out that I am no Motion power user. I have nowhere near the skills of Mark Spencer, Simon Ubsdell or Alex Gollner when it comes to using Motion to its fullest. So all I’ve done is combine existing Motion filters into a single filter with zero modifications. But that’s the whole point and why this function has so much potential. A couple of these individual filters already exist singly within FCP X, but Motion has a lot more to choose from. Once you launch Motion, the starting point is to open a new Final Cut Effects project from the Motion project browser. This will default to a blank composition ready to have things added to it. Since I was creating a color correction filter, all I needed to do was select the existing Motion filters to use from the Library browser and drag-and-drop the choices into the composition.

I decided to combine Brightness, Contrast, Color Balance, Hue/Saturation and Tint, which were also stacked in that exact order. The next step in the process was to determine the state of the filter when you apply it and which parameters and sliders to publish. Items that are published, such as a slider, will show up in the inspector in FCP X and can be adjusted by the editor. In my case, I decided to publish every parameter in the stack. To publish, simply click on the right side edge of each parameter line and you’ll find a pulldown selection that includes a publish/unpublish toggle. Note that the order in which you click the publish commands will determine the order of how these commands are stacked when they show up inside FCP X. To make the most sense, I followed a straight sequence order, top to bottom.

You can also determine the starting state when you first apply or preview the effect. For example, whether a button starts out enabled or disabled. In the case of this filter, I’ve enabled everything and left it at a neutral or default value, with the exception of Tint. This starts in the ‘off’ position, because I didn’t want a color cast to be applied when you first add the filter to a clip. Once everything is set up, you simply save the effect to a desired location in the Motion Templates folder. You can subsequently open the Motion project from there to modify the effect. When it’s saved again, the changes are updated to the filter in FCP X.

If you’ve downloaded my effects filter, unzip the file and follow the Read Me document. I’ve created an “Oliver FX” category and this complete folder should be placed into the User/Movies/Motion Templates/Effects folder on your hard drive.

Applying the filter inside Final Cut Pro X is the same as any of the other effects options. It has the added benefit that all parameters can be keyframed. The Color Balance portion works like a 3-way color corrector, except that it uses the OS color picker wheels in lieu of a true 3-color-wheel interface. As a combination of native filters, performance is good without taxing the machine.
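
For the curious, the per-channel balance behind any 3-way-style corrector can be written compactly. One standard formulation is the ASC CDL transfer function, out = (in × slope + offset) ^ power; this sketch shows that generic math, not the code inside Motion’s Color Balance filter:

```python
def cdl(x, slope=1.0, offset=0.0, power=1.0):
    """One channel of an ASC CDL-style correction; input/output in 0..1.
    Roughly: offset acts on shadows (lift), power on midtones (gamma),
    slope on highlights (gain)."""
    v = max(0.0, x * slope + offset)
    return v ** power
```

Applying different slope/offset/power triples to R, G and B independently is what lets a 3-way control warm the shadows while cooling the highlights of the same shot.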

UPDATE (12 June 2015): I have added one additional filter into the download file. The second filter is called “Oliver DVE” and is designed to give you a full set of transform controls that include XYZ rotation. It comes from the transform control set included with Motion. This provides you with the equivalent of a 2.5D DVE, which is not available in the default control set of FCP X.

UPDATE 2 (15 June 2015) : These filters are not backward compatible. They will work in FCP X 10.1.2 and Motion 5.1.2 and forward (hopefully), but not in earlier versions. That’s due to technology changes between these versions. If you downloaded these prior to June 15, for FCP X 10.1.2 or 10.1.4 and they aren’t working, please download again. I have modified the files to work in FCP X 10.1.2 and later. Thank you.

Download the free “Oliver Color” and “Oliver DVE” filters here. My previously-created, free FCP X color board presets may be found here.

©2015 Oliver Peters