Audio Splits and Stems in Premiere Pro


When TV shows and feature films are being mixed, the final deliverables usually include audio stems as separate audio files or married to a multi-channel video master file or tape. Stems are the isolated submix channels for dialogue, sound effects and music. These elements are typically called DME (dialogue, music, effects) stems or splits and a multi-channel master file that includes these is usually called a split-track submaster. These isolated tracks are normally at mix level, meaning that you can combine them and the sum should equal the same level and mix as the final composite mixed track.

The benefit of having such stems is that you can easily replace elements, like re-recording dialogue in a different language, without having to dive back into the original audio project. The simplest form is to have 3 stereo stem tracks (6 mono tracks) for left and right dialogue, sound effects and music. Obviously, if you have a 5.1 surround mix, you’ll end up with a lot more tracks. There are also other variations for sports or comedy shows. For example, sports shows often isolate the voice-over announcer material from on-camera dialogue. Comedy shows may isolate the laugh track as a stem. In these cases, rather than 3 stereo DME stems, you might have 4 or more. In other cases, the music and effects stems are combined to end up with a single stereo M&E track (music and effects minus dialogue).
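
To make the mix-level idea concrete, here is a toy Python sketch (the sample values are invented for illustration) showing that stems recorded at mix level sum back to the composite mix:

```python
# Hypothetical integer sample values for three stems at mix level.
dialogue = [200, 180, 0, 50]
effects = [50, 0, 300, 100]
music = [100, 120, 150, 150]

# Because the stems are at mix level, their sample-for-sample sum
# equals the final composite mixed track.
full_mix = [d + e + m for d, e, m in zip(dialogue, effects, music)]
print(full_mix)  # -> [350, 300, 450, 300]
```

In the real world the summing happens on the mix bus, of course, but the principle is the same: stems plus stems equals mix, with no level offsets.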

Although this is common practice for entertainment programming, it should also be common practice if you work in short films, corporate videos or commercials. Creating such split-track submasters at the time you finish your project can often save your bacon at some point down the line. I ran into this during the past week. A large corporate client needed to replace the music tracks on 11 training videos. These videos were originally edited in 2010 using Final Cut Pro 7 and mixed in Pro Tools. Although it may have been possible to resurrect the old project files, doing so would have been problematic. However, in 2010, I had exported split-track submasters with the final picture and isolated stereo tracks for dialogue, sound effects and music. These have become the new source for our edit – now 6 years later. Since I am editing these in Premiere Pro CC, it is important to also create new split-track submasters, with the revised music tracks, should we ever need to do this again in the future.

Setting up a new Premiere Pro sequence 

I’m usually editing in either Final Cut Pro X or Premiere Pro CC these days. It’s easy to generate a multi-channel master file with isolated DME stems in FCP X, by using the Roles function. However, to do this, you need to make sure you properly assign the correct Roles from the get-go. Assuming that you’ve done this for dialogue, sound effects and music Roles on the source clips, then the stems become self-sorting upon export – based on how you route a Role to its corresponding export channel. When it comes to audio editing and mixing, I find Premiere Pro CC’s approach more to my liking. This process is relatively easy in Premiere, too; however, you have to set up a proper sequence designed for this type of audio work. That’s better than trying to sort it out at the end of the line.

The first thing you’ll need to do is create a custom preset. By default, sequence presets are configured with a certain number of tracks routed to a stereo master output. This creates a 2-channel file on export. Start by changing the track configuration to multi-channel and set the number of output channels. My requirement is to end up with an 8-channel file that includes a stereo mix, plus stereo stems for isolated dialogue, sound effects and music. Next, add the number of tracks you need and assign them as “standard” for the regular tracks or “stereo submix” for the submix tracks.

This is a simple example with 3 regular tracks and 3 submix tracks, because this was a simple project. A more complete project would have more regular tracks, depending on how much overlapping dialogue or sound effects or music you are working with on the timeline. For instance, some editors like to set up “zones” for types of audio. You might decide to have 24 timeline tracks, with 1-8 used for dialogue, 9-16 for sound effects and 17-24 for music. In this case, you would still only need 3 submix tracks for the aggregate of the dialogue, sound effects and music.

Rename the submix tracks in the timeline. I’ve renamed Submix 1-3 as DIA, SFX and MUS for easy recognition. With Premiere Pro, you can mix audio in several different places, such as the clip mixer or the audio track mixer. Go to the audio track mixer and assign the channel output and routing. (Channel output can also be assigned in the sequence preset panel.) For each of the regular tracks, I’ve set the pulldown for routing to the corresponding submix track. Audio 1 to DIA, Audio 2 to SFX and Audio 3 to MUS. The 3 submix tracks are all routed to the Master output.

The last step is to properly assign channel routing. With this sequence preset, master channels 1 and 2 will contain the full mix. First, when you export a 2-channel file as a master file or a review copy, by default only the first 2 output channels are used. So these will always get the mix without you having to change anything. Second, most of us tend to edit with stereo monitoring systems. Again, output channels 1 and 2 are the default, which means you’ll always be monitoring the full mix, unless you make changes or solo a track. Output channels 3-8 correspond to the stereo stems. Therefore, to enable this to happen automatically, you must assign the channel output in the following configuration: DIA (Submix 1) to 1-2 and 3-4, SFX (Submix 2) to 1-2 and 5-6, and MUS (Submix 3) to 1-2 and 7-8. The result is that everything goes to both the full mix, as well as the isolated stereo channel for each audio component – dialogue, sound effects and music.
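
If it helps to see the routing as a table, here is a small Python sketch of the channel assignments just described (channel numbers are the output channels of the 8-channel master):

```python
# The output-channel routing described above, as data. Channels 1-2
# carry the full mix; 3-8 carry the isolated stereo stems.
routing = {
    "DIA": [(1, 2), (3, 4)],  # Submix 1
    "SFX": [(1, 2), (5, 6)],  # Submix 2
    "MUS": [(1, 2), (7, 8)],  # Submix 3
}

# Every submix feeds the full-mix pair (1-2)...
assert all((1, 2) in pairs for pairs in routing.values())

# ...and each stem pair (3-4, 5-6, 7-8) is fed by exactly one submix.
stem_pairs = [p for pairs in routing.values() for p in pairs if p != (1, 2)]
assert sorted(stem_pairs) == [(3, 4), (5, 6), (7, 8)]
```

The two assertions capture the two properties you care about: channels 1-2 always carry the composite mix, and each DME element lands on its own isolated stereo pair.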

Editing in the custom timeline

Once you’ve set up the timeline, the rest is easy. Edit any dialogue clips to track 1, sound effects to track 2 and music to track 3. In a more complex example, like the 24-track timeline I referred to earlier, you’d work in the “zones” that you had organized. If 1-8 are routed to the dialogue submix track, then you would edit dialogue clips only to tracks 1-8. Same for the corresponding sound effects and music tracks. Clip levels can still be adjusted as you normally would. But, by having submix tracks, you can adjust the level of all dialogue by moving the single DIA submix fader in the audio track mixer. This can also be automated. If you want a common filter added to all of one stem – like a compressor across all sound effects – simply assign it from the pulldown within that submix channel strip.
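
For the larger 24-track example, the track-to-submix “zones” could be sketched like this (a hypothetical helper for illustration, not anything Premiere itself provides):

```python
# Hypothetical "zone" map for a 24-track timeline:
# tracks 1-8 feed the DIA submix, 9-16 feed SFX, 17-24 feed MUS.
def submix_for_track(track):
    if 1 <= track <= 8:
        return "DIA"
    if 9 <= track <= 16:
        return "SFX"
    if 17 <= track <= 24:
        return "MUS"
    raise ValueError("track outside the 24-track layout")

print(submix_for_track(3), submix_for_track(12), submix_for_track(20))
# -> DIA SFX MUS
```

The point is that the discipline lives in the layout: as long as clips land in the right zone, the submix routing takes care of the rest.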

Exporting the file

The last step is exporting your split-track submaster file. If this isn’t correct, the rest was all for naught. The best formats to use are either a QuickTime ProRes file or one of the MXF OP1a choices. In the audio tab of the export settings panel, change the pulldown channel selection from Stereo to 8 channels. Now each of your timeline output channels will be exported as a separate mono track in the file. These correspond to your 4 stereo mix groups – the full mix plus stems. Now in one single, neat file, you have the final image and mix, along with the isolated stems that can facilitate easy changes down the road. Depending on the nature of the project, you might also want to export versions with and without titles for an extra level of future-proofing.

Reusing the file

If you decide to use this exported submaster file at a later date as a source clip for a new edit, simply import it into Premiere Pro like any other form of media. However, because its channel structure will be read as 8 mono channels, you will need to modify the file using the Modify > Audio Channels contextual menu (right-click the clip). Change the clip channel format from Mono to Stereo, which turns your 8 mono channels back into the left and right sides of 4 stereo channels. You may then ignore the remaining “unassigned” clip channels. Do not change any of the check boxes.
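
The mono-to-stereo regrouping is easy to picture as code (just a sketch of the pairing; Premiere does this internally when you change the clip channel format):

```python
# The submaster's 8 mono channels pair off into 4 stereo groups:
# 1-2 full mix, 3-4 dialogue, 5-6 effects, 7-8 music.
def pair_channels(mono_channels):
    return [tuple(mono_channels[i:i + 2]) for i in range(0, len(mono_channels), 2)]

print(pair_channels([1, 2, 3, 4, 5, 6, 7, 8]))
# -> [(1, 2), (3, 4), (5, 6), (7, 8)]
```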

Hopefully, by following this guide, you’ll find that creating timelines with stem tracks becomes second nature. It can sure help you years later, as I found out yet again this past week!

©2016 Oliver Peters

Film Editor Techniques


Editing is a craft that each editor approaches with similarities and differences in style and technique. If you follow my editor interviews or those at Steve Hullfish’s Art of the Cut series, then you know that most of the top editors are more than willing to share how they do things. This post will go through a “baker’s dozen” set of tips and techniques that hopefully will help your next, large project go just a bit more smoothly.

Transcoding media. While editing with native media straight from the camera is all the rage in the NLE world, it’s the worst way to work on long-term projects. Camera formats vary in how files are named, what the playback load is on the computer, and so on. It’s best to create a common master format for all the media in your project. If you have really large files, like 4K camera media, you might also transcode editing proxies. Cut with these and then flip to the master quality files when it comes time to finish.

Transcode audio. In addition to working with common media formats, it’s a good practice to get all of your audio into a proper format. Most NLEs can deal with a mix of audio formats, bit depths and sample rates, but that doesn’t mean you should. It’s quite common to get VO and temp music as MP3 files with 44.1kHz sampling. Even though your NLE may work with this just fine, it can cause problems with sync and during audio post later. Before you start working with audio in your project, transcode it to .wav or .aif formats with 48kHz sampling and 16-bit or 24-bit bit-depth. Higher sampling rates and bit-depths are OK if your NLE can handle them, but they should be multiples of these values.
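
The rule of thumb can be written as a quick checker (a hypothetical helper, purely for illustration): the sample rate should be 48kHz or an integer multiple of it, and the bit depth at least 16.

```python
# Hypothetical sanity check for audio headed into an edit:
# 48kHz (or an integer multiple) and 16/24/32-bit are safe for post.
def audio_spec_ok(sample_rate, bit_depth):
    return sample_rate % 48000 == 0 and bit_depth in (16, 24, 32)

print(audio_spec_ok(44100, 16))  # MP3-sourced VO at 44.1kHz -> False
print(audio_spec_ok(96000, 24))  # 96kHz is a 48kHz multiple -> True
```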

Break up your project files by reel. Most films are broken down into 20-minute “reels”. Typically a feature will have five or six reels that make up the entire film. This is an old-school approach that goes back to the film days, yet it’s still a good way to work in the modern digital era. How this is done differs by NLE brand.

With Media Composer, the root data file is the bin. Therefore, each film reel would be a separate timeline, quite possibly placed into a separate bin. This facilitates collaboration among editors and assistants using different systems, but still accessing the same project file. Final Cut Pro X and Premiere Pro CC don’t work this way. You cannot share the exact same FCPX library or Premiere Pro project file between two editors at one time.

In Final Cut Pro X, the library file is the basic data file/container, so each reel would be in its own library with a separate master library that contains only the final edited sequence for each of the reels. Since FCPX editors can open multiple libraries, it’s possible to work across reels this way or to have different editors open and work on different libraries independent of each other.

With Premiere you can only have a single project file open at one time. When a film is broken into one reel per project, it becomes easy for editors and assistants to work collaboratively. Then a master project can be created to import the final version of each reel’s timeline to create the combined film timeline. Media Browser within Premiere Pro should be used to access sequences from within other project files and import them into a new project.

Show/hide, sifting and sorting. Each NLE has its own way of displaying or hiding clips and subclips. Learning how to use these controls will help you speed up the organization of the media. Final Cut Pro X has a sophisticated method of assigning “favorites” and “rejects” to clips and ranges within clips. You can also assign keywords. By selecting what to see and to hide, it’s easy to cull a mass of footage into the few, best options. Likewise with Media Composer and Premiere Pro, you can show and hide clips and also sort by custom column criteria. Media Composer includes a custom sift feature, which is a filtering solution within the bin. It is easy to sift a bin by specific data in certain columns. Doing so hides everything else and reveals only the matching set of media on a per-bin basis.

Stringouts. A stringout is a sequence of selected footage. Many editors use stringouts as the starting point and then whittle down the scene from there. For example, Kirk Baxter likes his assistants to create a stringout for a dialogue scene that is broken down by line and camera. For each line of dialogue, you would see every take and camera angle covering that line of dialogue from wide to tight. Then the next line of dialogue and so on. The result is a very long sequence for the scene, but he can quickly assess the performance and best angle for each portion of the scene. Then he goes through and picks his favorites by pushing the video clip up one track for quick identification. The assistant then cleans up the stringout by creating a second version containing only these selected clips. Now the real cutting can begin.

Julian Clarke has his assistants create a similar stringout for action scenes. All takes and angles are organized back-to-back matching the choreography of the action. So – every angle/take for each crash or blast or punch within the scene. From these he has a clear idea of coverage and how to proceed cutting the scene, which otherwise might have an overwhelming amount of footage at first glance.

I use stringouts a lot for interview-driven documentaries. One sequence per person with everything. The second and third stringouts are successive cutdowns from that initial all-inclusive stringout. At this stage I start combining portions of sequences based on topics for a second round of stringouts. These will get duplicated and then culled, trimmed and rearranged as I refine the story.

Pancakes and using sequences as sources. When you use stringouts, it’s common to have one sequence become the source for another sequence. There are ways to handle this depending on your NLE. Many will nest the source sequence as a single clip on the new timeline. I contend that nesting should be avoided. Media Composer only allows one sequence in the “record” window to be active at any one time (no tabbed timeline). However, you can also drag a sequence to the source window and its tracks and clips can be viewed by toggling the timeline display between source and record. At least this way you can mark ins and outs for sections. Both Final Cut Pro “legacy” and Premiere Pro enable several sequences to be loaded into the timeline window where they are accessible through tabs. Final Cut Pro X dropped this feature, replacing it with a timeline history button to step forward or backward through several loaded sequences. To go between these sequences in all three apps, copy-and-paste is typically the best way to bring clips from one sequence into another.

One innovative approach is the so-called “pancake” timeline, popularized by editor/blogger Vashi Nedomansky. Premiere Pro permits you to stack two or more timelines into separate panels. The selected sequence becomes active in the viewer at any given time. By dragging between timeline panels, it is possible to edit from one sequence to another. This is a very quick and efficient way to edit from a longer stringout of selects to a shorter one with culled choices.

Scene wall. Walter Murch has become synonymous with the scene wall, but in fact, many editors use this technique. In a scene wall, a series of index cards for each scene is placed in story order on a wall or bulletin board. This provides a quick schematic of the story at any given time during the edit. As you remove or rearrange scenes, it’s easy to see what impact that will have. Simply move the cards first and review the wall before you ever commit to doing the actual edit. In addition, with the eliminated cards (representing scenes) moved off to the side, you never lose sight of what material has been cut out of the film. This is helpful to know, in case you want to go back and revisit those.

Skinning, i.e. self-contained files. Another technique Murch likes to use is what he calls adding a skin to the topmost track. The concept is simple. When you have a lot of mixed media and temp effects, system performance can be poor until rendered. Instead of rendering, the timeline is exported as a self-contained file. In turn, that is re-imported into the project and placed onto the topmost track, hiding everything below it. Now playback is smooth, because the system only has to play this self-contained file. It’s like a “skin” covering the “viscera” of the timeline clips below it.

As changes are made to add, remove, trim or replace shots and scenes, an edit is made in this self-contained clip and the ends are trimmed back to expose the area in which changes are being made. Only the part where “edit surgery” happens isn’t covered by the “skin”, i.e. self-contained file. Next a new export is done and the process is repeated. By seeing the several tracks where successive revisions have been made to the timeline, it’s possible to track the history of the changes that have been made to the story. Effectively this functions as a type of visual change list.

Visual organization of the bin. Most NLEs feature list and frame views of a bin’s contents. FCPX also features a filmstrip view in the event (bin), as well as a full strip for the selected clip at the top of the screen when in the list view. Unfortunately, the standard approach is for these to be arranged based on sorting criteria or computer defaults, not by manual methods. Typically the result is a neatly tiled view. But, of course, the decision-making process can be messy.

Premiere Pro at least lets you manually rearrange the order of the tiles, but none of the NLEs is as freeform as Media Composer. The bin’s frame view can be a completely messy affair, which editors use to their advantage. A common practice is to move all of the selected takes up to the top row of the bin and then have everything else pulled lower in the bin display, often with some empty space in between.

Multi-camera. It is common practice, even on smaller films, to shoot with two or more cameras for every scene. Assuming these are used for two angles of the same subject, like a tight and a wide shot on the person speaking, then it’s best to group these as multi-camera clips. This gives you the best way to pick among several options. Every NLE has good multi-camera workflow routines. However, there are times when you might not want to do that, such as in this blog post of mine.

Multi-channel source audio. Generally sound on a film shoot is recorded externally with several microphones being tracked separately. A multi-channel .wav file is recorded with eight or more tracks of materials. The location sound mixer will often mix a composite track of the microphones for reference onto channel one and/or two of the file. When bringing this into the edit, how you handle it will vary with each NLE.

Both Media Composer and Premiere Pro will enable you to merge audio and picture into synchronized clips and select which channels to include in the combined file. Since it’s cumbersome to drag along eight or more source channels for every edit in these track-based timelines, most editors will opt to only merge the clips using channel one (the mixed track) of the multi-channel .wav file. There will be times when you need to go to one of the isolated mics, in which case a match-frame will get you back to the source .wav, from which you can pull the clean channel containing the isolated microphone. If your project goes to a post-production mixer using Pro Tools, then the mixer normally imports and replaces all of the source audio with the multi-channel .wav files. This is common practice when the audio work done by the picture editor is only intended to be used as a temp mix.

With Final Cut Pro X, source clips always show up as combined a/v clips, with multi-channel audio hidden within this “container”. This is just as true with synchronized clips. To see all of the channels, expand the clip or select it and view the details in the inspector. This way the complexity doesn’t clog the timeline and you can still selectively turn on or off any given mic channel, as well as edit within each audio channel. No need to sync only one track or to match-frame back to the audio source for more involved audio clean-up.

Multi-channel mixing. Most films are completed as 5.1 surround mixes – left, center, right, left rear surround, right rear surround, and low-frequency effects (subwoofer). Films are mixed so that the primary dialogue is mono and largely in the center channel. Music and effects are spread to the left and right channels with a little bit also in the surrounds. Only loud, low frequencies activate the subwoofer channel. Usually this means explosions or some loud music score with a lot of bottom. In order to better approximate the final mix, many editors advocate setting up their mixing rooms for 5.1 surround or at least an LCR speaker arrangement. If you’ve done that, then you need to mix the timeline accordingly. Typically this would mean mono dialogue into the center channel and effects and music to the left and right speakers. Each of these NLEs supports sequence presets for 5.1, which would accommodate this edit configuration, assuming that your hardware is set up accordingly.
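
As a sketch, the stem-to-speaker convention described above can be written out as data (an illustration of the common convention, not a preset pulled from any particular NLE):

```python
# Conventional 5.1 placement of the mix elements described above.
placement = {
    "dialogue": ["C"],                   # mono dialogue, center channel
    "effects": ["L", "R", "Ls", "Rs"],   # mostly L/R, a little in surrounds
    "music": ["L", "R", "Ls", "Rs"],
    "low_end": ["LFE"],                  # explosions, heavy score bottom
}
```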

Audio – organizing temp sound. It’s key that you organize the sounds you use in the edit in such a way that it is logical for other editors with whom you may be collaborating. It should also make sense to the post-production mixer who might do the final mix. If you are using a track-based NLE, then structure your track organization on the timeline. For example, tracks 1-8 for dialogue, tracks 9-16 for sound effects, and tracks 17-24 for music.

If you are using Final Cut Pro X, then it’s important to spend time with the roles feature. If you correctly assign roles to all of your source audio, it doesn’t matter what your timeline looks like. Once properly assigned, the selection of roles on output – including when using X2Pro to send to Pro Tools – determines where these elements show up on an exported file or inside of a Pro Tools track sheet. The most basic roles assignment would be dialogue, effects and music. With multi-channel location recordings, you could even assign a role or subrole for each channel, mic or actor. Spending a little of this time on the front end will greatly improve efficiency at the back end.

For more ideas, click on the “tips and tricks” category or start at 12 Tips for Better Film Editing and follow the bread crumbs forward.

©2016 Oliver Peters

From the Trenches


Not all editing is done in comfy edit suites with powerful workstations. Many film editors cut on location for part of the film. Plenty of other editors make their living on the corporate circuit cutting convention and conference highlights, “happy faces” videos, and recaps for social media. If you are one of the latter group, much of your work is done in hotel rooms, ad hoc media centers set up in conference rooms, and/or backstage in the bowels of some convention center. Your gigs bounce between major cities and resorts around the country and sometimes the world. Learning to travel light, but without compromise, is important.

I work a number of these events each year. In the past, the set-up was usually a full-blown Avid Media Composer system and decks – often augmented by a rack of VHS decks for dubs on-site. Today, more often than not, a laptop with accessories will do the trick. Media is all file-based and final copies get delivered on USB thumb drives or straight to the web. The following road warrior tips will help any editor who has to cut on the run.

Nail down the deal. Before you commit, find out who is supplying the gear. If you are expected to bring editing gear, then define the rate and what is expected of you. Most of these gigs are on a 10-hour day-rate, but define the call and end times. If the camera crew gets an early start, your call time (and the start of 10 hours) might not be until noon. Make sure you know who is covering meals and how the expenses are being handled (air fare, hotels, car rental, per diem, mileage, parking, etc.).

Which NLE to use. This is often a matter of personal preference, but on some jobs, one over the other becomes the client’s decision. A set-up with several editors working in collaboration is best handled using Avid Media Composer and Avid or Facilis shared storage. Live ingest and quick turnaround of the CEO’s keynote is also best handled with Avid Media Composer and Avid i/o hardware. When those are the criteria, odds are the production company will be supplying the gear. You just have to know how to use it. Apple Final Cut Pro X is great for a lot of the convention video work being done, but I’ve also had clients specify Adobe Premiere Pro, just so everything is compatible with their home operation or with the other editors on the same gig.

Remember that with Adobe Creative Cloud, the software needs to “phone home” monthly for continued authorization. This requires an internet connection to log-in. If, for some reason, you are going to be out of internet contact for more than a month, this could become an issue, because your software will kick into the trial mode. That may be sufficient to get you through the project, but maybe not.

Video standards. Before you start any editing, make sure everyone is on the same page. Typically video packages are going to be produced and finished as 1080p/29.97 or 1080p/23.976. However, if you are recutting the highlights from a large corporate keynote presentation, odds are this was recorded with broadcast cameras, meaning that the project will be 1080i/59.94. But don’t assume that a US conference is always going to be only using US TV standards. A large European company holding an event in the US might want to stay consistent with their other videos. Therefore, you might be working in 1080p/25. As with many things in life – don’t assume.
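
One detail worth remembering about those numbers: the broadcast-family rates are not really 29.97 or 23.976, they are exact 1000/1001 rationals, which is why mixing “true” 24p/30p material with broadcast-rate material causes drift over time. A quick Python check:

```python
from fractions import Fraction

# The "broadcast family" frame rates are exact 1000/1001 rationals.
rates = {
    "23.976": Fraction(24000, 1001),
    "29.97": Fraction(30000, 1001),
    "59.94": Fraction(60000, 1001),
}
for label, exact in rates.items():
    print(label, float(exact))
```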

Mac versus PC. Inevitably you’ll run up against the compatibility issue of passing files and drives between Macs and PCs. Maybe you are in a team of editors with mixed systems. Maybe you have a Mac, but the client is PC-based. Whatever the circumstance, know how you are going to exchange files. Some folks trust ExFAT-formatted drives to work cross-platform. On a recent job, only the PCs could mount the LaCie drives that were formatted as ExFAT, whereas Seagate drives were OK on both platforms. Go figure. My recommendation is to have all the Macs install Tuxera (to read NTFS drives) and have the PCs install MacDrive (to read HFS+).

Media drives. Speaking of drives, who is supplying the drives for the edit? Are you expected to bring them or use your internal drive? Is the client supplying drives and will they be compatible? If you are supplying the drives, you probably still need to factor in time to copy the media, projects and final masters back to the customer’s drive at the end of the job. All of this means connection compatibility is important. Newer Macs work with USB 3.0 (compatible with USB 2.0) and Thunderbolt 1 and 2. Most newer PC laptops generally only connect to USB devices, with a few that also support Thunderbolt and/or eSATA.

Unless you are all Mac-based, drives that connect via USB 3.0 will give you sufficient speed and will connect cross-platform. If you own a Mac with Thunderbolt, then it’s worthwhile to pick up a few adapters. For example, I have FireWire 800, Ethernet, DVI and VGA adapters and they’ve all been useful. In addition, my MacBook Pro will also connect via HDMI to external video and audio monitors.

Camera card readers. When it comes to media, card readers are another concern. Since post is largely file-based these days, you are going to have to be able to read the media and the files provided by the camera crew. As an editor, you can’t really be expected to provide a reader for every possible camera type. For instance, Canon C300 cameras record to CF cards, which will normally plug into most common, multi-card readers that plug in via USB. However, if the crew is using an ARRI Amira that records to CFast cards, then you’ll need a different reader. Same if they were using something that records to Pak, P2, SxS and other media types. Therefore, it’s best when the camera crew supplies a reader that matches their camera. That’s something you need to get straight with the company hiring you before the gig starts.

Peripherals. It’s helpful to bring along some extra goodies. In addition to a generic multi-card reader and Thunderbolt adapters, other useful items include extra USB sticks, a generic USB (or better yet, USB 3.0) hub adapter, and possibly a video i/o device. While most of the time you don’t need to capture live video or feed masters out to external recorders, there will be times where live capture or monitoring is required. When it isn’t, then cutting from your laptop screen is fine and playing it full screen for review will also work. However, if you do need to do this, then small i/o devices from Blackmagic Design or AJA are your best bet. If you want more screen real estate then Duet Display is an app to turn an iPad into a second desktop screen.

Audio. While video monitoring isn’t that tough with a 15” laptop – even for client review – audio monitoring is a different story. Unless you’re cutting in a quiet hotel room, odds are you are going to be in a noisy environment, like backstage. If you are working with a team of editors, the noise factor just went up – all of which means headphones are essential. If you like to travel as light as possible, then you might try to get by with ear buds, but those can be very uncomfortable if you wear them all day long – not to mention bad for your hearing. An alternative to consider might be in-ear monitors like the ones Fender now makes.

Ultimately it’s personal preference, until it’s time to work with the producer or review the cut with the client. For a two-person operation, you might consider bringing a Y-shaped headphone adapter and a second set of lightweight headphones. Obviously you aren’t going to want to share in-ear monitors the way you might a large-cup standard headphone.

When it comes time to review the cut with the client – often two or more people looking over your shoulder at the laptop screen – then headphones won’t work. You are stuck momentarily using the laptop’s sound system. In the case of Apple MacBook Pros, the max volume is completely unacceptable in professional use. At times I’ve brought small external powered speakers, but that’s extra weight and room. Another solution I’ve used, which has worked reasonably well, is a single small, battery-powered “boom box” style amplified speaker that connects via Bluetooth. It packs a lot of oomph in a tiny footprint. The only downside is that sync during playback from an NLE timeline is very rubbery at times, although most clients will excuse that when it’s just for a quick review. With exported files, it’s fine.

Depending on the job, you might also need a microphone for scratch (or maybe even final) voice-over recordings. The Shure MOTIV series is worth considering. These mics are designed for the “iDevices”, but will also connect to Macs and PCs via USB. Another solution for down-and-dirty recordings would be a cheap webcast USB mic.

Graphics. Most editors are not very good graphic designers. We tend to work best when provided with templates and packaged branding elements. However, this means that you, as the editor, have to be compatible with what the client supplies. If the elements are supplied to you as editable After Effects or Photoshop files, then it’s essential that you have the latest versions of those applications installed – an old version of Photoshop, or using Affinity Photo instead, won’t cut it. On a recent job, the client supplied lower third templates as animated Photoshop files. This worked like a champ and was yet another example of how Adobe Creative Cloud integration between applications is one of the best in the business. However, this only worked because I was current on all of the Creative Cloud apps I had installed – and not just Premiere Pro CC.

Music. Try to make sure you are using licensed music that has been provided by the client. Corporate events are notorious for skirting around music licensing in the belief that, because it’s a closed conference, this is fine. Or that they are somehow covered under Fair Use guidelines (they aren’t). As an editor, you may not have control over this, but you should make sure to only use music that has been supplied to you by the client or purchased by the client from a stock music source during the course of the project.

Internet. Since many of these conference videos are intended for quick turnaround to the web, having a fast pipe to the internet is essential. Tying your edit machine to the center’s wifi isn’t going to be fast enough. A high quality 1080p MP4 that’s several minutes long will be several hundred MB in size. If you have several of these to upload each day, fast upload speeds are critical. Usually this means that the production company is going to have to pay for a dedicated line and ethernet cabling. This needs to be available past the point that the conference floor itself closes, since post will still be going on awhile longer. From the editor’s standpoint, this requires a machine that can accept an ethernet cable, which is getting harder to come by on both Mac and PC laptops. For the MacBook Pros, you can get a Thunderbolt-to-Ethernet adapter, which works flawlessly.
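To put rough numbers on that, here is the back-of-the-envelope math. The bitrate and link speeds below are my own assumptions for illustration, not figures from any delivery spec:

```python
# Rough upload math for conference turnaround (all numbers are assumptions):
# a high-quality 1080p MP4 encoded at ~20 Mb/s, five minutes long.
bitrate_mbps = 20          # assumed video bitrate in megabits per second
duration_sec = 5 * 60      # a five-minute piece

file_size_mb = bitrate_mbps * duration_sec / 8   # size in megabytes
print(f"File size: ~{file_size_mb:.0f} MB")      # ~750 MB - "several hundred MB"

# Upload time at two assumed link speeds: shared venue wifi vs. a dedicated line.
for upload_mbps in (5, 100):
    minutes = (file_size_mb * 8) / upload_mbps / 60
    print(f"At {upload_mbps} Mb/s up: ~{minutes:.0f} minutes per file")
```

At an assumed 5 Mb/s of shared wifi, that single file takes about 20 minutes to push; at 100 Mb/s on a dedicated line, about a minute. Multiply by several files a day and the dedicated line pays for itself.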

Schedule. Last, but not least, there’s the schedule. What can realistically be accomplished in the allotted time? Most corporate clients are not production people. They have no idea of what’s involved when they decide to interview and edit a given number of people in the course of a day. Even if the editor starts later, it still requires a producer to be involved in both the shoot and the edit. Realistically there should be a 50/50 ratio of shoot time to edit time. Naturally that’s not always possible, but this allows enough time to cut the piece, make tweaks, get approval, and then encode. On a fast laptop, the time required to encode high quality MP4 files is roughly the running time of the piece. Therefore, if you have an hour of total edited content, you’ll need to allow for at least an hour of encoding. Add to this upload time and the back-up to the client’s drives, and you have a rather packed schedule.
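The schedule arithmetic above can be sketched with some assumed numbers (a six-hour shoot day, an hour of finished content; the upload and back-up figures are my own placeholders):

```python
# Rough post schedule for one conference day (all inputs are assumptions).
shoot_hours = 6                    # interview/shoot time during the day
edit_hours = shoot_hours           # the ~50/50 shoot-to-edit ratio described above
content_minutes = 60               # total edited running time
encode_minutes = content_minutes   # high-quality MP4 encodes ~ real time on a fast laptop
upload_minutes = 45                # assumed; depends entirely on the pipe
backup_minutes = 30                # assumed; copying masters to the client's drives

total_hours = edit_hours + (encode_minutes + upload_minutes + backup_minutes) / 60
print(f"Post time needed: ~{total_hours} hours")
```

Even with generous assumptions, a six-hour shoot turns into more than eight hours of post before the day is done, which is why the "packed schedule" warning above is not an exaggeration.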

©2016 Oliver Peters

Final Cut Pro X Keyboard Tips


As most editors know, customizing your NLE keyboard settings can improve efficiency in how you use that tool. Final Cut Pro X already gives you a large number of existing commands mapped to the keyboard, but, as with any NLE, they are not all in places that work best for every editor. Therefore, it’s preferable to customize the toolset for what will work best for you.

The choices among the basic keys plus the modifier keys are extensive, but interestingly the “F” or function keys are not already mapped. This leaves you fertile ground to add your own commands without changing the standard map. Of course, the number of function keys you have depends on your keyboard. The Apple extended keyboard (the one with the number keypad) has 19 function keys. The smaller Bluetooth keyboards only have 12.

In my case, I’ve decided to map a few useful tools to some of the F-keys, as well as the shifted-function positions. Most of these are interface-related, but not entirely. FCPX doesn’t let you create and save custom workspace layouts like FCP7 or Adobe Premiere Pro CC; however, there are a lot of interface panels that can be displayed or hidden depending on the task at hand. By mapping these tools to the function keys, you get nearly the same effect as swapping workspaces, because each keystroke reconfigures your screen layout. Unfortunately, you still can’t move modules from where Apple has chosen to have them appear.

I work with dual screens more often than not in a fixed edit bay. This lets me get the most out of the various FCPX windows and modules. If you own both an Apple laptop and an iPad, the Duet Display app also enables you to pair the two devices into a dual display arrangement. This eliminates the need to drag along an external screen for location editing gigs. Therefore, you can still get the maximum benefit of these layouts.

Here are the commands I’ve currently mapped. These work for me and, of course, might change in the future as I tweak my workflow. You’ll note that a number of these commands already have existing keyboard locations, so mapping them to a function key is redundant. Quite true, but I find that grouping them along the F-key row makes switching between them easier, and you’ll be more likely to use them as a result.

The basic F-key row (no modifier key):

F1 – Show Events on Second Display

F2 – Show/Hide Viewer on Second Display

F3 – Show/Hide Event Viewer

F4 – Show/Hide Inspector

F5 – Show/Hide Effects Browser

F6 – Show/Hide Timeline Index

F7 – Show/Hide Video Scopes

F8 – Replace from Start*

F9 – Replace from End*

*These last two selections are edit commands.

The F-key row plus the Shift modifier:

Shift-F1 – Clip Appearance: Waveforms Only

Shift-F2 – Clip Appearance: Large Waveforms

Shift-F3 – Clip Appearance: Waveforms and Filmstrips

Shift-F4 – Clip Appearance: Large Filmstrips

Shift-F5 – Clip Appearance: Filmstrips Only

Shift-F6 – Clip Appearance: Clip Labels Only

As you can see, I use the shifted function keys to switch between the various timeline appearance settings that are available from the lower right pop-up menu. It’s nice that once you adjust the size of the filmstrips or waveforms using the slider, the setting stays until changed. Therefore, you can go between a large waveform view and the thin clip label (aka “chiclet”) view simply by switching between these keystrokes.

Customized keyboard maps can be saved and recalled for easy access. You can also create more than one customized keyboard preset.

©2016 Oliver Peters

Adobe Premiere Pro CC Tips


Adobe Premiere Pro CC is the dominant NLE that I encounter amongst my clients. Editors who’ve shifted over from Final Cut Pro “classic” may have simply transferred existing skills and working methods to Premiere Pro. This is great, but it’s easy to miss some of the finer points in Premiere Pro that will make you more productive. Here are seven tips that can benefit nearly any project.

LUTs/Looks – With the addition of the Lumetri Color panel, it’s easy to add LUTs into your color correction workflow. You get there through the Color workspace preset or by applying a Lumetri Color effect to a timeline clip. Import a LUT from the Basic Correction or Creative section of the controls. From here, browse to any stored LUT on your hard drive(s) and it will be applied to the clip. There are plenty of free .cube LUTs floating around the web. However, you may not know that Look files, created through Adobe SpeedGrade CC in the .look format, may also be applied within the Creative section. You can also find a number of free ones on the web, including a set I created for SpeedGrade. Unlike LUTs, these also support effects used in SpeedGrade.

Audio Mixing – Premiere Pro features very nice audio tools and internal audio mixing is a breeze. I typically use three filters on nearly every mix I create. First, I will add a basic dynamic compressor to all of my dialogue tracks. To keep it simple, I normally use the default preset. Second, I will add an EQ filter to my music tracks. Here, I will set it to notch out the midrange slightly, which lets the dialogue sit a bit better in the mix. Finally, I’ll add limiting to the master track. Normally this is set to soft clip at -10 dB. If I have specific loudness specs as part of my delivery requirements, then I’ll route my mix through a submaster bus first and apply the limiting to the submaster. I will apply the Loudness Radar meter to the master bus and adjust accordingly to be compliant.

Power windows – This is a term that came from DaVinci Resolve, but is often used generically to talk about building up a grade on a shot by isolating areas within the image. For example, brightening someone’s face more so than the overall image. You can do this in Premiere Pro by stacking up more than one Lumetri Color effect onto a clip. Start by applying a Lumetri Color effect and grade the overall shot. Next, apply a second instance of the effect and use the built-in Adobe mask tool to isolate only the selection that you want to add the second correction to, such as an oval around someone’s head. Tweak color as needed. If the shot moves around, you can even use the internal tracker to have your mask follow the object. Do you have another area to adjust? Simply add a third effect and repeat the process.

Export/import titles – Premiere Pro titles are created in the Title Designer module and these titles can be exported as separate metadata files (.prtl format). Let’s say that you have a bunch of titles that you plan to use repeatedly on new projects, but you don’t want to carry them forward from one project to the next. You can do this more simply by exporting and re-importing the title’s data file. Simply select the title in the bin and then File/Export/Title. The hitch is that Adobe’s Media Browser will not recognize the .prtl format, so the easiest way to import it into a new project is to drag it from its Finder location straight into the new Premiere Pro project. This will create a new title inside of the new project. Both instances of this title are unique, so editing the title in one project won’t affect how it appears elsewhere.

Replace with clip – I work on a number of productions where there’s a base version of a commercial and then a lot of versions with small changes to each. A typical example is a spot that uses many different lower third phone numbers, which are market-specific. The Replace function shaves hours off of this workflow. I first duplicate a completed sequence and rename it. Then I select the correct phone number in the bin, followed by selecting the clip in the timeline to be changed. Right-click and choose Replace with Clip/From Bin. This will update the content of my timeline clip with the new phone number. Any effects or keyframes that have been applied in the timeline remain.

Optical flow speed changes – In a recent update, optical flow interpolation was added as one of the speed change choices. Other than the obvious uses of speed changes, I found this to be a great way of creating slower camera moves that look nearly perfect. Optical flow can be tricky – sometimes creating odd motion artifacts – and at other times it’s perfect. Say I have a camera slider move or pan along a mantle containing family photos, and the move is too fast. Yes, I can slow it down, but the horizontal motion will leave it stuttering or blurred. However, if I slow it to exactly 50% and select optical flow, in most cases I get very good results. That’s because at this speed optical flow creates perfect “in-between” frames. A :05 move is now :10 and works better in the edit. If I’m going to use this same clip a lot, I simply render/export it as a new piece of media, which I’ll bring back into the project as if it were a VFX clip.
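A quick sketch of why an exact 50% slow-down is so friendly to interpolation: every other output frame is an untouched original, and each frame in between sits exactly halfway between two originals. This toy model just enumerates the output frames; it is an illustration of the frame arithmetic, not how Premiere Pro is implemented internally:

```python
# At exactly half speed, the output alternates between original frames and
# single in-between frames interpolated at the exact midpoint (t = 0.5).
source_frames = list(range(10))   # frame indices 0..9 of the original move

output = []
for i in source_frames[:-1]:
    output.append(("original", i))            # the untouched source frame
    output.append(("interpolated", i + 0.5))  # one clean midpoint frame
output.append(("original", source_frames[-1]))

print(len(source_frames), "->", len(output))  # 10 -> 19: duration roughly doubles
```

At any other speed, the interpolated frames fall at uneven fractional positions between originals, which is where the odd motion artifacts tend to creep in.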

Render and replace – Premiere Pro CC is great when you have a ton of different camera formats and want to work with native media. While that generally works, a large project will really impede performance, especially in the editing sequence. The alternative is to transcode the clips to an optimized or so-called mezzanine format. Adobe does this in the sequence rather than in the bin, and it can be done for individual clips or every clip within the sequence. You might have a bunch of native 4K .mp4 camera clips in a 1080p timeline. Simply select the clips within the timeline that you would like to transcode and right-click for the Render and Replace dialogue. At this point you have several options, including whether to use clip or sequence settings, handle length, codec, and file location. If you choose “clip”, then what you get is a new, trimmed clip in an optimized codec, which will be stored in a separate folder. This becomes a great way to consolidate your media. The clip is imported into your bin, so you have access to both the original and the optimized clip at the original settings. Therefore, your consolidated clips are still 4K if that’s how they started.

This also works for Dynamic Link After Effects compositions. Render and Replace those for better timeline performance. But if you need to go back to the composition in order to update it in After Effects, that’s just a few clicks away.

©2016 Oliver Peters

The On Camera Interview


Many projects are based on first person accounts using the technique of the on camera interview. This approach is used in documentaries, news specials, corporate image presentations, training, commercials, and more. I’ve edited a lot of these, especially for commercials, where a satisfied customer might give a testimonial that gets cut into a five-ish minute piece for the web or a DVD and then various commercial lengths (:10, :15, :30, :60, 2 min.). The production approach and editing techniques are no different in this application than if you are working on a documentary.

The interviewer

The interview is going to be no better than the quality of the interviewer asking the off camera (and unheard) questions. Asking good questions in the right manner will yield successful results. Obviously the interviewer needs to be friendly enough to establish a rapport with the subject. People get nervous on camera, so the interviewer needs to get them relaxed. Then they can comfortably answer the questions and tell the story in their own words. The interviewer should structure the questions in a way that the totality of the responses tells a story. Think in terms of story arc and strive to elicit good beginning and ending statements.

Some key points to remember. First, make sure you get the person to rephrase the question as part of their answer, since the audience won’t hear the interviewer. This makes their answer a self-contained statement. Second, let them talk. Don’t interject or jump on the end of the answer, since this will make editing more difficult.

Sometimes in a commercial situation, you have a client or consultant on set, who wants to make sure the interviewee hits all the marketing copy points. Before you get started, you’ll need to have an understanding with the client that the interviewee’s answers will often have to be less than perfect. The interviewees aren’t experienced spokespersons. The more you press them to phrase the answer in the exact way that fits the marketing points or to correctly name a complex product or service in every response, the more stilted their speaking style will become. Remember, you are going for naturalness, honesty and emotion.

The basics

As you design the interview set, think of it as staging a portrait. Be mindful of the background, the lighting, and the framing. Depending on the subject matter, you may want a matching background. For example, a doctor’s interview might look best in a lab or in the medical office with complex surgical gear in the background. An interview with an average person is going to look more natural in a neutral environment, like their living room.

You will want to separate the interview subject from the background and this can be achieved through lighting, lens selection, and color scheme. For example, a blonde woman in a peach-colored dress will stand out quite nicely against a darker blue-green background. A lot of folks like the shallow depth-of-field and bokeh effect achieved by a full-frame Canon 5D camera with the right lens. This is a great look, but you can achieve it with most other cameras and lenses, too. In most cases, your video will be seen in the 16:9 HD format, so an off-center framing is desirable. If the person is looking camera left, then they should be on the right side of the frame. Looking camera right, then they should be on the left side.

Don’t forget something as basic as the type of chair they are sitting in. You don’t want a chair that rocks, rolls, leans back, or swivels. Some interviews take a long time and subjects that have a tendency to move around in a chair become very distracting – not to mention noisy – in the interview, if that chair moves with them. And of course, make sure the chair itself doesn’t creak.

Camera position

The most common interview design you see is where the subject is looking slightly off camera, interacting with the interviewer who is sitting to the left or the right of the camera. You do not want to instruct them to look into the camera lens while you are sitting next to the camera, because most people will dart between the interviewer and the camera when they attempt this. It’s unnatural.

The one caveat is that if the camera and interviewer are far enough away from the interview subject – and the interviewer is also the camera operator – then it will appear as if the interviewee is actually looking into the lens. That’s because the interviewer and the camera are so close to each other. When the subject addresses the interviewer, he or she appears to be looking at the lens when in fact the interviewee is really just looking at the interviewer.

If you want them looking straight into the lens, then one solution is to set up a system whereby the subject can naturally interact with the lens. This is the style documentarian Errol Morris has used with a rig that he dubbed the Interrotron. Essentially it’s a system of two teleprompters. The interviewer and subject can be in the same studio, although separated in distance – or even in other rooms. The two-way mirror of each teleprompter projects the other person’s image. While looking directly at the interviewer in the teleprompter’s mirror, the interviewee is actually looking directly into the lens. This feels natural, because they are still looking right at the person.

Most producers won’t go to that length, and in fact the emotion of speaking directly to the audience may or may not be appropriate for your piece. Whether you use Morris’ solution or not, the single camera approach makes it harder to avoid jump cuts. Morris actually embraces and uses these; however, most producers and editors prefer to cover them in some way. Covering the edit with a b-roll shot is a common solution, but another is to “punch in” on the frame by blowing up the shot digitally by 15-30% at the cut. Now the cut looks like you used a tighter lens. This is where 4K resolution cameras come in handy if you are finishing in 2K or HD.

With the advent of lower-cost cameras, like the various DSLR models, it’s quite common to produce these interviews as two camera shoots. Both cameras may be positioned together on one side of the interviewer, or split with one on each side. There really is no right or wrong approach. I’ve done a few where the A-camera is right next to the interviewer, but the B-camera is almost 90-degrees to the side. I’ve even seen it where the B-camera exposes the whole set, including the crew, the other camera, and the lights. This gives the other angle almost a voyeuristic quality. When two cameras are used, each should have a different framing, so a cut between the cameras doesn’t look like a jump cut. The A-camera might have a medium framing including most of the person’s torso and head, while the B-camera’s framing might be a tight close-up of their face.

While it’s nice to have two matched cameras and lens sets, this is not essential. For example, if you end up with two totally mismatched cameras out of necessity – like an Alexa and a GoPro or a C300 and an iPhone – make the best of it. Do something radical with the B-camera to give your piece a mixed media feel. For example, your A-camera could have a nice grade to it, but the B-camera could be black-and-white with pronounced film grain. Sometimes you just have to embrace these differences and call it a style!


When you are there to get an interview, be mindful to also get additional b-roll footage for cutaway shots that the editor can use: tools of the trade, the environment, the interview subject at work, etc. Some interviews are conducted in a manner other than sitting down. For example, a cheesemaker might take you through the storage room and show off different rounds of cheese. Such walking-talking interviews might make up the complete interview or they might be simple pieces used to punctuate a sit-down interview. Remember, if you have the time, get as much coverage as you can!

Audio and sync

It’s best to use two microphones on all interviews – a lavaliere on the person and a shotgun mic just out of the camera frame. I usually prefer the sound of the shotgun, because it’s more open; but depending on how noisy the environment is, the lav may be the better channel to use. Recording both is good protection. Not all cameras have great sound systems, so you might consider using an external audio recorder. Make sure you patch each mic into separate channels of the camera and/or external recorder, so that they are NOT summed.

Wherever you record, make sure all sources receive audio. It would be ideal to feed the same mics to all cameras and recorders, but that’s not always possible. In that case, make sure that each camera is at least using an onboard camera mic. The reason to do this is for sync. The two best ways to establish sync are common timecode and a slate with a clapstick – ideally both. Absent either of those, some editing applications (as well as a tool like PluralEyes) can analyze the audio waveforms and automatically sync clips based on matching sound. Worst case, the editor can manually sync clips by marking common aural or visual cues.

Depending on the camera model, recordings may not span as a single file – the camera automatically starts a new clip every 4GB (about every 12 minutes with some formats). The interviewer should be mindful of these limits. If possible, all cameras should be started together and re-slated at the beginning of each new clip.
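If you want to estimate where those clip breaks will fall, the arithmetic is simple. The bitrates below are my own assumptions for illustration; check your camera’s actual record bitrate:

```python
# How many minutes of footage fit in a 4 GB file at a given record bitrate.
def minutes_per_4gb(bitrate_mbps: float) -> float:
    four_gb_bits = 4 * 1000**3 * 8              # 4 GB expressed in bits
    return four_gb_bits / (bitrate_mbps * 1e6) / 60

print(f"{minutes_per_4gb(45):.1f} min")   # ~11.9 min at 45 Mb/s - "about every 12 minutes"
print(f"{minutes_per_4gb(100):.1f} min")  # ~5.3 min at a higher 100 Mb/s format
```

The higher the record bitrate, the more often the camera rolls over to a new file, so re-slating becomes even more important with high-bitrate formats.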

Editing workflow

Most popular nonlinear editing applications (NLEs) include great features that make editing on camera interviews reasonably easy. To end up with a solid five minute piece, you’ll probably need about an hour of recorded interview material (per camera angle). When you cut out the interviewer’s questions, the little bit of chit chat at the beginning, and the repeats or false starts that an interviewee may have, you are generally left with about thirty minutes of useable responses. That’s a 6:1 ratio.
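That ratio works out as follows (using the round numbers from the paragraph above):

```python
# The shooting-ratio arithmetic for a typical sit-down interview piece.
recorded_minutes = 60   # about an hour of interview per camera angle
usable_minutes = 30     # left after cutting questions, chit chat, and false starts
finished_minutes = 5    # the target running time of the piece

print(f"Usable-to-finished: {usable_minutes // finished_minutes}:1")   # 6:1
print(f"Raw-to-finished: {recorded_minutes // finished_minutes}:1")    # 12:1
```

Measured against the raw recording, the true shooting ratio is closer to 12:1, which is worth remembering when budgeting edit time.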

The goal as an editor is to be a storyteller through the soundbites you select and the order into which you arrange them. The subject should seamlessly tell their story without the aid of an on camera host or voice-over narrator. To aid the editing process, use NLE tools like favorites, markers, and notes, along with human tools like written transcripts and your own notes to keep the project organized.

This is the standard order of things for me:

Sync sources and create multi-cam sequences or multi-cam clips depending on the NLE.

Pass 1 – create a sequence with all clips synced up and organized into a single timeline.

Pass 2 – clean up the interview and remove all interviewer questions.

Pass 3 – whittle down the responses into a sequence of selected answers.

Pass 4 – rearrange the soundbites to best tell the story.

Pass 5 – cut between cameras if this is a multi-camera production.

Pass 6 – clean up the final order by editing out extra words, pauses, and verbal gaffes.

Pass 7 – color correct clips, mix audio, add b-roll shots.

As I go through this process, I am focused on creating a good “radio cut” first. In other words, how does the story sound if you aren’t watching the picture? Once I’m happy with this, I can worry about color correction, b-roll, etc. When building a piece that includes multiple interviewees, you’ll need to pay attention to several other factors. These include getting a good mix of diversity – ethnicity, gender, job classification. You might want to check with the client first as to whether each and every person interviewed needs to be used in the video. Clearly some people are going to be duds, so it’s best to know up front whether or not you’ll need to go through the effort of finding a passable soundbite in those cases.

There are other concerns when re-ordering clips among multiple people. Arranging the order of clips so that you can cut between alternating left and right-framed shots makes the cutting flow better. Some interviewees come across better than others; however, make sure not to lean totally on their responses. When you get multiple, similar responses, pick the best one, but if possible spread around who you pick in order to get the widest mix of respondents. As you tell the story, pay attention to how one soundbite might naturally lead into another – or how one person’s statement can complete another’s thoughts. It’s those serendipitous moments that you are looking for in Pass 4, and it’s what should take the most creative time in your edit.

Philosophy of the cut

In any interview, the editor is making editorial selections that alter reality. Some broadcasters have guidelines as to what is and isn’t permissible, due to ethical concerns. The most common editorial technique in play is the “Frankenbite”. That’s where an edit is made to truncate a statement or combine two statements into one. Usually this is done because the answer went off on a tangent and that portion isn’t relevant. By removing the extraneous material and creating the “Frankenbite”, you are actually staying true to the intent of the answer. For me, that’s the key. As long as your edit is honest and doesn’t twist the intent of what was said, then I personally don’t have a problem with doing it. That’s part of the art in all of this.

It’s for these reasons, though, that directors like Morris leave the jump cuts in. This lets the audience know an edit was made. Personally, I’d rather see a smooth piece without jump cuts and that’s where a two camera shoot is helpful. Cutting between two camera angles can make the edit feel seamless, even though the person’s expression or body position might not truly match on both sides of the cut. As long as the inflection is right, the audience will accept it. Occasionally I’ll use a dissolve, white flash or blur dissolve between sections, but most of the time I stick with cuts. The transitions seem like a crutch to me, so I use them only when there is a complete change of thought that I can’t bridge with an appropriate soundbite or b-roll shot.

The toughest interview edit tends to be when you want to clean things up, like a repeated word, a stutter, or the inevitable “ums” and “ahs”. Fixing these by cutting between cameras normally results in a short camera cut back and forth. At this point, the editing becomes a distraction. Sometimes you can cheat these jump cuts by staying on the same camera angle and using a short dissolve or one of the morphing transitions offered by Avid, Adobe, or MotionVFX (for FCPX). These vary in their success depending on how much a person has moved their body and head or changed expressions at the edit point. If their position is largely unchanged, the morph can look flawless. The more the change, the more awkward the resulting transition can be. The alternative is to cover the edit with a cutaway b-roll shot, but that’s often not desirable if this happens the first time we see the person. Sometimes you just have to live with it and leave these imperfections alone.

Telling the story through sight and sound is what an editor does. Working with on camera interviews is often the closest an editor comes to being the writer, as well. But remember that mixing and matching soundbites can present nearly infinite possibilities. Don’t get caught in the trap so many do of never finishing. Bring it to a point where the story is well-told and then move on. If the entire production is approached with some of these thoughts in mind, the end result can indeed be memorable.

©2015 Oliver Peters

Final Cut Pro X Organizing Tips


Every nonlinear editing application is a database; however, Apple took this concept to a higher level when it launched Final Cut Pro X. One of its true strengths is making the organization of a mountain of media easier than ever. To get the best experience, an editor should approach the session holistically and not simply rely on FCP X to do all the heavy lifting.

At the start of every new production, I set up and work within a specific folder structure. You can use an application like Post Haste to create a folder layout, pick up some templates online, like those from FDPTraining, or simply create your own template. No matter how you get there, the main folder for that production should include subfolders for camera files, audio, graphics, other media, production documents, and projects. This last folder would include your FCP X library, as well as others, like any After Effects or Motion projects. The objective is to end up with everything that you accrue for this production in a single master folder that you can archive upon completion.
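As a rough stand-in for a Post Haste-style template, a short script can build that structure. The subfolder names below are my own choices for illustration, not a standard:

```python
# Create a production folder template like the one described above.
from pathlib import Path

SUBFOLDERS = [
    "01_Camera_Files",
    "02_Audio",
    "03_Graphics",
    "04_Other_Media",
    "05_Production_Documents",
    "06_Projects",   # FCP X Library, After Effects / Motion projects, etc.
]

def make_production_folder(root: str, production_name: str) -> Path:
    """Create the master folder and its subfolders; safe to re-run."""
    base = Path(root) / production_name
    for sub in SUBFOLDERS:
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base

# Example (hypothetical path): make_production_folder("/Volumes/Media", "Acme_Corporate")
```

Everything the production accrues then lands inside that one master folder, which is exactly what makes the final archive step painless.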

FCP X Libraries

It helps to understand the Final Cut Pro X Library structure. The Library is what would otherwise be called a project file, but in FCP X terminology an edited sequence is referred to as a Project, while the main session/production file is the Library. Unlike previous versions and other NLEs, the Library is not a closed, binary data file. It is a package file that you can open and peruse by right-clicking the Library icon and using the “show package contents” command. In there you will find various binary files (labeled CurrentVersion.fcpevent) along with a number of media folders. This structure is similar to the way Avid Media Composer project folders are organized on a hard drive. Since FCP X allows you to store imported, proxy, transcoded, and rendered media within the Library package, the media folders can be filled with the actual media used for this production. When you pick this option, your Library will grow quite large, but it is completely under the application’s control, which makes the media management quite robust.

Another option is to leave media files in their place. When this is selected, the Library package’s media folders will contain aliases or shortcut links to the actual media files, which are located in one or more folders on your hard drive. In this case, your Library file will stay small and is easier to transfer between systems, since the actual audio and video files are externally located. I suggest spreading things out. For example, I’ll create my Library on one drive, put the autosaved back-up files on another, and my media on a third. This has the advantage of no single point of failure. If the Library files are located on a drive that is backed up via Time Machine or some other system-wide “cloud” back-up utility, you have even more redundancy and protection.

Following this practice, I typically do not place the Library file in the projects folder for the production, unless that folder lives on a RAID-5 (or better) drive array. If I don't save it there during actual editing, then it is imperative to copy the Library into the project folder for archiving. The rub is that the package contains aliases, which certain software – particularly LTO back-up software – does not like. My recommendation is to create a compressed archive (.zip) file for every project file (FCP X Library, AE project, Premiere Pro project, etc.) prior to the final archiving of that production. This will prevent conflicts caused by these aliases.
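That zip-before-archive pass can be automated. The sketch below walks a projects folder and zips anything with a project-file extension, including FCP X Library packages (which are really folders, so they are zipped recursively). The extension list is an assumption; extend it for the applications you actually use:

```python
# Sketch: zip every project file or package in a folder before final
# archiving, so aliases inside FCP X Library packages don't trip up
# LTO back-up software. The extension set is an assumption.
import zipfile
from pathlib import Path

PROJECT_EXTENSIONS = {".fcpbundle", ".aep", ".prproj"}

def zip_item(item):
    """Zip a single file, or a package folder recursively."""
    zip_path = item.parent / (item.name + ".zip")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        if item.is_dir():
            # Store contents relative to the parent, so the package
            # unzips back to its original folder name.
            for f in item.rglob("*"):
                zf.write(f, str(f.relative_to(item.parent)))
        else:
            zf.write(item, item.name)
    return zip_path

def archive_project_files(projects_folder):
    """Zip every recognized project file in the folder; return the zips."""
    return [zip_item(p) for p in Path(projects_folder).iterdir()
            if p.suffix.lower() in PROJECT_EXTENSIONS]
```

Run this once against the production's projects folder as the last step before handing the master folder off to the archive system.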

If you have set up a method of organization that saves Libraries into different folders for each production, it is still possible to have a single folder that shows you all the Libraries on your drives. To do this, create a Smart Folder in the Finder and set its criteria to filter for FCP X Libraries. Any Library will automatically be filtered into this folder as a shortcut. Clicking on any of these files will launch FCP X and open that Library.
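If you prefer a scripted check over a Finder Smart Folder, a recursive search does much the same job. This sketch assumes Libraries are stored with the standard .fcpbundle extension:

```python
# Sketch: list every FCP X Library found under one or more root folders,
# a command-line stand-in for the Smart Folder approach. Assumes the
# standard .fcpbundle package extension.
from pathlib import Path

def find_libraries(*roots):
    """Return all .fcpbundle packages found under the given roots."""
    found = []
    for root in roots:
        found.extend(Path(root).rglob("*.fcpbundle"))
    return found

# Usage (paths are hypothetical):
# for lib in find_libraries("/Volumes/Media", "/Volumes/Projects"):
#     print(lib)
```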

Getting started

The first level of organization is getting everything into the appropriate folders on the hard drive. Camera files are usually organized by shoot date/location, camera, and card/reel/roll. Mac OS lets you label files with color-coded Finder tags, which enables another level of organization for the editor. As an example, you might have three different on-camera speakers in a production. You could label the clips for each with a colored tag. Another example might be to label all "circle takes" picked by the director in the field with a tag.

The next step is to create a new FCP X Library. This is the equivalent of the FCP 7 project file. Typically you would use a single Library for an entire production; however, FCP X permits you to work with multiple open Libraries, just as you could have multiple projects open in FCP 7. To set up all external folder locations within FCP X, highlight the Library name and then, in the Inspector panel, choose "modify settings" for the storage locations listed in the Library Properties panel. Here you can designate whether media goes directly into the internal folders of the Library package or to other target folders that you assign. This step is similar to setting the Capture Scratch locations in FCP 7.

How to organize clips within FCP X

Final Cut Pro X organizes master source clips on three levels – Events, Keyword Collections, and Smart Collections. These are equivalent to Bins in other NLEs, but don't work in quite the same fashion. When clips are imported, they go into a specific Event, which is closest in function to a Bin. It's best to keep the number of Events low, since Keyword Collections work within an Event and not across multiple Events. I normally create individual Events for edited sequences, camera footage, audio, graphics, and a few more categories. Clips within an Event can be grouped in the browser display in different ways, such as by import date. This can be useful when you want to quickly find the last few files imported in a production that spans many days. Most of the time I set grouping and sorting to "none".

To organize clips within an Event, use Keywords. Setting a Keyword for a clip – or a range within a clip – is comparable to creating subclips in FCP 7. When you add a Keyword, that clip or range is automatically sorted into a Keyword Collection with a matching name. Keywords can be assigned to keyboard hot keys, which makes it very quick to go through every clip and assign it to a variety of Keyword Collections. Clips can be assigned to more than one Collection. Again, this is equivalent to creating subclips and placing them into separate Bins.

On one set of commercials featuring company employees, I created Keyword Collections for each person, department, shoot date, store location, employees, managers, and general b-roll footage. This made it easy to build spots that featured a diverse range of speakers. It also made it easy to locate a specific clip that the director or client might ask for, based on "I think Mary said that" or "It was shot at the Kansas City store". Keyword Collections can be placed into folders – Collections for people by name went into one folder, Collections by location into another, and so on.

The beauty of Final Cut Pro X is that it works in tandem with any organization you've done in the Finder. If you spent the time to move clips into specific folders or you assigned color-coded Finder tags, then this information can be used when importing clips into FCP X. The import dialogue gives you the option to "leave files in place" and to use Finder folders and tags to automatically create corresponding Keyword Collections. Camera files that were organized into camera/date/card folders will automatically be placed into Keyword Collections that are organized in the same fashion. If you assigned clips with Mary, John, and Joe to have red, blue, and green tags for each person, then you'll end up with those clips automatically placed into Keyword Collections named red, blue, and green. Once imported, simply rename the red, blue, and green Collections to Mary, John, and Joe.


The third level of clip organization is Smart Collections. Use these to automatically filter clips based on the criteria that you set. With the release of FCP X version 10.2, Smart Collections have been moved from the Event level (10.1.4 or earlier) to the Library level – meaning that filtering can occur across multiple Events within the Library. By default, new Libraries are created with several preset Smart Collections that can be used, deleted, or modified. Here’s an example of how to use these. When you sync double-system sound clips or multiple cameras, new grouped clips are created – Synchronized Clips and Multicam Clips. These will appear in the Event along with all the other source files, which can be unwieldy. To focus strictly on these new grouped clips, create a Smart Collection with the criteria set by type to include these two categories. Then, as new grouped clips are created, they will automatically be filtered into this Smart Collection, thus reducing clutter for the editor.

Playing nice with others

Final Cut Pro X was designed around a new paradigm, so it tends to live in its own world. Most professional editors need a higher level of interoperability with other applications and with outside vendors. For these functions, you'll need to turn to third-party applications from a handful of vendors that have focused on workflow productivity utilities for FCP X. These include Intelligent Assistance/Assisted Editing, XMiL, Spherico, Marquis Broadcast, and Thomas Szabo. Their utilities make it possible to move between FCP X and the outside world through list formats like AAF, EDL, and XML.

Final Cut's only form of decision list exchange is FCPXML, which is a distinctly different data format from other forms of XML. Apple Logic Pro X, Blackmagic Design DaVinci Resolve, and Autodesk Smoke can read it. Everything else requires a translated file, and that's where these independent developers come in. Once you use an application like XtoCC (formerly Xto7) from Intelligent Assistance to convert FCPXML to XML for an edited sequence, other possibilities open up. The translated XML file can now be brought into Adobe Premiere Pro or FCP 7, or used with other tools designed for FCP 7. For instance, I needed to generate a print-out of markers with comments and thumbnail images from a film, in order to hand off notes to the visual effects company. By bringing a converted XML file into Digital Heaven's Final Print – originally designed with only the older Final Cut in mind – this became a doable task.

Thomas Szabo has concentrated on some of the media functions that are still lacking in FCP X. Need to get to After Effects or Nuke? The ClipExporter and ClipExporter2 applications fit the bill. His newest tool is PrimariesExporter. This utility uses FCPXML to enable batch exports of clips from a timeline, a series of single-frame exports based on markers, or a list of clip metadata. Intelligent Assistance offers the widest set of tools for FCP X, including Producer's Best Friend, which lets editors create a range of reports needed on most major jobs and delivers them in spreadsheet format.
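Because FCPXML is plain XML under the hood, simple reports can also be scripted directly. As a rough sketch of the idea behind such utilities (not how any of them are actually implemented), this collects every marker in an exported FCPXML file using only Python's standard library:

```python
# Sketch: pull a simple marker list out of an exported FCPXML file.
# FCPXML nests markers inside clip elements; rather than assume any
# particular document structure, this collects every <marker> element
# wherever it appears, with its start time and note text.
import xml.etree.ElementTree as ET

def list_markers(fcpxml_path):
    """Return (start, note) pairs for every marker in the FCPXML file."""
    tree = ET.parse(fcpxml_path)
    return [(m.get("start"), m.get("value"))
            for m in tree.getroot().iter("marker")]
```

A list like this could feed a spreadsheet or a notes document for a vendor hand-off, much like the report generated with Final Print above.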

Understanding the thought processes behind FCP X and learning to use its powerful relational database will get you through complex projects in record time. Better yet, it gives you the confidence to know that no editorial stone was left unturned. For more information on advanced workflows and organization with Final Cut Pro X, check out FCPworks, MacBreak Studio (hosted by Pixel Corps), Larry Jordan, and Ripple Training.

For those who want to know more about the nuts and bolts of the post-production workflow for feature films, check out Mike Matzdorff's "Final Cut Pro X: Pro Workflow", an iBook that's a step-by-step advanced guide based on the lessons learned on Focus.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters