Film Editor Techniques


Editing is a craft that each editor approaches with similarities and differences in style and technique. If you follow my editor interviews or those at Steve Hullfish’s Art of the Cut series, then you know that most of the top editors are more than willing to share how they do things. This post will go through a “baker’s dozen” set of tips and techniques that hopefully will help your next large project go just a bit more smoothly.

Transcoding media. While editing with native media straight from the camera is all the rage in the NLE world, it’s the worst way to work on long-term projects. Camera formats vary in how files are named, what the playback load is on the computer, and so on. It’s best to create a common master format for all the media in your project. If you have really large files, like 4K camera media, you might also transcode editing proxies. Cut with these and then flip to the master quality files when it comes time to finish.
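If you’d rather script that proxy step outside the NLE, here is a minimal sketch in Python that builds an ffmpeg command line for half-resolution ProRes Proxy files. The clip and folder names are hypothetical, and it assumes an ffmpeg build with the prores_ks encoder is installed:

```python
from pathlib import Path

def proxy_cmd(src, dst_dir):
    """Build an ffmpeg command that makes a half-resolution ProRes Proxy
    from a camera original, keeping the source file's base name."""
    src = Path(src)
    dst = Path(dst_dir) / (src.stem + "_proxy.mov")
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
        "-profile:v", "0",          # profile 0 = ProRes 422 Proxy
        "-vf", "scale=iw/2:ih/2",   # half the source resolution
        "-c:a", "copy",             # pass the audio through untouched
        str(dst),
    ]

cmd = proxy_cmd("A001_C003.mov", "proxies")
```

Feed the returned list to subprocess.run() for each clip in a watch folder and you have a crude batch transcoder; keeping the base name intact is what lets the NLE relink proxies back to the masters later.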

Transcode audio. In addition to working with common media formats, it’s a good practice to get all of your audio into a proper format. Most NLEs can deal with a mix of audio formats, bit depths and sample rates, but that doesn’t mean you should. It’s quite common to get VO and temp music as MP3 files with 44.1kHz sampling. Even though your NLE may work with this just fine, it can cause problems with sync and during audio post later. Before you start working with audio in your project, transcode it to .wav or .aif format with 48kHz sampling and a 16-bit or 24-bit bit depth. Higher sample rates and bit depths are OK if your NLE can handle them, but they should be multiples of these values.
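That audio conform can be scripted the same way. Here is a small Python helper, offered as a sketch only, that builds an ffmpeg command to resample any source file to a 48kHz, 24-bit .wav (the file names are hypothetical; assumes ffmpeg is installed):

```python
from pathlib import Path

def conform_audio_cmd(src, dst_dir, sample_rate=48000):
    """Build an ffmpeg command that transcodes one audio file to a
    48 kHz, 24-bit PCM .wav. Returns the argv list for subprocess."""
    src = Path(src)
    dst = Path(dst_dir) / (src.stem + ".wav")
    return [
        "ffmpeg", "-i", str(src),
        "-ar", str(sample_rate),     # resample to 48 kHz
        "-acodec", "pcm_s24le",      # 24-bit little-endian PCM
        str(dst),
    ]

cmd = conform_audio_cmd("temp_music.mp3", "conformed")
```

Running this over every MP3 and oddball sample rate before the first edit means sync problems surface now, on your desk, rather than later in the mix.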

Break up your project files by reel. Most films are broken down into 20-minute “reels”. Typically a feature will have five or six reels that make up the entire film. This is an old-school approach that goes back to the film days, yet it’s still a good way to work in the modern digital era. How this is done differs by NLE brand.

With Media Composer, the root data file is the bin. Therefore, each film reel would be a separate timeline, quite possibly placed into a separate bin. This facilitates collaboration among editors and assistants using different systems, but still accessing the same project file. Final Cut Pro X and Premiere Pro CC don’t work this way. You cannot share the exact same FCPX library or Premiere Pro project file between two editors at one time.

In Final Cut Pro X, the library file is the basic data file/container, so each reel would be in its own library with a separate master library that contains only the final edited sequence for each of the reels. Since FCPX editors can open multiple libraries, it’s possible to work across reels this way or to have different editors open and work on different libraries independent of each other.

With Premiere you can only have a single project file open at one time. When a film is broken into one reel per project, it becomes easy for editors and assistants to work collaboratively. Then a master project can be created to import the final version of each reel’s timeline to create the combined film timeline. Media Browser within Premiere Pro should be used to access sequences from within other project files and import them into a new project.

Show/hide, sifting and sorting. Each NLE has its own way of displaying or hiding clips and subclips. Learning how to use these controls will help you speed up the organization of the media. Final Cut Pro X has a sophisticated method of assigning “favorites” and “rejects” to clips and ranges within clips. You can also assign keywords. By selecting what to see and to hide, it’s easy to cull a mass of footage into the few best options. Likewise with Media Composer and Premiere Pro, you can show and hide clips and also sort by custom column criteria. Media Composer includes a custom sift feature, which is a filtering solution within the bin. It is easy to sift a bin by specific data in certain columns. Doing so hides everything else and reveals only the matching set of media on a per-bin basis.

Stringouts. A stringout is a sequence of selected footage. Many editors use stringouts as the starting point and then whittle down the scene from there. For example, Kirk Baxter likes his assistants to create a stringout for a dialogue scene that is broken down by line and camera. For each line of dialogue, you would see every take and camera angle covering that line of dialogue from wide to tight. Then the next line of dialogue and so on. The result is a very long sequence for the scene, but he can quickly assess the performance and best angle for each portion of the scene. Then he goes through and picks his favorites by pushing the video clip up one track for quick identification. The assistant then cleans up the stringout by creating a second version containing only these selected clips. Now the real cutting can begin.

Julian Clarke has his assistants create a similar stringout for action scenes. All takes and angles are organized back-to-back matching the choreography of the action. So – every angle/take for each crash or blast or punch within the scene. From these he has a clear idea of coverage and how to proceed cutting the scene, which otherwise might have an overwhelming amount of footage at first glance.

I use stringouts a lot for interview-driven documentaries. One sequence per person with everything. The second and third stringouts are successive cutdowns from that initial all-inclusive stringout. At this stage I start combining portions of sequences based on topics for a second round of stringouts. These will get duplicated and then culled, trimmed and rearranged as I refine the story.

Pancakes and using sequences as sources. When you use stringouts, it’s common to have one sequence become the source for another sequence. There are ways to handle this depending on your NLE. Many will nest the source sequence as a single clip on the new timeline. I contend that nesting should be avoided. Media Composer only allows one sequence in the “record” window to be active at any one time (no tabbed timeline). However, you can also drag a sequence to the source window and its tracks and clips can be viewed by toggling the timeline display between source and record. At least this way you can mark ins and outs for sections. Both Final Cut Pro “legacy” and Premiere Pro enable several sequences to be loaded into the timeline window where they are accessible through tabs. Final Cut Pro X dropped this feature, replacing it with a timeline history button to step forward or backward through several loaded sequences. To go between these sequences in all three apps, copy-and-paste is typically the best way to bring clips from one sequence into another.

One innovative approach is the so-called “pancake” timeline, popularized by editor/blogger Vashi Nedomansky. Premiere Pro permits you to stack two or more timelines into separate panels. The selected sequence becomes active in the viewer at any given time. By dragging between timeline panels, it is possible to edit from one sequence to another. This is a very quick and efficient way to edit from a longer stringout of selects to a shorter one with culled choices.

Scene wall. Walter Murch has become synonymous with the scene wall, but in fact, many editors use this technique. In a scene wall, a series of index cards for each scene is placed in story order on a wall or bulletin board. This provides a quick schematic of the story at any given time during the edit. As you remove or rearrange scenes, it’s easy to see what impact that will have. Simply move the cards first and review the wall before you ever commit to doing the actual edit. In addition, with the eliminated cards (representing scenes) moved off to the side, you never lose sight of what material has been cut out of the film. This is helpful to know, in case you want to go back and revisit those.

Skinning, i.e. self-contained files. Another technique Murch likes to use is what he calls adding a skin to the topmost track. The concept is simple. When you have a lot of mixed media and temp effects, system performance can be poor until rendered. Instead of rendering, the timeline is exported as a self-contained file. In turn, that is re-imported into the project and placed onto the topmost track, hiding everything below it. Now playback is smooth, because the system only has to play this self-contained file. It’s like a “skin” covering the “viscera” of the timeline clips below it.

As changes are made to add, remove, trim or replace shots and scenes, an edit is made in this self-contained clip and the ends are trimmed back to expose the area in which changes are being made. Only the part where “edit surgery” happens isn’t covered by the “skin”, i.e. self-contained file. Next a new export is done and the process is repeated. By seeing the several tracks where successive revisions have been made to the timeline, it’s possible to track the history of the changes that have been made to the story. Effectively this functions as a type of visual change list.

Visual organization of the bin. Most NLEs feature list and frame views of a bin’s contents. FCPX also features a filmstrip view in the event (bin), as well as a full strip for the selected clip at the top of the screen when in the list view. Unfortunately, the standard approach is for these to be arranged based on sorting criteria or computer defaults, not by manual methods. Typically the view is a tiled view for nice visual organization. But, of course, the decision-making process can be messy.

Premiere Pro at least lets you manually rearrange the order of the tiles, but none of the NLEs is as freeform as Media Composer. The bin’s frame view can be a completely messy affair, which editors use to their advantage. A common practice is to move all of the selected takes up to the top row of the bin and then have everything else pulled lower in the bin display, often with some empty space in between.

Multi-camera. It is common practice, even on smaller films, to shoot with two or more cameras for every scene. Assuming these are used for two angles of the same subject, like a tight and a wide shot on the person speaking, then it’s best to group these as multi-camera clips. This gives you the best way to pick among several options. Every NLE has good multi-camera workflow routines. However, there are times when you might not want to do that, such as in this blog post of mine.

Multi-channel source audio. Generally sound on a film shoot is recorded externally with several microphones being tracked separately. A multi-channel .wav file is recorded with eight or more tracks of material. The location sound mixer will often mix a composite track of the microphones for reference onto channel one and/or two of the file. When bringing this into the edit, how you handle it will vary with each NLE.

Both Media Composer and Premiere Pro will enable you to merge audio and picture into synchronized clips and select which channels to include in the combined file. Since it’s cumbersome to drag along eight or more source channels for every edit in these track-based timelines, most editors will opt to only merge the clips using channel one (the mixed track) of the multi-channel .wav file. There will be times when you need to go to one of the isolated mics, in which case a match-frame will get you back to the source .wav, from which you can pull the clean channel containing the isolated microphone. If your project goes to a post-production mixer using Pro Tools, then the mixer normally imports and replaces all of the source audio with the multi-channel .wav files. This is common practice when the audio work done by the picture editor is only intended to be used as a temp mix.

With Final Cut Pro X, source clips always show up as combined a/v clips, with multi-channel audio hidden within this “container”. This is just as true with synchronized clips. To see all of the channels, expand the clip or select it and view the details in the inspector. This way the complexity doesn’t clog the timeline and you can still selectively turn on or off any given mic channel, as well as edit within each audio channel. No need to sync only one track or to match-frame back to the audio source for more involved audio clean-up.

Multi-channel mixing. Most films are completed as 5.1 surround mixes – left, center, right, left rear surround, right rear surround, and low-frequency effects (subwoofer). Films are mixed so that the primary dialogue is mono and largely in the center channel. Music and effects are spread to the left and right channels with a little bit also in the surrounds. Only loud, low frequencies activate the subwoofer channel. Usually this means explosions or some loud music score with a lot of bottom. In order to better approximate the final mix, many editors advocate setting up their mixing rooms for 5.1 surround or at least an LCR speaker arrangement. If you’ve done that, then you need to mix the timeline accordingly. Each of these NLEs supports sequence presets for 5.1, which would accommodate this edit configuration, assuming that your hardware is set up accordingly. Typically this would mean mono dialogue into the center channel and effects and music to the left and right speakers.

Audio – organizing temp sound. It’s key that you organize the sounds you use in the edit in such a way that it is logical for other editors with whom you may be collaborating. It should also make sense to the post-production mixer who might do the final mix. If you are using a track-based NLE, then structure your track organization on the timeline. For example, tracks 1-8 for dialogue, tracks 9-16 for sound effects, and tracks 17-24 for music.

If you are using Final Cut Pro X, then it’s important to spend time with the roles feature. If you correctly assign roles to all of your source audio, it doesn’t matter what your timeline looks like. Once properly assigned, the selection of roles on output – including when using X2Pro to send to Pro Tools – determines where these elements show up on an exported file or inside of a Pro Tools track sheet. The most basic roles assignment would be dialogue, effects and music. With multi-channel location recordings, you could even assign a role or subrole for each channel, mic or actor. Spending a little of this time on the front end will greatly improve efficiency at the back end.

For more ideas, click on the “tips and tricks” category or start at 12 Tips for Better Film Editing and follow the bread crumbs forward.

©2016 Oliver Peters

NLE as Post Production Hub


As 2009 closed, I wrote a post about Final Cut Studio as the center of a boutique post production workflow. A lot has changed since then, but that approach is still valid and a number of companies can fill those shoes. In each case, rather than be the complete, self-contained tool, the editing application becomes the hub of the operation. Other applications surround it and the workflow tends to go from NLE to support tool and back for delivery. Here are a few solutions.

Adobe Premiere Pro CC

No current editing package comes as close to the role of the old Final Cut Studio as does Adobe’s Creative Cloud. You get nearly all of the creative tools under a single subscription and facilities with a team account can equip every room with the full complement of applications. When designed correctly, workflows in any room can shift from edit to effects to sound to color correction – according to the load. In a shared storage operation, projects can stay in a single bay for everything or shift from bay to bay based on operator specialty and talent.

While there are many tools in the Creative Cloud kit, the primary editor-specific applications are Premiere Pro CC, After Effects CC and Audition CC. It goes without saying that for most, Photoshop CC and Adobe Media Encoder are also givens. On the other hand, I don’t know too many folks using Prelude CC, so I can’t say what the future for this tool will be. Especially since the next version of Premiere Pro includes built-in proxy transcoding. Also, as more of SpeedGrade CC’s color correction tools make it into Premiere Pro, it’s clear to see that SpeedGrade itself is getting very little love. The low-cost market for outboard color correction software has largely been lost to DaVinci Resolve (free). For now, SpeedGrade is really “dead man walking”. I’d be surprised if it’s still around by mid-2017. That might also be the case for Prelude.

Many editors I know that are heavy into graphics and visual effects do most of that work in After Effects. With CC and Dynamic Link, there’s a natural connection between the Premiere Pro timeline and After Effects. A similar tie can exist between Premiere Pro and Audition. I find the latter to be a superb audio post application and, from my experience, provides the best transfer of a Premiere Pro timeline into any audio application. This connection is being further enhanced by the updates coming from Adobe this year.

Rounding out the package is Photoshop CC, of course. While most editors are not big Photoshop artists, it’s worth noting that this application also enables animated motion graphics. For example, if you want to create an animated lower third banner, it can be done completely inside of Photoshop without ever needing to step into After Effects. Drop the file onto a Premiere Pro timeline and it’s complete with animation and proper transparency values. Update the text in Photoshop and hit “save” – voila the graphic is instantly updated within Premiere Pro.

Given the breadth and quality of tools in the Creative Cloud kit, it’s possible to stay entirely within these options for all of a facility’s post needs. Of course, roundtrips to Resolve, Baselight, ProTools, etc. are still possible, but not required. Nevertheless, in this scenario I typically see everything starting and ending in Premiere Pro (with exports via AME), making the Adobe solution my first vote for the modern hub concept.

Apple Final Cut Pro X

Apple walked away from the market for an all-inclusive studio package. Instead, it opted to offer more self-contained solutions that don’t have the same interoperability as before, nor that of the comparable Adobe solutions. To build up a similar toolkit, you would need Final Cut Pro X, Motion, Compressor and Logic Pro X. An individual editor/owner would purchase these once and install them on as many machines as he or she owned. A business would have to buy each application for each separate machine. So a boutique facility would need a full set for each room or they would have to build rooms by specialty – edit, audio, graphics, etc.

Even with this combination, there are missing links when going from one application to another. These gaps have to be plugged by the various third-party productivity solutions, such as Clip Exporter, XtoCC, 7toX, Xsend Motion, X2Pro, EDL-X and others. These provide better conduits between Apple applications than Apple itself provides. For example, only through Automatic Duck Xsend Motion can you get an FCPX project (timeline) into Motion. Marquis Broadcast’s X2Pro Audio Convert provides a better path into Logic than the native route.

If you want the sort of color correction power available in Premiere Pro’s Lumetri Color panel, you’ll need more advanced color correction plug-ins, like Hawaiki Color or Color Finale. Since Apple doesn’t produce an equivalent to Photoshop, look to Pixelmator or Affinity Photo for a viable substitute. Although powerful, you still won’t get quite the same level of interoperability as between Photoshop and Premiere Pro.

Naturally, if your desire is to use non-Apple solutions for graphics and color correction, then similar rules apply as with Premiere Pro. For instance, roundtripping to Resolve for color correction is pretty solid using the FCPXML import/export function within Resolve. Prefer to use After Effects for your motion graphics instead of Motion? Then Automatic Duck Ximport AE on the After Effects side has your back.

Most of the tools are there for those users wishing to stay in an Apple-centric world, provided you add a lot of glue to patch over the missing elements. Since many of the plug-ins for FCPX (Motion templates) are superior to a lot of what’s out there, I do think that an FCPX-centric shop will likely choose to start and end in X (possibly with a Compressor export). Even when Resolve is used for color correction, I suspect the final touches will happen inside of Final Cut. It’s more of the Lego approach to the toolkit than the Adobe solution, yet I still see it functioning in much the same way.

Blackmagic Design DaVinci Resolve

It’s hard to say what Blackmagic’s end goal is with Resolve. Clearly the world of color correction is changing. Every NLE developer is integrating quality color correction modules right inside of their editing application. So it seems only natural that Blackmagic is making Resolve into an all-in-one tool for no other reason than self-preservation. And by golly, they are doing a darn good job of it! Each version is better than the last. If you want a highly functional editor with world-class color correction tools for free, look no further than Resolve. Ingest, transcoded and/or native media editing, color correction, mastering and delivery – all there in Resolve.

There are two weak links – graphics and audio. On the latter front, the internal audio tools are good enough for many editors. However, Blackmagic realizes that specialty audio post is still the domain of the sound engineering world, which is made up predominantly of Avid Pro Tools shops. To make this easy, Resolve has built-in audio export functions to send the timeline to Pro Tools via AAF. There’s no roundtrip back, but you’d typically get composite mixed tracks back from the engineer to lay into the timeline.

To build on the momentum it started, Blackmagic Design acquired the assets of EyeOn’s Fusion software, which gives them a node-based compositor, suitable for visual effects and some motion graphics. This requires a different mindset than After Effects with Premiere Pro or Motion with Final Cut Pro X (when using Xsend Motion). You aren’t going to send a full sequence from Resolve to Fusion. Instead, the Connect plug-in links a single shot to Fusion, where it can be effected through a series of nodes. The Connect plug-in provides a similar “conduit” function to that of Adobe’s Dynamic Link between Premiere Pro and After Effects, except that the return is a rendered clip instead of a live project file. To take advantage of this interoperability between Resolve and Fusion, you need the paid versions.

Just as in Apple’s case, there really is no Blackmagic-owned substitute for Photoshop or an equivalent application. You’ll just have to buy what matches your need. While it’s quite possible to build a shop around Resolve and Fusion (plus maybe Pro Tools and Photoshop), it’s more likely that Resolve’s integrated approach will appeal mainly to those folks looking for free tools. I don’t see too many advanced pros doing their creative cutting on Resolve (at least not yet). That being said, it’s pretty close, so I don’t want to slight its capabilities.

Where I see it shine is as a finishing or “online” NLE. Let’s say you perform the creative or “offline” edit in Premiere Pro, FCPX or Media Composer. This could even be three editors working on separate segments of a larger show – each on a different NLE. Each editor’s sequence goes to Resolve, where the timelines are imported, combined and relinked to the high-res media. The audio has gone via a parallel path to a Pro Tools mixer and graphics come in as individual clips, shots or files. Then all is combined inside Resolve, color corrected and delivered straight from Resolve. For many shops, that scenario is starting to look like the best of all worlds.

I tend to see Resolve as less of a hub than either Premiere Pro or Final Cut Pro X. Instead, I think it may take several possible positions: a) color correction and transcoding at the front end, b) color correction in the middle – i.e. the standard roundtrip, and/or c) the new “online editor” for final assembly, color correction, mastering and delivery.

Avid Media Composer

This brings me to Avid Media Composer, the least integrated of the bunch. You can certainly build an operation based on Media Composer as the hub – as so many shops have. But there simply isn’t the silky smooth interoperability among tools like there is with Adobe or the dearly departed Final Cut Pro “classic”. However, that doesn’t mean it’s not possible. You can add advanced color correction through the Symphony option, plus Avid Pro Tools in your mixing rooms. In an Avid-centric facility, rooms will definitely be task-oriented, rather than provide the ease of switching functions in the same suite based on load, as you can with Creative Cloud.

The best path right now is Media Composer to Pro Tools. Unfortunately it ends there. Like Blackmagic, Avid only offers two hero applications in the post space – Media Composer/Symphony and Pro Tools. They have graphics products, but those are designed and configured for news on-air operations. This means that effects and graphics are typically handled through After Effects, Boris RED or Fusion.

Boris RED runs as an integrated tool, which augments the Media Composer timeline. However, RED uses its own user interface. That operation is relatively seamless, since any “roundtrip” happens invisibly within Media Composer. Fusion can be integrated using the Connect plug-in, just like between Fusion and Resolve. Automatic Duck’s AAF import functions have been integrated directly into After Effects by Adobe. It’s easy to send a Media Composer timeline into After Effects as a one-way trip. In fact, that’s where this all started in the first place. Finally, there’s also a direct connection with Baselight Editions for Avid, if you add that as a “plug-in” within Media Composer. As with Boris RED, clips open up in the Baselight interface, which has now been enhanced with a smoother shot-to-shot workflow inside of Media Composer.

While a lot of shops still use Media Composer as the hub, this seems like a very old-school approach. Many editors still love this NLE for its creative editing prowess, but in today’s mixed-format, mixed-codec, file-based post world, Avid has struggled to keep Media Composer competitive with the other options. There’s certainly no reason Media Composer can’t be the center – with audio in Pro Tools, color correction in Resolve, and effects in After Effects. However, most newer editors simply don’t view it the same way as they do with Adobe or even Apple. Generally, it seems the best Avid path is to “offline” edit in Media Composer and then move to other tools for everything else.

So that’s post in 2016. Four good options with pros and cons to each. Sorry to slight the Lightworks, Vegas Pro, Smoke/Flame and Edius crowds, but I just don’t encounter them too often in my neck of the woods. In any case, there are plenty of options, even starting at free, which makes the editing world pretty exciting right now.

©2016 Oliver Peters

NAB 2016 – Technology and Friends


The annual National Association of Broadcasters convention and equipment show – aka The NAB Show – is one of Las Vegas’ biggest. Typically about 100,000 folks officially attend the April ritual, including actual NAB members, along with a much larger group of production and post professionals there to check out the gear and attend the various on and off-site workshops and sessions.

For me, that’s part of it, together with the fact that I cover the show as a journalist writing for Digital Video magazine and CreativePlanetNetwork website. Rather than rehash here what I’ve already written, here are the links to my preview and wrap-up articles. If you want to hear what several industry pros thought of as the highlights of the show, check out our local Orlando Post Pros user group meeting, produced by Adrenaline Films and sponsored by Blackmagic Design.

In addition, NAB for me is a time to reconnect in person with old and new friends from all over the country and the world. These are folks I’ve known for years, as well as some I originally met online. NAB is a chance to spend some face-to-face time – if only for a few moments. It’s also a chance to connect with online friends for the first time and get a new perspective on their ideas. That’s something that’s often lacking in so much of today’s social media and internet forums.

This year I had an opportunity to connect with my friends Philip Hodgetts and Greg Clarke from Intelligent Assistance. Most likely you know them as the brains behind such apps as 7toX, XtoCC, Sync-n-Link-X, Lumberjack and more. They also routinely record a web series called Lunch with Philip and Greg. So along with plenty of time at the NAB show, we stepped out to the Firefly restaurant down the road from the convention center. There we recorded an hour of good conversation over unexpectedly excellent food for another episode. A welcomed break from the show.

If you get a chance to attend next year, make sure to allow some time to connect with your friends, too. Gear is cool for the nerd in all of us, but it’s not the only part of Vegas!

©2016 Oliver Peters

From the Trenches


Not all editing is done in comfy edit suites with powerful workstations. Many film editors cut on location for part of the film. Plenty of other editors make their living on the corporate circuit cutting convention and conference highlights, “happy faces” videos, and recaps for social media. If you are one of the latter group, much of your work is done in hotel rooms, ad hoc media centers set up in conference rooms, and/or backstage in the bowels of some convention center. Your gigs bounce between major cities and resorts around the country and sometimes the world. Learning to travel light, but without compromise, is important.

I work a number of these events each year. In the past, the set-up was usually a full-blown Avid Media Composer system and decks – often augmented by a rack of VHS decks for dubs on-site. Today, more often than not, a laptop with accessories will do the trick. Media is all file-based and final copies get delivered on USB thumb drives or straight to the web. The following road warrior tips will help any editor who has to cut on the run.

Nail down the deal. Before you commit, find out who is supplying the gear. If you are expected to bring editing gear, then define the rate and what is expected of you. Most of these gigs are on a 10-hour day-rate, but define the call and end times. If the camera crew gets an early start, your call time (and the start of 10 hours) might not be until noon. Make sure you know who is covering meals and how the expenses are being handled (air fare, hotels, car rental, per diem, mileage, parking, etc.).

Which NLE to use. This is often a matter of personal preference, but on some jobs, one over the other becomes the client’s decision. A set-up with several editors working in collaboration is best handled using Avid Media Composer and Avid or Facilis shared storage. Live ingest and quick turnaround of the CEO’s keynote is also best handled with Avid Media Composer and Avid i/o hardware. When those are the criteria, odds are the production company will be supplying the gear. You just have to know how to use it. Apple Final Cut Pro X is great for a lot of the convention video work being done, but I’ve also had clients specify Adobe Premiere Pro, just so everything is compatible with their home operation or with the other editors on the same gig.

Remember that with Adobe Creative Cloud, the software needs to “phone home” monthly for continued authorization. This requires an internet connection to log-in. If, for some reason, you are going to be out of internet contact for more than a month, this could become an issue, because your software will kick into the trial mode. That may be sufficient to get you through the project, but maybe not.

Video standards. Before you start any editing, make sure everyone is on the same page. Typically video packages are going to be produced and finished as 1080p/29.97 or 1080p/23.976. However, if you are recutting the highlights from a large corporate keynote presentation, odds are this was recorded with broadcast cameras, meaning that the project will be 1080i/59.94. But don’t assume that a US conference will only use US TV standards. A large European company holding an event in the US might want to stay consistent with its other videos. Therefore, you might be working in 1080p/25. As with many things in life – don’t assume.

Mac versus PC. Inevitably you’ll run up against the compatibility issue of passing files and drives between Macs and PCs. Maybe you are in a team of editors with mixed systems. Maybe you have a Mac, but the client is PC-based. Whatever the circumstance, know how you are going to exchange files. Some folks trust ExFAT-formatted drives to work cross-platform. On a recent job, only the PCs could mount the LaCie drives that were formatted as ExFAT, whereas Seagate drives were OK on both platforms. Go figure. My recommendation is to have all the Macs install Tuxera (to read NTFS drives) and have the PCs install MacDrive (to read HFS+).

Media drives. Speaking of drives, who is supplying the drives for the edit? Are you expected to bring them or use your internal drive? Is the client supplying drives and will they be compatible? If you are supplying the drives, you probably still need to factor in time to copy the media, projects and final masters back to the customer’s drive at the end of the job. All of this means connection compatibility is important. Newer Macs work with USB 3.0 (compatible with USB 2.0) and Thunderbolt 1 and 2. Most newer PC laptops generally only connect to USB devices, with a few that also support Thunderbolt and/or eSATA.

Unless you are all Mac-based, drives that connect via USB 3.0 will give you sufficient speed and will connect cross-platform. If you own a Mac with Thunderbolt, then it’s worthwhile to pick up a few adapters. For example, I have FireWire 800, Ethernet, DVI and VGA adapters and they’ve all been useful. In addition, my MacBook Pro will also connect via HDMI to external video and audio monitors.

Camera card readers. When it comes to media, card readers are another concern. Since post is largely file-based these days, you are going to have to be able to read the media and the files provided by the camera crew. As an editor, you can’t really be expected to provide a reader for every possible camera type. For instance, Canon C300 cameras record to CF cards, which will normally plug into most common, multi-card readers that connect via USB. However, if the crew is using an ARRI Amira that records to CFast cards, then you’ll need a different reader. The same goes for anything that records to Pak, P2, SxS and other media types. Therefore, it’s best when the camera crew supplies a reader that matches their camera. That’s something you need to get straight with the company hiring you before the gig starts.

Peripherals. It’s helpful to bring along some extra goodies. In addition to a generic multi-card reader and Thunderbolt adapters, other useful items include extra USB sticks, a generic USB (or better yet, USB 3.0) hub adapter, and possibly a video i/o device. While most of the time you don’t need to capture live video or feed masters out to external recorders, there will be times where live capture or monitoring is required. When it isn’t, then cutting from your laptop screen is fine and playing it full screen for review will also work. However, if you do need to do this, then small i/o devices from Blackmagic Design or AJA are your best bet. If you want more screen real estate, then Duet Display is an app that turns an iPad into a second desktop screen.

Audio. While video monitoring isn’t that tough with a 15” laptop – even for client review – audio monitoring is a different story. Unless you’re cutting in a quiet hotel room, odds are you are going to be in a noisy environment, like backstage. If you are working with a team of editors, the noise factor just went up – all of which means headphones are essential. If you like to travel as light as possible, then you might try to get by with ear buds, but those can be very uncomfortable if you wear them all day long – not to mention bad for your hearing. An alternative to consider might be in-ear monitors like the ones Fender now makes.

Ultimately it’s personal preference, until it’s time to work with the producer or review the cut with the client. For a two-person operation, you might consider bringing a Y-shaped headphone adapter and a second set of lightweight headphones. Obviously you aren’t going to want to share in-ear monitors the way you might a large-cup standard headphone.

When it comes time to review the cut with the client – often two or more people looking over your shoulder at the laptop screen – then headphones won’t work. You are stuck momentarily using the laptop’s sound system. In the case of Apple MacBook Pros, the maximum volume is inadequate for professional use. At times I’ve brought small external powered speakers, but that’s extra weight and room. Another solution I’ve used, which has worked reasonably well, is a single small, battery-powered “boom box” style amplified speaker that connects via Bluetooth. It packs a lot of oomph in a tiny footprint. The only downside is that sync during playback from an NLE timeline can be very rubbery, although most clients will excuse that when it’s just for a quick review. With exported files, it’s fine.

Depending on the job, you might also need a microphone for scratch (or maybe even final) voice-over recordings. The Shure MOTIV series is worth considering. These mics are designed for the “iDevices”, but will also connect to Macs and PCs via USB. Another solution for down-and-dirty recordings would be a cheap webcast USB mic.

Graphics. Most editors are not very good graphic designers. We tend to work best when provided with templates and packaged branding elements. However, this means you as the editor have to be compatible with the client’s needs. For example, if the elements are supplied to you as editable After Effects or Photoshop files, then it’s essential that you have the latest version of those applications installed. An old version of Photoshop – or using Affinity Photo instead – won’t cut it. On a recent job, the client supplied lower third templates as animated Photoshop files. This worked like a champ and was yet another example of how Adobe Creative Cloud integration between applications is one of the best in the business. However, this only worked because I was current on all of the Creative Cloud apps I had installed – and not just Premiere Pro CC.

Music. Try to make sure you are using licensed music that has been provided by the client. Corporate events are notorious for skirting around music licensing in the belief that, because it’s a closed conference, this is fine. Or that they are somehow covered under Fair Use guidelines (they aren’t). As an editor, you may not have control over this, but you should make sure to only use music that has been supplied to you by the client or purchased by the client from a stock music source during the course of the project.

Internet. Since many of these conference videos are intended for quick turnaround to the web, having a fast pipe to the internet is essential. Tying your edit machine to the center’s wifi isn’t going to be fast enough. A high quality 1080p MP4 that’s several minutes long will be several hundred MB in size. If you have several of these to upload each day, fast upload speeds are critical. Usually this means that the production company is going to have to pay for a dedicated line and ethernet cabling. This needs to be available past the point that the conference floor itself closes, since post will still be going on awhile longer. From the editor’s standpoint, this requires a machine that can accept an ethernet cable, which is getting harder to come by on both Mac and PC laptops. For the MacBook Pros, you can get a Thunderbolt-to-Ethernet adapter, which works flawlessly.
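As a rough sanity check on upload planning – using assumed file sizes and link speeds, not figures from any particular venue – a quick back-of-the-envelope calculation shows why the dedicated line matters:

```python
def upload_minutes(file_size_mb, link_mbps, overhead=0.8):
    """Estimated upload time in minutes.

    file_size_mb: file size in megabytes (MB)
    link_mbps:    nominal link speed in megabits per second (Mbps)
    overhead:     fraction of the nominal speed actually achieved
                  (0.8 is an assumption, not a measurement)
    """
    effective_mbps = link_mbps * overhead
    seconds = (file_size_mb * 8) / effective_mbps  # MB -> megabits
    return seconds / 60

# A 600 MB highlight reel on shared 10 Mbps wifi vs. a 100 Mbps line.
for speed in (10, 100):
    print(f"{speed} Mbps: ~{upload_minutes(600, speed):.0f} min")
```

At shared-wifi speeds that single file ties up the connection for roughly ten minutes; multiply by several deliverables a day and the dedicated ethernet drop quickly earns its keep.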

Schedule. Last, but not least, there’s the schedule. What can realistically be accomplished in the allotted time? Most corporate clients are not production people. They have no idea of what’s involved when they decide to interview and edit a given number of people in the course of a day. Even if the editor starts later, it still requires a producer to be involved in both the shoot and the edit. Realistically there should be a 50/50 ratio of shoot time to edit time. Naturally that’s not always possible, but this allows enough time to cut the piece, make tweaks, get approval, and then encode. On a fast laptop, the time required to encode high quality MP4 files is roughly the running time of the piece. Therefore, if you have an hour of total edited content, you’ll need to allow for at least an hour of encoding. Add to this upload time and the back-up to the client’s drives, and you have a rather packed schedule.
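To put the schedule math above in one place, here’s a minimal sketch – the 50/50 shoot-to-edit ratio and the encode-time-equals-runtime rule of thumb, with upload and backup folded in:

```python
def minimum_post_hours(shoot_hours, runtime_minutes,
                       upload_minutes=0, backup_minutes=0):
    """Rough lower bound on post time for a conference edit.

    Assumes the 50/50 rule (edit time roughly equals shoot time) and
    that encoding a high-quality MP4 on a fast laptop takes about the
    running time of the edited material.
    """
    edit_hours = shoot_hours             # 50/50 shoot-to-edit ratio
    encode_hours = runtime_minutes / 60  # encode time ~= runtime
    return edit_hours + encode_hours + (upload_minutes + backup_minutes) / 60

# A 5-hour shoot yielding an hour of edited content, plus a half hour
# each for upload and the back-up to the client's drive:
print(minimum_post_hours(5, 60, 30, 30))  # 7.0 hours, before tweaks
```

That seven-hour floor leaves precious little slack in a 10-hour day once reviews and revisions are factored in, which is the point: pad the schedule where you can.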

©2016 Oliver Peters

Adobe Premiere Pro CC Learning Tips


Adobe Premiere Pro CC is the heir apparent editing application for many editors. In order to make your transition easier, I’ve compiled a series of links to various official and unofficial resources, including Adobe sites, forums, YouTube videos, training resources, and various blog posts. This list is by no means all that’s out there, but it should provide a great starting point to become more comfortable with Premiere Pro CC.

Adobe and Adobe-related training resources

Adobe tutorials

Adobe Premiere Pro tips

Maxim Jago’s tips at Lynda

Maxim Jago’s tips at Peachpit

Lynda’s Adobe Premiere Pro training

Ripple Training – Premiere Pro CC 2015

Adobe-related blogs and YouTube channels

Dave Helmly’s DAV Tech Table

Jason Levine’s YouTube channel

Colin Smith’s YouTube channel

Dave Helmly’s presentation at Orlando Post Pros meeting – 2015 (webcast)


Adobe Filmmaker stories

Adobe Premiere Pro Forum

Creative COW Premiere Pro forum

Premiere-centric sites and YouTube channels

Jarle Leirpoll’s Premiere Pro blog


Premiere Bro

Best Premiere Pro Quick Tips YouTube Channel

Premiere Pro Tips YouTube channel

Blog posts

VashiVisuals – Deadpool Premiere Pro Presets

VashiVisuals – Keyboard Layouts

VashiVisuals – Music Video Editing Tips

VashiVisuals – Pancake Timeline

Jonny Elwyn – Premiere posts

Jonny Elwyn – Premiere Pro Tools and Tutorials

Jonny Elwyn – Tips and Tricks

Jonny Elwyn – Tips for Better Editing in Premiere Pro

Jonny Elwyn – Tutorials for Better Editing in Premiere Pro

Premium Beat – Beginner’s Guide to Premiere Pro Shortcuts

Premium Beat – Match Frame and Replace Edit

Premium Beat – How to Clean up Audio in Premiere Pro

Premium Beat – Creating a Storyboard Edit in Premiere Pro

Premium Beat – AVCHD Editing Workflow

Premium Beat – How to Organize a Feature Film Edit

Premium Beat – 3 Quick Tips for Editing in Premiere Pro

Premium Beat – Time-Saving Premiere Pro CC Tips

Derek Lieu – Simple Tricks for Faster Editing in Premiere Pro

No Film School – Premiere Pro Keyboard Shortcuts

Wipster – 4 Reasons Why Premiere Pro is a Great Choice

Wipster – Master the Media Browser

Plug-ins, add-ons, other

Kinetic type and layer effects by TypeMonkey for After Effects

Post Notes Premiere Pro control panel

PDFviewer Premiere Pro control panel integration

Axle Video integration

LookLabs SpeedLooks

FxFactory plug-ins

RedGiant Software plug-ins

Boris FX plug-ins

DigitalFilms – SpeedGrade Looks

Jarle’s Premiere Pro Presets

Note: this information is also included on the Editing Resources page accessible from the header of this blog. Future updates will be made there.

©2016 Oliver Peters

Easy 4K Workflow


In the last post I questioned the visual value of 4K. However, it’s inevitable that more and more distributors will be asking for 4K deliverables, so you might as well start planning how you are going to achieve that. There are certainly plenty of demos showing how easy it is to edit 4K content, and they use iPhone video for the demo material. The reality is that such footage is crap and should only be used when it’s the only camera available. At the low end, there are plenty of cameras to choose from that work with highly-compressed 4K images and yet yield great results. The Blackmagic Design URSA Mini, Sony FS7 and Canon C300 Mark II come to mind. Bump up to something in a more cinema-style package and you are looking at a Sony F55, RED, ARRI or even the AJA CION.

While many cameras record to various proprietary compressed codecs, having a common media codec is ideal. Typically this means Apple ProRes or Avid DNxHD/HR. Some cameras and standalone monitor/recorders can natively generate media in these formats. In other circumstances, it requires an interim transcode before editing. This is where system throughput becomes a big issue. For example, if you want to work with native 4K material as ProRes 4444, you are going to need fast drives. On my home Mac Pro tower, I have two internal 7200RPM spinning drives for media striped as RAID-0. In addition to these and the boot drive, I also have another internal SSD media drive. When I checked their relative performance with the AJA System Test utility, these clocked at 161 MB/s write / 168 MB/s read for the RAID-0 stripe and 257/266 MB/s for the single SSD. That’s good enough for approximately 27fps and 43fps respectively, if the media were large 3840 x 2160 (2160p) ProRes 4444 files. In other words, both drive units are adequate for a single stream of 2160p/23.98 as ProRes 4444, but would have a tougher time with two streams or more.
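Those frame-rate figures fall out of simple division. Assuming roughly 6 MB per frame for 3840 x 2160 ProRes 4444 – an approximation on my part, not an official Apple number – this sketch reproduces the math:

```python
def playable_fps(drive_mb_per_s, frame_mb=6.0):
    """Frames per second a drive can sustain, given its measured
    throughput (MB/s) and an assumed per-frame size (MB).

    frame_mb=6.0 approximates a 3840x2160 ProRes 4444 frame.
    """
    return drive_mb_per_s / frame_mb

# The AJA System Test numbers from the paragraph above:
for name, speed in [("RAID-0 stripe", 161), ("single SSD", 257)]:
    print(f"{name}: ~{playable_fps(speed):.0f} fps")
```

Divide the measured throughput by the per-frame size and you land on the same ~27fps and ~43fps – one comfortable 23.98 stream each, but no headroom for a second.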

Unfortunately the story doesn’t end with drive performance alone, because some NLEs handle real-time playback of 4K media better than others do. I’ve performed a number of tests with 4K files in Apple Final Cut Pro X, Adobe Premiere Pro CC, Avid Media Composer and Blackmagic Design DaVinci Resolve. This has been on a number of different units, including a couple of Mac Pro towers, as well as a newer “trash can” Mac Pro. Plus, I’ve run tests with local drives, attached media RAIDs, and network-attached storage systems. What I’ve found is that as long as you have fast drive performance, then the bottleneck is the NLE.

Pretty much all of these choices can handle a single stream of 4K media without too much of an issue. However, when you stack up a second layer or track for a simple 2D PIP composite, the system generally struggles. In some cases FCPX has performed better than the others, but not consistently; the rest all choked to varying degrees. When you limit it to a single stream of 4K video with associated audio, then FCPX performs more fluidly at a higher quality level than Media Composer or Premiere Pro, although Media Composer also performed well in some of the tests. My conclusion, for now, is that if you want to work with native 4K media in a client-involved session, and with the least amount of rendering, then FCPX is the clear winner – at least on the Mac platform. For many editors it will be the most viable choice.

Native workflow

The first big plus for Final Cut Pro X is how easily it works with compatible native media. That’s one thing I don’t generally advocate on a large project like a show or feature film – opting instead to create “optimized” media first, either externally or within FCPX. Nevertheless, a lot of native codecs can be quite easy on the system. For example, one client cut an indie feature using all native camera files from his Sony FS7. His Final Cut system was a tricked out iMac that was a couple of years old and a Promise Pegasus RAID array. Initially he cut the film from native 4K FS7 files to an FCPX 1080p timeline. I was doing the grading in Resolve, so I had him export a single, flattened movie file from the timeline as 1080p ProRes 4444. I brought this into Resolve, “bladed” the cuts to create edit points and applied my color correction. I exported a single ProRes 4444 master file, which he could import back into FCPX and marry with the post-production mix.

Fast forward a year and the film distributor was inquiring whether they could easily produce a 4K master instead of a 1080 master. This turned out to be relatively simple. All my client had to do was change his FCPX project (timeline) settings to 4K, double-check the scaling for his clips and export a new 4K ProRes 4444 file of the timeline. In Resolve, I also changed the timeline setting to 4K and then relinked to the new 4K file. Voila! – all the cuts lined up and the previous grades all looked fine. Then I simply exported the graded 4K file to send back to the client.

In this example, even with a roundtrip to Resolve and a change from 1080p to 2160p, FCPX performed perfectly without much fuss. However, for many, you wouldn’t even need to go this far. Depending on how much you like to play and tweak during the color grade, there are plenty of ways to do this and stay totally inside FCPX. You could use tools like the Color Board, Hawaiki Color, Color Finale, or even some home-brew Motion effects, and achieve excellent results without ever leaving Final Cut Pro X.

As a reminder, Media Composer, Premiere Pro CC and Resolve are all capable of working with native media, including 4K.

Proxy workflow

In addition to native 4K post, Apple engineers built an ingenious internal proxy workflow into Final Cut. Transcode the camera files in the background, flip a toggle, and work with the proxy files until you are ready to export a master. When you opt to transcode proxies, FCPX generates half-resolution, ProRes Proxy media corresponding to your original files. As an example, if your media consists of 2160p XAVC camera files, FCPX creates corresponding 1080p ProRes Proxy files. Even though the proxy media’s frame is 1/4th the size of the 4K original, FCPX takes care of handling the scaling math in the timeline between original and proxy media. The viewer display will also appear very close in quality, regardless of whether you have switched to original/optimized or proxy media. The majority of legacy A/V output cards, like a Blackmagic Design Decklink, are only capable of displaying SD and HD content to an external monitor. FCPX can send it the proper data so that a 4K timeline is displayed as a scaled 1080 output to your external video monitor.

Although proxies are small for a 4K project, these are still rather large to be moving around among multiple editors. It’s not an official part of the Final Cut operation, but you can replace these generated proxies with your own versions, with some caveats. Let’s say you have 3840 x 2160, log-gamma-encoded, 4K camera files. You would first need to have FCPX generate proxies. However, using an external application such as EditReady, Compressor, etc., you could transcode these camera files into small 960×540 ProRes Proxy media, complete with a LUT applied and timecode/clip name burnt in. Then find your Proxy Media folder, trash the FCPX-generated files and replace them with your own files. FCPX should properly relink to these and understand the correct relationship between the original and the proxy files. (This post explains the process in more detail.) There are several caveats: clip name, frame rate, clip length, aspect ratio, and audio channel configurations must match. Otherwise you are good to go.
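Since the relink silently depends on those attributes matching, it’s worth verifying them before trashing the generated proxies. This sketch uses hypothetical metadata dicts – pull the real values from your transcode tool or a utility of your choice:

```python
def proxy_matches(original, proxy):
    """Check a hand-made proxy against its camera original before
    swapping it into FCPX's Proxy Media folder.

    Both arguments are plain dicts of clip metadata (hypothetical
    field names, used here purely for illustration).
    """
    # The caveats from above: these attributes must match exactly.
    keys = ("clip_name", "frame_rate", "duration_frames",
            "aspect_ratio", "audio_channels")
    return all(original[k] == proxy[k] for k in keys)

original = {"clip_name": "A001_C002", "frame_rate": 23.976,
            "duration_frames": 1440, "aspect_ratio": "16:9",
            "audio_channels": 2, "width": 3840, "height": 2160}
proxy = dict(original, width=960, height=540)  # resolution MAY differ
print(proxy_matches(original, proxy))
```

Note that resolution is deliberately absent from the checklist – the whole point of the trick is that your replacement proxy can be smaller than the one FCPX generated.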

The benefit to this solution is that you can freely edit with the proxies on a lightweight system, such as a MacBook Pro with a portable drive. When ready, move back to a beefier unit and storage, flip to original/optimized media, double-check all effects and color-correction on a good monitor, and then export the master files. It’s worth noting that this workflow is also potentially possible with Premiere Pro CC, because the new version to be introduced later this year will include a proxy editing workflow.

Naturally there is no single solution, but Final Cut Pro X makes this process far easier than any other tool that I use. If 4K is increasingly looming on the horizon for you, then FCPX is certainly worth a test run.

©2016 Oliver Peters

4K is kinda meh


Lately I’ve done a lot of looking at 4K content. Not only was 4K all over the place at NAB in Las Vegas, but I’ve also had to provide some 4K deliverables on client projects. This has meant a much closer examination of the 4K image than in the past.

First, let’s define 4K. Typically the term 4K applies to either a “cinema” width of 4096 pixels or a broadcast width of 3840 pixels. The latter is also referred to as QuadHD, UltraHD or UHD and is a 2x multiple of the 1920-wide HD standard. For simplicity’s sake, in this article I’m going to be referring to 4K, but will generally mean the UHD version, i.e. 3840 x 2160 pixels, aka 2160p. While 4K (and greater) acquisition for an HD finish has been used for a while in post, there are already demands for true 4K content. This vanguard is notably led by Netflix and Amazon; however, international distributors are also starting to request 4K masters, if they are available.

In my analysis of the images from various 4K (and higher) cameras, it starts to become quite obvious that the 1:1 image in 4K really isn’t all that good. In fact, if you compare a blow-up from HD to 4K of that same image, it becomes very hard to distinguish the blow-up from the true 4K image. Why is that?

When you analyze a native 4K image, you become aware of the deficiencies in the image. These weren’t as obvious when that 4K original was down-sampled to an HD timeline and master. That’s because in the HD timeline you are seeing the benefit of oversampling, which results in a superb HD image. Here are some factors that become more obvious when you view the footage in its original size.

1. Most formats use a high-compression algorithm to squeeze the data into a smaller file size. In some cases compression artifacts start to become visible at the native size.

2. Many DPs like to shoot with vintage or otherwise “lower quality” lenses. This gives the image “character” and, in the words of one cinematographer that I worked with, “takes the curse off of the digital image.” That’s all fine, but again, viewed natively, you start to see the defects in the optics, like chromatic aberration in the corners, coloration of the image, and general softness.

3. Due to the nature of video viewfinders, run-and-gun production methods, and smaller crews, many operators do not nail the critical focus on a shot. That’s not too obvious when you down-convert the image; however, at 100% you notice that focus was on your talent’s ear and not their nose.

The interesting thing to me is that when you take a 4K (or greater) image, down-convert that to HD, and then up-convert it back to 4K, much of the image detail is retained. I’ve especially noticed this when high quality scalers are used for the conversion. For example, even the free version of DaVinci Resolve offers one of the best up-scalers on the market. Secondly, scaling from 1920 x 1080 to 3840 x 2160 is an even 2x multiple, so a) the amount you are zooming in isn’t all that much, and b) even numbered multiples give you better results than fractional values. In addition, Resolve also offers several scaling methods for sharper versus smoother results.
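The even-multiple point is easy to see if you express each per-axis scale factor as an exact fraction:

```python
from fractions import Fraction

def scale_ratio(src_width, dst_width):
    """Per-axis scale factor, reduced to an exact fraction."""
    return Fraction(dst_width, src_width)

print(scale_ratio(1920, 3840))  # 2   - a clean 2x blow-up
print(scale_ratio(1280, 1920))  # 3/2 - fractional; output pixels
                                #       straddle the source grid
```

With an integer factor, every source pixel maps cleanly onto a block of output pixels; a fractional factor forces the scaler to interpolate across misaligned pixel grids, which is where softness tends to creep in.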

In general, I feel that the most quality is retained when you start with 4K footage rather than HD, but that’s not a given. I’ve blown up ARRI ALEXA clips – which only ever existed as HD – to 4K and the result was excellent. That has a lot to do with what ARRI is doing in their sensor and the general detail of the ALEXA image. Clearly that’s been proven time and time again in theaters, where 2K, HD and 2.8K ARRIRAW footage recorded on ALEXAs has been blown up onto the large screen via 4K projection and the image is excellent.

Don’t get me wrong. I’m not saying you shouldn’t post in 4K if you have an easy workflow (see my post next week) to get there. What I am saying is that staying in 4K versus a 4K-HD-4K workflow won’t result in a dramatic difference in image quality, when you compare the two side-by-side at 100% pixel-for-pixel resolution. The samples below come from a variety of sources, including the blogs of John Brawley, Philip Bloom and OffHollywood Productions. In some cases the source images originated from pre-production cameras, so there may be image anomalies not found in actual shipping models of these cameras. Grades applied are mine.

View some of the examples below. Click on any of these images for the slide show. From there you can access the full size version of any of these comparisons.

©2016 Oliver Peters