What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s, dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy, with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs), the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so it is important that a negative cutter be detail-oriented, precise, and clean in their work. You don’t want excess glue at the splices and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a mistaken cut is made and the splice has to be repaired, at least one frame is lost in the process.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A Long Time Ago in a Cutting Room Far, Far Away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process points to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function was shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If work print was generated and edited by a film editor, then it was the video editor’s job to match those edits from the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and handled by an online editor – was the equivalent of the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards, working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had gone full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old-school film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.

The human side of the equation is more complex

Post-production tasks often involve a left-brain/right-brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many have sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences might be bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with some clips disabled based on changes made in each revision pass. As I’ve said before: You will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different than one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent to a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product working with the high-res media. Other than reviewing the work, there’s no value in having a highly paid film editor also handle basic assembly of the master. This is also true in many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles, along with finessing those that came out of the edit. Increasingly, Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

Loud, but not TOO Loud!

Clients always want a mix with impact. In their minds, impact equals loud, but that’s not really true. What you really want is a dynamic mix that fits within a comfortable range. The listener should hear some variance without the broadcasters and/or streaming platforms affecting the mix too badly through loudness normalization. Hence the battle over loudness and, in recent times, the various ways to measure it.

In the early 2000s the so-called loudness wars came to the attention of legislators who, through the FCC, eventually codified US standards with the CALM Act of 2010. This resulted in loudness targets measured in LUFS (loudness units relative to full scale), which has been the standard for broadcast mixes ever since. However, years before the CALM Act, noted mastering engineer Bob Katz offered a solution in a white paper to the AES. His proposal was the K-scale, which is still a valid way to mix and is an available feature in a number of metering plug-ins. This method is known to recording engineers, but probably something that most video editors never knew existed.

The K-scale is based on the concept that mixing should always be done at a reference speaker volume. Based on research done by Tomlinson Holman (the TH in Lucasfilm THX) and Dolby for theater sound calibration, that level is 85 dB SPL (sound pressure level) in movie theaters. The K-scale weights a VU meter’s scale according to three levels of headroom. These are labelled as K-12, K-14, and K-20 for 12, 14, and 20 dB of headroom, respectively. K-20 is intended for mixes with a wide dynamic range like classical music, K-14 is good for pop music, and K-12 for broadcast. Of course, those genres are only suggestions.
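
Just to make the arithmetic concrete, here’s a minimal Python sketch – my own illustration, not anything from Katz’s paper – that treats the number in each scale’s name as the headroom between 0 VU and digital full scale. So 0 VU on the K-14 scale lands at roughly -14 dBFS on an averaging meter.

```python
# Hypothetical helper illustrating the K-scale arithmetic: the scale's number
# is the headroom (in dB) between 0 VU and digital full scale (0 dBFS).

K_SCALES = {"K-12": 12, "K-14": 14, "K-20": 20}

def vu_to_dbfs(vu_reading, scale="K-14"):
    """Convert a K-scale VU reading to an approximate average level in dBFS."""
    return vu_reading - K_SCALES[scale]

print(vu_to_dbfs(0, "K-20"))   # -20 dBFS: the reference level with 20 dB of headroom
print(vu_to_dbfs(3, "K-14"))   # -11 dBFS: a brief peak into the red on K-14
```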

The point is to preserve dynamic range in the mix based on the type of material. The key to the weighting of the scale is that 0 VU always equates to a speaker volume of 85 dB SPL. It’s intended for professional music mixing and mastering studios. However, don’t discount its value for modern video post. K-scale metering has two advantages. First, you are mixing with a VU style meter, which has an averaged response. That is more akin to how humans perceive loudness than fast-responding full scale meters common to most NLEs and DAWs. Second, it encourages consistent monitoring levels when you mix audio.

How do you implement this in a small edit suite?

First, set up the room properly if possible – depending on available space. Place your speakers at the front. Your seating location should be about 1/3 of the way from the front, with 2/3 of the room space behind you. Sound absorption panels are highly recommended. The speakers should be about five feet from you. Calibrate your speaker volume using a sound pressure level meter. The simplest and cheapest option is a phone app. I use the free NIOSH SLM app on my iPhone set to C weighting. It’s fine for casual monitoring like this, but understand that a phone app is not accurate in critical environments due to how the built-in mic is calibrated.

There are several meter plug-ins that feature K-scale presets. I prefer the free mvMeter2 from TBProAudio, which emulates an analog VU meter. It includes numerous VU and PPM presets along with the three K-scale settings. Place this on your final master or mix bus without any other effects or level changes after the meter. Play a pink noise file in your timeline and set the output level so that it reads 0 VU on the meter scale that you’ve picked – for instance, 0 VU on the K-14 scale.
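
If you don’t already have a pink noise file on hand, you can generate one yourself. Here’s a minimal sketch in Python, assuming NumPy and the soundfile package are installed. The -20 dBFS RMS level is just a common reference choice – adjust it, or simply ride your output fader, until the meter reads 0 VU on the K-scale you’ve picked.

```python
# Sketch: generate a pink noise calibration file (assumes numpy + soundfile).
import numpy as np
import soundfile as sf

RATE = 48000          # sample rate in Hz
SECONDS = 30
TARGET_DBFS = -20.0   # RMS level of the file; a common reference choice

# Shape white noise to a 1/f (pink) spectrum in the frequency domain.
white = np.random.randn(RATE * SECONDS)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(white.size, d=1.0 / RATE)
freqs[0] = freqs[1]                                # avoid divide-by-zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n=white.size)

# Normalize the RMS level to the chosen dBFS target and write a WAV file.
target_rms = 10 ** (TARGET_DBFS / 20.0)
pink *= target_rms / np.sqrt(np.mean(pink ** 2))
sf.write("pink_noise_calibration.wav", pink, RATE)
```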

There are plenty of home theater guides about how far away you should be when measuring SPL. The official distance would be one meter from the speaker, but it’s also valid to measure from where you would normally sit. A comfortable listening volume is going to be somewhere between 70 and 85 dB SPL. Start at 85 dB. Test your mix playback. If this sounds too loud, you can lower the level and retest with the pink noise file. You might not operate at that level the entire time that you are editing. However, keeping this consistent level each time you mix will give you more predictable and translatable results.

Using this system to mix

Now, let’s put this into practice. Most of the mixes I do myself are for broadcast TV or some social media outlet, like YouTube. Each uses different loudness targets. I find that the K-12 or K-14 setting is best for my internet work and K-20 is best for broadcast. Note that Katz recommended the K-12 scale for broadcast; however, that was before the industry standardized at -23 or -24 LUFS. I find that if you use the K-20 scale (with 20 dB of headroom) that the result is closer to this spec. That difference is probably because Katz was talking about music broadcast, as in radio airplay of songs, rather than TV programs.

The reading on the meter will react like a typical analog-style VU meter. If you’ve mixed with VU meters in the past, then you’re probably used to the needle hovering around 0 VU and safely (and often) bouncing up into the red overload zone, up to +3 dB or more. When using the K-scale method, you’ll find that a lower meter reading will actually match what sounds right to you, assuming you followed the speaker calibration described above. Most of the time my levels are somewhere in the middle of the VU meter, well under 0 VU. It only bounces above 0 on a few peaks. If you compare this to Premiere Pro’s loudness meter, you’ll find that this equates to around -20 to -14 LUFS, which is a good target for platforms like YouTube.

Remember that a VU-style meter shows an average. Be sure to limit your signal for peaks. A limiter plug-in should be placed before the meter. In my work that’s usually set to a brick wall level of -3 or -6 dB true peak, depending on the program. This is lower than the -1 dB limit often recommended for music mixes. This approach to mixing should give you a decent amount of dynamic range without the need for severe normalization by the platforms.

It’s important to understand that many of these recommendations for streaming service targets are based on music mixing and not TV shows and films, where dialogue is usually dominant. In 2021 the AES posted an updated technical document (linked here) that spells out recommendations for dialogue-based content. Their suggested target is -18 LUFS based on measuring the dialogue tracks rather than the full mix.

Alternatives

The K-scale system is old, though still valid. Of course, NLEs like Premiere Pro and Resolve include their own loudness meters and there are numerous third-party plug-ins using traditional and modernized ways to measure loudness. One modern metering technique is to measure the difference between peaks and short-term loudness, aka PSR, and/or peaks versus long-term loudness, aka PLR. A plug-in like MeterPlugs’ Dynameter provides you with this feedback. The reading will tell you how the mix level will be handled by the loudness management of the top streamers. Loudness Penalty is an online resource that will tell you this, as well.
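
Both acronyms boil down to simple subtraction, so a couple of lines of Python show what a meter like this is reporting. The numbers below are purely illustrative – not readings from Dynameter or any other plug-in.

```python
def psr(true_peak_dbtp, short_term_lufs):
    # Peak-to-short-term-loudness ratio: the dynamics of whatever is playing right now.
    return true_peak_dbtp - short_term_lufs

def plr(true_peak_dbtp, integrated_lufs):
    # Peak-to-loudness ratio: dynamics measured against the whole program.
    return true_peak_dbtp - integrated_lufs

# Example: a mix peaking at -1 dBTP with -14 LUFS integrated loudness has a PLR
# of 13 LU. Turning the entire mix down moves both numbers, so the PLR stays put.
print(plr(-1.0, -14.0))
```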

Lastly, what about headphone mixing? There are pros and cons, but in order to trust your ears and know whether your mix will translate to various speaker systems, Waves has a solution. Their Nx technology emulates the three-dimensional sound and space of several famed studio control rooms, including those of Chris Lord-Alge, Ocean Way, and Abbey Road.

Place the plug-in last in your mix bus, select the correct headphone or ear bud model EQ curve (270 presets are available), and you’ll experience your mix based on an emulation of those speaker systems and rooms. Each plug-in includes head tracking and Waves sells a separate Nx tracking sensor. However, it can also use your front-facing computer camera.

With the plug-in enabled, you’ll hear a proper sound image as if listening to those speakers (Abbey Road, Ocean Way, etc.) instead of the usual left/right split of headphones. With head tracking, the spatial image shifts as you move your head. If you disable head tracking, you can still manually pan the image around the room and hear the result in your headphones. Since the plug-in is only designed for headphone monitoring, disable it when listening on external speakers and when you output the mix. Since the plug-in manipulates phase to create these emulations, material that is truly just mono might sound a bit odd compared with full stereo mixes.

Granted, you’ll probably never own the kind of speaker systems these studios use, but the real point is that these plug-ins give you a transportable reference based on some of the best control room environments. As you move between different systems with different monitor set-ups, you can maintain a common monitoring reference for all of your mixes.

Happy mixing!

©2023 Oliver Peters

NLE Tips – Audio Track Mixing in Final Cut Pro

In the past I’ve explained how audio is routed through the Final Cut Pro architecture. I’ve also discussed track-based audio mixing, predominantly based on the workflow in Premiere Pro. Today I’d like to extend that workflow into the realm of Final Cut Pro.

Everyone knows that FCP is not track-based. The timeline consists of a string of audio/video clips called the primary storyline, which empowers its magnetic feature. Additional audio and video clips can be attached to the clips on the primary storyline as connected clips – video above, audio below. At this level the software is indeed trackless.

Understanding audio roles and lanes

Several years ago, Apple added the “roles” feature. Audio and video clips can be assigned default and/or custom role designations, which can be used for visual organization and other functions. For example, do you want to export a “textless” ProRes file from your timeline? Then simply disable the Titles video role in the export dialogue.

Apple engineers have done more with audio roles, which can be further grouped into audio “lanes” through the timeline index window. If you’ve assigned the correct audio roles to each clip, then all dialogue clips are grouped into the dialogue lane, all music clips in the music lane, and so on. If you exported an FCPXML file for an outside mixer, then audio roles help to organize the track layout in other audio software.
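
As a quick preflight before a handoff, you can also scan the exported FCPXML and list which audio roles actually made it into the file. The Python sketch below is a loose illustration and doesn’t assume a particular FCPXML version – it simply collects any role or audioRole attributes it finds, so sanity-check it against the schema of the file you actually export (the filename is a placeholder).

```python
# Rough scan of an exported FCPXML for audio role assignments.
import xml.etree.ElementTree as ET
from collections import Counter

roles = Counter()
for element in ET.parse("locked_cut.fcpxml").iter():      # placeholder filename
    for attr in ("role", "audioRole"):                     # attribute names vary by FCPXML version
        if attr in element.attrib:
            roles[element.attrib[attr]] += 1

for role, count in roles.most_common():
    print(f"{role}: {count} clip(s)")
```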

At this point the clips are still individual. However, once you combine all clips in the sequence into a single compound clip, the audio for all clips within an audio lane is summed together. This is similar to a group or submix bus in a DAW. The lanes are in turn summed together and sent to the mix output. In essence, each audio lane within the compound clip is similar to a summing track stack in Logic Pro. You can adjust volume and apply effects to the entire lane, on top of anything done to individual clips contained inside of that lane.

Mixing in FCP on real-world projects

I’m working on an Alaska travelogue series – on-camera host on location, voice-overs, voice-over pick-ups, and music. The host stand-ups were recorded in two environments – close to the shoreline and in a quiet wooded area.

The location sound mixer recorded both a lavalier mic and a boom mic on separate channels. My personal preference is the boom, but sometimes the waves on the beach created too much background noise. In those cases, it’s the lav mic, but then I have to contend with the duller sound of the mic under the clothing, along with some rustle.

The next challenge is getting the voice-overs to sound close to the on-camera audio. These were recorded on location, but in a quiet room. The final challenge is to match the sonic quality of the voice-over pick-ups (done by the host at his home) to the original voice-overs.

Step One

The first step in this process is to assign the proper audio roles before clips are edited into the FCP sequence. Roles are quite versatile. If you had multiple speakers, each one could be assigned a separate role. In this project, my audio roles are Dialogue, VO, VO2, and Music. Once clips are imported and roles assigned, I can edit as I normally would in Final Cut. I personally add very few audio effects at this point to the individual clips, because I will do that later. In addition, certain effects, like noise reduction, simply don’t work very well with short clips (more on that in a minute). So I only add what I need to “sell” the cut.

Step Two

Once the cut is approved and locked, I can move on to a final mix. To start, I’ll remove any audio effects that I’ve added to individual clips. Then, I meticulously go through and even out any level imbalances. Final Cut Pro features multiple gain stages. You have the clip volume control, but if you expand the audio, you see the individual channels, which each have volume controls, as well. Each of these can be raised by up to 12 dB. So if you’ve applied 12 dB to the clip and it’s still too quiet, expand the audio and bump up the channel volume. Or work this process in reverse. My objective is to end up with a clip volume that’s a bit hot in the peaks and then use the range tool to highlight the larger peaks and duck them down a bit.

Expand audio and make sure you have overlaps with fade handles between all clips. This is somewhat time-consuming. It’s far simpler in Premiere Pro, where you can add audio dissolves (crossfades) across all audio edits in the timeline in a single step. But it’s a necessary step, including the addition of room tone/ambience to fill any gaps in the speech.

Finally, check the music. Make sure the edits work musically. Overall, the music volume can be a bit loud at this stage, but you want to make sure the balance is right for the entire sequence. So pay attention to the proper and graceful ducking of music around spoken audio.

Step Three

After you’ve made everything as uniform as possible, compound the sequence. Open the timeline index and enable “show audio lanes,” which expands the audio of the compound. You’ll now see a “track” or summing bus for each audio role – Dialogue, VO, VO2, and Music. When you select an audio lane, you can adjust its volume and apply audio effects to only that lane. That lane’s audio parameters are shown in the inspector pane.

Selecting the topmost level of the clip displays the output (i.e. mix) bus parameters. Additional effects can be added here. It’s fine to apply and adjust such “master” effects, but I recommend that you do not make any changes to the volume. That’s because the volume control comes after any effects, which would include a meter plug-in, such as the built-in multimeter plug-in. Leave the volume slider alone if you want to see accurate volume levels.

Aside from mixing in tracks/busses, audio roles add another value at the time of export. My deliverables include a ProRes file without titles, as well as audio that’s split into separate tracks. In Final Cut Pro’s export settings, I can select Multitrack QuickTime and then arrange the combination and order of roles. For this project, it’s a ProRes file with four stereo tracks corresponding to the four roles that I’m working with.

Note that when you export a multitrack file, each lane output also has any master output effects added to it. For example, if your mix uses a compressor and a limiter on the main output of the compound clip, then each lane/bus/track of the multitrack will now also have the added effect of that compression and limiting. If you don’t want this, then make sure to disable these effects prior to exporting a Multitrack QuickTime file.

Which effects should you use?

I’ve now discussed how the process works, but what combination of effects should you be using? Obviously that’s a question of style and personal taste. The type of effects for me will be similar to my description in the Premiere Pro article. I tend to stick with native Final Cut Pro effects, so that I don’t have to worry about what’s installed if I move to another Mac or a different editor has to step in. Also, Final Cut Pro is often a poor host for some third-party audio plug-ins. I don’t know the reason, but have been told it’s up to those developers to optimize their tools for FCP. In most cases these same plug-ins work well in Logic Pro, not to mention other non-Apple applications. Go figure!

I’m happy with most of the built-in Apple audio plug-ins, with the exception of noise reduction and other audio repair tasks. The Accusonus tools are my go-to, but they are sadly no longer available. After that it’s the RX package from iZotope. If you have a really challenging piece of audio, then use the standalone RX package on that clip and re-import. If you don’t own either of these, then the newly added voice isolation feature in Resolve is pretty sweet (and better than what’s in FCP). Another impressive contender is Adobe’s Podcast beta. The AI-powered voice enhancement feature is available for free use through their web portal. I’ve used it for some really poor Zoom interview audio and it did an outstanding job of cleaning up all manner of audio defects.

Where this explanation is most pertinent is on location-based dialogue recordings. These are the ones that often benefit from noise removal/repair. These tools require consistency and some lead-in to the first audio, so they are best applied to full tracks and not individual clips. That’s why I make sure I have overlaps and fill in gaps and do all of this processing on the lanes of the compound and not on individual clips. If you have different dialogue sections – some noisy and some clean – then it’s best to organize these into separate audio roles, so that they are sorted out correctly once you compound the clip.

My typical processing chain

My FCP effects layout is similar to the description in the Premiere Pro post. Dialogue and VO tracks get some noise reduction, EQ, and compression. Voice-overs are particularly susceptible to plosives (popping “p” consonants) and sibilance, so plosive and de-essing filters are useful. For music, I usually spread the stereo image more and dip the EQ in the midrange. Plus some compression. All of this is designed to allow the dialogue to sit better in the mix. 

The last level of processing is what you do to the top level of the compound clip itself. That’s a bit like mastering in audio production. Applying effects to the compound clips is analogous to applying effects to a mix or output bus in the DAW world. On this particular chain, it’s EQ, exciter, compressor, adaptive limiter, and the multimeter. The effects stack is processed before the volume slider. Since I’m judging peak and loudness levels with the multimeter plug-in, I don’t want to make any volume slider changes on the compound clip, because those would be applied after the reading on the multimeter.

You’ll notice from my screen grabs that different compressor models have been used. These are all from the same Logic Pro compressor in FCP. This single plug-in features various presets designed to emulate tried-and-true analog compressors favored by top recording engineers/mixers.

Final thoughts 

As with my other Final Cut Pro audio articles and posts, I can already hear some screaming that this is just a workaround for the fact that Final Cut Pro has no “true” audio mixing panel. While that may be true, it’s also irrelevant. Until such time as Apple’s ProApps engineers redesign the audio section or add a “roles-based mixer” to the tool set, this is the software you have. If you want to mix in Final Cut Pro and deliver a properly mixed master file without using specialized audio software, then it’s best to understand how to achieve the required results.

If you step into the compound clip to make any editorial changes to the sequence or to individual clips, then you will not hear the results of the top-level mixing and effects. The proper mix is only heard when you step back out. This is a shortcoming compared with this same process in Premiere Pro. Therefore, when you are editing in Final Cut Pro, it’s best to leave all of the final mixing until the end. In Premiere Pro, I tend to mix as I go.

Hopefully this post gives you some insight into the “guts” of the software. If you can’t send the audio to a mix engineer and don’t want to bounce over to Logic Pro, Pro Tools, or Resolve (Fairlight) yourself, then there’s no reason Final Cut Pro can’t be made to work for you.

©2023 Oliver Peters

Final Cut Pro + DaVinci Resolve

The concept of offline and online editing goes back to the origins of film editing. Work print was cut by the film editor during the creative stage of the process and then original negative was conformed by the lab and married to the final mix for the release prints (with a few steps in between). The terms offline and online were lifted from early computer lingo and applied to edit systems when the post process shifted from film to video. Thus offline equates to the creative editorial stage, while conforming and finishing services are defined as online.

Digital nonlinear edit systems evolved to become capable of handling all of these stages of creative editorial and finishing at the highest quality level. However, both phases require different mindsets and skills, as well as more advanced hardware for finishing. And so, the offline/online split continues to this day.

If you are an editor cutting local market spots, YouTube videos, corporate marketing pieces, etc., then you are probably used to performing all of these tasks on your own. However, most major commercials, TV shows, and films definitely split them up. In feature films and high-end TV shows, the film editors are separate from the sound editing/mixing team and everything goes through the funnel of a post facility that handles the finishing services. The latter is often referred to as the DI (digital intermediate) process in feature film productions.

You may be cutting on Media Composer, Premiere Pro, or Final Cut Pro, but the final assembly, insertion of effects, and color correction will likely be done with a totally different system and/or application. The world of finishing offers many options, like SGO Mistika, Quantel Rio, and Filmlight Baselight. But the tools that pop up most often are Autodesk Flame, DaVinci Resolve, and Avid Symphony (the latter for unscripted shows). And of course, Pro Tools seemingly “owns” the audio post market.

Since offline/online still exists, how can you use modern tools to your advantage?

If Apple’s Final Cut Pro is your main axe, then you might be reading this and think that you can easily do this all within FCP. Likewise, if you’ve shifted to Resolve, you’re probably wondering, why not just do it all in Resolve? Both concepts are true in theory; however, I contend that most good editors aren’t the best finishers and vice versa. In addition, it’s my opinion that Final Cut is optimized for editing, whereas Resolve is optimized for finishing. That doesn’t make them mutually exclusive. In fact, the opposite is true. They work great in tandem and I would suggest that it’s good to know and use both.

Scenario 1: If you edit with FCP, but use outside services for color and sound, then you’ll need to exchange lists and media. Typically this means AAF for sound and FCPXML for Resolve color (or possibly XML or AAF if it’s a different system). If those systems don’t accept FCPXML lists, then normally you’d need to invest in tools from Intelligent Assistance and/or Marquis Broadcast. However, you can also use Resolve to convert the FCPXML list into other formats.

If they are using Resolve for color and you have your own copy of Resolve or Resolve Studio, then simply import the FCPXML from Final Cut. You can now perform a “preflight check” on your sequence to make sure everything translated correctly from Final Cut. Take this opportunity to correct any issues before it goes to the colorist. Resolve includes media management to copy and collect all original media used in your timeline. You have the option to trim files if these are long clips. Ideally, the DP recorded short takes without a lot of resets, which makes it easy to copy the full-length clip. Since you are not rendering/exporting color-corrected media, you aren’t affected by the UHD export limit of the free Resolve version.

After media management, export the Resolve timeline file. Both media and timeline file can go directly to the colorist without any interpretation required at the other end. Finally, Resolve also enables AAF exports for audio, if you need to send the audio files to a mixer using Pro Tools.
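
For repeat handoffs, this conversion can even be scripted. The sketch below assumes Resolve Studio’s bundled scripting module (DaVinciResolveScript) is on your Python path and that Resolve is running; the export constants are my assumptions based on the scripting README that ships with Resolve, so verify the names against your version.

```python
# Sketch: import an FCPXML timeline and export an AAF for the sound mixer.
# Assumes Resolve is running and its scripting module is importable.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Import the locked Final Cut Pro cut (media should already be in the media pool).
timeline = media_pool.ImportTimelineFromFile("/path/to/locked_cut.fcpxml")

if timeline:
    # Export an AAF of that timeline for Pro Tools (constant names are assumptions).
    timeline.Export("/path/to/locked_cut.aaf", resolve.EXPORT_AAF, resolve.EXPORT_AAF_NEW)
```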

Scenario 2: What if you are doing everything on your own and not sending the project to a colorist or mixer for finishing? Well, if you have the skillset and understand the delivery criteria, then Resolve is absolutely your friend for finishing the project. For one thing, owning Resolve means you could skip purchasing Apple Motion, Compressor, and/or Logic Pro, if you want to. These are all good tools to have and a real deal from a cost standpoint; however, Resolve or Resolve Studio definitely covers most of what you would do with these applications.

Start the same way by sending your FCPXML into Resolve. Correct any editorial issues, flatten/collapse compound and multicam clips, etc. Insert effects and titles or build them in the Fusion page. Color correct. When it comes to sound, the Fairlight page is a full-fledged DAW. Assuming you have the mixing chops, then Fairlight is a solid stand-in for Logic Pro, Pro Tools, or other DAWs. Finally, export the various formats via the Deliver page.

Aside from the obvious color and mixing superiority of Resolve over Final Cut Pro, remember that you can media-manage, as well as render out trimmed clips – something that FCP won’t do without third-party applications. It’s also possible to develop proxy workflows that work between these two applications.

While both Final Cut Pro and DaVinci Resolve are capable of standing alone to cover the creative and finishing stages of editing, the combination of the two offers the best of all worlds – a fast editing tool and a world-class finishing application.

©2023 Oliver Peters

NLE Tips – Audio Track FX

I’ve written quite a few blog posts and articles about audio mixing methods in Premiere Pro and Final Cut Pro. But over time, methods evolve, change, or become more streamlined, so it’s time to revisit the subject. When you boil down most commercials and short-subject videos (excluding trailers), the essence of the soundtrack is just voice against a music bed with some sound effects. While I’ll be the first to say you’ll get the best results sending even a simple mix to a professional mixer, often budget and timeframe don’t allow for that. And so, like most editors, I do a lot of my own mixes.

My approach to these mixes is straightforward and rather systematic. I’m going to use Premiere Pro examples, but track-based mixing techniques can be universally applied to all NLEs. Even FCP works with track-based mixing if you properly use its audio roles function. I will almost never apply audio effects at the individual clip level, unless it’s something special, like simulated phone call voice processing.

All dialogue clips usually end up on A1 with crossfades between them to smooth the edits. Add room tone between clips for consistency. This also helps the processing of the track effects, especially noise reduction. If I have more than one voice or character, then each goes onto a separate track. I will use clip volume adjustments in order to get the track to sound even across the length of the video. With this done, it’s time to move to the track mixer.

In this example from a recent product video, the reviewer’s voice is on A1. There’s a motor start-up sound that I’ve isolated and placed on A2. Music is on A3 and then the master mix bus. These audio plug-in effects are the ones I use on almost every video in a pretty systematic fashion. I have a nice collection of paid and free, third-party audio plug-ins, but I often stick to only the stock effects that come with a given NLE. That’s because I frequently work with other editors on the same project and I know that if I stick with the standard effects, then they won’t have any compatibility issues due to missing plug-ins. The best stock plug-in set can be found in Logic Pro and many of those are available in FCP. However, the stock audio effects available in Premiere are solid options for most projects.

Audio track 1 – Dialogue – Step 1 – noise reduction. Regardless of how clean the mic recording is, I will apply noise reduction to nearly every voice track recorded on location. My default is the light noise reduction preset, where I normally tweak only the percentage. If you have a really noisy recording, I suggest using Audition first (if you are a Creative Cloud subscriber). It includes several noise reduction routines and a spectral repair function. Process the audio, bounce out an export, and bring the cleaned-up track into your timeline. However, that’s going to be the exception. The new dialogue isolation feature in Resolve 18.1 (and later) as well as iZotope RX are also good options.

Step 2 – equalization. I apply a parametric EQ effect after the noise reduction stage. This is just to brighten the voice and cut any unnecessary low end. Adobe’s voice enhancer preset is fine for most male and female voices. EQ is very subjective, so feel free to tweak the settings to taste.

Step 3 – compressor. I prefer the tube-modeled compressor set to the voice leveling preset for this first compression stage. This squashes any of the loudest points. I typically adjust the threshold level. You can also use this filter to boost the gain of the voice as you see in the screenshot. You really need to listen to how the audio sounds and work interactively. Play this compressor off against the audio levels of the clip itself. Don’t just squash peaks using the filter. Duck any really loud sections and/or boost low areas within the clip for an even sound without it becoming overly compressed.

Audio track 2 – Sound FX – Step 1 – equalization. Many of my videos are just voice and music, but in this case, the reviewer powers up a boat motor and cruises off at the end of the piece. I wanted to emphasize the motor rumble, so I split that part of the clip’s audio and moved it down to A2. This let me apply different effects than the A1 track effects. Since I wanted a lot of bottom end, I used parametric EQ at full reset and boosted the low end to really get a roaring sound.

Step 2 – compressor. I once again applied the tube-modeled compressor in order to keep the level tame with the boosted EQ settings.

Audio track 3 – Music – Step 1 – equalization. Production music helps set the mood and provides a bed under the voice. But you don’t want it to compete. Before applying any effects, get the volume down to an acceptable level and adjust any really loud or quiet parts in the track. Then, apply a parametric equalizer in the track mixer panel. Pull down the level of the midrange in the frequencies closest to the voice. I will also adjust the Q (range and tightness of the bell curve at that frequency). In addition, I often boost the low and high ends. In this example, the track included a bright hi-hat, which I felt was a bit distracting. And so in this example, I also pulled down some of the high end.

Step 2 – stereo expander. This step is optional, but it helps many mixes. The stereo expander effect pushes the stereo image out to the left and right, leaving more of the center open for voice. However, don’t get carried away, because stereo expander plug-ins also alter the phase of the track. This can potentially throw some of the music out of phase when listened to in mono, which could cause your project to be rejected. If you are mixing for the web, then this is less of an issue, since most modern computers, tablets, smart phones, not to mention ear buds, etc. are all set up for stereo. However, if your mix is for broadcast, then be sure to check your mix for proper phase correlation.

Mix bus – Step 1 – multi-band compression. The mix bus (aka master bus or output bus) is your chance to “glue” the mix together. There are different approaches, but for these types of projects, I like to use Adobe’s multi-band compressor set to the classical master preset. I adjust the threshold of the first three bands to -20 and set a compression ratio of 4 across the board. This lightly knocks down any overshoots without being heavy-handed. The frequency ranges usually don’t need to be adjusted. Altering the output gain drives the volume hitting the limiter in the next step. You may or may not need to adjust this depending on your target level for the whole mix.

Step 2 – hard limiter. The limiter is the last plug-in that controls output volume. This is your control to absolutely stay below a certain level. I use the -3 or -6 preset (depending on the loudness level I’m trying to achieve) and reduce the input boost back to 0. I also change it to read true peaks instead of only peak levels.

Step 3 – loudness meter. The loudness meter keeps you honest. Don’t just go by the NLE’s default audio meters. If you have been mixing to a level of just below 0 on those, then frankly you are mixing the wrong way for this type of content. Really loud mixes close to 0 are fine for music production, but not OK for any video project.

The first step is to find out the target deliverable and use the preset for that. There are different presets for broadcast loudness standards versus web streaming, like YouTube. These presets don’t change the readout of the numbers, though. They change the color indicators slightly. Learn what those mean. 

Broadcast typically requires integrated loudness to be in the -23 to -24 area, whereas YouTube uses -14. I aim for a true peak target of -3 or -6. This tracks with the NLE audio meters at levels peaking in the -9 to -6 range. Adjusting the gain levels of the multi-band compressor and/or limiter helps you get to those target levels.
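
If you want a second opinion outside of the NLE, the exported mix can be checked against these numbers with a short script. This sketch assumes the soundfile and pyloudnorm Python packages are installed; note that the peak it prints is a plain sample peak rather than an oversampled true peak, so treat it as a rough check.

```python
# Sketch: verify integrated loudness and (sample) peak of an exported mix.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")          # placeholder filename

meter = pyln.Meter(rate)                       # ITU-R BS.1770 loudness meter
integrated = meter.integrated_loudness(data)   # in LUFS
sample_peak_dbfs = 20 * np.log10(np.max(np.abs(data)))

print(f"Integrated loudness: {integrated:.1f} LUFS (YouTube ~ -14, broadcast ~ -23/-24)")
print(f"Sample peak: {sample_peak_dbfs:.1f} dBFS (keep under your -3 or -6 true peak limit)")
```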

©2022 Oliver Peters