NLE Tips – Audio Track FX

I’ve written quite a few blog posts and articles about audio mixing methods in Premiere Pro and Final Cut Pro. But over time, methods evolve, change, or become more streamlined, so it’s time to revisit the subject. When you boil down most commercials and short-subject videos (excluding trailers), the essence of the soundtrack is just voice against a music bed with some sound effects. While I’ll be the first to say you’ll get the best results sending even a simple mix to a professional mixer, often budget and timeframe don’t allow for that. And so, like most editors, I do a lot of my own mixes.

My approach to these mixes is straightforward and rather systematic. I’m going to use Premiere Pro examples, but track-based mixing techniques can be applied universally to all NLEs. Even FCP works with track-based mixing if you properly use its audio roles function. I will almost never apply audio effects at the individual clip level, unless it’s something special, like simulated phone call voice processing.

All dialogue clips usually end up on A1 with crossfades between them to smooth the edits. Add room tone in the gaps for consistency. This also helps the track effects do their job, especially noise reduction. If I have more than one voice or character, then each goes onto a separate track. I use clip volume adjustments to get the track to sound even across the length of the video. With this done, it’s time to move to the track mixer.

In this example from a recent product video, the reviewer’s voice is on A1. There’s a motor start-up sound that I’ve isolated and placed on A2. Music is on A3, and then there’s the master mix bus. These audio plug-in effects are the ones I use on almost every video in a pretty systematic fashion. I have a nice collection of paid and free third-party audio plug-ins, but I often stick to only the stock effects that come with a given NLE. That’s because I frequently work with other editors on the same project, and I know that if I stick with the standard effects, then they won’t have any compatibility issues due to missing plug-ins. The best stock plug-in set can be found in Logic Pro and many of those are available in FCP. However, the stock audio effects available in Premiere are solid options for most projects.

Audio track 1 – Dialogue – Step 1 – noise reduction. Regardless of how clean the mic recording is, I will apply noise reduction to nearly every voice track recorded on location. My default is the light noise reduction preset, where I normally tweak only the percentage. If you have a really noisy recording, I suggest using Audition first (if you are a Creative Cloud subscriber). It includes several noise reduction routines and a spectral repair function. Process the audio, bounce out an export, and bring the cleaned-up track into your timeline. However, that’s going to be the exception. The new dialogue isolation feature in Resolve 18.1 (and later) as well as iZotope RX are also good options.

Step 2 – equalization. I apply a parametric EQ effect after the noise reduction stage. This is just to brighten the voice and cut any unnecessary low end. Adobe’s voice enhancer preset is fine for most male and female voices. EQ is very subjective, so feel free to tweak the settings to taste.

Step 3 – compressor. I prefer the tube-modeled compressor set to the voice leveling preset for this first compression stage. This squashes any of the loudest points. I typically adjust the threshold level. You can also use this filter to boost the gain of the voice as you see in the screenshot. You really need to listen to how the audio sounds and work interactively. Play this compressor off against the audio levels of the clip itself. Don’t just squash peaks using the filter. Duck any really loud sections and/or boost low areas within the clip for an even sound without it becoming overly compressed.

Audio track 2 – Sound FX – Step 1 – equalization. Many of my videos are just voice and music, but in this case, the reviewer powers up a boat motor and cruises off at the end of the piece. I wanted to emphasize the motor rumble, so I split that part of the clip’s audio and moved it down to A2. This let me apply different effects than the A1 track effects. Since I wanted a lot of bottom end, I started with the parametric EQ at a flat reset and boosted the low end to really get a roaring sound.

Step 2 – compressor. I once again applied the tube-modeled compressor in order to keep the level tame with the boosted EQ settings.

Audio track 3 – Music – Step 1 – equalization. Production music helps set the mood and provides a bed under the voice. But you don’t want it to compete. Before applying any effects, get the volume down to an acceptable level and adjust any really loud or quiet parts in the track. Then, apply a parametric equalizer in the track mixer panel. Pull down the level of the midrange in the frequencies closest to the voice. I will also adjust the Q (range and tightness of the bell curve at that frequency). In addition, I often boost the low and high ends. In this case, the track included a bright hi-hat, which I felt was a bit distracting, so I also pulled down some of the high end.

Step 2 – stereo expander. This step is optional, but it helps many mixes. The stereo expander effect pushes the stereo image out to the left and right, leaving more of the center open for voice. However, don’t get carried away, because stereo expander plug-ins also alter the phase of the track. This can potentially throw some of the music out of phase when listened to in mono, which could cause your project to be rejected. If you are mixing for the web, then this is less of an issue, since most modern computers, tablets, smartphones, and earbuds are set up for stereo. However, if your mix is for broadcast, then be sure to check it for proper phase correlation.
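Phase correlation is easy to reason about numerically. Here is a hedged Python sketch (using numpy, not any NLE’s actual meter) that computes the correlation between the left and right channels: values near +1 are mono-safe, while values near -1 mean the channels cancel when summed to mono.

```python
import numpy as np

def phase_correlation(left: np.ndarray, right: np.ndarray) -> float:
    """Pearson correlation between channels: +1 = mono-compatible,
    0 = fully decorrelated, -1 = out of phase (cancels when summed)."""
    return float(np.corrcoef(left, right)[0, 1])

# A widener that inverts part of one channel drives correlation negative.
t = np.linspace(0, 1, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
print(round(phase_correlation(tone, tone), 2))   # 1.0
print(round(phase_correlation(tone, -tone), 2))  # -1.0
```

A hardware or plug-in correlation meter does the same thing continuously over short windows of the program audio.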

Mix bus – Step 1 – multi-band compression. The mix bus (aka master bus or output bus) is your chance to “glue” the mix together. There are different approaches, but for these types of projects, I like to use Adobe’s multi-band compressor set to the classical master preset. I set the threshold of the first three bands to -20 with a compression ratio of 4 across the board. This lightly knocks down any overshoots without being heavy-handed. The frequency ranges usually don’t need to be adjusted. Altering the output gain drives the volume hitting the limiter in the next step. You may or may not need to adjust this depending on your target level for the whole mix.
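The math behind those settings is simple. This is a minimal Python sketch of a hard-knee downward compressor’s static curve, assuming the -20 threshold and 4:1 ratio mentioned above (it is not Adobe’s actual implementation, which adds attack, release, and knee behavior):

```python
def compressed_level(level_db: float, threshold_db: float = -20.0,
                     ratio: float = 4.0) -> float:
    """Static curve of a downward compressor (hard knee):
    signal above the threshold is reduced by the ratio."""
    if level_db <= threshold_db:
        return level_db          # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

# A -12 dB overshoot (8 dB above the -20 dB threshold) comes out at -18 dB:
print(compressed_level(-12.0))  # -18.0
```

In other words, an 8 dB overshoot is squeezed down to 2 dB, which is why the result reads as gentle rather than squashed.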

Step 2 – hard limiter. The limiter is the last plug-in that controls output volume. This is your control to absolutely stay below a certain level. I use the -3 or -6 preset (depending on the loudness level I’m trying to achieve) and reduce the input boost back to 0. I also change it to read true peaks instead of only peak levels.

Step 3 – loudness meter. The loudness meter keeps you honest. Don’t just go by the NLE’s default audio meters. If you have been mixing to a level of just below 0 on those, then frankly you are mixing the wrong way for this type of content. Really loud mixes close to 0 are fine for music production, but not for video deliverables.

The first step is to find out the target deliverable and use the preset for that. There are different presets for broadcast loudness standards versus web streaming, like YouTube. These presets don’t change the readout of the numbers, though. They change the color indicators slightly. Learn what those mean. 

Broadcast typically requires integrated loudness to be in the -23 to -24 area, whereas YouTube uses -14. I aim for a true peak target of -3 or -6. This tracks with the NLE audio meters at levels peaking in the -9 to -6 range. Adjusting the gain levels of the multi-band compressor and/or limiter helps you get to those target levels.
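If you’d rather sanity-check the numbers than trust the meters, the gain needed to hit a loudness target is just the difference between the target and the measured integrated value, limited by the remaining true peak headroom. A hedged Python sketch (function and parameter names are illustrative, not from any tool):

```python
def normalization_gain(measured_lufs: float, target_lufs: float,
                       measured_tp: float, tp_ceiling: float) -> float:
    """Gain (dB) to reach an integrated loudness target, limited so
    the true peak never exceeds the ceiling."""
    gain = target_lufs - measured_lufs     # loudness offset
    headroom = tp_ceiling - measured_tp    # room left before the ceiling
    return min(gain, headroom)

# A -20 LUFS mix peaking at -8 dBTP, aimed at YouTube's -14 LUFS / -3 dBTP:
print(normalization_gain(-20.0, -14.0, -8.0, -3.0))  # 5.0
```

In that example the peaks run out of room before the loudness target is reached, which is exactly the situation where the limiter in the previous step earns its keep.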

©2022 Oliver Peters

NLE Tips – Proxy Hacks

Editors often think of the clip within the edit application’s browser as the media file. But that clip is only a facsimile of the actual media. It links to potentially three different assets on the hard drive – the original camera (or sound) file, optimized media, and/or proxy media.

Optimized media. You may decide to create optimized media when the original media’s codec or file format is too taxing on your system. For example, you might convert a media file made up of an image sequence into an optimized movie file using one of the ProRes or DNx codecs. When you create optimized media, that is often the media used for finishing instead of the original camera media. For the sake of simplicity I’ll refer to original media from here on, but understand that it could be optimized media or original camera files.

Proxy media. There are many reasons for creating proxy media – portability, system performance, remote editing, etc. Proxy media is usually lightweight, more highly compressed, and of a lower resolution than the original media. Nearly all editing applications enable users to edit with lightweight proxy media in lieu of heavier, native camera files. When proxy media has been created, then the media clip in the NLE’s browser can actually link to both the original camera file, as well as the proxy media file. Software “toggles” in the application can seamlessly swap the link from one type of media file to the other.

The NLEs that offer proxy editing workflows integrate routines to transcode and automatically switch the links between proxy and original camera files on the hard drive. DaVinci Resolve 18 is the newest in this group with the addition of the Blackmagic Proxy Generator application. However, that tool only works with Resolve Studio 18 downloaded from Blackmagic Design’s website. The Generator is an addition to Resolve 18 and augments the built-in transcoding tools. In either case, you don’t have to use the built-in routines nor the Blackmagic Proxy Generator. You can encode proxies using different software and even different computers. Then you can attach those proxies to the clips in the editing application at a later time.

Creating external proxy media

Proxies can be created with any encoding software. I like Apple Compressor, which includes a category of presets specifically designed for proxy media generation. The presets can be modified according to your needs. For instance, you can add a LUT and effects, like a timecode overlay. This makes it easy to know whether you are toggled to the original or the proxy media within the NLE.

Before creating any proxy files, make sure that your original files all have unique file names. Rename any duplicates or those with generic file names, like Clip001, Clip002, etc. There are several key parameters needed for successful relinking between original and proxy media. These include matching names, frame rates, timecode, lengths, and audio channel configurations. Some applications let you force a relink when some of these items don’t match, but it will usually be one file at a time.

Frame sizes can be smaller, since that’s the point of any proxy workflow. For example, you might start with 4K/UHD original media but create half-size HD proxies. The embedded metadata in the proxy file informs the NLE so that the correct size is maintained when switching between the two. Likewise, the codecs do not need to match. You can have 4K/UHD ProRes HQ originals and HD H.264 proxy media (I prefer ProRes Proxy). The point is to have proxy media with smaller file sizes, which play back more efficiently on your computer.
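The relink criteria from the previous paragraphs can be expressed as a simple check. This hypothetical Python sketch (the field names are illustrative, not from any NLE’s API) compares the parameters that must agree for a seamless relink, while deliberately ignoring frame size and codec:

```python
def proxies_match(original: dict, proxy: dict) -> bool:
    """Relink sanity check: name, frame rate, timecode, duration, and
    audio channel count must agree; frame size and codec may differ."""
    keys = ("name", "fps", "start_tc", "frames", "audio_channels")
    return all(original[k] == proxy[k] for k in keys)

cam = {"name": "A001_C003", "fps": 23.976, "start_tc": "01:02:03:00",
       "frames": 1440, "audio_channels": 2, "width": 3840}
prx = dict(cam, width=1920)  # a half-size proxy still matches
print(proxies_match(cam, prx))  # True
```

A proxy re-encoded at the wrong frame rate or with collapsed audio channels would fail this kind of check, which is why those transcodes end up as one-at-a-time forced relinks.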

When you transcode proxy media files in Compressor or any other encoding application, it’s best to render them into a folder specifically called Proxy. This can be anywhere you like, but it’s best to have it near your original camera files. If you have multiple camera file folders – organized by camera roll, day, camera model, etc – then there are two options. You can either have one single Proxy folder for all renders or have a separate subfolder called Proxy within each camera roll folder.
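Either folder layout is easy to script before you start rendering. This small, hypothetical Python sketch creates a Proxy subfolder inside each camera roll folder (it assumes one folder per roll directly under the media root; adjust to your own structure):

```python
import tempfile
from pathlib import Path

def make_proxy_folders(media_root: Path) -> list:
    """Create a Proxy subfolder inside each camera roll folder
    (any immediate subdirectory of the media root)."""
    created = []
    for roll in sorted(p for p in media_root.iterdir() if p.is_dir()):
        proxy = roll / "Proxy"
        proxy.mkdir(exist_ok=True)  # safe to re-run
        created.append(proxy)
    return created

# Hypothetical layout: one folder per camera roll under the media root.
root = Path(tempfile.mkdtemp())
(root / "A001").mkdir()
(root / "B001").mkdir()
print([p.name for p in make_proxy_folders(root)])  # ['Proxy', 'Proxy']
```

Having the subfolders in place first also matters for Resolve, which (as described below) auto-links proxies it finds in a Proxy subfolder next to each roll.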

Dealing with externally-created proxies in different editing applications

Final Cut Pro – There is a setting to switch between Proxy Preferred and Original/Optimized. When you create external proxies, highlight the original camera clips and relink to the proxy media in the Proxy folder(s). Once proxies have been linked, then you can seamlessly switch between the two types of media.

Premiere Pro – There is a similar toggle button accessible in the timeline tools panel. The linking steps are similar to Final Cut Pro. Highlight the originals and then Attach Proxies. Navigate to the Proxy folder(s) and attach that media. The toggle button lets you switch back and forth between media types.

DaVinci Resolve Studio 18 – This update changed the proxy workflow as well as added the Generator application. You can still use the older proxy generation method. If so, then set the encoding parameters and location in your project settings. If you encode using the Blackmagic Proxy Generator app or an external application, then it’s a different process. The advantage to using Blackmagic Proxy Generator is that you can set up watch folders for automatic encoding.

The default location when using the Blackmagic Proxy Generator app or Resolve’s internal routine places a Proxy subfolder inside the folder of each roll of original media. When that condition exists, original clips added into the Media page automatically include links to both the original and the proxy media. In fact, the Proxy subfolders don’t even show up in Resolve’s browser when searching for media. When both types of media are present, the Resolve clip icons reflect that duality.

When you transcode externally with Compressor or another app, then media placed into individual Proxy subfolders will also automatically link inside Resolve. However, if you render to a single, unified Proxy folder, then you’ll need to manually relink the proxy files to the originals in the Media page. Like the other two NLEs, you can do this as a batch function by navigating to the Proxy folder.

I hope these pointers will be a useful guide the next time you decide to use a proxy media workflow.

©2022 Oliver Peters

NLE Tips – Timecode Banner

Every editor has to contend with client changes. The process has become more challenging over the years with fewer clients attending edit sessions in person. This is especially difficult in long-form projects where you often end up rearranging sections to change the flow of the narrative. 

Modern tools make it easier than ever to generate time-stamped transcripts directly from the audio itself. The client can then create “paper cuts” from these transcripts for the editor to follow. Online virtual editing tools exist to edit and export such revisions in an NLE-friendly format. Unfortunately, clients prefer to work with tools they know, so Word often becomes the tool of choice instead of a virtual editor. This poses some editing challenges.

The following is an all-too-familiar scenario. You are editing down an hourlong conversation that was recorded as a linear discussion. You’ve edited the first pass (version 1) and created an AI-based, speech-to-text transcript from the dialogue track. This includes timecode stamps and speaker identification for the client. (Premiere Pro is an excellent tool to use.)

The client sends back a paper cut in the form of a Word document with recommended trims, sections to delete, and rearranged paragraphs that change the flow of the conversation. The printed time stamps stay associated with each paragraph, which enables you to find the source clips within the version 1 timeline. However, as you move paragraphs around and cut sections, these time stamps are no longer a valid reference. The sequence times have now changed with your edits.

The solution is simple. First, create a movie file with running timecode on black. The timecode format and start time should match that of the sequence. You may want to create several of these assets at different frame rates and store them for future use. For instance, a lot of my sequences are cut at 23.98fps with a starting timecode of 00:00:00:00. I created a ProRes Proxy “timecode banner” file that’s over an hour long, which is stored in a folder along with other useful assets, like countdowns, tone, color bars, etc.
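If you’re generating your own banner file (with ffmpeg’s drawtext filter, for example), the frame-to-timecode math for non-drop-frame material is straightforward; 23.98 footage counts at a nominal 24 frames per second. A minimal Python sketch:

```python
def frames_to_timecode(frame: int, fps: int = 24) -> str:
    """Non-drop-frame timecode for a given frame count.
    23.98 material is counted at a nominal 24 frames per second."""
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(0))      # 00:00:00:00
print(frames_to_timecode(86423))  # 01:00:00:23 – just past the one-hour mark
```

Drop-frame formats (29.97 DF) skip frame numbers to stay wall-clock accurate, so they need different math; for a 23.98 banner starting at 00:00:00:00, the non-drop version above is all you need.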

Once you receive the client’s Word document, dupe the version 1 sequence to create a version 2 sequence. Import the timecode banner file into the project and drop it onto the topmost track of version 2. Crop the asset so you only see timecode over the rest of the picture. Since this is a rendered media asset and not a dynamic timecode plug-in applied to an adjustment layer, the numbers stay locked when you move the clip around.

As you navigate to each point in the edited transcript to move or remove sections, cut (“blade”) across all tracks to isolate those sections. Now rearrange as needed. The timecode banner clip will move with those sections, which will allow you to stay in tune with the client’s time stamps as listed on the transcript.

When done, you can compare the new version 2 sequence with the transcript and know that all the changes you made actually match the document. Then delete the timecode banner and get ready for the next round.

©2022 Oliver Peters

NLE Tips – Premiere Pro Multicam

The best way to edit interviews with more than one camera is to use your edit software’s multicam function. The Adobe Premiere Pro version works quite well. I’ve written about it before, but there are differing multicam workflows depending on the specific production situation. Some editors prefer to work with cameras stacked on tracks, but that’s a very inefficient way of working. In this post, I’m going to look at a slightly different way of using Premiere Pro with multicam clips.

I like to work in the timeline more than the browser/bin. Typically an interview involves longer takes and fewer clips, so it’s easy to organize on the timeline and that’s how I build my multicam clips. Here is a proven workflow in a few simple steps.

Step 1 – String out your clips sequentially onto the timeline – all of A-cam, then all of B-cam, then C-cam, and so on. You will usually have the same number of clips for each camera, but on occasion there will be some false starts. Remove those from the timeline.

Step 2 – Move all of the B-cam clips to V2 and the audio onto lower tracks so that they are all below the A-cam tracks. Move all of the C-cam clips to V3 and the audio onto lower tracks so that they are all below the B-cam tracks. Repeat this procedure for each camera.

Step 3 – Slide the B, C, etc camera clips for take 1 so they overlap with the A-camera clip. Repeat for take 2, take 3, and so on.

Step 4 – Highlight all of the clips for take 1, right-click and select Synchronize. There are several ways to sync, but if you recorded good reference audio onto all cameras (always do this), then synchronizing by the audio waveforms is relatively foolproof. Once the analysis is complete, Premiere will automatically realign the take 1 clips to be in sync with each other. Repeat the step for each take. This method is ideal when there’s mismatched timecode or when no slate or common sync marker (like a clap) was used.
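Waveform sync is conceptually a cross-correlation: slide one recording against the other and find the offset where they line up best. This Python sketch with numpy illustrates the idea on synthetic audio (it is not Adobe’s implementation, which is far more robust against noise and drift):

```python
import numpy as np

def sync_offset(reference: np.ndarray, other: np.ndarray) -> int:
    """Sample offset of `other` relative to `reference`, found at the
    peak of the cross-correlation of the two waveforms."""
    corr = np.correlate(other, reference, mode="full")
    return int(np.argmax(corr) - (len(reference) - 1))

rng = np.random.default_rng(0)
ref = rng.standard_normal(4800)                   # reference audio
delayed = np.concatenate([np.zeros(240), ref])    # other camera rolls 240 samples late
print(sync_offset(ref, delayed))  # 240
```

At 48 kHz, a 240-sample offset is 5 ms, far below a single frame, which is why audio sync usually lands clips more accurately than a hand-marked slate.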

Step 5 – Usually the A-camera will have the high-quality audio for your mix. However, if an external audio recorder was used for double-system sound, then the audio clips should have been part of the same syncing procedure in steps 1-4. In any case, delete all extra tracks other than your high-quality audio. In a two-person interview, it’s common to have a mix of both mics recorded onto A1 and A2 of the camera or sound recorder and then each isolated mic on A3 and A4. Normally I will keep all four channels, but disable A1 and A2, since my intention is to remix the interview using the isolated mics. In the case of some cameras, like certain Sony models, I might have eight tracks from the A-cam and only the first four have anything on them. Remove the empty channels. The point is to de-clutter the timeline.

Step 6 – Next, trim the ends of each take across all clips. Then close the gaps between all takes.

Step 7 – Before going any further, do any touch-up that may be necessary to the color in order to match the cameras. In a controlled interview, the same setting should theoretically apply to each take for each camera, but that’s never a given. You are doing an initial color correction pass at this stage to match cameras as closely as possible. This is easy if you have the same model camera, but trickier if different brands were used. I recently edited a set of interviews where a GoPro was used as the C-camera. In addition to matching color, I also had to punch in slightly on the GoPro and rotate the image a few degrees in order to clean up the wide-angle appearance and the fact that the camera wasn’t leveled well during the shoot.

Step 8 – Make sure all video tracks are enabled/shown, highlight all the video clips (not audio), and nest them. This will collapse your timeline video clips into a single nested clip. Right-click and Enable Multi-Camera. Then go through and blade the cut point at the beginning of each take (this should match the cuts in your audio). Duplicate that sequence for safe keeping. By doing it this way, I keep the original audio clips and do not place them into a nest. I find working with nested audio rather convoluted, so this approach is more straightforward.

Step 9 – Now you are ready to edit down the interview – trimming down the content and switching/cutting between camera angles of the multicam clip. Any Lumetri correction, effects, or motion tab settings that you applied or altered in Step 7 follow the visible angle. Proceed with the rest of the edit. I normally keep multicam clips in the sequence until the very end to accommodate client changes. For example, trims made to the interview might result in the need to rearrange the camera switching to avoid jump cuts.

Step 10 – Once you are done and the sequence is approved by the client, select all of the multicam clips and flatten them. This leaves you with the original camera clips for only the visible angles. Any image adjustments, effects, and color correction applied to those clips will stick.

©2022 Oliver Peters

Six Premiere Pro Game Changers

When a software developer updates any editing application, users often look for big changes, fancy features, and new functionality. Unfortunately, many little updates that can really change your day-to-day workflow are often overlooked.

Ever since the shift to its Creative Cloud subscription model, Adobe has brought a string of updates to its core audio and video applications. Although there are several that have made big news, the more meaningful changes often seem less than awe inspiring to Adobe’s critics. Let me counter that narrative and point out six features that have truly improved the daily workflow for my Premiere Pro projects.

Auto Reframe Sequence. If you deliver projects for social media outlets, you know that various vertical formats are required. This is truly a pain when starting with content designed for 16×9 horizontal distribution. The Auto Reframe feature in Premiere Pro makes it easy to reformat any sequence for 9×16, 4×5, and 1×1 formats. It takes care of keyframing each shot to follow an area of interest within that shot, such as a person walking.

While other NLEs, like Final Cut Pro, also offer reformatting for vertical aspect ratios, none offer the same degree of automatic control to reposition the clip. It’s not perfect, but it works for most shots. If you don’t like the results on a shot, simply override the existing keyframes and manually reposition the clip. Auto Reframe works best if you start with a flattened, textless file, which brings me to the next feature.
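The geometry behind reframing is worth understanding: for each target aspect ratio, the NLE crops the largest window of that shape out of the source frame and then keyframes where that window sits. A quick Python sketch of the crop math (my own illustration, not Adobe’s code):

```python
def reframe_crop(src_w: int, src_h: int, aspect_w: int, aspect_h: int):
    """Largest crop of a source frame that fits a target aspect ratio.
    Auto Reframe then animates where this window sits in the source."""
    target = aspect_w / aspect_h
    if src_w / src_h > target:               # source wider: crop the sides
        return round(src_h * target), src_h
    return src_w, round(src_w / target)      # otherwise crop top/bottom

print(reframe_crop(3840, 2160, 9, 16))  # (1215, 2160) – a vertical slice of UHD
print(reframe_crop(3840, 2160, 1, 1))   # (2160, 2160)
```

Note how narrow the 9×16 window is relative to a 16×9 source; that’s why the automatic tracking of the area of interest matters so much.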

Scene Edit Detection. This feature is generally used in color correction to automatically determine cuts between shots in a flattened file. The single clip in the sequence is split at each detected cut point. While you can use it for color correction in Premiere Pro, as well, it is also useful when Auto Reframing a sequence for verticals. If you try to apply Auto Reframe to a flattened file, Premiere will attempt to analyze and apply keyframes across the entire sequence since it’s one long clip. With these added splices created by Scene Edit Detection, Premiere can analyze each shot separately within the flattened file.

Auto Transcribe Sequence / Captioning. Modern deliverables take into account the challenges many viewers face. One of these is closed captions, which are vital to hearing-impaired viewers. Captions are also turned on by many viewers with otherwise normal hearing abilities for a variety of reasons. Just a few short years ago, getting interviews transcribed, adding subtitles for foreign languages, or creating closed captions required using an outside service, often at a large cost. 

Adobe’s first move was to add caption and subtitle functions to Premiere Pro, which enabled editors to import, create, and/or edit caption and subtitle text. This text can be exported as a separate sidecar file (such as .srt) or embedded into the video file. In a more recent update, Adobe augmented these features with Auto Transcribe. It’s included as part of your Creative Cloud subscription and there is generally no length limitation for reasonable use. If you have an hourlong interview that needs to be transcribed – no problem. 

Adobe uses cloud-based AI for part of the transcription process, so an internet connection is required. The turnaround time is quite fast and the accuracy is one of the best I’ve encountered. While the language options aren’t as broad as some of the competitors, most common Romance and Asian languages are covered. After the analysis and the speech-to-text process has been completed, that text can be used as a transcription or as captions (closed captions and/or subtitles). The transcription can also be exported as a text file with timecode. That’s handy for producers to create a paper cut for the editor.

Remix. You’ve just cut a six-minute corporate video and now you have to edit a needle drop music cue as a bed. It’s only 2:43, but needs to be extended to fit the 6:00 length and correctly time out to match the ending. You can either do this yourself or let Adobe tackle it for you. Remix came into Premiere Pro from Audition. This feature lets you use Adobe Sensei (their under-the-hood AI technology) to automatically re-edit a music track to a new target length. 

Open the Essential Sound panel, designate the track containing the cue as Music, enable the Duration tab, and select Remix. Set your target length and see what you get. You can customize the number of segments and variations to make the track sound less repetitive if needed. Some tracks have long fade-outs. So you may have to overshoot your target length in order to get the fade to properly coincide with the end of the video. I often still make one manual music edit to get it just right. Nevertheless the Remix feature is a great time-saver that usually gets me 90% of the way there.

Audition. If you pay for a full Creative Cloud subscription, then you benefit from the larger Adobe ecosystem. One of those applications is Audition, Adobe’s digital audio workstation (DAW) software. Audition is often ignored in most DAW roundups, because it doesn’t include many music-specific features, like software instruments and MIDI. Instead, Audition is targeted at general audio production (VO recordings, podcasts, commercials) and audio-for-video post in conjunction with Premiere Pro. Audition is designed around editing and processing a single audio file or for working in a multitrack session. I want to highlight the first method here.

Noise in location recordings is a fact of life for many projects. Record an interview in a working commercial kitchen and there will be a lot of background noise. Premiere Pro includes a capable noise reduction audio filter, which can be augmented by many third-party tools from Accusonus, Crumplepop, and of course, iZotope RX. But if the Premiere Pro filter isn’t good enough, you need look no further than Audition. Export the track(s) from Premiere and open those (or the original files) in Audition.

Select the Noise Reduction/Restoration category under the Effects pulldown menu. First capture a short noise print in a section of the track with only background noise. This “trains” the filter for what is to be removed. Then select Noise Reduction (process). Follow the instructions and trust your own hearing to remove as much noise as possible with the least impact on the dialogue. If the person speaking sounds like they are underwater, then you’ve gone too far. Apply the effect in order to render the processing and then bounce (export) that processed track. Import the new track into Premiere. While this is a two-step process, you aren’t encumbering your computer with any real-time noise reduction filter when using such a pre-processed audio file.

Link Media. OK, I know relinking isn’t new to Premiere Pro and it’s probably not a marquee feature for editors always working with native media. When moving projects from offline to online – creative to finishing editorial – you know that if you cannot properly relink media files, a disaster will ensue.

Media Composer, Final Cut Pro, and Resolve all have relink functions. They work well with application-controlled, optimized media. But at other times, when working with camera-original native files, they might not work at all. I find Premiere Pro works the best of these NLEs when it comes to relinking a wide variety of media files. That’s precisely because the user has a lot of control over the relink criteria in Premiere Pro. It’s not left entirely up to the application.

Premiere Pro expects the media to be in the same relative path on the drive. Let’s say that you move the entire project to a different folder (like from Active Projects to Archived Projects) on your storage system. Navigate to and locate the first missing file and Premiere will find all the rest.

The relinking procedure is also quite forgiving, because various file criteria used to relink can be checked or unchecked. For example, I frequently edit with watermarked temporary music tracks, which are 44.1kHz MP3 files. When the cut is approved and the music is licensed, I download new, non-watermarked versions of that music as 48kHz WAV or AIF files. Premiere Pro easily relinks to the WAV or AIF files instead of the MP3s once I point it in the right direction. All music edits (including internal edits made by Remix) stay as intended and there is no mismatch due to the sample rate change.

These features might not make it into everyone’s Top 10 list, but they are tools generally not found in other NLEs. I use them quite often to speed up the session and remove drudgery from the editing process.

©2022 Oliver Peters