Six Premiere Pro Game Changers

When a software developer updates any editing application, users often look for big changes, fancy features, and new functionality. Unfortunately, many little updates that can really change your day-to-day workflow are often overlooked.

Ever since the shift to its Creative Cloud subscription model, Adobe has brought a string of updates to its core audio and video applications. Although several have made big news, the more meaningful changes often seem less than awe-inspiring to Adobe’s critics. Let me counter that narrative and point out six features that have truly improved the daily workflow for my Premiere Pro projects.

Auto Reframe Sequence. If you deliver projects for social media outlets, you know that various vertical formats are required. This is truly a pain when starting with content designed for 16×9 horizontal distribution. The Auto Reframe feature in Premiere Pro makes it easy to reformat any sequence for 9×16, 4×5, and 1×1 formats. It takes care of keyframing each shot to follow an area of interest within that shot, such as a person walking.
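
To picture what the feature automates, here’s a minimal sketch of the underlying move using ffmpeg, assuming a 1920×1080 source: crop to a 9×16 window and animate the window’s x position so it follows the subject. The file names and the simple linear pan are placeholders; in Premiere, the Sensei analysis computes the reposition path for you.

    import subprocess

    SRC = "horizontal_master.mp4"   # hypothetical 1920x1080 source
    OUT = "vertical_9x16.mp4"

    # A 9x16 window from a 1080-tall frame: 1080 * 9/16 = 607.5, rounded to 608
    crop_w, crop_h = 608, 1080

    # Pan the window 40 px/sec starting at x=300, clamped to the frame edges
    x_expr = f"min(max(300+40*t,0),iw-{crop_w})"

    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-vf", f"crop={crop_w}:{crop_h}:'{x_expr}':0",
        "-c:a", "copy", OUT,
    ], check=True)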

While other NLEs, like Final Cut Pro, also offer reformatting for vertical aspect ratios, none offer the same degree of automatic control to reposition the clip. It’s not perfect, but it works for most shots. If you don’t like the results on a shot, simply override the existing keyframes and manually reposition the clip. Auto Reframe works best if you start with a flattened, textless file, which brings me to the next feature.

Scene Edit Detection. This feature is generally used in color correction to automatically determine cuts between shots in a flattened file. The single clip in the sequence is split at each detected cut point. While you can use it for color correction in Premiere Pro, as well, it is also useful when Auto Reframing a sequence for verticals. If you try to apply Auto Reframe to a flattened file, Premiere will attempt to analyze and apply keyframes across the entire sequence since it’s one long clip. With these added splices created by Scene Edit Detection, Premiere can analyze each shot separately within the flattened file.
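
The underlying task is plain cut-point analysis, and you can experiment with the same idea outside of Premiere. Here’s a sketch using the open-source PySceneDetect library (assuming a pip install of scenedetect with the OpenCV backend; the file name is a placeholder):

    from scenedetect import detect, ContentDetector

    # Returns a list of (start, end) FrameTimecode pairs, one per detected shot
    scenes = detect("flattened_master.mov", ContentDetector())

    for i, (start, end) in enumerate(scenes, 1):
        print(f"Shot {i}: {start.get_timecode()} to {end.get_timecode()}")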

Auto Transcribe Sequence / Captioning. Modern deliverables take into account the challenges many viewers face. One answer is closed captions, which are vital for hearing-impaired viewers. Captions are also turned on by many viewers with otherwise normal hearing for a variety of reasons. Just a few short years ago, getting interviews transcribed, adding subtitles for foreign languages, or creating closed captions required using an outside service, often at a large cost.

Adobe’s first move was to add caption and subtitle functions to Premiere Pro, which enabled editors to import, create, and/or edit caption and subtitle text. This text can be exported as a separate sidecar file (such as .srt) or embedded into the video file. In a more recent update, Adobe augmented these features with Auto Transcribe. It’s included as part of your Creative Cloud subscription and there is generally no length limitation for reasonable use. If you have an hourlong interview that needs to be transcribed – no problem. 

Adobe uses cloud-based AI for part of the transcription process, so an internet connection is required. The turnaround time is quite fast and the accuracy is among the best I’ve encountered. While the language options aren’t as broad as some competitors’, most common Romance and Asian languages are covered. After the analysis and the speech-to-text process have been completed, that text can be used as a transcription or as captions (closed captions and/or subtitles). The transcription can also be exported as a text file with timecode. That’s handy for producers to create a paper cut for the editor.
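
As a concrete reference, the .srt sidecar format mentioned above is simple enough to generate yourself. A sketch that writes one from a hypothetical list of timed text segments:

    def srt_time(sec: float) -> str:
        # SRT timestamps look like 00:01:02,500
        ms_total = round(sec * 1000)
        h, rem = divmod(ms_total, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    # (start seconds, end seconds, caption text) with made-up sample data
    transcript = [
        (0.0, 2.5, "Thanks for having me."),
        (2.5, 6.0, "We started the company in 2012."),
    ]

    with open("interview.srt", "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(transcript, 1):
            f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")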

Remix. You’ve just cut a six-minute corporate video and now you have to edit a needle drop music cue as a bed. It’s only 2:43, but needs to be extended to fit the 6:00 length and correctly time out to match the ending. You can either do this yourself or let Adobe tackle it for you. Remix came into Premiere Pro from Audition. This feature lets you use Adobe Sensei (their under-the-hood AI technology) to automatically re-edit a music track to a new target length. 

Open the Essential Sound panel, designate the track containing the cue as Music, enable the Duration tab, and select Remix. Set your target length and see what you get. You can customize the number of segments and variations to make the track sound less repetitive if needed. Some tracks have long fade-outs, so you may have to overshoot your target length in order to get the fade to properly coincide with the end of the video. I often still make one manual music edit to get it just right. Nevertheless, the Remix feature is a great time-saver that usually gets me 90% of the way there.
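
The length-fitting problem Remix solves can be pictured with some simple arithmetic. The real feature analyzes beats and musical structure, but a toy sketch (with made-up section lengths) shows why overshooting the target sometimes pays off:

    intro, loop, outro = 12.0, 16.5, 27.0   # hypothetical section lengths in seconds
    target = 360.0                          # the 6:00 video

    # How many repeats of the loopable middle section land closest to the target?
    repeats = round((target - intro - outro) / loop)
    total = intro + repeats * loop + outro
    print(f"{repeats} repeats -> {total:.1f}s against a {target:.0f}s target")
    # 19 repeats -> 352.5s; 20 repeats -> 369.0s. With a long fade-out,
    # the overshoot may be the version that actually matches the ending.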

Audition. If you pay for a full Creative Cloud subscription, then you benefit from the larger Adobe ecosystem. One of those applications is Audition, Adobe’s digital audio workstation (DAW) software. Audition is often ignored in most DAW roundups, because it doesn’t include many music-specific features, like software instruments and MIDI. Instead, Audition is targeted at general audio production (VO recordings, podcasts, commercials) and audio-for-video post in conjunction with Premiere Pro. Audition is designed around either editing and processing a single audio file or working in a multitrack session. I want to highlight the first method here.

Noise in location recordings is a fact of life for many projects. Record an interview in a working commercial kitchen and there will be a lot of background noise. Premiere Pro includes a capable noise reduction audio filter, which can be augmented by many third-party tools from Accusonus, Crumplepop, and of course, iZotope RX. But if the Premiere Pro filter isn’t good enough, you need look no further than Audition. Export the track(s) from Premiere and open those (or the original files) in Audition.

Select the Noise Reduction/Restoration category under the Effects pulldown menu. First capture a short noise print in a section of the track with only background noise. This “trains” the filter for what is to be removed. Then select Noise Reduction (process). Follow the instructions and trust your own hearing to remove as much noise as possible with the least impact on the dialogue. If the person speaking sounds like they are underwater, then you’ve gone too far. Apply the effect in order to render the processing and then bounce (export) that processed track. Import the new track into Premiere. While this is a two-step process, you aren’t encumbering your computer with any real-time noise reduction filter when using such a pre-processed audio file.
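
If you’d like to see the same noise-print concept outside of Audition, the open-source SoX tool works the same way: train on room tone first, then apply the reduction. A sketch, assuming SoX is installed and with placeholder file names:

    import subprocess

    # 1. Build a noise profile from a clip that contains only background noise
    subprocess.run(["sox", "room_tone.wav", "-n", "noiseprof", "noise.prof"],
                   check=True)

    # 2. Apply gentle reduction; push the last value much higher and the
    #    voice starts to sound underwater, as described above
    subprocess.run(["sox", "interview.wav", "interview_nr.wav",
                    "noisered", "noise.prof", "0.2"], check=True)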

Link Media. OK, I know relinking isn’t new to Premiere Pro and it’s probably not a marquee feature for editors always working with native media. When moving projects from offline to online – creative to finishing editorial – you know that if you cannot properly relink media files, a disaster will ensue.

Media Composer, Final Cut Pro, and Resolve all have relink functions. They work well with application-controlled, optimized media. But at other times, when working with camera-original native files, they might not work at all. I find that Premiere Pro works the best of these NLEs when it comes to relinking a wide variety of media files. That’s precisely because the user has a lot of control over the relink criteria in Premiere Pro. It’s not left entirely up to the application.

Premiere Pro expects the media to be in the same relative path on the drive. Let’s say that you move the entire project to a different folder (like from Active Projects to Archived Projects) on your storage system. Navigate to and locate the first missing file and Premiere will find all the rest.

The relinking procedure is also quite forgiving, because the various file criteria used to relink can be checked or unchecked. For example, I frequently edit with watermarked temporary music tracks, which are 44.1kHz MP3 files. When the cut is approved and the music is licensed, I download new, non-watermarked versions of that music as 48kHz WAV or AIF files. Premiere Pro easily relinks to the WAV or AIF files instead of the MP3s once I point it in the right direction. All music edits (including internal edits made by Remix) stay as intended and there is no mismatch due to the sample rate change.
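
The MP3-to-WAV swap works because the file name, minus the extension, is enough to identify the clip once the other criteria are relaxed. A sketch of that matching logic with hypothetical paths:

    from pathlib import Path

    def find_replacement(missing_file: str, search_dir: str) -> Path | None:
        # Match by name while ignoring the extension (and thus the format change)
        stem = Path(missing_file).stem
        for candidate in sorted(Path(search_dir).rglob("*")):
            if candidate.is_file() and candidate.stem == stem:
                return candidate
        return None

    print(find_replacement("cue_hopeful_v2.mp3", "/Volumes/Media/LicensedMusic"))
    # e.g. /Volumes/Media/LicensedMusic/cue_hopeful_v2.wav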

These features might not make it into everyone’s Top 10 list, but they are tools generally not found in other NLEs. I use them quite often to speed up the session and remove drudgery from the editing process.

©2022 Oliver Peters

Frame.io Brings FiLMiC Pro to the Cloud

It’s not news that Frame.io has been pioneering camera-to-cloud (C2C) workflows. However, one of the newsworthy integrations announced last week was the addition of C2C capabilities for iPhone and Android users with the Filmic Pro camera application. If you’ve kept the app current, the update has already popped up in your Filmic Pro settings. Frame’s C2C feature requires Filmic’s Cinematographer Kit (an in-app purchase) and a Frame.io Pro or Adobe Creative Cloud account.

Professional filming with iPhones has become common in many market sectors for both primary and secondary videography. The Filmic Pro/C2C workflow can prove worthwhile when fast turnaround and remote access become factors in your production.

Understanding the Filmic Pro C2C integration

Filmic Pro’s C2C integration is a little different than Frame’s other camera-to-cloud workflows, which are tied to Teradek devices. In those situations, the live video stream from the camera is simultaneously encoded into a low-res proxy file by the Teradek device. High-res OCF (original camera files) media is stored on the camera card and the proxies on the Teradek. The proxies are uploaded to Frame. There is some latency in triggering the proxy generation, so start and end times do not match perfectly between the OCF media and the proxies. Accurate relinks between file versions are made possible by common timecode.

An Android phone or iPhone does not require any extra hardware to handle the proxy creation or uploading. Filmic Pro encodes the proxy file after the recording is stopped, not simultaneously. Both high and low-res files are stored on the phone within Filmic’s clip library and have identical lengths and start/stop times. Filmic Pro won’t add a timecode track in this mode, so all files start from zero, albeit with unique file names. If you are shooting with multiple iPhones or double-system sound, then be sure to slate the clips so the editor can sync the files.
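
Since the proxies and originals share identical lengths, start points, and unique file names, a conform in this workflow reduces to file-name matching. A sketch with hypothetical folder names:

    from pathlib import Path

    proxies = {p.stem: p for p in Path("frameio_proxies").glob("*.mov")}
    originals = {p.stem: p for p in Path("phone_originals").glob("*.mov")}

    for stem, proxy in sorted(proxies.items()):
        match = originals.get(stem)
        print(f"{proxy.name} -> {match.name if match else 'NO MATCH, check the phone'}")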

Testing the workflow

I have an iPhone SE and the software, so it was time to run some workflow tests in a hypothetical scenario. Here’s the premise – three members of the production team (in reality, me, of course), all located in different cities. The videographer is on site. The producer is in another city and he’s going to turn around a quick cut for story content. The editor is in yet a third location and he’ll conform the high-res camera files, add effects, graphics, color correction, and finish the mix to deliver the final product.

Click here to read the rest of the article at Pro Video Coalition

Click here for a more in-depth article about mobile filmmaking with FiLMiC Pro

©2022 Oliver Peters

Adobe’s Frame Rollout

Adobe acquired Frame.io last October. The latest Adobe Creative Cloud application updates showcase the first formal integration of Frame.io as a product within the Creative Cloud ecosystem. Frame.io had already developed a Premiere Pro integration using Adobe’s extensions architecture; however, the latest versions of Premiere Pro and After Effects add an integrated interface panel called Review with Frame.io.

Now your individual Adobe Creative Cloud subscription includes a Frame.io account at no additional charge. This includes 100GB of cloud storage (separate from existing Creative Cloud storage) for up to five projects, use by two collaborators, and unlimited access for reviewers. If you need more storage or to add more collaborators, then you can upgrade to a larger Frame.io plan, but at additional cost.

Adobe Creative Cloud Team and Enterprise accounts don’t fall under this plan and those admins will need to consult Adobe or Frame.io for a plan that best meets their needs. In other words, if you are a production company paying for an Adobe Team account with multiple users on the account, you don’t get 100GB of “free” Frame.io storage for each user. This offering is primarily designed for individual Adobe Creative Cloud subscribers.

Something to know before you start

There’s a gotcha for some existing Frame.io customers. You activate your new Adobe CC Frame.io service by logging in with the same e-mail and password as used for your Adobe ID. Let’s say you work freelance at a facility and are a collaborator on their Frame.io Team account. In that case, you might be using a personal email address to log into Frame.io. However, if that email is the same as used for your personal Adobe ID, then Frame.io does not know how to differentiate between the two.

To rectify this you need to use a different email for one of these two log-ins. This is generally a minor issue, since most people have more than one email address that they use. In my own case, I needed to change my Adobe ID email, which was a relatively quick procedure. This allows me to separately access either of the two Frame.io accounts as a collaborator, based on which email I log in with.

One confusing thing I encountered was that the account starts as a 30-day trial of a Frame.io Team account, so it looks like you are going to get billed extra after the trial ends. This is not the case. I think it’s a mistake for Adobe and Frame.io to open with an upsell to the paid account. Fortunately, there’s no need to enter payment information up front. I wish this was clearer in the marketing details, and hopefully Adobe will correct it after the initial rollout. At the end of the 30-day trial, you will be asked whether to pay or end the trial. If you opt to end the trial, then the account reverts to the free plan, which is the one included with your Adobe Creative Cloud subscription.

Getting started

Open the Review with Frame.io panel in Premiere Pro or After Effects and sign in using your Adobe ID. This will open your default browser and send you to the Frame.io website to complete the sign-in. As long as you stay signed in, you can access Frame.io either in your web browser or within the panel. If you sign out, then the next time you’ll need to sign in again using the Adobe ID.

I won’t go into how Frame.io itself works, since there are plenty of tutorials. This integration doesn’t change any of the operation. The Frame.io panel works like the previous extensions panel. A clip with reviewer comments can be synced to your Premiere Pro timeline for easy changes. Or you can simply work from the web portal and ignore the panel entirely. 100GB is plenty if your intent is to use Frame.io for low-resolution review files. However, if your intention is a larger, more complex workflow, then you may need to upgrade your Frame.io account after all.

Enter C2C

The bigger picture is that Frame.io is enthusiastically pushing its camera-to-cloud (C2C) workflow. I’m not really a big believer in this concept, but I know plenty of companies are going to announce more cloud and remote services at NAB. For many reasons, I don’t believe that all of our media will be in the cloud in a decade or two. However, I think Adobe does. In my opinion, it’s not a particularly good goal for users or the planet. But, I digress. In today’s world, what C2C offers in conjunction with the Premiere Pro integration is a Dropbox-style experience.

Let’s say your videographer is recording a corporate CEO interview in Los Angeles. The company’s PR rep is in New York and the editor in Atlanta. And there’s a very short turnaround schedule. In this basic scenario, both the videographer and editor are collaborators on a Frame.io project. While the interview is being recorded, the feed is being uploaded to Frame.io in near real-time. This requires some hardware on the camera side or it could be done by someone on set right after the recording ends. Once it’s in Frame.io, the PR rep in NYC can access and review the takes. The editor in Atlanta also sees the footage appear in the Frame.io panel within Premiere Pro. Files can be downloaded from the panel to the editor’s drives and the edit can start right away.

Given most standard internet speeds today and the 100GB bucket, this workflow makes sense if you are uploading smaller camera proxy files. Some proxies can actually be good enough to master with – especially in fast turnaround situations. In other scenarios, the proxies might be used to start the edit and later replaced with the high-res camera originals, once received from the shoot.
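
Some back-of-the-envelope math backs this up. All rates below are assumptions; plug in your own proxy bitrate and connection speed:

    proxy_mbps = 10        # assumed H.264 proxy bitrate
    upload_mbps = 20       # assumed upload speed on location

    gb_per_hour = proxy_mbps * 3600 / 8 / 1000          # 4.5 GB per shooting hour
    upload_min = gb_per_hour * 8000 / upload_mbps / 60  # 30 minutes to push it up
    hours_in_bucket = 100 / gb_per_hour                 # ~22 hours fits in 100GB

    print(f"{gb_per_hour:.1f} GB/hr, {upload_min:.0f} min upload, "
          f"{hours_in_bucket:.0f} hrs of proxies per 100GB")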

I feel that such situations are a lot fewer than the marketers want you to believe. Moving high-res files over the internet is never fast. FedEx often still offers the better option. So unless you really do need to get started right away, just wait for the media to arrive a day or so later. However, C2C for the purpose of an out-of-town producer reviewing takes remotely – especially in light of workflow changes caused by COVID over the past couple of years – has gained steam.

Frame.io is clear that becoming an Adobe company doesn’t change its dedication to other workflows and other applications, such as Final Cut Pro. New announcements include native FilmLight Baselight integration, an app for Apple TV, and C2C partnerships with FiLMiC Pro.

If you are a current Frame.io customer without any Adobe subscription – no problem. Nothing changes for you. I’ve been using Frame.io since it launched and have been happy with the service. There are occasional glitches, but no worse than any other internet service, including your regular e-mail provider. Better yet, clients love the process. It’s not perfect, but it is one of the better review-and-approval services on the market. If your Adobe subscription is your first introduction to Frame.io, then you are bound to see your daily workflow enhanced.

©2022 Oliver Peters

Five Adobe Workflow Tips

Subscribers to Adobe Creative Cloud have a whole suite of creative tools at their fingertips. I believe most users often overlook some of the less promoted features. Here are five quick tips for your workflow.

Camera Raw. Photographers know that the Adobe Camera Raw module is used to process camera raw images, such as .cr2 files. It’s a “develop” module that opens first when you import a camera raw file into Photoshop. It’s also used in Bridge and Lightroom. Many people use Photoshop for photo enhancement – working with the various filters and adjustment layer tools available. What may be overlooked is that you can use the Camera Raw Filter in Photoshop on any photo, even if the file is not raw, such as a JPEG or TIFF.

Select the layer containing the image and choose the Camera Raw Filter. This opens that image into this separate “develop” module. There you have all the photo and color enhancement tools in a single, comprehensive toolkit – the same as in Lightroom. Once you’re done and close the Camera Raw Filter, those adjustments are now “baked” into the image on that layer.

Remix. Audition is a powerful digital audio workstation application that many use in conjunction with Premiere Pro or separately for audio productions. One feature it has over Premiere Pro is the ability to use AI to automatically edit the length of music tracks. Let’s say you have a music track that’s 2:47 in length, but you want a :60 version to underscore a TV commercial. Yes, you could manually edit it, but Audition Remix turns this into an “automagic” task. This is especially useful for projects where you don’t need to have certain parts of the song time to specific visuals.

Open Audition, create a multitrack session, and place the music selection on any track in the timeline. Right-click the selection and enable Remix. Within the Remix dialogue box, set the target duration and parameters – for example, short versus long edits. Audition will calculate the number and location of edit points to seamlessly shorten the track to the approximate desired length.

Audition attempts to create edits at points that are musically logical. You won’t necessarily get an exact duration, since the value you entered is only a target. This is even more true with tracks that have a long musical fade-out. A little experimentation may be needed. For example, a target value of :59 will often yield significantly different results than a target of 1:02, thanks to the recalculation. Audition’s Remix isn’t perfect, but it will get you close enough that only minimal additional work is required. Once you are happy, bounce out the shortened track to bring into Premiere Pro.

Photoshop Batch Processing. If you want to add interesting stylistic looks to a clip, then effects filters in Premiere Pro and/or After Effects usually fit the bill. Or you can go with expensive third-party options like Continuum Complete or Sapphire from Boris FX. However, don’t forget Photoshop, which includes many stylized looks not offered in either of Adobe’s video applications, such as specific paint and brush filters. But, how do you apply those to a video clip?

The first step is to turn your clip into an image sequence using Adobe Media Encoder. Then open a representative frame in Photoshop to define the look. Create a Photoshop action using the filters and settings you desire. Save the action, but not the image. Then create a batch function to apply that stored action to the clean frames within the image sequence folder. The batch operation will automatically open each image, apply the effects, and save the stylized results to a new destination folder.
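
For the curious, the batch step can also be scripted outside of Photoshop. Here’s a sketch using Python’s Pillow library in place of a recorded action (assuming a pip install of Pillow; the folder names and the specific filter stand in for whatever look you’ve designed):

    from pathlib import Path
    from PIL import Image, ImageFilter

    src = Path("clip_frames")            # image sequence from Media Encoder
    dst = Path("clip_frames_stylized")
    dst.mkdir(exist_ok=True)

    for frame in sorted(src.glob("*.png")):
        im = Image.open(frame).convert("RGB")
        im = im.filter(ImageFilter.ModeFilter(9))   # a posterized, painterly look
        im.save(dst / frame.name)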

Open that new image sequence using any app that supports image sequences (including QuickTime) and save it as a ProRes (or other) movie file. Stylized effects, like oil paint, are applied to individual frames and will vary with the texture and lighting of each frame; therefore, the effect will appear to animate in the stitched movie.

After Effects for broadcast deliverables. After Effects is the proverbial Swiss Army knife for editors and designers. It’s my preferred conversion tool when I have 24p masters that need to be delivered as 60i broadcast files.

Import a 23.98 master and place it into a new composition. Scale, if needed (UHD to HD, for instance). Send to the Render Queue. Set the frame rate to 29.97, field render to Upper (for HD), and enable pulldown (any whole/split frame cadence is usually OK). Turn off Motion Blur and Frame Blending. Render for a proper interlaced broadcast deliverable file.
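
For reference, ffmpeg’s telecine filter can rough out the same conversion from the command line. This is a sketch, not a vetted broadcast recipe, and the encoder flags are assumptions; the Render Queue remains the more controllable route:

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master_2398.mov",
        "-vf", "telecine=first_field=top:pattern=23",  # 2:3 pulldown to 29.97i
        "-c:v", "prores_ks", "-flags", "+ildct+ilme",  # interlaced-aware encode
        "-c:a", "copy", "master_60i.mov",
    ], check=True)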

Photoshop motion graphics. One oft-ignored (or forgotten) feature of Photoshop is that you can do layer-based video animation and editing within it. Essentially there’s a very rudimentary version of After Effects inside Photoshop. While you probably wouldn’t choose it over After Effects or Premiere Pro for general video work, Photoshop does have value in creating animated lower thirds and other titles.

Photoshop provides much better text and graphic style options than Premiere Pro. The files are more lightweight than an After Effects comp on your Premiere timeline – or than rendered, animated ProRes 4444 movies. Since it’s still a Photoshop file (albeit a special version), the “edit in original” command opens the file in Photoshop for easy revisions. Let’s say you are working on a show that has 100 lower thirds that slide in and fade out. These can easily be prepped for the editor by the graphics department in Photoshop – no After Effects skills required.

Create a new file in Photoshop, turn on the timeline window, and add a new blank video layer. Add a still onto a layer for positioning reference, delete the video layer, and extend the layers and timeline to the desired length. Now build your text and graphic layers. Keyframe changes to opacity, position, and other settings for animation. Delete the reference image and save the file. This is now a keyable Photoshop file with embedded animation properties.

Import the Photoshop file into Premiere with Merged Layers. Add to your timeline. The style in Premiere should match the look created in Photoshop. It will animate based on the keyframe settings created in Photoshop.

©2021 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – often more important than – the picture side. When story context is based on dialogue, the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of rerecording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily in mono, coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also sound a bit muffled. There may also be occasional clothes rustle from the clothing rubbing against the mic as the speaker moves around. For these reasons I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Under the best of circumstances these will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.
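
Audition’s loudness matching is one route; ffmpeg’s EBU R128 loudnorm filter is a scriptable analogue. A sketch, assuming ffmpeg is installed (the -16 LUFS target is a common web-delivery value, while broadcast specs typically call for -23 or -24):

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "interview_nr.wav",
        "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",   # integrated level, true peak, range
        "interview_nr_leveled.wav",
    ], check=True)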

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and Crumplepop to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, Crumplepop, others). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.
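
To make the chain concrete, here is one possible version sketched as an ffmpeg filter string. Every frequency and threshold below is an assumed starting point to be tuned by ear, not a spec, and the order simply mirrors the description above:

    import subprocess

    chain = ",".join([
        "highpass=f=80",        # clear out plosive rumble below the voice
        "deesser",              # soften sibilant "s" sounds
        "acompressor=threshold=0.125:ratio=2:attack=20:release=250",  # light leveling
        "equalizer=f=4000:width_type=q:width=1.5:g=2",  # small presence lift for clarity
        "alimiter=limit=0.9",   # catch the remaining peaks
    ])

    subprocess.run(["ffmpeg", "-i", "dialogue.wav", "-af", chain,
                    "dialogue_processed.wav"], check=True)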

Don’t get locked into the specific order of these effects. What I have presented in this post isn’t necessarily gospel for the hierarchical order in which to use them. For example, EQ and level adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters