Regardless of whether you own or work for a small editorial company or a large studio cranking out blockbusters, media, and how you manage it, is the circulatory system of your operation. No matter the size, many post operations share the same concerns, although they may approach them with solutions that differ vastly from company to company.
Last year I wrote on this topic for postPerspective and interviewed key players at Molinare and Republic. This year I’ve revisited the topic, taking a look at top Midwestern spot shops Drive Thru and Utopic, as well as Marvel Studios. In addition, I’ve also broken down the “best practices” that Netflix suggests to its production partners.
Here are links to these articles at postPerspective:
Every editor has to contend with client changes. The process has become more challenging over the years with fewer clients attending edit sessions in person. This is especially difficult in long-form projects where you often end up rearranging sections to change the flow of the narrative.
The following is an all-too-familiar scenario. You are editing down an hourlong conversation that was recorded as a linear discussion. You’ve edited the first pass (version 1) and created an AI-based, speech-to-text transcript from the dialogue track. This includes timecode stamps and speaker identification for the client. (Premiere Pro is an excellent tool to use.)
The client sends back a paper cut in the form of a Word document with recommended trims, sections to delete, and rearranged paragraphs that change the flow of the conversation. The printed time stamps stay associated with each paragraph, which enables you to find the source clips within the version 1 timeline. However, as you move paragraphs around and cut sections, these time stamps are no longer a valid reference. The sequence times have now changed with your edits.
The solution is simple. First, create a movie file with running timecode on black. The timecode format and start time should match that of the sequence. You may want to create several of these assets at different frame rates and store them for future use. For instance, a lot of my sequences are cut at 23.98fps with a starting timecode of 00:00:00:00. I created a ProRes Proxy “timecode banner” file that’s over an hour long, which is stored in a folder along with other useful assets, like countdowns, tone, color bars, etc.
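Cross-referencing the transcript's printed time stamps against your sequence is just frame arithmetic. As a minimal sketch (the helper names below are mine, not part of any NLE or API), note that 23.98fps material conventionally uses non-drop-frame timecode, which counts 24 frames per labeled second:

```python
def tc_to_frames(tc: str, fps: int = 24) -> int:
    """Convert non-drop-frame timecode (HH:MM:SS:FF) to a frame count.
    23.98fps material is labeled at 24 frames per timecode second."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames: int, fps: int = 24) -> str:
    """Convert a frame count back to non-drop-frame timecode."""
    f = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s // 60 % 60:02d}:{s % 60:02d}:{f:02d}"
```

With helpers like these you could, for instance, sanity-check that a paragraph stamped 00:12:30:00 in the transcript really sits 18,000 frames into the version 1 sequence.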
Once you receive the client’s Word document, dupe the version 1 sequence to create a version 2 sequence. Import the timecode banner file into the project and drop it onto the topmost track of version 2. Crop the asset so you only see timecode over the rest of the picture. Since this is a rendered media asset and not a dynamic timecode plug-in applied to an adjustment layer, the numbers stay locked when you move the clip around.
As you navigate to each point in the edited transcript to move or remove sections, cut (“blade”) across all tracks to isolate those sections. Now rearrange as needed. The timecode banner clip moves along with those sections, allowing you to stay in sync with the client’s time stamps as listed on the transcript.
When done, you can compare the new version 2 sequence with the transcript and know that all the changes you made actually match the document. Then delete the timecode banner and get ready for the next round.
I like to work in the timeline more than the browser/bin. Typically an interview involves longer takes and fewer clips, so it’s easy to organize on the timeline and that’s how I build my multicam clips. Here is a proven workflow in a few simple steps.
Step 1 – String out your clips sequentially onto the timeline – all of A-cam, then all of B-cam, then C-cam, and so on. You will usually have the same number of clips for each camera, but on occasion there will be some false starts. Remove those from the timeline.
Step 2 – Move all of the B-cam clips to V2 and the audio onto lower tracks so that they are all below the A-cam tracks. Move all of the C-cam clips to V3 and the audio onto lower tracks so that they are all below the B-cam tracks. Repeat this procedure for each camera.
Step 3 – Slide the B-cam, C-cam, etc. clips for take 1 so they overlap with the A-cam clip. Repeat for take 2, take 3, and so on.
Step 4 – Highlight all of the clips for take 1, right-click and select Synchronize. There are several ways to sync, but if you recorded good reference audio onto all cameras (always do this), then synchronizing by the audio waveforms is relatively foolproof. Once the analysis is complete, Premiere will automatically realign the take 1 clips to be in sync with each other. Repeat the step for each take. This method is ideal when there’s mismatched timecode or when no slate or common sync marker (like a clap) was used.
Step 5 – Usually the A-camera will have the high-quality audio for your mix. However, if an external audio recorder was used for double-system sound, then the audio clips should have been part of the same syncing procedure in steps 1-4. In any case, delete all extra tracks other than your high-quality audio. In a two-person interview, it’s common to have a mix of both mics recorded onto A1 and A2 of the camera or sound recorder and then each isolated mic on A3 and A4. Normally I will keep all four channels, but disable A1 and A2, since my intention is to remix the interview using the isolated mics. In the case of some cameras, like certain Sony models, I might have eight tracks from the A-cam and only the first four have anything on them. Remove the empty channels. The point is to de-clutter the timeline.
Step 6 – Next, trim the ends of each take across all clips. Then close the gaps between all takes.
Step 7 – Before going any further, do any touch-up that may be necessary to the color in order to match the cameras. In a controlled interview, the same settings should theoretically apply to each take for each camera, but that’s never a given. You are doing an initial color correction pass at this stage to match the cameras as closely as possible. This is easy if you have the same model of camera, but trickier if different brands were used. I recently edited a set of interviews where a GoPro was used as the C-camera. In addition to matching color, I also had to punch in slightly on the GoPro and rotate the image a few degrees in order to clean up the wide-angle appearance and the fact that the camera wasn’t leveled well during the shoot.
Step 8 – Make sure all video tracks are enabled/shown, highlight all the video clips (not audio), and nest them. This will collapse your timeline video clips into a single nested clip. Right-click and select Enable Multi-Camera. Then go through and blade the cut point at the beginning of each take (this should match the cuts in your audio). Duplicate that sequence for safekeeping. By doing it this way, I keep the original audio clips and do not place them into a nest. I find working with nested audio rather convoluted, so this approach is more straightforward.
Step 9 – Now you are ready to edit down the interview – trimming down the content and switching/cutting between camera angles of the multicam clip. Any Lumetri correction, effects, or motion tab settings that you applied or altered in Step 7 follow the visible angle. Proceed with the rest of the edit. I normally keep multicam clips in the sequence until the very end to accommodate client changes. For example, trims made to the interview might result in the need to re-arrange the camera switching to avoid jump cuts.
Step 10 – Once you are done and the sequence is approved by the client, select all of the multicam clips and flatten them. This leaves you with the original camera clips for only the visible angles. Any image adjustments, effects, and color correction applied to those clips will stick.
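Under the hood, the audio-waveform sync in Step 4 amounts to cross-correlation: slide one track against the reference and keep the offset where the waveforms agree best. Here is a toy, brute-force Python sketch of the idea (the names and sample data are mine; real NLEs use far faster FFT-based correlation on long recordings):

```python
def sync_offset(ref, other, max_lag):
    """Return the lag (in samples) at which `other` best lines up with
    `ref`. A positive result means the same content arrives `lag`
    samples later in `other` than in `ref`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate ref[i] against other[i + lag] where both exist.
        score = sum(ref[i] * other[i + lag]
                    for i in range(len(ref))
                    if 0 <= i + lag < len(other))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A reference waveform and a copy delayed by two samples:
ref = [0.0, 0.0, 1.0, 3.0, 2.0, 1.0, 0.0, 0.0]
other = [0.0, 0.0] + ref[:-2]
print(sync_offset(ref, other, max_lag=4))  # → 2
```

This is why clean reference audio on every camera matters: the correlation peak is only as distinct as the shared audio content.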
Audio software plug-ins (effects and filters) come in two forms. On one hand, you have a wide range of products that emulate vintage analog hardware, often showcasing a skeuomorphic interface design. If you know how the original hardware version worked and sounded, then that will inform your expectations for the software equivalent. The other approach is to eschew the sonic and visual approach of analog emulation and build a plug-in with a modern look and sound. Increasingly, this second group of plug-ins employs intelligent profiles and “assistants” that analyze your track and provide you with automatic settings that form a good starting point.
Austria has a long and proud musical history, along with a heritage of developing leading audio products, and it is home to many high-end audio manufacturers. One of those companies is Sonible, which develops both hardware and software products. The Sonible software falls into that second camp of plug-ins, with clean sonic qualities and a modern interface design. Of key interest is the “smart:” category, including smart:comp 2, smart:limit, smart:EQ 3, smart:reverb, and smart:EQ live. The first four of these are also available as the smart:bundle.
Taking a spin with Sonible’s spectro-dynamic compressor
I tested out smart:comp 2, which is billed as a spectro-dynamic compressor. It’s compatible with Windows and macOS and installs AU, VST, VST3, and AAX (Avid) versions. Licensing uses an iLok or is registered to your computer (up to two computers at a time). Let’s start with why these are “smart.” In a similar fashion to iZotope’s Ozone and others, smart:comp 2 can automatically analyze your track and assign compressor settings based on different profiles. The settings may be perfect out of the gate or form a starting point for additional adjustments. Of course, you can also just start by making manual adjustments.
Spectro-dynamic is a bit of a marketing term, but in essence, smart:comp 2 works like a highly sophisticated multiband compressor. The compression ranges are based on the sonic spectrum of the track. Instead of the four basic bands of most multiband compressors, smart:comp 2 carves up the signal into 2,000 slices to which compression is dynamically applied. As a compressor, this plug-in is equally useful on individual tracks or on the full mix as a mastering plug-in.
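To make the distinction concrete, here is the static gain math of a simple downward compressor in Python (a toy illustration with placeholder threshold and ratio values, not Sonible's actual algorithm). A broadband compressor computes this once from the overall signal level; a spectral design computes it independently per slice, so a loud bass note can be tamed without ducking the vocal range:

```python
def gain_reduction_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static compressor curve: above threshold, output rises only
    1/ratio dB for every 1 dB of input. Returns the gain change in dB."""
    over = level_db - threshold_db
    return -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0

# Broadband: one gain value for the whole signal.
print(gain_reduction_db(-6.0))  # → -9.0

# "Spectral": the same curve applied per band, each with its own level.
band_levels = {"low": -4.0, "mid": -20.0, "high": -30.0}
per_band = {band: gain_reduction_db(lvl) for band, lvl in band_levels.items()}
print(per_band)  # only the loud low band is reduced
```

Scale the per-band idea up from three bands to thousands of spectral slices and you have the gist of the spectro-dynamic approach.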
In addition, I would characterize the interface design as “discoverable.” When you first open the plug-in, you see a clean user interface with simple adjustments for level and ratio. However, you can click certain disclosure triangles to open other parts of the interface, such as control of attack and release timing, as well as side-chain filtering. There are three unique sound-shaping controls at the bottom. Style controls the character of the compressor between “clean” (transparent) and “dirty” (warm and punchy). The Spectral Compression control dials in the amount of spectral (multiband) compression being applied. At zero, smart:comp 2 will act as an ordinary broadband compressor. The Color control lets you emphasize “darker” or “brighter” ranges within the spectral compression.
Simple, yet powerful functions
Start by selecting a profile (or leave on “Universal”). Play a louder section of your mix and let smart:comp 2 “learn” the track. Once learning is done and a profile established, you may be done. Or you may want to make further adjustments to taste. For example, the plug-in features automatic input riding along with automatic output (make-up gain). I found that for my mixes, input riding worked well, but I preferred a fixed output gain, which can be set manually.
There’s a “limit” function, which is always set to 0dBFS. When enabled, the limit option becomes a soft clipper. All peaks exceeding 0dBFS will be tamed to avoid hard clipping. It’s like a smooth limiter set to 0dBFS after the compression stage. However, if your intended use is broadcast production rather than music mixes, you may still need to add a separate limiter plug-in (such as Sonible’s smart:limit) in the mastering chain after smart:comp 2, especially if your target is lower, such as true peaks at -3dB or -6dB.
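A soft clipper of this kind can be sketched with any saturating curve; tanh is the classic choice (this is a generic illustration of the concept, not Sonible's actual algorithm):

```python
import math

def soft_clip(sample, ceiling=1.0):
    """Tanh soft clipper: low-level samples pass nearly unchanged,
    while peaks are squeezed so output never exceeds `ceiling`
    (1.0 = 0dBFS in floating-point audio)."""
    return ceiling * math.tanh(sample / ceiling)

print(soft_clip(0.05))  # ≈ 0.04996 — quiet material is barely touched
print(soft_clip(2.5))   # ≈ 0.9866 — an over-0dBFS peak lands under 1.0
```

Unlike a hard clip, the curve bends gradually, which is why the result sounds smooth rather than crunchy.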
smart:comp 2 did a wonderful job as a master bus compressor on my music mixes. I tested it against other built-in and third-party compressors within Logic Pro and DaVinci Resolve Fairlight. First, smart:comp 2 stays very clean when you push it hard; the sound is always pleasing. However, the biggest characteristic is that mixes sound more open, with better clarity.
smart:comp 2 for mixing video projects
I’m a video editor, and most of my mixes are more basic than multitrack music mixes with large track counts: just a few dialogue, music, and sound effects tracks, and that’s it. So the next test was applying smart:comp 2 to Premiere Pro’s mix bus. When I originally mixed this particular project, I used Adobe’s built-in tube-modeled compression on the dialogue tracks and then Adobe’s multiband compressor and limiter on the mix bus. For this test, I stripped all of those out and added only smart:comp 2 to the mix output bus.
I noticed the same openness as in the music mixes, but the input riding was even more evident. My sequence started with a 15-second musical lead-in. Then the music ducks under the dialogue as the presenter appears. I had mixed this level change manually for a good-sounding balance. When I applied smart:comp 2, the opening music was louder than with the plug-in bypassed. Yet this automatic loudness change felt right, and the transition to the ducked music was properly handled by smart:comp 2. Although the unprocessed mix initially sounded fine to me, I would have to say that using smart:comp 2 made it a better-sounding mix overall. It was also better than when I used the built-in options.
How you use plug-ins is a matter of taste and talent. Some pros may look at automatic functions as some sort of cheat. I think that’s wrong. Software analysis can give you a good starting point in less time, allowing more time for creativity. You aren’t getting bogged down twirling knobs. That’s a good thing. I realize vintage plug-ins often look cool, but if you don’t know the result you’ll get, they can be a waste of time and money. This is where plug-ins like the smart: series from Sonible will enhance your daily mixing workflow, regardless of whether you are a seasoned recording engineer or a video editor.
This battle-testing led Adobe to release a new Best Practices and Workflow Guide. It’s available online and as a free, downloadable PDF. While it’s targeted towards editors working on long-form projects, there are many useful pointers for all Premiere Pro editors. The various chapters cover such topics as hardware settings, proxies, multi-camera, remote/cloud editing, and much more.
Adobe has shied away from written documentation over the years, so it’s good to see them put the effort in to document best practices gleaned from working editors that will benefit all Premiere Pro users.