What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s, dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy, with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs) the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so it is important that a negative cutter be detail-oriented, precise, and clean in their work. You don’t want excess glue at the splices, and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a mistaken cut is made and that splice has to be repaired, then at least one frame is lost at that splice.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A Long Time Ago in a Cutting Room Far, Far Away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process brings us back to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If work print was generated and edited by a film editor, then it was the video editor’s job to match those edits from the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and handled by an online editor – was the equivalent of the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.
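
To make that arithmetic concrete, here’s a minimal Python sketch (my own illustration, not a period tool) of the basic conversion those editors performed: 35mm film runs 16 frames per foot, so a footage counter position converts to a frame count and then to 24 fps timecode. Real transfers to NTSC videotape also had to account for 3:2 pulldown, which this ignores.

```python
# Hypothetical example: convert a 35mm film position (feet + frames)
# into a frame count and 24 fps timecode. Assumes 16 frames per foot
# (35mm 4-perf) and ignores NTSC 3:2 pulldown.

FRAMES_PER_FOOT = 16  # 35mm, 4-perf
FPS = 24              # film rate

def feet_frames_to_frames(feet: int, frames: int) -> int:
    """Convert a footage count (feet+frames) to an absolute frame count."""
    return feet * FRAMES_PER_FOOT + frames

def frames_to_timecode(total: int, fps: int = FPS) -> str:
    """Format an absolute frame count as HH:MM:SS:FF timecode."""
    ff = total % fps
    ss = (total // fps) % 60
    mm = (total // (fps * 60)) % 60
    hh = total // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Example: a cut 123 feet + 11 frames into the reel
offset = feet_frames_to_frames(123, 11)           # 1979 frames
print(frames_to_timecode(offset))                 # 00:01:22:11
# The same cut relative to a tape that starts at 01:00:00:00:
print(frames_to_timecode(3600 * FPS + offset))    # 01:01:22:11
```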

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards, working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had come full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old-school film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.

The human side of the equation is more complex

Post-production tasks often involve a left-brain/right-brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many have sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences may be bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with some disabled clips left over from changes made in each revision pass. As I’ve said before: you will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different from the one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent to a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product working with the high-res media. Other than reviewing the work, there’s no value in having a highly paid film editor also handle basic assembly of the master. This is also true in many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles, along with finessing those that came out of the edit. Increasingly, Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

NLE Tips – Audio Track Mixing in Final Cut Pro

In the past I’ve explained how audio is routed through the Final Cut Pro architecture. I’ve also discussed track-based audio mixing, predominantly based on the workflow in Premiere Pro. Today I’d like to extend that workflow into the realm of Final Cut Pro.

Everyone knows that FCP is not track-based. The timeline consists of a string of audio/video clips called the primary storyline, which enables its magnetic behavior. Additional audio and video clips can be attached to the clips on the primary storyline as connected clips – video above, audio below. At this level the software is indeed trackless.

Understanding audio roles and lanes

Several years ago, Apple added the “roles” feature. Audio and video clips can be assigned default and/or custom role designations, which can be used for visual organization and other functions. For example, do you want to export a “textless” ProRes file from your timeline? Then simply disable the Titles video role in the export dialogue.

Apple engineers have done more with audio roles, which can be further grouped into audio “lanes” through the timeline index window. If you’ve assigned the correct audio roles to each clip, then all dialogue clips are grouped into the dialogue lane, all music clips into the music lane, and so on. If you export an FCPXML file for an outside mixer, audio roles also help organize the track layout in other audio software.
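
As a rough illustration of that last point, here’s a minimal Python sketch (not an official Apple tool or schema reference) that tallies role assignments in an exported FCPXML file. It assumes clips carry their role as an audioRole attribute, which is how recent FCPXML versions record assignments; verify against your own exports.

```python
# Hypothetical sketch: tally audio role assignments in an FCPXML export.
import xml.etree.ElementTree as ET
from collections import Counter

def count_audio_roles(fcpxml_path: str) -> Counter:
    """Count how many clip elements carry each audioRole attribute."""
    roles = Counter()
    for element in ET.parse(fcpxml_path).iter():
        role = element.get("audioRole")
        if role:
            roles[role] += 1
    return roles

# Example: preview the track layout before building busses in a DAW
# for role, n in count_audio_roles("locked_cut.fcpxml").items():
#     print(f"{role}: {n} clips")
```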

At this point the clips are still individual. However, once you combine all clips in the sequence into a single compound clip, the audio for all clips within an audio lane is summed together. This is similar to a group or submix bus in a DAW. The lanes are in turn summed together and sent to the mix output. In essence, each audio lane within the compound clip is similar to a summing track stack in Logic Pro. You can adjust volume and apply effects to the entire lane, on top of anything done to individual clips contained inside of that lane.

Mixing in FCP on real-world projects

I’m working on an Alaska travelogue series – on-camera host on location, voice-overs, voice-over pick-ups, and music. The host stand-ups were recorded in two environments – close to the shoreline and in a quiet wooded area.

The location sound mixer recorded both a lavalier mic and a boom mic on separate channels. My personal preference is the boom, but sometimes the waves on the beach created too much background noise. In those cases, it’s the lav mic, but then I have to contend with the duller sound of the mic under the clothing, along with some rustle.

The next challenge is getting the voice-overs to sound close to the on-camera audio. These were recorded on location, but in a quiet room. The final challenge is to match the sonic quality of the voice-over pick-ups (done by the host at his home) to the original voice-overs.

Step One

The first step in this process is to assign the proper audio roles before clips are edited into the FCP sequence. Roles are quite versatile. If you had multiple speakers, each one could be assigned a separate role. In this project, my audio roles are Dialogue, VO, VO2, and Music. Once clips are imported and roles assigned, I can edit as I normally would in Final Cut. I personally add very few audio effects to the individual clips at this point, because I will do that later. In addition, certain effects, like noise reduction, simply don’t work very well with short clips (more on that in a minute). So I only add what I need to “sell” the cut.

Step Two

Once the cut is approved and locked, I can move on to a final mix. To start, I’ll remove any audio effects that I’ve added to individual clips. Then, I meticulously go through and even out any level imbalances. Final Cut Pro features multiple gain stages. You have the clip volume control, but if you expand the audio, you see the individual channels, which each have volume controls, as well. Each of these can be raised by up to 12dB. So if you’ve applied 12dB to the clip and it’s still too quiet, expand the audio and bump up the channel volume. Or work this process in reverse. My objective is to end up with a clip volume that’s a bit hot in the peaks and then use the range tool to highlight the larger peaks and duck them down a bit.
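
Since gain values expressed in decibels simply add, those two stages give you up to +24dB of combined boost. Here’s a quick sketch of the math, using the standard dB-to-linear amplitude conversion:

```python
def db_to_linear(db: float) -> float:
    """Convert a decibel gain value to a linear amplitude multiplier."""
    return 10 ** (db / 20)

clip_gain_db = 12.0      # clip volume control at its +12dB maximum
channel_gain_db = 12.0   # expanded channel volume also at +12dB

total_db = clip_gain_db + channel_gain_db     # dB gains add: +24dB total
print(total_db, db_to_linear(total_db))       # 24.0, roughly a 15.85x boost
```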

Expand the audio and make sure you have overlaps with fade handles between all clips. This is somewhat time-consuming; it’s far simpler in Premiere Pro, where audio dissolves (crossfades) can be added across all audio edits in the timeline in a single step. But it’s a necessary step, including the addition of room tone/ambience to fill any gaps in the speech.

Finally, check the music. Make sure the edits work musically. Overall, the music volume can be a bit loud at this stage, but you want to make sure the balance is right for the entire sequence. So pay attention to the proper and graceful ducking of music around spoken audio.

Step Three

After you’ve made everything as uniform as possible, compound the sequence. Open the timeline index and enable “show audio lanes,” which expands the audio of the compound clip. You’ll now see a “track” or summing bus for each audio role – Dialogue, VO, VO2, and Music. When you select an audio lane, you can adjust its volume and apply audio effects to only that lane. That lane’s audio parameters are shown in the inspector pane.

Selecting the topmost level of the clip displays the output (i.e. mix) bus parameters. Additional effects can be added here. It’s fine to apply and adjust such “master” effects, but I recommend that you do not make any changes to the volume. That’s because the volume control comes after any effects, which would include a meter plug-in, such as the built-in multimeter plug-in. Leave the volume slider alone if you want to see accurate volume levels.
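
To visualize why, here’s a conceptual Python sketch of that order of operations (an illustration of the signal flow described above, not FCP’s actual code): the effects stack, including any meter plug-in, runs before the fader gain, so the meter never sees fader changes.

```python
def process_master(sample: float, effects, fader_db: float) -> float:
    """Run the effects stack first, then apply the fader gain afterwards."""
    for fx in effects:
        sample = fx(sample)       # a meter in this stack reads the level here
    return sample * 10 ** (fader_db / 20)   # fader applies after the meter

meter_reading = []
effects_chain = [
    lambda s: min(s, 0.9),                    # stand-in limiter
    lambda s: meter_reading.append(s) or s,   # stand-in meter: taps the level
]
out = process_master(1.2, effects_chain, fader_db=-6.0)
print(meter_reading[0], out)   # meter saw 0.9, but the output is ~0.45
```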

Aside from mixing in tracks/busses, audio roles add another value at the time of export. My deliverables include a ProRes file without titles, as well as audio that’s split into separate tracks. In Final Cut Pro’s export settings, I can select Multitrack QuickTime and then arrange the combination and order of roles. For this project, it’s a ProRes file with four stereo tracks corresponding to the four roles that I’m working with.

Note that when you export a multitrack file, each lane output also has any master output effects added to it. For example, if your mix uses a compressor and a limiter on the main output of the compound clip, then each lane/bus/track of the multitrack will also have the added effect of that compression and limiting. If you don’t want this, then make sure to disable these effects prior to exporting a Multitrack QuickTime file.

Which effects should you use?

I’ve now discussed how the process works, but what combination of effects should you be using? Obviously that’s a question of style and personal taste. The effects I use are similar to those described in my Premiere Pro article. I tend to stick with native Final Cut Pro effects, so that I don’t have to worry about what’s installed if I move to another Mac or a different editor has to step in. Also, Final Cut Pro is often a poor host for some third-party audio plug-ins. I don’t know the reason, but have been told it’s up to those developers to optimize their tools for FCP. In most cases these same plug-ins work well in Logic Pro, not to mention other non-Apple applications. Go figure!

I’m happy with most of the built-in Apple audio plug-ins, with the exception of noise reduction and other audio repair tasks. The Accusonus tools are my go-to, but they are sadly no longer available. After that it’s the RX package from iZotope. If you have a really challenging piece of audio, then use the standalone RX package on that clip and re-import. If you don’t own either of these, then the newly added voice isolation feature in Resolve is pretty sweet (and better than what’s in FCP). Another impressive contender is Adobe’s Podcast beta. The AI-powered voice enhancement feature is available for free use through their web portal. I’ve used it for some really poor Zoom interview audio and it did an outstanding job of cleaning up all manner of audio defects.

This explanation is most pertinent to dialogue recorded on location. Those are the clips that most often benefit from noise removal/repair. These tools require consistency and some lead-in before the first audio, so they are best applied to full tracks rather than individual clips. That’s why I make sure I have overlaps, fill in gaps, and do all of this processing on the lanes of the compound clip, not on individual clips. If you have different dialogue sections – some noisy and some clean – then it’s best to organize these into separate audio roles, so that they are sorted correctly once you compound the clip.

My typical processing chain

My FCP effects layout is similar to the description in the Premiere Pro post. Dialogue and VO tracks get some noise reduction, EQ, and compression. Voice-overs are particularly susceptible to plosives (popping “p” consonants) and sibilance, so plosive and de-essing filters are useful. For music, I usually spread the stereo image more and dip the EQ in the midrange. Plus some compression. All of this is designed to allow the dialogue to sit better in the mix. 

The last level of processing is what you do to the top level of the compound clip itself. That’s a bit like mastering in audio production. Applying effects to the compound clip is analogous to applying effects to a mix or output bus in the DAW world. In this particular chain, it’s EQ, exciter, compressor, adaptive limiter, and the multimeter. The effects stack is processed before the volume slider. Since I’m judging peak and loudness levels with the multimeter plug-in, I don’t want to make any volume slider changes on the compound clip, because those would be applied after the reading on the multimeter.

You’ll notice from my screen grabs that different compressor models have been used. These are all from the same Logic Pro compressor in FCP. This single plug-in features various presets designed to emulate tried-and-true analog compressors favored by top recording engineers/mixers.

Final thoughts 

As with my other Final Cut Pro audio articles and posts, I can already hear some screaming that this is just a workaround for the fact that Final Cut Pro has no “true” audio mixing panel. While that may be true, it’s also irrelevant. Until such time as Apple’s ProApps engineers redesign the audio section or add a “roles-based mixer” to the tool set, this is the software you have. If you want to mix in Final Cut Pro and deliver a properly mixed master file without using specialized audio software, then it’s best to understand how to achieve the required results.

If you step into the compound clip to make any editorial changes to the sequence or to individual clips, then you will not hear the results of the top-level mixing and effects. The proper mix is only heard when you step back out. This is a shortcoming compared with the same process in Premiere Pro. Therefore, when you are editing in Final Cut Pro, it’s best to leave all of the final mixing until the end. In Premiere Pro, I tend to mix as I go.

Hopefully this post gives you some insight into the “guts” of the software. If you can’t send the audio to a mix engineer and don’t want to bounce over to Logic Pro, Pro Tools, or Resolve (Fairlight) yourself, then there’s no reason Final Cut Pro can’t be made to work for you.

©2023 Oliver Peters

Final Cut Pro + DaVinci Resolve

The concept of offline and online editing goes back to the origins of film editing. Work print was cut by the film editor during the creative stage of the process and then original negative was conformed by the lab and married to the final mix for the release prints (with a few steps in between). The terms offline and online were lifted from early computer lingo and applied to edit systems when the post process shifted from film to video. Thus offline equates to the creative editorial stage, while conforming and finishing services are defined as online.

Digital nonlinear edit systems evolved to become capable of handling all of these stages of creative editorial and finishing at the highest quality level. However, both phases require different mindsets and skills, as well as more advanced hardware for finishing. And so, the offline/online split continues to this day.

If you are an editor cutting local market spots, YouTube videos, corporate marketing pieces, etc., then you are probably used to performing all of these tasks on your own. However, most major commercials, TV shows, and films definitely split them up. In feature films and high-end TV shows, the film editors are separate from the sound editing/mixing team and everything goes through the funnel of a post facility that handles the finishing services. The latter is often referred to as the DI (digital intermediate) process in feature film productions.

You may be cutting on Media Composer, Premiere Pro, or Final Cut Pro, but the final assembly, insertion of effects, and color correction will likely be done with a totally different system and/or application. The world of finishing offers many options, like SGO Mistika, Quantel Rio, and Filmlight Baselight. But the tools that pop up most often are Autodesk Flame, DaVinci Resolve, and Avid Symphony (the latter for unscripted shows). And of course, Pro Tools seemingly “owns” the audio post market.

Since offline/online still exists, how can you use modern tools to your advantage?

If Apple’s Final Cut Pro is your main axe, then you might be reading this and thinking that you can easily do this all within FCP. Likewise, if you’ve shifted to Resolve, you’re probably wondering, why not just do it all in Resolve? Both concepts are true in theory; however, I contend that most good editors aren’t the best finishers and vice versa. In addition, it’s my opinion that Final Cut is optimized for editing, whereas Resolve is optimized for finishing. That doesn’t make them mutually exclusive. In fact, the opposite is true. They work great in tandem and I would suggest that it’s good to know and use both.

Scenario 1: If you edit with FCP, but use outside services for color and sound, then you’ll need to exchange lists and media. Typically this means AAF for sound and FCPXML for Resolve color (or possibly XML or AAF if it’s a different system). If those systems don’t accept FCPXML lists, then normally you’d need to invest in tools from Intelligent Assistance and/or Marquis Broadcast. However, you can also use Resolve to convert the FCPXML list into other formats.

If they are using Resolve for color and you have your own copy of Resolve or Resolve Studio, then simply import the FCPXML from Final Cut. You can now perform a “preflight check” on your sequence to make sure everything translated correctly from Final Cut. Take this opportunity to correct any issues before it goes to the colorist. Resolve includes media management to copy and collect all original media used in your timeline. You have the option to trim files if these are long clips. Ideally, the DP recorded short takes without a lot of resets, which makes it easy to copy the full-length clip. Since you are not rendering/exporting color-corrected media, you aren’t affected by the UHD export limit of the free Resolve version.

After media management, export the Resolve timeline file. Both media and timeline file can go directly to the colorist without any interpretation required at the other end. Finally, Resolve also enables AAF exports for audio, if you need to send the audio files to a mixer using Pro Tools.

Scenario 2: What if you are doing everything on your own and not sending the project to a colorist or mixer for finishing? Well, if you have the skillset and understand the delivery criteria, then Resolve is absolutely your friend for finishing the project. For one thing, owning Resolve means you could skip purchasing Apple Motion, Compressor, and/or Logic Pro, if you want to. These are all good tools to have and a real deal from a cost standpoint; however, Resolve or Resolve Studio definitely covers most of what you would do with these applications.

Start the same way by sending your FCPXML into Resolve. Correct any editorial issues, flatten/collapse compound and multicam clips, etc. Insert effects and titles or build them in the Fusion page. Color correct. When it comes to sound, the Fairlight page is a full-fledged DAW. Assuming you have the mixing chops, then Fairlight is a solid stand-in for Logic Pro, Pro Tools, or other DAWs. Finally, export the various formats via the Deliver page.

Aside from the obvious color and mixing superiority of Resolve over Final Cut Pro, remember that you can media-manage, as well as render out trimmed clips – something that FCP won’t do without third-party applications. It’s also possible to develop proxy workflows that work between these two applications.

While both Final Cut Pro and DaVinci Resolve are capable of standing alone to cover the creative and finishing stages of editing, the combination of the two offers the best of all worlds – a fast editing tool and a world-class finishing application.

©2023 Oliver Peters

NLE Tips – Proxy Hacks

Editors often think of the clip within the edit application’s browser as the media file. But that clip is only a facsimile of the actual media. It links to potentially three different assets on the hard drive – the original camera (or sound) file, optimized media, and/or proxy media.

Optimized media. You may decide to create optimized media when the original media’s codec or file format is too taxing on your system. For example, you might convert a media file made up of an image sequence into an optimized movie file using one of the ProRes or DNx codecs. When you create optimized media, that is often the media used for finishing instead of the original camera media. For the sake of simplicity I’ll refer to original media from here on, but understand that it could be optimized media or original camera files.

Proxy media. There are many reasons for creating proxy media – portability, system performance, remote editing, etc. Proxy media is usually lightweight, more highly compressed, and of a lower resolution than the original media. Nearly all editing applications enable users to edit with lightweight proxy media in lieu of heavier, native camera files. When proxy media has been created, then the media clip in the NLE’s browser can actually link to both the original camera file, as well as the proxy media file. Software “toggles” in the application can seamlessly swap the link from one type of media file to the other.
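
Conceptually, each browser clip is a small record holding several possible media links plus a preference that decides which file actually plays. Here’s a hypothetical Python sketch of that idea – an illustration of the concept, not any NLE’s real internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrowserClip:
    """One browser clip that can link to original, optimized, and proxy files."""
    name: str
    original_path: str
    optimized_path: Optional[str] = None
    proxy_path: Optional[str] = None
    prefer_proxy: bool = False   # the app-level proxy toggle

    def active_media(self) -> str:
        """Return the file this clip would actually play right now."""
        if self.prefer_proxy and self.proxy_path:
            return self.proxy_path
        return self.optimized_path or self.original_path

clip = BrowserClip("A001_C002", "/media/A001/A001_C002.mov",
                   proxy_path="/media/A001/Proxy/A001_C002.mov")
clip.prefer_proxy = True
print(clip.active_media())   # plays the proxy while the toggle is on
```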

The NLEs that offer proxy editing workflows integrate routines to transcode and automatically switch the links between proxy and original camera files on the hard drive. DaVinci Resolve 18 is the newest in this group with the addition of the Blackmagic Proxy Generator application. However, that tool only works with Resolve Studio 18 downloaded from Blackmagic Design’s website. The Generator is an addition to Resolve 18 and augments the built-in transcoding tools. In either case, you don’t have to use the built-in routines or the Blackmagic Proxy Generator. You can encode proxies using different software and even different computers. Then you can attach those proxies to the clips in the editing application at a later time.

Creating external proxy media

Proxies can be created with any encoding software. I like Apple Compressor, which includes a category of presets specifically designed for proxy media generation. The presets can be modified according to your needs. For instance, you can add a LUT and effects, like a timecode overlay. This makes it easy to know whether you are toggled to the original or the proxy media within the NLE.

Before creating any proxy files, make sure that your original files all have unique file names. Rename any duplicates or those with generic file names, like Clip001, Clip002, etc. There are several key parameters needed for successful relinking between original and proxy media. These include matching names, frame rates, timecode, lengths, and audio channel configurations. Some applications let you force a relink when some of these items don’t match, but it will usually be one file at a time.
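
A short script can handle that unique-name check before you commit to a proxy render. Here’s a minimal Python sketch; the root path and extension list are assumptions for the example:

```python
from pathlib import Path
from collections import defaultdict

def find_duplicate_names(root: str, exts=(".mov", ".mxf", ".mp4")):
    """Map each media file name to every location it appears in under root."""
    seen = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.suffix.lower() in exts:
            seen[p.name].append(p)
    return {name: paths for name, paths in seen.items() if len(paths) > 1}

# Example: flag any name clashes before transcoding proxies
for name, paths in find_duplicate_names("/Volumes/Media/Footage").items():
    print(f"duplicate name: {name}")
    for p in paths:
        print(f"  {p}")
```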

Frame sizes can be smaller, since that’s an aspect of any proxy workflow. For example, you might start with 4K/UHD original media, but create half-size HD proxies. The embedded metadata in the proxy file informs the NLE so that the correct size is maintained when switching between the two. Likewise, the codecs do not need to match. You can have 4K/UHD ProRes HQ originals and HD H.264 proxy media (I prefer ProRes Proxy). The point is to have proxy media with smaller file sizes, which play back more efficiently on your computer.

When you transcode proxy media files in Compressor or any other encoding application, it’s best to render them into a folder specifically called Proxy. This can be anywhere you like, but it’s best to have it near your original camera files. If you have multiple camera file folders – organized by camera roll, day, camera model, etc – then there are two options. You can either have one single Proxy folder for all renders or a separate subfolder called Proxy within each camera roll folder.
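
If you’d rather script this step than use Compressor, here’s a minimal Python sketch that shells out to ffmpeg (assuming it’s installed) and renders half-size ProRes Proxy files into a Proxy subfolder beside each roll’s originals. Treat it as a starting point and verify on your own footage that timecode and audio channel configurations survive the transcode.

```python
import subprocess
from pathlib import Path

def make_proxies(roll_folder: str):
    """Render half-size ProRes Proxy copies into a Proxy subfolder."""
    roll = Path(roll_folder)
    proxy_dir = roll / "Proxy"
    proxy_dir.mkdir(exist_ok=True)
    for src in roll.glob("*.mov"):
        dst = proxy_dir / src.name                    # matching file name is key
        subprocess.run([
            "ffmpeg", "-i", str(src),
            "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
            "-vf", "scale=iw/2:ih/2",                 # half-size frame
            "-c:a", "copy",                           # keep audio channels as-is
            "-map_metadata", "0",                     # carry over file metadata
            str(dst),
        ], check=True)

# make_proxies("/Volumes/Media/Footage/A001")
```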

Dealing with externally-created proxies in different editing applications

Final Cut Pro – There is a setting to switch between Proxy Preferred and Original/Optimized. When you create external proxies, highlight the original camera clips and relink to the proxy media in the Proxy folder(s). Once proxies have been linked, then you can seamlessly switch between the two types of media.

Premiere Pro – There is a similar toggle button accessible in the timeline tools panel. The linking steps are similar to Final Cut Pro. Highlight the originals and then Attach Proxies. Navigate to the Proxy folder(s) and attach that media. The toggle button lets you switch back and forth between media types.

DaVinci Resolve Studio 18 – This update changed the proxy workflow and added the Generator application. You can still use the older proxy generation method. If so, then set the encoding parameters and location in your project settings. If you encode using the Blackmagic Proxy Generator app or an external application, then it’s a different process. The advantage of using Blackmagic Proxy Generator is that you can set up watch folders for automatic encoding.

The default location when using the Blackmagic Proxy Generator app or Resolve’s internal routine places a Proxy subfolder inside the folder of each roll of original media. When that condition exists, original clips added into the Media page automatically include links to both the original and the proxy media. In fact, the Proxy subfolders don’t even show up in Resolve’s browser when searching for media. When both types of media are present, the Resolve clip icons reflect that duality.

When you transcode externally with Compressor or another app, then media placed into individual Proxy subfolders will also automatically link inside Resolve. However, if you render to a single, unified Proxy folder, then you’ll need to manually relink the proxy files to the originals in the Media page. Like the other two NLEs, you can do this as a batch function by navigating to the Proxy folder.

I hope these pointers will be a useful guide the next time you decide to use a proxy media workflow.

©2022 Oliver Peters

Storage Case Studies

Regardless of whether you own or work for a small editorial company or a large studio cranking out blockbusters, media and how you manage it is the circulatory system of your operation. No matter the size, many post operations share some of the same concerns, although they may approach them with solutions that are vastly different from company to company.

Last year I wrote on this topic for postPerspective and interviewed key players at Molinare and Republic. This year I’ve revisited the topic, taking a look at top Midwestern spot shops Drive Thru and Utopic, as well as Marvel Studios. In addition, I’ve also broken down the “best practices” that Netflix suggests to its production partners.

Here are links to these articles at postPerspective:

Editing and Storage: Molinare and Republic

Utopic and Drive Thru: How Spot Shops Manage Their Media

Marvel and Netflix: How Studio Operations Manage Media

©2022 Oliver Peters