Storage Case Studies

Whether you own or work for a small editorial company or a large studio cranking out blockbusters, media and how you manage it is the circulatory system of your operation. No matter the size, many post operations share some of the same concerns, although they may approach them with solutions that differ vastly from company to company.

Last year I wrote on this topic for postPerspective and interviewed key players at Molinare and Republic. This year I’ve revisited the topic, taking a look at top Midwestern spot shops Drive Thru and Utopic, as well as Marvel Studios. In addition, I’ve broken down the “best practices” that Netflix suggests to its production partners.

Here are links to these articles at postPerspective:

Editing and Storage: Molinare and Republic

Utopic and Drive Thru: How Spot Shops Manage Their Media

Marvel and Netflix: How Studio Operations Manage Media

©2022 Oliver Peters

NLE Tips – Timecode Banner

Every editor has to contend with client changes. The process has become more challenging over the years with fewer clients attending edit sessions in person. This is especially difficult in long-form projects where you often end up rearranging sections to change the flow of the narrative. 

Modern tools make it easier than ever to generate time-stamped transcripts directly from the audio itself. The client can then create “paper cuts” from these transcripts for the editor to follow. Online virtual editing tools exist to edit and export such revisions in an NLE-friendly format. Unfortunately, clients prefer to work with tools they know, so Word often becomes the tool of choice instead of a virtual editor. This poses some editing challenges.
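
NLEs such as Premiere Pro handle transcription internally, but to show the kind of time-stamped text a paper cut is built from, here is a minimal sketch using the open-source Whisper library (my choice for illustration only; the file name is hypothetical):

```python
# Sketch only: print a time-stamped transcript from an audio file using the
# open-source Whisper library (assumed installed via "pip install openai-whisper").
# This just illustrates the kind of data a paper cut is built from.
import whisper

def print_timestamped_transcript(audio_path: str) -> None:
    model = whisper.load_model("base")      # small, fast model
    result = model.transcribe(audio_path)
    for segment in result["segments"]:
        start = int(segment["start"])       # seconds from the head of the file
        hh, rem = divmod(start, 3600)
        mm, ss = divmod(rem, 60)
        print(f"[{hh:02d}:{mm:02d}:{ss:02d}] {segment['text'].strip()}")

if __name__ == "__main__":
    print_timestamped_transcript("interview_dialogue.wav")  # hypothetical file
```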

The following is an all-too-familiar scenario. You are editing down an hourlong conversation that was recorded as a linear discussion. You’ve edited the first pass (version 1) and created an AI-based, speech-to-text transcript from the dialogue track. This includes timecode stamps and speaker identification for the client. (Premiere Pro is an excellent tool to use.)

The client sends back a paper cut in the form of a Word document with recommended trims, sections to delete, and rearranged paragraphs that change the flow of the conversation. The printed time stamps stay associated with each paragraph, which enables you to find the source clips within the version 1 timeline. However, as you move paragraphs around and cut sections, these time stamps are no longer a valid reference. The sequence times have now changed with your edits.

The solution is simple. First, create a movie file with running timecode on black. The timecode format and start time should match that of the sequence. You may want to create several of these assets at different frame rates and store them for future use. For instance, a lot of my sequences are cut at 23.98fps with a starting timecode of 00:00:00:00. I created a ProRes Proxy “timecode banner” file that’s over an hour long, which is stored in a folder along with other useful assets, like countdowns, tone, color bars, etc.
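
If you’d rather script the banner than export it from an NLE, here’s a minimal sketch that shells out to ffmpeg from Python to render burned-in 23.976fps timecode over black as ProRes Proxy. It assumes ffmpeg is installed; the font path, size, and duration are placeholders.

```python
# Sketch only: render about 65 minutes of running 23.976fps timecode over black
# as a ProRes Proxy "timecode banner" using ffmpeg's drawtext filter.
# Assumes ffmpeg is installed; font path, size, and duration are placeholders.
import subprocess

FONT = "/System/Library/Fonts/Supplemental/Courier New.ttf"  # placeholder path

drawtext = (
    f"drawtext=fontfile='{FONT}':fontsize=96:fontcolor=white:"
    "x=(w-text_w)/2:y=40:"
    "timecode='00\\:00\\:00\\:00':rate=24000/1001"
)

subprocess.run([
    "ffmpeg",
    "-f", "lavfi", "-i", "color=c=black:s=1920x1080:r=24000/1001",
    "-vf", drawtext,
    "-t", "3900",                               # roughly 65 minutes
    "-c:v", "prores_ks", "-profile:v", "0",     # profile 0 = ProRes Proxy
    "timecode_banner_2398.mov",
], check=True)
```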

Once you receive the client’s Word document, dupe the version 1 sequence to create a version 2 sequence. Import the timecode banner file into the project and drop it onto the topmost track of version 2. Crop the asset so you only see timecode over the rest of the picture. Since this is a rendered media asset and not a dynamic timecode plug-in applied to an adjustment layer, the numbers stay locked when you move the clip around.

As you navigate to each point in the edited transcript to move or remove sections, cut (“blade”) across all tracks to isolate those sections. Now rearrange as needed. The timecode banner clip moves with those sections, allowing you to stay in sync with the client’s time stamps as listed on the transcript.

When done, you can compare the new version 2 sequence with the transcript and know that all the changes you made actually match the document. Then delete the timecode banner and get ready for the next round.

©2022 Oliver Peters

NLE Tips – Premiere Pro Multicam

The best way to edit interviews with more than one camera is to use your edit software’s multicam function. The Adobe Premiere Pro version works quite well. I’ve written about it before, but there are differing multicam workflows depending on the specific production situation. Some editors prefer to work with cameras stacked on tracks, but that’s a very inefficient way of working. In this post, I’m going to look at a slightly different way of using Premiere Pro with multicam clips.

I like to work in the timeline more than the browser/bin. Typically an interview involves longer takes and fewer clips, so it’s easy to organize on the timeline and that’s how I build my multicam clips. Here is a proven workflow in a few simple steps.

Step 1 – String out your clips sequentially onto the timeline – all of A-cam, then all of B-cam, then C-cam, and so on. You will usually have the same number of clips for each camera, but on occasion there will be some false starts. Remove those from the timeline.

Step 2 – Move all of the B-cam clips to V2 and the audio onto lower tracks so that they are all below the A-cam tracks. Move all of the C-cam clips to V3 and the audio onto lower tracks so that they are all below the B-cam tracks. Repeat this procedure for each camera.

Step 3 – Slide the B-cam, C-cam, etc. clips for take 1 so they overlap with the A-camera clip. Repeat for take 2, take 3, and so on.

Step 4 – Highlight all of the clips for take 1, right-click and select Synchronize. There are several ways to sync, but if you recorded good reference audio onto all cameras (always do this), then synchronizing by the audio waveforms is relatively foolproof. Once the analysis is complete, Premiere will automatically realign the take 1 clips to be in sync with each other. Repeat the step for each take. This method is ideal when there’s mismatched timecode or when no slate or common sync marker (like a clap) was used.
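
Premiere’s sync analysis is a black box, but the idea behind matching waveforms is simple cross-correlation. Here’s a rough sketch of that concept (it assumes numpy, scipy, and the soundfile library; it is not Adobe’s implementation, and the file names are hypothetical):

```python
# Sketch only: estimate the offset between two cameras' reference audio tracks
# by cross-correlating their waveforms. Illustrates the idea behind sync-by-audio,
# not Premiere Pro's implementation. In practice, trim each file to a short
# section around a common event before correlating.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

def estimate_offset_seconds(ref_path: str, other_path: str) -> float:
    ref, sr_ref = sf.read(ref_path)
    other, sr_other = sf.read(other_path)
    assert sr_ref == sr_other, "resample first if the sample rates differ"
    # Mix to mono and normalize so level differences matter less.
    ref = ref.mean(axis=1) if ref.ndim > 1 else ref
    other = other.mean(axis=1) if other.ndim > 1 else other
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    other = (other - other.mean()) / (other.std() + 1e-9)
    corr = correlate(other, ref, mode="full", method="auto")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    # Positive lag: the same sound appears "lag" samples later in the other file.
    return lag / sr_ref

if __name__ == "__main__":
    offset = estimate_offset_seconds("a_cam_take1.wav", "b_cam_take1.wav")
    print(f"The same audio appears {offset:+.3f} s later in the B-cam file")
```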

Step 5 – Usually the A-camera will have the high-quality audio for your mix. However, if an external audio recorder was used for double-system sound, then the audio clips should have been part of the same syncing procedure in steps 1-4. In any case, delete all extra tracks other than your high-quality audio. In a two-person interview, it’s common to have a mix of both mics recorded onto A1 and A2 of the camera or sound recorder and then each isolated mic on A3 and A4. Normally I will keep all four channels, but disable A1 and A2, since my intention is to remix the interview using the isolated mics. In the case of some cameras, like certain Sony models, I might have eight tracks from the A-cam and only the first four have anything on them. Remove the empty channels. The point is to de-clutter the timeline.

Step 6 – Next, trim the ends of each take across all clips. Then close the gaps between all takes.

Step 7 – Before going any further, do any touch-up that may be necessary to the color in order to match the cameras. In a controlled interview, the same setting should theoretically apply to each take for each camera, but that’s never a given. You are doing an initial color correction pass at this stage to match cameras as closely as possible. This is easy if you have the same model camera, but trickier if different brands were used. I recently edited a set of interviews where a GoPro was used as the C-camera. In addition to matching color, I also had to punch in slightly on the GoPro and rotate the image a few degrees in order to clean up the wide-angle appearance and the fact that the camera wasn’t leveled well during the shoot.

Step 8 – Make sure all video tracks are enabled/shown, highlight all the video clips (not audio), and nest them. This will collapse your timeline video clips into a single nested clip. Right-click and Enable Multi-Camera. Then go through and blade the cut point at the beginning of each take (this should match the cuts in your audio). Duplicate that sequence for safe keeping. By doing it this way, I keep the original audio clips and do not place them into a nest. I find that working with nested audio is rather convoluted, so it’s more straightforward this way.

Step 9 – Now you are ready to edit down the interview – trimming down the content and switching/cutting between camera angles of the multicam clip. Any Lumetri correction, effects, or motion tab settings that you applied or altered in Step 7 follow the visible angle. Proceed with the rest of the edit. I normally keep multicam clips in the sequence until the very end to accommodate client changes. For example, trims made to the interview might result in the need to rearrange the camera switching to avoid jump cuts.

Step 10 – Once you are done and the sequence is approved by the client, select all of the multicam clips and flatten them. This leaves you with the original camera clips for only the visible angles. Any image adjustments, effects, and color correction applied to those clips will stick.

©2022 Oliver Peters

NLE Tips – Premiere Pro Workflow Guide

Avid Media Composer is still the king of the hill when it comes to editing feature films and other long-form projects. However, Adobe also has a strong and ever-growing presence with many editors of notable TV shows, documentaries, and dramatic feature films using Premiere Pro as their NLE of choice. Adobe maintains a close relationship with many of these users, often seeding early versions of advanced features to them, as well as seeing what workflow pain points they encounter.

This battle-testing led Adobe to release a new Best Practices and Workflow Guide. It’s available online and as a free, downloadable PDF. While it’s targeted towards editors working on long-form projects, there are many useful pointers for all Premiere Pro editors. The various chapters cover such topics as hardware settings, proxies, multi-camera, remote/cloud editing, and much more.

Adobe has shied away from written documentation over the years, so it’s good to see them put the effort in to document best practices gleaned from working editors that will benefit all Premiere Pro users.

©2022 Oliver Peters

Colourlab Ai

An artificial intelligence grading option for editors and colorists

There are many low-cost software options for color correction and grading, but getting a stunning look is still down to the skill of a colorist. Why can’t modern artificial intelligence tools improve the color grading process? Colorist and color scientist Dado Valentic developed Colourlab Ai as just that solution. It’s a macOS product that’s a combination of a standalone application and companion plug-ins for Resolve, Premiere Pro, Final Cut Pro, and Pomfort Live Grade.

Colourlab Ai comprises two main functions – grading and show look creation. Most Premiere Pro and Final Cut Pro editors will be interested in either the basic Colourlab Ai Creator or the richer features of Colourlab Ai Pro. The Creator version offers all of the color matching and grading tools, plus links to Final Cut Pro and Premiere Pro. The Pro version adds advanced show look design, DaVinci Resolve and Pomfort Live Grade integration, SDI output, and Tangent panel support. These integrations differ slightly, due to the architecture of each host application.

Advanced color science and image processing

Colourlab Ai uses color management similar to Resolve or Baselight. The incoming clip is processed with an IDT (input device transform), color adjustments are applied within a working color space, and then it’s processed with an ODT (output device transform) – all in real-time. This enables support for a variety of cameras with different color science models (such as ARRI Log-C) and it allows for output based on different display color spaces, such as Rec 709, P3, or sRGB.
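
As a rough mental model of that chain (not Colourlab’s actual color science), here is a simple sketch: the log decode is a made-up placeholder curve, the grade is a plain exposure offset in linear light, and the output transform is the standard Rec 709 encode.

```python
# Sketch only: the IDT -> working-space grade -> ODT chain, reduced to one channel.
# The "camera log" decode is a made-up placeholder, not any vendor's real transform;
# the output transform is the standard Rec 709 OETF.
import numpy as np

def idt_placeholder_log_to_linear(code_values: np.ndarray) -> np.ndarray:
    # Hypothetical log decode: 0..1 code values to scene-linear light.
    return (2.0 ** (code_values * 10.0) - 1.0) / (2.0 ** 10.0 - 1.0)

def grade_in_working_space(linear: np.ndarray, exposure_stops: float = 0.5) -> np.ndarray:
    # A simple "grade": push exposure up half a stop in linear light.
    return linear * (2.0 ** exposure_stops)

def odt_rec709(linear: np.ndarray) -> np.ndarray:
    # Rec 709 (BT.709) transfer function for display-referred output.
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear < 0.018, 4.5 * linear, 1.099 * linear ** 0.45 - 0.099)

log_pixels = np.array([0.1, 0.35, 0.6, 0.9])    # hypothetical log-encoded samples
display = odt_rec709(grade_in_working_space(idt_placeholder_log_to_linear(log_pixels)))
print(display.round(3))
```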

If you prefer to work directly with the Colourlab Ai application by itself – no problem. Import raw footage, color correct the clips, and then export rendered movie files with a baked in look. Or you can use the familiar roundtrip approach as you would with DaVinci Resolve. However, the difference in the Colourlab Ai roundtrip is that only color information moves back to the editing application without the need to render any new media.

The Colourlab Ai plug-in for Final Cut Pro or Premiere Pro reads the color information created by the Colourlab Ai application from an XML file used to transfer that data. A source effect is automatically applied to each clip with those color parameters. The settings are still editable inside Final Cut Pro (not Premiere Pro). If you want to modify any color parameter, simply uncheck the “Use Smart Match” button and adjust the sliders in the inspector. In fact, the Colourlab Ai plug-in for FCP is a full-featured grading effect and you could use it that way. Of course, that’s doing it the hard way!

The ability to hand off source clips to Final Cut Pro with color metadata attached is unique to Colourlab Ai. It’s a game changer, especially for DITs who deliver footage with a one-light grade to editors working in FCP. The fact that no media need be rendered also significantly speeds up the process.

A professional grading workflow with Final Cut Pro and Colourlab Ai

Thanks to Apple’s color science and media architecture, Final Cut Pro can be used as a professional color grading platform with the right third-party tools. CoreMelt (Chromatic) and Color Trix (Color Finale) are two examples of developers who have had success offering advanced tools, using floating panels within the Final Cut Pro interface. Colourlab Ai takes a different approach by offloading the grade to its own application, which has been designed specifically for this task.

My workflow test involved two passes – once for dailies (such as a one-light grade performed by a DIT on-set) and then again for the final grade of the locked cut. I could have simply sent the locked cut once to Colourlab Ai, but my intention was to test a workflow more common for feature films. Shot matching between different set-ups and camera types is the most time-consuming part of color grading. Colourlab Ai is intended to make that process more efficient by employing artificial intelligence.

Step one of the workflow is to assemble a stringout of all of your raw footage into a new FCP project (sequence). Then drag that project from FCP to the Colourlab Ai icon on the dock (with Colourlab Ai already open). The Colourlab Ai app will automatically determine some of the camera sources (like ARRI files) and apply the correct IDT. For any unknown camera, manually test the settings for different cameras or simply stick with a default Rec 709 IDT.

The Pro interface features three tabs – Grade, Timeline Intelligence, and Look Design. The top half of the Grade tab displays the viewer and reference images used for matching. Color wheels, printer light controls, scopes, and versions are in the bottom half. Scope choices include waveform, RGB parade, or vectorscope, but also EL Zones. Developed by Ed Lachman, ASC, the EL Zone System is a false color display with 15 colors to represent a 15-stop exposure range. The mid-point equates to the 18% grey standard.
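
The arithmetic behind any stop-based false color display is straightforward: each zone sits a whole number of stops above or below 18% grey in linear light. Here’s a small sketch of that calculation (illustrative only; the real EL Zone System defines its own zone boundaries and colors):

```python
# Sketch only: the stop arithmetic behind a false-color exposure display.
# Each zone is some number of stops above or below 18% grey in linear light.
# The real EL Zone System defines its own 15-zone palette; this is just the math.
import math

MIDDLE_GREY = 0.18

def stops_from_middle_grey(linear_value: float) -> float:
    return math.log2(linear_value / MIDDLE_GREY)

for value in (0.045, 0.09, 0.18, 0.36, 0.72):
    print(f"linear {value:0.3f} -> {stops_from_middle_grey(value):+.1f} stops")
# Prints -2.0, -1.0, +0.0, +1.0, +2.0 stops respectively.
```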

AI-based shot matching forms the core

Colourlab Ai focuses on smart shot matching, either through its Auto-Color feature or by matching to a reference image. The application includes a variety of reference images, but you can also import your own, such as from Shotdeck. The big advance Colourlab Ai offers over other matching solutions is Color Tune. A small panel of thumbnails can be opened for any clip. Adjust correction parameters – brightness, contrast, density, etc – simply by stepping through incremental value changes. Click on a thumbnail to preview it in the viewer.

The truly unique aspect is that Color Tune lets you choose from eleven matching options. Maybe instead of a Smart model, you’d prefer to match based only on Balance or RGB or a Perceptual model. Step through the thumbnails and pick the look that’s right for the shot. Therefore, matching isn’t an opaque process. It can be optimized in a style more akin to adjusting photos than traditional video color correction.
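
Colourlab Ai doesn’t publish its matching models, but the simplest flavor of the idea – matching by RGB statistics – can be sketched as transferring each channel’s mean and contrast from a reference frame to a target frame. This is a generic technique for illustration, not the product’s algorithm.

```python
# Sketch only: naive per-channel statistical matching of a target frame to a
# reference frame (mean and standard deviation transfer). A generic technique
# for illustration; Colourlab Ai's actual matching models are not public.
import numpy as np

def match_rgb_statistics(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """target, reference: float arrays of shape (H, W, 3) with values in 0..1."""
    matched = np.empty_like(target)
    for channel in range(3):
        t = target[..., channel]
        r = reference[..., channel]
        scale = r.std() / (t.std() + 1e-9)
        matched[..., channel] = (t - t.mean()) * scale + r.mean()
    return np.clip(matched, 0.0, 1.0)

# Hypothetical usage with random arrays standing in for real frames:
rng = np.random.default_rng(0)
reference_frame = rng.random((540, 960, 3)) * 0.8
target_frame = rng.random((540, 960, 3)) * 0.5 + 0.2
print(match_rgb_statistics(target_frame, reference_frame).mean(axis=(0, 1)))
```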

Timeline Intelligence allows you to rearrange the sequence to group similar set-ups together. Once you do this, use matching to set a pleasing look for one shot. Select that shot as a “fingerprint.” Then select the rest of the shots in a group and match those to the fingerprinted reference shot. This automatically applies that grade to the rest. But, it’s not like adding a simple LUT to a clip or copy-and-pasting settings. Each shot is separately analyzed and matched based on the differences within each shot.

When you’re done going through all of the shots, right-click any clip and “push” the scene (the timeline) back to Final Cut Pro. This action uses FCPXML data to send the dailies clips back to Final Cut, now with the added Colourlab Ai effect containing the color parameters on each source clip.

Remember that Final Cut Pro automatically adds a LUT to certain camera clips, such as ARRI Alexa files recorded in Log-C. When your clips come back in from Colourlab Ai, FCP may add a LUT on top of some camera files. You don’t want this, because Colourlab Ai has already made this adjustment with its IDT. If that happens, simply change the inspector LUT setting for that source file to “none.”

Lock the edit and create your final look

At this point you can edit with native camera clips that have a primary grade applied to them. No proxy media rendered by a DIT, hence a much faster turnaround and no extra media to take up drive space. Once you’ve locked the edit, it’s time for step two – the show look design for the final edit.

Drag the edited FCP project (new sequence with the graded clips) to the Colourlab Ai icon on the dock to send the edited sequence back to Colourlab Ai. All of the clips retain the color settings created earlier in the dailies grading session. However, this primary grade is just color metadata and can be altered. After any additional color tweaks, it’s time to move to Show Looks. Click through the show look examples and apply the one that fits best.

If you have multiple shots with the same look, apply a show look to the first one, copy it, and then apply that look to the rest of the selected clips. In most cases, you’ll have a different show look for various scenes within a film, but it’s also possible that a single show look would work through the entire film. So, experiment!

To modify a look or create your own, step into the Look Design tab (Pro version). Here you’ll find the Filmlab and Primary panels. Filmlab uses film stock emulation models and film’s subtractive color (CMY instead of RGB) for adjustments. Their film emulation is among the most convincing I’ve seen. You can select from a wide range of branded negative and print film stocks and then make contrast, saturation, and CMY color adjustments. The Primary panel gives you even more control over RGBCMY for the lift, gamma, and gain regions. Custom adjustments may be saved to create your own show looks. Once you’ve set a show look for all of your shots, push the sequence back to Final Cut Pro. Voila – a fully graded show and no superfluous media created in the process.
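
For the curious, the CMY side of this is simply the complement of RGB. A toy sketch of a subtractive-style density offset (a gross simplification of real film emulation, which models dye curves and crosstalk) looks like this:

```python
# Sketch only: a toy "subtractive" adjustment. Convert RGB to its CMY complement,
# add a density offset per dye layer, and convert back. Real film emulation models
# dye curves and crosstalk; this only shows the CMY complement idea.
import numpy as np

def adjust_density_cmy(rgb: np.ndarray, offsets=(0.0, 0.05, 0.0)) -> np.ndarray:
    """rgb: float array (..., 3) in 0..1; offsets are (cyan, magenta, yellow) density."""
    cmy = 1.0 - rgb                                   # complement of RGB
    cmy = np.clip(cmy + np.asarray(offsets), 0.0, 1.0)
    return 1.0 - cmy                                  # back to RGB

pixel = np.array([0.6, 0.5, 0.4])                     # hypothetical mid-tone
print(adjust_density_cmy(pixel))                      # a touch more magenta density
```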

Some observations

Colourlab Ai is a revolutionary tool based on a film-style approach to grading. Artificial intelligence models speed up the process, but you are always in control. Thanks to the ease of operation, you can get great results without Resolve’s complex node structure. You can always augment a shot with FCP’s own color tools for a power window or a vignette.

The application currently lacks a traditional undo/redo stack. Therefore, use the version history to experiment with settings and looks. Each time you generate a new match, such as with Auto-Color or using a reference image, a new version is automatically stored. When a new match isn’t involved – for example, when making color wheel adjustments – manually add a version at any waypoint you may want to return to. The version history displays a thumbnail for each version. Step through them to pick the one that suits you best.

If you are new to color correction, then Colourlab Ai might look daunting at first glance. Nevertheless, it’s deceptively easy to use. There are numerous tutorials available on the website, as well as directly accessible from the launch window. A 7-day free trial can be downloaded for you to dip your toes in the water. The artificial intelligence at the heart of Colourlab Ai will enable any editor to deliver professional grades.

©2022 Oliver Peters