Drive – Postlab’s Virtual Storage Volume

Postlab is the only service designed for multi-editor, remote collaboration with Final Cut Pro X. It works whether you have a team collaborating on-premises within a facility or spread out at various locations around the globe. Since the initial launch, Hedge has also extended Postlab’s collaboration to Premiere Pro.

When using Postlab, projects containing Final Cut Pro X libraries or Premiere Pro project files are hosted on Hedge’s servers. But, the media lives on local drives or shared storage and not “in the cloud.” When editors work remotely, media needs to be transferred to them by way of “sneakernet,” Hightail, WeTransfer, or other methods.

Hedge has now solved that media issue with the introduction of Drive, a virtual storage volume for media, documents, and other files. Postlab users can utilize the original workflow and continue with local media – or they can expand remote capabilities with the addition of Drive storage. Since it functions much like Dropbox, Drive can also be used by team members who aren’t actively engaged in editing. As a media volume, files on Drive are also accessible to Avid Media Composer and DaVinci Resolve editors.

Drive promises significantly better performance than a general business cloud service, because it has been fine-tuned for media. The ability to use Drive is included with each Postlab plan; but, storage costs are based on a flat rate per month for the amount of storage you need. Unlike other cloud services, there are no hidden egress charges for downloads. If you only want to use Drive as a single user, then Hedge’s Postlab Solo or Pro plan would be the place to start.

How Drive works

Once Drive storage has been added to an account, each team member simply needs to connect to Drive from the Postlab interface. This mounts a Drive volume on the desktop just like any local hard drive. In addition, a cache file is stored at a designated location. Hedge recommends using a fast SSD or RAID for this cache file. NAS or SAN network volumes cannot be used.

After the initial setup, the operation is similar to Dropbox’s Smart Sync function. When an editor adds media to the local Drive volume, that media is uploaded to Hedge’s cloud storage. It will then sync to all other editors’ Drive volumes. Initially those copies of the media are only virtual. The first time a file is played by a remote team member, it is streamed from the cloud server. As it streams, it is also added to the local Drive cache. Every file that has been fully played is then stored locally within the cache for faster access in the future.
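The stream-then-cache behavior described above can be sketched in a few lines. This is purely illustrative – the class and method names are my own invention, not Hedge’s actual implementation – but it captures the logic: serve from the cache when possible, otherwise stream and fill the cache.

```python
# Illustrative sketch of Drive's stream-then-cache behavior. All names
# here are hypothetical stand-ins, not Hedge's actual code.

class VirtualFile:
    """A placeholder entry on the mounted Drive volume."""

    def __init__(self, name, cloud_store, cache):
        self.name = name
        self.cloud = cloud_store   # dict standing in for Hedge's servers
        self.cache = cache         # dict standing in for the local SSD cache

    def read(self):
        # Fully cached files are served locally for instant access.
        if self.name in self.cache:
            return self.cache[self.name]
        # First playback streams from the cloud and fills the cache.
        data = self.cloud[self.name]
        self.cache[self.name] = data
        return data

cloud = {"interview.mov": b"...proxy media..."}
cache = {}
clip = VirtualFile("interview.mov", cloud, cache)
clip.read()                        # first play: streamed, now cached
assert "interview.mov" in cache    # subsequent plays come from the cache
```

The second `read()` never touches the cloud store, which is why playback is instantaneous after a clip has been fully played once.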

Hedge feels that latency matters as much as, if not more than, outright connection speed for a fluid editing experience. They recommend wired, rather than wi-fi, internet connections. However, I tested the system over wi-fi with office speeds of around 575Mbps down / 38Mbps up. This is a business connection and was fast enough to stream 720p MP4 and 1080p ProRes Proxy files with minimal hiccups on the initial streamed playback. Naturally, after a file was locally cached, access was instantaneous.

From the editor’s point of view, virtual files still appear in the FCPX event browser as if local and the timeline is populated with clips. Files can also be imported or dragged in from Drive as if they are local. As you play the individual clips or the timeline from within FCPX or Premiere, the files become locally cached. All in all, the editing experience is very fluid.

In actual practice

The process works best with lightweight, low-res files and not large camera originals. That is possible, too, of course, but not very efficient. Drive and the Hedge servers support most common media files, but not a format like REDCODE raw. As before, each editor will need to have the same effects, LUTs, Motion templates, and fonts installed for proper collaboration.

I did run into a few issues, which may be related to the recent 10.4.9 Final Cut update. For example, the built-in proxy workflow is not very stable. I did get it to work. Original files were on a NAS volume (not Drive) and the generated proxies (H.264 or ProRes Proxy) were stored on the Drive volume of the main system. The remote editing system would only get the proxies, synced through Drive. In theory that should work, but it was hit or miss. When it worked, some LUTs, like the standard ARRI Log-C LUTs, were not applied on the remote system in proxy mode. Also the “used” range indicator lines for the event browser clips were present on the original system, but not the remote system. Other than these few quirks, everything was largely seamless.

My suggested workflow would be to generate editing proxies outside of the NLE and copy those to Drive. H.264 or ProRes Proxy with matching audio configurations to the original camera files work well. Treat these low-res files as original media and import them into Final Cut Pro X or Premiere Pro for editing. Once the edit is locked, go to the main system and transfer the final sequence to a local FCPX Library or Premiere Pro project for finishing. Relink that sequence to the original camera files for grading and delivery. Alternatively, you could export an FCPXML or XML file for a Resolve roundtrip.

One very important point to know is that the entire Postlab workflow is designed around team members staying logged into the account. This maintains the local caches. It’s OK to quit the Postlab application, plus eject and reconnect the Drive volume. However, if you log out, those local caches for editing files and Drive media will be flushed. The next time you log back in, connection to Drive will need to be re-established, Drive information must be synced again, and clips within FCPX or Premiere Pro will have to be relinked. So stay logged in for the best experience.

Additional features

Thanks to the Postlab interface, Drive offers features not available for regular hard drives. For example, any folder within Drive can be bookmarked in Postlab. Simply click on a Bookmark to directly open that folder. The Drop Off feature lets you generate a URL with an expiration date for any Bookmarked folder. Send that link to any non-team member, such as an outside contributor or client, and they will be able to upload additional media or other files to Drive. Once uploaded to Hedge’s servers, those files show up in Drive within the folder and will be synced to all team members.

Hedge offers even more features, including Mail Drop, designed for projects with too much media to efficiently upload. Ship Hedge a drive and they’ll copy your dailies straight onto their servers. Pick Up is another feature, still in development. Once it ships, you will be able to select files on Drive, generate a Pick Up link, and send that to your client for download.

Editing with Drive and Postlab makes remote collaboration nearly like working on-site. The Hedge team is dedicated to expanding these capabilities with more services and broader NLE support. Given the state of post this year, these products are at the right time and place.

Check out this Soho Editors masterclass in collaboration using Postlab and Drive.

Originally written for FCP.co.

©2020 Oliver Peters

Accusonus ERA5

The trend in audio plug-ins is simple-to-use effects with a minimal number of controls. Waves started this with their One Knob series – a set of equalization, reverb, and compression filters to make audio “brighter,” “phatter,” or “wetter.” In recent years, Accusonus was among the first to expand this concept to audio repair effects in order to de-ess, remove plosives, reduce noise, and so on. Last summer I reviewed their ERA4 bundle. These plug-ins have become part of my go-to toolkit when dealing with audio in Premiere Pro and/or Final Cut Pro X.

Accusonus has now introduced their ERA5 bundles, along with a new pricing and licensing model (more on that later). As before, there’s a Standard and a Pro bundle. The Standard bundle includes the set of single-button filters, while the Pro bundle adds several more advanced, multi-band filters. I’ll skip the single-button filters, since I covered those in my ERA4 review. The Accusonus site features processed samples to hear how each works. However, these filters have been updated for ERA5 and to my ears, tend to sound better than before.

In addition to the single-button filters, both bundles include these new ERA5 effects: Voice AutoEQ, Room Tone Match (only available as an AudioSuite plug-in for Pro Tools), and Voice Deepener. The ERA5 Pro bundle adds Noise Remover Pro, Reverb Remover Pro, and De-Esser Pro.

Voice AutoEQ is an intelligent equalizer that analyzes your vocal track to establish a baseline and then offers controls to adjust the EQ towards more air, clarity, or body. Moving the puck around within the triangle results in complex, multi-frequency equalization using a single control. This filter is designed for single voices in mono or stereo tracks. It won’t work with a multi-channel, broadcast wave file and isn’t effective on a mixed dialogue track with several speakers.

The Voice Deepener filter seems like a gimmick to me. The intent is to add more bottom to a voice and make it sound fuller. Accusonus promotes it as giving the voice that “movie-trailer” effect. While a small touch of it on male voices does work, pushing it to extremes errs on the side of sounding like you are disguising the voice. It sounds downright cartoonish on female voices. Of course, that means you could use it for just such an effect, rather than only enhancement.

The three Pro effects (Noise Remover Pro, Reverb Remover Pro, and De-Esser Pro) are more advanced or multi-band versions of their companion single-button filter. You get both in the Pro bundle, so if the simpler version doesn’t achieve the correct results, use the Pro version instead.

Both bundles now include the Audio Clean-Up Assistant. This is a container that is applied as a single plug-in effect. Within it are five slots to which you can add any combination of the ERA5 processing modules. In operation, that’s a lot like iZotope’s “mothership” approach. Choose from a range of preset configurations or start with an empty container and build up your own configuration. Maybe you have a standard set of effects that you apply to every voice recording. Simply create your own channel strip configuration and save it as a custom preset. Then apply it as a single Audio Clean-Up Assistant effect.
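The container-with-slots idea is easy to picture in code. Here’s a minimal sketch of the concept – the class, the stand-in “modules,” and the fake processing are all my own illustration, not Accusonus code:

```python
# Conceptual sketch of the Audio Clean-Up Assistant "container": one
# plug-in instance holding up to five modules applied in slot order.
# Module names and the processing itself are invented stand-ins.

class CleanUpAssistant:
    MAX_SLOTS = 5

    def __init__(self, modules=None):
        self.modules = list(modules or [])

    def add(self, module):
        if len(self.modules) >= self.MAX_SLOTS:
            raise ValueError("all five slots are full")
        self.modules.append(module)

    def process(self, samples):
        # Apply each module in slot order, like a channel strip.
        for module in self.modules:
            samples = module(samples)
        return samples

# A custom "preset" for voice recordings; de-essing is faked here
# as a simple gain reduction just to show the chaining.
preset = CleanUpAssistant([
    lambda s: [x * 0.8 for x in s],   # stand-in "de-esser"
    lambda s: [x + 0.0 for x in s],   # stand-in "noise remover"
])
print(preset.process([1.0, 0.5]))     # → [0.8, 0.4]
```

Saving the configured chain as a preset and re-applying it as one effect is exactly the convenience the Assistant offers inside the NLE.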

One huge change over the past year is the pricing and licensing model. In the past, ERA bundles were purchased as perpetual licenses with activation keys for each separate plug-in. Now you can opt for subscription, as well as perpetual. Unfortunately, if you look at the Accusonus website, all promotion points towards subscription. It’s only when you go to the “buy now” page that you see a pulldown revealing the perpetual option. You can also purchase ERA5 Standard or ERA5 Pro through the FxFactory site (perpetual only). However, in both cases, it now appears that you can only purchase or subscribe to the bundle and not individual filters.

If you go through Accusonus, licensing is now handled in a manner similar to Adobe Creative Cloud. You set up an account and sign in from any of the plug-in panels. When you do so, all ERA5 plug-ins attached to that account are immediately activated. No need to enter individual activation codes. However, you should not sign out. Doing so deactivates the plug-ins until you sign in again. This may be confusing, because it implies that you have to constantly be connected to the internet. I’ve already seen confusion online about this point and Accusonus does not make it clear in their installation instructions nor on the website.

In fact, as long as you sign in (and were connected when you signed in) and stay signed in, your plug-ins work. Disconnect from the internet, lose your connection, whatever – the plug-ins are still activated. Adobe CC works in exactly the same manner. The advantage is that you can have the ERA5 bundle installed on multiple computers and easily move your activation around as you go from one machine to the other, simply through this sign-in/sign-out method.

If you already own ERA4, then the new tools may or may not entice you to upgrade. If you don’t own either, then it’s easy to start in a trial mode and decide. The Accusonus ERA5 filters are easy to use and augment the built-in effects bundles of most DAWs and NLEs. They are real-time and don’t require too much fiddling to dial in the sound. ERA5 is a useful set of audio repair tools for video editors, podcasters, and audio engineers alike.

©2020 Oliver Peters

iZotope RX8 Audio Editor

Most digital audio and video editing applications come with a robust set of audio plug-ins, but many editors and mixers prefer to augment those with third-party effects. iZotope is the go-to brand for many who need best-in-class audio effects tools. The company offers a number of comprehensive audio products and software suites, but most video editors will primarily be interested in RX8. It’s the latest version of iZotope’s renowned audio repair product.

iZotope offers its products, including RX8, in Elements (“lite”), Standard, and Advanced versions, giving the user the option to pick the feature set that best fits their budget. Some video editing software also comes bundled with one or more of the iZotope Elements products. iZotope’s Neutron, Nectar, and Ozone each install as a single plug-in that iZotope likes to call a “mothership.” This means that you apply a single instance of Nectar to a track and it becomes a container. Then, configure the processing modules that you need within the Nectar interface. In concept, it functions like a channel strip or effects rack. The filters work in real-time within the framework of the DAW or NLE.

An audio editor plus plug-ins

RX8 is different in that it installs over a dozen individual AU, VST, and AAX plug-ins, instead of a single “mothership” plug-in. In addition, a standalone application – the RX8 Audio Editor – is also installed. That’s where the real power is.

If you are working in Audition or Premiere Pro, for example, and need to apply a De-clip or De-ess effect to a voice-over recording, then you can simply apply that individual iZotope filter to the track. However, when more extensive processing is required, then it’s time to use the RX8 Audio Editor application. Most of the time you’ll find that it’s best to process a track in this external application first and then import the processed track into your editing application.

You can use the RX Connect plug-in within some DAWs and NLEs to roundtrip the track between the host and the RX8 application, much like Adobe’s dynamic link function. Unfortunately, the RX Connect roundtrip doesn’t work in current versions of Adobe Premiere Pro and Apple Final Cut Pro X. Instead, use a “reveal in Finder” command to locate the track, open it in RX8, process it, and then bring it back into the host to replace the original clip.

What’s new in RX8

iZotope has been continually improving the RX technology from one version to the next and RX8 is no exception. Besides interface changes and improved performance, RX8 includes three new processing modules.

Guitar De-noise will be of more interest to recording engineers than video editors. It is used to remove recording issues, like string squeaks on acoustic guitars, pick attacks, and amp hum with electric guitars. Spectral Recovery is ideal for news and documentary editors. Need to deal with a lo-fi voice-over recorded on a phone? This module can be used to restore frequencies above 4kHz and render a fuller voice recording. The Wow & Flutter module can be used to correct speed and pitch variations in older soundtracks. Several of the existing processing effects have also been improved with better processing, more functionality, and/or improved module interfaces.

The real heavy lifting

The RX8 standalone editor is truly a Swiss Army Knife of processing effects and at first glance might seem a bit daunting. Tracks can be displayed as a waveform, spectrogram, or a mix of both. The right side of the interface presents the selection of effects modules. You can apply single effects or create a module chain containing a series of filters. Plus there are a ton of presets. If you have a question about how a module works, click on the question mark icon in the upper right corner of the module panel and that takes you to iZotope’s website for reference information. However, you can also just start with Repair Assistant, which automatically analyzes the track and offers suggested processing. The Assistant presents A, B, and C preview options – pick one and tweak the settings further, if needed.

Many of the RX8 modules are processor-intensive. Depending on the function, some can be previewed in real-time. Others need to be rendered first and then you can compare and evaluate the before and after versions. RX8 maintains a history, so it’s easy to reject any changes that you’ve made, return to the initial state of the file, and try something different.

One interesting effect is Music Rebalance. Let’s say you have a completely mixed track of voice with music. Now you want the voice to be more dominant in the mix; but, remixing the original isn’t an option. One way to get there is Music Rebalance, which isolates and separates the component parts of the mix. This enables you to change the relative levels of each in the mix. As a by-product, it will also generate separate, isolated tracks, such as just the voice track. While such isolation isn’t 100% perfect, it’s some of the best isolation that I’ve heard.

But wait… There’s more

RX8 offers a large toolkit that goes way beyond the scope of this review. Here are just a few more highlights. If you need to get in deep for more audio surgery, then you can use Spectral Repair. It’s much like working with Photoshop. Select and then remove, replace, or “heal” noises, clicks, and other artifacts visible in the spectrogram.

Another useful feature is EQ Match (only available in RX8 Advanced). Do you have two different VO recordings done by the same talent at different times and they don’t sound the same? Use EQ Match to correct one to closely match the other. Editors who need to deliver final shows that adhere to proper loudness specs will be happy with the improved Loudness Control to monitor and adjust levels that meet broadcast targets.

The RX8 Audio Editor can now have up to 32 tabs of individual files loaded at once. These can be combined into a single Composite tab that allows you to apply the same processing simultaneously to all. In addition, RX8 also offers batch processing of audio files. Simply set up a module chain with the desired effects and settings, load multiple files, and apply that module chain to the batch. From there, export in a range of file formats and bit depths.
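The batch workflow is simple to reason about: one module chain, many files. This sketch shows the shape of that loop – the module functions and their behavior are hypothetical illustrations, not iZotope’s API:

```python
# Hypothetical sketch of RX8-style batch processing: a fixed module
# chain applied to every loaded file. Functions are illustrative only.

def trim_silence(samples, floor=0.01):
    # Drop samples below a noise floor (a crude stand-in for a module).
    return [x for x in samples if abs(x) > floor]

def normalize(samples):
    # Scale so the loudest sample hits full level.
    peak = max(abs(x) for x in samples) or 1.0
    return [x / peak for x in samples]

module_chain = [trim_silence, normalize]

def run_batch(files, chain):
    results = {}
    for name, samples in files.items():
        for module in chain:
            samples = module(samples)
        results[name] = samples   # RX8 would export per format/bit depth
    return results

batch = {"vo_take1.wav": [0.0, 0.2, 0.4], "vo_take2.wav": [0.5, 0.0]}
out = run_batch(batch, module_chain)
print(out["vo_take1.wav"])        # → [0.5, 1.0]
```

Setting up the chain once and letting it run across dozens of voice-over takes is where the time savings come from.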

iZotope’s complete product line forms a comprehensive audio toolkit. RX8 is the most relevant to video editors and audio post engineers. It’s a tool that will also benefit podcasters and vloggers. In short, anyone who deals with dialogue-heavy material. RX8 represents the latest version of a product that’s being constantly improved. There certainly are competing plug-in packages that offer some similar filters as individual plug-ins. However, nothing on the market is as all-encompassing within a single tool for cleaning, repairing, and restoring audio as iZotope RX8.

Originally written for Pro Video Coalition.

©2020 Oliver Peters

Color Finale Connect – Remote Grading for FCPX

Remote workflows didn’t start with COVID, but that certainly drove the need home for many. While editing collaboration at a distance can be a challenge, it’s a far simpler prospect than remote color grading. That’s often a very interactive process that happens on premises between a colorist and a client, director, or cinematographer. Established high-end post facilities, like Company3 with locations in the US, Canada, and England, have pioneered remote color grading sessions using advanced systems like Resolve and Baselight. This allows a director in Los Angeles and a colorist in London to conduct remote, real-time, interactive grading sessions. But the investment in workflow development, hardware, and grading environments to make this happen is not inconsequential.

High-end remote grading comes to Final Cut Pro X

The Color Finale team has been on a quest to bring advanced grading tools to the Final Cut Pro X ecosystem, most recently with last December’s release of Color Finale 2. Many editors are working from home these days, so the team decided to leverage the frameworks for macOS and FCPX to enable remote grading in a far simpler method than with other grading solutions.

The result is Color Finale Connect, which is a Final Cut Pro X workflow extension currently in free public beta. Connect enables two or more Final Cut Pro X users to collaborate in near-real-time in a color grading session, regardless of their location. This review is in the context of long-distance sessions, but Connect can also be used within a single facility where the participants might be in other parts of the building or in different buildings.

Color Finale Connect requires each user in a session to be on macOS Catalina, running licensed copies of Final Cut Pro X (not trial) and Color Finale 2.2 Pro (or higher). Download and install Color Finale Connect, which shows up as a Final Cut workflow extension. You can work in a Connect session with or without local media on every participant’s system. In order to operate smoothly and keep the infrastructure lightweight, person-to-person communication is handled outside of Connect. For example, interact with your director via Skype or Zoom on an iPad while you separately control Final Cut on your iMac.

Getting started

To start a session, each participant launches the Color Finale Connect extension within Final Cut. Whoever starts a session is the “broadcaster” and others that join this session are “followers.” The session leader (who has the local media) drags the Project icon to the Connect panel and “publishes” it. This generates a session code, which can be sent to the other participants to join the session from within their Connect extension panels.

Once a session is joined, the participants drag the Project icon from the Connect panel into an open FCPX Event. This generates a timeline of clips. If they have the matching local media, the timeline will be populated with the initial graded clips. If they don’t have media, then the timeline is populated with placeholder clips. Everyone needs to keep their Connect panel open to stay in the session (it can be minimized).

Data transfer is very small, since it consists mainly of Color Finale instructions; therefore, crazy-fast internet speeds aren’t required. It is peer-to-peer and doesn’t live anywhere “in the cloud.” If a participant doesn’t have local media installed, then as the session leader makes a color correction change in Color Finale 2 Pro, an “in-place” full-resolution frame is sent for that clip on the timeline. As more changes are made, the frames are updated in near-real-time.
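To see why the bandwidth requirement is so modest, consider what a grading instruction actually contains. Connect’s real protocol isn’t published, so the field names below are invented for illustration, but the payload would be something on this scale:

```python
# Illustrative only: what a per-clip grading delta might look like.
# The real Color Finale Connect protocol and field names are not
# public; this just demonstrates the tiny size of such messages.

import json

delta = {
    "clip_id": "A012_C004",                 # hypothetical clip reference
    "tool": "color_wheels",
    "params": {"lift": [0.0, 0.01, -0.01],
               "gain": [1.02, 1.0, 0.98]},
}

payload = json.dumps(delta)
print(len(payload), "bytes")   # on the order of a hundred bytes
assert json.loads(payload)["clip_id"] == "A012_C004"
```

A few hundred bytes per adjustment, versus gigabytes of media, is why a Connect session works fine on an ordinary home connection.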

The data communication is between Color Finale on one system and Color Finale on the others. All grading must happen within the Color Finale 2 Pro plug-in, not FCPX’s native color wheels or other plug-ins. The “in-place” frames support all native Final Cut media formats, such as H.264, ProRes, and ProRes RAW; however, formats that require a plug-in, like RED camera raw files, will not transmit “in-place” frames. In that case, the data applied to the placeholder frame is updated, but you won’t see a reference image.

This isn’t a one-way street. The session leader can enable any participant to also have control. Let’s say the session leader is the colorist and the director of photography is a participant. The colorist can enable remote control for the DP, which would permit them to make tweaks on their own system. This in turn would update back on the colorist’s system, as well as for all the other participants.

Color Finale Connect workflows

I’ve been testing a late-stage beta version of Connect and Color Finale 2.2 Pro and the system works well. The “in-place” concept is ingenious, but the workflow is best when each session member has local media. This has been improved with the enhanced proxy workflow updated in Final Cut Pro X 10.4.9. Let’s say the editor has the full-resolution, original media and generates smaller proxies – for example, 50% size H.264 files. These are small enough that you can easily send the Library and proxy media to all participants using services like WeTransfer, MASV, FileMail, or Frame.io.

One of the session members could be a favored colorist on the other side of the world. In this case, he or she would be working with the proxy media. If the editor and colorist are both able to control the session, then it becomes highly interactive. Formats like RED don’t pose a problem thanks to the proxy transcodes, as long as no local changes are made outside of the Color Finale plug-in. In other words, don’t change the RED raw source settings within this session. Once the colorist has completed the grade using proxy media, those grading settings would be updated through a Connect session on the editor’s system where the original media resides.

Color management

How do you know that your client sees the color in the same way as you do on a reference display? Remote color grading has always been hampered by color management and monitor calibration. It would, of course, be ideal for each participant in the session to have Blackmagic or AJA output hardware connected to a calibrated display. If there is an a/v output for FCPX, then the Connect session changes will also be seen on that screen. But that’s a luxury most clients don’t have.

This is where Apple hardware, macOS, and Final Cut Pro X’s color management come to the rescue and make Color Finale Connect a far simpler solution than other methods. If both you and your client are using Apple hardware (iMac, iMac Pro, Pro Display XDR) then color management is tightly controlled and accurate. First make sure that macOS display settings like True Tone and Night Shift are turned off on all systems. Then you are generally going to see the same image within the Final Cut viewer on your iMac screen as your client will see on theirs.

The one caveat is that users still have manual control of the screen brightness, which can affect the perception of the color correction. One tip is to include a grayscale or color chart that can be used to roughly calibrate the display’s brightness setting. Can everyone just barely see the darkest blocks on the chart? If not, brighten the display setting slightly. It’s not a perfect calibration, but it will definitely get you in the ballpark.
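A simple way to build such a chart is a strip of near-black steps climbing up from video black. The step size and count below are my own choice, not an official test pattern, but the idea is the same: if the first few steps merge into one, the display is set too dark.

```python
# A minimal near-black step chart, per the calibration tip above.
# Values are 8-bit video levels starting at video black (16); the
# step size and count are my own illustrative choices.

def near_black_chart(start=16, step=2, count=8):
    """Return 8-bit gray levels from video black upward."""
    return [start + i * step for i in range(count)]

print(near_black_chart())   # → [16, 18, 20, 22, 24, 26, 28, 30]
```

Render these values as adjacent gray patches in your NLE, drop the result on the timeline, and ask each participant whether the darkest patches are distinguishable.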

Color Finale 2 Pro turns Final Cut Pro X into an advanced finishing solution. Thanks to the ecosystem and extensions framework, Final Cut opens interesting approaches to collaboration, especially in the time of COVID. Tools like Frame.io and Postlab enable better long-distance collaboration in easier-to-use ways than previous technologies. Color Finale Connect brings that same ease-of-use and efficient remote collaboration to FCPX grading. Remember this is still a beta, albeit a stable one, so make sure you provide feedback should any issues crop up.

Originally written for FCP.co.

©2020 Oliver Peters

Working with ACES in DaVinci Resolve

In the film days, a cinematographer had a good handle on what the final printed image would look like. The film stocks, development methods, and printing processes were regimented with specific guidelines and limited variations. In color television production, up through the early adoption of HD, video cameras likewise adhered to the standards of Rec. 601 (SD) and Rec. 709 (HD). The advent of the video colorist allowed for more creative looks derived in post. Nevertheless, video directors of photography could also rely on knowing that the image they were creating would translate faithfully throughout post-production.

As video moved deeper into “cinematic” images, raw recording and log encoding became the norm. Many cinematographers felt their control of the image slipping away, thanks to the preponderance of color science approaches and LUTs (color look-up tables) generated from a variety of sources and applied in post. As a result, the Academy Color Encoding System (ACES) was developed as a global standard for managing color workflows. It’s an open color standard and method of best practices created by filmmakers and color scientists under the auspices of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences (AMPAS, aka “The Academy”). To dive into the nuances of ACES – complete with user guides – check out the information at ACEScentral.com.

The basics of how ACES works

Traditionally, Rec. 709 is the color space and gamma encoding standard that dictates your input, timeline, and exports for most television projects. Raw and log recordings are converted into Rec. 709 through color correction or LUTs. The color gamut is then limited to the Rec. 709 color space. Therefore, if you later try to convert a Rec. 709 ProRes HQ 4:2:2 master file into full RGB, Rec. 2020, HDR, etc., then you are starting from an already-restricted range of color data. The bottom line is that this color space has been defined by the display technology – the television set.
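A toy example makes the restriction concrete. Real gamut mapping is a three-dimensional conversion, not a simple clamp, but the principle holds: once values have been squeezed into a display range, widening the container later cannot bring the lost detail back.

```python
# Why starting from a Rec. 709 master is limiting: values clipped into
# the display range are gone for good. Toy numbers, normalized 0.0-1.0;
# real gamut/gamma mapping is far more complex than a clamp.

def clamp_to_display(values, lo=0.0, hi=1.0):
    return [min(max(v, lo), hi) for v in values]

scene_linear = [0.2, 0.9, 1.6, 2.4]       # highlights beyond display white
rec709_master = clamp_to_display(scene_linear)
print(rec709_master)                       # → [0.2, 0.9, 1.0, 1.0]

# Converting this master to a wider space later starts from 1.0, 1.0 -
# the 1.6 and 2.4 highlight detail can never be recovered.
```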

ACES is its own color space designed to be independent of the display hardware. It features an ultra-wide color gamut that encompasses everything the human eye can see. It is larger than Rec. 709, Rec. 2020, P3, sRGB, and others. When you work in an ACES pipeline, ACES is an intermediate color space not intended for direct viewing. In other world, ACES is not dictated by current display technology. Files being brought into ACES and being exported for delivery from ACES pass through input and output device transforms. These are mathematical color space conversions.

For example, say you film with an ARRI Alexa, record LogC, and grade in a Rec. 709 pipeline. A LogC-to-Rec. 709 LUT is applied to the clip to convert it to the Rec. 709 color space of the project. The ACES process is similar. When working in an ACES pipeline, instead of applying a LUT, you would apply an Input Device Transform (IDT) specific to the Alexa camera. This is equivalent to a camera profile for each camera manufacturer’s specific color science.

ACES requires one extra step, which is to define the target device on which this image will be displayed. If your output is intended to be viewed on television screens with a standard dynamic range, then an Output Device Transform (ODT) for Rec. 709 would be applied as the project’s color output setting. In short, the camera file is converted by the IDT into the ACES working color space, but is viewed on your calibrated display based on the ODT used. Under the hood, ACES preserves all of the color data available from the original image. In addition to IDTs and ODTs, ACES also provides for Look Modification Transforms (LMT). These are custom “look” files akin to various creative LUTs built for traditional Rec. 709 workflows.
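Structurally, the pipeline described above looks like this: camera file → IDT → ACES working space (where all grading happens) → ODT → display. The sketch below only shows that structure; the transforms are identity placeholders, whereas real IDTs and ODTs are published matrix-and-curve conversions, not one-liners.

```python
# Structural sketch of an ACES pipeline. The transform bodies are
# placeholders; real IDTs/ODTs are defined by the ACES specification
# and camera vendors, not by code like this.

def idt_alexa_logc(pixel):
    # Real IDT: decode the LogC curve, then matrix ARRI Wide Gamut
    # into the ACES color space.
    return pixel                       # identity placeholder

def grade(pixel, exposure=1.0):
    # Grading operates on values in the ACES working space.
    return [c * exposure for c in pixel]

def odt_rec709(pixel):
    # Real ODT: map ACES to Rec. 709 primaries, display gamma, and
    # the ACES output tone scale.
    return pixel                       # identity placeholder

camera_pixel = [0.18, 0.18, 0.18]      # mid-gray, illustrative
aces_pixel = idt_alexa_logc(camera_pixel)
display_pixel = odt_rec709(grade(aces_pixel, exposure=1.1))
print(display_pixel)
```

The key point is that only the first and last stages change when the camera or the delivery target changes; the grade in the middle is preserved.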

ACES holds a lot of promise, but it is still a work-in-progress. If your daily post assignments don’t include major network or studio deliverables, then you might wonder what benefit ACES has for you. In that case, yes, continuing to stick with a Rec. 709 color pipeline will likely be fine for a while. But companies like Netflix are behind the ACES initiative and other media outlets are bound to follow. You may well find yourself grading a project that requires ACES deliverables at some point in the future.

There is no downside in adopting an ACES pipeline now for all of your Resolve Rec. 709 projects. Working in ACES does not mean you can instantly go from a grade using a Rec. 709 ODT to one with a Rec. 2020 ODT without an extra trim pass. However, ACES claims to make that trim pass easier than other methods.

The DaVinci Resolve ACES color pipeline

Resolve has earned a position of stature within the industry. Despite its low price point, it offers the most complete ACES implementation available to editors and colorists. Among Media Composer, Premiere Pro, Final Cut Pro X, and Resolve, Resolve is the only one I would trust for an accurate ACES workflow at this point in time. You can start your edit in Resolve as Rec. 709 – or roundtrip from another editor into Resolve – and then switch the settings to ACES for the grade and delivery. Or you can start with ACES color management from the beginning. If you begin a Resolve project with a Rec. 709 workflow for editing and then switch to ACES for the grade, be sure to remove any LUTs applied to clips and reset grading nodes, since those adjustments will all change once you shift into ACES color management.

To start with an ACES workflow, select the Color Management tab in the Project Settings (lower right gear icon). Change Color Science to ACEScct and ACES version to 1.1. (The difference between ACEScc and ACEScct is that the latter adds a slight roll-off at the bottom, preserving a bit more shadow detail.) Then set the rest as follows: ACES Input Device Transform to No Input Transform, ACES Output Device Transform to Rec. 709 (when working with a calibrated grading display), and Process Node LUTs in to ACEScc AP1 Timeline Space. Finally, if this is for broadcast, enable Broadcast Safe and set the level restrictions based on the specs supplied by the media outlet.
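The ACEScc/ACEScct difference is easy to see in the published curves themselves. The minimal Python sketch below implements the encodings as defined in the Academy's ACEScc (S-2014-003) and ACEScct (S-2016-001) specifications; the two are identical except below a small threshold, where ACEScct substitutes a linear toe:

```python
import math

def acescc(x: float) -> float:
    """ACEScc encoding (S-2014-003): pure log, which dives steeply negative near black."""
    if x <= 0:
        return (math.log2(2 ** -16) + 9.72) / 17.52
    if x < 2 ** -15:
        return (math.log2(2 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def acescct(x: float) -> float:
    """ACEScct encoding (S-2016-001): the same log curve, but with a linear
    toe below 0.0078125 that rolls off gently instead of going to -infinity."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52

for linear in (0.0, 0.001, 0.0078125, 0.18):
    print(f"{linear:10.7f}  ACEScc={acescc(linear):+.4f}  ACEScct={acescct(linear):+.4f}")
```

Above the toe the two curves match exactly (18% gray encodes to about 0.414 in either), so mids and highlights grade identically; only the behavior at and just above black differs.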

With these settings, the next step is to select the IDT for each camera type in the Media page. Sort the list to change all cameras of a certain model at once. Some media clips will automatically apply an IDT based on metadata embedded into the clip by the camera. I found this to be the case with the raw formats I tested, such as RED and BRAW. While an IDT may appear to be doing the same thing as a technical LUT, the math is inherently different. As a result, you’ll get a slightly different starting look with Rec. 709 and a LUT, versus ACES and an IDT.
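Part of that difference comes down to how the two are evaluated: a LUT is a sampled table that gets interpolated, while a transform like an IDT is computed analytically. The toy Python sketch below (using a generic gamma curve as a stand-in, not any actual Resolve LUT or IDT) shows how even a reasonably sized 1D LUT only approximates the underlying function, with the worst error where the curve is steepest, near black:

```python
def exact(x: float) -> float:
    """The 'true' transform -- a simple gamma 2.4 encode as a stand-in."""
    return x ** (1 / 2.4)

# Build a 17-point 1D LUT by sampling the exact function
SIZE = 17
lut = [exact(i / (SIZE - 1)) for i in range(SIZE)]

def apply_lut(x: float) -> float:
    """Apply the LUT with linear interpolation, as a grading app would."""
    pos = x * (SIZE - 1)
    i = min(int(pos), SIZE - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# Compare LUT output against the analytic transform across the full range
max_err = max(abs(apply_lut(i / 1000) - exact(i / 1000)) for i in range(1001))
print(f"worst-case LUT error: {max_err:.4f}")  # largest in the steep toe near black
```

Real grading LUTs use larger tables and 3D cubes, so the gap is much smaller in practice, but it never fully disappears; an analytically evaluated transform has no such sampling error.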

Nearly all LUTs are built for the Rec. 709 color space and should not be used in an ACES workflow. Yes, you can apply color space transforms within your node structure to accommodate them, but the results are highly unpredictable and should be avoided. Technical camera LUTs in Resolve were engineered by Blackmagic Design based on each camera manufacturer’s specs; they are not supplied to Blackmagic as a plug-in by the manufacturer. The same is true for Apple, Avid, and Adobe, which means that in all cases a bit of secret sauce may have been employed – Apple’s S-Log conversion may not match Avid’s, for instance. ACES IDTs and ODTs within Resolve are also developed by Blackmagic, but based on ACES open standards. In theory, the results of an IDT in Resolve should match the same IDT used by another software developer.

Working with ACES on the Color page

After you’ve set up color management and the transforms for your media clips, you’ll have no further interaction with ACES during editing. Likewise, when you move to the Color page, your grading workflow will change very little. Of course, if you are accustomed to applying LUTs in a Rec. 709 workflow, that step is no longer necessary. You might find a reason to change the IDT for a clip, but typically it should stay set to the correct camera profile for that clip. Under the hood, the timeline is actually working in a log color space (ACEScc AP1); therefore, I suggest grading with Log rather than Primary color wheels – the results will be more predictable. Otherwise, grade any way you like to get the look that you are after.

Currently Resolve offers few custom look presets specific to the ACES workflow. There are three LMTs found under the LUTs option / CLF (common LUT format) tab (right-click any node): LMT Day for Night, LMT Kodak 2383 Print Emulation, and LMT Neon Suppression. I’m not a fan of the first two looks. Quite frankly, I feel Resolve’s film stock emulations are awful and nowhere near as pleasing as those available through Koji Advance or FilmConvert Nitrate. But the third is essential. The ACES color space has one current issue: extremely saturated colors at a high brightness level, like neon lights, can induce image artifacts. The Neon Suppression LMT can be applied to tone down extreme colors in such clips. For example, a shot with a highly saturated red item will benefit from this LMT, so that the red looks normal.

If you have used LUTs and filters for certain creative looks, like film stock emulation or the orange-and-teal look, use PowerGrades instead. Unlike LUTs, which are intended for Rec. 709 and are typically a “black box,” a PowerGrade is simply a string of nodes. Every time you grab a still in the Color page, you store that series of correction nodes as a PowerGrade. A few enterprising colorists have developed their own packs of custom Resolve PowerGrades, available for free or for sale on the internet.

The advantages are twofold. First, a PowerGrade can be applied to your clip without any transform or conversion to make it work. Second, because it is a series of nodes, you can tweak or disable individual nodes to your liking. As a practical matter, because PowerGrades were developed against a base image, you should insert a node in front of the added PowerGrade nodes and use it to neutralize your image to the base those nodes expect, giving them an optimal starting point.

Deliverables

The project’s ODT is still set to Rec. 709, so nothing changes in the Resolve Deliver page. If you need to export a ProRes HQ master, simply set the export parameters as you normally would. As an extra step of caution, set the Data Levels (Advanced Settings) to Video, and set the Color Space and Gamma Tags to Rec. 709 and Gamma 2.4. The result should be a proper video file with correct broadcast levels. So far so good.

One of the main reasons for an ACES workflow is future-proofing – after all, that is why you’ve been working in this extended color space. Yet no common video file format preserves all of that color data. Furthermore, formats like DNxHR and ProRes are controlled by individual companies and aren’t guaranteed to be future-proof.

An ACES archival master needs to be exported in the OpenEXR file format, which is an image sequence of EXR files. This will be a separate deliverable from your broadcast master file. First, change the ACES Output Device Transform (Color Management setting) to No Output Device and disable Broadcast Safe limiting. At this point all of your video clips will look terrible, because you are seeing the image in the ACES log color space. That’s fine. On the Deliver page, change the format to EXR, RGB float (no compression), set Data Levels to Auto, and set Color Space and Gamma Tags to Same As Project. Then export.

In order to test the transparency of this process, I reset my settings to an ODT of Rec. 709 and imported the EXR image sequence – my ACES master file. After import, the clip was set to No Input Transform. I placed it back-to-back on the timeline against the original. The two clips were a perfect match: the EXR without added grading and the original with correction nodes. The one downside of such an OpenEXR ACES master is a huge size increase. My 4K ProRes 4444 test clip ballooned from an original size of 3.19GB to 43.21GB in the EXR format.
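That size jump is easy to sanity-check with simple arithmetic. Uncompressed RGB float EXR stores three 32-bit samples per pixel, so a back-of-the-envelope estimate looks like the Python sketch below. It ignores EXR header and scanline overhead, and assumes UHD 3840×2160 purely for illustration, since the test clip's exact resolution, length, and frame rate aren't stated above.

```python
def exr_sequence_bytes(width: int, height: int, frames: int,
                       channels: int = 3, bytes_per_sample: int = 4) -> int:
    """Approximate size of an uncompressed 32-bit float EXR image sequence.

    Ignores per-file header/scanline overhead, so real files run slightly larger.
    """
    return width * height * channels * bytes_per_sample * frames

print(f"one UHD RGB float frame: {exr_sequence_bytes(3840, 2160, 1) / 1e6:.1f} MB")   # ~99.5 MB
print(f"one second at 24 fps:    {exr_sequence_bytes(3840, 2160, 24) / 1e9:.2f} GB")  # ~2.39 GB
```

At roughly 100 MB per UHD frame, a master in the 43 GB range corresponds to well under a minute of footage, which is consistent with the scale of ballooning seen in the test.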

Conclusion

Working with ACES inside of DaVinci Resolve involves some different terminology, but the workflow isn’t too different once you get the hang of it. In some cases, camera matching and grading is easier than before, especially when multiple camera formats are involved. ACES is still evolving, but as an open standard supported globally by many companies and noted cinematographers, the direction can only be positive. Any serious colorist working with Resolve should spend a bit of time learning and getting comfortable with ACES. When the time comes that you are called upon to deliver an ACES project, the workflow will be second nature.

UPDATE 2/23/21

Since I wrote this post, I’ve completed several grading jobs using the ACES workflow in DaVinci Resolve and have encountered a number of issues, primarily banding and artifacts with certain colors.

On a recent B-roll shoot, the crew was recording on a casino set with an ARRI Alexa Mini in Log-C. The set involved a lot of extreme lights and colors. The standard Resolve ACES workflow would be to set the IDT to Alexa, which automatically converts the Log-C image into the default working color space. In addition, it’s recommended to apply neon suppression in order to tone down the bright colors, like vibrant reds.

I soon discovered that the color of certain LED lights on the set became wildly distorted (see image). The purple trim lighting on the frames of signs and the edges of slot machines became garish and artificial. When I set the IDT to Rec 709 instead of Alexa and graded the shot manually without any IDT or LUT, I was able to get back to a proper look. It’s worth noting that I tested these same shots in Final Cut Pro using the Color Finale 2 Pro grading plug-in, which also incorporates ACES and log corrections. No problems there.

After scrutinizing a number of other shots within this batch of B-roll footage, I noticed quite a bit more banding in mid-range portions of these Alexa shots. For example, the slight lighting variations on a neutral wall in the background displayed banding, as if it were an 8-bit shot. In general, natural gradients within an image didn’t look as smooth as they should have. This is something I don’t normally see in a Rec 709 workflow with Log-C Alexa footage.

Overall, after this experience, I am less enthusiastic about using ACES in Resolve than I was when I started out. I’m not sure if the issue lies with Blackmagic Design’s implementation of these camera IDTs or if it’s an inherent problem with ACES. I’m not yet willing to completely drop ACES as a possible workflow, but for now I have to advise proceeding with caution if you intend to use ACES.

Originally written for Pro Video Coalition.

©2020, 2021 Oliver Peters