Generalists versus Specialists

“Jack of all trades, master of none” is a saying most of us are familiar with. But the complete version – “Jack of all trades, master of none, but oftentimes better than master of one” – carries quite the opposite meaning. In the world of post production you have Jacks and Jills of all trades (generalists) and masters of one (specialists). While editors are certainly specialized in storytelling, I would consider them generalists when comparing their skillsets to those of other specialists, such as visual effects artists, colorists, and audio engineers. Editors often touch on sound, effects, and color in a general (often temp) way to get client approval. The others have to deliver the best, final results within a single discipline. Editors have to know the tools of editing, but not the nitty gritty of color correction or visual effects.

This is closely tied to the Pareto Principle, which most know as the 80/20 Rule. This principle states that 80% of the consequences come from 20% of the causes, but it’s been applied in various ways. When talking about software development, the 80/20 Rule predicts that 80% of the users are going to use 20% of the features, while only 20% of users will find a need for the other features. The software developer has to decide whether the target customer is the generalist (the 80% user) or the specialist (the 20% user). If the generalist is the target, then the challenge is to add some specialized features to service the advanced user without creating a bloated application that no one will use.

Applying these concepts to editing software development

When looking at NLEs, the first question to ask is, “Who is defined as a video editor today?” I would separate editors into three groups. One group would be the “I have to do it all” group, which generates most of what we see on local TV, corporate videos, YouTube, etc. These are multi-discipline generalists who have neither the time nor the interest to deal with highly specialized software. In the case of true one-man bands, the skill set also includes videography, plus location lighting and sound.

The “top end” – national and international commercials, TV series, and feature films – could be split into two groups: craft (aka film or offline) editors and finishing (aka online) editors. Craft editors are specialists in molding the story, but generalists when it comes to operating the software. Their technical skills don’t have to be the best, but they need a solid understanding of visual effects, sound, and color, so that they can create a presentable rough cut with temp elements. The finishing editor’s role is to take the final elements from sound, color, and the visual effects houses, and assemble the final deliverables. A key talent is quality control and attention to detail; therefore, finishing editors have no need to understand dedicated color, sound, or effects applications, unless they are also filling one of those roles.

My motivation for writing this post stemmed from an open letter to Tim Cook, which many editors have signed – myself included. Editors have long been fans of Apple products and many gravitated from Avid Media Composer to Apple Final Cut Pro 1-7. However, when Apple reimagined Final Cut and dropped Final Cut Studio in order to launch Final Cut Pro X, many FCP fans were in shock. FCPX lacked a number of important features at first. Many of those elements have since been added back, but the development pace hasn’t been fast enough for some – hence the letter. My wishlist for new features is quite small. I recognize Final Cut for what it is in the Apple ecosystem. But I would like to see Apple work to raise the visibility of Final Cut Pro within the broader editing community. That’s especially important when the decision of which editing application to use is often not made by editors.

Blackmagic Design DaVinci Resolve – the über-app for specialists

This brings me to Resolve. Editors point to Blackmagic’s aggressive development pace and the rich feature set. Resolve is often viewed as the greener pasture over the hill. I’m going to take a contrarian’s point of view. I’ve been using Resolve since it was introduced as Mac software and recently graded a feature film that was cut on Resolve by another editor.

Unfortunately, the experience was more problematic than I’ve had with grades roundtripped to Resolve from other NLEs. Its performance as an editor was quite slow when trying to move around in the timeline, replace shots, or trim clips. Resolve wouldn’t be my first NLE choice when compared to Premiere Pro, Media Composer, or Final Cut Pro. It’s a complex program by necessity. The color management alone is enough to trip up even experienced editors who aren’t intimately familiar with what the various settings do with the image.

DaVinci Resolve is an all-in-one application that integrates editing (two different editing models), color correction (aka grading), Fusion visual effects, and the Fairlight DAW. Historically, all-in-ones have not had a great track record in the market. Other such über-apps include Avid|DS and Autodesk Smoke. Avid pulled the plug on DS, and Autodesk moved the Flame/Smoke/Lustre product family to a subscription model. Neither DS nor Smoke as a standalone application moved the needle on market share.

At its core, Resolve is a grading application with Fusion and Fairlight added in later. Color, effects, and audio mixing are all specialized skills, and the software is designed so that each specialist is comfortable with the toolset presented on those pages/modes. I believe Blackmagic has been attempting to capitalize on Final Cut editor discontent and create the mythical “FCP8” or “FC Extreme” that many wanted. However, adding completely new and disparate functions to an application that at its core is designed around color correction can make it quite unwieldy. Beginning editors are never going to touch most of what Resolve has to offer, and the specialists would rather have a dedicated, specialized tool, like Nuke, After Effects, or Pro Tools.

Apple Final Cut Pro – reimagining modern workflows for generalists

Apple makes software for generalists. Pages, Numbers, Keynote, Photos, GarageBand, and iMovie are designed for that 80%. Apple also creates advanced software for the more demanding user under the ProApps banner (professional applications). This is still “generalist” software, but designed for more complex workflows. That’s where Final Cut Pro, Motion, Compressor, and Logic Pro fit.

Apple famously likes to “skate to where the puck will be,” and having control over hardware, operating system, and software gives its teams special insight to develop software that is optimized for the hardware/OS combo. As a broad-based consumer goods company, Apple also understands market trends. In the case of iPhones and digital photography, it also plays a huge role in driving those trends.

When Apple launched Final Cut Pro X the goal was an application designed for simplified, modernized workflows – even if “Hollywood” wasn’t quite ready. This meant walking away from the comprehensive “suite of tools” concept (Final Cut Studio). They chose to focus on a few applications that were better equipped for where the wider market of content creators was headed – yet, one that could still address more sophisticated needs, albeit in a different way.

This reimagining of Final Cut Pro had several aspects to it. One was to design an application that could easily be used on laptops and desktop systems and was adaptable to single and dual screen set-ups. It also introduced workflows based on metadata to improve edit efficiency. It was intended as a platform with third parties filling in the gaps. This means you need to augment FCP to cover a few common industry workflows. In short, FCP is designed to appeal to a broad spectrum of today’s “professionals” and not how one might have defined that term in the early 1990s, when nonlinear editing first took hold.

For a developer, it comes down to who the product is marketed towards and which new features to prioritize. Generalists are going to grow the market faster, hence a better return on development resources. The more complex an application becomes, the more likely it is to have bugs or break when the hardware or OS is updated. Quality assurance (QA) testing expands exponentially with complexity.

Final thoughts

Do my criticisms of Resolve mean that it’s a bad application? No, definitely not! It’s powerful in the right hands, especially if you work within its left-to-right workflow (edit -> Fusion -> color -> Fairlight). But I don’t think it’s the ideal NLE for craft editing. The tools are designed for a collection of specialists. Blackmagic has been on this path for a rather long time now and seems to be at a fork in the road. Maybe they should step back, start from a clean slate, and develop a fresh, streamlined version of Resolve. Or split it up into a set of individual, focused applications.

So, is Final Cut Pro the ideal editing platform? It’s definitely a great NLE for the true generalist. I’m a fan and use it when it’s the appropriate tool for the job. I like that it’s a fluid NLE with a responsive UI design. Nevertheless, it isn’t the best fit for many circumstances. I work in a market and with clients that are invested in Adobe Creative Cloud workflows. I have to exchange project files and make sure plug-ins are all compatible. I collaborate with other editors and more than one of us often touches these projects.

Premiere Pro is the dominant NLE for me in this environment. It also clicks with how my mind works and feels natural to me. Although you hear complaints from some, Premiere has been quite stable for me in all my years of use. It hits the sweet spot for advanced editors working on complex productions without becoming overly complex. Product updates over the past year have provided new features that I use every day. However, if I were in New York or Los Angeles, the answer would likely be Avid Media Composer, given Avid’s continued dominance in broadcast operations and feature film post.

In the end, there is no right or wrong answer. If you have the freedom to choose, then assess your skills. Where do you fall on the generalist/specialist spectrum? Pick the application that best meets your needs and fits your mindset.

For another direct comparison check out this previous post.

©2022 Oliver Peters

Final Cut Pro vs DaVinci Resolve

Apple’s innovative Final Cut Pro editing software has passed its tenth year and for many, the development pace has become far too slow. As a yardstick, users point to the intensity with which Blackmagic Design has advanced its flagship DaVinci Resolve application. Since acquiring DaVinci, Blackmagic has expanded the editing capabilities and melded in other acquisitions, such as EyeOn Fusion and Fairlight audio. They’ve even integrated a second, FCP-like editing model called the Cut page. This has some long-time Final Cut editors threatening to jump ship and switch to Resolve.

Let’s dig a bit deeper into some of the comparisons. While Resolve has a strong presence as a premier color correction tool, its actual adoption as the main editor within the post facility world hasn’t been very strong. On the other hand, if you look outside of the US to Europe and the rest of the world, you’ll find quite a few installations of Final Cut Pro within larger media operations and production companies. Clearly both products have found a home servicing professional workflows.

Editing versus finishing

When all production and post was done with film, the picture editor would make all of the creative editing decisions by cutting workprint and sound using a flatbed or upright editing machine. The edited workprint became the template for the optical house, negative cutter, film timer, and lab to produce the final film prints. There was a clear delineation between creative editing and the finishing stages of filmmaking.

Once post moved to videotape, the film workflow was translated into its offline (creative editing) and online (finishing) video counterparts. Offline editing rooms used low-res formats and were less expensive to equip and operate. Online rooms used high-res formats and often looked like the bridge of a starship. But it could also be the other way around, because the offline and online processes were defined by the outcome and not the technology. Offline = creative decisions. Online = finished masters. Of course, given proper preparation or a big budget, the offline edit stage could be skipped. Everything – creative edit and finishing – was all performed in the same online edit bay.

Early nonlinear editing supplemented videotape offline edit bays for a hybrid workflow. As computer technology advanced and NLE quality and capabilities improved, all post production shifted to workstation-based operations. But the offline/online – editing/finishing – workflows have persisted, in spite of the fact that most computers and editing applications are capable of meeting both needs. Why? It comes down to three things: personality, kit, and skillset.

Kit first. Although your software might do everything well, you may or may not have a capable computer, which is why proxy workflows exist today. Beyond that comes monitoring. Accurate color correction and sound mixing require proper high-quality video and audio monitoring. A properly equipped finishing room should also have the right lighting environment for grading and/or acoustic wall treatments for sound mixing. None of this is essential for basic editing tasks, even at the highest level. While having a tool like Resolve makes it possible to cover all of the technical aspects of editing and finishing, if you don’t have the proper room, high-quality finishing may still be a challenge.

Each of the finishing tasks requires its own specialized skillset. A topnotch re-recording mixer isn’t going to be a great colorist or an award-winning visual effects compositor. It’s not that they couldn’t be, but for most of us, that’s not how the mind works, nor where the opportunities lead. The more time we spend at a specialized skill – the “10,000 hour” rule – the better we become at it.

Finally, the issue of personality. Many creative editors don’t have a strong technical background and some aren’t all that precise in how they handle the software. As someone who works on both sides, I’ve encountered some of the most awful timelines on projects where I’ve handled the finishing tasks. The cut was great and very creative, but the timeline was a mess.

On the flipside, finishing editors (or online editors before them) tend to be very detail-oriented. They are often very creative in their own right, but they do tend to fit the “left-brained” description. Many prefer finishing tasks over the messy world of clients, directors, and so on. In short, a topnotch creative editor might not be a good finisher and vice versa.

The all-in-one application versus the product ecosystem

Blackmagic Design’s DaVinci Resolve is an all-in-one solution, combining editing, color, visual effects, and sound mixing. As such, it follows in the footsteps of other all-in-ones, like Avid|DS (discontinued) and Autodesk Flame (integrated with Smoke and Lustre). Historically, neither of these nor any other all-in-one has been very successful in the wider editing market. Cost coupled with complex user interfaces has kept them in more rarified areas of post.

Apple took the opposite approach with the introduction of Final Cut Pro X. They opted for a simpler, more approachable interface without many features editors had grown used to in the previous FCP 7/FCP Studio versions. This stripped-down application was augmented by other Apple and third-party applications, extensions, and plug-ins to fill the void.

If you want the closest equivalent to Resolve’s toolkit in the Final Cut ecosystem, you’ll have to add Motion, Logic Pro, Xsend Motion, X2Pro Audio Convert, XtoCC, and SendToX at a very minimum. If you want to get close to the breadth of Adobe Creative Cloud offerings, also add Compressor, Pixelmator Pro (or Affinity Photo, Publisher, and Designer), and a photo application. Resolve is built upon a world-class color correction engine, but Final Cut Pro does include high-quality grading tools, too. Want more? Then add Color Finale 2, Coremelt Chromatic, FilmConvert Nitrate, or one of several other color correction plug-ins.

Yes, the building block approach does seem messy, but it allows a user to tailor the software toolkit according to their own particular use case. The all-in-one approach might appear better, but that gets to personality and skillset. It’s highly unlikely that the vast majority of Resolve users will fully master its four core capabilities: edit, color, VFX (Fusion), and mixing (Fairlight). A good, full-time editor probably isn’t going to be as good at color correction as a full-time colorist. A great colorist won’t also be a good mixer.

In theory, if you have a team of specialists who have all centralized around Resolve, then the same tool and project files could bounce from edit to VFX, to color, and to the mix, without any need to roundtrip between disparate applications. In reality it’s likely that your go-to mograph/VFX artist/compositor is going to prefer After Effects or maybe Nuke. Your favorite audio post shop probably won’t abandon Pro Tools for Fairlight.

Even for the single editor who does it all, Resolve presents some issues with its predefined left-to-right, tabbed workflow. For example, grading performed in the Color tab can’t be tweaked in the Edit tab. The UI is based on modal tabs instead of fly-out panels within a single workspace.

If you boil it all down, Resolve is the very definition of a finishing application and appeals best to editors of that mindset and with the skills to effectively use the majority of its power. Final Cut Pro is geared to the creative approach with its innovative feature set, like metadata-based organization, skimming, and the magnetic timeline. It’s more approachable for less-experienced editors, hiding the available technical complexity deeper down. However, just like offline and online editing suites, you can flip it around and do creative editing with Resolve and finishing with Final Cut Pro (plus the rest of the ecosystem).

The intangibles of editing

It’s easy to compare applications on paper and say that one product appears better and more feature-rich than another. That doesn’t account for how an application feels when you use it, which is something Apple has spent a lot of time thinking about. Sometimes small features can make all the difference in an editor’s preference. The average diner might opine that chef’s knives are the same, but don’t tell that to a real chef!

Avid Media Composer editors rave about the trim tool. Many Adobe Premiere Pro editors swear by Dynamic Link. Some Apple Final Cut Pro editors get frustrated when they have to return to a track-based, non-magnetic NLE. It’s puzzling to me that some FCP stalwarts are vocal about shifting to Resolve (a traditional track-based NLE) if Apple doesn’t add ‘xyz’ feature. That simply doesn’t make sense to me, unless a) you are equally comfortable in track-based versus trackless architectures, and/or b) you truly have the aptitude to make effective use out of an all-in-one application like Resolve. Of course, you can certainly use both side-by-side depending on the task at hand. Cost is no longer an impediment these days. Organize and cut in FCP, and then send an FCPXML of the final sequence to Resolve for the grade, visual effects, and the mix.

It’s horses for courses. I recently read that NFL Films edits in Media Composer, grades in DaVinci Resolve, and conforms/finishes projects in Premiere Pro. That might seem perplexing to some, but it makes all the sense in the world to me, because of the different skillsets of the users at those three stages of post. In my day gig, Premiere Pro is also the best choice for our team of editors. Yet, when I have projects that are totally under my control, I’ll often use FCP.

Ultimately there is no single application that is great at each and every element in post production. While the majority of features might fit all of my needs, that may not be true for you or anyone else. The divide between creative editing and finishing is likely to continue – at least at the higher end of production. In that context, Final Cut Pro still makes more sense for a frictionless editing experience, but Resolve is hard to beat for finishing.

There is one final caveat to consider. The post world is changing, and much of that is driven by the independent content creator, as well as the work-from-home transformation. That market segment is cost-conscious, and subscription business models are less appealing to it. So Resolve’s free entry point is attractive. Coupling Resolve with Blackmagic’s low-cost, high-quality cameras is also a winning strategy for new users. While Resolve can be daunting in its breadth, a new user can start with just the tools needed to complete the project and then learn new aspects of the software over time. As I look down the road, it’s a toss-up as to who will be dominant in another ten years.

For another look at this topic, click here.

©2021 Oliver Peters

Easy Resolve Grading with 6 Nodes

Spend any time watching Resolve tutorials and you’ll see many different ways in which colorists approach the creation of the same looks. Some create a look with just a few simple nodes. Others build a seemingly convoluted node tree designed to achieve the same goal. Neither approach is right or wrong.

Often what could all be done in a single node is spread across several, in order to easily trace back through your steps when changes are needed. It also makes it easy to compare the impact of a correction by enabling and disabling a node. A series of nodes applied to a clip can be saved as a PowerGrade, which is a node preset. PowerGrades can be set up for a certain look or can be populated with blank (unaltered) nodes that are organized for how you like to work. Individual nodes can also be labeled, so that it’s easy to remember what operation each node performs.

The following is a simple PowerGrade (node sequence) that can be used as a starting point for most color grading work. It’s based on using log footage, but can also be modified for camera RAW or recordings in non-log color spaces, like Rec 709. These nodes are designed as a simple operational sequence to follow and each step can be used in a manner that works best with your footage. The sample ARRI clip was recorded with an ALEXA camera using the Log-C color profile.

Node 2 (LUT) – This is the starting point, because the first thing I want to do is apply the proper camera LUT to transform the image out of log. You could also do this with manual grading (no LUT); in that case, the first three nodes would be rolled into one. Alternatively, you may use a Color Space Transform effect or even a Dehaze effect in some cases. But for the projects I grade, which largely use ARRI, Panasonic, Canon, and Sony cameras, adding the proper LUT seems to be the best starting point.

Node 1 (Contrast/Saturation) – With the LUT added to Node 2, I will go back to Node 1 to adjust contrast, pivot, and saturation. This changes the image going into the LUT and is a bit like adjusting the volume gain stage prior to applying an effect or filter when mixing sound. Since LUTs affect how color is treated, I will rarely adjust color balance or hue offsets (color wheels) in Node 1, as it may skew what the LUT is doing to the image in Node 2. The objective is to make subtle adjustments in Node 1 that improve the natural result coming out of Node 2.

Node 3 (Primary Correction) – This node is where you’ll want to correct color temperature/tint and use the color wheels, RGB curves, and other controls to achieve a nice primary color correction. For example, you may need to shift color temperature warmer or cooler, lower black levels, apply a slight s-curve in the RGB curves, or adjust the overall level up or down.

Node 4 (Secondary Correction) – This node is for enhancement and the tools you’ll generally use are hue/sat curves. Let’s say you want to enhance skin tones, or the blue in the sky. Adjust the proper hue/sat curve in this node.

Node 5 (Windows) – You can add one or more “power windows” within the node (or use multiple nodes). Windows can be tracked to follow objects, but the main objective is a way to relight the scene. In most projects, I find that one window per shot is typically all I need, if any at all. Often this is to brighten up the lighting on the main talent in the shot. The use of windows is a way to direct the viewer’s attention. Often a simple soft-edged oval is all you’ll need to achieve a dramatic result.

Node 6 (Vignette) – The last node in this basic structure is to add a vignette, which I generally apply just to subtly darken the corners. This adds a bit of character to most shots. I’ll build the vignette manually with a circular window rather than apply a stock effect. The window is inverted so that the correction impacts the shot outside of the windowed area.

So there’s a simple node tree that works for many jobs. If you need to adjust parameters such as noise reduction, that’s best done in Node 1 or 2. Remember that Resolve grading works on two levels – clip and timeline. These are all clip-based nodes. If you want to apply a global effect, like adding film grain to the whole timeline, then you can change the grading mode from clip to timeline. In the timeline mode, any nodes you apply impact the whole timeline and are added on top of any clip-by-clip correction, so it works a bit like an adjustment layer.
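
As an aside, if you rebuild this same node layout for every project, Resolve’s built-in scripting API can automate the Node 2 LUT step. Here’s a minimal Python sketch, assuming each clip on the timeline already carries this node structure; the LUT path shown is a hypothetical placeholder, so substitute whichever camera LUT your footage needs.

```python
# Minimal sketch using Resolve's scripting API (run from Workspace > Console,
# or an external Python session with the scripting module on the path).
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
timeline = project.GetCurrentTimeline()

LUT_PATH = "ARRI/ARRI_LogC_to_Rec709.cube"  # hypothetical - point to your own LUT

for clip in timeline.GetItemListInTrack("video", 1):
    # Node indexes are 1-based; node 2 is the LUT node in this layout.
    if not clip.SetLUT(2, LUT_PATH):
        print("Could not set LUT on:", clip.GetName())
```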

©2021 Oliver Peters

Working with ACES in DaVinci Resolve

In the film days, a cinematographer had a good handle on what the final printed image would look like. The film stocks, development methods, and printing processes were regimented with specific guidelines and limited variations. In color television production, up through the early adoption of HD, video cameras likewise adhered to the standards of Rec. 601 (SD) and Rec. 709 (HD). The advent of the video colorist allowed for more creative looks derived in post. Nevertheless, video directors of photography could also rely on knowing that the image they were creating would translate faithfully throughout post-production.

As video moved deeper into “cinematic” images, raw recording and log encoding became the norm. Many cinematographers felt their control of the image slipping away, thanks to the preponderance of color science approaches and LUTs (color look-up tables) generated from a variety of sources and applied in post. As a result, the Academy Color Encoding System (ACES) was developed as a global standard for managing color workflows. It’s an open color standard and method of best practices created by filmmakers and color scientists under the auspices of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences (AMPAS, aka “The Academy”). To dive into the nuances of ACES – complete with user guides – check out the information at ACEScentral.com.

The basics of how ACES works

Traditionally, Rec. 709 is the color space and gamma encoding standard that dictates your input, timeline, and exports for most television projects. Raw and log recordings are converted into Rec. 709 through color correction or LUTs. The color gamut is then limited to the Rec. 709 color space. Therefore, if you later try to convert a Rec. 709 ProResHQ 4:2:2 master file into full RGB, Rec. 2020, HDR, etc., then you are starting from an already-restricted range of color data. The bottom line is that this color space has been defined by the display technology – the television set.

ACES is its own color space, designed to be independent of the display hardware. It features an ultra-wide color gamut that encompasses everything the human eye can see. It is larger than Rec. 709, Rec. 2020, P3, sRGB, and others. When you work in an ACES pipeline, ACES is an intermediate color space not intended for direct viewing. In other words, ACES is not dictated by current display technology. Files being brought into ACES and being exported for delivery from ACES pass through input and output device transforms. These are mathematical color space conversions.

For example, say you film with an ARRI Alexa, record as LogC, and grade in a Rec. 709 pipeline. A LogC-to-Rec. 709 LUT will be applied to the clip to convert it to the Rec. 709 color space of the project. The ACES process is similar. When working in an ACES pipeline, instead of applying a LUT, I would apply an Input Device Transform (IDT) specific to the Alexa camera. This is equivalent to a camera profile for each camera manufacturer’s specific color science.

ACES requires one extra step, which is to define the target device on which this image will be displayed. If your output is intended to be viewed on television screens with a standard dynamic range, then an Output Device Transform (ODT) for Rec. 709 would be applied as the project’s color output setting. In short, the camera file is converted by the IDT into the ACES working color space, but is viewed on your calibrated display based on the ODT used. Under the hood, ACES preserves all of the color data available from the original image. In addition to IDTs and ODTs, ACES also provides for Look Modification Transforms (LMT). These are custom “look” files akin to various creative LUTs built for traditional Rec. 709 workflows.
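
Conceptually, the whole pipeline behaves like simple function composition. The sketch below is purely illustrative – the real transforms are published by the Academy as CTL code and live inside your grading software, so these function names are placeholders rather than a real API:

```python
# Illustrative only: shows the order of operations in an ACES pipeline,
# not the actual transform math (that is defined by the Academy's CTL code).
def aces_view(camera_pixel, idt, grade, odt):
    scene_linear = idt(camera_pixel)  # camera encoding -> ACES working space
    graded = grade(scene_linear)      # correction nodes operate in ACES
    return odt(graded)                # map to the target display, e.g. Rec. 709
```

The payoff of this structure is that retargeting a deliverable means swapping only the ODT; the grade itself stays in the ACES working space.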

ACES holds a lot of promise, but it is still a work-in-progress. If your daily post assignments don’t include major network or studio deliverables, then you might wonder what benefit ACES has for you. In that case, yes, continuing to stick with a Rec. 709 color pipeline will likely be fine for a while. But companies like Netflix are behind the ACES initiative and other media outlets are bound to follow. You may well find yourself grading a project that requires ACES deliverables at some point in the future.

There is no downside to adopting an ACES pipeline now for all of your Resolve Rec. 709 projects. Working in ACES does not mean you can instantly go from a grade using a Rec. 709 ODT to one with a Rec. 2020 ODT without an extra trim pass. However, ACES claims to make that trim pass easier than other methods.

The DaVinci Resolve ACES color pipeline

Resolve has earned a position of stature within the industry. With its low price point, it also offers the most complete ACES implementation available to any editor and/or colorist. Compared with Media Composer, Premiere Pro, or Final Cut Pro X, I would only trust Resolve for an accurate ACES workflow at this point in time. However, you can start your edit in Resolve as Rec. 709 – or roundtrip from another editor into Resolve – and then switch the settings to ACES for the grade and delivery. Or you can start with ACES color management from the beginning. If you start a Resolve project using a Rec. 709 workflow for editing and then switch to ACES for the grade, be sure to remove any LUTs applied to clips and reset grading nodes. Those adjustments will all change once you shift the settings into ACES color management.

To start with an ACES workflow, select the Color Management tab in the Master Settings (lower right gear icon). Change Color Science to ACEScct and ACES version 1.1. (The difference between ACEScc and ACEScct is that the latter has a slight roll-off at the bottom, thus allowing a bit more shadow detail.) Set the rest as follows: ACES Input Device Transform to No Input Transform. ACES Output Device Transform to Rec. 709 (when working with a calibrated grading display). Process Node LUTs in ACEScc AP1 Timeline Space. Finally, if this is for broadcast, enable Broadcast Safe and set the level restrictions based on the specs that you’ve been supplied by the media outlet.
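
That parenthetical difference is easy to see in the encoding math from the ACES specifications (S-2014-003 for ACEScc, S-2016-001 for ACEScct). A short Python sketch of the two forward curves:

```python
import math

def lin_to_acescc(x):
    # ACEScc (S-2014-003): log2 encoding all the way down; zero and
    # negative values are handled with a small offset, no linear toe
    if x <= 0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def lin_to_acescct(x):
    # ACEScct (S-2016-001): identical log segment, but a linear "toe"
    # below the break point -- the shadow roll-off mentioned above
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52
```

Below the break point (2^-7 in linear light) ACEScct switches to a linear segment instead of letting the log curve dive toward negative infinity, which is why lifting shadows feels closer to working with traditional log footage.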

With these settings, the next step is to select the IDT for each camera type in the Media page. Sort the list to change all cameras of a certain model at once. Some media clips will automatically apply an IDT based on metadata embedded into the clip by the camera. I found this to be the case with the raw formats I tested, such as RED and BRAW. While an IDT may appear to be doing the same thing as a technical LUT, the math is inherently different. As a result, you’ll get a slightly different starting look with Rec. 709 and a LUT, versus ACES and an IDT.

Nearly all LUTs are built for the Rec. 709 color space and should not be used in an ACES workflow. Yes, you can apply color space transforms within your node structure, but the results are highly unpredictable and should be avoided. Technical camera LUTs in Resolve were engineered by Blackmagic Design based on a camera manufacturer’s specs. They are not actually supplied to Blackmagic as a plug-in by the manufacturer. The same is true for Apple, Avid, and Adobe, which means that in all cases a bit of secret sauce may have been employed. Apple’s S-Log conversion may not match Avid’s, for instance. ACES IDTs and ODTs within Resolve are also developed by Blackmagic, but based on ACES open standards. In theory, the results of an IDT in Resolve should match that same IDT used by another software developer.

Working with ACES on the Color page

After you’ve set up color management and the transforms for your media clips, you’ll have no further interaction with ACES during editing. Likewise, when you move to the Color page, your grading workflow will change very little. Of course, if you are accustomed to applying LUTs in a Rec. 709 workflow, that step will no longer be necessary. You might find a reason to change the IDT for a clip, but typically it should be whatever is the correct camera profile for the associated clip. Under the hood, the timeline is actually working in a log color space (ACEScc AP1); therefore, I would suggest grading with Log rather than Primary color wheels. The results will be more predictable. Otherwise, grade any way you like to get the look that you are after.

Currently Resolve offers few custom look presets specific to the ACES workflow. There are three LMTs found under the LUTs option / CLF (common LUT format) tab (right-click any node): LMT Day for Night, LMT Kodak 2383 Print Emulation, and LMT Neon Suppression. I’m not a fan of either of the first two looks. Quite frankly, I feel Resolve’s film stock emulations are awful and certainly nowhere near as pleasing as those available through Koji Advance or FilmConvert Nitrate. But the third is essential. The ACES color space has one current issue, which is that extremely saturated colors with a high brightness level, like neon lights, can induce image artifacts. The Neon Suppression LMT can be applied to tone down extreme colors in some clips. For example, a shot with a highly saturated red item will benefit from this LMT, so that the red looks normal.

If you have used LUTs and filters for certain creative looks, like film stock emulation or the orange-and-teal look, then use PowerGrades instead. Unlike LUTs, which are intended for Rec. 709 and are typically a “black box,” a PowerGrade is simply a string of nodes. Every time you grab a still in the Color page, you have stored that series of correction nodes as a PowerGrade. A few enterprising colorists have developed their own packs of custom Resolve PowerGrades available for free or sale on the internet.

The advantages are twofold. First, a PowerGrade can be applied to your clip without any transform or conversion to make it work. Second, because these are a series of nodes, you can tweak or disable nodes to your liking. As a practical matter, because PowerGrades were developed with a base image in mind, you should insert a node in front of the added PowerGrade nodes. This lets you balance your image first, providing an optimal starting point for the PowerGrade nodes that follow.

Deliverables

The project’s ODT is still set to Rec. 709, so nothing changes in the Resolve Deliver page. If you need to export a ProResHQ master, simply set the export parameters as you normally would. As an extra step of caution, set the Data Levels (under Advanced Settings) to Video, and set the Color Space and Gamma Tags to Rec. 709 and Gamma 2.4. The result should be a proper video file with correct broadcast levels. So far so good.

One of the main reasons for an ACES workflow is future proofing, which is why you’ve been working in this extended color space. No common video file format preserves this data. Furthermore, formats like DNxHR and ProRes are governed by companies and aren’t guaranteed to be future-proofed.

An ACES archival master file needs to be exported in the Open EXR file format, which is an image sequence of EXR files. This will be a separate deliverable from your broadcast master file. First, change the ACES Output Device Transform (in the Color Management settings) to No Output Device and disable Broadcast Safe limiting. At this point all of your video clips will look terrible, because you are seeing the image in the ACES log color space. That’s fine. On the Deliver page, change the format to EXR with RGB float (no compression), set Data Levels to Auto and the Color Space and Gamma Tags to Same As Project, and then export.

In order to test the transparency of this process, I reset my settings to an ODT of Rec. 709 and imported the EXR image sequence – my ACES master file. After import, the clip was set to No Input Transform. I placed it back-to-back on the timeline against the original. The two clips were a perfect match: EXR without added grading and the original with correction nodes. The one downside of such an Open EXR ACES master is a huge size increase. My 4K ProRes 4444 test clip ballooned from an original size of 3.19GB to 43.21GB in the EXR format.
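
The math behind that growth is straightforward. Here’s a rough, back-of-the-envelope sketch – the frame size and float depth are assumptions for illustration, and EXR headers add a little more on top:

```python
# Approximate size of one uncompressed RGB float EXR frame.
width, height = 3840, 2160   # assumed UHD frame
channels = 3                 # RGB
bytes_per_sample = 4         # 32-bit float per channel
frame_bytes = width * height * channels * bytes_per_sample
print(round(frame_bytes / 2**20), "MiB per frame")  # ~95 MiB

# At 24 fps that is roughly 2.2 GiB per second of footage, which is why
# a ~3GB ProRes clip can balloon past 40GB as an EXR sequence.
```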

Conclusion

Working with ACES inside of DaVinci Resolve involves some different terminology, but the workflow isn’t too different once you get the hang of it. In some cases, camera matching and grading is easier than before, especially when multiple camera formats are involved. ACES is still evolving, but as an open standard supported globally by many companies and noted cinematographers, the direction can only be positive. Any serious colorist working with Resolve should spend a bit of time learning and getting comfortable with ACES. When the time comes that you are called upon to deliver an ACES project, the workflow will be second nature.

UPDATE 2/23/21

Since I wrote this post, I’ve completed a number of grading jobs using the ACES workflow in DaVinci Resolve and have encountered several issues, primarily banding and artifacts with certain colors.

In a recent B-roll shoot, the crew was recording on a casino set with an ARRI Alexa Mini in Log-C. The set involved a lot of extreme lights and colors. The standard Resolve ACES workflow would be to set the IDT to Alexa, which then automatically transforms the Log-C image into the default working color space. In addition, it’s also recommended to apply neon suppression in order to tone down the bright colors, like vibrant reds.

I soon discovered that the color of certain LED lights in the set became wildly distorted. The purple trim lighting on the frames of signs or the edges of slot machines became very garish and artificial. When I set the IDT to Rec 709 instead of Alexa and graded the shot manually without any IDT or LUT, I was able to get back to a proper look. It’s worth noting that I tested these same shots in Final Cut Pro using the Color Finale 2 Pro grading plug-in, which also incorporates ACES and log corrections. No problems there.

After scrutinizing a number of other shots within this batch of B-roll footage, I noticed quite a bit more banding in mid-range portions of these Alexa shots. For example, the slight lighting variations on a neutral wall in the background displayed banding, as if it were an 8-bit shot. In general, natural gradients within an image didn’t look as smooth as they should have. This is something I don’t normally see in a Rec 709 workflow with Log-C Alexa footage.

Overall, after this experience, I am now less enthusiastic about using ACES in Resolve than I was when I started out. I’m not sure if the issue is with Blackmagic Design’s implementation of these camera IDTs or if it’s an inherent problem with ACES. I’m not yet willing to completely drop ACES as a possible workflow, but for now, I have to advise proceeding with caution if you intend to use ACES.

Originally written for Pro Video Coalition.

©2020, 2021 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world-famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project while director/photographer Griffith captured the entire story. The result is the 3D VR documentary, Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background includes camera, editing, and visual effects, with a lot of experience in traditional visual effects, 2D-to-3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have had a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. In addition to these tools, I was using Fusion Studio in my workflow, so it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and a half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, all the tools I was used to are there, in what I would argue is a cleaner interface. Sometimes software just collects features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack a left and right eye image on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full resolution sources later. I just had to unlink the timeline and then reconnect it to another bin with my 8K media.

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist – a man of many talents – utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a proxy 2K x 2K stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run through Z CAM’s Wonderstitch application, with is auto-stitching software. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch it in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations. For instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done, and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in ambisonic third order, which means you get spatial awareness around and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed point, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ axis orientation, we would add a Panomap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to get in deeper than a normal edit application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them. But only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high level in regards to the stitch and the rig removals. As a visual effects guy, there’s always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn’t hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters