Blackmagic Cloud Store Workgroup Ideas

In my previous post I reviewed the new Blackmagic Cloud Store Mini. (Click here for the full article.) Blackmagic Design has a track record of being disruptive with cool new ideas and products. Their cloud storage line is no exception. I am less interested in the Blackmagic Cloud, as I’m not a fan of the Resolve database format for projects or of parking your projects in the cloud. However, the storage products don’t require the Blackmagic Cloud, and that opens up a number of possibilities.

I’ve worked with shared storage (SAN and NAS) products going back to Avid’s first Fibre Channel MediaShare. Blackmagic’s Cloud Store and Mini are light years ahead of those early units. Yet the multi-user workflows developed back then are still in play today. This is especially true if you are a facility owner with a handful of workstations all connected over a network to the same storage pool. It’s also true if you are an editorial team working together on a feature film. Both are workgroups that can easily be serviced by one of the Cloud Store or Cloud Store Mini storage products.

Building the workgroup

I’m a fan of simplicity. Today’s edit suites don’t need all the gadgets that they’ve had in the past. Less is often more. Let’s envision four workstations running either Final Cut Pro or Premiere Pro. Resolve may be part of the mix, but not for collaborative editing. I’ll leave that discussion for another day. Mac-based, of course, but that’s personal preference, since the storage also works with Windows. With those parameters, what would you buy?

For a fixed facility, the Mac Studio is a no-brainer. The Mac Pro makes no sense to me for most editorial work, nor for most grading or audio mixing. A loaded M1 Max model with a 1TB or 2TB internal drive is plenty. This unit already includes a built-in 10G Ethernet port. If the budget can’t handle four of those, then a loaded M1 Mac mini is another option. Just make sure to add the 10G upgrade.

What if the team needs to be more portable, such as for on-site editing? In that case, go for one of the MacBook Pros instead. Since these do not include Ethernet ports, you’ll need to add a Sonnet Solo 10G (Thunderbolt 3 to 10G Ethernet) converter for each computer in order to get 10G speeds.

External displays are required for the two desktop Mac models, but optional with the laptops. Equip each unit with one or two of Apple’s 27″ Studio Displays. An alternate option would be the Apple Store’s LG UltraFine 4K display. Don’t forget audio. Unless everyone is working with headphones, I would recommend a simple audio interface (PreSonus, Focusrite, Universal Audio, etc) and desktop speakers (M-Audio, KRK, Adam, etc).

This team is using 10G Ethernet. If you opt for the Cloud Store Mini 8TB drive, then you’ll need to integrate a standard 10G Ethernet switch to connect the four workstations with the Mini over a 10G network. There are a range of options, but at a minimum, five or more of the switch’s ports must support 10G. There are many small combo switches on the market with both 1G and 10G ports. It’s easy to misread the specs, think you have enough ports, and then find out that most are only 1G.
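If you’re sizing that network, a quick back-of-the-envelope calculation helps. This sketch uses my own rough, assumed bitrates (check your codec’s published data rates, e.g. Apple’s ProRes white paper, before committing to a design):

```python
# Back-of-the-envelope sketch: how many simultaneous playback streams
# can one 10GbE link carry? Bitrates below are rough assumptions.

UHD_PRORES_HQ_MBPS = 750     # assumed approximate rate for ProRes 422 HQ UHD
UHD_PRORES_PROXY_MBPS = 145  # assumed approximate rate for ProRes Proxy UHD

def streams_supported(link_gbps: float, stream_mbps: float,
                      efficiency: float = 0.8) -> int:
    """Streams that fit on the link, leaving 20% headroom for protocol overhead."""
    usable_mbps = link_gbps * 1000 * efficiency
    return int(usable_mbps // stream_mbps)

print(streams_supported(10, UHD_PRORES_HQ_MBPS))     # native HQ streams
print(streams_supported(10, UHD_PRORES_PROXY_MBPS))  # proxy streams
```

Even with conservative overhead assumptions, one 10G link comfortably feeds four editors, which is why a single 10G port per workstation is all this workgroup needs.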

In this hypothetical scenario, if you only have four workstations and are using the larger Cloud Store, then a switch isn’t needed, since the Cloud Store already has a 4-port 10G switch built in. And finally, you’ll need enough Cat 6 Ethernet cable for the cable runs between rooms. If any run exceeds 100 feet or passes alongside a lot of electrical wiring, then you may want to consider Cat 7 cable.

Storage strategies

Now that we’ve built out the body of the facility or workgroup, what about its heart? Blackmagic Design offers the Cloud Store with 20TB, 80TB, or 320TB capacities, and the Cloud Store Mini with 8TB.

The type of work being done will determine which one is the best fit. For example, if you work with high-resolution media and commonly edit with native camera files, then you’re going to need as big of a unit as you can afford. On the other hand, if most of the media is smaller or you generally edit with proxy files, then the Mini with 8TB might be plenty. Remember that you can always add a large Thunderbolt array (Promise, OWC, etc) with spinning drives at a very reasonable cost as a local drive on one of the stations.

A feature film might have a large Thunderbolt storage tower connected to one of the workstations. It would hold all of the original camera media. Those files would be transcoded to proxy media located on the Mini 8TB. All four workstations would be able to edit with the common proxy files, because the Mini is shared storage. When the cut is locked and it’s time to finish, the unit with the Thunderbolt tower would relink to the original camera media for grading and output.
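To gauge whether 8TB of proxy storage really is “plenty,” a quick sizing sketch helps. The bitrates here are my rough assumptions; plug in your codec’s real figures:

```python
# Sizing sketch: how many hours of media fit on the 8TB Mini?
# Bitrates are approximate assumptions, not published specs.

def hours_on_disk(capacity_tb: float, bitrate_mbps: float) -> float:
    """Hours of footage at a given bitrate on a drive of the given (decimal) TB capacity."""
    capacity_bits = capacity_tb * 1e12 * 8         # TB -> bits
    seconds = capacity_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

print(round(hours_on_disk(8, 45)))    # 1080p proxies at ~45 Mb/s
print(round(hours_on_disk(8, 750)))   # UHD ProRes HQ at ~750 Mb/s
```

At proxy bitrates, 8TB holds hundreds of hours, which is why the feature film scenario above works: the heavy camera originals live on the Thunderbolt tower, while the shared Mini only has to hold the lightweight proxies.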

There are many variations of these scenarios. These are just a few of the ways that Blackmagic’s cloud storage products can be used to build powerful workgroups with a very light footprint. However, a key part of this innovation is the ease of putting such a system together.

©2022 Oliver Peters

A look at the Blackmagic Cloud Store Mini

Blackmagic Design is striving to democratize shared storage and edit collaboration with the introduction of the Blackmagic Cloud and the Blackmagic cloud storage product line. Let’s focus on storage first, which in spite of the name is very much earthbound.

Blackmagic Design’s cloud storage product line-up

Blackmagic Cloud Store (starting at $9,595 USD) sits at the top end. This is a desktop network-attached storage system engineered into the same chassis design that was developed for Blackmagic’s eGPUs. It features two redundant power supplies and an M.2 NVMe SSD drive array, which is configured as RAID-5 for data protection in case of drive failure. Cloud Store integrates an internal 10G Ethernet switch for up to four users connected at 10Gbps speeds. It also supports port aggregation for a combined speed of 40Gbps.

Cloud Store will ship soon and be available with 20TB, 80TB, or 320TB capacities. If you are familiar with RAID-5 systems, you know that some of that stated capacity is inaccessible due to the data parity required. Blackmagic Design has factored that in up front, because according to them, the size in the name, like 20TB, correctly reflects the usable amount of storage space.
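For anyone new to RAID-5, the math is simple: roughly one drive’s worth of capacity in the array goes to parity. A minimal sketch (the drive counts and sizes below are illustrative, not Blackmagic’s actual internal configuration):

```python
# RAID-5 usable capacity: one drive's worth of space is consumed by parity,
# which is why raw capacity and advertised capacity can differ.
# Drive counts/sizes here are hypothetical examples.

def raid5_usable(drive_count: int, drive_tb: float) -> float:
    """Usable capacity of a RAID-5 array of identical drives."""
    assert drive_count >= 3, "RAID-5 needs at least three drives"
    return (drive_count - 1) * drive_tb

# e.g. ten 2TB modules present 18TB usable, not 20TB raw
print(raid5_usable(10, 2))
```

Blackmagic’s naming sidesteps this gotcha by quoting the post-parity number, so a “20TB” unit actually gives you 20TB to fill.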

Cloud Store Mini ($2,995 USD) is an 8TB unit using Blackmagic’s half rack width form factor. There are four internal M.2 flash cards configured as RAID-0. It sports three different Ethernet ports: 10Gbps, 1Gbps, and USB-C, which uses a built-in Ethernet adaptor. Lastly, the Cloud Pod ($395 USD) is a small 10G Ethernet unit designed for customers who supply their own USB-C storage.

All three models are designed for fast 10G Ethernet connectivity and are compatible with both Windows and macOS. Although there are many SAN and NAS products on the market, Blackmagic Design is targeting the customer who wants a high-performance shared storage solution that’s plug-and-play. These storage products are not there to usurp solutions like Avid Nexis. Instead, Blackmagic Design is appealing to customers without that sort of “heavy iron” infrastructure.

Cloud Store Mini as a storage device

The Blackmagic Cloud Store Mini is shipping, so I was able to test drive it for a couple of weeks. I connected my 2020 iMac (which includes the 10G option) via the 10G Ethernet port using a recommended Cat 6 cable. I also connected my M1 MacBook Pro to Ethernet via the USB-C port. This gave me a small “workgroup” of two workstations connected to shared storage.

Continue reading the full article at Pro Video Coalition – click here.

©2022 Oliver Peters

Working with ACES in DaVinci Resolve

In the film days, a cinematographer had a good handle on what the final printed image would look like. The film stocks, development methods, and printing processes were regimented with specific guidelines and limited variations. In color television production, up through the early adoption of HD, video cameras likewise adhered to the standards of Rec. 601 (SD) and Rec. 709 (HD). The advent of the video colorist allowed for more creative looks derived in post. Nevertheless, video directors of photography could also rely on knowing that the image they were creating would translate faithfully throughout post-production.

As video moved deeper into “cinematic” images, raw recording and log encoding became the norm. Many cinematographers felt their control of the image slipping away, thanks to the preponderance of color science approaches and LUTs (color look-up tables) generated from a variety of sources and applied in post. As a result, the Academy Color Encoding System (ACES) was developed as a global standard for managing color workflows. It’s an open color standard and method of best practices created by filmmakers and color scientists under the auspices of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences (AMPAS, aka “The Academy”). To dive into the nuances of ACES – complete with user guides – check out the information at ACEScentral.com.

The basics of how ACES works

Traditionally, Rec. 709 is the color space and gamma encoding standard that dictates your input, timeline, and exports for most television projects. Raw and log recordings are converted into Rec. 709 through color correction or LUTs. The color gamut is then limited to the Rec. 709 color space. Therefore, if you later try to convert a Rec. 709 ProRes HQ 4:2:2 master file into full RGB, Rec. 2020, HDR, etc., then you are starting from an already-restricted range of color data. The bottom line is that this color space has been defined by the display technology – the television set.

ACES is its own color space designed to be independent of the display hardware. It features an ultra-wide color gamut that encompasses everything the human eye can see. It is larger than Rec. 709, Rec. 2020, P3, sRGB, and others. When you work in an ACES pipeline, ACES is an intermediate color space not intended for direct viewing. In other words, ACES is not dictated by current display technology. Files being brought into ACES and being exported for delivery from ACES pass through input and output device transforms. These are mathematical color space conversions.

For example, say you film with an ARRI Alexa, record LogC, and grade in a Rec. 709 pipeline. A LogC-to-Rec. 709 LUT will be applied to the clip to convert it to the Rec. 709 color space of the project. The ACES process is similar. When working in an ACES pipeline, instead of applying a LUT, I would apply an Input Device Transform (IDT) specific to the Alexa camera. This is equivalent to a camera profile for each camera manufacturer’s specific color science.

ACES requires one extra step, which is to define the target device on which this image will be displayed. If your output is intended to be viewed on television screens with a standard dynamic range, then an Output Device Transform (ODT) for Rec. 709 would be applied as the project’s color output setting. In short, the camera file is converted by the IDT into the ACES working color space, but is viewed on your calibrated display based on the ODT used. Under the hood, ACES preserves all of the color data available from the original image. In addition to IDTs and ODTs, ACES also provides for Look Modification Transforms (LMT). These are custom “look” files akin to various creative LUTs built for traditional Rec. 709 workflows.

ACES holds a lot of promise, but it is still a work-in-progress. If your daily post assignments don’t include major network or studio deliverables, then you might wonder what benefit ACES has for you. In that case, yes, continuing to stick with a Rec. 709 color pipeline will likely be fine for a while. But companies like Netflix are behind the ACES initiative and other media outlets are bound to follow. You may well find yourself grading a project that requires ACES deliverables at some point in the future.

There is no downside to adopting an ACES pipeline now for all of your Resolve Rec. 709 projects. Working in ACES does not mean you can instantly go from a grade using a Rec. 709 ODT to one with a Rec. 2020 ODT without an extra trim pass. However, ACES claims to make that trim pass easier than other methods.

The DaVinci Resolve ACES color pipeline

Resolve has earned a position of stature within the industry. With its low price point, it also offers the most complete ACES implementation available to any editor and/or colorist. Compared with Media Composer, Premiere Pro, or Final Cut Pro X, I would only trust Resolve for an accurate ACES workflow at this point in time. However, you can start your edit in Resolve as Rec. 709 – or roundtrip from another editor into Resolve – and then switch the settings to ACES for the grade and delivery. Or you can start with ACES color management from the beginning. If you start a Resolve project using a Rec. 709 workflow for editing and then switch to ACES for the grade, be sure to remove any LUTs applied to clips and reset grading nodes. Those adjustments will all change once you shift the settings into ACES color management.

To start with an ACES workflow, select the Color Management tab in the Master Settings (lower right gear icon). Change Color Science to ACEScct and ACES version 1.1. (The difference between ACEScc and ACEScct is that the latter has a slight roll-off at the bottom, thus allowing a bit more shadow detail.) Set the rest as follows: ACES Input Device Transform to No Input Transform. ACES Output Device Transform to Rec. 709 (when working with a calibrated grading display). Process Node LUTs in ACEScc AP1 Timeline Space. Finally, if this is for broadcast, enable Broadcast Safe and set the level restrictions based on the specs that you’ve been supplied by the media outlet.
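That ACEScc/ACEScct difference is easy to see in code. Here is a minimal sketch of the ACEScct encoding curve, with the break-point constants taken from the Academy’s S-2016-001 specification; below the break, ACEScct replaces the pure log segment with a linear toe, which is what preserves the extra shadow detail:

```python
import math

# ACEScct encoding curve (constants from AMPAS spec S-2016-001).
# Above the break point it matches ACEScc's log segment;
# below it, a linear "toe" retains more shadow detail.

X_BRK = 0.0078125
A = 10.5402377416545
B = 0.0729055341958355

def lin_to_acescct(x: float) -> float:
    if x <= X_BRK:
        return A * x + B                      # linear toe (ACEScct only)
    return (math.log2(x) + 9.72) / 17.52      # log segment shared with ACEScc

def acescct_to_lin(y: float) -> float:
    y_brk = A * X_BRK + B
    if y <= y_brk:
        return (y - B) / A
    return 2 ** (y * 17.52 - 9.72)

# 18% grey lands at roughly 0.41 on the encoded scale
print(round(lin_to_acescct(0.18), 4))
```

The encode and decode are exact inverses, which is the point of a scene-referred working space: the curve only reshapes values for grading tools, it never throws data away.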

With these settings, the next step is to select the IDT for each camera type in the Media page. Sort the list to change all cameras of a certain model at once. Some media clips will automatically apply an IDT based on metadata embedded into the clip by the camera. I found this to be the case with the raw formats I tested, such as RED and BRAW. While an IDT may appear to be doing the same thing as a technical LUT, the math is inherently different. As a result, you’ll get a slightly different starting look with Rec. 709 and a LUT, versus ACES and an IDT.

Nearly all LUTs are built for the Rec. 709 color space and should not be used in an ACES workflow. Yes, you can apply color space transforms within your node structure, but the results are highly unpredictable and should be avoided. Technical camera LUTs in Resolve were engineered by Blackmagic Design based on a camera manufacturer’s specs. They are not actually supplied as a plug-in by the manufacturer to Blackmagic. The same is true for Apple, Avid, and Adobe, which means that in all cases a bit of secret sauce may have been employed. Apple’s S-Log conversion may not match Avid’s, for instance. ACES IDTs and ODTs within Resolve are also developed by Blackmagic, but based on ACES open standards. In theory, the results of an IDT in Resolve should match that same IDT used by another software developer.

Working with ACES on the Color page

After you’ve set up color management and the transforms for your media clips, you’ll have no further interaction with ACES during editing. Likewise, when you move to the Color page, your grading workflow will change very little. Of course, if you are accustomed to applying LUTs in a Rec. 709 workflow, that step will no longer be necessary. You might find a reason to change the IDT for a clip, but typically it should be whatever is the correct camera profile for the associated clip. Under the hood, the timeline is actually working in a log color space (ACEScc AP1); therefore, I would suggest grading with Log rather than Primary color wheels. The results will be more predictable. Otherwise, grade any way you like to get the look that you are after.

Currently Resolve offers few custom look presets specific to the ACES workflow. There are three LMTs found under the LUTs option / CLF (common LUT format) tab (right-click any node): LMT Day for Night, LMT Kodak 2383 Print Emulation, and LMT Neon Suppression. I’m not a fan of either of the first two looks. Quite frankly, I feel Resolve film stock emulations are awful and certainly nowhere near as pleasing as those available through Koji Advance or FilmConvert Nitrate. But the third is essential. The ACES color space has one current issue, which is that extremely saturated colors with a high brightness level, like neon lights, can induce image artifacts. The Neon Suppression LMT can be applied to tone down extreme colors in some clips. For example, a shot with a highly saturated red item will benefit from this LMT, so that the red looks normal.

If you have used LUTs and filters for certain creative looks, like film stock emulation or the orange-and-teal look, then use PowerGrades instead. Unlike LUTs, which are intended for Rec. 709 and are typically a “black box,” a PowerGrade is simply a string of nodes. Every time you grab a still in the Color page, you have stored that series of correction nodes as a PowerGrade. A few enterprising colorists have developed their own packs of custom Resolve PowerGrades available for free or sale on the internet.

The advantages are twofold. First, a PowerGrade can be applied to your clip without any transform or conversion to make it work. Second, because these are a series of nodes, you can tweak or disable nodes to your liking. As a practical matter, because PowerGrades were developed with a base image, you should insert a node in front of the added PowerGrade nodes. This lets you balance your image to suit the PowerGrade’s settings and gives it an optimal starting point.

Deliverables

The project’s ODT is still set to Rec. 709, so nothing changes in the Resolve Deliver page. If you need to export a ProRes HQ master, simply set the export parameters as you normally would. As an extra step of caution, set the Data Levels (Advanced Settings) to Video and the Color Space and Gamma Tags to Rec. 709, Gamma 2.4. The result should be a proper video file with correct broadcast levels. So far so good.

One of the main reasons for an ACES workflow is future-proofing, which is why you’ve been working in this extended color space. No common video file format preserves this data. Furthermore, formats like DNxHR and ProRes are governed by companies and aren’t guaranteed to remain future-proof.

An ACES archival master file needs to be exported in the Open EXR file format, which is an image sequence of EXR files. This will be a separate deliverable from your broadcast master file. First, change the ACES Output Device Transform (Color Management setting) to No Output Device and disable Broadcast Safe limiting. At this point all of your video clips will look terrible, because you are seeing the image in the ACES log color space. That’s fine. On the Deliver page, change the format to EXR, RGB float (no compression), set Data Levels to Auto, and set Color Space and Gamma Tags to Same As Project. Then export.

In order to test the transparency of this process, I reset my settings to an ODT of Rec. 709 and imported the EXR image sequence – my ACES master file. After import, the clip was set to No Input Transform. I placed it back-to-back on the timeline against the original. The two clips were a perfect match: EXR without added grading and the original with correction nodes. The one downside of such an Open EXR ACES master is a huge size increase. My 4K ProRes 4444 test clip ballooned from an original size of 3.19GB to 43.21GB in the EXR format.

Conclusion

Working with ACES inside of DaVinci Resolve involves some different terminology, but the workflow isn’t too different once you get the hang of it. In some cases, camera matching and grading is easier than before, especially when multiple camera formats are involved. ACES is still evolving, but as an open standard supported globally by many companies and noted cinematographers, the direction can only be positive. Any serious colorist working with Resolve should spend a bit of time learning and getting comfortable with ACES. When the time comes that you are called upon to deliver an ACES project, the workflow will be second nature.

UPDATE 2/23/21

Since I wrote this post, I’ve completed several grading jobs using the ACES workflow in DaVinci Resolve and have encountered a number of issues, primarily banding and artifacts with certain colors.

In a recent B-roll shoot, the crew was recording in a casino set-up with an ARRI Alexa Mini in Log-C. The set involved a lot of extreme lights and colors. The standard Resolve ACES workflow would be to set the IDT to Alexa, which then automatically corrects the Log-C image back to the default working color space. In addition, it’s also recommended to apply neon suppression in order to tone down the bright colors, like vibrant reds.

I soon discovered that the color of certain LED lights in the set became wildly distorted (see image). The purple trim lighting on the frames of signs or the edges of slot machines became very garish and artificial. When I set the IDT to Rec 709 instead of Alexa and graded the shot manually without any IDT or LUT, then I was able to get back to a proper look. It’s worth noting that I tested these same shots in Final Cut Pro using the Color Finale 2 Pro grading plug-in, which also incorporates ACES and log corrections. No problems there.

After scrutinizing a number of other shots within this batch of B-roll footage, I noticed quite a bit more banding in mid-range portions of these Alexa shots. For example, the slight lighting variations on a neutral wall in the background displayed banding, as if it were an 8-bit shot. In general, natural gradients within an image didn’t look as smooth as they should have. This is something I don’t normally see in a Rec 709 workflow with Log-C Alexa footage.

Overall, after this experience, I am now less enthusiastic about using ACES in Resolve than I was when I started out. I’m not sure if the issue is with Blackmagic Design’s implementation of these camera IDTs or if it’s an inherent problem with ACES. I’m not yet willing to completely drop ACES as a possible workflow, but for now, I have to advise that you proceed with caution if you intend to use ACES.

Originally written for Pro Video Coalition.

©2020, 2021 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project while director/photographer Griffith captured the entire story. The result is the 3D VR documentary, Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background includes camera, editing, and visual effects work, with a lot of experience in traditional visual effects, 2D to 3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have had a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. In addition to these tools I was using Fusion Studio in my workflow, so it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, the tools are all there that I was used to in what I would argue is a cleaner interface. Sometimes, software just collects all of these features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack a left and right eye image on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full resolution sources later. I just had to unlink the timeline and then reconnect it to another bin with my 8K media.

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist – a man of many talents – utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a proxy 2K x 2K stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run the footage through Z CAM’s Wonderstitch application, which is its auto-stitching software. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch it in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations. For instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done, and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in third-order ambisonics, which means you get spatial awareness both around you and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed point, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ axis orientation, we would add a Panomap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.
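For yaw alone, re-centering an equirectangular frame is conceptually just a horizontal pixel shift, since the x axis of the image maps linearly to the 360 degrees around you; pitch and roll require a full spherical remap, which is what a node like Panomap handles. A toy sketch of the yaw case on a single scanline:

```python
# Yaw re-orientation of an equirectangular image: the x axis maps
# linearly to 0-360 degrees, so a yaw change is a horizontal wrap-around
# shift of every pixel row. (Pitch/roll need a real spherical remap.)

def rotate_yaw_row(row: list, degrees: float) -> list:
    """Rotate one pixel row of an equirectangular image by the given yaw."""
    width = len(row)
    shift = round(degrees / 360.0 * width) % width
    return row[-shift:] + row[:-shift]

row = list(range(16))            # a 16-pixel-wide toy scanline
print(rotate_yaw_row(row, 90))   # a quarter turn shifts pixels by 4 columns
```

Because the operation is lossless and wraps around, the editor can keep changing what counts as “center” from cut to cut without degrading the image.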

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to dig in deeper than a normal editing application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them. But only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high standard with regard to the stitch and the rig removals. As a visual effects guy, there's always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn't hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.
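To get a feel for how much of the frame that un-captured nadir wedge represents: latitude maps linearly to rows in an equirectangular image, so the missing 40 degrees out of the 180-degree vertical span is a fixed fraction of the height. A rough sketch (the 2048-row frame height is a hypothetical example, not a figure from the production):

```python
def missing_nadir_rows(frame_height: int, missing_deg: float = 40.0) -> int:
    """Rows at the bottom of an equirectangular frame covering the
    un-captured nadir wedge. Latitude maps linearly to rows, so the
    missing fraction of the height is missing_deg / 180."""
    return round(frame_height * missing_deg / 180.0)

# For a hypothetical 4096 x 2048 equirectangular frame:
print(missing_nadir_rows(2048))  # 455 rows to fill with matte painting
```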

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters

Video Technology 2020 – Editing Software

Four editing applications dominate the professional market: Adobe Premiere Pro, Apple Final Cut Pro X, Avid Media Composer, and Blackmagic Design DaVinci Resolve. Established facilities are still heavy Avid users, with Adobe being the up-and-coming choice. This doesn’t mean that Final Cut Pro X lost out. Going into 2020, Apple can tout FCPX as the best-selling version of its professional editing tool. It most likely has three million users after nearly nine years on the market. While pro editors in the US are often reluctant to adopt FCPX, this innovative application has earned wider acceptance in the broader international market.

The three “A”s have been battling for editing market share, but the wild card is Blackmagic Design’s DaVinci Resolve. It started as a high-end color correction application, but through Blackmagic’s acquisitions and fast development pace, Resolve is becoming an all-in-one application rivaling Autodesk Smoke or Avid DS. Recent versions bring enhanced creative editing tools, making it possible to edit, mix, composite, grade, and deliver entirely from Resolve. No need to roundtrip with other applications. Blackmagic is so dedicated to Resolve as an editor that they introduced a special editor keyboard.

Is Resolve attractive enough to sway editors to shift away from other tools? The answer for most in 2020 will still be “no.” Experienced editors have made their choice and all of the current options are quite good. However, Resolve does make the most sense for new users with no prior allegiances. The caveat is advanced finishing. Users may edit in an editing application, but then roundtrip to Resolve and back for grading. Unfortunately these roundtrips can be problematic. So I do think that many will opt to cut creatively in their NLE of choice, but then send to Resolve for the final grade, mix, and VFX work. Expect to see Resolve’s finishing footprint expand in 2020.

Two challenges confront these companies in 2020: multi-user collaboration and high dynamic range (HDR) delivery. Collaboration is an Avid strength, but not so for the other three. Blackmagic and Adobe have an approach to project sharing, but still not what Avid users have come to expect. Apple offers nothing directly, but there are some third-party workarounds. Expect 2020 to yield collaboration improvements for Final Cut Pro X and Premiere Pro.

HDR is a more complex situation requiring specialized hardware for proper monitoring. There simply is no way to accurately view HDR on any computer display. All of these companies are developing software pipelines to deal with HDR, but in 2020, HDR delivery will still require specific hardware that will remain the domain of dedicated color correction facilities.

Finally, as with cameras, AI will become an increasing aspect of post-production software. You already see that in Apple’s shape recognition within FCPX (automatic sorting of wides and close-ups) or Adobe Sensei for content replacement and automatic music editing. Expect to see more of these features introduced in coming software versions.

Originally written for Creative Planet Network.

©2020 Oliver Peters