It’s great… Until it isn’t

No, that picture at the top isn’t a map of Covid-19 hotspots. It’s the map of US users affected by this week’s Adobe sign-in outage. This past Wednesday (May 27th), users across all of Adobe’s various cloud products were unable to sign into their accounts – locking them out of using any installed Adobe products. But not every user – only those who needed to sign in to enable their applications.

Adobe products, like Creative Cloud, are paid on a monthly or prorated annual basis. You sign in once to activate the account on that device and you are good to go until renewal time, as long as you stay signed into the cloud license manager application. In theory, you don’t need to be continually connected to the internet for the applications to function. However, once a month Adobe’s servers are pinged and you may be prompted to sign in again. So if Wednesday was your machine’s day to “phone home,” if you hadn’t used the apps in a while, or if you were signing in after having previously signed out, then your Adobe cloud manager application was unable to connect to the server and activate (or reactivate) your software product(s). Just like that, a day of billable time flushed away.

Before you grab the torches and pitchforks, it may be useful to revisit the pros and cons of the various software licensing models.

Subscription

Quite a few companies have adopted the subscription – aka software as a service or SaaS – model. The argument is that you never actually own any software, regardless of the company or the application (read any EULA). Rather, you pay for the right to use it over a specified period of time – monthly, annually, or perpetually. Adobe decided to go all-in on subscription plans, arguing that the upfront costs were cheaper for the user, the plans offered a better ROI with access to many more Adobe products, and that this provided a predictable revenue stream to fuel more frequent product updates.

Generally, these points have been realized and the system works rather well most of the time. Yes, you can argue that over time you pay more for your CC subscription than in the old CS days (assuming you skipped a few CS updates). But if you are an active facility, production company, or independent contractor, then it’s a small monthly business expense, just like your internet or electric bill, which is easily absorbed against the work you are bringing in. The software cost has shifted from cap-ex to op-ex.

That is all true, unless you have no revenue coming in or are merely working with media as a hobby. In addition, once you stop subscribing, all past project files – whether that’s Premiere Pro, Lightroom, Photoshop, InDesign, etc. – can no longer be opened until you renew.

Unfortunately, when you hit a day like Wednesday, all rational arguments go out the window.

The App Store model

If you are a Final Cut Pro X user, then Wednesday might have stirred the urge to say, “I told you so.” I get that. The App Store method of purchasing/installing/updating software works well. You only have to sign in for new purchases, new Mac installations, and occasionally for updates.

However, don’t be smug. Certain applications that Apple sees as a service, like News, occasionally prompt you to sign in with your Apple ID again before you can use that software. This is true even if you haven’t subscribed to any paid magazines or newspapers. In a scenario such as Adobe’s Wednesday outage, I can imagine that you would be just as locked out. Not necessarily from your creative apps, like FCPX, but rather from certain software/services, such as News, iTunes, Music, etc. As someone who uses my iCloud e-mail account quite a lot through web browsers, I can’t count the number of times access has been hampered.

License managers

Similar to the App Store or Adobe Creative Cloud, some companies use license manager software that’s installed onto your machine. This is a common method for plug-in developers, such as FxFactory and Waves. It’s a way of centralizing the installation and authorization of that software.

FxFactory follows the App Store approach and includes purchasing and update features. Others, like Waves and Avid Link, are designed to activate and/or move licenses between machines, based on the company’s stored, online database of serial numbers. You typically do not need to stay online or be signed in unless making changes or updates, or your system has changed in some way, like a new OS installation. These work well, but aren’t bulletproof – as many Media Composer editors can attest.

Activation codes

One of the oldest methods is simply to provide the user with a serial number/activation code. Install the software and activate the license with the supplied code number. If you need to move the software to a different machine, then you will typically have to deactivate that application on the first machine and activate it again on the second machine. You only have to be connected to the company’s servers during these activation times. Some companies also offer offline methods for activation in the event you don’t have internet access.

Seems simple, right? Well, not necessarily. First of all, if you have a lot of software that uses this method, that’s a lot of serial numbers you will have to keep track of. Second, some companies only give you a limited number of times you can deactivate and reactivate the software. If that’s the case, you can’t really move the license back and forth between two machines every other week. If your motherboard dies with the software activated, you are likely going to have to jump through hoops to get the company to deactivate the number on their server in order to be able to activate it again on the repaired machine. That’s because the new motherboard IDs it as a completely different machine. Finally, some of these companies also require you to occasionally sign in or reactivate the software in order to continue using it.

Hardware license key

Ah, the “dongle.” When Avid switched to software licensing, many Media Composer editors approached it with the attitude of “from my cold, dead hands.” And so, Avid still maintains hardware licensing for many Avid systems. Likewise, Blackmagic Design has also shipped dongles for DaVinci Resolve and Fusion. iLok devices, common among Pro Tools users, are another variant of this. Dongles, which are actually USB hardware keys, make it simple to move authorization between machines. That’s useful if you maintain a fleet of rental systems. No internet required. Just a USB port.

Unfortunately, dongles are subject to forgetfulness, loss, breakage, theft, and even counterfeiting. A friend reminded me that when Avid Symphony first came out and cost $100K for a system, dongle theft was a very real issue. That’s likely less the case now, because software is so cheap by comparison. I do know of film schools that ran their Media Composer dongles on USB extension cables strung into the inside of their Mac Pros, then locked the case as a viable form of theft prevention.

Free

The Holy Grail – right? Or so many users believe. It’s the model Blackmagic uses for the standard versions of Resolve and Fusion. Many plug-in developers use the free model on a few plug-ins just to get you interested in their other paid products. Of course, the ease of making Motion Templates for Final Cut Pro X has created a homegrown hobbyist/developer market of free or extremely cheap effects and graphics templates.

Even though some commercial software is free, you are only granted the right to use it, not ownership – often in exchange for user data, so that you can be marketed to in the future. As a business plan for a commercial developer, this model is only sustainable because of other revenue, like hardware sales. And in practice, even the Mac App Store model, with its “buy once” policy, is close to free when you own and personally control a number of Macs.

There are pros and cons to all of these models. They all work well until they don’t. When there’s a hiccup, roll with the punches, or contact support if it’s appropriate. With some luck, there will be a speedy resolution and you’ll be back up and running in no time.

©2020 Oliver Peters

SOUND FORGE Pro Revisited

I’ve reviewed SOUND FORGE a number of times over the years, most recently in 2017. Since its initial development, it has migrated from Sonic Foundry to Sony Creative Software and most recently Magix, a German software developer. Magix’s other products are PC-centric, but SOUND FORGE comes in both Mac and Windows versions.

The updated 3.0 version of SOUND FORGE Pro for the Mac was released in 2017. Although no 4.0 version has been released in the interim, 3.0 was developed as a 64-bit app. Current downloads are, of course, an updated build. Across the product line, there are several versions and bundles, including “lite” SOUND FORGE versions. However, Mac users can only choose between SOUND FORGE Pro Mac 3 and Audio Master Suite Mac. Both include SOUND FORGE Pro Mac, iZotope RX Elements, and iZotope Ozone Elements. The Audio Master Suite Mac adds the Steinberg SpectraLayers Pro 4 analysis/repair application. It’s not listed, but the download also includes the Convrt application, which is an MP3 batch conversion utility.

SOUND FORGE Pro is designed as a dedicated audio mastering application that also handles precision audio editing. You can record, edit, and process multichannel audio files (up to 32 channels) at bit depths of 24-bit, 32-bit, and 64-bit float and sample rates up to 192kHz. In addition to the iZotope Elements packages, SOUND FORGE Pro comes with a variety of its own AU plug-ins. Any other AU and VST plug-ins already installed on your system will also show up and work within the application.

Even though SOUND FORGE Pro is essentially a single file editor (as compared with a multi-track DAW, like Pro Tools), you can work with multiple individual files. Multiple files are displayed within the interface as horizontal tabs or in a vertical stack. You can process multiple files at the same time and can copy and paste between them. You can also copy and paste between individual channels within a single multichannel file.

As an audio editor, it’s fast, tactile, and non-destructive, making it ideal for music editing, podcasts, radio interviews, and more. For audio producers, it supports Red Book-standard CD authoring. The attraction for video editors is its mastering tools, especially loudness control for broadcast compliance. Both Magix’s Wave Hammer and iZotope Ozone Elements’ mastering tools are great for solving loudness issues. That’s aided by accurate LUFS metering. Other cool tools include AutoTrim, which automatically removes gaps of silence at the beginnings and ends of files or from regions within a file.
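
A quick way to sanity-check loudness compliance outside of a mastering app is to measure integrated loudness in a few lines of Python. The sketch below is only an illustration; it assumes the third-party pyloudnorm and soundfile packages and a placeholder file name, and the target value depends on your delivery spec (for example, -24 LUFS for ATSC A/85 or -23 LUFS for EBU R128).

```python
# Minimal sketch: measure integrated loudness (LUFS) of a mixed file.
# Assumes the pyloudnorm and soundfile packages; "master_mix.wav" is a placeholder.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master_mix.wav")      # float samples, any channel count
meter = pyln.Meter(rate)                    # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)

target = -24.0                              # e.g., ATSC A/85; use -23.0 for EBU R128
print(f"Integrated loudness: {loudness:.1f} LUFS (target {target} LUFS)")
print(f"Offset needed: {target - loudness:+.1f} dB")
```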

There is also élastique Timestretch, a processing tool to slow down or speed up audio while maintaining the correct pitch. Timestretch can be applied to an entire file or simply a section within a file. Effects tools and plug-ins are divided into those that require processing and those that play back in real time. For example, Timestretch is applied as a processing step, whereas a reverb filter would play in real time. Processing is typically fast on any modern desktop or laptop computer, thanks to the application’s 64-bit engine.

Basic editing is as simple as marking a section and hitting the delete key. You can also split a file into events and then trim, delete, move, or copy & paste event blocks. If you slide an event to overlap another, a crossfade is automatically created. You can adjust the fade-in/fade-out slopes of these crossfades.
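
To illustrate what an automatic crossfade is actually doing under the hood, here is a minimal NumPy sketch of a linear (equal-gain) overlap blend. Treat it as a conceptual stand-in only; the fade slopes in SOUND FORGE are adjustable and not necessarily linear.

```python
# Minimal sketch: linear crossfade across the overlap of two audio events.
# Equal-gain (linear) slopes are assumed; real editors offer several fade curves.
import numpy as np

def crossfade(tail_a: np.ndarray, head_b: np.ndarray) -> np.ndarray:
    n = min(len(tail_a), len(head_b))       # overlap length in samples
    fade_out = np.linspace(1.0, 0.0, n)     # event A ramps down
    fade_in = np.linspace(0.0, 1.0, n)      # event B ramps up
    return tail_a[:n] * fade_out + head_b[:n] * fade_in

# Example: blend a half-second overlap of two test tones at 48 kHz
sr = 48000
t = np.arange(sr // 2) / sr
blended = crossfade(np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 220 * t))
```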

Even if you already have Logic Pro X, Audition, or Pro Tools installed, SOUND FORGE Pro Mac may still be worth the investment for its simplicity and mastering focus.

©2020 Oliver Peters

Color Finale 2.1 Update


Color grading roundtrips are messy and prone to errors. Most editors want high-quality solutions that keep them within their favorite editing application. Color Trix launched the revamped Color Finale 2 this past December with the goal of building Final Cut Pro X into a competitive, professional grading environment. In keeping with that goal, Color Trix just released Color Finale 2.1 – the first major update since the December launch. Color Finale 2.1 is a free upgrade for Color Finale 2 owners and adds several new features, including inside/outside mask grading, an image mask, a new smoothness function, and the ability to copy and paste masks between layers.

Grading with inside/outside masks

Color Finale 2 launched with trackable, spline masks that could be added to any group or layer. In version 2.0, however, grading occurred either inside or outside of the mask, but not both. The new version 2.1 feature allows a mask to be applied to a group, which then becomes the parent mask. Grading is then done within that mask. If you also want to grade the area outside of that mask, simply apply a new group inside the first group. Then add a new mask that is an invert of the parent mask. Now you can add new layers to grade the area outside of the same mask.

In the example image, I first applied a mask around the model at the beach and color corrected her. Then I applied a new group with an inverted mask to adjust for the sky. In that group I could add additional masking, such as an edge mask to create a gradient. The parent mask around the model ensures that the sky gradient is applied behind her rather than in the foreground. Once you get used to this grouping strategy with inside and outside masks, you can achieve some very complex results.

Image masks

The second major addition is that of image masks. This is a monochrome version of the image in which the dark-to-light contrast range acts as a qualifier or matte source to restrict the correction being applied to the image. The mask controls include black and white level sliders, blurring, and the ability to invert the mask. Wherever you see a light area in the mask is where that color correction will be applied. This enables a number of grading tricks that are also popular in photography, including split-toning and localized contrast control.

Simply put, split-toning divides the image according to darks and lights (based on the image mask) and enables you to apply a different correction to each. This can be as extreme as a duotone look or something a bit more normal, yet still stylized.
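
For the technically curious, here is a minimal NumPy sketch of the split-toning idea, using a luminance-based mask to blend between a shadow tint and a highlight tint. Color Trix hasn’t published Color Finale’s internal math, so the formula, tint values, and strength below are purely illustrative assumptions.

```python
# Minimal sketch: split-toning driven by a luminance "image mask."
# Assumes img is a float RGB array in the 0-1 range; tint colors are arbitrary.
import numpy as np

def split_tone(img, shadow_tint=(0.1, 0.2, 0.6), highlight_tint=(1.0, 0.85, 0.6),
               amount=0.3):
    luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    mask = luma[..., None]                  # light areas take the highlight tint
    toned = img * (1 - amount) + amount * (mask * highlight_tint
                                           + (1 - mask) * shadow_tint)
    return np.clip(toned, 0.0, 1.0)
```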

In the duotone example, I first removed saturation from the original clip to create a black-and-white image. Then, the boxer’s image mask divides the range so that I could apply red and blue tinting for the duotone look.

In the second example, the image mask enabled me to create glowing highlights on the model’s face, while pushing the mids and shadows back for a stylistic appearance.

Another use for an image mask is localized contrast control. This technique allows me to isolate regions of the image and grade them separately. For example, if I want to correct only the shadow areas of the image, I can apply an image mask, invert it (so that dark areas are light in the mask), and then apply grading within just the dark areas of the image – as determined by the mask.

Smoothness

Color Finale 2 included a sharpness slider. New in version 2.1 is the ability to go in the opposite direction to soften the image, simply by moving the slider left into negative values. This slider controls the high frequency detail of the overall image – positive values increase that detail, while negative values decrease it.
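
Under the hood, a bidirectional detail control like this typically splits the image into a blurred base and a high-frequency residual, then scales that residual up or down. The sketch below shows the general idea using SciPy’s Gaussian blur; it is not Color Finale’s actual filter, just a hedged stand-in for the concept.

```python
# Minimal sketch: a bidirectional "detail" control.
# amount > 0 boosts high-frequency detail (sharpen); amount < 0 softens.
# Assumes img is a float (H, W, 3) RGB array in the 0-1 range.
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_detail(img, amount, radius=2.0):
    base = gaussian_filter(img, sigma=(radius, radius, 0))  # blur spatially, not across channels
    detail = img - base                                     # high-frequency residual
    return np.clip(base + (1.0 + amount) * detail, 0.0, 1.0)
```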

Since this is an overall effect, it can’t be masked within the layers panel. If you want to apply it just to a person’s face, like other “beauty” filters, you can do so by using Final Cut Pro X’s built-in effects masks. This way a similar result can be reached while staying within the Color Finale workflow.

One last addition to version 2.1 is that Final Cut Pro X’s hotkeys now stay active while the Color Finale layers panel is open. Color Trix has stated that they plan more upgrades and options over the next nine months, so look for more ahead. Color Finale 2.1 is already a powerful grading tool for nearly any level of user. Nevertheless, more features will certainly be music to the ears of advanced users who prefer to stay within Final Cut Pro X to finish and deliver their projects. Stay tuned.

Originally written for FCP.co.

©2020 Oliver Peters

Chasing the Elusive Film Look

Ever since we started shooting dramatic content on video, directors have pushed to achieve the cinematic qualities of film. Sometimes that’s through lens selection, lighting, or frame rate, but more often it falls on the shoulders of the editor or colorist to make that video look like film. Yet, many things contribute to how we perceive the “look of film.” It’s not a single effect, but rather the combination of careful set design, costuming, lighting, lenses, camera color science, and color correction in post.

As editors, we have control over the last ingredient, which brings me to LUTs and plug-ins. A number of these claim to offer looks based on certain film emulsions. I’m not talking about stylized color presets, but the subtle characteristics of film’s color and texture. But what does that really mean? A projected theatrical film is the product of four different stocks within that chain – original camera negative, interpositive print, internegative, and the release print. Conversely, a digital project shot on film and then scanned to a file only involves one film stock. So it doesn’t really mean much to say you are copying the look of film emulsion, without really understanding the desired effect.

My favorite film plug-in is Koji Advance, which is distributed through the FxFactory platform. Koji was developed jointly by Crumplepop and noted film timer Dale Grahn. A film timer is the film lab’s equivalent of a digital colorist. Grahn selected several color and black-and-white film stocks as the basis for the Koji film looks and film grain emulation. Then Crumplepop’s developers expanded those options with neutral, saturated, and low contrast versions of each film stock and included camera-based conversions from log or Rec 709 color spaces. This is all wrapped into a versatile color correction plug-in with controls for temperature/tint, lift/gamma/gain/density (low, mid, high, master), saturation, and color correction sliders.

This post isn’t a review of the Koji Advance plug-in, but rather how to use such a filter effectively within an NLE like Final Cut Pro X (or Premiere Pro and After Effects, as well). In fact, these tips can also be used with other similar film look plug-ins. Koji can be used as your primary color correction tool, applying and adjusting it on each clip. But I really see it as icing on the cake and so will take a different approach.

1. Base grade/shot matching. The first thing you want to do in any color correction session is to match your shots within the sequence. It’s best to establish a base grade before you dive into certain stylized looks. Set the correct brightness and contrast and then adjust for proper balance and color tone. For these examples, I’ve edited a timeline consisting of a series of random FilmSupply stock footage clips. These clips cover a mix of cameras and color spaces. Before I do anything, I have to grade these to look consistent.

Since these are not all from the same set-up, there will naturally be some variances. A magic hour shot can never be corrected to be identical to a sunny exterior or an office shot. Variations are OK, as long as general levels are good and the tone feels right. Final Cut Pro X features a solid color correction tool set that is aided by the comparison view. That makes it easy to match a shot to the clip before and after it in the timeline.

2. Adding the film look. Once you have an evenly graded sequence of shots, add an adjustment layer. I will typically apply the Koji filter, an instance of Hue/Sat Curves, and a broadcast-safe limiter into that layer.

Within the Koji filter, select generic Rec 709 as the camera format and then the desired film stock. Each selection will have different effects on the color, brightness, and contrast of the clips. Pick the one closest to your intended effect. If you also want film grain, then select a stock choice for grain and adjust the saturation, contrast, and mix percentage for that grain. It’s best to view grain playing back at close to your target screen size with Final Cut set to Better Quality. Making grain judgements in a small viewer or in Better Performance mode can be deceiving. Grain should be subtle, unless you are going for a grunge look.

The addition of any of these film emulsion effects will impact the look of your base grade; therefore, you may need to tweak the color settings with the Koji controls. Remember, you are going for an overall look. In many cases, your primary grade might look nice and punchy – perfect for TV commercials. But that style may feel too saturated for a convincing film look of a drama. That’s where the Hue/Sat Curves tool comes in. Select LUMA vs SAT and bring down the low end to taste. You want to end up with pure blacks (at the darkest point) and a slight decrease in shadow-area saturation.
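
If you’re curious what a LUMA vs SAT adjustment boils down to, it is essentially a saturation multiplier driven by each pixel’s luminance. Here is a minimal sketch of rolling saturation off in the shadows; it is an assumption-level illustration, not Final Cut Pro X’s actual implementation, and the knee value is arbitrary.

```python
# Minimal sketch: reduce saturation in the shadows (a LUMA vs SAT-style curve).
# Assumes img is a float RGB array in the 0-1 range; the 0.25 knee is arbitrary.
import numpy as np

def desaturate_shadows(img, knee=0.25):
    luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    sat_scale = np.clip(luma / knee, 0.0, 1.0)[..., None]  # 0 at black, 1 above the knee
    gray = luma[..., None]
    return gray + sat_scale * (img - gray)                  # blend toward gray in the darks
```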

3. Readjust shots for your final grade. The application of a film effect is not transparent and the Koji filter will tend to affect the look of some clips more than others. This means that you’ll need to go back and make slight adjustments to some of the clips in your sequence. Tweak the clip color correction settings applied in the first step so that you optimize each clip’s final appearance through the Koji plug-in.

4. Other options. Remember that Koji or similar plug-ins offer different options – so don’t be afraid to experiment. Want film noir? Try a black-and-white film stock, but remember to also turn down the grain saturation.

You aren’t going for a stylized color correction treatment with these tips. What you are trying to achieve is a look that is more akin to that of a film print. The point of adding a film filter on top is to create a blend across all of your clips – a type of visual “glue.” Since filters like this and the adjustment layer as a whole have opacity settings, it’s easy to go full bore with the look or simply add a hint to taste. Subtlety is the key.

Originally written for FCP.co.

©2020 Oliver Peters

Paul McCartney’s “Who Cares”

Paul McCartney hasn’t been the type of rock star to rest on his past. Many McCartney-related projects have embraced new technologies, such as 360VR. The music video for Who Cares – McCartney’s musical answer to bullying – was filmed in both 16mm and 65mm film. And it was edited using Final Cut Pro X.

Who Cares features Paul McCartney and actress Emma Stone in a stylized, surreal song and dance number filmed in 65mm, which is bookended by a reality-based 16mm segment. The video was directed by Brantley Gutierrez, choreographed by Ryan Heffington, and produced through LA production company Subtractive.

Gutierrez has collaborated for over 14 years with Santa Monica-based editor Ryan P. Adams on a range of projects, including commercials, concerts, and music videos. Adams also did a stint with Nitro Circus, cutting action sports documentaries for NBC and NBCSN. In that time he’s used the various NLEs, including Premiere Pro, Media Composer, and Final Cut Pro 7. But it was the demands of concert videos that really brought about his shift to Final Cut Pro X.

___________________________________

[OP] Please tell me a bit about what style you were aiming for in Who Cares. Why the choice to shoot in both 16mm and 65mm film?

[Brantley Gutierrez] In this video, I was going for an homage to vaudevillian theater acts and old Beatles-style psychedelia. My background is working with a lot of photography. I was working in film labs when I was pretty young. So my DP and friend, Linus Sandgren, suggested film and had the idea, “What if we shot 65mm?” I was open to it, but it came down to asking the folks at Kodak. They’re the ones that made that happen for us, because they saw it as an opportunity to try out their new Ektachrome 16mm motion film stock.

They facilitated us getting the 65mm at a very reasonable price and getting the unreleased Ektachrome 16mm film. The reason for the two stocks was the separation of the reality of the opening scene – kind of grainy and hand-held – with the song portion. It was almost dreamlike in its own way. This was in contrast to the 65mm psychedelic part, which was all on crane, starkly lit, and with very controlled choreography. The Ektachrome had this hazy effect with its grain. We wanted something that would jump as you went between these worlds and 16 to 65 was about as big of a jump as we could get in film formats.

[OP] What challenges did you face with this combination of film stocks? Was it just a digital transfer and then you were only dealing with video files? Or was the process different than that?

[BG] The film went to London where they could process and scan the 65mm film. It actually went in with Star Wars. Lucasfilm had all of the services tied up, but they were kind enough to put our film in with The Rise of Skywalker and help us get it processed and scanned. But we had to wait a couple of extra days, so it was a bit of a nervous time. I have full faith in Linus, so I knew we had it. However, it’s a little strange these days to wait eight or nine days to see what you had shot.

We were a guinea pig for Kodak for the 16mm stock. When we got it back, it looked crazy! We were like, “Oh crap.” It looked like it had been cross-processed  – super grainy and super contrasty. It did have a cool look, but more like a Tony Scott style of craziness. When we showed it to Kodak they agreed that it didn’t look right. Then we had Tom Poole, our colorist at Company 3 in New York, rescan the 16mm and it looked beautiful.

[Ryan P. Adams] Ektachrome is a positive stock, which hasn’t been used in a while. So the person in London scanning it just wasn’t familiar with it.

[BG] They just didn’t have the right color profile built for that stock yet, since it hadn’t been released yet. Of course, someone with a more experienced eye would know that wasn’t correct.

[OP] How did this delay impact your editing?

[BG] It was originally scanned and we started cutting with the incorrect version. In the meantime, the film was being rescanned by Poole. He didn’t really have to do any additional color correction to it once he had rescanned it. This was probably our quickest color correction session for any music video – probably 15 minutes total.

[RPA] One of the amazing things I learned, is that all you have to do is give it some minor contrast and then it is done. What it does give you is perfect skin tones. Once we got the proper scan and sat in the color session, that’s what really jumped out.

[OP] So then, what was the workflow like with Final Cut Pro X?

[RPA] The scans came in as DPX files. Here at Subtractive, we took those into DaVinci Resolve and spit out ProRes 422 HQ QuickTime files to edit with. To make things easy for Company 3, we did the final conform in-house using Resolve. An FCPXML file was imported into Resolve, we linked back to the DPX files, and then sent a Resolve project file to Company 3 for the final grade. This way we could make sure everything was working. There were a few effects shots that came in and we set all of that up so Tom could just jump on it and grade. Since he’s in New York, the LA and New York locations for Company 3 worked through a remote, supervised grading session.

[OP] The video features a number of effects, especially speed effects. Were those shot in-camera or added in post?

[RPA] The speed effects were done in post. The surreal world was very well choreographed, which just plays out. We had a lot of fun with the opening sequence in figuring out the timing. Especially in the transitional moment where Emma is staring into the hypnotic wheel. We were able to mock up a lot of the effects that we wanted to do in Final Cut. We would freeze-frame these little characters called “the idiots” that would jump into Emma’s head. I would do a loose rotoscope in Final Cut and then get the motion down to figure out the timing. Our effects people then remade that in After Effects.

[OP] How involved was Paul McCartney in the edit and in review-and-approval?

[BG] I’ve known Paul for about 13 years and we have a good relationship. I feel lucky that he’s very trusting of me and goes along with ideas like this. The record label didn’t even know this video was happening until the day of production. It was clandestine in a lot of ways, but you can get away with that when it’s Paul McCartney. If I had tried that with some other artist, I would have been in trouble. But Paul just said, “We’re going to do it ourselves.”

We showed him the cut once we had picture lock, before final color. He called on the phone, “Great. I don’t have any notes. It’s cool. I love it and will sign off.” That was literally it for Paul. It’s one of the few music videos where there was no going back and forth between the management, the artist, and the record label. Once Paul signed off on it, the record label was fine with it.

[OP] How did you manage to get Emma Stone to be a part of this video?

[BG] Emma is a really close friend of mine. Independently of each other, we both know Paul. Their paths have crossed over the years. We’ve all hung out together and talked about wanting to do something. When Paul’s album came out, I hit them both up with the idea for the music video and they both said yes.

The hardest part of the whole process was getting schedules to align. We finally had an open date in October with only a week and a half to get ready. That’s not a lot of time when you have to build sets and arrange the choreography. It was a bit of a mad dash. The total time was about six weeks from prep through to color.

Because of the nature of this music video, we only filmed two takes for Paul’s performance to the song. I had timed out each set-up so that we knew how long each scene would be. The car sequence was going to be “x” amount of seconds, the camera sequence would be “x” amount, and so on. As a result, we were able to tackle the edit pretty quickly. Since we were shooting 65mm film, we only had two or three takes max of everything. We didn’t have to spend a lot of time looking through hours of footage – just pick the best take for each. It was very old school in that way, which was fun.

[OP] Ryan, what’s your approach to organizing a project like this in Final Cut Pro X?

[RPA] I labelled every set-up and then just picked the best take. The first pass was just a rough to see what was the best version of this video. Then there were a few moments that we could just put in later, like when the group of idiots sings, “Who cares.”

My usual approach is to lay in the sections of synced song segments to the timeline first. We’ll go through that first to find the best performance moments and cut those into the video, which is our baseline. Then I’ll build on top of that. I like to organize that in the timeline rather than the browser so that I can watch it play against the music. But I will keyword each individual set-up or scene.

I also work that way when I cut commercials. I can manage this for a :30 commercial. When it’s a much bigger project, that’s where the organization needs to be a little more detailed. I will always break things down to the individual set-ups so I can reference them quickly. If we are doing something like a concert film, that organization may be broken up by the multiple days of the event. A great feature of Final Cut Pro X is the skim tool and that you can look at clips like a filmstrip. It’s very easy to keyword the angles for a scene and quickly go through it.

[OP] Brantley, I’m sure you’ve sat over the shoulder of the editor in many sessions. From a director’s point of view, what do you think about working with Final Cut Pro X?

[BG] This particular project was pretty well laid out in my head and it didn’t have a lot of footage, so it was already streamlined. On more complex projects, like a multi-cam edit, FCPX is great for me, because I get to look at it like a moving contact sheet from photography. I get to see my choices and I really respond to that. That feels very intuitive and it blows me away that every system isn’t like that.

[OP] Ryan, what attracted you to Final Cut Pro X and led you to use it whenever possible?

[RPA] I started with Final Cut Pro X when they added multi-cam. At that time we were doing more concert productions. We had a lot of photographers who would fill in on camera and Canon 5Ds were prevalent. I like to call them “trigger-happy filmers,” because they wouldn’t let it roll all the way through.

FCPX came up with the solution to sync cameras with the audio on the back end. So I could label each photographer’s clips. Each clip might only be a few seconds long. I could then build the concert by letting FCPX sync the clips to audio even without proper timecode. That’s when I jumped on, because FCPX solved a problem that was very painful in Final Cut Pro 7 and a lot of other editing systems. That was an interesting moment in time when photographic cameras could shoot video and we hired a lot of those shooters. Final Cut Pro X solved the problem in a very cool way and it helped me tremendously.

We did this Tom Petty music video, which really illustrates why Final Cut Pro X is a go-to tool. After Tom had passed, we had to take a lot of archival footage as part of a music video, called Gainesville, that we did for his boxed set. Brantley shot a lot of video around Tom’s hometown of Gainesville [Florida], but they also brought us a box with a massive amount of footage that we put into the system. A mix of old films and tapes, some of Tom’s personal footage, all this archival stuff. It gave the video a wonderful feeling.

[BG] It’s very nostalgic from the point of view of Tom and the band. A lot of it was stuff they had shot in their 20s and had a real home movie feel. I shot Super 8mm footage around Tom’s original home and places where they grew up to match that tone. I was trying to capture the love his hometown has for him.

[RPA] That’s a situation where FCPX blows the competition out of the water. It’s easy to use the strip view to hunt for those emotional moments. So the skimmer and the strip view were ways for us to cull all of this hodge-podge of footage for those moments and to hit beats and moments in the music for a song that was unreleased at that time. We had one week to turn that around. It’s a complicated situation to look through a box of footage on a very tight deadline, put a story to it, and make it feel correct for the song. That’s where all of those tools in Final Cut shine. When I have to build a montage, that’s when I love Final Cut Pro X the most.

[OP] You’ve worked with the various NLEs. You know DaVinci Resolve and Blackmagic is working hard to make it the best all-in-one tool on the market. When you look at this type of application, what features would you love to see added to Final Cut Pro X?

[RPA] If I had a wishlist, I would love to see if FCPX could be scaled up for multiple seats and multiple editors. I wish some focus was being put on that. I still go to Resolve for color. I look at compositing as just mocking something up so we can figure out timing and what it is generally going to look like. However, I don’t see a situation currently where I do everything in the editor. To me, DaVinci Resolve is kind of like a Smoke system and I tip my hat to them.

I find that Final Cut still edits faster than a lot of other systems, but speed is not the most important thing. If you can do things quickly, then you can try more things out. That helps creatively. But I think that typically things take about as long from one system to the next. If an edit takes me a week in Adobe it still takes me a week in FCPX. But if I can try more things out creatively, then that’s beneficial to any project.

Originally written for FCP.co.

©2020 Oliver Peters

Jezebel

If you’ve spent any time in Final Cut Pro X discussion forums, then you’ve probably run across posts by Tangier Clarke, a film editor based in Los Angeles. Clarke was an early convert to FCPX and recently handled the post-production finishing for the film, Jezebel. I was intrigued by the fact that Jezebel was picked up by Netflix, a streaming platform that has been driving many modern technical standards. This was a good springboard to chat with Clarke and see how FCPX fared as the editing tool of choice.

_______________________________________________________________

[OP] Please tell me a little about your background in becoming a film editor.

[TC] I’m a very technical person and have always had a love for computers. I went to college for computer science, but along the way I discovered Avid Videoshop and started to explore editing more, since it married my technical side with creative storytelling. So, at UC Berkeley I switched from computer science to film.

My first job was at motion graphics company Montgomery/Cobb, which was in Los Angeles. They later became Montgomery & Co. Creative. I was a production assistant for main titles and branding packages for The Weather Channel, Fox, NBC, CBS, and a whole host of cable shows. Then, I worked for 12 years with Loyola Productions (no affiliation with Loyola Marymount University).

I moved on to a company called Black & Sexy TV, which was started by Dennis Dortch as a company to have more control over black images in media. He created a movie called A Good Day to be Black and Sexy in 2008, which was picked up and distributed by Magnolia Pictures and became a cult hit. It ended up in Blockbuster Video stores, Target, and Netflix. The success of that film was leveraged to launch Black & Sexy TV and its online streaming platform.

[OP] You’ve worked on several different editing applications, but tell me a bit about your transition to Final Cut Pro X.

[TC] I started my career on Avid, which was also at the time when Final Cut Pro “legacy” was taking off. During 2011 at Loyola Productions, I had an opportunity to create a commercial for a contest put out by American Airlines. We thought this was an opportunity for us as a company to try Final Cut Pro X.

I knew that it was for us once we installed it. Of course, there were a lot of things missing coming from Final Cut Pro 7, and a couple of bugs here and there. The one thing that was astonishing for me, despite the initial learning curve, was that within one week of use my productivity compared to Final Cut Pro 7 went through the roof. There was no correlation between anything I had used before and what I was experiencing with Final Cut X in that first week. I also noticed that our interns – whose only experience was iMovie – just picked up Final Cut Pro X with no problems whatsoever.

Final Cut Pro X was very liberating, which I expressed to my boss, Eddie Siebert, the president and founder of Loyola Productions. We decided to keep using it to the extent that we could on certain projects and worked with Final Cut Pro 7 and Final Cut Pro X side-by-side until we eventually just switched over.

[OP] You recently were the post supervisor and finishing editor for the film Jezebel, which was picked up by Netflix. What is this film about?

[TC] Jezebel is a semi-autobiographical film written and directed by Numa Perrier, who is a co-founder of Black & Sexy TV. The plot follows a 19-year-old girl, who after the death of her mother begins to do sex work as an online chat room cam girl to financially support herself. Numa is starring in the film, playing her older sister, and an actress named Tiffany Tenille is playing Numa. This is also Numa Perrier’s feature film directorial debut. It’s a side of her that people didn’t know – about how she survived as a young adult in Las Vegas. So, she is really putting herself out there.

The film made its debut at South by Southwest last year, where it was selected as a “Best of SXSW” film by The Hollywood Reporter. After that it went to other domestic and international festivals. At some point it was seen by Ava DuVernay, who decided to pick up Numa’s film through her company, Array. That’s how it got to Netflix.

[OP] Please walk me through the editorial workflow for Jezebel. How did FCPX play a unique role in the post?

[TC] I was working on a documentary at the time, so I couldn’t fully edit Jezebel, but I was definitely instrumental in the process. A former coworker of mine, Brittany Lyles, was given the task of actually editing the project in Final Cut Pro X, which I had introduced her to a couple of years earlier and trained her to use. The crew shot with a Canon C300 camera and we used the Final Cut proxy workflow. Brittany wouldn’t have been able to work on it if we weren’t using proxies, because of her hardware. I was using a late 2013 Mac Pro, as well as a 2016 MacBook Pro.

At the front end, I assisted the production team with storage and media management. Frances Ampah (a co-producer on the film) and I worked to sync all the footage for Brittany, who was working with a copy of the footage on a dedicated drive. We provided Brittany with XMLs during the syncing process as she was getting familiar with the footage.

While Brittany was working on the cut, Numa and I were trying to figure out how best to come up with a look and a style for the text during the chat room scenes in the movie. It hadn’t been determined yet if I was going to get the entire film and put the graphics in myself or if I was going to hand it off to Brittany for her to do it. I pitched Numa on the idea of creating a Motion template so that I could have more control over the look, feel, and animation of the graphics. That way either Brittany or I could do it and it would look the same.

Brittany and Numa refined the edit to a point where it made more sense for me to put in a lot of the graphics and do any updating per Numa’s notes, because some of the text had changed as well. And we wanted to really situate the motion of the chat so that it was characteristic of what it looked like back then – in the 90s. We needed specific colors for each user who was logged into the screen. I had some odd color issues with Final Cut and ended up actually just going into the FCPXML file to modify color values. I’m used to going into files like that and I’m not afraid of it. I also used the FCP X feature in the text inspector to save format and appearance attributes. This was tremendously helpful to quickly assign the color and formatting for the different users in the chat room – saving a lot of time.

Our secondary editor, Bobby Field, worked closely with Numa to do the majority of color grading on the film. He was more familiar with Premiere Pro than FCP X, but really enjoyed the color tools in Final Cut Pro X. Through experimentation, Bobby learned how to use adjustment layers to apply color correction. I was fascinated by this and it was a learning experience for me as well. I’m used to working directly with the clip itself, and in my many years of using FCP X, this wasn’t a method I had used or seen anyone else doing firsthand.
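
As a side note to Clarke’s point about editing the FCPXML directly: the sketch below shows roughly what that kind of scripted color change can look like in Python. The element and attribute names, color strings, and file names are placeholders – FCPXML stores title parameters differently depending on the template – so inspect your own export before adapting anything like this.

```python
# Minimal sketch: batch-update a color value in an exported FCPXML file.
# The element/attribute names here are illustrative placeholders -- inspect your
# own .fcpxml to see how your title template actually stores its color parameter.
import xml.etree.ElementTree as ET

OLD_COLOR = "0.9 0.2 0.2 1"      # hypothetical RGBA value as stored in the file
NEW_COLOR = "0.2 0.6 1.0 1"

tree = ET.parse("chatroom_scene.fcpxml")         # placeholder file name
for param in tree.iter("param"):                 # assumes color lives in <param> nodes
    if param.get("value") == OLD_COLOR:
        param.set("value", NEW_COLOR)
tree.write("chatroom_scene_fixed.fcpxml", xml_declaration=True, encoding="UTF-8")
```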

[OP] What about sound post and music?

[TC] I knew that there was only so much I technically had the skillset to do, and I would not dare to pretend that I know how to do certain things. I called on the help of Jim Schaefer – a skilled and trusted friend that I worked with at Loyola Productions. I knew he wanted an opportunity to work on a big project, particularly a feature. The film needed a tremendous amount of sound work, so he took it on along with Travis Prater, a coworker of his at Source Sound in Woodland Hills. Together they really transformed the film.

Jim and Travis worked in Pro Tools, so I used X2Pro to get files to them. Jim gave me a list of how he wanted the film broken down. Because of the length of Jezebel, he preferred that the film be broken up into reels. In addition to reels, I also gave him the entire storyline with all of the roles. Everything was broken down very nicely using AAFs and he didn’t really have any problems. In his words, “It’s awesome that all the tracks are sorted by character and microphone – that’ll cut down significantly on the sorting/organizing pass for me.” The only hiccup was that metadata was missing in the AAF up to a certain point in Pro Tools. Yet that metadata did exist in the original WAV. Some clip names were inconsistent as well, but that may have happened during production.

[OP] Jezebel is streaming on Netflix, which has a reputation for having tough technical specs. Were there any special things you had to do to make it ready for the platform?

[TC] We supplied Array with a DCI 2K (full frame) QuickTime master in ProRes 422 HQ per their delivery schedule, along with other elements such as stereo and 5.1 mixes from Jim, Blu-rays, DVD, and DCP masters. I expected to do special things to make it ready for Netflix. Numa and I discussed this, but to my knowledge, the QuickTime file that I provided to Array is what Netflix received. There were no special conversions made just for Netflix on the part of Array.

[OP] Now that you have this Final Cut Pro X experience under your belt, what would you change if you could? Any special challenges or shortcomings?

[TC] I had to do some composite shots for the film, so the only shortcoming for me was Final Cut’s compositing tool set. I’d love to have better tools built right into FCP X, like in DaVinci Resolve. I love Apple Motion and it’s fine for what it is, but it could go a little further for me. I’d love to see an update with improved compositing and better tracking. Better reporting for missing files, plugins, and other elements would also be tremendously helpful in troubleshooting vague alerts.

In spite of this, there was no doubt in any part of the process whether or not Final Cut was fully capable of being at the center of everything that needed to be done – whether it was leveraging Motion for template graphics between Brittany and me, using a third-party tool to make sure that the post sound team had precisely what they needed, or exchanging XMLs or backup libraries with Bobby to make sure that his work got to me intact. I was totally happy with the performance of FCP X. It was just rock solid and for the most part did everything I needed it to do without slowing me down.

Originally written for FCP.co.

A special thanks to Lumberjack System for their assistance in transcribing this interview.

©2020 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project while director/photographer Griffith captured the entire story. The result is the 3D VR documentary, Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background includes camera, editing, and visual effects, with a lot of experience in traditional visual effects, 2D-to-3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have had a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. In addition to these tools I was using Fusion Studio in my workflow, so it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, the tools are all there that I was used to in what I would argue is a cleaner interface. Sometimes, software just collects all of these features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack a left and right eye image on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full resolution sources later.  I just had to unlink the timeline and then reconnect it to another bin with my 8K media.

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.
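
A brief technical aside on the stacked format DeJohn describes: the top/bottom layout is simply two equirectangular frames sharing one square image, so splitting it in a script is trivial. Here is a minimal NumPy sketch, assuming the left eye is on top (which varies by camera and stitcher).

```python
# Minimal sketch: split a top/bottom-stacked stereo 360 frame into its two eyes.
# Which eye is on top varies by camera and stitcher -- this assumes left on top.
import numpy as np

def split_stacked_stereo(frame: np.ndarray):
    h = frame.shape[0] // 2
    left_eye = frame[:h]       # top half, e.g. 4096 x 2048 out of a 4096 x 4096 frame
    right_eye = frame[h:]      # bottom half
    return left_eye, right_eye
```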

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist, a man of many talents – utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a proxy 2K x 2K stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run the material through Z CAM’s Wonderstitch application, which is auto-stitching software. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations. For instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in third-order ambisonics, which means you get spatial awareness both around you and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed point, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ axis orientation, we would add a Panomap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.
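
To unpack what re-orienting an equirectangular frame means in practice: a yaw-only change is just a horizontal shift with wraparound, while pitch and roll changes require a full spherical remap, which is what a node like Panomap handles. Here is a minimal NumPy sketch of the yaw case, purely as an illustration of “changing the center of the view.”

```python
# Minimal sketch: re-center an equirectangular 360 frame by a yaw angle.
# Yaw is a simple horizontal roll with wraparound; pitch/roll need a full remap.
import numpy as np

def recenter_yaw(frame: np.ndarray, yaw_degrees: float) -> np.ndarray:
    width = frame.shape[1]
    shift = int(round(yaw_degrees / 360.0 * width))
    return np.roll(frame, -shift, axis=1)   # positive yaw pans the view to the right
```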

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to get in deeper than a normal edit application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them. But only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high level in regards to the stitch and the rig removals. As a visual effects guy, there’s always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn’t hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters