A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, in my work as an editor and tech writer, we’ve kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career spent using, as well as helping to design and shepherd, a wide range of post-production products, Steve probably knows more about a broader range of editing systems than most managers at editing system manufacturers. Naturally, many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FxPlug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros of the day. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. None of the available transcoding codecs were very high in quality, or else they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode to file sizes as small as some of the other camera codecs, but given the choice between file size, quality, and performance, quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue, and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB]  We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting-edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata-driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best-selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because benefiting from them requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than in the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors on larger projects are forced to use systems they don’t like, while they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government-funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model, one of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already raised to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances, creating new workflows with FCP X and shared storage. They are running 1.5 petabytes of storage across six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is that if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero-sum game: I win, you lose. Now there are so many potential needs for video that we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

Storage Reliability

Recently I’ve written about storage strategies designed to future-proof access to your files. Other than questions of whether future software can still play your files, the biggest issue is whether or not the media is playable at all in a number of years. Unfortunately, there are simply no guarantees. All media can and does fail. Let’s look at the various options.

Everyone touts “the cloud” as the ultimate solution. Although cloud-based storage space is relatively cheap, the cost and data charges for massive uploads and downloads, along with local internet speeds, pose the stumbling blocks. There’s very little in the near term to change that. Remember, too, that cloud storage is a subscription service that never ends if you want to keep that media in the cloud.

The LTO (Linear Tape Open) data tape format is considered the “gold standard” for physical back-up and retrieval, but it’s really a format designed for long-term industrial and financial data applications. In other words, back it up once and forget it unless you need to restore from a backup tape in the future.

While many studios require original camera footage for major feature films to be archived onto LTO, the format doesn’t fit well into the needs of most small-to-medium production companies and post houses. There are three reasons for this: 1) As file capacities grow, LTO barely keeps up in equivalent capacity and transfer speeds. 2) The LTO standards keep evolving with limited forward or backward version compatibility. 3) If you need to continually go back to your archive to revise and update older projects, the linear design of LTO isn’t very attractive. In addition, frequent shuttling back and forth on LTO tapes to retrieve materials from random sections of the tape will cause an LTO tape to prematurely fail before its rated life.

One alternative to LTO is Sony’s Optical Disc Archive. It’s essentially a videotape deck-sized unit that records on writeable optical media (like a Blu-ray disc). They offer a robotic juke-box type of system for automated retrieval with large library systems. It’s a robust solution, but is mainly relevant to large facilities, such as at broadcast networks.

Storing on a large, RAID-protected array is a good, short-term idea, but it won’t be very cost-effective as your storage needs mount. I don’t recommend small 2-drive or 4-drive RAID enclosures for extended storage. These are more likely to have the RAID structure (whether hardware or software) fail and leave you with nothing accessible on that array. In my experience, single, enterprise-grade drives are more reliable. I buy these as raw drives (so I’m not paying extra for a power supply and interface with every drive) and mount them in a drive dock when I need to use them.

Hard drives do carry a manufacturer’s warranty for a rated lifespan, but I will reiterate that there are no guarantees. A 3-year-warranted drive may last as long as a 5-year drive and either one could fail in one year or last 10 years or longer. I currently have some drives that are as old as that. With drive failure always a looming possibility, the reasonable strategy is to maintain multiple copies of any media of value. Three duplicate copies are recommended.

Let’s address how to select the drive to buy. Most of these types of drives come in several speeds and warranty levels. 5400 or 7200 RPM are the normal speed offerings. Both are fine for archiving, but 7200 is preferred if you occasionally need to edit directly from them. Warranties are usually three or five years. As with any physical media, it covers the replacement of the product, but not the value of the data stored, which you may have permanently lost.

A warranty is like life insurance. A 5-year drive isn’t necessarily a better drive than a 3-year drive. The company has developed actuarial tables that tell them that, statistically, enough of the 5-year drives last to the 5-year mark that they won’t lose too much money by replacing the few drives that do fail. Sometimes the difference between three and five years may simply be that drives tested with more minor errors end up in the 3-year pile, while the ones with fewer errors go into the 5-year pile. I haven’t looked into the manufacturing specifics too deeply, but that’s generally how product warranties work.

With those two criteria in mind, I usually purchase 7200 RPM enterprise-grade drives with 5-year warranties. These are drives intended to be used in servers and shared storage systems running 24/7/365. There has been a lot of consolidation in the hard drive business, so regardless of the brand name, there are really only a handful of companies manufacturing the media.

One source to track which drives to buy is Backblaze. They are a cloud provider that publishes their testing results, based on a current pool of over 100,000 drives that they have in operation. Right now the front-runners are Toshiba, HGST (Hitachi enterprise), and Seagate. The HGST brand has been absorbed by Western Digital. All these are good options. I also hold back on the largest drives rather than be on the bleeding edge. For example, you can now purchase 14 TB drives, but I’ll tend to stick with 8 TB for a while.

Mechanical hard drives are meant to spin and not to sit on a shelf indefinitely. Periodically load each drive into a dock and spin it up. Make sure the contents are still retrievable and files can be opened. This process should happen no less than once a year. More frequent checks are even better. And yes, if you have 100 drives in your archive, don’t get lazy. This needs to be done. If a drive sounds odd, has difficulty spinning up or mounting, or has a lot of vibration, then clone and replace it ASAP, because it’s likely to fail soon.
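If you want to go beyond spot-checking a few files by hand, a simple script can walk an entire mounted archive drive and confirm that every file can actually be read from end to end. Below is a minimal Python sketch, assuming the drive is mounted at a path you supply on the command line; the example mount point and the exit behavior are placeholders for illustration, not part of any tool mentioned above.

```python
#!/usr/bin/env python3
# Read-verify every file on a mounted archive drive and report any failures.
# Minimal sketch -- the default mount point below is a placeholder.
import os
import sys

CHUNK = 8 * 1024 * 1024  # read in 8 MB chunks to keep memory use low

def verify_drive(mount_point):
    failed = []
    total = 0
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            total += 1
            try:
                with open(path, "rb") as f:
                    while f.read(CHUNK):
                        pass  # reading to EOF forces the drive to deliver every block
            except OSError as err:
                failed.append((path, str(err)))
    return total, failed

if __name__ == "__main__":
    mount = sys.argv[1] if len(sys.argv) > 1 else "/Volumes/ARCHIVE_042"
    count, problems = verify_drive(mount)
    print(f"Checked {count} files on {mount}")
    for path, err in problems:
        print(f"READ ERROR: {path} ({err})")
    if problems:
        sys.exit(1)  # non-zero exit makes it easy to flag a drive for cloning
```

A full read pass like this takes time on a large drive, but it exercises the whole surface rather than just confirming that the volume mounts.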

Many spinning drives and solid state drives employ S.M.A.R.T. technology, which attempts to predict drive failure. Diagnostics fail the S.M.A.R.T. test when they determine that enough sectors on the drive are no longer writeable. Other drive issues, like excessive heat and slow spin-up, can also trigger errors. The drive may outwardly act and seem fine, but a S.M.A.R.T. failure means it’s time to clone and replace it. Shared storage servers monitor for S.M.A.R.T. errors on their RAID drives, but you can also get diagnostic applications to test individual drives.
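On a Mac or Linux workstation, one common way to run that kind of check is the open-source smartmontools package (installed separately, for example via Homebrew). The sketch below simply wraps its smartctl health query from Python; the device path is a placeholder and this is an illustration of the idea, not one of the diagnostic applications referenced above.

```python
#!/usr/bin/env python3
# Query a drive's S.M.A.R.T. overall health using smartctl (from smartmontools).
# Sketch only -- the device path is a placeholder; smartctl may need to be run
# with elevated privileges on some systems.
import subprocess
import sys

def smart_health(device):
    # 'smartctl -H' prints the overall-health self-assessment result.
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True
    )
    output = result.stdout + result.stderr
    if "PASSED" in output or "OK" in output:
        return "healthy"
    if "FAILED" in output or "FAILING" in output:
        return "failing -- clone and replace this drive"
    return "unknown -- review the full smartctl output"

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/disk2"
    print(f"{dev}: {smart_health(dev)}")
```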

The final level of security is to develop a plan to routinely transfer your entire library to the current format of the day. If you use hard drives, then plan on migrating your library to a replacement within five to ten years. Many feature film operations, like ILM, have done that for years, because they sit on a library of material with a ton of value. Your media files might not be that valuable, but this should still be a strategy you follow to future-proof your production investment.

©2019 Oliver Peters

DaVinci Resolve Editor Keyboard

Blackmagic Design doubled down on advanced editing features in 2019 by introducing a new editing mode to DaVinci Resolve 16 called the cut page. They also added a dedicated editor’s keyboard – something that warms the heart of any editor who started their career in a linear edit suite. After some post-NAB feedback and adjustment, the keyboard is finally ready for prime time, running with DaVinci Resolve 16.1 (currently in public beta) or later.

Blackmagic Design’s Grant Petty comes from a broadcast engineering background and knows how fast tape editing was with the right controller. Speed is lost using a mouse-centric, drag-and-drop approach, so the DaVinci Resolve keyboard is designed to put speed back into modern edit workflows. Blackmagic Design was kind enough to loan me a keyboard for a couple of weeks of testing for this review.

Hardware design

The keyboard is very reminiscent of Sony’s BVE keyboards of the past. That’s not simply cosmetic – there are a number of plastic editing keyboards with a shuttle knob – it’s about precision engineering. The DaVinci Resolve search dial (jog/shuttle/scroll wheel) truly feels like it has the same type of ballistics and tactile feedback that a Sony dial gave you. The DaVinci Resolve keyboard is built into a sturdy metal case with keycaps that are designed to take some pounding. They intend for the keyboard to last and will offer replacement parts as needed. In short, don’t think of this as a product you’ll have to toss out in a few years.

The keyboard connects via USB-C, but it also worked on the USB 3.0 connection of a two-year-old iMac and MacBook Pro by using a USB-A to USB-C cable. The back of the keyboard includes two additional USB-A ports for a thumb drive, mouse, or a DaVinci Resolve license key (“dongle”). The keyboard is wider than a standard extended keyboard due to dedicated edit keys on the left and the search dial on the right. It has a replaceable wrist rest on the front edge and adjustable feet to elevate the keyboard angle.

The Cut Page

The Editor Keyboard is optimized for the cut and edit pages. It does work as a standard keyboard in the color, Fairlight, and Fusion pages. However, I found the dial operation in those modes to be rather finicky. Outside of DaVinci Resolve, it’s a generic QWERTY keyboard, but the special edit keys and dial will not work with other editing software.

It’s hard to talk about the keyboard without delving into the cut page. While the keyboard works effectively and correctly in the edit page, you’ll still find yourself needing the mouse, which defeats the purpose. In short, the design motivation is fast editing where your hands never leave the keyboard. That ideal plays out best in the cut page and the two have been developed in tandem.

While the DaVinci Resolve cut page shares many similarities with Apple’s Final Cut Pro X, Blackmagic Design software engineers added a number of unique functions that improve editing speed. The best of these is the source tape view. The bin can be sorted by timecode, camera, duration, or name order using dedicated keys and then viewed as if from a single source – essentially a virtual string-out. Quickly scroll through the footage using the search dial as effortlessly as using the FCPX skimming function. Large, dedicated buttons for source and timeline, in and out, and sort methods make for easy navigation and quick assembly. Smart edit and special function buttons, such as the unique “close-up” button (automatically does a basic punch-in of high-res footage), round out the picture.

The cut page itself has a number of other unique features that are beyond the scope of this article. Nevertheless, one unique tool that is worth mentioning is the dual timeline view. The timeline pane is divided into a top mini-display of the full timeline, while the lower area always shows the zoomed-in section of the timeline at the current time indicator (cursor). You never have to zoom in and zoom out to navigate your timeline. The search dial makes it a breeze to quickly scroll through the full timeline (top) and then hit the jog key to zero in on the frame you want (bottom).

Trimming is where the dial shines. Dedicated keys quickly select in-point, out-point, roll, slip, or slide trimming. Simply hit the key and DaVinci Resolve automatically jumps to the nearest cut point. Then use the search dial for the rest. As you adjust the head or tail of a cut the rest of the timeline ripples accordingly. It’s one of the best trim models of any NLE.

Some additional thoughts

I do have a few quibbles. Trim functions in the cut and edit pages are inconsistent with each other. The cut page uses a similar model to FCPX, where audio and video from the clip are combined into a single timeline clip rather than on separate tracks. Unfortunately, Blackmagic Design has yet to implement a way to expand a/v clips and perform L-cut or J-cut trimming on the cut page. You’ll have to shift to the edit page to perform those.

This is a right-handed device, so left-handed editors will have the same dilemma that left-handed guitar players encounter. In addition, these are imprinted keycaps based on DaVinci Resolve’s default keyboard map. If you use a custom layout or one of the other keyboard maps that DaVinci Resolve offers, then the QWERTY command portion of the keyboard becomes less useful.

The search dial will not override the J-K-L or the space bar play commands. In order to jog once the sequence is playing, you must first hit the K key or the space bar to stop playback before you can properly jog through frames. Otherwise, playback continues the minute you let go of the dial.

Conclusion

This keyboard is addictive. But, is its $995 (USD) price tag justified? That’s steep, but many plastic gaming keyboards can run up to $200 and some even $500. That’s without any extra pointers, dials, or keys. I’ve also found precision metal keyboards with force-sensitive pointers as high as $3,000. Given that, Blackmagic Design may be in the right ballpark. Just like control surfaces for grading or mixing, this keyboard isn’t for everyone. If you are already a fast, keyboard-oriented editor, then the DaVinci Resolve Editor Keyboard may not make you faster. Likewise, a Final Cut Pro X editor who flies by skimming with a mouse is also going to have a hard time justifying the expense, not to mention a shift to a different application.

This keyboard is designed for DaVinci Resolve editors and not colorists. It’s for facilities that intend to deploy DaVinci Resolve as their full-time editing application. I could easily see DaVinci Resolve and this keyboard used in a fast turnaround edit environment, like broadcast news. Under that scenario, it will certainly enhance speed and workflow, especially for editors who want to make the most out of the new cut page.

Originally written for RedShark News.

Be sure to also check out Scott Simmons’ review at ProVideoCoalition.

©2019 Oliver Peters

Shared Storage Solutions

 

I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and done basic management of a number of different shared storage solutions, going all the way back to Avid MediaShare SCSI. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS); but to the user, there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be simultaneously accessed by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be parsed so that it shows up as several volumes with lower capacities per volume. Read/write permissions can be controlled in various ways. All users can have read/write access to everything or that can be selectively assigned by the system administrator.

The basic building block of a NAS is the main chassis, which contains storage, but also a small, on-board computer – the “brain” of the system. This is running its own operating system, which is usually a variation of Linux, CentOS, or Sun/ZFS. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility may not be perfect. For example, some file names using special characters that are valid in macOS may not be properly read through one of these network protocols. So be very structured when using naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
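As a practical guard rail, you can screen file names before they ever land on a network volume. The short Python sketch below flags names that stray outside the safe character set described above; the allowed set and the check itself are simply an illustration of that naming convention, not a rule enforced by any particular NAS or protocol.

```python
#!/usr/bin/env python3
# Flag file and folder names that fall outside a conservative "NAS-safe" set:
# letters, numbers, spaces, periods, dashes, and underscores only,
# with no leading or trailing spaces.
import os
import re
import sys

SAFE_NAME = re.compile(r"^[A-Za-z0-9 ._\-]+$")

def unsafe_names(folder):
    flagged = []
    for root, dirs, files in os.walk(folder):
        for name in dirs + files:
            if name.startswith(" ") or name.endswith(" ") or not SAFE_NAME.match(name):
                flagged.append(os.path.join(root, name))
    return flagged

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in unsafe_names(target):
        print(f"Rename before copying to the NAS: {path}")
```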

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But, you cannot mix and match drive types or sizes within the chassis.

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives with a power supply. Normally these don’t have to be the same capacity as the original chassis, if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN systems are RAID-protected in various configurations. RAID protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without you losing all of your media. However, that takes overhead, which means you must give up some of the total capacity to enable this data protection.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is going bad or goes bad, the unit will continue to operate, but with reduced performance. In some cases that may not be noticed by the operator. When a drive goes bad, it can be replaced by a matching raw drive and the unit will rebuild the RAID data, which redistributes it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can operate during this rebuilding period, I have found that, in actual practice, performance is so bad that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but at the cost of available storage. Different manufacturers have different ways of handling RAID configurations, so there is no rule of thumb as to what percentage you will lose with every NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
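To put numbers on that, here is a small worked example in Python using the gross and net figures quoted above. The fill factor reflects the 80-90% guideline; the ratios are derived from those two examples only and are illustrations, not vendor specifications.

```python
#!/usr/bin/env python3
# Rough working-capacity arithmetic for a RAID-protected NAS,
# using the gross/net figures quoted in the article as examples.

def usable_capacity(gross_tb, net_tb, fill_factor=0.85):
    """Return RAID overhead and the recommended working ceiling."""
    overhead = gross_tb - net_tb      # capacity lost to RAID redundancy
    working = net_tb * fill_factor    # stay under roughly 80-90% of net
    return overhead, working

examples = {
    "QNAP (example from text)":      (256, 206),
    "LumaForge (example from text)": (480, 316),
}

for name, (gross, net) in examples.items():
    overhead, working = usable_capacity(gross, net)
    print(f"{name}: {gross} TB gross -> {net} TB net "
          f"({overhead} TB RAID overhead), ~{working:.0f} TB practical working space")
```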

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for runs longer than 100′ or if the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job, we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals – all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the two NAS systems.

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, performance suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one of the systems while editing proceeded on the others, slowed performance. Image sequence files really hurt overall system performance. You could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie) the key difference in the two systems is architecture. Unlike our QNAP unit, which uses a network switch, we opted for enough on-board cards on the Jellyfish to enable a direct run to all nine workstations without a separate network switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of QNAP, though. It has been very robust and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to Jellyfish), as well as an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We configured Jellyfish with a tenth Ethernet direct port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to Jellyfish gives them the ability to work with media files without tying up editing workstations.

The acquisition of the Jellyfish system has proven itself over time. Direct head-to-head performance between Jellyfish and QNAP with a small project or a few media files is not that dramatically different. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media without the prerequisite of creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact performance of the other edit sessions. Image sequences are easier to deal with. I can render to and from Jellyfish when I work grading sessions on Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

©2019 Oliver Peters

Handling and Protecting Media

Once the industry entered the file-based era, we realized that dealing with and properly archiving audio and video files could make or break a production company. No more videotapes on the shelf to pull footage from. Unfortunately many companies, producers, clients, and editors simply solved this with a hodgepodge of small, portable drives – Firewire, USB, Thunderbolt, whatever. That’s no longer practical. A typical 10-day, 4K shoot with a handful of formats can easily generate 8-10TB of original footage. That’s if the production is structured. Make that a 2-3 weeklong documentary or reality-style production and you’ll have closer to 20-30TB. Not exactly something you want to deal with in post using a bunch of orange LaCie drives!

The road to safeguarding your files

At the day job, we were able to invest in a LumaForge Jellyfish shared storage network (NAS). It’s 480TB, which sounds like a lot, but after RAID protection the available net capacity is 316TB. And you only want to use up to 80%-90% of that for the most efficient operation. While it still sounds like a lot of storage, it is a finite amount. This means that you need to develop a strategy for archiving older projects and the associated media, but yet easily find and restore it later for revisions.

Cloud storage remains a pipe dream at these quantities. LTO data tape back-up is also impractical, because of its linear read/write nature. It is only intended for deep storage archiving. Facilities that have attempted to use LTO as a type of near-line storage – with frequent restores, updates, and subsequent re-archiving – have worn out their LTO tapes long before the rated life.

Efficient media handling starts when a project or production is first originated. In our case, every new project gets a folder on the Jellyfish and inside that folder is a standard group of subfolders for the corresponding project files, graphics, exports, and source footage. We assign all projects a job number for billing and that number is part of the top-level folder name, as well as in any project file name. This default template starting point is generated for each new production using the Post Haste application.
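We use Post Haste to generate that template, but the idea is simple enough to sketch in a few lines of Python. The subfolder names and the job-number format below are hypothetical placeholders meant to illustrate the structure, not our exact template.

```python
#!/usr/bin/env python3
# Create a standard project folder template keyed to a job number.
# The folder names and numbering scheme are illustrative placeholders.
import os
import sys

SUBFOLDERS = [
    "01_PROJECT_FILES",
    "02_GRAPHICS",
    "03_SOURCE_MEDIA",
    "04_AUDIO",
    "05_EXPORTS",
]

def create_project(root, job_number, client, title):
    project_name = f"{job_number}_{client}_{title}"
    project_path = os.path.join(root, project_name)
    for sub in SUBFOLDERS:
        os.makedirs(os.path.join(project_path, sub), exist_ok=True)
    return project_path

if __name__ == "__main__":
    # e.g.  python make_project.py /Volumes/JELLYFISH/PROJECTS 19-1042 ACME FallCampaign
    root, job, client, title = sys.argv[1:5]
    print("Created", create_project(root, job, client, title))
```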

The location crew

On location all media is copied daily (with verification using the Hedge application) to both master and back-up drives. Depending on the size of the crew, this is the responsibility of the DIT, assistant cameraman, or the director of photography. On large productions, the cost of these drives is built into the budget and they later end up being stored on the shelf for safe keeping. On smaller jobs (or some fast turnaround jobs) temporary, fast SSDs are used, which will later be reused on other projects.

Post starts here

The next step back at the shop is to copy all of this material from the location drives onto the Jellyfish into that project’s Source Media or Dailies subfolder. Once copied, I will proceed to clean up and reorganize all media into subfolders according to this hierarchy:

DATE / CAMERA / REEL

For example: 092819/A-CAMERA_ALEXA/A001

Or outside of the US, maybe: 28SEPT19/A-CAMERA_ALEXA/A001

If a camera file is buried several folders deep – due to the camera card structure or an error made by the crew member on location – I will move those files to the top level within the REEL subfolder without any other levels in between. Camera folders, like DCIM, CLIP, etc., are thus orphaned and so deleted from Jellyfish. Remember that I still have the original master drive from the location, which will sit on the shelf. If I ever need to get back to the file in its original container, I have that option.
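That flattening step can also be scripted. The sketch below moves every media file found under a reel's card structure up to the top of the REEL folder and then removes any camera subfolders that have been left empty; the extension list and paths are assumptions for illustration, and it should only ever be run on the working copy on the NAS, never on the original location drive.

```python
#!/usr/bin/env python3
# Flatten a camera-card folder structure: move media files to the top of the
# REEL folder, then remove the now-empty camera subfolders (DCIM, CLIP, etc.).
# Run this only on the working copy on the NAS -- never on the location master.
import os
import shutil
import sys

MEDIA_EXTENSIONS = {".mov", ".mp4", ".mxf", ".braw", ".r3d", ".wav"}  # illustrative list

def flatten_reel(reel_folder):
    for root, _dirs, files in os.walk(reel_folder):
        if root == reel_folder:
            continue  # files already at the top level stay put
        for name in files:
            if os.path.splitext(name)[1].lower() in MEDIA_EXTENSIONS:
                src = os.path.join(root, name)
                dst = os.path.join(reel_folder, name)
                if os.path.exists(dst):
                    raise RuntimeError(f"Duplicate file name, not moving: {src}")
                shutil.move(src, dst)
    # remove emptied subfolders, deepest first
    for root, dirs, _files in os.walk(reel_folder, topdown=False):
        for d in dirs:
            path = os.path.join(root, d)
            if not os.listdir(path):
                os.rmdir(path)

if __name__ == "__main__":
    flatten_reel(sys.argv[1])  # e.g. .../092819/A-CAMERA_ALEXA/A001
```

Note that the sketch only deletes folders that end up empty, so any sidecar files it doesn't recognize are left in place for a manual decision.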

I discussed relinking strategies in the previous post and that comes into play here. Files from semi-pro and non-pro cameras, like DSLRs, GoPros, iPhones, etc., will have a prefix prepended to the file name using the Better Rename application. The name is typically a short 8-10 character alphanumeric string to indicate a job name reference, date, camera letter, and reel.

For instance, a file from the B-camera’s reel 7 for a production done for project ABC on September 28th would get the prefix “ABC0928B07_”. The camera-generated clip name would follow the underscore in that name. The point of doing this is to guarantee unique file names, especially when multiple cameras and filming days are involved. I also apply this process to sound files, even if the clip name reflects the scene and take number.
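We do this with Better Rename, but the same prefixing is easy to sketch in Python. The prefix pattern below just mirrors the example in the text (project reference, date, camera letter, reel); treat it as an illustration rather than a prescribed format.

```python
#!/usr/bin/env python3
# Prepend a unique job/date/camera/reel prefix to every file in a folder,
# mirroring the "ABC0928B07_" example from the text. Illustrative sketch only.
import os
import sys

def prefix_files(folder, job, mmdd, camera, reel):
    prefix = f"{job}{mmdd}{camera}{int(reel):02d}_"
    for name in sorted(os.listdir(folder)):
        src = os.path.join(folder, name)
        if not os.path.isfile(src) or name.startswith(prefix):
            continue  # skip subfolders and files that were already renamed
        os.rename(src, os.path.join(folder, prefix + name))

if __name__ == "__main__":
    # e.g.  python prefix_clips.py .../B-CAMERA/B007 ABC 0928 B 7
    folder, job, mmdd, camera, reel = sys.argv[1:6]
    prefix_files(folder, job, mmdd, camera, reel)
```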

The last step is to transcode and rate-convert all non-pro media. If my base rate is 23.98fps (23.976), then files like GoPro 59.94fps media get turned into ProRes at 23.98 (slomo). In that case, I will have a subfolder with the original media and a second subfolder with the transcoded media, both with proper file names. I usually apply the “_PR2398” suffix to these transcoded files. I have found that DaVinci Resolve is the best and fastest tool for this transcoding process and large batches can be run overnight as needed.
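I do this transcoding in DaVinci Resolve, but for readers who prefer a command-line route, the same kind of conform can be sketched with ffmpeg instead. The Python wrapper below retimes 59.94fps clips to 23.976fps ProRes 422 by stretching the timestamps by 2.5x and dropping the audio; it is a stand-in for the Resolve batch described above, and the file suffix and folder layout are placeholders.

```python
#!/usr/bin/env python3
# Conform 59.94fps clips to 23.976fps ProRes 422 "slomo" files with ffmpeg.
# An ffmpeg-based stand-in for the Resolve batch described in the text;
# the "_PR2398" suffix and folder layout are placeholders.
import os
import subprocess
import sys

def conform_to_2398(src, dst_folder):
    name = os.path.splitext(os.path.basename(src))[0]
    dst = os.path.join(dst_folder, f"{name}_PR2398.mov")
    cmd = [
        "ffmpeg", "-i", src,
        "-vf", "setpts=2.5*PTS",                 # 59.94 -> 23.976 means stretching time 2.5x
        "-r", "24000/1001",                      # force a 23.976 constant-frame-rate output
        "-c:v", "prores_ks", "-profile:v", "2",  # ProRes 422
        "-an",                                   # drop audio; it would no longer be in sync
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    src_folder, dst_folder = sys.argv[1:3]
    os.makedirs(dst_folder, exist_ok=True)
    for f in sorted(os.listdir(src_folder)):
        if f.lower().endswith((".mp4", ".mov")):
            conform_to_2398(os.path.join(src_folder, f), dst_folder)
```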

Archiving your files

If the crew used temporary drives on location, then before these are reformatted and recycled, they are copied to inexpensive portables, like Seagate or Western Digital USB drives. These are then parked on the shelf for safe keeping. The objective is to end up with at least two copies of the source media – the unaltered, camera original files and the new, master files on the Jellyfish.

Once editing has been completed and approved and the client files have been delivered, we move into the archiving stage. For nearly every project, we try to make sure that a ProRes master and a textless ProRes master have been generated by the editor. In addition, the mixer or the editor will generate a mixed audio file and audio stems for dialogue, SFX, and music (as separate files). Many times, you end up making future changes or versions using these files without going back to the original project file.

The entire project folder with all of the associated media is now copied to a raw, removable hard drive. These are enterprise-grade drives. All of our workstations are equipped with docking stations for such drives. To date, we are up to 200 drives, ranging in size from 2TB to 8TB. They are indexed using the simple DiskCatalogMaker application, which generates a searchable index file of all of these archive drives. (Note – I would recommend spinning up these archive drives every few months.)
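DiskCatalogMaker is what generates the searchable index in our case. As a rough picture of what such a catalog amounts to, here is a minimal Python sketch – with hypothetical volume and catalog paths – that appends every file on an archive drive to a CSV that can be searched without mounting the drive.

```python
import csv
from pathlib import Path

def index_drive(volume, catalog_csv):
    """Append one row per file (drive name, relative path, size in bytes) to a catalog CSV
    so archived drives can be searched later without mounting them."""
    volume = Path(volume)
    with open(catalog_csv, "a", newline="") as f:
        writer = csv.writer(f)
        for item in volume.rglob("*"):
            if item.is_file():
                writer.writerow([volume.name, str(item.relative_to(volume)), item.stat().st_size])

# Hypothetical archive drive and catalog location
index_drive("/Volumes/ARCHIVE_0137", "/Volumes/Jellyfish/CATALOG/archive_index.csv")
```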

Let me mention that while this can be done at the end, I will often split this archival step into two phases. I will first copy only the Dailies media right after I have organized it on Jellyfish (before any editing), leaving the other project subfolders blank. The reason is that once location production is done, there won’t be anything else added to Dailies. In addition, it gives me three copies of the camera files – the location drive (or its back-up), Jellyfish, and the archive drive. Once the project is finished, I only need to copy the rest of the material from the other subfolders.

The last step is to move the project folder from the PROJECTS master folder on Jellyfish to the BACKED UP master folder. As long as we have space on Jellyfish, the project is never deleted. Often changes are required. When that happens, the affected project folder is moved from BACKED UP to PROJECTS again. The changes are made and client files delivered. Then the archive drive for that project is updated and re-indexed to the DiskCatalogMaker catalog file. The project file is finally returned to the BACKED UP folder. As we need space on Jellyfish, the oldest projects that haven’t been touched in a long while are deleted.

Redundancy is the key

There are two additional protection steps taken. All active project files (usually Premiere Pro) are copied to the company’s DropBox by every editor at the end of each day. In the event of a catastrophic NAS failure – before the completion of that project – we can at least get to the project file in the cloud (DropBox) and the media stored on the archived hard drives in order to restore the edit. (Note that if you do this with FCPX Libraries, they must first be “zipped,” because DropBox and FCPX Libraries do not play well together.)
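Zipping a Library before it lands in the synced folder can be a one-liner. A minimal sketch, assuming hypothetical Library and DropBox paths:

```python
import shutil
from pathlib import Path

def zip_fcpx_library(library_path, dropbox_folder):
    """Zip an FCPX Library bundle before copying it to a synced cloud folder,
    since syncing the raw .fcpbundle package is unreliable."""
    library = Path(library_path)
    archive = Path(dropbox_folder) / library.stem          # ".zip" is appended automatically
    shutil.make_archive(str(archive), "zip", root_dir=str(library.parent), base_dir=library.name)

# Hypothetical paths
zip_fcpx_library("/Volumes/Jellyfish/PROJECTS/ABC/ABC_Edit.fcpbundle",
                 "/Users/editor/Dropbox/ProjectBackups")
```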

The second item is that we have an additional folder on Jellyfish for all completed masters. When an editor generates ProRes master and/or textless files, those files are also copied to this masters folder. That gives us quick access to all final versions, should the client require an extra web file or some other type of deliverable. It’s easy to simply encode new files from these ProRes masters, without needing to search out the original project folder.

These steps may sound complex and daunting if you aren’t currently doing them. I have covered some of this in past posts, but I do update my processes over time. Once you get into a routine of doing these steps, the benefits pay off immensely. Your media is better protected, it’s easier to find in the future, and relinking is a no-brainer.

©2019 Oliver Peters

Foolproof Relinking Strategy

Prior to file-based camera capture, film and then videotape were the dominant visual acquisition technologies. To accommodate them, post-production adopted a two-stage solution: work print editing + negative conform for film, offline/online editing for video. During the linear editing era, high-res media on tape was transferred to a low-res tape format, like 3/4″, for creative editing (offline). The locked cut was assembled and enhanced with effects and graphics in a high-end online suite using an edit decision list and the high-res media. The inherent constraints of tape formats forced consistency in media standards and frame rates.

In the early nonlinear days, storage capacities were low and hard drives expensive, so this offline/online methodology persisted. Eventually storage could cost-effectively handle high-res media, but this didn’t eliminate these workflows. File-based camera acquisition has brought down operating cost, but the proliferation of formats and ever-increasing resolutions have meant that there is still a need for such a two-stage approach. This is now generally referred to as proxy versus full-resolution editing. The reasons vary, but typically it’s a matter of storage size, system performance, or the capabilities of the systems and operator/artist running the finishing/full-res (aka “online”) system.

All of this requires moving media around among drives, systems, locations, and facilities, thus making correct list management essential. Whether or not it works well depends on the ability to accurately relink media with each of these moves. Despite the ability of most modern NLEs to freely mix and match formats, sizes, frame rates, etc., ignoring certain criteria will break media relinking. You must be able to relink the same media between systems or between low and high-res media on the same or different systems.

Criteria for successful relinking

– Unique file names that match between low and high-res media (extensions are usually not important).

– Proper timecode that does not repeat within a single clip.

– A single, standard frame rate that matches the project’s base frame rate. Using conform or interpret functions within an NLE to alter a clip’s frame rate will mess up relinking on another system. Constant speed changes (such as slomo at 50%) are generally OK, but speed ramp effects tend to be proprietary with every NLE and typically do not translate correctly between different edit or grading applications.

– Match audio configurations between low and high-res media. If your camera source has eight channels of audio, then so must the low-res proxy media.

– Match clip duration. High-res media and proxies must be of the exact same length. (A quick verification sketch follows this list.)

– Note that what is not important is matching frame size or codec or movie wrapper type (extension).
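Several of these criteria can be checked mechanically before you attempt a relink. Here is a minimal sketch, assuming ffprobe is installed and using hypothetical file paths, that compares a master and its proxy on file name, duration, and audio channel count.

```python
import json
import subprocess
from pathlib import Path

def probe(path):
    """Return duration (seconds) and total audio channel count for a clip via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", "-show_streams", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    duration = float(info["format"]["duration"])
    audio_channels = sum(s.get("channels", 0) for s in info["streams"] if s["codec_type"] == "audio")
    return duration, audio_channels

def check_pair(master, proxy, tolerance=0.05):
    """Flag a proxy whose name, duration, or audio layout no longer matches its master."""
    problems = []
    if Path(master).stem != Path(proxy).stem:          # extensions may differ, names must not
        problems.append("file names differ")
    m_dur, m_ch = probe(master)
    p_dur, p_ch = probe(proxy)
    if abs(m_dur - p_dur) > tolerance:
        problems.append(f"duration mismatch ({m_dur:.2f}s vs {p_dur:.2f}s)")
    if m_ch != p_ch:
        problems.append(f"audio channels differ ({m_ch} vs {p_ch})")
    return problems

# Hypothetical master/proxy pair
print(check_pair("/Volumes/Jellyfish/MASTERS/ABC0928B07_C0012.mov",
                 "/Volumes/Proxies/ABC0928B07_C0012.mov"))
```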

Proxy workflows

Several NLE applications – particularly Final Cut Pro X and Premiere Pro – offer built-in proxy workflows, which automatically generate proxy media and let the editor seamlessly toggle between full-res and proxy files. These are nice as long as you don’t move files around between hard drives.

In the case of Premiere Pro, you can delete proxy files once you no longer need them. From that point on you are only working with full-res media. However, the Premiere project continues to expect the proxy files to be available and wants to locate them when you launch the project. You can, of course, ignore this prompt, but it’s still hard to get rid of completely.

With FCPX, any time you move media and the Library file to another drive with a different volume name, FCPX prompts a relink dialogue. It seems to relink master clips just fine, but not the proxy media that it generated IF stored outside of the Library package. The solution is to set your proxy location to be inside the Library. However, this will cause the Library file to bloat in size, making transfers of Library files between drives and editors that much more cumbersome. So for these and other reasons (like not adhering strictly to the criteria listed above) relinking can often be problematic to impossible (Avid, I’m looking at you).

Instead of using the built-in proxy workflows for projects with extended timetables or huge amounts of media, I prefer an old-school method. Simply transcode everything, work with low-res media, and then relink to the master clips for finishing. Final Cut Pro X, Premiere Pro, and Resolve all allow the relinking of master clips to different media if the criteria match.

Here are five simple steps to make that foolproof.

1. Transcode all non-professional camera originals to a high-quality mastering codec for optimized performance on your systems. I’m talking about footage from DSLRs, GoPros, drones, smart phones, etc. On Macs this will tend to be the ProRes codec family. On PCs, I would recommend DNxHD/HR. Make sure file names are unique (rename if needed) and that there is proper timecode. Adjust frame rates in the transcode if needed. For example, 29.97fps recordings for a playback base rate of 23.98fps should be transcoded to play natively at 23.98fps. This new media will become your master files, so park the camera originals on the shelf with the intent of never needing them (but for safety, DO NOT erase).

2. Transcode all master clips (both pro formats like RED or ARRI, as well as those transcoded in step 1) to your proxy format – see the sketch after these steps. Typically this might be ProRes Proxy at a lower frame size, like 1280 x 720. (This is obviously an optional step. If your system has sufficient performance and you have enough available drive space, then you may be able to simply edit with your master source files.)

3. Edit with your proxy media.

4. When you are ready to finish, relink the locked cut to your master files – pro formats like RED and ARRI – and/or the high-res transcodes from step 1.

5. Color correct/grade and add any final effects for finish and delivery.
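To illustrate step 2 in a way that preserves the relinking criteria above – same file name, same length, same audio layout – here is a hypothetical Python sketch that batch-creates ProRes Proxy files with ffmpeg. The folder paths are assumptions, and timecode carry-over into the proxies should be verified in your NLE.

```python
import subprocess
from pathlib import Path

MASTERS = Path("/Volumes/Jellyfish/PROJECTS/ABC/MASTERS")   # hypothetical
PROXIES = Path("/Volumes/Proxies/ABC")                      # hypothetical
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(MASTERS.glob("*.mov")):
    out = PROXIES / clip.name                # identical file name, only the folder differs
    # Same length and audio layout as the master -- only the picture is downscaled
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-map", "0:v:0", "-map", "0:a?",     # first video stream plus every audio stream
        "-vf", "scale=1280:720",
        "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
        "-c:a", "pcm_s16le",
        str(out),
    ], check=True)
```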

©2019 Oliver Peters

Rocketman

The last two years have been rich for film audiences interested in the lives of rock legends. Rocketman was this year’s stylized biography about Elton John. Helmed by British actor/director Dexter Fletcher and starring Taron Egerton of the Kingsman film series, Rocketman tells John’s life through his songs. Astute film buffs also know that Fletcher was the uncredited, additional director who completed Bohemian Rhapsody through the end of principal photography and post, which invites obvious comparisons between the two rock biopics.

Shepherding Rocketman through the cut was seasoned film editor Chris Dickens. With experience cutting comedies, dramas, and musicals, Dickens is impossible to pin down to any particular film genre. I had recently interviewed him for Mary Queen of Scots, which was a good place to pick up this conversation about editing Rocketman.

__________________________________________

[OP] Our last conversation was about Mary Queen of Scots. I presume you were in the middle of cutting Rocketman at that time. Those are two very different films, so what brought you to edit Rocketman?

[CD] I made a quick shift onto Rocketman after Mary Queen of Scots. It was a fast production with eight or nine months filming and editing. The project had been in the cards a year before and I had met with Dexter to discuss doing the film. But, it didn’t happen, so I had forgotten about it until it got greenlit. I like musicals and have done one before – Les Miserables. This one was more ambitious creatively. Right from the beginning I liked the treatment of it. Rocketman was a classic kind of musical, but it was different in that the themes were adult and had a strong visual sense. Also the treatment using Elton John’s songs and illustrating his life with those was interesting.

[OP] The director had a connection with both Rocketman and Bohemian Rhapsody. Both films are about rock legends, so audiences may draw an obvious comparison. What’s your feeling about the contrast between these two films?

[CD] Obviously, there are a lot of similarities. Both films are essentially rock biopics about a musical figure. Both Freddie and Elton were gay. So that theme is similar, but that’s where it ends. Bohemian Rhapsody was aimed at a wider audience, i.e. less adult material – sex and drug-taking – things like that. And secondly, it’s about music, but it’s not a musical. It’s always grounded in reality. Characters don’t get up and sing to the camera. It’s about Freddie Mercury and Queen and their music. So the treatment of it is very different. Another fundamental difference is that Elton John is still alive and Freddie Mercury is not, so that was right at the film’s core. From the start you know that, so it has a different kind of power.

[OP] Whenever a film deals with popular music – especially when the rights-owners are still alive and active – the treatment and use of that music can be a sticking point. Were Elton John or Bernie Taupin actively involved in the production of Rocketman?

[CD]  Yes, they were. Bernie less so – mainly Elton. He didn’t come into the edit room that much, but his husband, David Furnish, was a bit more involved. Elton is not someone who goes out in public that much, except to perform. He’s such a massive star. But he did watch cuts of the film and had notes – not at every stage – and David Furnish was the conduit between us and him. Naturally, Elton sanctioned all of the music tracks that were used. But the film was not made by them, i.e. we were making the film and they were giving us notes.

[OP] How were the tracks handled? Was the music remixed from the original studio masters with Taron lip-syncing to Elton’s voice – or was it different?

[CD] The music was radically changed in some cases from the original – the arrangements, the scoring. The music was completely re-recorded and sung by Taron, the actor playing Elton. We evolved the choices made at the beginning during the edit. So alongside of the picture edit was a music edit and a music mix going on constantly. In some cases Taron was singing on-set and we used that for about a quarter of the tracks. These were going in and out of scenes that had natural dialogue. Taron would start singing and we would play the track underneath. Then at that point perhaps, he would start lip-syncing, so it was a combination. On some tracks he was completely lip-syncing to what he had recorded before. This set the tempo for those scenes, but the arrangements evolved during the edit.

Even when he was lip-syncing, it was to his own voice. The whole idea was that the singing would not be Elton, except at the end where we have a track with both singing in the credits roll. So it’s a key thing that these were new recordings. Giles Martin, son of the legendary George Martin, was the music producer who took care of everything and put up with our constant changes. We had a team of two music editors who worked alongside us and a score as well, written by Matt Margeson, which we were rolling into the film in places. It was a real team process of building the film slowly.

[OP] Please expand on the structure of a film musical and what it takes to edit one.

[CD] The editing process was challenging, because of the complex structure. It was fundamentally a musical, with fifteen or sixteen tracks – meaning songs or music numbers – that were initially planned to be shot. Some of these were choreographed song-and-dance sequences. Combined with that was a sort of kitchen sink drama about Elton’s life, his childhood, his teenage years, and then into manhood. And then becoming a superstar. The script has the songs and then long sequences of more classic storytelling. What I found – slowly, as we were putting the film together, even during the shoot – was that we needed to unify those two things within the edit.

For instance, the first song number in the movie is “The Bitch is Back.” It’s a dance sequence with Elton as a boy walking down the street while people are singing and dancing around him. Then his adult self is chasing him around. It’s a very stylized sequence, which then went into about an eight minute sequence of storytelling about his childhood. We needed to give the film the same tone all the way through, i.e. that slightly fantastical feel of a musical. We screened it a few times for some of the core people and it became clear that we wanted to go with the fantastical elements of the film, not the more down-to-earth, realistic elements. Obviously, you could have made the choice to cut back on the music, but that seemed counter-intuitive. So we had to make some deep cuts in the sections between the musical pieces to get the story to flow and have that same kind of tone.

There was also a flashback structure. The film starts with Elton later on as an adult in rehab, after having fallen into drug and alcohol addiction. We framed the film with this device, so it was another element that we had to make work in the edit to get it to feel as an organic part of the story. We found that we didn’t have enough of these rehab sequences and had to shoot a few more of them during the edit to knit the film together in this way in order to remind you that he was telling this story – looking back on his past.

Cutting back sections between the musical numbers wasn’t our only solution to get the right tone. We had to work out how to get in and out of the musical sequences and that’s where the score comes in. I played with this quite a lot with the composer and Giles to have themes from Elton’s songs coming throughout the film. For example, “Goodbye Yellow Brick Road” had some musical themes in it that we started using as the theme that went with his rehab. The theme of the film is that Elton lost any sense of where he came from as a person, because of his stardom, and “Goodbye Yellow Brick Road” – the song – is about that. It’s actually about going back to the farm and your roots. The song isn’t actually in the film until the very end when he performs it. So we found that using this musical theme as a motif throughout the film was very powerful and helped to combine the classical storytelling scenes with the musical scenes.

[OP] Was this process of figuring out the right balance something that happened at the beginning and then became a type of template for the rest of the film? Or was it a constant adjustment process throughout the cutting of Rocketman?

[CD] It was a constant thing trying to make the film work as a whole so people wouldn’t be confused about the tone. At one point we had far too much music and had to take some out. It became very minimal in some areas. In others, it led you more. It was about getting that balance right all the way through. I’m primarily a picture editor, but on this film you couldn’t just concentrate on the picture and then leave the music to the music editors and composer, because it was absolutely a fundamental part of the film. It was about music and so how you were using music was very key within the edit. Sometimes we had to cut longer songs down. Very few are at their original length. Some are half their recorded length.

[OP] This process sounds intriguing, since the scenes use a song as the underlying building block. Elton John’s songs tend to be pop songs – or at least they received a lot of radio airplay – so did those recorded lengths tend to drive the film?

[CD] No. At first I thought we’d have to be very faithful, but as we started cutting, the producers – and particularly Elton John’s side of it – didn’t care whether we cut things down or made them longer or added bits. They weren’t precious about it. In fact, they wanted us to be creative. The producers would say, “Don’t worry about cutting that down, Giles will deal with it.” Of course he would. Although sometimes he’d come back to me and say, “Look, this doesn’t quite work musically. You need to add a bit more time to this, or another couple of bars of music.” So we had a whole back-and-forth process like that.

For instance, in the track “Rocketman,” which is the film’s centerpiece, Elton tries to commit suicide. He’s at a party, gets drunk, and jumps into the swimming pool. While he’s underneath he starts seeing visions of himself as a child under there. He starts singing and gets fished out of the pool and then put on stage in a stadium. It’s a whole sequence that’s been planned to play like that. Of course, I couldn’t fit what they’d shot into the song – there wasn’t enough time. It was all good stuff, so I added a few bars. I’d give it to music and they’d say, “Oh, you can’t add that in that way.” So I’d go back and try different ways of doing it.

At the end, when he’s put back on stage at Dodger Stadium, he’s in a baseball uniform and then fires into the air like a rocket. They shot it in a studio without a big crowd and it looked okay. As soon as we started getting the visual effects, we thought, “Wow. This looks great.” So we doubled the length of that – added on, repeated the chorus, and all of that – because we thought people were going to love this. It looked and sounded great. But, when we then tested the film, it was way too long. It had just outstayed its welcome. We then had to cut it down again, although it was still longer than they’d originally planned it.

[OP] With a regular theatrical musical, the songs are written to tell the story. Here, you are using existing songs that weren’t written with that story in mind. I presume you have to be careful that you don’t end up with just a bunch of music videos strung back-to-back.

[CD] Exactly. I don’t think we ever strayed into that. It was always about – does it make its point? These songs were written at all times in his career, but we didn’t use them in their original chronological order. “Honky Cat” was written later than when we used it. He’s just getting successful and at the end of “Honky Cat” they are buying Rolls Royces and clothes and football teams. At the end of that there was a great song-and-dance routine with them dancing on a record – Elton and John Reid, his manager and also a kind of boyfriend. That part went on for two minutes and we ended up cutting it out. Partly because people and the producers who saw it thought it wasn’t the right style. It had a kind of 1920s or 1930s style with lots of dancers. It was a big number and took a long time to edit, but we took it out. I thought it was quite a nice sequence, but most people thought the film was better without it, because it wasn’t moving the story on.

[OP] Other than adjusting scenes and length, did friends-and-family and test audience screenings change your edit significantly?

[CD] We did three big screenings in Los Angeles, San Francisco, and Kansas City, plus a number of smaller ones in England. The audiences were a mix of people who were Elton John fans, as well as those that weren’t. Essentially people liked the film right from the start, but the audiences weren’t getting some parts, like the flashback structure with the rehab scenes – particularly at the beginning. They didn’t really understand what he was singing about.

That first song [“The Bitch Is Back”] caused a lot of difficulty, because it starts the film and says this is a musical. You have to handle that the right way. I think the initial problems were partly in how I had cut the sequence originally. I tried to show too much of the crowd around him and the dancers, and I thought that was the way to go with it. Actually, what turned out to be the way to go was the relationship between the two of them – Elton and Elton as the little boy – because that’s what the song was about. I then readjusted the edits, taking out a lot of the wide shots.

Also Taron had done some improvised dialogue to the little boy rather than just singing all the way through – dialogue lines like, “Stop doing that.” That was in the film a long time, but people didn’t like it and didn’t understand why he was angry with the boy. So we cut that out completely. Another issue was that right at the start, the little boy starts singing to Taron as Elton first, but audiences did not feel comfortable with it. We discussed it a lot and decided that the lead actor should be the one we hear singing first. We did a reshoot of that beginning portion of the scene. You have to let the audience into it more slowly than we had originally done. That’s a prime example of how editing decisions can lead to additional filming to really make it work.

[OP] You mentioned visual effects to complete the “Rocketman” scene. Were there a lot of effects used to make the film period-accurate or just for visual style?

[CD] Quite a lot, though not excessively, like a comic book movie. I imagine it was similar to Bohemian Rhapsody, which had to shoot gigs and concerts in places where you couldn’t go now and film. But our visual effects weren’t as fundamental in that I didn’t need them to cut with. The boy underwater was all created, of course. Taron in the pool was actually him underwater, because he had breathing apparatus. But the little boy couldn’t, so he was singing ‘dry for wet’ – shot in the studio and put into the scene later. There were different evolutions of that scene. In one version we took the boy out completely and just had Taron singing.

The end of the film as written was going to be a re-imagined version of Elton John’s “I’m Still Standing” music video, which is on the beach in Cannes, shot in the 80s. The idea was to go there and shoot it with a lot more dancers. By the time the film was being shot, the weather changed and we couldn’t shoot that sequence. That whole ending was shot later, partly in a studio. Because we couldn’t afford to go to Cannes and reshoot the whole thing, someone was able to get the original rushes from that music video, which had been shot on 16mm film, but edited on videotape. We had to get permission from the original director of that music video and he was very happy for us to do it. We had the 16mm film rescanned and also removed the grain. Instead of Elton, we put Taron into it.  In every shot with Elton, we replaced his head with Taron’s and that became the ending sequence of the film. As a visual effect, that took quite a leap of faith, but it did work in the end. That wasn’t the original plan, but I think it’s better.

[OP] In Bohemian Rhapsody there was a conscious consideration of matching the Live Aid concert angles and actions. Was there anything like that in Rocketman?

[CD] There was no point in trying to do that on Rocketman. It was always going to be stylized and different from reality. We staged Dodger Stadium the way it looked, but we didn’t try to match it. The original concert was late afternoon and ours is more towards night, which was visually better. The visual inspiration came from the stills taken by a famous rock photographer and they look a little more like night. At one point we talked about having a concert at the end and we tried shooting something, but it just didn’t feel right. We were going to get compared to BoRhap anyway, so we didn’t want to even try and do something the same way.

[OP] Any final thoughts or advice on how to approach a film like Rocketman?

[CD] Every movie is different. Every single time you come to a story, you nearly have to start again. The director wants to do it a certain way and you have to adapt to that. With some of the dramas or comedies that I’ve cut, it’s a less immediate process. You don’t really know how the whole thing is coming together until you get a sense of it quite late. With this, they shot a few of the song sequences early and as soon as I saw that, I thought right away, “Oh, this is great.” You can build a quick three-minute sequence to show people and you get a feel for the whole film. You can get excited about it. On a drama or even worse, on a thriller, you’re guessing how it’s coming together and you’re using all of your skills to do that.

The director and the story are the differences and I try to adapt. Dexter wanted the film to be popular, but also distinctive. He wanted to see very quickly how it was coming together. As soon as he was done filming he wanted to go to the edit and see how it was coming along. In that scenario you try to get some things done more quickly. So I would try to get some sequences put together knowing that, and then come back to them later if you’ve rushed them.

Since it’s a musical you could string together the songs and get a feel, but that would be misleading. When you start off you can produce a sequence very quickly that looks good, because you’ve got the music that makes it feel almost finished and that it’s working. But that can lead you into a dead end if you’re not careful – if you are too precious about the music – the length of it and such. You still have to be hard about the storytelling element. Ultimately all of the decisions come from the story – how long the scene is, whether you start on a close-up or a wide – I always try to approach everything like that. If you keep that in your head, you’ll make the right decisions.

©2019 Oliver Peters