Game Changers

Thanks to Apple’s recent “Scary Fast” event and the subsequent BTS video revealing that the content was shot with an iPhone, friends and I have had conversations about game-changing technology. The knee-jerk response of fanboys any time a high-profile filmmaker does anything with an iPhone is to proclaim what a game changer it will be to produce an entire feature film on one. While I don’t doubt that will eventually happen, analysis of Apple’s BTS video makes it clear that an awful lot went into overcoming the limitations inherent in the iPhone camera system. But this isn’t the first time the term “game changer” has been applied to new and interesting technologies.

Over the years, many technologies for production and post have been viewed as game changers. Think of things like 3D stereoscopic films, VR/AR glasses, the Lytro light-field camera, and others. In nearly all cases, the impact – when it did actually materialize – changed an element in the process, but not the workflow. For example, in the transition from photochemical film processes to video/digital, the film workflow components (labs, negative cutting, color timing, etc) were simply replaced by their electronic equivalents (DITs, conforming/finishing, digital grading, etc).

When the RED Digital Camera Company brought the RED One to market, it was by no means the first nor only digital “film” camera. While RED One did expand the boundaries of resolution from HD to 4K and beyond, the company did not end up “owning” the digital acquisition market. This eventually fell to ARRI. Although later to market, the Alexa has become the gold standard for most filmmakers. Sure, a lot of films are shot with RED and Sony cameras, but it’s ARRI Alexas that dominate the top tier of films.

There are many reasons for ARRI over RED, in spite of the fact that on paper RED’s cameras might seem better. Maybe it’s because RED cameras originally required non-standard support gear, or because the film community may have been turned off by RED founder Jim Jannard’s bravado. While these might all be factors, what’s more likely the case is that the ARRI company, Arriflex cameras, and ARRI lighting and support gear have been a filmmaking staple for a century. Plus, with high-budget film and video projects, cameras and production gear are generally rented. The cost of a camera is a small percentage of the total budget. Going back to the original premise, there is little advantage from a budget standpoint to using an iPhone – aside from the novelty of doing so.

Let’s look at post. Certainly nonlinear digital editing has had a major impact. But it hasn’t really changed the processes involved. For instance, branched stories – an idea that some thought NLEs would encourage – have only had limited success. Netflix’s “Black Mirror: Bandersnatch” is really the only such mainstream film that comes to mind. Avid Media Composer has become the dominant feature film/broadcast TV NLE, in spite of not being the only option at the start. Granted, this is a niche market with a small number of users compared with social media influencers and others working with different brands of editing software.

In spite of challengers, Avid Media Composer is still here. Apple looked promising in the Final Cut Pro “legacy” days, when prominent users like Walter Murch, Angus Wall, and Kirk Baxter cut several high-profile feature films with it. Yet even with all that buzz, FCP didn’t make a significant dent in the number of Media Composer seats in Hollywood. Fast forward to the mangled launch of Final Cut Pro X over a decade ago. That put a nail into FCP’s coffin for Hollywood. What Apple promoted as a game changer – and Final Cut Pro (formerly X) does offer unique and innovative features – turned out to be a big “whatever” for film editors. Some film editors adopted the new Final Cut Pro and Adobe has been able to capitalize on the former FCP “legacy” editors, but Avid is still king in Hollywood.

There are many factors involved. A big one is that experienced editors know the software inside and out and many have decades of experience in its operation. It doesn’t matter if something else is faster or more efficient. If the software you use is second nature, then a few extra keystrokes simply melt away thanks to muscle memory. Another factor is that, like camera gear, post gear is rented by the project and/or supplied by known post facilities with an existing investment in Avid-centric systems. Those companies want to recoup their investment, rather than pursue something that might be trendy for a short period of time.

It’s also worth noting that Media Composer is the sibling of Pro Tools, which is the dominant audio software used in film and TV post. There’s a certain synergy to having the picture and sound cutting/mixing tools come from the same manufacturer. Finally, as a piece of software, Media Composer (owned or subscription) is dirt cheap compared with the computers and storage used to run it. In spite of ongoing financial issues, Avid survives because of the “if it ain’t broke, don’t fix it” mentality. I really don’t see the newest challengers, like Blackmagic Design DaVinci Resolve and Adobe Premiere Pro, doing much to change that outside of a few editors.

The moral of the story is that the next time you hear something pronounced as a game changer, be skeptical. Few things are truly that. A product can bend production and post workflows, but very few completely upend or revolutionize how traditional film and TV content is produced, posted, and distributed. In 2024, I’m sure plenty of AI-related tools will be touted as revolutionary, game changing, or something similar. Before you get too excited, take a breath and think about whether it will truly change how you work. Or better yet, should it?

©2024 Oliver Peters

Ford v Ferrari

Outraged by a failed attempt to acquire European carmaker Ferrari, Henry Ford II sets out to trounce Enzo Ferrari on his own playing field – automobile endurance racing. Unfortunately, the effort falls short, leading Ford to turn to independent car designer, Carroll Shelby. But Shelby’s outspoken lead test driver, Ken Miles, complicates the situation by making an enemy out of Ford Senior VP Leo Beebe. Nevertheless, Shelby and his team are able to build one of the greatest race cars ever – the GT40 MkII – setting the showdown between the two auto legends at the 1966 24 Hours of Le Mans. Matt Damon and Christian Bale star as Shelby and Miles.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, Wolverine, 3:10 to Yuma) and his team of longtime collaborators. I recently spoke with film editors Michael McCusker, ACE (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl on the Train) about what it took to bring Ford v Ferrari together.

_____________________________________________

[OP] The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.

[MM] I cut my very first movie, Walk The Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved out to LA in 2009 and I hired him to assist me on Knight & Day. We’ve been working together for 10 years now.

I always want to keep myself available for Jim, because he chooses good material, attracts great talent, and is a filmmaker with a strong vision who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie, and now a car racing film.

[OP] As a film editor, it must be great not to get type-cast for any particular cutting style.

[MM] Exactly. I worked for David Brenner for years as his first. He was able to cross genres and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres – I simply knew that we worked well together and that the end product was good.  

[OP] In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?

[MM] I saw that movie and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and Frankenheimer’s Grand Prix. Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script and we were in preproduction, it became clear that there was so much more drama that was available for him to portray during the racing sequences than he anticipated. And so, the races took on more of an energized pace.

[OP] Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?

[MM] I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in pre-vis, which required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie. You’re dealing with Mollie and Peter [Ken Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe, and also, of course, what’s going on in the car with Ken. It’s a three act movie unto itself, so Jim was trying to figure out how it was all going to work, before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process – and part of the writing process was the pre-vis. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies. 

[OP] What was the timeline for production and post?

[MM] I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

The challenge was that there was going to be a lot of racing footage, which meant there was going to be a LOT of footage. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic. There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels. 

[OP] How long was your initial cut and what was your process for trimming the film down to the present run time?

[MM] We’re at 2:30:00 right now and I think the first cut was 3:10:00 or 3:12:00. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down. But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30:00!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45:00, and feel like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

[OP] How extensively did you re-arrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?

[MM] To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly and those were cut. The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters – their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end – to make sure we spent enough time with each character so that we understood them, but not so much time that the audience would go, “Enough already! Get on with it!”

[OP] Were you both racing fans before you signed onto this film?

[AB] I was not.

[MM] When I was a kid, I watched a lot of racing. I liked CART racing – open wheel racing – not so much stock car racing. As I grew older, I lost interest, particularly when CART disbanded and NASCAR took over. So, I had an appreciation for it. I went to races, like the old Ontario 500 here in California.

[OP] Did that help inform your cutting style for this film?

[MM] I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay – guys who were broadcasting the races when I was a kid. I was intrigued about how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and used it in the movie. That brings so much authenticity.

[OP] Let’s dive deeper into the sound for this film. I would imagine that sound design was integral to your rough cuts. How did you tackle that?

[AB] We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early in the process. The sounds may not have been the exact engine sounds that would end up in the final, but they were adequate to allow you to experience the scenes as intended and to give the right feel. Because we needed to get Jim’s response early, some of the races were cut with the production sound – from the live mics during filming. This allowed us and Jim to quickly see how the scenes would flow. Other scenes were cut strictly MOS, because the sound design would have been way too complicated for the initial cut of the scene. Once the scene was cut visually, we’d hand over the scene to Don [Sylvester, sound supervisor] who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

[MM] We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work. Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Avid. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix. 

[OP] What about temp music? Did you also weave that into your rough cuts?

[MM] Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man – a screenwriter, a novelist, a one-time musician, and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point-of-view. He has a very instinctual knowledge of where music should start and it happens to dovetail into the aesthetic that Jim, Andrew, and I are working towards. None of us like music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

Specifically, for this movie, it was challenging to develop what the musical tone of the movie would be. Ted was developing the temp track along with us from a very early stage. We found over time that not one particular musical style was going to work. Which is to say that this is a very complex score. It includes a kind of surf rock sound with Carroll Shelby in LA; an almost jaunty, lounge jazz sound for Detroit and the Ford executives; and then the hard-driving rhythmic sound for the racing.

(The final score was composed by Marco Beltrami and Buck Sanders.)

[OP] I presume you were housed in multiple cutting rooms at a central facility. Right?

[MM] We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces. I was situated between Andrew and Don. Ted was next to Don, and John Berri, our additional editor, and the assistants were right around the corner. It makes for a very efficient working environment.

[OP] Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?

[Both] FluidMorph! (laughs)

[MM] FluidMorph, speed-ramping – we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.

[OP] What about Avid’s Script Integration feature, often referred to as ScriptSync? I know a lot of narrative editors love it.

[MM] I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines – subtext. I found that with ScriptSync I could put the scene together quickly, but it was flat as a pancake. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

[OP] Tell me a bit more about your organizational process. Do you start with a KEM roll or stringouts of selected takes?

[MM] I don’t watch dailies, which sounds weird. By that I mean, I don’t watch them in a traditional sense. I don’t start in the morning, watch the dailies, and then start cutting. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front of me. I’ll look at the last take of every set-up really quickly and then I spend an enormous amount of time – particularly on complex scenes – creating a bin structure that I can work with. Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character – it depends on what’s driving the scene. That’s the way I learn my footage – by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If I do it myself, then I’ll remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments. And then I’ll start cutting.

[AB] I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike. I’ve used ScriptSync before and I tend to agree that it discourages you from seeing – as Mike described – the moments between lines. Those moments are valuable to remember.  

[OP] I presume this film was shot digitally. Right?

[MM] It was primarily shot with [ARRI] Alexa 65 LF cameras, plus some other small format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels ‘of the time.’

[OP] Since the film takes place in the 1960s and with racing action sequences, I presume there were quite a few visual effects to properly place the film in time. Right?

[MM] There’s a ton of that. The whole movie is a period film. We could temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid, but also Adobe After Effects. He has some clever ways of filling in backgrounds or green screens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The caveat, though, is that the racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles with some second unit footage shot in Georgia. The current, modern day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot Le Mans. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went for a week to France to get some shots of the actual town of Le Mans. Of those, I think only about four of those shots are left. (laughs)

[OP] Any final thoughts about how this film turned out? 

[MM] I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and it’s a great feeling. It’s a movie about these really great characters with great scope and great racing. That goes back to the very advent of movies. You can put all the big visual effects in a film that you want to, but it’s really about people.

[AB] I would absolutely agree. It’s more of a character movie with racing.  Also, because I am not a ‘racing fan’ per se, the character drama really pulled me into the film while working on it.

[MM] It’s classic Hollywood cinema. I feel proud to be part of a movie that does what Hollywood does best.

The article is also available at postPerspective.

For more, check out this interview with Steve Hullfish.

©2019 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as I’ve worked as an editor and tech writer, we’ve kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career of using as well as helping to design and shepherd a wide range of post-production products, Steve probably knows more about a diverse field of editing systems than most other company managers at editing systems manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FxPlug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of  the codebase was a huge step forward as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition maybe already has. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already did to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries. English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage, six Synology servers, and 30 shows being edited right now in FCP X. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is that if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.
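
For a sense of how quickly that 4GB ceiling forces a span, here is a rough back-of-the-envelope sketch in Python. The bitrates are illustrative only and container overhead is ignored, so treat the numbers as ballpark figures rather than specs for any particular camera.

```python
FAT32_LIMIT_BYTES = 4 * 1024**3  # roughly 4GB per file before the camera must start a new segment


def seconds_per_segment(bitrate_mbps: float) -> float:
    """Approximate recording time that fits into one FAT32-limited segment."""
    bytes_per_second = bitrate_mbps * 1_000_000 / 8
    return FAT32_LIMIT_BYTES / bytes_per_second


# Illustrative bitrates, not tied to any specific camera:
for label, mbps in [("50 Mb/s long-GOP HD", 50), ("100 Mb/s UHD", 100), ("400 Mb/s intra-frame", 400)]:
    print(f"{label}: a new spanned segment roughly every {seconds_per_segment(mbps) / 60:.1f} minutes")
```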

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its KiPro products. From an editor’s point-of-view, I would much rather be handed Alexa or KiPro media files than any other camera product, simply because these are the most straight-forward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of files prior to ingesting these for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats to which I don’t do this.) When it’s a camera that doesn’t generate unique file names, then I will run a batch renaming application in order to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
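
As a rough illustration of that routine, here is a minimal Python sketch that flattens a copied card into a single roll folder and prefixes each clip with the roll name so the file names stay unique. The paths, extensions, and roll naming are assumptions for the example, and as noted above, formats that need their folder structure intact (RED, GoPro 360) should be left alone.

```python
import shutil
from pathlib import Path

# Assumed paths and extensions for this sketch; adjust to your own card layout.
CARD_COPY = Path("/Volumes/Media/Card_A001")       # a full copy of the card, never the card itself
ROLL_DEST = Path("/Volumes/Media/Flattened/A001")  # destination roll folder
VIDEO_EXTS = {".mov", ".mp4", ".mxf", ".mts"}      # clip types to keep; everything else is ignored


def flatten_card(card: Path, dest: Path, roll: str) -> None:
    """Copy every video clip out of the card's nested Clip/Private folders into
    one flat roll folder, prefixing the roll name so file names stay unique."""
    dest.mkdir(parents=True, exist_ok=True)
    for clip in sorted(card.rglob("*")):
        if not clip.is_file() or clip.suffix.lower() not in VIDEO_EXTS:
            continue                                  # skips XML sidecars, proxies, thumbnails, etc.
        # Cameras that repeat clip names across folders may also need a counter here.
        shutil.copy2(clip, dest / f"{roll}_{clip.name}")  # e.g. A001_C0001.MP4


if __name__ == "__main__":
    flatten_card(CARD_COPY, ROLL_DEST, roll="A001")
```

Running it against a verified copy of the card, rather than the original media, keeps the step reversible if a particular format turns out to need its sidecar files after all.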

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.
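
The arithmetic behind that desqueeze is simple enough to check. A quick sketch, assuming a clean 2:1 horizontal expansion with no cropping:

```python
def desqueezed_size(width_px: int, height_px: int, squeeze: float) -> tuple[int, int]:
    """Horizontal resolution grows by the anamorphic squeeze factor; height is unchanged."""
    return int(width_px * squeeze), height_px


# Alexa 4:3 mode recording (2880 x 2160) shot through 2:1 anamorphic lenses
print(desqueezed_size(2880, 2160, 2.0))  # -> (5760, 2160)
```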

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but rather that there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.
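
To make the log idea concrete, here is a small Python sketch of a generic logarithmic encode/decode pair. The constants are invented for illustration; real profiles such as Log C, S-Log3, or V-Log use their own published curves, so this only shows the principle of compressing stops of linear light into code values and expanding them again in post.

```python
import math

MID_GRAY = 0.18   # linear reflectance of mid gray
MID_CODE = 0.40   # code value where this toy curve places mid gray
PER_STOP = 0.08   # code-value spacing per stop of exposure (illustrative)


def lin_to_log(linear: float) -> float:
    """Compress linear scene light into a log code value."""
    stops_from_gray = math.log2(max(linear, 1e-6) / MID_GRAY)
    return MID_CODE + PER_STOP * stops_from_gray


def log_to_lin(code: float) -> float:
    """Expand a log code value back to linear light (roughly what a grade does)."""
    return MID_GRAY * 2.0 ** ((code - MID_CODE) / PER_STOP)


if __name__ == "__main__":
    for stops in (-4, -2, 0, 2, 4):                 # sweep around mid gray
        lin = MID_GRAY * 2.0 ** stops
        print(f"{stops:+d} stops  linear={lin:.4f}  log code={lin_to_log(lin):.3f}")
```

Either way, highlights that clipped at the sensor stay clipped no matter how the curve is stretched in the grade, which is the point above about unrecoverable highlights.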

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good iso/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the iso and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters