Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW in some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Has that changed any thoughts about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of this pattern, while red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then demosaiced (interpolated) and converted into RGB pixel information. Finally, that result is recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
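To make the demosaic step concrete, here is a minimal sketch of a bilinear de-Bayer for an RGGB layout, written in Python with NumPy and SciPy. It is purely illustrative; the function name and the assumption of an RGGB layout are mine, and real cameras use far more sophisticated interpolation than this.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (a 2D array of linear
    photosite values) into an RGB image. Real cameras layer noise reduction,
    sharpening, scaling, and proprietary color science on top of this step."""
    h, w = raw.shape
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1.0                 # red photosites (25% of sites)
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1.0                 # blue photosites (25% of sites)
    g_mask = 1.0 - r_mask - b_mask           # green photosites (50% of sites)

    # Bilinear interpolation: at each pixel, average whichever neighboring
    # photosites actually recorded that color.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0,  0.25, 0.0 ],
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0 ]])

    r = convolve(raw * r_mask, k_rb, mode="mirror")
    g = convolve(raw * g_mask, k_g, mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])
```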

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec 709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload the processing to another stage in the pipeline, then design options are opened up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software to properly interpret that data into a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible while still yielding a camera file that is smaller than its encoded RGB counterpart.
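As a rough illustration of why deferring that conversion matters, here is a hypothetical "develop" step in Python. Because the decoded data is still linear light, an exposure change is just a power-of-two gain and white balance is a per-channel multiply, both applied before any gamma or log encoding. The function and its parameters are my own sketch, not any vendor's actual raw decoder.

```python
import numpy as np

def develop(linear_rgb, exposure_stops=0.0, wb_gains=(1.0, 1.0, 1.0), gamma=2.4):
    """Toy 'develop' step for demosaiced linear camera data (H x W x 3).
    Exposure is a 2**stops multiply and white balance a per-channel gain,
    both applied while the data is still linear, before display gamma."""
    img = linear_rgb * (2.0 ** exposure_stops)          # ISO / exposure trim
    img = img * np.asarray(wb_gains).reshape(1, 1, 3)   # temperature / tint
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)                         # simple display gamma

# Example: open up one stop and warm the image slightly.
# video = develop(linear_rgb, exposure_stops=1.0, wb_gains=(1.1, 1.0, 0.9))
```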

In-camera recording, camera raw, and RED

Camera raw recording predates the introduction of the RED One camera, but earlier implementations usually consisted of uncompressed movie files or image sequences captured to an external recorder. RED introduced the ability to record a Wavelet-compressed, 4K camera raw signal at 24fps as a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing that exact same thing unless they enter into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using.

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just as Sony, ARRI, and Panasonic cameras can record their camera raw signals to external recorders, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smart phones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than it could by handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW recording, and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the same depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 mirrorless camera recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, you can have any number of cameras. Once you bring those into Final Cut Pro X, you don’t have a camera profile with color science that matches each and every camera model.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. I decided to decode the clip (RAW to log conversion) using a Sony profile. This gave me the best possible results for the Nikon images and effectively produced a log clip similar to that from a Sony camera. For the grade I then worked in Color Finale Pro 2, using its ACES workflow. To complete the ACES workflow, I used the matching S-Log3 to Rec 709 conversion.

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

Ford v Ferrari

Outraged by a failed attempt to acquire European carmaker Ferrari, Henry Ford II sets out to trounce Enzo Ferrari on his own playing field – automobile endurance racing. Unfortunately, the effort falls short, leading Ford to turn to independent car designer, Carroll Shelby. But Shelby’s outspoken lead test driver, Ken Miles, complicates the situation by making an enemy out of Ford Senior VP Leo Beebe. Nevertheless, Shelby and his team are able to build one of the greatest race cars ever – the GT40 MkII – setting the showdown between the two auto legends at the 1966 24 Hours of Le Mans. Matt Damon and Christian Bale star as Shelby and Miles.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, The Wolverine, 3:10 to Yuma) and his team of longtime collaborators. I recently spoke with film editors Michael McCusker, ACE (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl on the Train) about what it took to bring Ford v Ferrari together.

_____________________________________________

[OP] The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.

[MM] I cut my very first movie, Walk The Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved out to LA in 2009 and I hired him to assist me on Knight & Day. We’ve been working together for 10 years now.

I always want to keep myself available for Jim, because he chooses good material, attracts great talent, and is a filmmaker with a strong vision who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie, and now a car racing film.

[OP] As a film editor, it must be great not to get type-cast for any particular cutting style.

[MM] Exactly. I worked for David Brenner for years as his first. He was able to cross genres and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres – I simply knew that we worked well together and that the end product was good.  

[OP] In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?

[MM] I saw that movie and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and Frankenheimer’s Grand Prix. Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script and we were in preproduction, it became clear that there was so much more drama that was available for him to portray during the racing sequences than he anticipated. And so, the races took on more of an energized pace.

[OP] Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?

[MM] I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in pre-vis, which required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie. You’re dealing with Mollie and Peter [Ken Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe, and also, of course, what’s going on in the car with Ken. It’s a three act movie unto itself, so Jim was trying to figure out how it was all going to work, before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process – and part of the writing process was the pre-vis. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies. 

[OP] What was the timeline for production and post?

[MM] I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

The challenge was that there was going to be a lot of racing footage, which meant there was going to be a LOT of footage. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic. There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels. 

[OP] How long was your initial cut and what was your process for trimming the film down to the present run time?

[MM] We’re at 2:30:00 right now and I think the first cut was 3:10:00 or 3:12:00. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down. But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30:00!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45:00, and feel like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

[OP] How extensively did you re-arrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?

[MM] To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly and those were cut. The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters – their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end – to make sure we spent enough time with each character so that we understood them, but not so much time that the audience would go, “Enough already! Get on with it!”

[OP] Were you both racing fans before you signed onto this film?

[AB] I was not.

[MM] When I was a kid, I watched a lot of racing. I liked CART racing – open wheel racing – not so much stock car racing. As I grew older, I lost interest, particularly when CART disbanded and NASCAR took over. So, I had an appreciation for it. I went to races, like the old Ontario 500 here in California.

[OP] Did that help inform your cutting style for this film?

[MM] I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay – guys who were broadcasting the races when I was a kid. I was intrigued about how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and used it in the movie. That brings so much authenticity.

[OP] Let’s dive deeper into the sound for this film. I would imagine that sound design was integral to your rough cuts. How did you tackle that?

[AB] We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early in the process. The sounds may not have been the exact engine sounds that would end up in the final, but they were adequate to allow you to experience the scenes as intended and to give the right feel. Because we needed to get Jim’s response early, some of the races were cut with the production sound – from the live mics during filming. This allowed us and Jim to quickly see how the scenes would flow. Other scenes were cut strictly MOS, because the sound design would have been way too complicated for the initial cut of the scene. Once a scene was cut visually, we’d hand it over to Don [Sylvester, sound supervisor], who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

[MM] We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work. Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Avid. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix. 

[OP] What about temp music? Did you also weave that into your rough cuts?

[MM] Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man – a screenwriter, a novelist, a one-time musician, and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point-of-view. He has a very instinctual knowledge of where music should start and it happens to dovetail into the aesthetic that Jim, Andrew, and I are working towards. None of us like music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

Specifically, for this movie, it was challenging to develop what the musical tone of the movie would be. Ted was developing the temp track along with us from a very early stage. We found over time that not one particular musical style was going to work. Which is to say that this is a very complex score. It includes a kind of surf rock sound with Carroll Shelby in LA; an almost jaunty, lounge jazz sound for Detroit and the Ford executives; and then the hard-driving rhythmic sound for the racing.

(The final score was composed by Marco Beltrami and Buck Sanders.)

[OP] I presume you were housed in multiple cutting rooms at a central facility. Right?

[MM] We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces. I was situated between Andrew and Don. Ted was next to Don, and John Berri, our additional editor, and the assistants were right around the corner. It makes for a very efficient working environment.

[OP] Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?

[Both] FluidMorph! (laughs)

[MM] FluidMorph, speed-ramping – we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.

[OP] What about Avid’s Script Integration feature, often referred to as ScriptSync? I know a lot of narrative editors love it.

[MM] I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines – subtext. I found that with ScriptSync I could put the scene together quickly, but it was flat as a pancake. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

[OP] Tell me a bit more about your organizational process. Do you start with a KEM roll or stringouts of selected takes?

[MM] I don’t watch dailies, which sounds weird. By that I mean, I don’t watch them in a traditional sense. I don’t start in the morning, watch the dailies, and then start cutting. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front of me. I’ll look at the last take of every set-up really quickly and then I spend an enormous amount of time – particularly on complex scenes – creating a bin structure that I can work with. Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character – it depends on what’s driving the scene. That’s the way I learn my footage – by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If I do it myself, then I’ll remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments. And then I’ll start cutting.

[AB] I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike. I’ve used ScriptSync before and I tend to agree that it discourages you from seeing – as Mike described – the moments between lines. Those moments are valuable to remember.  

[OP] I presume this film was shot digitally. Right?

[MM] It was primarily shot with [ARRI] Alexa 65 LF cameras, plus some other small format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels ‘of the time.’

[OP] Since the film takes place in the 1960s and with racing action sequences, I presume there were quite a few visual effects to properly place the film in time. Right?

[MM] There’s a ton of that. The whole movie is a period film. We could temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid, but also Adobe After Effects. He has some clever ways of filling in backgrounds or green screens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The caveat, though, is that the racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles with some second unit footage shot in Georgia. The current, modern-day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot Le Mans. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went for a week to France to get some shots of the actual town of Le Mans. I think only about four of those shots are left. (laughs)

[OP] Any final thoughts about how this film turned out? 

[MM] I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and it’s a great feeling. It’s a movie about these really great characters with great scope and great racing. That goes back to the very advent of movies. You can put all the big visual effects in a film that you want to, but it’s really about people.

[AB] I would absolutely agree. It’s more of a character movie with racing.  Also, because I am not a ‘racing fan’ per se, the character drama really pulled me into the film while working on it.

[MM] It’s classic Hollywood cinema. I feel proud to be part of a movie that does what Hollywood does best.

The article is also available at postPerspective.

For more, check out this interview with Steve Hullfish.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its KiPro products. From an editor’s point-of-view, I would much rather be handed Alexa or KiPro media files than any other camera product, simply because these are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of files prior to ingesting these for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats to which I don’t do this.) When it’s a camera that doesn’t generate unique file names, then I will run a batch renaming application in order to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
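If you want to script that routine, here is a minimal Python sketch of the flatten-and-rename step. The folder paths, extension list, and numbering scheme are illustrative assumptions rather than any camera vendor's requirements, and it should only ever be run on a verified backup copy of the card, never on the original media.

```python
import shutil
from pathlib import Path

VIDEO_EXTS = {".mov", ".mp4", ".mxf", ".mts"}   # adjust per camera format

def flatten_card(card_copy, roll_name):
    """Move every media file out of the card copy's Clip/Private subfolders
    into its root folder, renamed with a unique roll prefix, then trash the
    leftover folder hierarchy. Run this on a backup copy, never the card."""
    root = Path(card_copy)
    clips = sorted(p for p in root.rglob("*")
                   if p.is_file() and p.suffix.lower() in VIDEO_EXTS)
    for count, clip in enumerate(clips, start=1):
        clip.rename(root / f"{roll_name}_{count:04d}{clip.suffix.lower()}")
    for item in root.iterdir():                  # discard the extra folders
        if item.is_dir():
            shutil.rmtree(item)

# Example: flatten_card("/Volumes/transfers/A001_copy", "A001")
```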

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to record highlights that can’t be recovered. Or in some cases the highlights aren’t digitally clipped, but rather there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.
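To illustrate the log idea in the simplest terms, here is a generic encode/decode pair in Python. The curve and its constants are arbitrary placeholders of my own; ARRI Log C, Sony S-Log3, Panasonic V-Log, and the rest each publish their own transfer functions, so treat this only as a sketch of the concept.

```python
import numpy as np

PEAK = 0.18 * 2 ** 6   # treat roughly 6 stops above 18% gray as encoded white

def log_encode(linear):
    """Compress linear scene light into 0-1 code values (generic toy curve)."""
    return np.log1p(np.clip(linear, 0.0, None) * 100.0) / np.log1p(PEAK * 100.0)

def log_decode(code):
    """Expand recorded code values back to linear light for grading."""
    return np.expm1(code * np.log1p(PEAK * 100.0)) / 100.0

scene = np.array([0.0, 0.18, 0.90, 11.52])   # black, mid gray, white, +6 stops
code = log_encode(scene)                      # what gets recorded to the file
back = log_decode(code)                       # what the colorist expands in post
```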

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good ISO/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been CinemaDNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras used it until it was replaced by Blackmagic RAW. Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Adobe Anywhere and Divine Access


Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. This sort of approach is applied to collaborative workflows with Adobe Anywhere, which permits multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, who installed it as their post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. No science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1Gbps upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible.  Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT in a unique way.
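As a purely hypothetical illustration of that kind of automation (the real pipeline was custom software tied into Adobe Anywhere's API, which isn't shown here), a dated-folder sorter for incoming assets might look something like this in Python. All of the paths and extensions are assumptions for the sake of the example.

```python
import shutil
from datetime import date
from pathlib import Path

VIDEO = {".mov", ".mxf"}   # hypothetical camera and audio file extensions
AUDIO = {".wav", ".bwf"}

def sort_incoming(download_dir, production_root):
    """Hypothetical stand-in for the automated ingest described above: file
    each newly downloaded asset into a folder named for the day, split into
    'video' and 'audio' subfolders, ready to be grouped into scene bins."""
    day_bin = Path(production_root) / date.today().isoformat()
    for sub in ("video", "audio"):
        (day_bin / sub).mkdir(parents=True, exist_ok=True)
    for item in Path(download_dir).iterdir():
        ext = item.suffix.lower()
        if ext in VIDEO:
            shutil.move(str(item), str(day_bin / "video" / item.name))
        elif ext in AUDIO:
            shutil.move(str(item), str(day_bin / "audio" / item.name))

# Example: sort_incoming("/mnt/aspera_downloads", "/mnt/terrablock/divine_access")
```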

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor who shared our director’s creative vision was the best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid has added subscription pricing, so G-Men set up the Divine Access cutting room with a Mac Pro, “renting” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film on Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit including principal photography and post finishing through to delivery. This gives us capabilities through entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters