Shared Storage Solutions

 

I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and done basic management of a number of different shared storage solutions, going all the way back to Avid MediaShare SCSI. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS); but to the user, there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be simultaneously accessed by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be parsed so that it shows up as several volumes with lower capacities per volume. Read/write permissions can be controlled in various ways. All users can have read/write access to everything, or access can be selectively assigned by the system administrator.

The basic building block of a NAS is the main chassis, which contains storage, but also a small, on-board computer – the “brain” of the system. It runs its own operating system, usually a Linux variant such as CentOS, or a Solaris-derived OS built around ZFS. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility may not be perfect. For example, file names using special characters that are valid in macOS may not be read properly through one of these network protocols. So be very structured with naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
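
If you want to enforce that naming convention in bulk, a small script can sanitize names before files ever land on the NAS. Here’s a minimal sketch – the function name and the exact character rules are my own illustration, not part of any NAS vendor’s toolkit:

```python
import re

# Keep only letters, numbers, spaces, dashes, and underscores in the base name.
SAFE_CHARS = re.compile(r"[^A-Za-z0-9 _-]")

def make_nas_safe(name: str) -> str:
    """Return a conservative, NAS-friendly version of a file name."""
    stem, dot, ext = name.rpartition(".")
    base = stem if dot else name
    base = SAFE_CHARS.sub("_", base).strip()   # swap special characters, trim leading/trailing spaces
    return f"{base}.{ext}" if dot else base

print(make_nas_safe(" Interview/Take#3 .mov"))   # -> Interview_Take_3.mov
```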

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But, you cannot mix and match drive types or sizes within the chassis.

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives with a power supply. Normally these don’t have to be the same capacity as the original chassis if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN systems are RAID-protected in various configurations. RAID protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without losing all of your media. However, that redundancy takes overhead, which means you must give up some of the total capacity to enable this data protection.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is failing or has failed, the unit will continue to operate, but with reduced performance. In some cases that may not be noticed by the operator. When a drive goes bad, it can be replaced by a matching raw drive and the unit will rebuild the RAID data, which redistributes it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can operate during this rebuilding period, I have found that in actual practice performance is so poor that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but at the cost of available storage. Different manufacturers have different ways of handling RAID configurations, so there is no rule-of-thumb as to what percentage you will lose with every NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
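
To put numbers on that, here’s the back-of-the-envelope arithmetic using the figures above. The 85% fill ceiling is simply the midpoint of the 80-90% guideline, and the actual RAID overhead always depends on the vendor’s specific configuration:

```python
def usable_capacity(gross_tb: float, net_tb: float, fill_ceiling: float = 0.85) -> None:
    """Print the working capacity after RAID overhead and the fill-level guideline."""
    overhead_pct = (gross_tb - net_tb) / gross_tb * 100
    working_tb = net_tb * fill_ceiling
    print(f"{gross_tb:.0f}TB gross -> {net_tb:.0f}TB net "
          f"({overhead_pct:.0f}% lost to RAID), ~{working_tb:.0f}TB recommended working space")

usable_capacity(256, 206)   # QNAP example from the text
usable_capacity(480, 316)   # LumaForge example from the text
```

Running this prints roughly 175TB of comfortable working space for the QNAP example and about 269TB for the LumaForge example.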

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for runs longer than 100′ or if the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals – all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the two NAS systems.

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, performance suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one workstation while editing proceeded on the others, slowed performance. Image sequence files really hurt overall system performance. And you could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie), the key difference between the two systems is architecture. Unlike our QNAP unit, which uses a network switch, the Jellyfish was configured with enough on-board cards to enable a direct run to all nine workstations without a separate switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of the QNAP, though. It has been very robust and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to Jellyfish), as well as for an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We configured Jellyfish with a tenth Ethernet direct port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to Jellyfish gives them the ability to work with media files without tying up editing workstations.

The Jellyfish system has proven itself over time. Direct head-to-head performance between Jellyfish and QNAP with a small project or a few media files is not dramatically different. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media without the prerequisite of creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact performance of the other edit sessions. Image sequences are easier to deal with. And I can render to and from Jellyfish when I work on grading sessions in Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

©2019 Oliver Peters

Handling and Protecting Media

Once the industry entered the file-based era, we realized that dealing with and properly archiving audio and video files could make or break a production company. No more videotapes on the shelf to pull footage from. Unfortunately many companies, producers, clients, and editors simply solved this with a hodgepodge of small, portable drives – Firewire, USB, Thunderbolt, whatever. That’s no longer practical. A typical 10-day, 4K shoot with a handful of formats can easily generate 8-10TB of original footage. That’s if the production is structured. Make that a 2-3 weeklong documentary or reality-style production and you’ll have closer to 20-30TB. Not exactly something you want to deal with in post using a bunch of orange LaCie drives!

The road to safeguarding your files

At the day job, we were able to invest in a LumaForge Jellyfish shared storage network (NAS). It’s 480TB, which sounds like a lot, but after RAID protection the available net capacity is 316TB. And you only want to use up to 80%-90% of that for the most efficient operation. While it still sounds like a lot of storage, it is a finite amount. This means that you need to develop a strategy for archiving older projects and the associated media, but yet easily find and restore it later for revisions.

Cloud storage remains a pipe dream at these quantities. LTO data tape back-up is also impractical, because of its linear read/write nature. It is only intended for deep storage archiving. Facilities that have attempted to use LTO as a type of near-line storage – with frequent restores, updates, and subsequent re-archiving – have worn out their LTO tapes long before their rated life.

Efficient media handling starts when a project or production is first originated. In our case, every new project gets a folder on the Jellyfish, and inside that folder is a standard group of subfolders for the corresponding project files, graphics, exports, and source footage. We assign all projects a job number for billing and that number is part of the top level folder name, as well as of any project file name. This default template is generated for each new production using the Post Haste application.
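
Post Haste handles the template generation for us, but the underlying idea is simple enough to sketch. Something like the following – the subfolder names and paths are illustrative, not our exact template:

```python
from pathlib import Path

# Hypothetical subfolder names -- adjust to your own facility template.
SUBFOLDERS = ["Project Files", "Graphics", "Exports", "Source Media"]

def create_project(root: str, job_number: str, title: str) -> Path:
    """Create a job-numbered project folder with the standard subfolders."""
    project = Path(root) / f"{job_number}_{title}"
    for sub in SUBFOLDERS:
        (project / sub).mkdir(parents=True, exist_ok=True)
    return project

create_project("/Volumes/Jellyfish/PROJECTS", "1947", "Acme_Brand_Spot")
```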

The location crew

On location all media is copied daily (with verification using the Hedge application) to both master and back-up drives. Depending on the size of the crew, this is the responsibility of the DIT, assistant cameraman, or the director of photography. On large productions, the cost of these drives is built into the budget and they later end up being stored on the shelf for safe keeping. On smaller jobs (or some fast turnaround jobs) temporary, fast SSDs are used, which will later be reused on other projects.

Post starts here

The next step back at the shop is to copy all of this material from the location drives onto the Jellyfish into that project’s Source Media or Dailies subfolder. Once copied, I will proceed to clean up and reorganize all media into subfolders according to this hierarchy:

DATE / CAMERA / REEL

For example: 092819/A-CAMERA_ALEXA/A001

Or outside of the US, maybe: 28SEPT19/A-CAMERA_ALEXA/A001

If a camera file is buried several folders deep – due to the camera card structure or an error made by the crew member on location – I will move those files to the top level within the REEL subfolder without any other levels in between. Camera folders, like DCIM, CLIP, etc., are thus orphaned and are deleted from the Jellyfish. Remember that I still have the original master drive from the location, which will sit on the shelf. If I ever need to get back to the file in its original container, I have that option.

I discussed relinking strategies in the previous post and that comes into play here. Files from semi-pro and non-pro cameras, like DSLRs, GoPros, iPhones, etc., will get a prefix added to the file name using the Better Rename application. The prefix is typically a short 8-10 character alphanumeric string that indicates a job name reference, date, camera letter, and reel.

For instance, a file from the B-camera’s reel 7 for a production done for project ABC on September 28th would get the prefix “ABC0928B07_”. The camera-generated clip name would follow the underscore in that name. The point of doing this is to guarantee unique file names, especially when multiple cameras and filming days are involved. I also apply this process to sound files, even if the clip name reflects the scene and take number.
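
We do the renaming with Better Rename, but the same prefixing logic could be scripted. A rough sketch, assuming the job/date/camera/reel prefix format described above and a hypothetical folder path:

```python
from pathlib import Path

def prefix_clips(folder: str, job: str, date_mmdd: str, camera: str, reel: int) -> None:
    """Prepend a unique job/date/camera/reel prefix to every clip in a reel folder."""
    prefix = f"{job}{date_mmdd}{camera}{reel:02d}_"          # e.g. ABC0928B07_
    for clip in sorted(Path(folder).iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            clip.rename(clip.with_name(prefix + clip.name))

# Hypothetical path -- adjust to your own folder structure.
# prefix_clips("/Volumes/Jellyfish/ABC_Project/Dailies/092819/B-CAMERA/B007", "ABC", "0928", "B", 7)
```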

The last step is to transcode and rate-convert all non-pro media. If my base rate is 23.98fps (23.976), then files like GoPro 59.94fps media get turned into ProRes at 23.98 (slomo). In that case, I will have a subfolder with the original media and a second subfolder with the transcoded media, both with proper file names. I usually apply the “_PR2398” suffix to these transcoded files. I have found that DaVinci Resolve is the best and fastest tool for this transcoding process and large batches can be run overnight as needed.

Archiving your files

If the crew used temporary drives on location, then before these are reformatted and recycled, they are copied to inexpensive portables, like Seagate or Western Digital USB drives. These are then parked on the shelf for safe keeping. The objective is to end up with at least two copies of the source media – the unaltered, camera original files and the new, master files on the Jellyfish.

Once editing has been completed and approved and the client files have been delivered, we move into the archiving stage. For nearly every project, we try to make sure that a ProRes master and a textless ProRes master have been generated by the editor. In addition, the mixer or the editor will generate a mixed audio file and audio stems for dialogue, SFX, and music (as separate files). Many times, you end up making future changes or versions using these files without going back to the original project file.

The entire project folder with all of the associated media is now copied to a raw, removable hard drive. These are enterprise-grade drives. All of our workstations are equipped with docking stations for such drives. To date, we are up to 200 drives, ranging in size from 2TB to 8TB. They are indexed using the simple DiskCatalogMaker application, which generates a searchable index file of all of these archive drives. (Note – I would recommend spinning up these archive drives every few months.)
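
DiskCatalogMaker does this for us, but conceptually the catalog is just a searchable listing of every file on every archive drive. A bare-bones sketch of the same idea, writing to a CSV file (the drive label and paths are hypothetical):

```python
import csv
from pathlib import Path

def index_drive(mount_point: str, drive_label: str, catalog_csv: str) -> None:
    """Append every file on an archive drive to a searchable CSV catalog."""
    with open(catalog_csv, "a", newline="") as f:
        writer = csv.writer(f)
        for item in Path(mount_point).rglob("*"):
            if item.is_file():
                writer.writerow([drive_label,
                                 str(item.relative_to(mount_point)),
                                 item.stat().st_size])

# index_drive("/Volumes/ARCHIVE_042", "ARCHIVE_042", "archive_catalog.csv")
```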

Let me mention that while this can be done at the end, I will often split this archival step into two phases. I will first copy only the Dailies media right after I have organized it on Jellyfish (before any editing), leaving the other project subfolders blank. The reason is that once location production is done, there won’t be anything else added to Dailies. In addition, it gives me three copies of the camera files – the location drive (or its back-up), Jellyfish, and the archive drive. Once the project is finished, I only need to copy the rest of the material from the other subfolders.

The last step is to move the project folder from the PROJECTS master folder on Jellyfish to the BACKED UP master folder. As long as we have space on Jellyfish, the project is never deleted. Often changes are required. When that happens, the affected project folder is moved from BACKED UP to PROJECTS again. The changes are made and client files delivered. Then the archive drive for that project is updated and re-indexed in the DiskCatalogMaker catalog file. The project folder is finally returned to the BACKED UP folder. As we need space on Jellyfish, the oldest projects that haven’t been touched in a long while are deleted.

Redundancy is the key

There are two additional protection steps taken. All active project files (usually Premiere Pro) are copied to the company’s Dropbox by every editor at the end of each day. In the event of a catastrophic NAS failure – before the completion of that project – we can at least get to the project file in the cloud (Dropbox) and the media that is stored on a hard drive in order to restore the edit. (Note that if you do this with FCPX Libraries, they must first be “zipped,” because Dropbox and FCPX Libraries do not play well together.)
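
The FCPX zip step can be folded into that end-of-day routine. A minimal sketch, assuming hypothetical Library and Dropbox paths:

```python
import shutil
from pathlib import Path

def zip_fcpx_library(library_path: str, destination_folder: str) -> str:
    """Zip an FCPX Library bundle so it can be copied to Dropbox safely."""
    lib = Path(library_path)                       # e.g. .../MyShow.fcpbundle
    archive_base = Path(destination_folder) / lib.stem
    return shutil.make_archive(str(archive_base), "zip",
                               root_dir=lib.parent, base_dir=lib.name)

# zip_fcpx_library("/Volumes/Jellyfish/PROJECTS/1947_Acme/MyShow.fcpbundle",
#                  "/Users/editor/Dropbox/ProjectBackups")
```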

The second item is that we have an additional folder on Jellyfish for all completed masters. When an editor generates ProRes master and/or textless files, those files are also copied to this masters folder. That gives us quick access to all final versions, should the client require an extra web file or some other type of deliverable. It’s easy to simply encode new files from these ProRes masters, without needing to search out the original project folder.

These steps may sound complex and daunting if you aren’t currently doing them. I have covered some of this in past posts, but I do update my processes over time. Once you get into a routine of doing these steps, the benefits pay off immensely. Your media is better protected, it’s easier to find in the future, and relinking is a no-brainer.

©2019 Oliver Peters

Foolproof Relinking Strategy

Prior to file-based camera capture, film and then videotape were the dominant visual acquisition technologies. To accommodate, post-production adopted a two-stage solution: work print editing + negative conform for film, offline/online editing for video. During the linear editing era high-res media on tape was transferred to a low-res tape format, like 3/4″, for creative editing (offline). The locked cut was assembled and enhanced with effects and graphics in a high-end online suite using an edit decision list and the high-res media. The inherent constraints of tape formats forced consistency in media standards and frame rates.

In the early nonlinear days, storage capacities were low and hard drives expensive, so this offline/online methodology persisted. Eventually storage could cost-effectively handle high-res media, but this didn’t eliminate these workflows. File-based camera acquisition has brought down operating cost, but the proliferation of formats and ever-increasing resolutions have meant that there is still a need for such a two-stage approach. This is now generally referred to as proxy versus full-resolution editing. The reasons vary, but typically it’s a matter of storage size, system performance, or the capabilities of the systems and operator/artist running the finishing/full-res (aka “online”) system.

All of this requires moving media around among drives, systems, locations, and facilities, thus making correct list management essential. Whether or not it works well depends on the ability to accurately relink media with each of these moves. Despite the ability of most modern NLEs to freely mix and match formats, sizes, frame rates, etc., ignoring certain criteria will break media relinking. You must be able to relink the same media between systems or between low and high-res media on the same or different systems.

Criteria for successful relinking

– Unique file names that match between low and high-res media (extensions are usually not important).

– Proper timecode that does not repeat within a single clip.

– A single, standard frame rate that matches the project’s base frame rate. Using conform or interpret functions within an NLE to alter a clip’s frame rate will mess up relinking on another system. Constant speed changes (such as slomo at 50%) are generally OK, but speed ramp effects tend to be proprietary with every NLE and typically do not translate correctly between different edit or grading applications.

– Match audio configurations between low and high-res media. If your camera source has eight channels of audio, then so must the low-res proxy media.

– Match clip duration. High-res media and proxies must be of the exact same length.

– Note that matching frame size, codec, or movie wrapper type (extension) is not important. (A quick check for the naming criterion is sketched after this list.)
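
Here is a quick pre-flight sketch for the first criterion – unique, matching file names between the proxy and full-res sets, ignoring extensions. Timecode, frame rate, audio configuration, and duration still have to be verified in the NLE or with a media inspection tool, and the folder paths shown are hypothetical:

```python
from pathlib import Path

def compare_media_sets(master_dir: str, proxy_dir: str) -> None:
    """Flag clips whose names don't match between the master and proxy sets (ignoring extensions)."""
    masters = {p.stem for p in Path(master_dir).rglob("*") if p.is_file()}
    proxies = {p.stem for p in Path(proxy_dir).rglob("*") if p.is_file()}
    for orphan in sorted(proxies - masters):
        print(f"Proxy with no matching master clip: {orphan}")
    for missing in sorted(masters - proxies):
        print(f"Master clip with no proxy: {missing}")

# compare_media_sets("/Volumes/Jellyfish/1947_Acme/Dailies",
#                    "/Volumes/Jellyfish/1947_Acme/Proxies")
```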

Proxy workflows

Several NLE applications – particularly Final Cut Pro X and Premiere Pro – offer built-in proxy workflows, which automatically generate proxy media and let the editor seamlessly toggle between full-res and proxy files. These are nice as long as you don’t move files around between hard drives.

In the case of Premiere Pro, you can delete proxy files once you no longer need them. From that point on you are only working with full-res media. However, the Premiere project continues to expect the proxy files to be available and wants to locate them when you launch the project. You can, of course, ignore this prompt, but it’s still hard to get rid of completely.

With FCPX, any time you move media and the Library file to another drive with a different volume name, FCPX prompts a relink dialogue. It seems to relink master clips just fine, but not the proxy media that it generated IF stored outside of the Library package. The solution is to set your proxy location to be inside the Library. However, this will cause the Library file to bloat in size, making transfers of Library files between drives and editors that much more cumbersome. So for these and other reasons (like not adhering strictly to the criteria listed above) relinking can often be problematic to impossible (Avid, I’m looking at you).

Instead of using the built-in proxy workflows for projects with extended timetables or huge amounts of media, I prefer an old-school method. Simply transcode everything, work with low-res media, and then relink to the master clips for finishing. Final Cut Pro X, Premiere Pro, and Resolve all allow the relinking of master clips to different media if the criteria match.

Here are five simple steps to make that foolproof.

1. Transcode all non-professional camera originals to a high-quality mastering codec for optimized performance on your systems. I’m talking about footage from DSLRs, GoPros, drones, smart phones, etc. On Macs this will tend to be the ProRes codec family. On PCs, I would recommend DNxHD/HR. Make sure file names are unique (rename if needed) and that there is proper timecode. Adjust frame rates in the transcode if needed. For example, 29.97fps recordings for a playback base rate of 23.98fps should be transcoded to play natively at 23.98fps. This new media will become your master files, so park the camera originals on the shelf with the intent of never needing them (but for safety, DO NOT erase).

2. Transcode all master clips (both pro formats like RED or ARRI, as well as those transcoded in step 1) to your proxy format. Typically this might be ProRes Proxy at a lower frame size, like 1280 x 720. (This is obviously an optional step. If your system has sufficient performance and you have enough available drive space, then you may be able to simply edit with your master source files.)

3. Edit with your proxy media.

4. When you are ready to finish, relink the locked cut to your master files – pro formats like RED and ARRI – and/or the high-res transcodes from step 1.

5. Color correct/grade and add any final effects for finish and delivery.

©2019 Oliver Peters

Affinity Publisher

The software market offers numerous alternatives to Adobe Photoshop, but few companies have taken on the challenge to go further and create a competitive suite of graphics tools – until now. Serif has completed the circle with the release of Affinity Publisher, a full-featured, desktop publishing application. This adds to the toolkit that already includes Affinity Photo (an image editor) and Affinity Designer (a vector-based illustration app). All three applications support Windows and macOS, but Photo and Designer are also available as full-fledged pro applications for the iPad. This graphic design toolkit collectively constitutes an alternative to Adobe Photoshop, Illustrator, and InDesign.

Personas and StudioLink

The core user interface feature of the Affinity applications is that various modules are presented as Personas, which are accessed by the icons in the upper left corner of the interface. For example, in Affinity Photo basic image manipulation happens in the Photo Persona, but for mesh deformations, you need to shift to the Liquify Persona.

Affinity Publisher starts with the Publisher Persona. That’s where you set up page layouts, import and arrange images, create text blocks, and handle print specs and soft proofs. However, with Publisher, Affinity has taken Personas a step further through a technology they call StudioLink. If you also have the Photo and Designer applications installed on the same machine, then a subset of these applications is directly accessible within Publisher as the Photo and/or Designer Persona. If you have both Photo and Designer installed, then the controls for both Personas are functional in Publisher; but, if you only have one of the others installed, then just that Persona offers additional controls.

Users of Adobe InDesign know that to edit an image within a document you have to “open in Photoshop,” which launches the full Photoshop application, where you make the changes and then roundtrip back to InDesign. However, with Affinity Publisher the process is more straightforward, because the Photo Persona is right there. Just select the image within the document and click on the Photo Persona button in the upper left, which then shifts the UI to display the image processing tools. Likewise, clicking on the Designer Persona will display vector-based drawing tools. Effectively, Serif has done with Affinity Publisher what Blackmagic Design has done with the various pages in DaVinci Resolve. Click a button and shift to the function specifically designed for the task at hand without the need to change to a completely different application.

Document handling

All of the Affinity apps are layer-based, so while you are working in any of the three Personas within Publisher, you can see the layer order on the right to let you know where you are in the document. Affinity Photo offers superb compatibility with layered Photoshop PSD files, which means that your interchange with outside designers – who may use Adobe Photoshop – will be quite good.

Affinity Publisher documents are based on Master Pages and Pages. This is similar to the approach taken by many website design applications. When you create a document, you can set up a Master Page to define a uniform style template for that document. From there you would build individual Pages. Any changes made to a Master Page will then change and update the altered design elements for all of the Pages in the rest of that document. Since Affinity Publisher is designed for desktop publishing, single and multi-page document creation and export settings are both web and print-friendly. Publisher also offers a split-view display, which presents your document in a vector view on the left and as a rasterized pixel view on the right.

Getting started

Any complex application can be daunting at first, but I find the Affinity applications offer a very logical layout that makes it easy to get up to speed. In addition, when you start any of these applications you will first see a launch page that offers direct links to various tutorials, sample documents, and/or layered images. A beginner can quickly download these samples in order to dissect the layers and see exactly how they were created. Aside from these links, you can simply go to Serif’s website, where you’ll find extensive, detailed video tutorials for each step of the process for any of these three applications.

If you are seeking to shake off subscriptions, or are simply not bound to using Adobe’s design tools for work, then these Affinity applications offer a great alternative. Affinity Publisher, Photo, and Designer are standalone applications, but the combination of the three forms a comprehensive image and design collection. Whether you are a professional designer or just someone who needs to generate the occasional print document, Affinity Publisher is a solid addition to your software tools.

©2019 Oliver Peters

Black Mirror: Bandersnatch

Bandersnatch was initially conceived as an interactive episode within the popular Black Mirror anthology series on Netflix. Instead, Netflix decided to release it as a standalone, spin-off film in December 2018. It’s the story of programmer Stefan Butler (Fionn Whitehead) as he adapts a choose-your-own-adventure novel into a video game. Set in 1984, the film lets viewers make decisions for Butler’s actions, which then determine the next branch of the story shown to them. They can go back through Bandersnatch and opt for different decisions, in order to experience other versions of the story.

Bandersnatch was written by show creator Charlie Brooker (Black Mirror, Cunk on Britain, Cunk on Shakespeare), directed by David Slade (American Gods, Hannibal, The Twilight Saga: Eclipse), and edited by Tony Kearns (The Lodgers, Cardboard Gangsters, Moon Dogs). I recently had a chance to interview Kearns about the experience of working on such a unique production.

__________________________________________________

[OP] Please tell me a little about your editing background leading up to cutting Bandersnatch.

[TK] I started out almost 30 years ago editing music videos in London. I did that full-time for about 15 years working for record companies and directors. At the tail end of that a lot of the directors I was working with moved into doing commercials, so I started editing commercials more and more in Dublin and London. In Dublin I started working on long form, feature film projects and cut about 10 projects that were UK or European co-productions with the Irish Film Board.

In 2017 I got a call from Black Mirror to edit the Metalhead episode, which was directed by David Slade. He was someone I had worked with on music videos and commercials 15 years previously, before he had moved to the United States. That was a nice circularity. We were together working again, but on a completely different type of project – drama, on a really cool series, like Black Mirror. It went very well, so David and I were asked to get involved with Bandersnatch, which we jumped at, because it was such an amazing, different kind of project. It was unlike anything either of us – or anyone else, for that matter – has ever done to that level of complexity.

[OP] Other attempts at interactive storytelling – with the exception of the video game genre – have been hit-or-miss. What were your initial thoughts when you read the script for the first time?

[TK] I really enjoyed the script. It was written like a conventional script, but with software called Twine, so you could click on it and go down different paths. Initially I was overwhelmed at the complexity of the story and the structure. It wasn’t that I was like a deer in the headlights, but it gave me a sense of scale of the project and [writer/show runner] Charlie Brooker’s ambition to take the interactive story to so many layers.

On my own time I broke down the script and created spreadsheets for each of the eight sections in the script and wrote descriptions of every possible permutation, just to give me a sense of what was involved and to get it in my head what was going on. There are so many different narrative paths – it was helpful to have that in my brain. When we started editing, that would also help me to keep a clear eye at any point.

[OP] How long of a schedule did you have to post Bandersnatch?

[TK] 17 weeks was the official edit time, which isn’t much longer than on a low-budget feature. When I mentioned that to people, they felt that was a really short amount of time; but, we did a couple of weekends, we were really efficient, and we knew what we were doing.

[OP] Were you under any running length constraints, in the same way that a TV show or a feature film editor often wrestles with on a conventional linear program?

[TK] Not at all. This is the difference – linear doesn’t exist. The length depends on the choices that are made. The only direction was for it not to be a sprawling 15-hour epic – that there would be some sort of ball park time. We weren’t constrained, just that each segment had to feel right – tight, but not rushed.

[OP] With that in mind, what sort of process did you go through to get it to feel right?

[TK] Part of each edit review was to make it as tight or as lean as it needed to be. Netflix developed their own software, called Branch Manager, which allowed people to review the cut interactively by selecting the choice points. My amazing assistant editor, John Weeks, is also a coder, so he acquired an extra job, which was to take the exports and do the coding in order to have everything work in Branch Manager. He’s a very robust person, but I think we almost broke him (laughs), because there were up to 100 Branch Manager versions by the end. The coding was hanging on by a thread. He was a bit like Scotty in Star Trek, “The engines can’t hold it anymore, Captain!”

By using Branch Manager, people could choose a path and view it and give notes. So I would take the notes, make the changes, and it would be re-exported. Some segments might have five cuts while others would be up to 13 or 14. Some scenes were very straightforward, but others were more difficult to repurpose.

Originally there were more segments in the script, but after the first viewings it was felt that there were too many in there. It was on the borderline of being off-putting for viewers. So we combined a few, but I made sure to keep track of that so it was in the system. There was a lot of reviewing, making notes, updating spreadsheets, and then making sure John had the right version for the next Branch Manager creation. It was quite an involved process.

[OP] How were you able to keep all of this straight? Did you use the common technique of scene cards on the wall or something different?

[TK] If you looked at flowcharts your head would explode, because it would be like looking at the wiring diagram of an old-fashioned telephone exchange. There wouldn’t have been enough room on the wall. For us, it would just be on paper – notebooks and spreadsheets. It was more in our heads – our own sense of what was happening – that made it less confusing. If you had the whole thing as a picture, you just wouldn’t know where to look.

[OP] In a conventional production an editor always has to be mindful that when something is removed, it may have ramifications for the story later on. In this case, I would imagine that those revisions affected the story in either direction. How were you able to deal with that?

[TK] I have been asked about how we knew that each path would have a sense of a narrative arc. We couldn’t think of it as one, total narrative arc. That’s impossible. You’d have to be a genius to know that it’s all going to work. We felt the performances were great, the story was strong, but it doesn’t have a conventional flow. There are choice points, which act as a propellant into the next part of the film, creating an unconventional experience compared to the straight story arc of conventional films or episodes. Although there wasn’t a traditional arc, it still had to feel like a well-told story. And that you would have empathy and a sense of engagement – that it wasn’t a gimmick.

[OP] How did the crew and actors manage to keep the story straight in their minds as scenes were filmed?

[TK] As with any production, the first few days are finding out what you’ve let yourself in for. This was a steep learning curve in that respect. Only three weeks of the seven-week shoot was in the same studio complex where I was working, so I wasn’t present. But there was a sense that they needed to make it easier for the actors and the crew. The script supervisor, Marilyn Kirby, was amazing. She was the oracle for the whole shoot. She kept the whole show on the road, even when it was quite complicated. The actors got into the swing of it quickly, because I had no issues with the rushes. They were fantastic.

[OP] What camera formats were used and what is your preparation process for this footage prior to editing?

[TK] It’s the most variety of camera formats I’ve ever worked on. ARRI Alexa 65 and RED, but also 1980s Ikegami TV cameras, Super 8mm, 35mm, 16mm, and VHS. Plus, all of the print stills were shot on black-and-white film. The data lab handled the huge job to keep this all organized and provide us with the rushes. So, when I got them, they were ready to go. The look was obviously different between the sources, but otherwise it was the same as a regular film. Each morning there was a set of ProRes Proxy rushes ready for us. John synced and organized them and handed them over. And then I started cutting. Considering all the prep the DIT and the data lab had to go through, I think I was in a privileged position!

[OP] What is your method when first starting to edit a scene?

[TK] I watch all of the rushes and can quickly see which take might be the bedrock framing for a scene – which is best for a given line. At that point I don’t just slap things together on a timeline. I try to get a first assembly to be as good as possible, because it just helps anyone who sees it. If you show a director or a show runner a sloppy cut, they’ll get anxious and I don’t want that to happen. I don’t want to give the wrong impression.

When I start a scene, I usually put the wide down end-to-end, so I know I have the whole scene. Then I’ll play it and see what I have in the different framings for each line – and then the next line and the next and so on. Finally, I go back and take out angles where I think I may be repeating a shot too much, extend others, and so on. It’s a build-it-up process in an effort to get to a semi-fine cut as quickly as possible.

[OP] Were you able to work with circle takes and director’s notes on Bandersnatch?

[TK] I did get circle takes, but no director’s notes. David and I have an intuitive understanding, which I hope to fulfill each time – that when I watch the footage he shoots, I’ll get what he’s looking for in the scene. With circle takes, I have to find out very quickly whether the script supervisor is any good or not. Marilyn is brilliant, so whenever she’s doing that, I know that take is the one. David is a very efficient director, so there weren’t a massive number of takes – usually two or three takes for each set-up. Everything was shot with two cameras, so I had plenty of coverage. I understand what David is looking for and he trusts me to get close to that.

[OP] With all of the various formats, what sort of shooting ratio did you encounter? Plus, you had mentioned two-camera scenes. What is your approach to that in your edit application?

[TK] I believe the various story paths totaled about four-and-a-half hours of finished material. There was a 3:1 shooting ratio, times two cameras – so maybe 6:1 or even 9:1. I never really got a final total of what was shot, but it wasn’t as big as you’d expect. 

When I have two-camera coverage I deal with it as two individual cameras. I can just type in the same timecode for the other matching angle. I just get more confused with what’s there when I use multi-cam. I prefer to think of it as that’s the clip from the clip. I hope I’m not displaying an anti-technology thing, but I’m used to it this way from doing music videos. I used to use group clips in Avid and found that I could think about each camera angle more clearly by dealing with them separately.

[OP] I understand that you edited Bandersnatch on Adobe Premiere Pro. Is that your preferred editing software?

[TK] I’ve used Premiere Pro on two feature films, which I cut in Dublin, and a number of shorts and TV commercials. If I am working where I can set up my own cutting room, then I’m working with Premiere. I use both Avid and Adobe, but I find I’m faster on Premiere Pro than on Media Composer. The tools are tuned to help me work faster.

The big thing on this job was that you can have multiple sequences open at the same time in Premiere. That was going to be the crunch thing for me. I didn’t know about Branch Manager when I specified Premiere Pro, so I figured that would be the way we would need to review the segments – simply click on a sequence tab and play it as a rudimentary way to review a story path. The company that supplied the gear wasn’t as familiar with Premiere [as they were with Avid], so there were some issues, but it was definitely the right choice.

[OP] Media Composer’s strength is in multi-editor workflows. How did you handle edit collaboration in Premiere Pro?

[TK] We used Adobe’s shared projects feature, which worked, but wasn’t as efficient as working with Avid in that version of Premiere. It also wasn’t ideal that we were working from Avid Nexis as the shared storage platform. In the last couple of months I’ve been in contact with the people at Adobe and I believe they are sorting out some of the issues we were having in order to make it more efficient. I’m keen for that to happen.

In the UK and London in particular, the big player is Avid and that’s what people know, so anything different, like Premiere Pro, is seen with a degree of suspicion. When someone like me comes in and requests something different, I guess I’m viewed as a bit of a pain in the ass. But, there shouldn’t just be one behemoth. If you had worked on the old Final Cut Pro, then Premiere Pro is a natural fit – only more advanced and supported by a company that didn’t want to make smart phones and tablets.

[OP] Since Adobe Creative Cloud offers a suite of compatible software tools, did you tap into After Effects or other tools for your edit?

[TK] That was another real advantage – the interaction with the graphics user interface and with After Effects. When we mocked up the first choice points, it was so easy to create, import, and adjust. That was a huge advantage. Our VFX editor was able to build temp VFX in After Effects and we could integrate that really easily. He wasn’t just using an edit system’s effects tool, but actual VFX software, which seamlessly integrated with Premiere. Although these weren’t final effects at full 4K resolution, he was able to do some very complex things, so that everyone could go, “Yes, that’s it.”

[OP] In closing, what take-away would you offer an editor interested in tackling an interactive story as compared to a conventional linear film?

[TK] I learned to love spreadsheets (laugh). I realized I had to be really, really organized. When I saw the script I knew I had to go through it with a fine-tooth comb and get a sense of it. I also realized you had to unlearn some things you knew about conventional episodic TV. You can’t think of some things in the same way. A practical thing for the team is that you have to have someone who knows coding, if you are using a similar tool to Branch Manager. It’s the only way you will be able to see it properly.

It’s a different kind of storytelling pressure that you have to deal with, mostly because you have to trust your instincts even more that it will work as a coherent story across all the narrative paths. You also have to be prepared to unlearn some of the normal methods you might use. One example is that you have to cut the opening of different segments differently to work with the last shot of the previous choice point, so you can’t just go for one option – you have to think more carefully about what the options are. The thing is not to walk in thinking it’s going to be the same as any other production, because it ain’t.

For more on Bandersnatch, check out these links: postPerspective, an Art of the Guillotine interview with Tony Kearns, and a scene analysis at This Guy Edits.

Images courtesy of Netflix and Tony Kearns.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its KiPro products. From an editor’s point-of-view, I would much rather be handed Alexa or KiPro media files than any other camera product, simply because these are the most straight-forward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of files prior to ingesting these for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats to which I don’t do this.) When it’s a camera that doesn’t generate unique file names, then I will run a batch renaming application in order to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
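
That flattening routine can be scripted, with the obvious caveat that the camera originals must already be parked safely on a shelf drive before anything is deleted. A rough sketch – the folder names vary by manufacturer, and, as noted, I skip this entirely for RED and GoPro 360 media:

```python
import shutil
from pathlib import Path

def flatten_card(card_copy: str) -> None:
    """Move every media file on a copied card up to its root folder, then drop the empty camera folders."""
    root = Path(card_copy)
    for item in list(root.rglob("*")):                 # snapshot first, since files move while walking
        if item.is_file() and item.parent != root:
            shutil.move(str(item), str(root / item.name))
    for sub in [d for d in root.iterdir() if d.is_dir()]:
        shutil.rmtree(sub)                             # only after the camera originals are safely on the shelf

# Assumes clip names are already unique (see the renaming step) -- duplicates would collide at the root.
# flatten_card("/Volumes/Media/Dailies/092819/A-CAMERA/A001")
```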

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but rather there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good iso/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the iso and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters