Video Technology 2020 – The Cloud

The “cloud” is merely a collection of physical data centers in multiple locations around the world – not much different from a small storage center you might run yourself. Of course, these facilities employ far more advanced systems for power, redundancy, and security than you do. When you work with one of the companies marketing cloud-based editing or a review-and-approval service, like Frame.io or Wipster, that company provides the user-facing interface, but is actually renting storage space from one of the big three cloud providers – Google, Amazon, or Microsoft.

There are three reasons that I’m skeptical about ubiquitous, cloud-based editing (with media at native resolutions) in the short term: upload speeds, cost, and security.

Speed

5G (fifth generation wireless) is the technology predicted to offer adequate speeds and low latency for native 4K (and higher) media. While 5G will be a great advancement for many things, it’s a short-distance signal that requires far more transmission sites than current wireless technology. Full coverage in most metro areas, let alone widespread geographical coverage worldwide, will take many years to roll out. Other than potential camera-to-cloud uploads of proxy media in the field, 5G won’t soon be the killer solution. Current technology still dictates that if you want the fastest possible upload speeds for large amounts of data, you have to tap in as close as possible to the internet’s backbone.
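To put rough numbers on why upload speed is the sticking point, here is a back-of-the-envelope sketch. The 10TB shoot and the sustained speeds are hypothetical, and real-world transfers run slower than the math suggests:

```python
# Back-of-the-envelope upload times for a hypothetical 10TB shoot at various
# sustained upload speeds. Pure arithmetic; real-world transfers run slower.
def upload_hours(terabytes: float, megabits_per_sec: float) -> float:
    bits = terabytes * 1e12 * 8                    # decimal terabytes to bits
    return bits / (megabits_per_sec * 1e6) / 3600  # seconds to hours

for mbps in (25, 100, 1000):
    hours = upload_hours(10, mbps)
    print(f"10TB at {mbps} Mbps = {hours:,.0f} hours ({hours / 24:.1f} days)")
```

Even at a sustained gigabit, that’s the better part of a day of uploading before the first edit can happen.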

Cost

Cloud storage is cheap, but extensive uploads and downloads aren’t. Unfortunately, modern video resolutions also generate huge amounts of data on every shoot. Uploading native 4K media for a week-long production is considerably more expensive than FedEx and overnight charges to ship drives. What about long-term storage? Let’s say that all of your native media is in the cloud and you pay according to a monthly or annual subscription plan. But what if you want to stop? That media will have to be downloaded and stored locally, which will incur data transfer charges, as well as your time to download everything.

Security

Think these sites are unequivocally secure? Look at any data hack at a major company. Security is such a concern in our business that most major movie studios won’t let their editors connect their computers to the internet. Many make editors check their cell phones at the door. No matter how secure the cloud may be, it’s going to be a hard sell, except for limited slices of the production, such as cloud-based VFX rendering.

I do believe 2020 will be a year in which many take advantage of some form of long-distance, cloud-based edit services using low-res proxy media. Increasingly, services will be used to move dailies and deliverables around the globe via the cloud. But that’s a far cry from cloud-based editing becoming the norm. One edit scenario many will experiment with is to store the edit project files in the cloud, with the media mirrored locally at each edit site. This way, only the lightweight files used for edit collaboration need to be moved over the internet. Think of this as Google Docs for editing. Adobe already offers a version of this, but I suspect you’ll see others, including solutions for Final Cut Pro X. So while true cloud-based editing is not a near-term solution, bits and pieces will become increasingly commonplace.

Originally written for Creative Planet Network.

©2020 Oliver Peters

Every NLE is a Database

Apple’s Final Cut Pro X has spawned many tribal arguments since its launch eight years ago. There have been plenty of debates about the pros and cons of its innovative design and editing model. One claim that I’ve heard a number of times is that FCPX is a relational database, while traditional editing applications are more like an Excel spreadsheet. I can see how the presentation of a bin in the list view format might convey that impression, but that doesn’t make it accurate. A spreadsheet is a grid of cells holding values and formulae, regardless of whether the information is text or numbers. All nonlinear editing applications (NLEs) use a relational database to track media, although the type and format of this database differs among brands. In every case, these function altogether differently from a spreadsheet.

It started with film

When all editing was done on film, editors cut work print – a positive copy printed from the camera negative. Edits made on the work print were eventually duplicated on the pristine negative by a negative cutter, based on a cut list. Determining where to cut and join the film segments was based on a list of edits corresponding to the source rolls of the film, plus a foot+frame count for each edit point. The work print, which the editors could physically cut and splice as needed, was effectively an abstraction of – and stand-in for – the negative.

In order to enable the process, assistant editors (or in some cases, the editor) created a handwritten log, known as a codebook. This started with the dailies and included all the pertinent information, such as source roll, shoot days/dates, scenes/takes, director’s notes, editor’s notes, and so on. The codebook was a physical database that allowed an editor to know what the options were and where to find them.

During the videotape-editing era prior to NLEs, any sort of database for tracking source information was still manual. Only the cut list portion, known as the edit decision list, could be generated by the edit computer, based on the timecode values recorded on the tape. Timecode became the electronic equivalent of the foot+frame count of physical film.

Fast forward to the modern era with file-based camera acquisition and ubiquitous, inexpensive editing software. The file recorded by the camera is a container of sorts that holds essence (audio and video) and metadata (information about the essence). Some cameras generate a lot of metadata and others don’t. One example of this type of metadata that we all encounter is the information embedded into digital still photos, which can include location, lens data, and a ton more.

When clips are ingested/imported into your NLE – whether into a project, bin, folder, or an event – the NLE links to the essence of the media clips on the hard drive or camera card and brings in whatever clip metadata is understood by that application. In addition, the user can add and merge a lot more metadata derived from other sources, like the sound recorder, script supervisor notes, electronic script, and manually-added data.

The clip that you see in the bin/event/folder is an abstraction for the actual audio and video media, just like work print was for film editors. The bin/folder/event data entries are like the film editor’s codebook and are tracked in the internal database used by that application to cross-reference the clip with the actual stored media. Since a clip in the app’s browser is simply an abstraction, it can appear in multiple places at the same time – in various bins and sequences. The internal database makes sure that each of these instances of the clip references the same piece of media, accurately down to the video frame or audio sample.
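To make that concrete, here is a minimal sketch of what such a relational model might look like, written against SQLite. The table and column names are purely illustrative – no shipping NLE exposes its database like this – but it shows how one piece of media can back a clip that appears in several bins at once:

```python
# Minimal sketch of an NLE-style relational model: clips reference media files,
# and the same clip can appear in many bins. Names are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE media (
    media_id   INTEGER PRIMARY KEY,
    file_path  TEXT,    -- where the essence lives on disk
    start_tc   TEXT,    -- source timecode
    frame_rate REAL
);
CREATE TABLE clips (
    clip_id   INTEGER PRIMARY KEY,
    media_id  INTEGER REFERENCES media(media_id),
    clip_name TEXT
);
CREATE TABLE bins (
    bin_id   INTEGER PRIMARY KEY,
    bin_name TEXT
);
CREATE TABLE bin_contents (  -- one clip may sit in many bins
    bin_id  INTEGER REFERENCES bins(bin_id),
    clip_id INTEGER REFERENCES clips(clip_id)
);
""")

con.execute("INSERT INTO media VALUES (1, '/media/A001_C003.mov', '01:02:03:04', 23.976)")
con.execute("INSERT INTO clips VALUES (1, 1, 'A001_C003')")
con.executemany("INSERT INTO bins VALUES (?, ?)", [(1, "Selects"), (2, "Scene 12")])
con.executemany("INSERT INTO bin_contents VALUES (?, ?)", [(1, 1), (2, 1)])

# The same clip shows up in both bins, yet both rows resolve to one media file.
for row in con.execute("""
    SELECT bins.bin_name, clips.clip_name, media.file_path
    FROM bin_contents
    JOIN bins  ON bins.bin_id    = bin_contents.bin_id
    JOIN clips ON clips.clip_id  = bin_contents.clip_id
    JOIN media ON media.media_id = clips.media_id
"""):
    print(row)
```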

It doesn’t matter how the bin looks

The spreadsheet comparison is based on how bins have appeared in most NLEs, including Final Cut Pro “legacy,” Avid Media Composer, and others. Unfortunately, that opinion is usually based on a narrow exposure to other NLEs. As I said, at the core, every NLE is a relational database. And so, there are many other things that can be tracked and many other ways the data can be displayed.

For instance, older Quantel edit systems displayed source information based on what we would today consider a smart search view. The entirety of the source material was not displayed in front of the editor, since it was a single-screen layout. Entering criteria into a search field would sift through the material and present the matching clips.

Avid Media Composer systems can also track media using Script Integration (sometimes incorrectly referred to as ScriptSync, which is a separate Avid option). This is a graphical bin layout with the script text displayed on screen and clips linked to the coverage of each scene. Media Composer and now Premiere Pro both permit a freeform clip view for a bin, in which the editor can freely rearrange the position of the clip thumbnails within the bin window. This visual juxtaposition of clips conveys important information to the editor.

All NLEs have multiple ways to present the data and aren’t limited to a grid-style list view that resembles a spreadsheet or a grid of clip thumbnails. Enabling these alternate views takes a lot more than simply cross-referencing your bin and timelines against a set of edit points. That’s where databases come in and why every NLE is built around one.

How can you be in two places at once when you’re not anywhere at all?

My apologies to Firesign Theatre. A huge aspect of the Final Cut Pro X edit workflow is the use of keyword collections. Thanks to them, a clip isn’t limited to living in just a single bin. While this is a selling point for FCPX, it is also well within the capabilities of most NLEs.

Organizing your event (bin) media in FCPX can start by assigning keywords to each clip. Each new keyword used creates a keyword collection – sort of a “smart sub-bin.” As you assign one or more keywords to a clip, FCPX automatically sorts the clip into those corresponding keyword collections. For example, let’s say you have a series of wide and close-up shots featuring both male and female actors. Clip 1 might be sorted into WIDE and MAN; Clip 2 into WIDE and WOMAN; Clip 3 into WOMAN and CLOSE-UP. So then the keyword collection for WIDE displays Clip 1 and Clip 2; MAN displays Clip 1; WOMAN displays Clip 2 and Clip 3; CLOSE-UP displays Clip 3.
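Purely as an illustration of that sorting logic (and not of how FCPX implements it internally), the same example can be expressed in a few lines:

```python
# Illustration of the keyword-collection logic in the example above: assigning
# keywords to clips implicitly builds the collections themselves.
from collections import defaultdict

clip_keywords = {
    "Clip 1": ["WIDE", "MAN"],
    "Clip 2": ["WIDE", "WOMAN"],
    "Clip 3": ["WOMAN", "CLOSE-UP"],
}

collections = defaultdict(list)
for clip, keywords in clip_keywords.items():
    for kw in keywords:
        # A clip is an abstraction, so it can live in several collections at once.
        collections[kw].append(clip)

for kw, clips in collections.items():
    print(f"{kw}: {', '.join(clips)}")
# WIDE: Clip 1, Clip 2 / MAN: Clip 1 / WOMAN: Clip 2, Clip 3 / CLOSE-UP: Clip 3
```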

Once this initial step is completed, it enables the editor to view source clips in a more focused manner. Instead of wading through 100 clips in the event (bin) each time, the editor may only have to deal with the 10 clips in the CLOSE-UP keyword collection – or in any other collection. The beauty of FCPX’s interface design is the speed and fluidity with which this can be accomplished. This feature is one of the hallmarks of the application and no other NLE does it nearly as elegantly. In fact, FCPX tackles the challenge of narrowing down the browser options through three methods – ratings, keyword collections, and smart collections (described in this linked tutorial by Simon Ubsdell).

As elegantly as Final Cut tackles this task, that doesn’t mean that other NLEs can’t function in a similar manner. Within Premiere Pro, those exact same keywords can be assigned to the clips. Then simply create a set of search bins using those same keywords as search criteria. The result is the exact same type of distribution of clips into collections, where multiple clips can appear in multiple bins at the same time. Likewise, the editor doesn’t need to go through the full set of clips in a bin, but can concentrate on the small handful in any given search bin. Media Composer also offers search functions, as well as custom sift routines, which enable you to display only the clips matching specific column details, like a custom keyword.

Most NLEs can only store one set of in/out edit marks on a clip within a bin at any given time. Final Cut Pro X, on the other hand, offers range-based selection – clips can retain multiple in/out selections at once. Nevertheless, other NLEs aren’t far behind here either. The obvious solution most editors use when this is needed is to create a subclip, which can be a duplicate of the entire clip or just a portion of it. Need to pull multiple sections of the clip? Simply create multiple subclips. In effect, these are the same as range-based selections in Final Cut Pro X. Admittedly, the FCPX method is more fluid and straightforward: range-based selections are virtual subclips that are dynamically created by the editor. But unlike subclips, they can’t be moved separately to other events (bins). Two ways to tackle a very similar need.

The bottom line is that under the hood, all NLEs are still very much the same. Let me emphasize that I’m not arguing the superiority, speed, or elegance of one approach or tool over another. Every company has its own set of unique features that appeal to different types of editors. They are simply different methods to place information at your fingertips, get roadblocks out of the way, and thus make editing more creative and enjoyable.

©2019 Oliver Peters

Shared Storage Solutions

 

I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and done basic management of a number of different shared storage solutions, going all the way back to Avid MediaShare SCSI. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS); but to the user, there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be simultaneously accessed by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be partitioned so that it shows up as several smaller volumes. Read/write permissions can be controlled in various ways: all users can have read/write access to everything, or access can be selectively assigned by the system administrator.

The basic building block of a NAS is the main chassis, which contains the storage, but also a small, on-board computer – the “brain” of the system. This runs its own operating system, usually a Linux variant (such as CentOS) or a Unix-based OS built around Sun’s ZFS file system. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility may not be perfect. For example, some file names using special characters that are valid in macOS may not be properly read through one of these network protocols. So be very structured about the naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
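As a small illustration of that naming rule (my own rule of thumb, not any vendor’s specification), a quick pattern check might look like this:

```python
# Check file names against the rule above: only letters, numbers, spaces,
# dashes, and underscores, with no leading or trailing space.
import re

SAFE_NAME = re.compile(r"^(?! )[A-Za-z0-9 _-]+(?<! )(\.[A-Za-z0-9]+)?$")

for name in ["Scene_12-Take 03.mov", "clip:final?.mov", " padded.mov"]:
    print(name, "->", "OK" if SAFE_NAME.match(name) else "rename this")
```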

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But, you cannot mix and match drive types or sizes within the chassis.

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives and a power supply. Normally these don’t have to be the same capacity as the original chassis if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN systems are RAID-protected in various configurations. RAID protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without losing your media. However, that protection takes overhead, which means you must give up some of the total capacity to enable it.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is failing or has failed, the unit will continue to operate, but with reduced performance. In some cases that may not even be noticed by the operator. When a drive goes bad, it can be replaced by a matching raw drive and the unit will rebuild the RAID data, redistributing it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can keep working during this rebuilding period, I have found that in actual practice performance is so poor that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but at the cost of available storage. Different manufacturers have different ways of handling RAID configurations, so there is no rule-of-thumb as to what percentage you will lose with every NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
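Here is the same arithmetic as a quick sketch, using the figures quoted above. Treat the numbers as examples only, since the overhead ratio depends entirely on the vendor’s RAID scheme:

```python
# Rough capacity arithmetic using the figures quoted above. The overhead ratio
# varies by vendor and RAID scheme, so treat these numbers as examples only.
def working_capacity(raw_tb: float, net_tb: float, fill_ceiling: float = 0.85):
    overhead_pct = 100 * (1 - net_tb / raw_tb)  # capacity lost to RAID protection
    usable_tb = net_tb * fill_ceiling           # stay under the 80-90% fill guideline
    return overhead_pct, usable_tb

for label, raw, net in [("QNAP example", 256, 206), ("LumaForge example", 480, 316)]:
    overhead, usable = working_capacity(raw, net)
    print(f"{label}: {raw}TB raw -> {net}TB net "
          f"({overhead:.0f}% overhead), ~{usable:.0f}TB working space at 85% fill")
```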

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for longer runs or when the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals, all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the NAS systems.

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, performance suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one of the systems while editing proceeded on the others, slowed performance. Image sequence files really hurt overall system performance. You could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie), the key difference between the two systems is architecture. Unlike our QNAP unit, which uses a network switch, the Jellyfish was ordered with enough on-board cards to enable a direct run to all nine workstations without a separate network switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of the QNAP, though. It has been very robust, and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to the Jellyfish), as well as for an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We configured Jellyfish with a tenth Ethernet direct port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to Jellyfish gives them the ability to work with media files without tying up editing workstations.

The Jellyfish system has proven itself over time. Direct head-to-head performance between Jellyfish and QNAP with a small project or a few media files is not dramatically different. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media without the prerequisite of creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact the performance of the other edit sessions. Image sequences are easier to deal with. I can render to and from the Jellyfish when I’m grading sessions in Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

©2019 Oliver Peters

Handling and Protecting Media

Once the industry entered the file-based era, we realized that dealing with and properly archiving audio and video files could make or break a production company. No more videotapes on the shelf to pull footage from. Unfortunately, many companies, producers, clients, and editors simply solved this with a hodgepodge of small, portable drives – FireWire, USB, Thunderbolt, whatever. That’s no longer practical. A typical 10-day, 4K shoot with a handful of formats can easily generate 8-10TB of original footage. That’s if the production is structured. Make that a two-to-three-week documentary or reality-style production and you’ll have closer to 20-30TB. Not exactly something you want to deal with in post using a bunch of orange LaCie drives!

The road to safeguarding your files

At the day job, we were able to invest in a LumaForge Jellyfish shared storage network (NAS). It’s 480TB, which sounds like a lot, but after RAID protection the available net capacity is 316TB. And you only want to use 80%-90% of that for the most efficient operation. While it still sounds like a lot of storage, it is a finite amount. This means that you need to develop a strategy for archiving older projects and the associated media, yet still be able to easily find and restore them later for revisions.

Cloud storage remains a pipe dream at these quantities. LTO data tape back-up is also impractical, because of its linear read/write nature – it is only intended for deep-storage archiving. Facilities that have attempted to use LTO as a type of near-line storage – with frequent restores, updates, and subsequent re-archiving – have worn out their LTO tapes long before the rated life.

Efficient media handling starts when a project or production is first originated. In our case, every new project gets a folder on the Jellyfish, and inside that folder is a standard group of subfolders for the corresponding project files, graphics, exports, and source footage. We assign every project a job number for billing, and that number is part of the top-level folder name, as well as part of any project file name. This default template starting point is generated for each new production using the Post Haste application.
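Post Haste builds this from a template. Purely to illustrate the idea, a stand-in script might look like the following – the subfolder names, path, and job number are placeholders, not our actual template:

```python
# A stand-in for a Post Haste-style template: create a job-numbered project
# folder with a standard set of subfolders. All names and paths are placeholders.
from pathlib import Path

SUBFOLDERS = ["Project Files", "Graphics", "Exports", "Source Media"]

def create_project(root: str, job_number: str, title: str) -> Path:
    project = Path(root) / f"{job_number}_{title}"
    for sub in SUBFOLDERS:
        (project / sub).mkdir(parents=True, exist_ok=True)
    return project

if __name__ == "__main__":
    print(create_project("/Volumes/Jellyfish/PROJECTS", "1234", "ClientSpot"))
```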

The location crew

On location, all media is copied daily (with verification using the Hedge application) to both master and back-up drives. Depending on the size of the crew, this is the responsibility of the DIT, assistant cameraman, or the director of photography. On large productions, the cost of these drives is built into the budget and they later end up being stored on the shelf for safekeeping. On smaller or fast-turnaround jobs, temporary fast SSDs are used, which will later be reused on other projects.

Post starts here

The next step back at the shop is to copy all of this material from the location drives onto the Jellyfish into that project’s Source Media or Dailies subfolder. Once copied, I will proceed to clean up and reorganize all media into subfolders according to this hierarchy:

DATE / CAMERA / REEL

For example: 092819/A-CAMERA_ALEXA/A001

Or outside of the US, maybe: 28SEPT19/A-CAMERA_ALEXA/A001

If a camera file is buried several folders deep – due to the camera card structure or an error made by the crew member on location – I will move those files to the top level within the REEL subfolder without any other levels in between. Camera folders, like DCIM, CLIP, etc., are thus orphaned and are deleted from the Jellyfish. Remember that I still have the original master drive from the location, which will sit on the shelf. If I ever need to get back to the file in its original container, I have that option.

I discussed relinking strategies in the previous post and that comes into play here. Files from semi-pro and non-pro cameras, like DSLRs, GoPros, iPhones, etc., have a prefix added to the front of the file name using the Better Rename application. The prefix is typically a short 8-10 character alphanumeric string that indicates a job name reference, date, camera letter, and reel.

For instance, a file from the B-camera’s reel 7 for a production done for project ABC on September 28th would get the prefix “ABC0928B07_”. The camera-generated clip name would follow the underscore in that name. The point of doing this is to guarantee unique file names, especially when multiple cameras and filming days are involved. I also apply this process to sound files, even if the clip name reflects the scene and take number.
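Better Rename handles this interactively, but the same prefixing step can be sketched in a few lines. The folder path and prefix below are only examples, and nothing is renamed until dry_run is set to False:

```python
# Rough equivalent of the Better Rename step: prepend a unique
# job/date/camera/reel prefix to every clip in a folder.
from pathlib import Path

def prefix_clips(folder: str, prefix: str, dry_run: bool = True) -> None:
    path = Path(folder)
    if not path.is_dir():
        print(f"No such folder: {folder}")
        return
    for clip in sorted(path.iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            target = clip.with_name(prefix + clip.name)
            print(f"{clip.name} -> {target.name}")
            if not dry_run:
                clip.rename(target)

# B-camera, reel 7, project ABC, shot on September 28th:
prefix_clips("/Volumes/Jellyfish/ABC/Dailies/092819/B-CAMERA_GOPRO/B007", "ABC0928B07_")
```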

The last step is to transcode and rate-convert all non-pro media. If my base rate is 23.98fps (23.976), then files like GoPro 59.94fps media get turned into ProRes at 23.98 (slomo). In that case, I will have a subfolder with the original media and a second subfolder with the transcoded media, both with proper file names. I usually apply the “_PR2398” suffix to these transcoded files. I have found that DaVinci Resolve is the best and fastest tool for this transcoding process and large batches can be run overnight as needed.

Archiving your files

If the crew used temporary drives on location, then before these are reformatted and recycled, they are copied to inexpensive portables, like Seagate or Western Digital USB drives. These are then parked on the shelf for safekeeping. The objective is to end up with at least two copies of the source media – the unaltered, camera-original files and the new master files on the Jellyfish.

Once editing has been completed and approved and the client files have been delivered, we move into the archiving stage. For nearly every project, we try to make sure that a ProRes master and a textless ProRes master have been generated by the editor. In addition, the mixer or the editor will generate a mixed audio file and audio stems for dialogue, SFX, and music (as separate files). Many times, you end up making future changes or versions using these files without going back to the original project file.

The entire project folder with all of the associated media is now copied to a raw, removable hard drive. These are enterprise-grade drives. All of our workstations are equipped with docking stations for such drives. To date, we are up to 200 drives, ranging in size from 2TB to 8TB. They are indexed using the simple DiskCatalogMaker application, which generates a searchable index file of all of these archive drives. (Note – I would recommend spinning up these archive drives every few months.)
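DiskCatalogMaker handles this with a GUI and its own catalog format. Reduced to its essence, the idea – walk each archive drive and record what’s on it in a searchable index – might be sketched like this, with the paths and drive label as placeholders:

```python
# The cataloging step reduced to its essence: walk an archive drive and append
# every file to a searchable CSV catalog. Paths and labels are placeholders.
import csv
from pathlib import Path

def index_drive(mount_point: str, catalog_csv: str, drive_label: str) -> None:
    with open(catalog_csv, "a", newline="") as f:
        writer = csv.writer(f)
        for item in Path(mount_point).rglob("*"):
            if item.is_file():
                writer.writerow([drive_label,
                                 str(item.relative_to(mount_point)),
                                 item.stat().st_size])

# index_drive("/Volumes/ARCHIVE_042", "archive_catalog.csv", "ARCHIVE_042")
```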

Let me mention that while this can be done at the end, I will often split this archival step into two phases. I will first copy only the Dailies media right after I have organized it on Jellyfish (before any editing), leaving the other project subfolders blank. The reason is that once location production is done, there won’t be anything else added to Dailies. In addition, it gives me three copies of the camera files – the location drive (or its back-up), Jellyfish, and the archive drive. Once the project is finished, I only need to copy the rest of the material from the other subfolders.

The last step is to move the project folder from the PROJECTS master folder on the Jellyfish to the BACKED UP master folder. As long as we have space on the Jellyfish, the project is never deleted. Often changes are required. When that happens, the affected project folder is moved from BACKED UP back to PROJECTS. The changes are made and client files delivered. Then the archive drive for that project is updated and re-indexed to the DiskCatalogMaker catalog file. The project folder is finally returned to the BACKED UP folder. As we need space on the Jellyfish, the oldest projects that haven’t been touched in a long while are deleted.

Redundancy is the key

There are two additional protection steps taken. All active project files (usually Premiere Pro) are copied to the company’s Dropbox by every editor at the end of each day. In the event of a catastrophic NAS failure – before the completion of that project – we can at least get to the project file in the cloud (Dropbox) and the media stored on the archive hard drive in order to restore the edit. (Note that if you do this with FCPX Libraries, they must first be zipped, because Dropbox and FCPX Libraries do not play well together.)

The second item is that we have an additional folder on the Jellyfish for all completed masters. When an editor generates ProRes master and/or textless files, those files are also copied to this masters folder. That gives us quick access to all final versions, should the client require an extra web file or some other type of deliverable. It’s easy to simply encode new files from these ProRes masters, without needing to search out the original project folder.

These steps may sound complex and daunting if you aren’t currently doing them. I have covered some of this in past posts, but I do update my processes over time. Once you get into a routine of doing these steps, the benefits pay off immensely. Your media is better protected, it’s easier to find in the future, and relinking is a no-brainer.

©2019 Oliver Peters

Foolproof Relinking Strategy

Prior to file-based camera capture, film and then videotape were the dominant visual acquisition technologies. To accommodate them, post-production adopted a two-stage solution: work print editing + negative conform for film, and offline/online editing for video. During the linear editing era, high-res media on tape was transferred to a low-res tape format, like 3/4″, for creative editing (offline). The locked cut was then assembled and enhanced with effects and graphics in a high-end online suite, using an edit decision list and the high-res media. The inherent constraints of tape formats forced consistency in media standards and frame rates.

In the early nonlinear days, storage capacities were low and hard drives expensive, so this offline/online methodology persisted. Eventually storage could cost-effectively handle high-res media, but that didn’t eliminate these workflows. File-based camera acquisition has brought down operating costs, but the proliferation of formats and ever-increasing resolutions mean that there is still a need for such a two-stage approach. This is now generally referred to as proxy versus full-resolution editing. The reasons vary, but typically it’s a matter of storage size, system performance, or the capabilities of the system and operator/artist handling the finishing/full-res (aka “online”) stage.

All of this requires moving media around among drives, systems, locations, and facilities, thus making correct list management essential. Whether or not it works well depends on the ability to accurately relink media with each of these moves. Despite the ability of most modern NLEs to freely mix and match formats, sizes, frame rates, etc., ignoring certain criteria will break media relinking. You must be able to relink the same media between systems or between low and high-res media on the same or different systems.

Criteria for successful relinking

– Unique file names that match between low-res and high-res media (extensions are usually not important) – see the name-matching sketch after this list.

– Proper timecode that does not repeat within a single clip.

– A single, standard frame rate that matches the project’s base frame rate. Using conform or interpret functions within an NLE to alter a clip’s frame rate will mess up relinking on another system. Constant speed changes (such as slomo at 50%) are generally OK, but speed ramp effects tend to be proprietary to every NLE and typically do not translate correctly between different edit or grading applications.

– Match audio configurations between low and high-res media. If your camera source has eight channels of audio, then so must the low-res proxy media.

– Match clip duration. High-res media and proxies must be of the exact same length.

– Note that what is not important is matching frame size or codec or movie wrapper type (extension).
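As a quick sanity check of that first criterion, a short script can flag any proxy whose name (minus extension) has no matching master, and vice versa. The folder paths are only examples; checking timecode, frame rate, duration, and audio channels would need a media prober and is left out here:

```python
# Sanity check for the file-name criterion: every proxy name (minus extension)
# should have a matching master, and vice versa. Paths are examples only.
from pathlib import Path

def compare_names(proxy_dir: str, master_dir: str):
    proxies = {p.stem for p in Path(proxy_dir).rglob("*") if p.is_file()}
    masters = {m.stem for m in Path(master_dir).rglob("*") if m.is_file()}
    return proxies - masters, masters - proxies

orphan_proxies, orphan_masters = compare_names("/Volumes/Edit/Proxies",
                                               "/Volumes/Edit/Masters")
print("Proxies with no matching master:", sorted(orphan_proxies))
print("Masters with no matching proxy:", sorted(orphan_masters))
```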

Proxy workflows

Several NLE applications – particularly Final Cut Pro X and Premiere Pro – offer built-in proxy workflows, which automatically generate proxy media and let the editor seamlessly toggle between full-res and proxy files. These are nice as long as you don’t move files around between hard drives.

In the case of Premiere Pro, you can delete proxy files once you no longer need them. From that point on you are only working with full-res media. However, the Premiere project continues to expect the proxy files to be available and wants to locate them when you launch the project. You can, of course, ignore this prompt, but it’s still hard to get rid of completely.

With FCPX, any time you move media and the Library file to another drive with a different volume name, FCPX presents a relink dialogue. It seems to relink master clips just fine, but not the proxy media that it generated, if that media is stored outside of the Library package. The solution is to set your proxy location to be inside the Library. However, this will cause the Library file to bloat in size, making transfers of Library files between drives and editors that much more cumbersome. So for these and other reasons (like not adhering strictly to the criteria listed above), relinking can range from problematic to impossible (Avid, I’m looking at you).

Instead of using the built-in proxy workflows for projects with extended timetables or huge amounts of media, I prefer an old-school method. Simply transcode everything, work with low-res media, and then relink to the master clips for finishing. Final Cut Pro X, Premiere Pro, and Resolve all allow the relinking of master clips to different media if the criteria match.

Here are five simple steps to make that foolproof.

1. Transcode all non-professional camera originals to a high-quality mastering codec for optimized performance on your systems. I’m talking about footage from DSLRs, GoPros, drones, smart phones, etc. On Macs this will tend to be the ProRes codec family. On PCs, I would recommend DNxHD/HR. Make sure file names are unique (rename if needed) and that there is proper timecode. Adjust frame rates in the transcode if needed. For example, 29.97fps recordings for a playback base rate of 23.98fps should be transcoded to play natively at 23.98fps. This new media will become your master files, so park the camera originals on the shelf with the intent of never needing them (but for safety, DO NOT erase).

2. Transcode all master clips (both pro formats like RED or ARRI, as well as those transcoded in step 1) to your proxy format. Typically this might be ProRes Proxy at a lower frame size, like 1280 x 720 – see the batch transcode sketch after these steps. (This is obviously an optional step. If your system has sufficient performance and you have enough available drive space, then you may be able to simply edit with your master source files.)

3. Edit with your proxy media.

4. When you are ready to finish, relink the locked cut to your master files – pro formats like RED and ARRI – and/or the high-res transcodes from step 1.

5. Color correct/grade and add any final effects for finish and delivery.
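As mentioned under step 2, here is one hedged way to batch the proxy transcodes by calling ffmpeg from a small script. The flags used are standard ffmpeg options (prores_ks profile 0 is ProRes Proxy), but verify them against your own build, and the paths are placeholders:

```python
# Batch step 2 by calling ffmpeg: make 1280x720 ProRes Proxy files and copy
# the audio untouched so the channel configuration still matches the master.
import subprocess
from pathlib import Path

def make_proxy(master: str, proxy_dir: str) -> Path:
    proxy = Path(proxy_dir) / (Path(master).stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(master),
        "-vf", "scale=1280:720",                 # smaller proxy frame size
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-c:a", "copy",                          # keep the audio channels identical
        str(proxy),
    ], check=True)
    return proxy

# for clip in sorted(Path("/Volumes/Edit/Masters").glob("*.mov")):
#     make_proxy(str(clip), "/Volumes/Edit/Proxies")
```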

©2019 Oliver Peters

Affinity Publisher

The software market offers numerous alternatives to Adobe Photoshop, but few companies have taken on the challenge to go further and create a competitive suite of graphics tools – until now. Serif has completed the circle with the release of Affinity Publisher, a full-featured, desktop publishing application. This adds to the toolkit that already includes Affinity Photo (an image editor) and Affinity Designer (a vector-based illustration app). All three applications support Windows and macOS, but Photo and Designer are also available as full-fledged pro applications for the iPad. This graphic design toolkit collectively constitutes an alternative to Adobe Photoshop, Illustrator, and InDesign.

Personas and StudioLink

The core user interface feature of the Affinity applications is that various modules are presented as Personas, which are accessed by the icons in the upper left corner of the interface. For example, in Affinity Photo basic image manipulation happens in the Photo Persona, but for mesh deformations, you need to shift to the Liquify Persona.

Affinity Publisher starts with the Publisher Persona. That’s where you set up page layouts, import and arrange images, create text blocks, and handle print specs and soft proofs. However, with Publisher, Affinity has taken Personas a step further through a technology they call StudioLink. If you also have the Photo and Designer applications installed on the same machine, then a subset of those applications is directly accessible within Publisher as the Photo and/or Designer Persona. If you have both Photo and Designer installed, then the controls for both Personas are functional in Publisher; but if you only have one of them installed, then just that Persona offers the additional controls.

Users of Adobe InDesign know that to edit an image within a document you have to “open in Photoshop,” which launches the full Photoshop application, where you make the changes and then roundtrip back to InDesign. With Affinity Publisher the process is more straightforward, because the Photo Persona is right there. Just select the image within the document and click on the Photo Persona button in the upper left, which shifts the UI to display the image processing tools. Likewise, clicking on the Designer Persona will display the vector-based drawing tools. Effectively, Serif has done with Affinity Publisher what Blackmagic Design has done with the various pages in DaVinci Resolve. Click a button and shift to the function specifically designed for the task at hand, without the need to change to a completely different application.

Document handling

All of the Affinity apps are layer-based, so while you are working in any of the three Personas within Publisher, you can see the layer order on the right to let you know where you are in the document. Affinity Photo offers superb compatibility with layered Photoshop PSD files, which means that your interchange with outside designers – who may use Adobe Photoshop – will be quite good.

Affinity Publisher documents are based on Master Pages and Pages. This is similar to the approach taken by many website design applications. When you create a document, you can set up a Master Page to define a uniform style template for that document. From there you build individual Pages. Any changes made to a Master Page will then update those design elements across all of the Pages in the rest of that document. Since Affinity Publisher is designed for desktop publishing, single and multi-page document creation and export settings are both web and print-friendly. Publisher also offers a split-view display, which presents your document as a vector view on the left and as a rasterized pixel view on the right.

Getting started

Any complex application can be daunting at first, but I find the Affinity applications offer a very logical layout that makes it easy to get up to speed. In addition, when you start any of these applications you will first see a launch page that offers a direct link to various tutorials, sample documents and/or layered images. A beginner can quickly download these samples in order to dissect the layers and see exactly how they were created. Aside from these links to the tutorials, you can simply go to the website where you’ll find extensive, detailed video tutorials for each step of the process for any of these three applications.

If you are seeking to shake off subscriptions, or are simply not bound to Adobe’s design tools for work, then these Affinity applications offer a great alternative. Affinity Publisher, Photo, and Designer are standalone applications, but the combination of the three forms a comprehensive image and design collection. Whether you are a professional designer or just someone who needs to generate the occasional print document, Affinity Publisher is a solid addition to your software tools.

©2019 Oliver Peters

Black Mirror: Bandersnatch

Bandersnatch was initially conceived as an interactive episode within the popular Black Mirror anthology series on Netflix. Instead, Netflix decided to release it as a standalone, spin-off film in December 2018. Set in 1984, it’s the story of programmer Stefan Butler (Fionn Whitehead) as he adapts a choose-your-own-adventure novel into a video game. Viewers get to make decisions for Butler’s actions, which then determine the next branch of the story they are shown. They can go back through Bandersnatch and opt for different decisions, in order to experience other versions of the story.

Bandersnatch was written by show creator Charlie Brooker (Black Mirror, Cunk on Britain, Cunk on Shakespeare), directed by David Slade (American Gods, Hannibal, The Twilight Saga: Eclipse), and edited by Tony Kearns (The Lodgers, Cardboard Gangsters, Moon Dogs). I recently had a chance to interview Kearns about the experience of working on such a unique production.

__________________________________________________

[OP] Please tell me a little about your editing background leading up to cutting Bandersnatch.

[TK] I started out almost 30 years ago editing music videos in London. I did that full-time for about 15 years working for record companies and directors. At the tail end of that a lot of the directors I was working with moved into doing commercials, so I started editing commercials more and more in Dublin and London. In Dublin I started working on long form, feature film projects and cut about 10 projects that were UK or European co-productions with the Irish Film Board.

In 2017 I got a call from Black Mirror to edit the Metalhead episode, which was directed by David Slade. He was someone I had worked with on music videos and commercials 15 years previously, before he had moved to the United States. That was a nice circularity. We were working together again, but on a completely different type of project – drama, on a really cool series, like Black Mirror. It went very well, so David and I were asked to get involved with Bandersnatch, which we jumped at, because it was such an amazing, different kind of project. It was unlike anything either of us – or anyone else, for that matter – had ever done at that level of complexity.

[OP] Other attempts at interactive storytelling – with the exception of the video game genre – have been hit-or-miss. What were your initial thoughts when you read the script for the first time?

[TK] I really enjoyed the script. It was written like a conventional script, but with software called Twine, so you could click on it and go down different paths. Initially I was overwhelmed at the complexity of the story and the structure. It wasn’t that I was like a deer in the headlights, but it gave me a sense of scale of the project and [writer/show runner] Charlie Brooker’s ambition to take the interactive story to so many layers.

On my own time I broke down the script and created spreadsheets for each of the eight sections in the script and wrote descriptions of every possible permutation, just to give me a sense of what was involved and to get it in my head what was going on. There are so many different narrative paths – it was helpful to have that in my brain. When we started editing, that would also help me to keep a clear eye at any point.

[OP] How long of a schedule did you have to post Bandersnatch?

[TK] 17 weeks was the official edit time, which isn’t much longer than on a low-budget feature. When I mentioned that to people, they felt that was a really short amount of time; but, we did a couple of weekends, we were really efficient, and we knew what we were doing.

[OP] Were you under any running length constraints, in the same way that a TV show or a feature film editor often wrestles with on a conventional linear program?

[TK] Not at all. This is the difference – linear doesn’t exist. The length depends on the choices that are made. The only direction was for it not to be a sprawling 15-hour epic – that there would be some sort of ball park time. We weren’t constrained, just that each segment had to feel right – tight, but not rushed.

[OP] With that in mind, what sort of process did you go through to get it to feel right?

[TK] Part of each edit review was to make it as tight or as lean as it needed to be. Netflix developed their own software, called Branch Manager, which allowed people to review the cut interactively by selecting the choice points. My amazing assistant editor, John Weeks, is also a coder, so he acquired an extra job, which was to take the exports and do the coding in order to have everything work in Branch Manager. He’s a very robust person, but I think we almost broke him (laughs), because there were up to 100 Branch Manager versions by the end. The coding was hanging on by a thread. He was a bit like Scotty in Star Trek, “The engines can’t hold it anymore, Captain!”

By using Branch Manager, people could choose a path and view it and give notes. So I would take the notes, make the changes, and it would be re-exported. Some segments might have five cuts while others would be up to 13 or 14. Some scenes were very straightforward, but others were more difficult to repurpose.

Originally there were more segments in the script, but after the first viewings it was felt that there were too many in there. It was on the borderline of being off-putting for viewers. So we combined a few, but I made sure to keep track of that so it was in the system. There was a lot of reviewing, making notes, updating spreadsheets, and then making sure John had the right version for the next Branch Manager creation. It was quite an involved process.

[OP] How were you able to keep all of this straight? Did you use the common technique of scene cards on the wall or something different?

[TK] If you looked at flowcharts your head would explode, because it would be like looking at the wiring diagram of an old-fashioned telephone exchange. There wouldn’t have been enough room on the wall. For us, it would just be on paper – notebooks and spreadsheets. It was more in our heads – our own sense of what was happening – that made it less confusing. If you had the whole thing as a picture, you just wouldn’t know where to look.

[OP] In a conventional production an editor always has to be mindful that when something is removed, it may have ramifications to the story later on. In this case, I would imagine that those revisions affected the story in either direction. How were you able to deal with that?

[TK] I have been asked how we knew that each path would have a sense of a narrative arc. We couldn’t think of it as one total narrative arc. That’s impossible. You’d have to be a genius to know that it’s all going to work. We felt the performances were great and the story was strong, but it doesn’t have a conventional flow. There are choice points, which act as a propellant into the next part of the film, creating an experience unlike the straight story arc of conventional films or episodes. Although there wasn’t a traditional arc, it still had to feel like a well-told story – one where you would have empathy and a sense of engagement, so that it wasn’t a gimmick.

[OP] How did the crew and actors manage to keep the story straight in their minds as scenes were filmed?

[TK] As with any production, the first few days are finding out what you’ve let yourself in for. This was a steep learning curve in that respect. Only three weeks of the seven-week shoot was in the same studio complex where I was working, so I wasn’t present. But there was a sense that they needed to make it easier for the actors and the crew. The script supervisor, Marilyn Kirby, was amazing. She was the oracle for the whole shoot. She kept the whole show on the road, even when it was quite complicated. The actors got into the swing of it quickly, because I had no issues with the rushes. They were fantastic.

[OP] What camera formats were used and what is your preparation process for this footage prior to editing?

[TK] It’s the most variety of camera formats I’ve ever worked on. ARRI Alexa 65 and RED, but also 1980s Ikegami TV cameras, Super 8mm, 35mm, 16mm, and VHS. Plus, all of the print stills were shot on black-and-white film. The data lab handled the huge job to keep this all organized and provide us with the rushes. So, when I got them, they were ready to go. The look was obviously different between the sources, but otherwise it was the same as a regular film. Each morning there was a set of ProRes Proxy rushes ready for us. John synced and organized them and handed them over. And then I started cutting. Considering all the prep the DIT and the data lab had to go through, I think I was in a privileged position!

[OP] What is your method when first starting to edit a scene?

[TK] I watch all of the rushes and can quickly see which take might be the bedrock framing for a scene – which is best for a given line. At that point I don’t just slap things together on a timeline. I try to get a first assembly to be as good as possible, because it just helps anyone who sees it. If you show a director or a show runner a sloppy cut, they’ll get anxious and I don’t want that to happen. I don’t want to give the wrong impression.

When I start a scene, I usually put the wide down end-to-end, so I know I have the whole scene. Then I’ll play it and see what I have in the different framings for each line – and then the next line and the next and so on. Finally, I go back and take out angles where I think I may be repeating a shot too much, extend others, and so on. It’s a build-it-up process in an effort to get to a semi-fine cut as quickly as possible.

[OP] Were you able to work with circle takes and director’s notes on Bandersnatch?

[TK] I did get circle takes, but no director’s notes. David and I have an intuitive understanding, which I hope to fulfill each time – that when I watch the footage he shoots, I’ll get what he’s looking for in the scene. With circle takes, I have to find out very quickly whether the script supervisor is any good or not. Marilyn is brilliant, so whenever she circles a take, I know that take is the one. David is a very efficient director, so there weren’t a massive number of takes – usually two or three takes for each set-up. Everything was shot with two cameras, so I had plenty of coverage. I understand what David is looking for and he trusts me to get close to that.

[OP] With all of the various formats, what sort of shooting ratio did you encounter? Plus, you had mentioned two-camera scenes. What is your approach to that in your edit application?

[TK] I believe the various story paths totaled about four-and-a-half hours of finished material. There was a 3:1 shooting ratio, times two cameras – so maybe 6:1 or even 9:1. I never really got a final total of what was shot, but it wasn’t as big as you’d expect. 

When I have two-camera coverage I deal with it as two individual cameras. I can just type in the same timecode for the other matching angle. I just get more confused with what’s there when I use multi-cam. I prefer to think of it as that’s the clip from the clip. I hope I’m not displaying an anti-technology thing, but I’m used to it this way from doing music videos. I used to use group clips in Avid and found that I could think about each camera angle more clearly by dealing with them separately.

[OP] I understand that you edited Bandersnatch on Adobe Premiere Pro. Is that your preferred editing software?

[TK] I’ve used Premiere Pro on two feature films, which I cut in Dublin, and a number of shorts and TV commercials. If I am working where I can set up my own cutting room, then I’m working with Premiere. I use both Avid and Adobe, but I find I’m faster on Premiere Pro than on Media Composer. The tools are tuned to help me work faster.

The big thing on this job was that you can have multiple sequences open at the same time in Premiere. That was going to be the crunch thing for me. I didn’t know about Branch Manager when I specified Premiere Pro, so I figured that would be the way we would need to review the segments – simply click on a sequence tab and play it as a rudimentary way to review a story path. The company that supplied the gear wasn’t as familiar with Premiere [as they were with Avid], so there were some issues, but it was definitely the right choice.

[OP] Media Composer’s strength is in multi-editor workflows. How did you handle edit collaboration in Premiere Pro?

[TK] We used Adobe’s shared projects feature, which worked, but wasn’t as efficient as working with Avid in that version of Premiere. It also wasn’t ideal that we were working from Avid Nexis as the shared storage platform. In the last couple of months I’ve been in contact with the people at Adobe and I believe they are sorting out some of the issues we were having in order to make it more efficient. I’m keen for that to happen.

In the UK and London in particular, the big player is Avid and that’s what people know, so anything different, like Premiere Pro, is seen with a degree of suspicion. When someone like me comes in and requests something different, I guess I’m viewed as a bit of a pain in the ass. But, there shouldn’t just be one behemoth. If you had worked on the old Final Cut Pro, then Premiere Pro is a natural fit – only more advanced and supported by a company that didn’t want to make smart phones and tablets.

[OP] Since Adobe Creative Cloud offers a suite of compatible software tools, did you tap into After Effects or other tools for your edit?

[TK] That was another real advantage – the interaction with the graphics user interface and with After Effects. When we mocked up the first choice points, it was so easy to create, import, and adjust. That was a huge advantage. Our VFX editor was able to build temp VFX in After Effects and we could integrate that really easily. He wasn’t just using an edit system’s effects tool, but actual VFX software, which seamlessly integrated with Premiere. Although these weren’t final effects at full 4K resolution, he was able to do some very complex things, so that everyone could go, “Yes, that’s it.”

[OP] In closing, what take-away would you offer an editor interested in tackling an interactive story as compared to a conventional linear film?

[TK] I learned to love spreadsheets (laugh). I realized I had to be really, really organized. When I saw the script I knew I had to go through it with a fine-tooth comb and get a sense of it. I also realized you had to unlearn some things you knew about conventional episodic TV. You can’t think of some things in the same way. A practical thing for the team is that you have to have someone who knows coding, if you are using a similar tool to Branch Manager. It’s the only way you will be able to see it properly.

It’s a different kind of storytelling pressure that you have to deal with, mostly because you have to trust your instincts even more that it will work as a coherent story across all the narrative paths. You also have to be prepared to unlearn some of the normal methods you might use. One example is that you have to cut the opening of different segments differently to work with the last shot of the previous choice point. You can’t just go for one option; you have to think more carefully about what the options are. The thing is not to walk in thinking it’s going to be the same as any other production, because it ain’t.

For more on Bandersnatch, check out these links: postPerspective, an Art of the Guillotine interview with Tony Kearns, and a scene analysis at This Guy Edits.

Images courtesy of Netflix and Tony Kearns.

©2019 Oliver Peters