A First Look at Postlab Cloud

Apple developed Final Cut Pro X around single-editor workflows. As such, professional editing teams who wanted to use this tool for collaborative editing have been challenged to develop their own solutions. One approach was Postlab, which was developed in-house at Dutch broadcaster Evangelische Omroep (EO). In order to expand the product into a commercial application, lead developer Jasper Siegers decided to move it under the Hedge umbrella. This required the app to be rebuilt with new code before it could be offered to the FCPX market. That time has come and Postlab is now available as Postlab Cloud.

As the name implies, Postlab Cloud hosts your FCPX libraries “in the cloud,” i.e. on Postlab’s servers. Some production companies or broadcasters are reluctant to have their editing computers connected online, but it’s important to note that Postlab hosts only libraries – no media or cache files. This keeps transfer times fast and file sizes light. Cache and media files stay local, whether on your machine or on connected shared storage. Postlab sets up accounts based on site licenses and numbers of users. Each user is assigned a log-in based on an e-mail address and a password. This means that a production hosted by Postlab can be accessed by authorized users anywhere in the world, provided there’s a viable internet connection.

The owner of the account can set up productions and organize them within folders. Each production is a collection or bundle of one or more Final Cut Pro X libraries. If you have ever worked with Final Cut Server in the FCP7 days, then the Postlab workflow is very similar. Once a production has been created, an editor can log in, download the library (a check-out step), edit in it, and then upload the changed version (a check-in step). As part of this upload, the Postlab interface prompts you to add comments describing the work you’ve done. Only one editor at a time can download a library and have write access; however, other users can still download it with read-only access. If you have two editors ping-ponging work on the same library file, then one has to upload it (check in) before the other editor can download it (check out) for their edits.

Getting started

I decided to test Postlab Cloud in two scenarios: a) multiple workstations connected to a shared storage network, and b) two disconnected editors collaborating over a long distance. To start, once an account has been established, any editor using Postlab Cloud must install the small Postlab application. Since the app controls some of Final Cut’s functions, you will be prompted to enable GUI Scripting in your privacy preferences. In order for Postlab to work properly, media and cache files need to be outside of the library bundle. When you first download a library, you may be prompted to change your settings. In a networked environment with media on shared storage, the path to the media should be the same on each workstation. This means that when Editor A finishes and checks in the production and Editor B then checks it back out, you generally will not need to relink the media files on Editor B’s system, so the collaboration can proceed fluidly.

Once a production has been downloaded, the library file exists as a temporary file on the local machine and not the network. This means that Postlab can still work in tandem with storage solutions that don’t normally perform well with FCPX libraries. In addition to this temporary library file, the Final Cut backup library is also stored in the location you have designated. If you are working in a networked, collaborative environment, then the advantage Postlab offers is version tracking and the ability for multiple users to open a library file (only one with write access).

Long distance

The second scenario is working with editors outside of your facility. The first step is to get the media to the outside editor. You could certainly send a drive, but that isn’t efficient in either time or cost, especially across continents. If you only need creative editing and not finishing services, then low-res proxy files are fine. So I converted my 4K UHD ProRes HQ files to 960 x 540 H.264 (3Mbps) files and used Frame.io to transfer them over the internet. The key to proper relinking when you are done is to set audio to pass-through when converting these files. This was a double-system sound shoot, so I uploaded both the H.264 video files and the sound recordist’s WAV files to Frame and then downloaded them again at the other end (my home). Now I had media in both locations. The process would be the same even if it were two editors in two different countries.
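
If you’d rather script this conversion than run it through an encoder app, here’s a minimal sketch using ffmpeg driven from Python. The folder paths are hypothetical; the settings simply mirror the recipe above – 960 x 540 H.264 at 3Mbps, with the audio passed through untouched so relinking stays painless:

```python
# Minimal sketch: batch-convert 4K ProRes masters to 960x540 H.264 proxies,
# passing the audio through untouched so relinking works later.
# Paths are hypothetical; assumes ffmpeg is installed.
import subprocess
from pathlib import Path

SOURCE = Path("/Volumes/Media/4K_ProRes")   # hypothetical source folder
DEST = Path("/Volumes/Media/Proxies")
DEST.mkdir(parents=True, exist_ok=True)

for clip in SOURCE.glob("*.mov"):
    out = DEST / clip.name  # keep the same clip name for painless relinking
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=960:540",            # downscale to 960x540
        "-c:v", "libx264", "-b:v", "3M",   # H.264 at roughly 3Mbps
        "-c:a", "copy",                    # audio pass-through, per the text above
        str(out),
    ], check=True)
```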

The first Postlab step is to create and upload this FCPX library. Once that has been established, any authorized user with a Postlab log-in can access the production. I went back and forth on this production between my home and the facility, also using different user log-ins – thus simulating a team of remote editors. Each time I did this, version changes were tracked by Postlab. If I were working with multiple editors, I would have been able to see what tasks each had performed.

It’s important to note that when you collaborate this way, each editor should be using the same effects, LUTs, and Motion templates; otherwise some items will appear offline. Since the path to the media was different at home versus at the facility, media files would appear offline each time I checked the production in at one location and out at the other. A simple relink fixed this, but it’s something to be aware of. Once the edit was complete, I could relink to the high-res camera files and “finish” the project back at the office.

Wrap-up

When you upload a library back to Postlab, that open FCPX library is closed within Final Cut Pro X on your system, because you have checked it back in. Once you log out of Postlab, the temporary library file is moved to the trash. If you need a local version of the library, then export it from the Postlab app.

Once you get the hang of it, collaboration is simple using Postlab Cloud. Library files stay light, without the sort of corruption caused by syncing services like Dropbox. My test project included synchronized multicam clips and multi-channel audio. Each time during this exchange, clips, projects, and edits showed up as expected when moving between the various users. Whether Apple will ever tackle collaboration within Final Cut Pro X is unknown. But why wait? If you need that today, then Postlab Cloud offers a solid answer.

The relaunched Postlab Cloud includes three plans, priced per user/per year: Postlab, Postlab Pro, and Postlab Server. The first tier covers only library version tracking and sharing. Pro allows many more libraries to be shared and adds more features. Server is a dedicated Postlab Cloud server for larger teams or those that require IT-specific features like Active Directory. Finally, Hedge/Postlab plans to ship a local version of Postlab – designed for use within local networks – soon after launch.

Postlab has now expanded to include Premiere Pro users.

Check out the Postlab tutorials for more information.

The article was originally written for FCP.co.

©2020 Oliver Peters

ADA Compliance

The Americans with Disabilities Act (ADA) has enriched the lives of many in the disabled community since its introduction in 1990. It affects all of our lives, from wheelchair-friendly ramps on street corners and business entrances to the various accessibility modes in our computers and smart devices. While many editors don’t have to deal directly with the impact of the ADA on media, the law does affect broadcasters and streaming platforms. If you deliver commercials and programs, then your production will be affected in one way or another. Typically the producer is not directly subject to compliance, but the platform is. This means someone has to provide the elements that complete compliance as part of any distribution arrangement, whether it is the producer or the outlet itself.

Two components are involved in meeting ADA compliance: closed captions and described audio (aka audio descriptions). Captions come in two flavors – open and closed. Open captions or subtitles consist of text “burned” into the image. They are customarily used when a foreign language is spoken in an otherwise English program (or the equivalent in non-English-speaking countries). Closed captions are carried in a data stream that can be turned on and off by the viewer, device, or platform and are intended to make the dialogue accessible to the hearing-impaired. Closed captions are often also turned on in noisy environments, like a TV playing in a gym or a bar.

Audio descriptions are intended to aid the visually-impaired. This is a version of the audio mix with an additional voice-over element. An announcer describes visual information that is not readily obvious from the audio of the program itself. This voice-over fills in the gaps, such as “man climbs to the top of a large hill” or “logos appear on screen.”

Closed captions

Historically, post houses and producers have opted to outsource caption creation to companies that specialize in those services. However, modern NLEs enable any editor to handle captions directly, and the increasing enforcement of ADA compliance is adding to the deliverable requirements for many editors. With this increased demand, using a specialist may become cost-prohibitive; therefore, built-in tools are all the more attractive.

There are numerous closed caption standards and various captioning file formats. The most common are .scc (Scenarist), .srt (SubRip), and .vtt (WebVTT, preferred for the web). Captions can be supplied as “embedded” (secondary data within the master file) or as a separate “sidecar” file, which is intended to play in sync with the video file. Not all of these are equal. For example, .scc files (embedded or as sidecar files) support text formatting and positioning, while .srt and .vtt do not. If you have a lower-third name graphic come on screen, you want to move any caption from its usual lower-third, safe-title position to the top of the screen while that name graphic is visible, so that both remain legible. The .scc format supports that, but the other two don’t. The visual appearance of the caption text is a function of the playback hardware or software, so the same captions look different in QuickTime Player versus Switch or VLC. In addition, SubRip (.srt) captions all appear at the bottom, even if you repositioned them to the top, while .vtt captions appear at the top of the screen.

You may prefer to first create a transcription of the dialogue using an outside service, rather than typing in the captions from scratch. There are several online resources that automate speech-to-text, including SpeedScriber, Simon Says, Transcriptive, and others. Since AI-based transcription is only as good as the intelligibility of the audio and the dialects of the speakers, they all require further text editing/correction through an online tool before they are ready to use.

One service that I’ve used with good results is REV.com, which uses human transcribers for greater accuracy and offers an online text editing tool. The transcription can be downloaded in various formats, including simple text (.txt). Once you have a valid transcription, that file can be converted through a variety of software applications into .srt, .scc, or .vtt files. These in turn can be imported into your preferred NLE for timing, formatting, and positioning adjustments.
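
Because .srt and .vtt are both plain text, the difference between them is easy to see. Here’s a minimal sketch – my own illustration, not a replacement for a proper captioning tool: a SubRip file becomes WebVTT by adding a WEBVTT header and swapping the comma in each timestamp for a period.

```python
# Minimal sketch: convert a SubRip (.srt) caption file to WebVTT (.vtt).
# SRT timestamps use a comma (00:00:01,000); VTT uses a period (00:00:01.000)
# and the file must begin with a "WEBVTT" header line.
import re
import sys

def srt_to_vtt(srt_text: str) -> str:
    # Swap the comma decimal separator only inside timecode patterns.
    vtt = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + vtt

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print(srt_to_vtt(f.read()))
```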

Getting the right look

There are guidelines that captioning specialists follow, but some are merely customary and do not affect compliance. For example, mixed upper and lower case text is currently the norm, but you’ll still be OK if your text is all caps. There are also accepted norms for when English (or other) subtitles appear on screen, such as for someone speaking in a foreign language. In those cases, no additional closed caption text is used, since the subtitle already provides that information. However, a caption may appear at the top of the screen identifying that a foreign language is being spoken. Likewise, during sections with only music or ambient sounds, a caption may briefly identify it as such.

When creating captions, you have to understand that readability is key, so the text will not always run perfectly in sync with the dialogue. For instance, when two actors engage in rapid fire dialogue, each caption may stay on longer than the spoken line. You can adjust the timing against that scene so that they eventually catch up once the pace slows down. It’s good to watch a few captioned programs before starting from scratch – just to get a sense of what works and what doesn’t.

If you are creating captions for a program to run on a specific broadcast network or streaming service, then it’s a good idea to find out if they provide a style guide for captions.

Using your NLE to create closed captions

Avid Media Composer, Adobe Premiere Pro, DaVinci Resolve, and Apple Final Cut Pro X all support closed captions. I find FCPX to be the best of this group, because of its extensive editing control over captions and ease of use. This includes text formatting, but also display methods, like pop-on, paint-on, and roll-up effects. Import .scc files for maximum control or extract captions from an existing master, if your media already has embedded caption data. The other three NLEs place the captions onto a single data track (like a video track) within which captions can be edited. Final Cut Pro X places them as a series of connected clips, like any other video clip or graphic. If you perform additional editing, the FCPX magnetic timeline takes care of keeping the captions in sync with the associated dialogue.

Final Cut’s big plus for me is that validation errors are flagged in red. Validation errors occur when caption clips overlap, may be too short for the display method (like a paint-on), are too close to the start of the file, or other errors. It’s easy to find and fix these before exporting the master file.

Deliverables

NLEs support exporting a master file with embedded captions, “burning” the captions into the video as subtitles, or exporting them as a separate sidecar file. Specific format support for embedded captions varies among applications. For example, Premiere Pro – as well as Adobe Media Encoder – will only embed captioning data when you export your sequence or encode a file as a QuickTime-wrapped master file. (I’m running macOS, so there may be other options with Windows.)

On the other hand, Apple Compressor and Final Cut Pro X can encode or export files with embedded captions in formats such as MPEG-2 TS, MPEG-2 PS, or MP4. It would be nice if all these NLEs supported the same range of formats, but they don’t. If your goal is a sidecar caption file instead of embedded data, then the process is far simpler and more reliable.

Audio descriptions

Compared to closed captions, providing audio description files is relatively easy. These can either be separate audio files – used as sidecar files for secondary audio – or additional tracks on the delivery master. Sometimes it’s a completely separate video file with only this version of the mix. Advanced platforms like Netflix may also require an IMF (Interoperable Master Format) package, which would include an audio description track as part of that package. When audio sidecar files are requested for the web or certain playback platforms, like hotel TV systems, the common deliverable formats are .mp3 or .m4a. The key is that the audio track should be able to run in sync with the rest of the program.

Producing an audio description file doesn’t require any new skills. A voice-over announcer describes any action that occurs on screen but wouldn’t otherwise make sense if you were only listening to the audio. Think of it like a radio play or podcast version of your TV program. This can be as simple as fitting additional VO into the gaps between actor/host/speaker dialogue. If you have access to the original files (such as a Pro Tools session) or dialogue/music/effects stems, then you have some latitude to adjust audio elements in order to fit in the additional voice-over lines. For example, off-camera dialogue may sometimes be moved or edited in order to make more space for the VO descriptions, while on-camera/sync dialogue is left untouched. In addition, some of the other audio may be muted or ducked to make space for even longer descriptions.

Some of the same captioning service providers also offer audio description services, using their pool of announcers. Yet there’s nothing about the process that any producer or editor couldn’t handle themselves. Scripting the extra lines, hiring and directing talent, and producing the final mix only require a bit more time added to the schedule, yet permit the most creative control.

ADA compliance has been around since 1990, but hasn’t been widely enforced outside of broadcast. That’s changing, and with the new NLE tools there are no more excuses. It’s become easier than ever for any editor or producer to provide the proper elements to reach every potential viewer.

For additional information, consult the FCC guidelines on closed captions.

The article was originally written for Pro Video Coalition.

©2020 Oliver Peters

Video Technology 2020 – Shared Storage

Shared storage used to be the domain of “heavy iron” facilities, with Avid, Facilis, and earlier Apple Xserve systems providing the horsepower. Thanks to advances in networking and Ethernet technology, shared storage is now accessible to any user. Whether built-in or via adapters, modern computers can tap into 1Gbps, 10Gbps, and even higher networking speeds. Most computers can natively access Gigabit Ethernet networks (1Gbps) – adequate for SD and HD workflows. Computers designed for the pro video market increasingly sport built-in 10GbE ports, enabling comfortable collaboration with 4K media and up. Some of today’s most popular shared storage vendors include QNAP, Synology, and LumaForge.

This technology will become more prolific in 2020, with systems easier to connect and administer, making shared storage as plug-and-play as any local drive. Network Attached Storage (NAS) systems can service a single workstation or multiple users. In fact, companies like QNAP even offer consumer versions of these products designed to operate as home media servers. Even LumaForge sells a version of its popular Jellyfish through the online Apple Store. A simple online connection guide will get you up and running, no IT department required. This is ideal for the individual editor or small post shop.

Expect 2020 to bring higher connection speeds, such as 40GbE, and even more widespread NAS proliferation. It’s not just a matter of growth. These vendors are also interested in extending the functionality of their products beyond being a simple bucket for media. NAS systems will become full-featured media hubs. For example, if you are an Avid user, you are familiar with their MediaCentral concept. In essence, this means the shared storage solution is a platform for various other applications, including the editing software. There are additional media applications that include management apps for user permission control, media queries, and more. Like Avid, the other vendors are exploring similar extensibility through third-party apps, such as Axle Video, Kyno, Hedge, Frame.io, and others. As such, a shared network becomes a whole that is greater than the sum of its parts.

Along with increased functionality, expect changes in the hardware, too. Modern NAS hardware is largely based on RAID arrays of spinning mechanical drives. As solid state (SSD) storage devices become more affordable, many NAS vendors will offer products featuring RAID arrays configured with SSDs or even NVMe devices – or a mixture of the two, with the SSD-based units used for short-term projects or cache files. Eventually the cost will come down enough that large storage volumes can be cost-effectively populated with only SSDs. Don’t expect to be purchasing 100TB of SSD storage at a reasonable price in 2020; however, that is the direction in which we are headed. At least in this coming year, mechanical drives will still rule. Nevertheless, expect some percentage of your storage inventory to soon be SSD-based.

Click here for more on shared storage solutions.

Originally written for Creative Planet Network.

©2020 Oliver Peters

Video Technology 2020 – The Cloud

The “cloud” is merely a collection of physical data centers in multiple locations around the world – not much different from a small storage center you might have, though they employ more advanced systems for power, redundancy, and security than you do. When you work with one of the companies marketing cloud-based editing or a review-and-approval service, like Frame.io or Wipster, they provide the user-facing interface, but they are actually renting storage space from one of the big three cloud providers – Google, Amazon, or Microsoft.

There are three reasons that I’m skeptical about ubiquitous, cloud-based editing (with media at native resolutions) in the short term: upload speeds, cost, and security.

Speed

5G (fifth generation wireless) is the technology predicted to offer adequate speeds and low latency for native 4K (and higher) media. While 5G will be a great advancement for many things, it’s a short-range signal requiring more transmission sites than current wireless technology. Full coverage in most metro areas, let alone widespread geographical coverage worldwide, will take many years to deploy. Other than potential camera-to-cloud uploads of proxy media in the field, 5G won’t soon be the killer solution. Current technology still dictates that if you want the fastest possible upload speeds for large amounts of data, then you have to tap as close as possible to the internet’s backbone.

Cost

Cloud storage is cheap, but extensive uploads and downloads aren’t. Unfortunately, modern video resolutions generate huge amounts of data on every shoot. Uploading native 4K media for a week-long production is considerably more expensive than FedEx and overnight charges to ship drives. What about long-term storage? Let’s say that all of your native media is in the cloud and you pay according to a monthly or annual subscription plan. But what if you want to stop? That media will have to be downloaded and stored locally, which will incur data rate charges, as well as your time to download everything.
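
To put rough numbers on that last point (an illustration based on an assumed egress rate of about $0.09 per GB, typical of the big providers around this time, not a quote from any of them): pulling a 20TB archive back out of the cloud would run roughly 20,000GB x $0.09 = $1,800 in download charges alone – and that’s before you count the days of transfer time on a typical office connection.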

Security

Think these sites are unequivocally secure? Look at any data hack at a major company. Security is such a concern in our business that most major movie studios won’t let their editors connect their computers to the internet. Many make those editors check their cell phones at the door. No matter how secure, it’s going to be a hard sell, except for limited slices of the production, such as cloud-based VFX rendering.

I do believe 2020 will be a year in which many will take advantage of some form of long-distance, cloud-based edit services using low-res proxy media. Increasingly, some services will be used to move dailies and deliverables around the globe via the cloud. But that’s a big difference from cloud-based editing becoming the norm. One edit scenario many will experiment with is to store the edit project files in the cloud, but with the media mirrored locally at each edit site. This way only the lightweight files used for edit collaboration need to move over the internet. Think of this as Google Docs for editing. Adobe already offers a version of this, but I suspect you’ll see others, including solutions for Final Cut Pro X. So while true cloud-based editing is not a near-term solution, bits and pieces will become increasingly commonplace.

Originally written for Creative Planet Network.

©2020 Oliver Peters

Every NLE is a Database

Apple’s Final Cut Pro X has spawned many tribal arguments since its launch eight years ago. There have been plenty of debates about the pros and cons of its innovative design and editing model. One claim that I’ve heard a number of times is that FCPX is a relational database, while traditional editing applications are more like an Excel spreadsheet. I can see how the presentation of a bin in the list view format might convey that impression, but that doesn’t make it accurate. Spreadsheets are a grid of cells driven by mathematical formulae, regardless of whether the info is text or numbers. All nonlinear editing applications (NLEs) use a relational database to track media, although the type and format of this database differ among brands. In all cases, these function altogether differently from a spreadsheet.

It started with film

When all editing was done on film, editors cut work print, a positive copy printed from the camera negative. Edits made on the work print were eventually duplicated on the pristine negative by a negative cutter, based on a cut list. Determining where to cut and join the film segments was based on a list of edit points corresponding to the source rolls of the film, plus a foot+frame count for each edit point. The work print, which the editors could physically cut and splice as needed, was effectively an abstraction of – and stand-in for – the negative.

In order to enable the process, assistant editors (or in some cases, the editor) created a handwritten log, known as a codebook. This started with the dailies and included all the pertinent information, such as source roll, shoot days/dates, scenes/takes, director’s notes, editor’s notes, and so on. The codebook was a physical database that allowed an editor to know what the options were and where to find them.

During the videotape-editing era prior to NLEs, any sort of database for tracking source information was still manual. Only the cut list portion, known as the edit decision list, could be generated by the edit computer, based on the timecode values recorded on the tape. Timecode became the electronic equivalent of the foot+frame count of physical film.

Fast forward to the modern era with file-based camera acquisition and ubiquitous, inexpensive editing software. The file recorded by the camera is a container of sorts that holds essence (audio and video) and metadata (information about the essence). Some cameras generate a lot of metadata and others don’t. One example of this type of metadata that we all encounter is the information embedded into digital still photos, which can include location, lens data, and a ton more.

When clips are ingested/imported into your NLE – whether into a project, bin, folder, or an event – the NLE links to the essence of the media clips on the hard drive or camera card and brings in whatever clip metadata is understood by that application. In addition, the user can add and merge a lot more metadata derived from other sources, like the sound recorder, script supervisor notes, electronic script, and manually-added data.

The clip that you see in the bin/event/folder is an abstraction for the actual audio and video media, just like work print was for film editors. The bin/folder/event data entries are like the film editor’s codebook and are tracked in the internal database used by that application to cross-reference the clip with the actual stored media. Since a clip in the app’s browser is simply an abstraction, it can appear in multiple places at the same time – in various bins and sequences. The internal database makes sure that each instance of the clip references the same piece of media, accurate down to the video frame or audio sample.

It doesn’t matter how the bin looks

The spreadsheet comparison is based on how bins have appeared in most NLEs, including Final Cut Pro “legacy,” Avid Media Composer, and others. Unfortunately, that opinion is usually based on a narrow exposure to other NLEs. As I said, at the core, every NLE is a relational database. And so, there are other things that can be tracked and other ways the data can be displayed.

For instance, older Quantel edit systems displayed source information based on what we would consider a smart search view today. The entirety of the source material was not displayed in front of the editor, since it was a single-screen layout. Entering data into a search field would sift through and present clips matching the requested data.

Avid Media Composer systems also track media through Script Integration (sometimes incorrectly referred to as ScriptSync, which is a separate Avid option). This is a graphical bin layout with the script text displayed on screen and clips linked to the coverage of each scene. Media Composer and now Premiere Pro both permit a freeform clip view for a bin, in which the editor can freely rearrange the position of the clip thumbnails within the bin window. This user-defined visual juxtaposition of clips conveys important information to the editor.

All NLEs have multiple ways to present the data and aren’t limited to a grid-style list view that resembles a spreadsheet or a grid of clip thumbnails. Enabling these alternate views takes a lot more than simply cross-referencing your bin and timelines against a set of edit points. That’s where databases come in and why every NLE is built around one.

How can you be in two places at once when you’re not anywhere at all?

My apologies to Firesign Theatre. A huge aspect of the Final Cut Pro X edit workflow is the use of keyword collections. Thanks to them, you aren’t limited to working in just a single bin. While this is a selling point for FCPX, it is also well within the capabilities of most NLEs.

Organizing your event (bin) media in FCPX can start by assigning keywords to each clip. Each new keyword used creates a keyword collection – sort of a “smart sub-bin.” As you assign one or more keywords to a clip, FCPX automatically sorts the clip into those corresponding keyword collections. For example, let’s say you have a series of wide and close-up shots featuring both male and female actors. Clip 1 might be sorted into WIDE and MAN; Clip 2 into WIDE and WOMAN; Clip 3 into WOMAN and CLOSE-UP. So then the keyword collection for WIDE displays Clip 1 and Clip 2; MAN displays Clip 1; WOMAN displays Clip 2 and Clip 3; CLOSE-UP displays Clip 3.
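
To make the database point concrete, here’s a toy model of that exact example in Python – my own illustration, not a peek at FCPX’s internals. Each clip exists once as a single record, and each keyword collection is simply a stored query over the same records:

```python
# Toy model of keyword collections: each clip exists once, and each
# "collection" is simply a query over the same records - the clip is an
# abstraction that can appear in many collections at the same time.
# Illustration only; not how FCPX is actually implemented internally.
clips = {
    "Clip 1": {"WIDE", "MAN"},
    "Clip 2": {"WIDE", "WOMAN"},
    "Clip 3": {"WOMAN", "CLOSE-UP"},
}

def keyword_collection(keyword: str) -> list[str]:
    """Return every clip tagged with the given keyword."""
    return sorted(name for name, keywords in clips.items() if keyword in keywords)

for kw in ("WIDE", "MAN", "WOMAN", "CLOSE-UP"):
    print(kw, "->", keyword_collection(kw))
# WIDE -> ['Clip 1', 'Clip 2']   MAN -> ['Clip 1']
# WOMAN -> ['Clip 2', 'Clip 3']  CLOSE-UP -> ['Clip 3']
```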

Once this initial step is completed, the editor can view source clips in a more focused manner. Instead of wading through 100 clips in the event (bin) each time, the editor may only have to deal with 10 clips in the CLOSE-UP keyword collection, or in any other collection. The beauty of FCPX’s interface design is the speed and fluidity with which this can be accomplished. This feature is one of the hallmarks of the application and no other NLE does it nearly as elegantly. In fact, FCPX tackles the challenge of narrowing down the browser options through three methods – ratings, keyword collections, and smart collections (described in this linked tutorial by Simon Ubsdell).

As elegantly as Final Cut tackles this task, that doesn’t mean that other NLEs can’t function in a similar manner. Within Premiere Pro, those exact same keywords can be assigned to the clips. Then simply create a set of search bins using those same keywords as search criteria. The result is the exact same type of distribution of clips into collections, where multiple clips can appear in multiple bins at the same time. Likewise, the editor doesn’t need to go through the full set of clips in a bin, but can concentrate on the small handful in any given search bin. Media Composer also offers search functions, as well as custom sift routines, which enable you to display only the clips matching specific column details, like a custom keyword.

Most NLEs can only store one set of in/out edit marks on a clip within a bin at any given time. On the other hand, Final Cut Pro X offers range-based selection: clips can retain multiple in/out selections at once. Nevertheless, other NLEs aren’t far behind here either. The obvious solution that most editors use when this is needed is to create a subclip, which can be a duplicate of the entire clip or a portion from within a single clip. Need to pull multiple sections of the clip? Simply create multiple subclips. In effect, these are the same as range-based selections in Final Cut Pro X. Admittedly, the FCPX method is more fluid and straightforward: range-based selections are virtual subclips that are dynamically created by the editor; but unlike subclips, they can’t be moved separately to other events (bins). Two ways to tackle a very similar need.

The bottom line is that under the hood, all NLEs are still very much the same. Let me emphasize that I’m not arguing the superiority, speed, or elegance of one approach or tool over another. Every company has their own set of unique features that appeal to different types of editors. They are simply different methods to place information at your fingertips, get roadblocks out of the way, and thus to make editing more creative and enjoyable.

©2019 Oliver Peters

Shared Storage Solutions


I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and done basic management of a number of different shared storage solutions, going all the way back to Avid MediaShare SCSI. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS); but to the user, there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be simultaneously accessed by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be partitioned so that it shows up as several volumes with lower capacities per volume. Read/write permissions can be controlled in various ways. All users can have read/write access to everything, or access can be selectively assigned by the system administrator.

The basic building block of a NAS is the main chassis, which contains the storage, but also a small on-board computer – the “brain” of the system. This runs its own operating system, usually a variation of Linux (such as CentOS) or a Sun/ZFS-based OS. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility may not be perfect. For example, some file names using special characters that are valid in macOS may not be properly read through one of these network protocols. So be very structured with naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
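
If you want to enforce that convention before files land on the NAS, a small check script will do it. Here’s a minimal sketch of the rule set above – my own illustration, with a hypothetical volume path:

```python
# Sketch: flag file names that may misbehave over AFP/NFS/SMB.
# Rule set from above: allow only numbers, letters, spaces, dashes,
# and underscores, and never start or end a name with a space.
import re
from pathlib import Path

def is_network_safe(filename: str) -> bool:
    name = filename.rsplit(".", 1)[0]  # ignore the extension
    if name.startswith(" ") or name.endswith(" "):
        return False
    return re.fullmatch(r"[A-Za-z0-9 _-]+", name) is not None

# Hypothetical volume path - point this at your own NAS mount.
for f in Path("/Volumes/SAN/Project").rglob("*"):
    if f.is_file() and not is_network_safe(f.name):
        print("Rename before sharing:", f)
```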

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But, you cannot mix and match drive types or sizes within the chassis.

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives with a power supply. Normally these don’t have to be the same capacity as the original chassis, if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN configurations are RAID-protected in various ways. RAID protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without losing all of your media. However, that takes overhead, which means you must give up some of the total capacity to enable this data protection.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is going bad or fails outright, the unit will continue to operate, but with reduced performance. In some cases that may not even be noticed by the operator. When a drive goes bad, it can be replaced by a matching raw drive and the unit will rebuild the RAID data, redistributing it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can operate during this rebuilding period, I have found that in actual practice performance is so bad that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but it comes at the cost of available storage. Different manufacturers have different ways of handling RAID configurations, so there is no rule of thumb for what percentage you will lose with every NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
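
As a worked example using the figures above: LumaForge’s 480TB gross yields 316TB net, or about 66% of the raw capacity. Apply the fill guideline at its 85% midpoint and you get roughly 316TB x 0.85 ≈ 268TB of day-to-day working space – barely more than half of the gross capacity you paid for.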

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for runs longer than 100′ or if the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job, we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals – all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the two NAS systems.

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, it suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one system while editing proceeded on the others, slowed everyone down. Image sequence files really hurt overall system performance. And you could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie), the key difference between the two systems is architecture. Unlike our QNAP unit, which uses a network switch, we opted for enough on-board cards on the Jellyfish to enable a direct run to all nine workstations without a separate network switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of QNAP, though. It has been very robust and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to Jellyfish), as well as an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We configured Jellyfish with a tenth Ethernet direct port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to Jellyfish gives them the ability to work with media files without tying up editing workstations.

The acquisition of the Jellyfish system has proven itself over time. Direct head-to-head performance between Jellyfish and QNAP with a small project or a few media files is not that dramatically different. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media without the prerequisite of creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact performance of the other edit sessions. Image sequences are easier to deal with. I can render to and from Jellyfish when I work grading sessions on Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

For more information about storage, here’s an article I wrote for Pro Video Coalition.

©2019 Oliver Peters

Handling and Protecting Media

Once the industry entered the file-based era, we realized that dealing with and properly archiving audio and video files could make or break a production company. No more videotapes on the shelf to pull footage from. Unfortunately, many companies, producers, clients, and editors simply solved this with a hodgepodge of small, portable drives – Firewire, USB, Thunderbolt, whatever. That’s no longer practical. A typical 10-day, 4K shoot with a handful of formats can easily generate 8-10TB of original footage. That’s if the production is structured. Make that a two-to-three-week documentary or reality-style production and you’ll have closer to 20-30TB. Not exactly something you want to deal with in post using a bunch of orange LaCie drives!

The road to safeguarding your files

At the day job, we were able to invest in a LumaForge Jellyfish shared storage network (NAS). It’s 480TB, which sounds like a lot, but after RAID protection the available net capacity is 316TB. And you only want to use up to 80%-90% of that for the most efficient operation. While it still sounds like a lot of storage, it is a finite amount. This means that you need to develop a strategy for archiving older projects and the associated media, while still being able to easily find and restore them later for revisions.

Cloud storage remains a pipe dream at these quantities. LTO data tape back-up is also impractical, because of its linear read/write nature; it is only intended for deep-storage archiving. Facilities that have attempted to use LTO as a type of near-line storage – with frequent restores, updates, and subsequent re-archiving – have worn out their LTO tapes long before their rated life.

Efficient media handling starts when a project or production is first originated. In our case, every new project gets a folder on the Jellyfish, and inside that folder is a standard group of subfolders for the corresponding project files, graphics, exports, and source footage. We assign every project a job number for billing, and that number is part of the top-level folder name, as well as any project file name. This default template starting point is generated for each new production using the Post Haste application.
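
As a sketch of what such a template amounts to (the job number and subfolder names here are illustrative, not our exact Post Haste template):

```python
# Sketch: generate a standard project folder template, similar in spirit
# to what Post Haste does for us. The job number and subfolder names are
# illustrative examples, not our exact in-house template.
from pathlib import Path

def make_project(root: Path, job_number: str, client: str, title: str) -> Path:
    top = root / f"{job_number}_{client}_{title}"   # job number leads the folder name
    for sub in ("Project Files", "Graphics", "Exports", "Source Media"):
        (top / sub).mkdir(parents=True, exist_ok=True)
    return top

make_project(Path("/Volumes/Jellyfish/PROJECTS"), "1904", "ABC", "BrandSpot")
```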

The location crew

On location all media is copied daily (with verification using the Hedge application) to both master and back-up drives. Depending on the size of the crew, this is the responsibility of the DIT, assistant cameraman, or the director of photography. On large productions, the cost of these drives is built into the budget and they later end up being stored on the shelf for safe keeping. On smaller jobs (or some fast turnaround jobs) temporary, fast SSDs are used, which will later be reused on other projects.

Post starts here

The next step back at the shop is to copy all of this material from the location drives onto the Jellyfish into that project’s Source Media or Dailies subfolder. Once copied, I will proceed to clean up and reorganize all media into subfolders according to this hierarchy:

DATE / CAMERA / REEL

For example: 092819/A-CAMERA_ALEXA/A001

Or outside of the US, maybe: 28SEPT19/A-CAMERA_ALEXA/A001

If a camera file is buried several folders deep – due to the camera card structure or an error made by the crew member on location – I will move those files to the top level within the REEL subfolder without any other levels in between. Camera folders, like DCIM, CLIP, etc., are thus orphaned and are deleted from Jellyfish. Remember that I still have the original master drive from the location, which will sit on the shelf. If I ever need to get back to a file in its original container, I have that option.

I discussed relinking strategies in the previous post and that comes into play here. Files from semi-pro and non-pro cameras, like DSLRs, GoPros, iPhones, etc., will have a prefix added to the file name using the Better Rename application. The name is typically a short 8-10 character alphanumeric that indicates a job name reference, date, camera letter, and reel.

For instance, a file from the B-camera’s reel 7 for a production done for project ABC on September 28th would get the prefix “ABC0928B07_”. The camera-generated clip name would follow the underscore in that name. The point of doing this is to guarantee unique file names, especially when multiple cameras and filming days are involved. I also apply this process to sound files, even if the clip name reflects the scene and take number.
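
Here’s a sketch of that renaming pass. We use a renaming utility in practice; this simply illustrates the same prefix logic, with a hypothetical folder path:

```python
# Sketch: prepend a unique job/date/camera/reel prefix to camera clips,
# e.g. "GOPR0042.MP4" becomes "ABC0928B07_GOPR0042.MP4". Illustration of
# the prefix logic only - we use a renaming utility in practice.
from pathlib import Path

def prefix_reel(reel_folder: Path, job: str, mmdd: str, camera: str, reel: int) -> None:
    prefix = f"{job}{mmdd}{camera}{reel:02d}_"   # e.g. ABC0928B07_
    for clip in reel_folder.iterdir():
        if clip.is_file() and not clip.name.startswith(prefix):
            clip.rename(clip.with_name(prefix + clip.name))

# Hypothetical path: B-camera reel 7, shot 09/28 for job ABC.
prefix_reel(Path("/Volumes/Jellyfish/092819/B-CAMERA_GOPRO/B007"),
            "ABC", "0928", "B", 7)
```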

The last step is to transcode and rate-convert all non-pro media. If my base rate is 23.98fps (23.976), then files like GoPro 59.94fps media get turned into ProRes at 23.98 (slomo). In that case, I will have a subfolder with the original media and a second subfolder with the transcoded media, both with proper file names. I usually apply the “_PR2398” suffix to these transcoded files. I have found that DaVinci Resolve is the best and fastest tool for this transcoding process and large batches can be run overnight as needed.

Archiving your files

If the crew used temporary drives on location, then before those are reformatted and recycled, they are copied to inexpensive portables, like Seagate or Western Digital USB drives. These are then parked on the shelf for safe keeping. The objective is to end up with at least two copies of the source media – the unaltered, camera original files and the new master files on the Jellyfish.

Once editing has been completed and approved and the client files have been delivered, we move into the archiving stage. For nearly every project, we try to make sure that a ProRes master and a textless ProRes master have been generated by the editor. In addition, the mixer or the editor will generate a mixed audio file and audio stems for dialogue, SFX, and music (as separate files). Many times, you end up making future changes or versions using these files without going back to the original project file.

The entire project folder with all of the associated media is now copied to a raw, removable hard drive. These are enterprise-grade drives. All of our workstations are equipped with docking stations for such drives. To date, we are up to 200 drives, ranging in size from 2TB to 8TB. They are indexed using the simple DiskCatalogMaker application, which generates a searchable index file of all of these archive drives. (Note – I would recommend spinning up these archive drives every few months.)

Let me mention that while this can be done at the end, I will often split this archival step into two phases. I will first copy only the Dailies media right after I have organized it on Jellyfish (before any editing), leaving the other project subfolders blank. The reason is that once location production is done, there won’t be anything else added to Dailies. In addition, it gives me three copies of the camera files – the location drive (or its back-up), Jellyfish, and the archive drive. Once the project is finished, I only need to copy the rest of the material from the other subfolders.

The last step is to move the project folder from the PROJECTS master folder on Jellyfish to the BACKED UP master folder. As long as we have space on Jellyfish, the project is never deleted. Often changes are required. When that happens, the affected project folder is moved from BACKED UP to PROJECTS again. The changes are made and client files delivered. Then the archive drive for that project is updated and re-indexed in the DiskCatalogMaker catalog file. The project folder is finally returned to the BACKED UP folder. As we need space on Jellyfish, the oldest projects that haven’t been touched in a long while are deleted.

Redundancy is the key

There are two additional protection steps. All active project files (usually Premiere Pro) are copied to the company’s Dropbox by every editor at the end of each day. In the event of a catastrophic NAS failure – before the completion of that project – we can at least get to the project file in the cloud (Dropbox) and the media stored on hard drive in order to restore the edit. (Note that if you do this with FCPX Libraries, they must first be “zipped,” because Dropbox and FCPX Libraries do not play well together.)
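
As a sketch of that zip step (assuming the library is the usual .fcpbundle package; the paths and names here are hypothetical):

```python
# Sketch: zip an FCPX library (a .fcpbundle package folder) before it goes
# to Dropbox, since syncing the raw bundle causes problems. All paths are
# hypothetical.
import shutil
from pathlib import Path

library = Path("/Volumes/Jellyfish/PROJECTS/1904_ABC/Project Files/ABC.fcpbundle")
dropbox = Path.home() / "Dropbox" / "Project Backups"
dropbox.mkdir(parents=True, exist_ok=True)

# make_archive writes "<dropbox>/ABC.zip" containing the whole bundle.
shutil.make_archive(str(dropbox / library.stem), "zip",
                    root_dir=library.parent, base_dir=library.name)
```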

The second item is an additional folder on Jellyfish for all completed masters. When an editor generates ProRes master and/or textless files, those files are also copied to this masters folder. That gives us quick access to all final versions, should the client require an extra web file or some other type of deliverable. It’s easy to simply encode new files from these ProRes masters, without needing to search out the original project folder.

These steps may sound complex and daunting if you aren’t currently doing them. I have covered some of this in past posts, but I do update my processes over time. Once you get into a routine of doing these steps, the benefits pay off immensely. Your media is better protected, it’s easier to find in the future, and relinking is a no-brainer.

©2019 Oliver Peters