Every NLE is a Database

Apple’s Final Cut Pro X has spawned many tribal arguments since its launch eight years ago. There have been plenty of debates about the pros and cons of its innovative design and editing model. One that I’ve heard a number of times is that FCPX is a relational database, while traditional editing applications are more like an Excel spreadsheet. I can see how the presentation of a bin in the list view format might convey that impression, but that doesn’t make it accurate. A spreadsheet is a grid of cells driven by formulas, regardless of whether the information in those cells is text or numbers. Every nonlinear editing application (NLE) uses a relational database to track media, although the type and format of that database differs among brands. In every case, it functions altogether differently than a spreadsheet does.
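
To make the distinction concrete, here is a minimal sketch of what “relational” means in this context, using Python’s built-in sqlite3 module. The table and column names are purely illustrative – every NLE’s project database is proprietary and far more elaborate – but the core idea holds: clip records reference media records instead of containing the media themselves.

```python
import sqlite3

# Purely illustrative schema: clip records reference media records by key.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE media (
    media_id INTEGER PRIMARY KEY,
    file_path TEXT,             -- where the essence lives on disk
    duration_frames INTEGER
);
CREATE TABLE clips (
    clip_id INTEGER PRIMARY KEY,
    media_id INTEGER REFERENCES media(media_id),
    bin_name TEXT,
    in_frame INTEGER,
    out_frame INTEGER
);
""")

# Two clip entries in different bins can point at one and the same media file.
db.execute("INSERT INTO media VALUES (1, '/Volumes/RAID/A001_C002.mov', 2400)")
db.executemany("INSERT INTO clips VALUES (?, 1, ?, ?, ?)",
               [(1, "Scene 12", 0, 2400), (2, "Selects", 300, 650)])

# A join answers questions that a flat grid of cells cannot express directly.
for row in db.execute("""SELECT clips.bin_name, media.file_path, clips.in_frame
                         FROM clips JOIN media USING (media_id)"""):
    print(row)
```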

It started with film

When all editing was done on film, the editors cut work print – a positive copy printed from the camera negative. Edits made on the work print were eventually duplicated on the pristine negative by a negative cutter, based on a cut list. That list identified the source rolls of film, plus a foot+frame count for each edit point, so the negative cutter knew exactly where to cut and join the film segments. The work print, which the editors could physically cut and splice as needed, was effectively an abstraction of – and stand-in for – the negative.

In order to enable the process, assistant editors (or in some cases, the editor) created a handwritten log, known as a codebook. This started with the dailies and included all the pertinent information, such as source roll, shoot days/dates, scenes/takes, director’s notes, editor’s notes, and so on. The codebook was a physical database that allowed an editor to know what the options were and where to find them.

During the videotape-editing era prior to NLEs, any sort of database for tracking source information was still manual. Only the cut list portion, known as the edit decision list (EDL), could be generated by the edit computer, based on the timecode values recorded on the tape. Timecode became the electronic equivalent of film’s foot+frame count.
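
That equivalence is easy to see in code. Here’s a minimal sketch of converting timecode to an absolute frame count and back, assuming non-drop-frame timecode at a whole-number frame rate (the drop-frame variants used for 29.97fps work are messier).

```python
# Timecode works as an "address" because it converts directly to an absolute
# frame count, much like foot+frame did for film. Non-drop-frame only.
def timecode_to_frames(tc: str, fps: int = 24) -> int:
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 3600 + minutes * 60 + seconds) * fps) + frames

def frames_to_timecode(total: int, fps: int = 24) -> str:
    frames = total % fps
    seconds = (total // fps) % 60
    minutes = (total // (fps * 60)) % 60
    hours = total // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# The duration of an edit is then simple arithmetic on those counts.
print(timecode_to_frames("01:00:10:12") - timecode_to_frames("01:00:05:00"))  # 132
```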

Fast forward to the modern era with file-based camera acquisition and ubiquitous, inexpensive editing software. The file recorded by the camera is a container of sorts that holds essence (audio and video) and metadata (information about the essence). Some cameras generate a lot of metadata and others don’t. One example of this type of metadata that we all encounter is the information embedded into digital still photos, which can include location, lens data, and a ton more.
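
If you want to see the essence/metadata split for yourself, a tool like ffprobe (part of FFmpeg, and my own choice here rather than anything a particular NLE uses internally) will report both the streams and the container tags. A quick sketch, assuming a hypothetical camera clip:

```python
# A peek inside a camera file's container: the streams are the essence and the
# tags are metadata. This shells out to ffprobe from FFmpeg.
import json
import subprocess

def probe(path: str) -> dict:
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

info = probe("A001_C002.mov")  # hypothetical camera clip
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))  # the essence
print(info["format"].get("tags", {}))                      # container metadata
```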

When clips are ingested/imported into your NLE – whether into a project, bin, folder, or event – the NLE links to the essence of the media clips on the hard drive or camera card and brings in whatever clip metadata is understood by that application. In addition, the user can add and merge a lot more metadata derived from other sources, like the sound recorder, script supervisor notes, electronic script, and manually added data.
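
Conceptually, that merge is just keying several sets of records to a common identifier, such as the clip name. A rough sketch, with field names that are entirely hypothetical:

```python
# Metadata merging at ingest: the clip records get enriched with data from
# other sources, matched on a common key such as the clip name or timecode.
camera_clips = {
    "A001_C002": {"reel": "A001", "fps": 23.976, "codec": "ProRes 422"},
}
sound_report = {
    "A001_C002": {"scene": "12", "take": "3", "mixer_notes": "boom only"},
}
script_notes = {
    "A001_C002": {"circled": True, "director_note": "best performance"},
}

merged = {
    name: {**camera_clips[name],
           **sound_report.get(name, {}),
           **script_notes.get(name, {})}
    for name in camera_clips
}
print(merged["A001_C002"]["director_note"])  # "best performance"
```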

The clip that you see in the bin/event/folder is an abstraction of the actual audio and video media, just like work print was for film editors. The bin/folder/event data entries are like the film editor’s codebook and are tracked in the internal database used by that application to cross-reference the clip with the actual stored media. Since a clip in the app’s browser is simply an abstraction, it can appear in multiple places at the same time – in various bins and sequences. The internal database ensures that every instance of the clip references the same piece of media, accurate down to the video frame or audio sample.
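
In code, that abstraction boils down to clip instances that are nothing more than lightweight references: a pointer to the stored media plus frame-accurate bounds. The class and field names below are illustrative, not any NLE’s actual data model.

```python
# Clip-as-abstraction: every clip instance is a lightweight reference to the
# stored media, so the same media can appear in many bins and sequences at once.
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaFile:
    path: str
    duration_frames: int

@dataclass(frozen=True)
class ClipInstance:
    media: MediaFile        # all instances point at the same essence
    in_frame: int
    out_frame: int

interview = MediaFile("/Volumes/RAID/interview_01.mov", 14400)

bins = {
    "Selects":  [ClipInstance(interview, 240, 960)],
    "Scene 12": [ClipInstance(interview, 0, 14400)],
    "Timeline": [ClipInstance(interview, 500, 620)],
}

# Every instance resolves to the exact same file on disk.
assert all(c.media is interview for clips in bins.values() for c in clips)
```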

It doesn’t matter how the bin looks

The spreadsheet comparison is based on how bins have appeared in most NLEs, including Final Cut Pro “legacy,” Avid Media Composer, and others. Unfortunately, that opinion is usually based on a narrow exposure to other NLEs. As I said, at the core, every NLE is a relational database. That means there are other things that can be tracked and other ways the information can be displayed.

For instance, older Quantel edit systems displayed source information based on what we would consider a smart search view today. The entirety of the source material was not displayed in front of the editor, since it was a single-screen layout. Instead, entering data into a search field would sift through the material and present only the clips matching the request.

Avid Media Composer systems also track media based on Script Integration (sometimes incorrectly referred to as ScriptSync, which is a separate Avid option). This is a graphical bin layout with the script text displayed on screen and clips linked to the coverage of each scene. Media Composer and now Premiere Pro both permit a freeform clip view for a bin, in which the editor can freely rearrange the position of the clip thumbnails within the bin window. This visual juxtaposition of clips by the user conveys important information to the editor.

All NLEs have multiple ways to present the data and aren’t limited to a grid-style list view that resembles a spreadsheet or a grid of clip thumbnails. Enabling these alternate views takes a lot more than simply cross-referencing your bin and timelines against a set of edit points. That’s where databases come in and why every NLE is built around one.

How can you be in two places at once when you’re not anywhere at all?

My apologies to Firesign Theatre. A huge aspect of the Final Cut Pro X edit workflow is the use of keyword collections. Thanks to them, a clip isn’t confined to just a single bin. While this is a selling point for FCPX, it is also well within the capabilities of most NLEs.

Organizing your event (bin) media in FCPX can start by assigning keywords to each clip. Each new keyword used creates a keyword collection – sort of a “smart sub-bin.” As you assign one or more keywords to a clip, FCPX automatically sorts the clip into those corresponding keyword collections. For example, let’s say you have a series of wide and close-up shots featuring both male and female actors. Clip 1 might be sorted into WIDE and MAN; Clip 2 into WIDE and WOMAN; Clip 3 into WOMAN and CLOSE-UP. The keyword collection for WIDE then displays Clip 1 and Clip 2; MAN displays Clip 1; WOMAN displays Clip 2 and Clip 3; CLOSE-UP displays Clip 3.
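
Under the hood, that behavior is essentially an inverted index: assign keywords to clips and each keyword automatically “collects” every clip tagged with it. A quick sketch mirroring the example above:

```python
# Keyword collections as an inverted index: each keyword maps to every clip
# that carries it. Clip and keyword names mirror the example in the text.
from collections import defaultdict

clip_keywords = {
    "Clip 1": {"WIDE", "MAN"},
    "Clip 2": {"WIDE", "WOMAN"},
    "Clip 3": {"WOMAN", "CLOSE-UP"},
}

keyword_collections = defaultdict(list)
for clip, keywords in clip_keywords.items():
    for keyword in keywords:
        keyword_collections[keyword].append(clip)

print(sorted(keyword_collections["WIDE"]))      # ['Clip 1', 'Clip 2']
print(sorted(keyword_collections["CLOSE-UP"]))  # ['Clip 3']
```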

Once this initial step is completed, the editor can view source clips in a more focused manner. Instead of wading through 100 clips in the event (bin) each time, the editor may only have to deal with 10 clips in the CLOSE-UP keyword collection. Or in any other collection. The beauty of FCPX’s interface design is the speed and fluidity with which this can be accomplished. This feature is one of the hallmarks of the application and no other NLE does it nearly as elegantly. In fact, FCPX tackles the challenge of narrowing down the browser options through three methods – ratings, keyword collections, and smart collections (described in this linked tutorial by Simon Ubsdell).

As elegantly as Final Cut tackles this task, that doesn’t mean that other NLEs can’t function in a similar manner. Within Premiere Pro, those exact same keywords can be assigned to the clips. Then simply create a set of search bins using those same keywords as the search criteria. The result is the same distribution of clips into collections, with a clip able to appear in multiple bins at the same time. Likewise, the editor doesn’t need to go through the full set of clips in a bin, but can concentrate on the small handful in any given search bin. Media Composer also offers search functions, as well as custom sift routines, which enable you to display only clips matching specific column details, like a custom keyword.
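
The concept behind a search bin is simply a saved query that gets evaluated against the full bin on demand, rather than a container that stores clips. A rough sketch of that idea (Premiere Pro’s search bins and Avid’s custom sift are, of course, built in their respective interfaces, not scripted like this):

```python
# A search bin reduced to a saved query: it stores criteria, not clips, and is
# evaluated against the full bin whenever it is opened.
clips = [
    {"name": "Clip 1", "keywords": {"WIDE", "MAN"}},
    {"name": "Clip 2", "keywords": {"WIDE", "WOMAN"}},
    {"name": "Clip 3", "keywords": {"WOMAN", "CLOSE-UP"}},
]

search_bins = {
    "WIDE shots": lambda clip: "WIDE" in clip["keywords"],
    "Women":      lambda clip: "WOMAN" in clip["keywords"],
}

for bin_name, criteria in search_bins.items():
    matches = [clip["name"] for clip in clips if criteria(clip)]
    print(bin_name, matches)
```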

Most NLEs can only store one set of in/out edit marks on a clip within a bin at any given time. Final Cut Pro X, on the other hand, offers range-based selection, so a clip can retain multiple in/out selections at once. Yet other NLEs aren’t behind here either. The obvious solution most editors use when this is needed is to create a subclip, which can be a duplicate of the entire clip or just a portion of it. Need to pull multiple sections of the clip? Simply create multiple subclips. In effect, these are the same as range-based selections in Final Cut Pro X. Admittedly, the FCPX method is more fluid and straightforward. Range-based selections are essentially virtual subclips that are dynamically created by the editor; but unlike subclips, they can’t be moved separately to other events (bins). Two ways to tackle a very similar need.
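
Modeled as data, the two approaches look like this. The structures are illustrative only: a subclip is a separate, movable object that references part of its source, while range-based selections are multiple in/out pairs that stay attached to the original clip.

```python
# Two models for the same need: separate subclip objects versus multiple
# in/out ranges stored on the source clip itself.
from dataclasses import dataclass, field

@dataclass
class SourceClip:
    name: str
    duration_frames: int
    ranges: list[tuple[int, int]] = field(default_factory=list)  # FCPX-style selections

@dataclass
class Subclip:
    parent: SourceClip       # traditional approach: a distinct bin object
    in_frame: int
    out_frame: int

interview = SourceClip("interview_01", 14400)

# Range-based selections live on the clip...
interview.ranges.extend([(240, 960), (5000, 5240)])

# ...while subclips are separate objects that can be moved to other bins.
pulls = [Subclip(interview, 240, 960), Subclip(interview, 5000, 5240)]
```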

The bottom line is that under the hood, all NLEs are still very much the same. Let me emphasize that I’m not arguing the superiority, speed, or elegance of one approach or tool over another. Every company has its own set of unique features that appeal to different types of editors. These are simply different methods to place information at your fingertips, get roadblocks out of the way, and make editing more creative and enjoyable.

©2019 Oliver Peters