The Continuing Case For Offline Editing


In the beginning, there was film editing. You made your creative decisions by cutting work print – building the “rough cut”. You completed the movie by sending those decisions, along with the edited work print as a guide, to a negative cutter who frame-accurately “conformed” (physically cut) the negative to match those same edit points. When computer-assisted linear video editing came to dominate all but the feature film industry, this concept evolved into “offline” and “online” editing. The terms were borrowed from computer jargon, where they described whether a piece of gear was connected to a mainframe computer.


The idea was that offline editing suites used cheaper equipment at a lower hourly rate, while online suites used high-end equipment at a higher hourly rate. Since time is money, the lower rate of the offline suite – made possible by the smaller investment in equipment – freed producers and editors to pursue creative experimentation without the pressure of the clock. Although this was generally the practice, in point of fact what mattered wasn’t the type of gear used in an offline or online suite, but the objective. Offline editing – like cutting work print to create a rough cut – was intended to result in a finished set of editorial decisions. Online editing – like the work of a negative cutter or the film lab – was intended to result in a finished master. These concepts were independent of the actual cost of the equipment used for each function.


Nonlinear edit systems replaced low-cost linear offline edit suites. The initial image quality of NLEs was fine for offline editing but not for generating high-quality masters. That has changed over time, so that now most NLEs are capable of doing it all – offline, online, effects, mixing and color grading. So, one has to ask… Is there even a need for offline editing anymore? After all, some folks look at offline and online editing as simply “editing the same program twice!” Storage is so cheap that it’s relatively easy to capture and have access to all of your project footage at full resolution. Simply capture, cut, output and deliver. Wham-bam and you’re done. In fact, I work mostly in DV50 when I’m cutting non-broadcast standard-def videos in FCP, or 2:1 when I’m cutting on an Avid. In these cases, I, too, will skip the offline editing phase and simply cut until I get client approval. When the picture is locked, I’m done, except for the final mix and color correction.


On the flip side, there are many projects where it still makes more sense to follow the traditional offline/online workflow. Here are a few reasons why:


High Definition – Although HD editing has become easier, it still takes a lot of horsepower. For more complex projects, it makes sense to cut in SD or a lower-resolution version of HD (like DVCPROHD or Avid’s DNxHD36).


Higher Resolution – Today HD editing seems pretty simple. That wasn’t always the view, but the industry doesn’t stand still. People now routinely discuss finishing at 2K film resolution, and the RED One camera has challenged our imagination with the possibility of desktop 4K finishing. The reality is that handling these tasks on most general-purpose computers is a total struggle. 4K finishing might be possible, but you probably don’t really want to creatively cut a full-length movie this way.


Laptops – One of the advantages of modern technology is editing mobility. Many editors like to cut at home or on location using laptops and portable FireWire drives. Again, it’s a horsepower issue. SD is simply easier to deal with than HD, and standard def at DV25 is less taxing on a notebook computer than uncompressed 601 video. Here again, it makes sense to edit creatively at a somewhat lower resolution and then go back for the online edit on a more advanced system.


Editing Specialization – Not all editors are created equal. Some are sloppy, but creative. Others are anal retentive in their attention to detail, but not that inspired. A rare few can be both meticulous and artistic. Division of labor is the key. I often work with clients who do their own cutting. They are creative, but not necessarily power users of NLE software. It works for them to refine the creative cut and then have me come in at the end to work with full resolution media, add some creative flair, clean up any technical issues and, in general, wrangle the finished product for them.


Horses for Courses – A British expression for selecting the appropriate tool for the task at hand. Often projects are creatively edited (offline, rough cut) on one brand of NLE but finished (online editing) on a completely different brand. The reasons vary, but suffice it to say that some NLEs have advantages in horsepower or effects capabilities. So, you may be a whiz at FCP but have never seen a Quantel iQ in your life. However, next month you have to edit and deliver your first 4K project. What are you going to do? In this scenario, an FCP-based rough cut followed by an iQ online and finish makes all the sense in the world.


All of this is made possible because nearly all computer-assisted editing systems track media by reel identification and timecode. Using a 4-digit reel number and timecode, you can locate each unique frame of content within 10,000 hours of media. This methodology has served the industry well for over three decades, as witnessed by the fact that the simple CMX EDL (edit decision list) – the legacy of a defunct pioneering editing manufacturer – is still the only universally accepted method of interchange between different NLE brands. In fact, even in film and DI work, labs often rely on variations of this 30-year-old EDL format.
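As a rough illustration of how little information that interchange actually carries, here is a minimal Python sketch that parses one CMX 3600-style event line – event number, reel, track, transition, then source and record in/out timecodes. Real EDLs also include headers, comment lines, drop-frame flags and transition durations that this sketch ignores, and the sample line is invented.

```python
import re

# One CMX 3600-style event line: event number, reel, track (V/A),
# transition (C = cut), then source in/out and record in/out timecodes.
EVENT = re.compile(
    r"^(\d{3})\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_event(line):
    """Return a dict describing one edit event, or None for other lines."""
    m = EVENT.match(line)
    if not m:
        return None
    num, reel, track, trans, src_in, src_out, rec_in, rec_out = m.groups()
    return {
        "event": int(num), "reel": reel, "track": track,
        "transition": trans,
        "source": (src_in, src_out), "record": (rec_in, rec_out),
    }

line = "001  0042     V     C        01:00:10:00 01:00:15:00 00:59:58:00 01:00:03:00"
print(parse_event(line)["reel"])  # → 0042
```

Everything an online system needs to conform the master – which reel, which frames, where they land in the program – fits in that one line per edit.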


That’s in spite of the fact that they could use other tracking schemes, such as film’s keycode and/or DPX files with header metadata. Reel ID and timecode information makes it possible to capture footage from a DVCAM dub of an HDCAM camera master, use that for the offline editing and then frame-accurately recapture the high-quality media from the HDCAM master for final output. That’s the heart of how all NLEs operate.
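To make the addressing concrete, here is a minimal sketch of that scheme, assuming non-drop-frame 30 fps for simplicity (real projects may run at 24, 25 or 29.97); the reel number and timecode values are invented. A reel ID plus an absolute frame count uniquely identifies one frame, and the same address applies to the DVCAM dub and the HDCAM master alike.

```python
FPS = 30  # assumed non-drop-frame rate for this sketch; real projects vary

def frame_address(reel, tc):
    """Convert a reel ID and an HH:MM:SS:FF timecode into a unique
    (reel, absolute frame number) address for one frame of media."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return (reel, ((h * 60 + m) * 60 + s) * FPS + f)

# The same address identifies this frame on the offline dub and on the
# camera master, which is what makes frame-accurate recapture possible.
print(frame_address("0042", "01:00:10:15"))  # → ('0042', 108315)
```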


Today there are new issues introduced by file-based cameras. How do you track clips when there is no physical reel of videotape? For example, when you import a P2 clip recorded as 720p or 1080p, it’s going to be copied to your hard drives at the native resolution. That’s different from a videotape source, which can be captured and recaptured at different resolutions based on the settings of your video capture hardware. There really is no true equivalent procedure in a file-based workflow. You must take extra steps to keep the higher-quality, native-resolution media, as well as to separately transcode copies of the media files at a lower-quality draft resolution. You’d work with the draft copies during offline editing and later relink your clips to the higher-resolution files for online finishing. All of this must be done with proper data tracking in order to avoid errors.
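As a sketch of the kind of tracking involved – using an invented folder layout and a match-by-filename convention, not any particular NLE’s actual relink logic – the draft and native copies of each clip might be paired like this:

```python
from pathlib import Path

def build_relink_map(draft_dir, native_dir):
    """Pair each draft (offline) clip with its native (online) media.
    Clips are matched by file stem, e.g. clip01.mov <-> clip01.mov."""
    native = {p.stem: p for p in Path(native_dir).glob("*.mov")}
    links, missing = {}, []
    for draft in sorted(Path(draft_dir).glob("*.mov")):
        if draft.stem in native:
            links[draft.stem] = {"draft": draft, "native": native[draft.stem]}
        else:
            missing.append(draft.stem)  # flag clips that cannot relink
    return links, missing
```

At online time you would swap each clip’s draft path for its native path; anything left in `missing` has to be re-transcoded or re-copied before the finish can proceed – exactly the sort of bookkeeping error the traditional reel-and-timecode scheme prevented automatically.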


Or take QuickTime, for example. It’s the heart of FCP, which reads and writes timecode and reel numbers to and from QuickTime files. Avid, on the other hand, cannot do the same; it imports all QuickTime files with an assigned default timecode start number instead of the actual number stored in the file’s own metadata.


All of these challenges will be with us for years as NLE software engineers tweak the code to take advantage of file-based post. Nevertheless, there are still many valid reasons for editors to continue to chant the offline/online mantra and push the engineers to improve media management until it is truly bullet-proof.


© 2008 Oliver Peters