Resolution Purists and the Real World

I love to lurk over at RedUser.net, the unofficial online forum for RED owners and enthusiasts. It’s a great place to gain insight into the technology, but it’s also just pure fun reading the various perceptions of the less experienced RED aficionados. The RED One camera employs a single 4520 x 2540 CMOS sensor to capture various image sizes – the most popular of which is 4096 x 2048. This is considered to be a 4K file with a 2:1 aspect ratio. Many people confuse resolution and file size, so a 4K file isn’t necessarily 4K worth of resolution. There’s also a lot of confusion between the terms resolution and sharpness. The simplest explanation is that resolution is the measurable ability to resolve fine detail, while sharpness relates to your eyes’ and brain’s perception of whether or not an image is crisp and shows a lot of detail. Both Mark Schubin (Videography magazine’s technical editor) and Adam Wilt (Pro Video Coalition) have written at length on these subjects.

 

As a poor country editor who isn’t a DP or image scientist, I defer to the authorities on these subjects, but I have spent several decades working in all sorts of image formats, resolutions and display technologies. From this experience, I can say that often the supposed resolution of the sensor, as expressed in pixels, has very little to do with how the image looks. I see a lot of folks online expressing the desire to finish in 4K, without any understanding of the real world cost or desirability of 4K post and distribution. Not to mention the fact that true 4K theatrical displays are quite a few years off, if for no other reason than the lack of financial incentive for major theater chains to convert all their 35mm film projection to something like Sony’s SRX-series digital cinema projectors. So in spite of an interest on the part of content producers to see 4K presentation venues, the reality is that high-resolution-originated product will continue to end up being viewed on various displays, from web movies to SD and HD television up to film projection and/or digital cinema projection at 2K or less.

 

 

Been There – Done That – Got the Belt Buckle

The irony of all of this is that we’ve been there before. I even have the limited edition belt buckle to prove it! In the late 70s I worked with the CEI 310 camera. This was a 2-piece electronic field camera that was definitely geared towards high-quality production and not news. The CEI 310 eventually became the basis of Panavision’s Panacam – their first foray into electronic cameras equipped with Panavision film lenses. Bear in mind that the 310 and Panacam were always SD cameras without any 24P capabilities. On the plus side, the colorimetry of the CEI camera appeared more “filmic” than its ENG counterparts, which was further enhanced by the addition of Panavision lenses and accessories.

 

At the time, I was responsible for a facility that cranked out a ton of grocery store commercials. “Painting” the camera to get the most out of tabletop shots was the job of the video engineer (often called the “video shader”). A lot of what I learned about color correction (and have since passed on to others) came from trying to get a cooked ham or roast to look appetizing using our RCA studio cameras! When Panavision set up the deal with CEI to market Panacams, they established a number of authorized rental/production facilities who would supply the camera accompanied by a trained technician. Again, this person’s job was to paint the image for the most pleasing look. Fast forward a couple of decades and you have the position of the DIT (digital imaging technician), who today fulfills the role of video shading, among other tasks, when HD cameras are used on high-budget shoots, like feature films.

 

These early attempts at electronic cinematography really didn’t go far, due to the limiting resolution of NTSC and PAL video. Sure the images looked great, but you were really only working in a medium that was acceptable for television and not the big screen. Nevertheless, companies like Panavision, CEI and other competitors (like Ikegami with the EC-35) proved that properly adjusted video cameras coupled with high-quality glass could be a good marriage, regardless of the resolution of the camera.

 

 

High Definition to Small Definition

Fortunately HD came along, reviving interest in using electronic cameras for theatrical distribution. The company I worked for in the 90s was an early adopter of HD. We bought two of Sony’s HDW-730 cameras, which were interlaced 1080 HDCAM camcorders. Interlacing causes many purists to turn up their noses, preferring the later 24P models for true film-style images. In spite of this, we produced quite a lot of impressive content, including a Bible-based dramatic production for a themed attraction called “The Holyland Experience”. Our 20-minute film was shot on location in Israel and projected in a custom theater that rivaled any big screen movie theater in size and scope. The final master was edited in 1080i but encoded into 720p and projected using a Barco data-grade (not digital cinema) projector. Interlaced or not, this image was as impressive and as high-quality to the eye as if it had been a full-blown 35mm film production.

 

On the other end of the scale, I’ve also posted the video portions of IllumiNations: Reflections Of Earth, Disney’s nighttime show at EPCOT – a fireworks and laser extravaganza choreographed to music. ROE’s video segments are presented on a 29’ tall rotating earth globe mounted on a barge in the middle of the EPCOT lagoon. The continental masses on that globe consist of LED displays. The final image that fills these screens is actually a 360 x 128 pixel video movie composited like a world map. The pixels for the continents are, in turn, mapped onto the matching LED coordinates of the globe. Australia only has the resolution of a typical computer desktop icon, yet it is still possible to discern imagery with a display this coarse. The trick is in the fact that viewing distances are 500’ to 700’ away and your brain fills in the gaps. This works much like the image of Lincoln’s face that’s made up of a mosaic of other images. When you get far enough back, you recognize Lincoln, instead of focusing on the individual components.
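
For the numerically curious, here is a rough back-of-envelope check of why such a coarse display still reads as an image at those distances. The globe and map figures come from the text above; the one-arcminute visual acuity limit is a common rule of thumb and an assumption on my part.

```python
import math

# Back-of-envelope: how big does one globe pixel look from across the lagoon?
# Figures come from the text; ~1 arcminute is a rule-of-thumb acuity limit.
globe_height_ft = 29.0
map_height_px = 128
viewing_distance_ft = 500.0

pixel_height_ft = globe_height_ft / map_height_px                     # ~0.23 ft (~2.7 in)
pixel_angle_arcmin = math.degrees(math.atan(pixel_height_ft / viewing_distance_ft)) * 60

print(f"each pixel is about {pixel_height_ft * 12:.1f} inches tall")
print(f"and subtends roughly {pixel_angle_arcmin:.1f} arcminutes at {viewing_distance_ft:.0f} ft")
# At roughly 1.6 arcminutes per pixel, the display is only a little coarser than
# what the eye can resolve, so the brain happily fuses the mosaic into continents.
```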

 

 

High Definition and the Silver Screen

 

Most folks now agree that the actual resolution of the RED One camera with proper lenses and accurate focus is in excess of 3K, though not quite as high as 4K. Compare this to film. 35mm negative is said to be as high as the equivalent of 8K (though 4K is generally accepted by most as “full” resolution), but typically is scanned at 4K or 2K resolution. However, the image you see in the theater from a projected release print is generally considered to be closer to 1K. This varies with the quality of the print, the projector lens and the brightness of the projector lamp. Meanwhile, most of the popular HD cameras used for digital cinematography (Grass Valley Viper, Sony F900, Sony F23, etc.) capture images at 1920 x 1080, leaving you with a 16 x 9 image that’s comparable to a 2K film scan when the aspect ratio is 1.85:1. I’ve seen quite a few of the movies in theaters that were “filmed” using digital cameras (Collateral, Apocalypto, Zodiac, Star Wars, Once Upon A Time In Mexico, etc.) and I find very little to quibble about. In fact, Star Wars was shot with the wider 2.35:1 aspect, meaning that the top and bottom were cropped. So really only about 800 pixels out of the actual 1080-pixel height show up in the final prints.
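
The crop arithmetic is easy to sketch. Here is a quick check of how much of a 1920 x 1080 frame survives at common theatrical aspect ratios, using 2.39:1 for the modern “scope” ratio alongside the looser 2.35:1 figure:

```python
# How many of the 1080 lines actually carry picture at common theatrical ratios?
full_width, full_height = 1920, 1080

for ratio in (1.85, 2.35, 2.39):
    used_lines = round(full_width / ratio)
    print(f"{ratio}:1 extraction uses {used_lines} of {full_height} lines "
          f"({used_lines / full_height:.0%} of the frame height)")
# 1.85:1 keeps nearly the whole frame (~1038 lines); a scope crop keeps only
# about 800, which is all the vertical resolution that reaches the prints.
```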

 

I’ve also edited a film that was finished through a DI process using Assimilate SCRATCH. Our film was shot on 4-perf Super35mm negative and transferred to HDCAM-SR. Since we intended to end up in 1.85:1, the 4-perf Super35mm frame provided the closest fit to the 16 x 9 aspect ratio of HD, without wasting part of the top and bottom of the negative’s frame. This technique results in smaller film grain within the HD frame because more of the whole film frame is used. Internally our SCRATCH files were 2K DPX files and the output was back to an HDCAM-SR master. I’ve seen this film projected at DCI spec in the lab’s screening room, as well as HDCAM running through a projector at 1080i (interlaced with added 3:2 pulldown) and I must say that this image would not have looked any better had we worked off of a 4K film scan.

 

The reason I say this is the general texture of film and the creative choices made for exposure, lighting and lens/filter selection. Images that are more pleasing to the eye are sometimes technically lower in sharpness. In other words, when you stick your nose up close to the screen, the image will tend to appear soft. Having higher resolution doesn’t matter, because there is no more real detail in the image to bring out except bigger film grain. One interesting comparison is last year’s There Will Be Blood versus No Country For Old Men. Blood went through a traditional film finish rather than a digital one, whereas No Country was completed at 2K resolution using a digital intermediate process. Both were nominated for an Oscar for Best Cinematography. By all rights, Blood should have had the higher resolution image, yet in point of fact, both looked about the same to the casual eye when seen in the theaters. The cinematography of each was striking enough to earn the nomination.

 

 

It’s in the Glass

 

Going back to the Panacam example, what you start to find out is that the quality of the glass is a major factor in what ends up being recorded. I once did a film shot with a Sony F900 camera (24P). The DP/owner-operator opted to rent a “Panavised” Sony F900 (like those used on Star Wars) instead of using his own camera, so that he could take advantage of the better Panavision lenses. The result was a dramatic difference between the image quality of those lenses as compared to standard HD lenses. Likewise, some of the RED examples I’ve seen online that were shot with various non-optimized lenses, such as prime lenses designed for still photo cameras, exhibited less-than-superb quality. This is also why there have been a number of successful indie films shot with a Panasonic VariCam. Technically the VariCam, with its 1280 x 720 imager, should look significantly worse on the big screen than a Sony F900. Yet, many of these have been shot using 35mm lens adapters and high-quality film lenses. The results on screen speak for themselves. The funny thing is that there’s a lot of talk of 4K, yet when I’ve seen Sony’s 4K projector demos, the content comes from 1920 x 1080 sources – shot with various Sony or Panavision digital cameras. I can assure you that these look awesome.

 

 

You ARE Paying for Something

Aside from lenses, another thing to keep in mind is the electronics used by the camera for image enhancement and filtering. Part of the big price difference between a RED One and a competing Sony, Grass Valley or Panasonic camera is the electronics used to enhance the image. The RED One generates a camera raw, Bayer-pattern image. The intent is to do all processing in post, just like sending film negative to a lab. The other cameras have a lot of circuitry designed to control the image in-camera. You may opt for a neutral, flat image, but there’s still processing applied to generate that finished RGB image from the camera, regardless of whether it’s flat or painted. This processing not only applies color matrices but also sharpens detail and reduces noise. By contrast, RED doesn’t apply this in-camera and also uses OLPF (optical low pass filtering), common in digital still camera sensors. OLPF essentially filters out the highest resolution transients so that you don’t have excess aliasing in the image on things like contrasting diagonal lines, such as on a car grill. The design goal is to leave you with true and not artificial resolution. This means the image may at times appear soft, so sharpening and detail enhancement have to be added back (to taste) during the post production conversion of the camera raw files.
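
To make the “lab work happens in post” idea concrete, here is a minimal sketch of deferred raw processing: demosaic first, then add sharpening to taste. This is emphatically not RED’s actual pipeline or REDcode, just a generic illustration using OpenCV; the file name, frame dimensions, Bayer layout and sharpening amounts are assumptions.

```python
import cv2
import numpy as np

# Generic camera-raw sketch: a single-channel Bayer mosaic is demosaicked to an
# RGB image, and detail enhancement is added afterwards, "to taste", instead of
# being baked in by the camera. (Hypothetical file and Bayer layout.)
raw = np.fromfile("frame.raw", dtype=np.uint16).reshape(2048, 4096)

rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)          # demosaic (debayer)

# Unsharp mask: subtract a blurred copy to restore apparent crispness.
soft = cv2.GaussianBlur(rgb, (0, 0), sigmaX=1.5)
sharpened = cv2.addWeighted(rgb, 1.4, soft, -0.4, 0)    # amount chosen by eye
```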

 

The dilemma of all of this file conversion needed in post is that you often don’t get the best results. On the plus side, you may reap the benefit of oversampling, meaning that at times an HD image downsampled to SD may look better than if it had been shot in SD to begin with. I have, however, also found the opposite to be true. HD is a very high resolution image that has more actual resolution than our monitors and projectors can truly display. An image looks more natural in HD when less detail enhancement is dialed in. If you crank up the enhancement, like you typically do in most SD cameras, then that image would look garish in HD. Unfortunately, when you downsample this very natural looking HD image into SD, the image tends to look soft, because we are used to the look of overly-enhanced SD cameras. Therefore, downsampling by a dedicated device like the Teranex Mini will give you better results than using the built-in functions of Final Cut Pro or a Kona card or an HD deck, because the Mini lets you subjectively add enhancement, color control and noise reduction as part of the HD-to-SD conversion.

 

Aliasing is another issue. A lot of HD content is captured in progressive formats (such as 24P). Progressive HD images on a native progressive display (projectors, plasmas, LCDs) look great, but when you display these same images as scaled-down NTSC or PAL on an interlaced CRT, something’s got to give. If you take a high-contrast transition, such as the light-to-dark changes between the metal bars in our car grill example, the HD image is able to retain all the anti-aliasing information for the in-between gradients in those transitions from light to dark and back. When this image is downsampled, some of this detail is lost and there’s less anti-aliasing information. The transitions become harsher when displayed on the interlaced SD CRT and the metal of the grill appears to scintillate with any movement. In other words, the diagonal edges of the metal grill appear more jagged and tend to “dance” between the scanlines.

 

Unfortunately this is a normal phenomenon and can exist whether you shoot digitally or on film. A few years back Cintel, an established telecine manufacturer, introduced SCAN’dAL, a feature designed specifically to deal with this issue when transferring 35mm footage to video. Although a lot of ink has been spilled about the benefits of oversampling, in some cases a matching size yields the best results. I go back to SD videos I’ve cut, which were shot using a Sony Digital Betacam camcorder, and am amazed at how much better these look in SD than newer versions of the same program shot on HD and downsampled for SD presentation. When downsampling is part of the workflow, it is important to try a number of options if quality is critical. For example, sometimes hardware does a better job and at other times software is king. Some of the better HD-to-SD scaling in software is achieved in After Effects and Shake. Often just the smallest touch of Gaussian blur will help as well.
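
As a sketch of the software route, the recipe described above – a whisper of Gaussian blur to tame detail the SD raster can’t carry, a good scaler, then a touch of enhancement added back – can be expressed as a simple filter chain. ffmpeg is used here purely as a stand-in for whatever tool you prefer (the text mentions After Effects and Shake); the file names, filter amounts and codec defaults are assumptions to be adjusted by eye, and framing/letterbox decisions are omitted.

```python
import subprocess

# HD-to-SD downconversion sketch: slight pre-blur, a quality scaler, then
# gentle re-sharpening. All values are placeholders, to be tuned by eye.
subprocess.run([
    "ffmpeg", "-i", "hd_master_1080.mov",
    "-vf", "gblur=sigma=0.6,scale=720:480:flags=lanczos,unsharp=5:5:0.4",
    "-pix_fmt", "yuv422p",
    "sd_downconvert.mov",
], check=True)
```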

 

 

Reality Check for the Indie Filmmaker

 

One of the reasons this isn’t cut-and-dried is that camera manufacturers play so many games with the image. For example, the Panasonic HVX200 makes outstanding images and is popular with indie filmmakers. Yet it only uses a 960 x 540 pixel sensor to generate 720 or 1080 images – getting there through the magic of pixel shifting (see Adam Wilt’s articles for the details). As good as the camera looks, when you put it side-by-side with Panasonic’s VariCam, the latter will appear noticeably sharper than the HVX200, because it indeed has higher resolution.

 

I’m sure you’re wondering if this is all just a can of worms. You’re right. It is. But often, the most calibrated measuring devices are simply your two eyes. Forget the specs and trust your instincts. A recent example is Shine A Light. This film was shot using a combination of 35mm film cameras and one Panavision Genesis. All footage ended up on HDCAM-SR (1920 x 1080) and the master was recorded out not only to 35mm film for release prints, but also to IMAX. Even though HD isn’t close to the resolution of a 70mm IMAX negative, the Stones’ concert in Shine A Light looks incredible in IMAX projected onto a 5-story-tall screen!

 

In the real world, it’s amazing what you can get away with. Last year the Billy Graham Library opened with video modules that I edited and finished. The largest screen is in the Finale theater – an ultra-widescreen format that’s a horizontal composite of three 720p projections. Our sources were largely HD, but there was also a smattering of audience close-up shots from Graham’s last crusade in New York City that originated on a Panasonic DVX100A (mini-DV) camera. It was amazing how well these images held up in the finished product. Other great examples are the documentaries Murderball and The War Tapes. Each was shot with a variety of mini-DV cameras, yet in spite of the image defects, the stories and personalities are so enthralling that image quality is the least important factor.

 

I have a lot of respect for what the team at RED has done, but I’m not yet willing to concede that shooting with the RED One is going to give you a better film than if you used other cameras, like an Arri D-21, Sony F23 or Panasonic’s new HPX3000, just because RED has a higher pixel count for its sensor. In the end, like everything else in this business, content and emotion are the most important ingredients. When it comes to capturing an image, the technical resolution of the camera is a big factor, but it doesn’t automatically guarantee the best image results from the point-of-view of your audience.

 

© 2008 Oliver Peters


Impressions of Las Vegas – NAB 2008

If you’ve casually been following the NAB news, you most likely think that the biggest news is the lack of participation by Avid and Apple. It’s true that neither had a booth, but both were there at customer and reseller events, including Avid’s roll-out of the new DX product line. If this is your takeaway, then you might surmise that NAB was a rather lackluster event for post. Dig a bit deeper and you’ll find that NLEs have reached a certain level of maturity and it’s hard to keep rolling out new features. In fact, camera manufacturers have been driving the show with the latest and greatest file-based formats. The editing system manufacturers have had their hands full simply adding support for each new camera record option. Whether or not your favorite NLE supports P2, XDCAM-HD, REDcode and so on will impact far more users than whether Avid improves color correction or Apple improves media management.

 

If you’re looking for true edit system innovation, then that news came out of Quantel. Not only are they adding significant features, but they’ve wholly embraced the tools to edit and color grade the left and right eye views of stereoscopic imagery. We’ll see if that proves to be a good business model, but right now in the wake of quite a few 3D movies in the theaters, Quantel is betting that the market is there for more than a select few. Autodesk likewise had its own news with the continued unification of the user interfaces between Smoke and Flame. The products each still have a distinct and unique role to play, but Autodesk is integrating across both product groups such common modules as the timeline and batch (Flame’s process tree for effects).

 

As far as Avid’s DX line is concerned, so far the main news is new hardware connected via the PCIe bus and new pricing. This ties in with improved GPU and CPU power as well as Leopard and Vista support and even optimization. In total this will result in more streams of true real-time horsepower. Unfortunately, this also means that Avid has to update the system, while staying with the familiar GUI that its user base likes. It might be different under the hood, but on the surface it looks and feels the same. Many will applaud this, but it won’t sway the critics and certainly won’t bring back those who’ve left for other NLEs, like Final Cut Pro.

 

 

Trends 

 

If you’re looking for trends, however, it’s become pretty obvious – if you didn’t know already – that the industry is moving away from videotape and towards a myriad of file-based solutions. When Panasonic jumped in originally with P2, Sony made no bones about knocking their competitor’s approach. The funny thing about this is that Sony has now wholeheartedly embraced the concept with its EX1 and now EX3 cameras, sporting their own style of solid state storage, the SxS cards. Users are riding the learning curve, as many still don’t understand the differences when it comes to containers (P2 cards, XDCAM-HD discs, SxS cards), file wrappers (MXF, OMF, QuickTime, AVI, MPEG4) and codecs (DVCPROHD, AVC-Intra, MPEG2). Of course, eventually it will all get sorted out, but what’s worth noting is that the only new videotape-based VTR introduced at NAB 2008 was an HDCAM-SR player by Sony. Meanwhile Sony and Panasonic both released quite a few VTR “replacement” products that use each manufacturer’s card scheme. Panasonic is growing a product ecosystem around P2, and Sony is likewise growing one around its SxS cards.
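
One way to keep the terminology straight is to think of each recording as three separate layers. The small sketch below uses only the examples named above; the pairings shown are typical rather than exhaustive.

```python
# Container (the physical media), wrapper (the file format) and codec (the
# compression) are independent layers. Typical pairings, not an exhaustive list.
recordings = [
    {"container": "P2 card",       "wrapper": "MXF", "codec": "DVCPRO HD"},
    {"container": "P2 card",       "wrapper": "MXF", "codec": "AVC-Intra"},
    {"container": "XDCAM-HD disc", "wrapper": "MXF", "codec": "MPEG-2"},
    {"container": "SxS card",      "wrapper": "MP4", "codec": "MPEG-2 (XDCAM EX)"},
]
for r in recordings:
    print(f'{r["container"]:14} | wrapper: {r["wrapper"]:3} | codec: {r["codec"]}')
```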

 

Many experienced video pros look at this in horror, fearing that a few years down the road, it will be hard to mount the hard drives to which this media has been copied after the shoot. I appreciate this sentiment, as you can still readily find decks to play Betacam-SP and even Umatic tapes that are now over two decades old. That isn’t universally true, however. In my market, you’d be hard pressed to find decks to play such once-popular formats as D1, D2, D3 or D5. There are only a handful of one-inch Type C VTRs in the market and their reliability is questionable. So the truth of the matter is that you probably aren’t any safer with content on tape than on hard drive, assuming you establish a viable approach to archiving the media. Generally this takes the form of redundant copies on multiple hard drives or, at best, data tapes, such as the LTO3 format.

 

With this as a trend, quite a few NAB vendors were showing solutions for lower cost and simpler shared storage as well as asset management software. Some products to look into include Apple’s Final Cut Server, Laird Telemedia’s LairdShareHD, Focus Enhancements’ ProxSys, Gridiron Software’s Flow and Tiger Technologies’ MetaSAN and MetaLAN. In addition, local storage is cheaper than ever, so editors working with P2 or similar technologies will have no problem just dumping all the media at full resolution to their local drives straight from the shoot and cutting happily away.

 

 

RED

 

It’s hard to talk about NAB and not mention RED Digital Camera. Yes, they announced two new cameras (Scarlet and Epic), but more important is the fact that the post support structure is growing around them. Even if RED is ultimately not super-successful (unlikely), they will have changed the way many work with images. I believe the camera raw workflow is bound to be adopted by others in the future. Today, Apple and Assimilate are the only official RED partners. They are the only companies with access to the .R3D files. Avid is also able to provide some editorial support through XML list conversions. In the RED booth, a beta version of FCP’s Log and Transfer module was shown that imports .R3D files natively and transcodes them to another codec, like Apple ProRes 422, on the way in. There was also a technology preview of .R3D files being graded directly in Apple Color, through the addition of a RED-oriented RED Room tab within Color’s interface.

 

Assimilate introduced its RED-specific SCRATCH CINE, the only full-featured finishing product geared strictly for a RED workflow. But the story doesn’t stop there. Quite a few companies are chomping at the bit to release their own products for RED. At the moment, they are held back by RED Digital Camera’s agreements with its original partners. These are expected to expire soon, with RED releasing an SDK for its REDcode codec. Once that’s done, expect to see companies like Cineform and IRIDAS quickly jump into the game. In fact, these companies already have raw workflow products that are ready for RED, which were developed using existing (but not final) versions of the codec. So just as in the digital still photo world, camera raw will be a concept to which videographers will need to become accustomed.

 

Look for more of my NAB 2008 post production analysis in the June print edition of Videography magazine and also online at DV magazine.


© 2008 Oliver Peters

The Continuing Case For Offline Editing

 

In the beginning, there was film editing. You made your creative decisions by editing work print – called editing the “rough cut”. You completed the movie by sending those decisions along with the edited work print as a guide to a negative cutter who frame-accurately “conformed” (physically cut) the negative to match those same edit points. When computer-assisted linear video editing dominated all but the feature film industry, this concept evolved into “offline” and “online” editing. These terms were borrowed from computer jargon, indicating the status of gear connected to a mainframe computer.

 

The idea was that offline editing suites used cheaper equipment at a lower hourly rate, while the online suites used the high-end equipment at a higher hourly rate. Since time is money, the lower hourly rate of the offline suite (made possible by the lower investment in cheaper equipment) freed producers and editors to pursue creative experimentation without the pressure of the clock. Although this was generally the practice, in point of fact what defined offline and online wasn’t the type of gear, but rather the objective. Offline editing – like editing work print to create a rough cut – was intended to result in a finished set of editorial decisions. Online editing – like a negative cutter or the film lab – was intended to result in a finished master. These concepts were independent of the actual cost of equipment used for each function.

 

Nonlinear edit systems replaced low cost linear offline edit suites. The initial image quality of NLEs was fine for offline editing but not for generating high-quality masters. That has changed over time so that now, most NLEs are capable of doing it all – offline, online, effects, mixing and color grading. So, one has to ask… Is there even a need for offline editing anymore? After all, some folks look at offline and online editing as simply “editing the same program twice!” The cost of storage is so cheap, that it’s relatively easy to capture and have access to all of your project footage at full resolution. Simply capture, cut, output and deliver. Wham-bam and you’re done. In fact, I work mostly in DV50 when I’m cutting non-broadcast standard def videos in FCP or 2:1 when I’m cutting on an Avid. In these cases, I, too, will skip the offline editing phase and simply cut until I get client approval. When the picture is locked, I’m done, except for the final mix and color-correction.

 

On the flipside, there are many projects where it still makes more sense to follow the traditional offline/online workflow. Here are a few reasons why:

 

High Definition – Although HD editing has become easier, it still takes a lot of horsepower. For more complex projects, it makes sense to cut in SD or with a lighter HD codec (like DVCPROHD or Avid’s DNxHD 36).

 

Higher Resolution – Today HD editing seems pretty simple. That wasn’t always the view, but the industry doesn’t stand still. Today people routinely discuss finishing at 2K film resolution and the RED One camera has challenged our imagination with the possibility of desktop 4K finishing. The reality is that for most general purpose computers these tasks are still a total struggle. 4K finishing might be possible, but you probably don’t really want to creatively cut a full-length movie this way.

 

Laptops – One of the advantages of modern technology is editing mobility. Many editors like to cut at home or on location using laptops and portable FireWire drives. Again, it’s a horsepower issue. SD is simply easier to deal with than HD. Standard def at DV25 is less taxing on a notebook computer than uncompressed 601 video (see the quick data-rate sketch after this list). Here again it makes sense to edit creatively at a somewhat lower resolution and then go back for the online edit on a more advanced system.

 

Editing Specialization – Not all editors are created equal. Some are sloppy, but creative. Others are anal retentive in their attention to detail, but not that inspired. A rare few can be both meticulous and artistic. Division of labor is the key. I often work with clients who do their own cutting. They are creative, but not necessarily power users of NLE software. It works for them to refine the creative cut and then have me come in at the end to work with full resolution media, add some creative flair, clean up any technical issues and, in general, wrangle the finished product for them.

 

Horses for Courses – A British term that denotes selecting the appropriate tool for the task at hand. Often projects are creatively edited (offline, rough cut) on one brand of NLE, but finished (online editing) on a completely different brand. The reasons for this vary, but suffice it to say that some NLEs have certain advantages in horsepower or effects capabilities. So, you may be a whizz at FCP but have never seen a Quantel iQ in your life. However, next month you have to edit and deliver your first 4K project. What are you going to do? In this scenario, an FCP-based rough cut and an iQ online and finish makes all the sense in the world.
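
To put some numbers behind the laptops point above, here is a rough data-rate comparison. It counts video essence only; audio, overhead and drive formatting are ignored, and the uncompressed figure assumes 8-bit 4:2:2 at 720 x 486.

```python
# Rough per-second data rates, video essence only.
fps = 29.97

uncompressed_601 = 720 * 486 * 2 * fps / 1e6        # 8-bit 4:2:2, MB/s  (~21 MB/s)
dv25 = 25e6 / 8 / 1e6                                # DV25 video, MB/s   (~3.1 MB/s)
dvcprohd = 100e6 / 8 / 1e6                           # DVCPRO HD, MB/s    (~12.5 MB/s)

for name, rate in [("uncompressed 601", uncompressed_601),
                   ("DV25", dv25),
                   ("DVCPRO HD", dvcprohd)]:
    print(f"{name:17} {rate:5.1f} MB/s  (~{rate * 3600 / 1000:.0f} GB per hour)")
```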

 

All of this is made possible because nearly all computer-assisted editing systems track media based on reel identification and timecode. Using a 4-digit reel number and timecode, you can locate each unique frame of content within 10,000 hours of media. This is a methodology that has served the industry well for over three decades, as witnessed by the fact that the simple CMX EDL (edit decision list) – the legacy of a defunct pioneering editing manufacturer – is still the only universally-accepted method of interchange between different NLE brands. In fact, even in film and DI work, labs will often rely on variations of this 30-year-old EDL format.
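
For anyone who has never looked inside one, a CMX 3600-style EDL event is just columns of reel, track, transition and timecodes, which is exactly why it has survived as the lowest common denominator. A hypothetical event, pulled apart:

```python
# One made-up CMX 3600-style event: event number, reel, track, transition,
# source in/out, record in/out. That's the entire vocabulary needed to
# reconstruct an edit on any system.
event = "001  A001     V     C        01:02:03:04 01:02:08:00 01:00:00:00 01:00:04:26"

(event_num, reel, track, transition,
 src_in, src_out, rec_in, rec_out) = event.split()

print(f"event {event_num}: reel {reel}, {track}/{transition}")
print(f"  source {src_in} - {src_out}  ->  record {rec_in} - {rec_out}")
```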

 

That’s in spite of the fact that they could use other tracking schemes, such as film’s keycode and/or DPX files with header metadata. Reel ID and timecode information makes it possible to capture footage from a DVCAM dub of an HDCAM camera master, use that for the offline editing and then frame-accurately recapture the high-quality media from the HDCAM master for final output. That’s the heart of how all NLEs operate.
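
The reason this works is that a timecode collapses into an absolute frame count within a reel, so reel plus timecode is effectively a unique frame address. A tiny sketch, assuming simple 30 fps non-drop counting (drop-frame bookkeeping is omitted):

```python
# Reel ID + timecode as a unique frame address. Non-drop 30 fps counting is
# assumed here for simplicity.
FPS = 30

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frame_address(reel: str, tc: str) -> tuple:
    return (reel, tc_to_frames(tc))

# Recapturing from the camera master is just: reel 0007, these frames, please.
print(frame_address("0007", "01:02:03:04"))   # ('0007', 111694)
```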

 

Today there are new issues introduced by file-based cameras.  How do you track clips when there is no physical reel of videotape? For example, when you import a P2 clip recorded as 720p or 1080p, it’s going to be copied to your hard drives at the native resolution. That’s different than a videotape source, which can be captured and recaptured at different resolutions based on the settings of your video capture hardware. There really is no true equivalent procedure in a file-based workflow.  You must take extra steps to keep the native resolution media (higher quality), as well as to separately transcode media file copies at a (lower quality) draft resolution. You’d work with the draft copies during offline editing and later relink your clips to the higher resolution files for online finishing. All this must be done with proper data tracking in order to avoid errors.
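
A minimal sketch of that extra step might look like the following: walk the backed-up card, make draft-resolution copies that keep the original clip names, and later point the project back at the native files to relink. ffmpeg, the folder layout, the proxy size and the draft codec here are all stand-ins of my own choosing, not any particular NLE’s procedure.

```python
import pathlib
import subprocess

# Make draft-resolution proxies of the native camera files, preserving clip
# names so the offline cut can later be relinked to the full-quality media.
# Folder names, codec and sizes are placeholders.
originals = pathlib.Path("card_backup/native")      # full-quality camera files
proxies = pathlib.Path("card_backup/draft")
proxies.mkdir(parents=True, exist_ok=True)

for clip in sorted(originals.glob("*.mxf")):
    proxy = proxies / (clip.stem + ".mov")           # same clip name, new wrapper
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=960:540",                      # draft resolution
        "-c:v", "prores", "-profile:v", "0",         # light, edit-friendly codec
        "-c:a", "pcm_s16le",
        str(proxy),
    ], check=True)

# Offline edit with the draft files; for the online pass, re-point the media
# search path at card_backup/native and relink by clip name and timecode.
```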

 

Or take QuickTime for example. It’s the heart of FCP, which reads and writes timecode and reel numbers to and from QuickTime files. Avid, on the other hand, cannot do the same, importing all QuickTime files with an assigned default timecode start number, instead of the actual number stored in the file’s own metadata.

 

All of these challenges will be with us for years as NLE software engineers tweak the code to take advantage of file-based post. Nevertheless, there are still many valid reasons for editors to continue to chant the offline/online mantra and push the engineers to improve media management until it is truly bullet-proof.

 

© 2008 Oliver Peters