FCP Helpers

Apple Final Cut Pro is often described as an 80/20 application, trading away some niche features for a lower price. More often than not, this descriptor is meant negatively. Avid editors using FCP frequently lament its media management, render files and so on when comparing FCP with Media Composer. Yet the fact that Apple targets the sweet middle has left the field open for high-end systems on the Mac, like Avid Symphony, Autodesk Smoke and now, Blackmagic Design DaVinci Resolve. This means advanced systems are available for the tiny market segment that wants them, without the need for Apple to develop a similar application itself.

I tend to view this 80/20 scenario as an opportunity for innovation. Avid, Autodesk, Quantel and others largely handle R&D internally. Although they embrace some openness in the interchange of media and file formats, their core features are typically closed to outside development, unless there’s an applicable SDK or API. Final Cut Pro incorporates a number of open and extensible technologies often available through the OS itself, like XML, QuickTime, Apple Events, Core Image and others. Granted, these are typically Apple-specific and not actual ratified standards, but they do provide a wide open development field for small and large entrepreneurs alike.

These technologies provide a relatively easy path for programmers to create a mix of plug-ins, utilities and applications that augment the native power of FCP. I’ll be the first to admit that I like to have everything inside the application, but the sheer diversity of options exceeds what’s available in the competing systems. For example, if you want Avid-style media management or control of project preferences, there are several different developers who have such solutions. The beauty of this for the user is more control and customization over your system – sort of the “shade tree mechanic” approach to media.

Here is a concise list of most of the companies building useful tools to enhance your Final Cut environment. Unlike effects plug-ins, these solutions are designed to improve productivity, reliability, efficiency and generally make your FCP experience better.

Assisted Editing

Automatic Duck

AV3 Software / GET

Boris FX / XML Transfer / AAF Transfer

Boris FX / MyMusicSource

Digital Heaven

Digital Rebellion

Edit Groove

Edit Mule

Glue Tools

Post Haste

Singular Software / PluralEyes / DualEyes

Smart Sound



XMEdit / Traffic


Update: With this post, the DigitalFilms blog passed 500,000 views, since its launch in March 2008. I’m glad many of you have found it helpful! Thanks.

©2010 Oliver Peters


Tips for Small Camera and Hybrid DSLR Production


It started in earnest last year and shows no sign of abating. Videographers are clearly in the midst of two revolutions: tapeless recording and the use of the hybrid still/video camera (HDSLR). The tapeless future started with P2 and XDCAM, but these storage devices have been joined by other options, including CompactFlash, SD and SDHC memory cards. The acceptance of small cameras in professional operations first took off with DV cameras from Sony and Panasonic, especially the AG-DVX100. These solutions have evolved into cameras like the Sony HVR-Z7U and PMW-EX3 and Panasonic’s AG-HPX170 and AVCCAM product line. Modern compressed codecs have made it possible to record high-quality 1080 and 720 HD footage using smaller form factors than ever before.

This evolution has sparked the revolution of the HDSLR cameras, like the Canon EOS 5D Mark II, the new Canon EOS 7D and 1D Mark IV and the Nikon D90, D300s and D3s, to name a few. Although veteran videographers might have initially scoffed at such cameras, it’s important to note that Canon developed the 5D at the urging of Reuters and the Associated Press, so its photographers could deliver both stills and motion video with the least hassle. Numerous small films, starting with photographer Vincent Laforet’s Reverie, have more than proven that HDSLRs are up to the task of challenging their video cousins. From the standpoint of a news or sports department, we have entered an era where every reporter can become a video journalist, simply by having a small camera at the ready. That’s not unlike the days when reporters carried a Canon Scoopic 16mm, in case something newsworthy happened.

These cameras come with challenges, so here is some advice that will make your experience more successful:

1. Ergonomics / stability – Both small video camcorders and HDSLRs are designed for handheld, not shoulder-mounted, operation. This isn’t a great design for stability while recording motion. In order to get the best image out of these cameras, invest in an appropriate tripod and fluid head. For more advanced operations, check out the various camera mounting accessories from companies like Zacuto and Red Rock Micro.

2. Rolling shutter – This phenomenon affects all CMOS cameras to varying degrees. Because a CMOS sensor is read out line by line, there is a time differential between information at the top and the bottom of the sensor; fast horizontal movement during that interval skews the image. The HDSLRs have been criticized for these defects, but others, like the EX or the RED One, have displayed the same artifacts to a lesser degree. The defect can be minimized by using a tripod and slow (or no) camera movement.

3. Focus – One of the reasons that shooters like HDSLRs is the large image sensor (compared to video cameras) and film lenses, which provide a shallow depth-of-field. This is a mixed blessing when you are covering a one-time event. Still photo zoom lenses aren’t mechanically designed to be zoomed and focused during the shot like film or video zoom lenses. This makes it harder to nail the shot on-the-fly. Since the depth-of-field is shallow, the focus is also less forgiving. Lastly, the focus is often done using an LCD viewer instead of a high-quality viewfinder. Many shooters using both small video cameras and HDSLRs have added an externally-mounted LCD monitor, as a better device for judging shots.

4. Audio – The issue of audio depends on whether we are talking about a Canon 5D or a Panasonic 170. Professional and even prosumer camcorders have been designed to have mics connected. To date, HDSLRs have not. If you are shooting extensive sync-sound projects with a hybrid camera, then you will want to consider using double-system sound with a separate recorder and mixer (human). At the very least, you’ll want to add an XLR mic adapter/mixer, like the BeachTek DXA-5D.

5. Movie files – Each of these cameras records its own specific format, codec and file wrapper. Production and post personnel have become comfortable with P2 and XDCAM, but the NLE manufacturers are still catching up to the best way of integrating consumer AVCHD content or files from these HDSLRs. Regardless of the camera system you plan to use, make sure that the file format is compatible with (or easily transcoded to) your NLE of choice.

6. Capacity – Most of the cameras use a recording medium that is formatted as FAT32. This limits a single file to 4GB, which in the case of the Canon 5D means the longest recording cannot exceed 12 minutes of HD (1920x1080p at 30fps). Unlike P2, there is no spanning provision to extend the length of a single recording. Make sure to plan your shot list to stay within the file limit. Come with enough media. In the case of P2, many productions bring along a “data wrangler” and a laptop. This person will offload the P2 cards to drives and then reformat (erase) the cards so that the crew can continue recording throughout the day with a limited number of P2 cards.
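To see roughly where that 12-minute figure comes from, you can work backward from the FAT32 file limit. The bitrate below is an assumed round number for a combined A/V stream, used purely for illustration – it is not Canon’s published specification:

```python
GIB = 1024 ** 3  # FAT32 caps a single file just under 4 GiB

def max_clip_minutes(bitrate_mbit, file_limit_bytes=4 * GIB):
    """Longest single recording before hitting the file-size ceiling."""
    seconds = (file_limit_bytes * 8) / (bitrate_mbit * 1_000_000)
    return seconds / 60

# ~48 Mbit/s is an assumed figure for illustration only
print(round(max_clip_minutes(48), 1))  # roughly 12 minutes
```

Halving the bitrate doubles the possible clip length, which is why lower-bitrate formats on the same cards can run much longer per file.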

7. Back-up – Always back-up your camera media onto at least two devices in the original file format. I’ve known producers who merely transferred the files to the edit system’s local array and then trashed the camera media, believing the files were safe. Unfortunately, I’ve seen Avids quarantine files, making them inaccessible. On rare occasion, I’ve also seen Final Cut Pro media files simply disappear. The moral of the story is to treat your original camera media like film negative. Make two verified back-ups and store them in a safe place in case you ever need them again.

The new generation of small video camcorders and Hybrid DSLRs offers the tantalizing combination of lower operating cost and stunning imagery. That’s only possible with some care and planning. These tools aren’t right for every application, but the choices will continue to grow in the coming years. Those who embrace the trend will find new and exciting production options.

© 2009 Oliver Peters

Written for NewBay Media and TV Technology magazine

What’s wrong with this picture?


“May you live in interesting times” is said to be an ancient Chinese curse. That certainly describes modern times, but no more so than in the video world. We are at the intersection of numerous transitions: analog to digital broadcast; SD to HD; CRTs to LCD and plasma displays; and tape-based to file-based acquisition and delivery. Where the industry had the chance to make a clear break with the past, it often chose to integrate solutions that protected legacy formats and infrastructure, leaving us with the bewildering options that we know today.


Broadcasters settled on two standards: 720p and 1080i. These are both full-raster, square pixel formats: 1280x720p/59.94 (60 progressive frames per second in NTSC countries) – commonly known as “60P” – and 1920x1080i/59.94 (60 interlaced fields per second in NTSC countries) – commonly known as “60i”. The industry has wrestled with interlacing since before the birth of NTSC.


Interlaced scan


Interlaced displays show a frame as two sequential sets of alternating odd and even-numbered scan lines. Each set is called a field and occurs at 1/60th of a second, so two fields make a single full-resolution frame. Since the fields are displaced in time, one frame with fast horizontal motion will appear like it has serrated edges or horizontal lines. That’s because odd-numbered scan lines show action that occurred 1/60th of a second apart from the even-numbered, adjacent scan lines. If you routinely move interlaced content between software apps, you have to be careful to maintain proper field dominance (whether edits start on field 1 or field 2 of a frame) and field order (whether a frame is displayed starting with odd or even-numbered scan lines).
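The odd/even split described above can be sketched in a few lines of illustrative Python, treating each scan line as a list item:

```python
def split_fields(frame_lines):
    """Split a frame's scan lines into the two interlaced fields."""
    field1 = frame_lines[0::2]  # odd-numbered lines (1, 3, 5, ...)
    field2 = frame_lines[1::2]  # even-numbered lines (2, 4, 6, ...)
    return field1, field2

frame = [f"line{n}" for n in range(1, 9)]
f1, f2 = split_fields(frame)
print(f1)  # ['line1', 'line3', 'line5', 'line7']
print(f2)  # ['line2', 'line4', 'line6', 'line8']
```

Field order is simply the question of which of these two lists the display draws first; swap them and fast motion judders.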


Progressive scan


A progressive format, like 720p, displays a complete, full-resolution frame for each of 60 frames per second. All scan lines show action that was captured at the exact same instant in time. When you combine the spatial with the temporal resolution, the amount of data that passes in front of a viewer’s eyes in one second is essentially the same for 1080i (about 62 million pixels) as for 720p (about 55 million pixels).
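Those pixel counts are simple arithmetic and easy to verify:

```python
def pixels_per_second(width, height, full_frames_per_second):
    """Total pixels delivered to the viewer each second."""
    return width * height * full_frames_per_second

# 1080i: 60 fields/sec carry the information of 30 full 1920x1080 frames
print(pixels_per_second(1920, 1080, 30))  # 62208000
# 720p: 60 full 1280x720 frames per second
print(pixels_per_second(1280, 720, 60))   # 55296000
```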


Progressive is ultimately a better format solution from the point-of-view of conversions and graphics. Progressive media scales more easily from SD to HD without the risk of introducing interlace errors that can’t be corrected later. Graphic and VFX artists also have a better time with progressive media and won’t have issues with proper field order, as is so often the case when working with NTSC or even 1080i. The benefits of progressive media apply regardless of the format size or frame rate, so 1080p/23.98 offers the same advantages.


Outside of the boundary lines


Modern cameras, display systems and NLEs have allowed us to shed a number of boundaries from the past. Thanks to Sony and Laser Pacific, we’ve added 1920x1080psf/23.98. That’s a “progressive segmented frame” running at the video-friendly rate of 23.98 for 24fps media. PsF is really interlacing, except that at the camera end, both fields are captured at the same point in time. PsF allows the format to be “superimposed” onto an otherwise interlaced infrastructure with less impact on post and manufacturing costs.


Tapeless cameras have added more wrinkles. A Panasonic VariCam records to tape at 59.94fps (60P), even though you are shooting with the camera set to 23.98fps (24P). This is often called 24-over-60. New tapeless Panasonic P2 camcorders aren’t bound by VTR mechanisms and can record a file to the P2 recording media at any “native” frame rate. To conserve data space on the P2 card, simply record at the frame rate you need, like 23.98pn (progressive, native) or 29.97pn. No need for any redundant frames (added 3:2 pulldown) to round 24fps out to 60fps as with the VariCam.


I’d be remiss if I didn’t address raster size. At the top, I mentioned full-raster and square pixels, but the actual video content recorded in the file cheats this by changing the size and pixel aspect ratio as a way of reducing the data rate. This will vary with codec. For example, DVCPRO HD records at a true size of 960×720 pixels, but displays as 1280×720 pixels. Proper display sizes of such files (as compared with actual file sizes) are controlled by the NLE software or a media player application, like QuickTime.
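The relationship between stored raster and displayed raster is just a pixel aspect ratio multiplication. A quick sketch using the DVCPRO HD 720p numbers from above:

```python
def display_width(stored_width, pixel_aspect_ratio):
    """Width a player should show after stretching non-square pixels."""
    return round(stored_width * pixel_aspect_ratio)

# DVCPRO HD 720p: stored as 960x720, displayed as 1280x720,
# which implies a pixel aspect ratio of 1280/960 = 4/3
print(display_width(960, 4 / 3))  # 1280
```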


Mixing it up


Editors routinely have to deal with a mix of frame rates, image sizes and aspect ratios, but ultimately this all has to go to tape or distribution through the funnel of the two accepted HD broadcast formats (720p/59.94 and 1080i/59.94) – PLUS good old-fashioned NTSC and/or PAL. For instance, if you work on a TV or film project being mastered at 1920x1080p/23.98, you need to realize that few displays support native 23.98 (24P) frame rates. You will ultimately have to generate not only a 23.98p master videotape or file, but also “broadcast” or “air” masters. Think of your 23.98p master as a “digital internegative”, which will be used to generate 1080i, 720p, NTSC, PAL, 16×9 squeezed, 4×3 center-cut and letterboxed variations.


Unfortunately your NLE won’t totally get you there. I recently finished some spots in 1080p/23.98 on an FCP system with a KONA2 card. If you think the hardware can convert to 1080i output, guess again! Changing FCP’s Video Playback setting to 1080i is really telling the FCP RT engine to do this in software, not in hardware. The ONLY conversions done by the KONA hardware are those available in the primary and secondary format options of the AJA Control Panel. In this case, only the NTSC downconversion gets the benefit of hardware-controlled pulldown insertion.


OK, so let FCP do it. The trouble with that idea is that yes, FCP can mix frame rates and convert them, but it does a poor job of it. Instead of the correct 2:3:2:3 cadence, FCP uses the faster-to-calculate 2:2:2:4. The result is an image that looks like frames are being dropped, because the fourth frame is held twice as long as the others, resulting in a noticeable visual stutter. In my case, the solution was to use Apple Compressor to create the 1080i and 720p versions and to use the KONA2’s hardware downconversion for the NTSC Beta-SP dubs. Adobe After Effects also functions as a good software conversion tool.
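The difference between the two cadences is easy to see if you expand a few frames by hand. This is purely an illustrative sketch of the repeat patterns, not anything FCP itself runs:

```python
import itertools

def apply_cadence(frames, cadence):
    """Expand 24fps frames to 60fps by repeating each per the cadence."""
    out = []
    for frame, repeats in zip(frames, itertools.cycle(cadence)):
        out.extend([frame] * repeats)
    return out

frames = list("ABCD")  # four film frames become ten video frames
print(apply_cadence(frames, (2, 3, 2, 3)))  # A A B B B C C D D D (smooth)
print(apply_cadence(frames, (2, 2, 2, 4)))  # A A B B C C D D D D (D held 4x: stutter)
```

Both cadences produce ten video frames for every four film frames (24 → 60), but the 2:2:2:4 version parks on every fourth frame, which is the stutter the eye picks up.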


Another variation to this dilemma is the 720pn/29.97 (aka 30PN) of the P2 cameras. This is an easily edited format in FCP, but it deviates from the true 720p/59.94 standard. Edit in FCP with a 29.97p timeline, but when you change the Video Playback setting to 59.94, FCP converts the video on-the-fly to send a 60P video stream to the hardware. FCP is adding 2:2 pulldown (doubling each frame) to make the signal compliant. Depending on the horsepower of your workstation, you may, in fact, lower the image resolution by doing this. If you are doing this for HD output, it might actually be better to convert or render the 29.97p timeline to a new 59.94p sequence prior to output, in order to maintain proper resolution.


Converting to NTSC


But what about downconversion? Most of the HD decks and I/O cards you buy have built-in downconversion, right? You would think they do a good job, but when images are really critical, they don’t cut it. Dedicated conversion products, like the Teranex Mini do a far better job in both directions. I delivered a documentary to HBO and one of the items flagged by their QC department was the quality of the credits in the downconverted (letterboxed) Digital Betacam back-up master. I had used rolling end credits on the HD master, so I figured that changing the credits to static cards and bumping up the font size a bit would make it a lot better. I compared the converted quality of these new static HD credits through FCP internally, through the KONA hardware and through the Sony HDW-500 deck. None of these looked as crisp and clean as simply creating new SD credits for the Digital Betacam master. Downconverted video and even lower third graphics all looked fine on the SD master – just not the final credits.


The trouble with flat panels


This would be enough of a mess without display issues. Consumers are buying LCDs and plasmas. CRTs are effectively dead. Yet CRTs are the only devices that properly display interlacing – especially important if you are troubleshooting errors. Flat panels all go through conversions and interpolation to display interlaced video in a progressive fashion. Going back to the original 720p versus 1080i options, I really have to wonder whether the rapid technology change in display devices was properly forecast. If you shoot 1080p/23.98, this often gets converted to a 1080i/59.94 broadcast master (with added 3:2 pulldown) and is transmitted to your set as a 1080i signal. The set converts the signal. That’s the best-case scenario.


Far more often, the production company, network and local affiliate haven’t adopted the same HD standard. As a result, there may be several 720p-to-1080i and/or 1080i-to-720p conversions along the way. To further complicate things, many older consumer sets are native 720p panels and scale a 1080 image. Many include circuitry to remove 3:2 pulldown and convert 24fps programs back to progressive images. This is usually called the “film” mode setting. It generally doesn’t work well with mixed-cadence shows or rolling/crawling video titles over film content.


The newest sets are 1080p, which is largely a bogus marketing feature. These panels are designed for video game playback, not TV signals, which are simply frame-doubled. All of this mish-mash – plus the heavy digital compression used in transmission – makes me marvel at how bad a lot of HD signals look in retail stores. I recently saw a clip from NBC’s Heroes on a large 1080p set at a local Sam’s Club. It was far more pleasing on my 20” Samsung CRT at home, received over analog cable, than on the big 1080p digital panel.


Progress (?) marches on…


We can’t turn back time, of course, but my feeling about displays is that a 29.97p (30P) signal is the “sweet spot” for most LCD and plasma panels. In fact, 720p on most of today’s consumer panels looks about the same as 1080i or 1080p. When I look at 23.98 (24P) content as 29.97 (24p-over-60i), it looks proper to my eyes on a CRT, but a bit funky on an LCD display. On the other hand, 29.97 (30P) strobes a bit on a CRT, but appears very smooth on a flat panel. Panasonic’s 720p/59.94 looks like regular video on a CRT, but 720p recorded as 30p-over-60p looks more film-like. Yet both signals actually look very similar on a flat panel. This is likely due to the refresh rates and image latency of an LCD or plasma panel as compared to a CRT. True 24P is also fine if your target is the web, where it can be displayed as true 24fps without pulldown. Remember that as video, though, many flat panels cannot display 23.98 or 24fps frame rates without pulldown being added.


Unfortunately there is no single, best solution. If your target distribution is the web or content primarily to be viewed on flat panel display devices (including projectors), I highly recommend working strictly in a progressive format and a progressive timeline setting. If interlacing is involved, then make sure to deinterlace those clips or even the entire timeline before your final delivery. Reserve interlaced media and timelines for productions that are intended predominantly for broadcast TV using a 480i (NTSC) or 1080i transmission.


By now you’re probably echoing the common question, “When are we going to get ONE standard?” My answer is that there ARE standards – MANY of them. This won’t get better, so you can only prepare yourself with more knowledge. Learn what works for your system and your customers and then focus on those solutions – and yes – the necessary workarounds, too!


Does your head hurt yet?


© 2009 Oliver Peters

Adobe Creative Suite 4 – A First Look

Hot on the heels of last year’s huge Adobe software release, the company has quickly turned around another batch of impressive updates in its new Creative Suite 4 line-up. Once again, these products can be purchased individually or as part of various collections for web, video and print – plus the all-in-one Master Collection. All CS4 products will ship by the end of Q4 2008. The Creative Suite family constitutes major growth for Adobe, which expects to ship approximately 500,000 pieces of just the video portion of this software to over 300,000 customers by the end of 2008.


I’ll focus my comments on Adobe Creative Suite 4 Production Premium – the collection for video professionals. Its main applications include Premiere Pro, After Effects, Photoshop Extended, Illustrator, Flash Professional, Encore, OnLocation and Soundbooth. In addition, there are also other utilities designed to aid your workflow, such as Bridge, Device Central, Dynamic Link and Adobe Media Encoder.


Common feature enhancements


Going into depth on each application in the collection would require the entire magazine, so I’ll stick to the highlights. Across the board, Adobe has concentrated on several big improvements and additions between CS3 and CS4. These include user interface changes, searchable metadata based on XMP support and speech-to-text technology. The user interfaces of the various applications continue to move closer to a common Adobe layout. This tabbed workspace design is most completely implemented in Premiere Pro, After Effects and Soundbooth. Most of the applications have gained search fields that operate like Apple’s Spotlight. Typing information into the search field of a Premiere Pro bin will filter the displayed contents to match your criteria. In After Effects, for example, you can filter timeline layers to only display tracks where the object’s position has been altered, simply by typing “position” into the search field. Most of the applications have been metadata-enabled so meaningful descriptions, titles, keywords and copyright information can be captured and embedded into files using open source XMP technology.


Both Premiere Pro and Soundbooth have added a powerful, new speech recognition technology called Speech Search to automatically transcribe dialogue into searchable text. After the transcription process is complete, simply click on a word in the generated text (now part of the clip’s metadata) and the media file will instantly cue to the corresponding point. It’s a great technology, but I was less than satisfied with the accuracy of the automatic transcription. I picked one of Adobe’s demo clips (an interview with cinematographer Rob Legato) and had Soundbooth create a transcription. Legato speaks quickly but clearly; however, the accuracy was only about 50% and turned such phrases as “a short shooting schedule” into “the court shaving scandal”. The latter might make for an interesting movie plot, but I wonder whether the time required to edit a transcription is too great an offset to effectively use this feature on a real project. The accuracy was better on a different test file, but still at least 25% of the phrases were incorrect. In spite of that, Speech Search seems like a very useful tool for documentary editors. In fact, even some Avid editors have theorized that you could use Soundbooth CS4 to create transcriptions that in turn could be imported into Avid Media Composer for use with their ScriptSync feature.


Aside from Speech Search, the biggest new product feature in Adobe Premiere Pro CS4 and After Effects CS4 is the native support for various tapeless camera formats. You can natively edit content from Panasonic P2 (DVCPRO, DVCPRO HD and AVC-Intra), Sony XDCAM-HD and XDCAM-EX media without transcoding or rewrapping. Premiere Pro can access the metadata for these clips and edit directly from the cards or use its built-in Media Browser to transfer the media to your local media drives for better performance. Running Premiere Pro CS4 on a dual-core 2.8 GHz iMac was a pleasure. Native 720p/23.98 DVCPROHD clips (imported from P2) played smoothly and JKL transport controls were very responsive even on media playing from the internal drive.


Although not technically part of this release, Adobe is currently working with RED Digital Cinema to develop a plug-in that would enable Premiere Pro and After Effects users to natively edit with RED’s .R3D camera raw files. You can see demos of how this will work at Dave Helmly’s blog. Adobe recognizes the potential of a raw workflow and plans to give editors access to debayering, gamma, ISO and white balance controls within their software.


The biggest changes


The most radical change in the Production Premium bundle is Adobe OnLocation CS4. The interface has been “Adobe-ized” and no longer sports the appearance of physical test gear installed in a rack. It now runs on both Macs and PCs and operates as the front-end, direct-to-disk recorder for an integrated end-to-end Adobe workflow. As before, it turns your desktop or laptop into a recording station, complete with monitor (your screen) and software scopes, but now features better clip management and the ability to add metadata to clips. DV and HDV cameras connected via FireWire work with OnLocation.


Soundbooth CS4 has evolved from a two-track to a multi-track audio tool. Adobe does not view Soundbooth as a DAW competitor. It offers Audition (only sold individually) for those customers. Instead, Soundbooth CS4 is designed as a “helper” application to be used with Premiere Pro by video editors or Flash Professional by web developers. Soundbooth is designed as a less complex, task-based application for audio recording, editing, clean-up, mixing and music production. Although you can drill down into the effect filters and make custom adjustments, Soundbooth groups its processes by tasks with default presets. There is a decent set of tools for two-track audio production, similar to what you might find in BIAS Peak Pro or Sony Sound Forge. These are augmented with music composition tools using Adobe’s royalty-free scores. You can purchase new scores from Adobe’s Resource Central website, as well as download a wealth of free sound effects. Score creation with Soundbooth CS4 is similar to using Smart Sound’s Sonic Fire Pro, letting you tailor the length and arrangement of the score to your video. Now with multi-track support, you can mix dialogue, music and effects within Soundbooth CS4. A video editor will find Soundbooth CS4 useful for its clean-up and music tools, but a web producer could potentially do 100% of the audio production for a Flash website or a podcast with Soundbooth CS4.


The rest of the collection


Changes in the other applications might seem less dramatic depending on your needs. Photoshop CS4 Extended has gained 3D layer support. For the first time, you can import 3D objects into Photoshop. These can be manipulated in 3D space, including the ability to add textures, paint and make color modifications. After Effects CS4 supports these 3D layers and has also gained numerous enhancements. It includes a new built-in cartoon effect and comes bundled with Imagineer Systems’ Mocha for After Effects, a 2.5D planar motion tracking application.


Video layers were added last year to Photoshop CS3 Extended, so CS4 makes Photoshop an even more powerful tool for motion graphics of all types. Even the basic version offers more power than most video editors use, so I wish Adobe would offer a cheaper version with features that fit between Photoshop Elements and Photoshop CS4. I’m also surprised that Adobe hasn’t developed natural media painting features in Photoshop. This still seems to be an area left solely to Corel Painter.


In the past, you had to access the Adobe Media Encoder through Premiere Pro, but it is now included as a standalone application. It includes presets for all the popular media options (MPEG2, H264, iPod, Flash, etc.) and is one of the cleanest encoders I’ve used. I think you’ll find it a worthy rival for Apple Compressor, Sorenson Squeeze or Telestream Episode.


Although Flash CS4 Professional is part of this video bundle, you can now generate a Flash project directly from After Effects. Flash CS4 Professional received a total makeover with a timeline more like After Effects, but if you’re still more comfortable working in After Effects, then start there and later export to Flash CS4 Professional for completion. Another Adobe application that works with Flash is Encore. As in CS3, the updated CS4 version lets you author standard DVDs, Flash projects and Blu-ray high-def DVDs from a single project file. The CS3 version limited the Flash projects to 640×480 window sizes, but this limitation has been lifted in CS4. Now interactive Flash projects created in Encore can be designed in up to HD window sizes. Speaking of interactivity, Adobe is touting better Blu-ray authoring in Encore, though no BD-J authoring. I had no way to test this, but Blu-ray authoring is not yet a mature process. There have been compatibility issues with early players and Adobe has posted a number of trouble-shooting suggestions online. Since Blu-ray is an evolving technology, do your research if your sole interest in this software is creating Blu-ray DVDs.


More tools for your tool chest


As in the past, this collection is one of the most comprehensive “studio” bundles with a price that bests the competition in value. If you’re an Adobe fan, CS4 is a worthy upgrade. If you rely on Apple Final Cut Pro or Avid Media Composer for editing, Adobe is betting that there are enough essential applications in the bundle to make it worth your while just to pick up the whole package. Photoshop and After Effects are integral tools for most editors and Encore continues as a powerful, yet low-cost DVD authoring tool, so right there in three applications, you have paid for all the rest.


Adobe is a company that’s neutral in many of the big platform debates. They sell software and don’t have a vested interest in selling hardware. As such, there’s plenty of third party hardware and plug-in support to make Premiere Pro attractive to first time NLE users or switchers from other systems. With integrated metadata support, native operation with the most popular tapeless cameras and the ability to export to just about every one of today’s popular media formats, Adobe Creative Suite 4 Production Premium is a package you’ll want to add to your system.


Written by Oliver Peters for Videography magazine and NewBay Media, LLC.

Going Tapeless


The popularity of P2 and other tapeless camera formats has had a big impact on the post community. Some editors love it while others view it as a huge pain. Nevertheless – like it or not – file-based production and post production are here to stay. There’s not only P2, but also XDCAM, XDCAM-HD, XDCAM-EX, RED and a whole slew of consumer and prosumer camcorders using SD and CF cards to record various flavors of SD and HD video. And let’s not forget that FireStore and the original Avid/Ikegami EditCam started it all and are still with us today. Sony optical disc XDCAM and XDCAM-HD tend to be the exception, since this media offers a hybrid workflow that bridges the tape and tapeless worlds. To avoid confusion, I’m going to frame my comments around card and drive-based media, like P2. Some of the tips will apply to XDCAM, but others won’t.


There are typically three elements to a file-based recording. The first is essence – the actual audio and video content. Essence is recorded at a particular size, scanning method and frame rate (e.g. 1920x1080p/23.98fps) using a specific codec (e.g. DVCPRO HD). This essence is encased in a file wrapper, like MXF, MOV or MP4. The recording method may also include a small metadata file – a data file containing information about the essence. When people talk about P2, that terminology should really be reserved for the actual card and the Panasonic product family. P2 devices can record audio and video essence in various formats and with different codecs, yet it’s all still on the same P2 media card.
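To keep the terminology straight, it can help to model the three elements separately. In this illustrative Python sketch (the clip values are hypothetical, not taken from any real card), the same DVCPRO HD essence appears in two different wrappers:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One file-based recording: essence raster, essence codec, file wrapper."""
    raster: str   # size / scanning method / frame rate of the essence
    codec: str    # essence codec
    wrapper: str  # file wrapper (container)

# Hypothetical examples: identical DVCPRO HD essence, two different wrappers.
p2_clip  = Clip("1280x720p/59.94", "DVCPRO HD", "MXF")  # as recorded on a P2 card
mov_clip = Clip("1280x720p/59.94", "DVCPRO HD", "MOV")  # after rewrapping for QuickTime

# The codec matches, so no transcode is needed -- only the wrapper differs.
assert p2_clip.codec == mov_clip.codec
assert p2_clip.wrapper != mov_clip.wrapper
```

This is why “P2” names the card and product family rather than a format: the wrapper and codec are separate attributes that can vary on the same media.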


Even when things look the same, they aren’t. For example, both Sony (XDCAM-HD) and Panasonic (P2) use the MXF wrapper, but the essence inside is not the same. Panasonic P2 MXF files can be natively opened and edited in Avid software, but XDCAM-HD MXF files cannot. It doesn’t even stay the same within the same company. Sony’s XDCAM-HD uses the MPEG2 codec for video files, wrapped as MXF. When the EX-series camera was released, Sony chose to wrap its MPEG2 recordings as MP4 files. That designation suggests an MPEG4 codec, but not with the EX cameras. In the case of Panasonic, you can now record HD video as either DVCPRO HD or AVC-Intra, and both appear with MXF file extensions.


When you analyze the file structure of any of these media cards, you’ll find a specific folder and file hierarchy. Depending on the format, this structure has to stay intact. Moving video files outside of their folders often leaves an NLE unable to read or open them, so be careful how you handle these files. With that in mind, here are some workflow tips for dealing with file-based media in a tapeless world.


Tip 1 – Clone your camera cards or drives


With the exception of XDCAM and XDCAM-HD, all card and hard drive-based media recordings MUST be backed up for protection, because no one plans to leave the card on the shelf. The recommended practice is to “clone” the card, i.e. copy the card in an exact fashion to preserve the original format and codec and maintain its folder and file hierarchy. This step is often done on location using a laptop, so that cards can quickly be reformatted and used for further recordings during the same day. Card capacity has increased from 4GB to 64GB, but it’s important to realize that a large capacity card is not always the best choice. Yes, you can record all day, but that means you’re likely to spend the rest of the entire evening copying and verifying the cards. Even if you have a “data wrangler” on the crew, they will be sitting on their hands if the card is in the camera all day long.
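Part of “cloning” is preserving that folder and file hierarchy. As an illustration, here is a small Python check that flags essence or metadata files which have strayed out of their expected folders. The CONTENTS/VIDEO layout shown is typical of P2-style cards, but treat it as an assumption and verify the exact structure against your camera’s documentation:

```python
import os

# Typical P2-style hierarchy (illustrative -- confirm with your camera's docs).
EXPECTED = {
    ".mxf": ("CONTENTS/VIDEO", "CONTENTS/AUDIO"),  # essence files
    ".xml": ("CONTENTS/CLIP",),                    # per-clip metadata
}

def misplaced_files(card_root):
    """Return media/metadata files that are not in their expected subfolder.

    An NLE may refuse to import a card whose hierarchy has been disturbed,
    so an empty result is what you want before (and after) copying.
    """
    bad = []
    for folder, _dirs, files in os.walk(card_root):
        rel = os.path.relpath(folder, card_root).replace(os.sep, "/")
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in EXPECTED and rel not in EXPECTED[ext]:
                bad.append(os.path.join(rel, name))
    return bad
```

Running this against a freshly copied card (and its clones) is a cheap way to confirm nothing was dragged out of place during the offload.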


Keep your back-ups native! Some folks have imported their media into FCP or Avid systems and then formatted the cards, thinking that their NLE-compatible media was protected. This may be the case if you also back up your working media drives or your drives are RAID-protected, but the logic is faulty. Once you have imported P2 DVCPRO HD or AVC-Intra files into most NLEs, those files have been altered. Depending on the format and NLE, they have either been rewrapped or transcoded. Destroying the original camera media is tantamount to shooting on film, transferring the film to video and then destroying the negative. If you have maintained a back-up of the camera media in its native form, then you can always go back to these files, should you decide to switch to a different NLE or your working media becomes corrupt.


OK, so we agree that you should back up your files to match the cards. But how? There are lots of recipes for doing this, but I think the best all-around solution comes from Imagine Products. Their ShotPut software comes in Mac and Windows editions for P2, EX and RED. It’s designed to safely name folders and to copy and verify files to as many as three destinations. Having multiple copies is important, because no media product is infallible. People theorize about burning their media to Blu-ray data discs as an archive, but the reality is that transfer rates, burning speeds and BD-R media costs make this unattractive. Other solutions, like LTO3 data tapes and RAID-5 arrays, only appeal to a select few. The solution most producers settle on is to buy cheap commodity FireWire, USB or eSATA drives (Maxtor, LaCie, Western Digital, Hitachi, Seagate, etc.) and make at least two copies that will sit on the shelf. The hope is that at least one of these will still spin up and work a year or so down the road when you need to go back to this footage. Remember that this is in addition to the working media used during post production.
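ShotPut handles this properly; purely as an illustration of the underlying idea (copy the whole hierarchy, then verify every file by checksum, to more than one destination), here is a minimal Python sketch. It is not a substitute for dedicated offload software:

```python
import hashlib
import os
import shutil

def file_md5(path, chunk=1024 * 1024):
    """MD5 of a file, read in chunks so large media files don't exhaust RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def clone_card(card_root, destinations):
    """Copy the card's entire hierarchy to each destination, then verify
    every file against the source by checksum. Raises on any mismatch."""
    for dest in destinations:
        shutil.copytree(card_root, dest)
        for folder, _dirs, files in os.walk(card_root):
            for name in files:
                src = os.path.join(folder, name)
                rel = os.path.relpath(src, card_root)
                copy = os.path.join(dest, rel)
                if file_md5(src) != file_md5(copy):
                    raise IOError("Verification failed: %s" % copy)
```

The verify pass is the point: a copy that was never read back and compared is a hope, not a back-up.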


Tip 2 – Budget time and media costs


Capture time has been replaced by import time. When I work with videotape, I tend to select a handful of good options for each set-up or scene and digitize only those takes. As a result, I might capture about half of the tape, but this is offset by the review and logging time. Logging plus capture time takes about as long as the full running time of the tape.


With tapeless media, I bring it all in. Yes, I know, the various import modules, like FCP’s Log and Transfer, let me cull the footage down, but I just don’t like working with them. I’d rather bring it all in and sort it out in the NLE, which brings us to the point about time and money. Starting a P2 or EX session, for example, generally means mounting a cheap USB or FireWire drive and importing all the clips. Unfortunately, you are working with one of the slower transfer rates available on computers. The average (good) copy takes about an hour for every 100GB of data. A typical DVCPRO HD shoot recorded on P2 media might be a few hours of footage delivered on a 200GB USB drive. The import is faster than real time (compared to the running time of the footage), so about seven hours of 720p DVCPRO HD (at 29.97pN) media might take about 2-4 hours to copy, based on your machine and drives. This is in addition to the original back-up time from the cards, of course. It’s slower with AVC-Intra, because some NLEs (such as FCP) have to transcode this codec during the import. On my MacBook Pro, the transcode to ProRes in FCP’s Log and Transfer module was a little slower than real time.
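The rule of thumb above is simple enough to sketch. The 100GB-per-hour rate is a rough average for FireWire/USB drives of the era, so treat the results as planning estimates, not promises:

```python
def copy_hours(size_gb, rate_gb_per_hour=100.0):
    """Rough copy-time estimate: media size divided by sustained transfer
    rate. 100 GB/hour is a typical real-world average for a 'good' copy
    over FireWire or USB 2.0, per the rule of thumb above."""
    return size_gb / rate_gb_per_hour

# A 200 GB USB drive of P2 footage at the rule-of-thumb rate:
print(copy_hours(200))        # 2.0 hours -- the low end of the 2-4 hour range
# The same drive on a slower bus sustaining only 50 GB/hour:
print(copy_hours(200, 50.0))  # 4.0 hours -- the high end
```

Budgeting with the pessimistic rate is the safer habit, since real drives rarely sustain their advertised speeds.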


RED footage makes time an even bigger issue. Most editors have been unhappy working with RED’s QuickTime reference files on substantial projects, like feature films. That’s because the QT reference files have to stay linked to the R3D camera raw files and are essentially “windows” that look into the 4K data and extract lower-resolution media on-the-fly. If you want to edit smoothly, then it’s important to transcode the raw files into something easier on your NLE, like DV25, DVCPRO HD, DNxHD or ProRes. In other words, edit using a standard offline/online approach to RED. Exporting transcoded R3D files with a general purpose computer is pretty tedious. Budget between a 3-to-1 and as much as a 20-to-1 ratio to go from RED One’s raw files to your NLE and be ready to start cutting.


Like any other tapeless media, RED camera files also need to be backed up. REDcode is a variable bit rate codec based on wavelet compression. On average, the files (4096×2048, 2:1 aspect, 23.98fps) consume about 1.5GB for every minute of footage – or about 90GB per hour. An indie feature might shoot around 30 hours of footage, which puts that close to 3TB of required storage, just for the camera raw files. Double that if you rely on redundancy for extra safety. By comparison, 1080p/23.98 DVCPRO HD would use only about half of that; the same goes for ProRes, and about two-thirds for ProRes HQ.
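Those numbers work out as follows. The 1.5GB-per-minute figure is an average for a variable bit rate codec, so this is a planning estimate, not a guarantee:

```python
REDCODE_GB_PER_MIN = 1.5  # average for 4096x2048, 2:1 aspect, 23.98fps REDcode

def red_storage_tb(hours_of_footage, copies=1):
    """Approximate storage for REDcode camera raw files, in terabytes
    (using 1 TB = 1000 GB), times the number of redundant copies."""
    gb = hours_of_footage * 60 * REDCODE_GB_PER_MIN
    return gb * copies / 1000.0

print(red_storage_tb(1))      # 0.09 TB -- i.e. about 90 GB per hour
print(red_storage_tb(30))     # 2.7 TB for a 30-hour indie feature shoot
print(red_storage_tb(30, 2))  # 5.4 TB with a redundant second copy
```

Run the same arithmetic with your own shooting ratio before the production starts, so the drive budget is a line item rather than a surprise.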


Tip 3 – Organizing files in your NLE


The hardest thing to get used to with file-based media is the cryptic naming conventions used by the cameras. When you import these files, you typically get long alphanumeric file names and not “Scene 1 / Take 1” or “Wide shot of person sitting on the bench”. Some NLEs will let you safely change the file or clip names. Others won’t. Avid has always let you do this, but it has traditionally been a no-no with Final Cut. Recent versions of FCP have made that safer with some formats, but I really urge you to resist the temptation. Remember that at some point you might need to relink media files or restore from the backed-up camera files. You are only going to be able to do this when the file name matches. Changing the name from “0014EF” to “Scene 7 / Take 3” might be fine and safe in an ideal world, but if all else fails and you have to resort to some type of manual search, keeping this name relationship the same will save your butt.


I recommend using one of the other bin description or comments columns as a place to assign a useful name. Both Avid Media Composer and Apple Final Cut Pro include numerous descriptor columns, so feel free to use these for custom names. You can also easily search and sort these, giving you the best of both worlds.


The other organizing factor is reel ID. Since there are no tape reels in the tapeless world, NLEs vary in their approach. Software like that from Imagine Products will let you rename cards. This is a wise approach. All too often, I have been handed a drive containing the contents from several cloned P2 cards. A volume for each card will mount on the desktop (on a Mac), labeled “No Name 1”, “No Name 2” and so on. What do you think is going to be on the next day’s drive? Same thing! So I urge you to properly name the cards in a consistent manner, using either film style (camera rolls) or video style (tape numbers) labeling. This may or may not be important for your NLE, but it is imperative if you have to locate shots on these drives in the future.
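If your offload software won’t name the cards for you, even a trivial script beats a shelf of “No Name” volumes. The scheme below (camera letter plus roll number plus shoot date) is just one hypothetical convention; the point is to pick one, film style or video style, and apply it consistently:

```python
def card_label(camera, roll, shoot_date):
    """Build a consistent, sortable card/volume name, film-roll style.
    E.g. camera 'A', roll 7, shot 2008-06-14 -> 'A007_20080614'."""
    return "%s%03d_%s" % (camera.upper(), roll, shoot_date.replace("-", ""))

print(card_label("a", 7, "2008-06-14"))   # A007_20080614
print(card_label("B", 12, "2008-06-15"))  # B012_20080615
```

Zero-padding the roll number keeps the volumes sorting correctly once you pass roll nine, which is exactly when a pile of drives starts getting confusing.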


Tip 4 – Cataloguing your footage


You have been shooting with your RED One or HVX-200 for a few months and have started to accumulate a bunch of small FireWire drives holding the footage from each project. That’s easy to do, because the drives are so cheap that you buy a new one for each shoot. Just charge it off as part of the production budget, like tape stock. That’s all well and good, but now these are starting to pile up just like the camera tapes you used to have in the library. What’s the next step?


The simple and obvious step is to physically label the drives – just like your tapes. No wait – better than you used to label the tapes! Before you get buried in a pile of portable hard drives, start a cataloguing system. There is plenty of software to choose from, and it can be as simple or elaborate as you need. The main criterion is that the process be quick and easy when you want to know what’s on each drive or where to look for something shot during a given production. Choices include Apple Final Cut Server, Imagine Products, Bento, FileMaker Pro, CatDV or just an Excel spreadsheet. Whatever it is, start doing it yesterday!
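A minimal sketch of the spreadsheet approach: walk a drive and append one row per file to a CSV that Excel (or anything else) can search and sort. The column choices here are mine, not any standard:

```python
import csv
import os
import time

def catalog_drive(drive_path, drive_label, csv_path):
    """Append one row per file on the drive to a CSV catalog:
    drive label, relative path, size in MB, and modification date."""
    new_file = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["drive", "path", "size_mb", "modified"])
        for folder, _dirs, files in os.walk(drive_path):
            for name in files:
                full = os.path.join(folder, name)
                rel = os.path.relpath(full, drive_path)
                size_mb = os.path.getsize(full) / (1024.0 * 1024.0)
                mtime = time.strftime(
                    "%Y-%m-%d", time.localtime(os.path.getmtime(full)))
                writer.writerow([drive_label, rel, "%.1f" % size_mb, mtime])
```

Run it once per drive as each one comes off the shelf, and a year later “which drive has that shot?” becomes a text search instead of an afternoon of mounting volumes.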


Tip 5 – Mastering


I know, I know – it’s a tapeless world. The truth is, I still feel very comfortable having my finished production on a piece of tape. Most of my clients still own some VTRs. If you have to revise a project a year down the road, it’s often easier to ingest a videotape master and make revisions than to reload the entire original project from data back-ups.


My favorite mastering procedure is to generate four outputs of my edited sequence. These include a final videotape master of the edited program that is mixed, color-corrected and includes all titles and graphics. In addition, I will output a videotape submaster that is “superless” (no titles) with the audio in “stems” (separated dialogue, effects and music). Such a submaster makes any of the common revisions very easy.


That’s two of the four. Next, I’ll also export self-contained media files (such as QuickTime movies) in these same configurations – final master and superless submaster. This level of simple and easy protection neatly fits into the budget of most producers. For example, an hour-long, 1080i, 8-bit uncompressed QuickTime file with stereo audio requires about 400GB of drive space. Dumping a master file onto a FireWire drive is still more expensive than an hour-long HDCAM tape, but you can work with the media, even if you don’t actually own or have access to the tape deck.
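That figure is easy to sanity-check, assuming 8-bit 4:2:2 uncompressed video (2 bytes per pixel) at 1920×1080 and 29.97 frames per second:

```python
def uncompressed_gb_per_hour(width=1920, height=1080, bytes_per_pixel=2,
                             fps=29.97):
    """Approximate data rate of uncompressed video in decimal GB per hour.
    8-bit 4:2:2 stores 2 bytes per pixel; audio adds comparatively little."""
    bytes_per_sec = width * height * bytes_per_pixel * fps
    return bytes_per_sec * 3600 / 1e9

print(round(uncompressed_gb_per_hour()))  # about 447 (decimal GB)
```

The decimal result is roughly 447GB; measured in binary gigabytes it lands near the 400GB cited above, so the article’s figure is in the right ballpark either way.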


Careful planning, organization and a policy for data management and protection will help you survive and thrive in the transition from tape to files.


© 2008 Oliver Peters

Impressions of Las Vegas – NAB 2008

If you’ve been casually following the NAB news, you most likely think the biggest story is the lack of participation by Avid and Apple. It’s true that neither had a booth, but both were there at customer and reseller events, including Avid’s roll-out of the new DX product line. If that’s your takeaway, then you might surmise that NAB was a rather lackluster event for post. Dig a bit deeper and you’ll find that NLEs have reached a certain level of maturity, and it’s hard to keep rolling out new features. In fact, camera manufacturers have been driving the show with the latest and greatest file-based formats. The editing system manufacturers have had their hands full simply adding support for each new camera record option. Whether or not your favorite NLE supports P2, XDCAM-HD, REDcode and so on will impact far more users than whether Avid improves color correction or Apple improves media management.


If you’re looking for true edit system innovation, then that news came out of Quantel. Not only are they adding significant features, but they’ve wholly embraced the tools to edit and color grade the left and right eye views of stereoscopic imagery. We’ll see if that proves to be a good business model, but right now in the wake of quite a few 3D movies in the theaters, Quantel is betting that the market is there for more than a select few. Autodesk likewise had its own news with the continued unification of the user interfaces between Smoke and Flame. The products each still have a distinct and unique role to play, but Autodesk is integrating across both product groups such common modules as the timeline and batch (Flame’s process tree for effects).


As far as Avid’s DX line is concerned, the main news so far is new hardware connected via the PCIe bus and new pricing. This ties in with improved GPU and CPU power, as well as Leopard and Vista support and even optimization. In total, this will result in more streams of true real-time horsepower. It also means that Avid has to update the system while staying with the familiar GUI that its user base likes. It might be different under the hood, but on the surface it looks and feels the same. Many will applaud this, but it won’t sway the critics and certainly won’t bring back those who’ve left for other NLEs, like Final Cut Pro.


If you’re looking for trends, however, it’s become pretty obvious – if you didn’t know already – that the industry is moving away from videotape and toward a myriad of file-based solutions. When Panasonic originally jumped in with P2, Sony made no bones about knocking its competitor’s approach. The funny thing is that Sony has now wholeheartedly embraced the concept with its EX1 and EX3 cameras, sporting their own style of solid state storage, the SxS cards. Users are still riding the learning curve, as many don’t yet understand the differences between containers (P2 cards, XDCAM-HD discs, SxS cards), file wrappers (MXF, OMF, QuickTime, AVI, MPEG4) and codecs (DVCPRO HD, AVC-Intra, MPEG2). Eventually it will all get sorted out, but what’s worth noting is that the only new videotape-based VTR introduced at NAB 2008 was an HDCAM-SR player from Sony. Meanwhile, Sony and Panasonic both released quite a few VTR “replacement” products that use each manufacturer’s card scheme. Panasonic is growing a product ecosystem around P2, and Sony is likewise growing one around the SxS cards.


Many experienced video pros look at this in horror, fearing that a few years down the road it will be hard to mount the hard drives to which this media has been copied after the shoot. I appreciate the sentiment, since you can still readily find decks to play Betacam-SP and even U-matic tapes that are now over two decades old. That isn’t universally true, however. In my market, you’d be hard-pressed to find decks to play such once-popular formats as D1, D2, D3 or D5. There are only a handful of one-inch Type C VTRs in the market, and their reliability is questionable. So the truth of the matter is that you probably aren’t any safer with content on tape than on hard drive, assuming you establish a viable approach to archiving the media. Generally this takes the form of redundant copies on multiple hard drives or, at best, data tapes, such as the LTO3 format.


With this as a trend, quite a few NAB vendors were showing solutions for lower cost and simpler shared storage as well as asset management software. Some products to look into include Apple’s Final Cut Server, Laird Telemedia’s LairdShareHD, Focus Enhancements’ ProxSys, Gridiron Software’s Flow and Tiger Technologies’ MetaSAN and MetaLAN. In addition, the average cost of local storage is getting cheaper than ever; so, those editors working with P2 or similar technologies will have no problem just dumping all the media at full resolution to their local drives straight from the shoot and cutting happily away.


It’s hard to talk about NAB and not mention RED Digital Camera. Yes, they announced two new cameras (Scarlet and Epic), but more important is the fact that the post support structure is growing around them. Even if RED is ultimately not super-successful (unlikely), they will have changed the way many work with images. I believe the camera raw workflow is bound to be adopted by others in the future. Today, Apple and Assimilate are the only official RED partners – the only companies with access to the .R3D files. Avid is also able to provide some editorial support through XML list conversions. In the RED booth, a beta version of FCP’s Log and Transfer module was shown that imports and transcodes .R3D files. FCP editors can natively import raw files, transcoding them to another codec, like Apple ProRes 422, on the way in. There was also a technology preview of .R3D files being graded directly in Apple Color, through the addition of a RED-oriented RED Room tab within Color’s interface.


Assimilate introduced its RED-specific SCRATCH CINE, the only full-featured finishing product geared strictly for a RED workflow. But the story doesn’t stop there. Quite a few companies are chomping at the bit to release their own products for RED. At the moment, they are held back by RED Digital Camera’s agreements with its original partners. These are expected to expire soon, with RED releasing an SDK for its REDcode codec. Once that’s done, expect to see companies like Cineform and IRIDAS quickly jump into the game. In fact, these companies already have raw workflow products that are ready for RED, which were developed using existing (but not final) versions of the codec. So just as in the digital still photo world, camera raw will be a concept to which videographers will need to become accustomed.


Look for more of my NAB 2008 post production analysis in the June print edition of Videography magazine and also online at DV magazine.

© 2008 Oliver Peters

Staying Green In Post


A lot of emphasis is being placed on saving the environment and operating in a “greener” workplace. That may be easy to see in a production on location, where waste is easy to identify, but how is that applied to post facilities and editing boutiques? Let me outline some simple steps to help you do your part.


I’m not exactly sure when it became the norm for everyone to have their own personal bottle of water, but pallets of bottled water have taken over the fridge at most post houses. If you’ve listened to the news for even the most fleeting moment, you’re aware that our landfills are filling with these plastic bottles in spite of recycling efforts. You can make your contribution by going back to other sources of water for yourself and your clients. After all, the source for what’s in those bottles is generally the same as what’s coming from your tap anyway. You can handle this with something as simple as a water supply service stocking a cooler with the same stuff in much larger, recycled containers. Or how about enhancing your customer service and actually bringing your clients a tray of glasses and a pitcher of ice water in the session? While we’re at it, the same logic can be applied to cans of soda.


Through my decades in the business, common wisdom said that equipment should stay on 24/7 and that more gear dies from being powered up than from staying on constantly. I’m here to tell you that, at least with today’s technology, this is total bunk. When you’re done for the day or the week, shut the power off! I’ll admit that I have had some gear break when it was first turned on, but these cases have been rare and none in the last ten years. In fact, most of the shops in which I freelance routinely power down decks, computers and drives at the end of the day. None have had any issues. Hard drives are the only item I tend to see left on, but I would recommend turning these off as well.


Remember that many items use standby power even when the units are off. This standby power feature enables faster startups, but in some cases draws almost as much power as if the unit were still on. I would recommend that you put such gear on a power strip. You can hit one breaker switch and turn off the current feeding that unit, after using the computer’s software shut down. This has the added benefit that you are truly turning off the unit, so the next time the computer is booted, it starts clean and “flushes” out any problems that might have been held by standby power. Macs are especially susceptible to this, as “gremlins” are often held in memory in spite of shutdowns or restarts. These miraculously go away when you actually kill the power to the unit and do a reboot from a true powered down condition.
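To put some rough numbers on standby draw (the wattage, idle hours and electricity rate below are all assumptions; substitute your own):

```python
def standby_cost_per_year(watts, cents_per_kwh=12.0, hours_per_day=14):
    """Yearly dollar cost of standby draw for the hours a unit sits 'off'
    but still plugged in. Assumed: 12 cents/kWh and 14 idle hours a day."""
    kwh = watts / 1000.0 * hours_per_day * 365
    return kwh * cents_per_kwh / 100.0

# Hypothetical suite: 10 devices each pulling 8 W on standby.
print(round(standby_cost_per_year(8) * 10, 2))  # roughly $49 a year, for nothing
```

The per-unit number looks trivial, but multiplied across a room full of decks, monitors and drive enclosures, a power strip’s breaker switch pays for itself quickly.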


Let me point out that power surges and poorly conditioned power do more harm to gear than whether or not it stays on 24/7. So as a normal installation item, I would recommend that all drives and computers be connected to a large uninterruptible power supply (UPS) from a reliable manufacturer, such as APC. The more expensive models (not the cheapos from an office supply store) also apply some power conditioning to the signal. Believe it or not, I have seen the absence of a UPS cause file loss and/or corruption on a SAN array – purely a result of the missing power conditioning.


Air Conditioning


Another holdover from the old days is air conditioning. Tape rooms used to be set to about 60 or 65 degrees – and suites close to it – so it was a common sight to see editors and clients in sweaters and even heavy jackets during a session on a hot summer day. The logic was that heat kills gear and so if the ambient temperature was about 65 degrees, then it was hotter inside the equipment racks and probably close to 100 degrees on the circuit boards themselves. Again, technology has advanced since the 1950s. In a recent Google study, their engineers analyzed the failure rates of hard drives at Google data centers. In this study they found that there was no strong correlation between heat and drive failure. The researchers are careful to point out this doesn’t mean that there isn’t one, but that heat is only one of the factors in drive failure rates.


Ultimately all drives fail, so you have to balance the energy costs against the hardware replacement costs and decide whether a 10-degree difference in temperature is worth the possibility of gaining an extra year or so of life from your hard drives. Most of the smaller boutiques in which I work haven’t had the luxury of designing large, cold machine rooms that mimic a Google data center. Instead, racks are installed in standard office or remodeled home environments. Since equipment and people share the same spaces, I find that the thermostats are typically set in the low-to-mid 70-degree range. Lo and behold, the gear is just fine, and anecdotally I don’t see any higher failure rates than when I worked in the frozen tape rooms of the past.


Heat is one factor, but an even bigger factor is how clean your gear stays. Most computers and drives that employ fans use front-fed, flow-through ventilation: air is sucked in the front and pushed out the back. Most of the rooms where you find this gear could hardly be considered a “clean room” environment. Even the cleanest environment has dirt and dust, especially if there’s carpet. Take a look at the fans or open up your computer occasionally and you’ll be appalled at the amount of dust trapped inside. This dust prevents proper cooling, so if heat is a factor, then this dust is greatly reducing the efficiency of your air conditioning. The best solution is to establish a monthly maintenance routine in which computers are opened and vacuumed out. Drives are removed and either vacuumed or blown out with compressed air. Obviously, the latter should be done outside so that you aren’t simply blowing the dust back into the same environment from which it came.


File-Based Media


Many people are discussing the idea that video technology is cleaner than film technology and that, ultimately, file-based digital productions (P2, XDCAM, RED, S.two, etc.) are environmentally better. I haven’t done any sort of analysis on this, and quite frankly, environmental arguments often don’t hold up once you look at the total net effect of the alternative. For example, yes, manufacturing film stock and processing negative is a very dirty technology; however, there’s not much 35mm film production being done worldwide anymore outside of the motion picture industry. On the other hand, digital storage for still photographers and videographers is mushrooming – so I don’t think you can definitively say yet whether manufacturing all the solid state storage, hard drives and data back-up tapes to enable this digital revolution is actually cleaner than what it has replaced. After all, manufacturing digital media is not without its own environmental impact.


That is, of course, primarily a production question, which means the decision has been made before it gets to the editing suite. On the other hand, there are a lot of things editors and post facilities have historically done to protect assets in post and these practices should be revisited in light of cost and the environment. For instance, if you produce a set of shows, it’s common to output various formats (master, textless, 4×3, 16×9, letterboxed, etc.) to individual tapes. This is an item that is consumed for each piece of programming and even if you get the right length of videotape to match that program, the cost of cassette shells, cases and mechanisms is the same whether it’s a 5 minute or 60 minute program. Hard drives are cheap these days. It makes more sense to archive this content in a data format. You can get many more programs on a single hard drive or even data back-up tape than if videotapes are used. In the future, as massive online storage becomes the norm, courtesy of folks like Google, it might be feasible and in fact preferable, to archive your assets in the Internet cloud and not on-site as a physical piece of media.


Review And Approval


Edit sessions used to involve working with a client who sat in on the session and then walked out with review dubs (3/4”, Beta-SP, VHS, etc.) for their bosses or clients. As our business changed, more of this work has become long-distance, and I find it to be the exception when a client spends the entire time in the session. At first, this meant making review dubs (VHS or DVD) and shipping them across the country via Federal Express or another carrier, or across town using a courier service. Hence, costs for materials – which eventually get tossed into the trash – as well as transportation. Again, the Internet is your answer. Many editors routinely turn to services like YouSendIt, SyncVue or Xprove to send review files to their clients. Internet services have become fast enough, and compression quality good enough, that it takes next to no time to upload 320×240-sized review videos at a sufficient quality level to get client feedback and approval. On most of my projects, voice-over recording sessions, music library searches and client review and approval cycles are handled entirely via the Internet. No material or transportation costs are involved, so all in all, a much more environmentally friendly process.


Even if you don’t believe in many of the environmental or energy arguments offered, it still makes perfect sense to come up with a plan to incorporate these suggestions. If nothing else, they will go a long way towards reducing your business’ operating costs and might just be beneficial for the rest of us, too.


©2008 Oliver Peters