Blackmagic Design UltraScope


Blackmagic Design’s UltraScope gained a lot of buzz at NAB 2009. In a time when fewer facilities are spending precious budget dollars on high-end video and technical monitors, the UltraScope fits the bill as a high-quality, low-cost waveform monitor and vectorscope. It doesn’t answer every need, but if you are interested in replacing that trusty NTSC Tektronix, Leader or Videotek scope with something that’s both cost-effective and designed for HD, then the UltraScope may be right for you.

The Blackmagic Design UltraScope is an outgrowth of the company’s development of the DeckLink cards. Purchasing UltraScope provides you with two components – a PCIe SDI/HD-SDI input card and the UltraScope software. Installed into a qualified Windows PC with a high-resolution monitor, they form a multi-pattern monitoring system. The PC specs are pretty loose. Blackmagic Design has listed a number of qualified systems on its website, but like most companies’ lists, these represent products that have been tested and are known to work – not all the possible options that will, in fact, work. Stick to the list and you are safe. Pick other options and your mileage may vary.

Configuring your system

The idea behind UltraScope is to deliver high-quality HD and SD monitoring without the cost of top-of-the-line dedicated hardware or rasterizing scopes. The key ingredients are a PC with a PCIe bus and an appropriate graphics card. The PC should have an Intel Core 2 Duo 2.5GHz processor (or better) and run Windows XP or Vista. Both 32-bit and 64-bit versions of Windows are supported, but check Blackmagic Design’s tech specs page for exact details. According to Blackmagic Design, the graphics card has to support the OpenGL 2.1 (or better) standard. A fellow editor configured his system with an off-the-shelf card from a computer retailer for about $100. In his case, a Diamond-branded card using the ATI 4650 chipset worked just fine.

You need the right monitor for the best experience. Initial marketing information specified 24” monitors, but in fact, the requirement is support for a 1920×1200 screen resolution. My friend is using an older 23” Apple Cinema Display. HP also makes some monitors with that resolution in the 22” range for under $300. If you are prepared to do a little “DIY” experimentation and don’t mind returning a product to the store if it doesn’t work, then you can certainly get UltraScope to work on a PC that isn’t on Blackmagic Design’s list. Putting together such a system should cost under $2,000, including the UltraScope and monitor, which is well under the price of the lowest-cost competitor.

Once you have a PC with UltraScope installed, the rest is pretty simple. The UltraScope software is simply another Windows application, so it can run on a workstation that is shared for other tasks. UltraScope becomes the dominant application when you launch it, however. Its interface hides everything else and can’t be minimized, so you are either running UltraScope or not. As such, I’d recommend using a PC that isn’t intended for essential editing tasks, if you plan to use UltraScope full-time.

Connect your input cable to the PCIe card and whatever is being sent will be displayed in the interface. The UltraScope input card can handle coax and fiber optic SDI at up to 3Gb/s and each connection offers a loop-through. Most, but not all, NTSC, PAL and HD formats and frame rates are supported. For instance, 1080p/23.98 is supported but 720p/23.98 is not. The input is auto-sensing, so as you change project settings or output formats on your NLE, the UltraScope adjusts accordingly. No operator interaction is required.

The UltraScope display is divided into six panes that display parade, waveform, vectorscope, histogram, audio and picture. The audio pane supports up to 8 embedded SDI channels and shows both volume and phase. The picture pane displays a color image and VITC timecode. There’s very little to it beyond that. You can’t change the displays or rearrange them. You also cannot zoom, magnify or calibrate the scope readouts in any way. If you need to measure horizontal or vertical blanking or where captioning is located within the vertical interval, then this product isn’t for you. The main function of the UltraScope is to display levels for quality control monitoring and color correction and it does that quite well. Video levels that run out of bounds are indicated with a red color, so video peaks that exceed 100 change from white to red as they cross over.
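
The out-of-bounds flagging is conceptually simple. Blackmagic Design doesn’t publish its internals, but the idea can be sketched in a few lines of Python. This is purely illustrative – the 0–100 IRE scale and thresholds are my assumptions, not UltraScope’s actual code:

```python
# Illustrative sketch only -- not Blackmagic Design's implementation.
# Flag luma samples that fall outside an assumed legal 0-100 IRE range,
# the way a scope paints out-of-range peaks red.

LEGAL_MIN_IRE = 0     # assumed legal floor (black)
LEGAL_MAX_IRE = 100   # assumed legal ceiling (peak white)

def flag_illegal_levels(luma_samples):
    """Return indices of samples outside the legal range."""
    return [i for i, ire in enumerate(luma_samples)
            if ire < LEGAL_MIN_IRE or ire > LEGAL_MAX_IRE]

scan_line = [12.5, 45.0, 98.7, 104.2, 100.0, -2.0]  # luma in IRE units
print(flag_illegal_levels(scan_line))               # -> [3, 5]
```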

Is it right for you?

The UltraScope is going to be more useful to some than others. For instance, if you run Apple Final Cut Studio, then the built-in software scopes in Final Cut Pro or Color will show you the same information and, in general use, seem about as accurate. The advantage of UltraScope for such users is the ability to check levels at the output of any hardware I/O card or VTR, not just within the editing software. If you are an Avid editor, you only have access to built-in scopes in the color correction mode, so UltraScope is of greater benefit.

My colleague’s system is an Avid Media Composer equipped with Mojo DX. By adding UltraScope he now has full-time monitoring of video waveforms, which is something the Media Composer doesn’t provide. The real-time updating of the display is very fast, with no apparent lag. I did notice that the confidence video in the picture pane dropped a few frames at times, but the scopes appeared to keep up. I’m not sure, but it seems that Blackmagic Design has given the scopes preference over the image display in the software, which is a good thing. The only problem we encountered was audio. When the Mojo DX was supposed to be outputting eight discrete audio channels, only four showed up on the UltraScope meters. As we didn’t have an 8-channel VTR to test this, I’m not sure whether this was an Avid or a Blackmagic Design issue.

Since the input card takes any SDI signal, it also makes perfect sense to use the Blackmagic Design UltraScope as a central monitor. You could assign the input to the card from a router or patch bay and use it in a central machine room. Another option is to locate the computer centrally, but use Cat5-DVI extenders to place a monitor in several different edit bays. This way, at any given time, one room could use the UltraScope, without necessarily installing a complete system into each room.

Future-proofed through software

It’s important to remember that this is a 1.0 product. Because UltraScope is software-based, features that aren’t available today can easily be added. Blackmagic Design has already been doing that for years with its other products. For instance, scaling and calibration aren’t there today, but if enough customers request them, they might show up in the next release as a simple downloadable update.

Blackmagic Design UltraScope is a great product for the editor who misses having a dedicated set of scopes, but who doesn’t want to break the bank. Unlike hardware units, a software product like UltraScope makes it easier than ever to update features and improve the existing product over time. Even if you have built-in scopes within your NLE, this is going to be the only way to make sure your I/O card is really outputting the right levels, plus it gives you an ideal way to check the signal on your VTR without tying up other systems. And besides… what’s cooler for impressing a client than having another monitor whose display looks like you are landing 747s at LAX?

©2009 Oliver Peters

Written for NewBay Media LLC and DV magazine

What’s wrong with this picture?


“May you live in interesting times” is said to be an ancient Chinese curse. That certainly describes modern times, nowhere more so than in the video world. We are at the intersection of numerous transitions: analog to digital broadcast; SD to HD; CRTs to LCD and plasma displays; and tape-based to file-based acquisition and delivery. Where the industry had the chance to make a clean break with the past, it often chose solutions that protected legacy formats and infrastructure, leaving us with the bewildering options we know today.


Broadcasters settled on two standards: 720p and 1080i. These are both full-raster, square-pixel formats: 1280x720p/59.94 (60 progressive frames per second in NTSC countries) – commonly known as “60P” – and 1920x1080i/59.94 (60 interlaced fields per second in NTSC countries) – commonly known as “60i”. The industry has wrestled with interlacing since before the birth of NTSC.


Interlaced scan


Interlaced displays show a frame as two sequential sets of alternating odd and even-numbered scan lines. Each set is called a field and occurs at 1/60th of a second, so two fields make a single full-resolution frame. Since the fields are displaced in time, a frame with fast horizontal motion will appear to have serrated edges or horizontal lines. That’s because the odd-numbered scan lines show action that occurred 1/60th of a second apart from the adjacent, even-numbered scan lines. If you routinely move interlaced content between software apps, you have to be careful to maintain proper field dominance (whether edits start on field 1 or field 2 of a frame) and field order (whether a frame is displayed starting with odd or even-numbered scan lines).
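
To make the field mechanics concrete, here’s a minimal Python sketch of “weaving” two fields into one frame. It’s a simplification – short strings stand in for scan lines – but it shows why swapping the field order reverses the temporal sequence:

```python
# Minimal sketch: weaving two interlaced fields into one frame.
# Strings stand in for scan lines; real video would be rows of pixels.

def weave(field_1, field_2, field_1_first=True):
    """Interleave two fields (lists of scan lines) into a full frame."""
    first, second = (field_1, field_2) if field_1_first else (field_2, field_1)
    frame = []
    for a, b in zip(first, second):
        frame.extend([a, b])
    return frame

# Two fields captured 1/60th of a second apart:
field_1 = ["line0 @ t=0",    "line2 @ t=0",    "line4 @ t=0"]
field_2 = ["line1 @ t=1/60", "line3 @ t=1/60", "line5 @ t=1/60"]

print(weave(field_1, field_2))
# Wrong field order puts the later field first -- motion judders:
print(weave(field_1, field_2, field_1_first=False))
```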


Progressive scan


A progressive format, like 720p, displays a complete, full-resolution frame for each of 60 frames per second. All scan lines show action that was captured at the exact same instant in time. When you combine the spatial with the temporal resolution, the amount of data that passes in front of a viewer’s eyes in one second is essentially the same for 1080i (about 62 million pixels) as for 720p (about 55 million pixels).
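
That claim is easy to sanity-check: pixels per second is just width × height × complete images per second. A quick back-of-the-envelope calculation:

```python
# Pixels per second = width x height x complete frames per second.
pixels_1080i = 1920 * 1080 * (59.94 / 2)   # 29.97 full frames/sec
pixels_720p  = 1280 * 720 * 59.94          # 59.94 full frames/sec

print(f"1080i: {pixels_1080i / 1e6:.1f} million pixels/sec")  # ~62.1
print(f"720p:  {pixels_720p / 1e6:.1f} million pixels/sec")   # ~55.2
```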


Progressive is ultimately a better format solution from the point-of-view of conversions and graphics. Progressive media scales more easily from SD to HD without the risk of introducing interlace errors that can’t be corrected later. Graphic and VFX artists also have a better time with progressive media and won’t have issues with proper field order, as is so often the case when working with NTSC or even 1080i. The benefits of progressive media apply regardless of the format size or frame rate, so 1080p/23.98 offers the same advantages.


Outside of the boundary lines


Modern cameras, display systems and NLEs have allowed us to shed a number of boundaries from the past. Thanks to Sony and Laser Pacific, we’ve added 1920x1080psf/23.98. That’s a “progressive segmented frame” running at the video-friendly rate of 23.98 for 24fps media. PsF is really interlacing, except that at the camera end, both fields are captured at the same point in time. PsF allows the format to be “superimposed” onto an otherwise interlaced infrastructure with less impact on post and manufacturing costs.


Tapeless cameras have added more wrinkles. A Panasonic VariCam records to tape at 59.94fps (60P), even though you are shooting with the camera set to 23.98fps (24P). This is often called 24-over-60. New tapeless Panasonic P2 camcorders aren’t bound by VTR mechanisms and can record a file to the P2 recording media at any “native” frame rate. To conserve data space on the P2 card, simply record at the frame rate you need, like 23.98pn (progressive, native) or 29.97pn. No need for any redundant frames (added 3:2 pulldown) to round 24fps out to 60fps as with the VariCam.
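
The space savings of native recording are easy to picture. Here’s a simplified Python sketch of 2:3 pulldown contrasted with a native recording – counted in whole images for clarity, although real pulldown works in fields:

```python
# Simplified sketch: 2:3 pulldown repeats 24p frames to fill out a
# 60-image stream, while native (24pn) recording stores each frame once.
# Counted in whole images for clarity; real pulldown works in fields.

def pulldown_2_3(film_frames):
    stream, repeats = [], [2, 3]
    for i, frame in enumerate(film_frames):
        stream.extend([frame] * repeats[i % 2])
    return stream

shot = ["A", "B", "C", "D"]   # four 24p frames
print(pulldown_2_3(shot))     # 10 images recorded for 4 unique frames
print(shot)                   # native: just the 4 frames -- no redundancy
```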


I’d be remiss if I didn’t address raster size. At the top, I mentioned full-raster, square-pixel formats, but the actual video content recorded in a file often cheats this by changing the image size and pixel aspect ratio as a way of reducing the data rate. This varies with the codec. For example, DVCPRO HD records at a true size of 960×720 pixels, but displays as 1280×720 pixels. The proper display sizes of such files (as compared with their actual stored sizes) are controlled by the NLE software or a media player application, like QuickTime.
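
The relationship between the stored and displayed raster is simple scaling by the pixel aspect ratio (PAR). Using the DVCPRO HD numbers above:

```python
# Stored raster vs. display raster: the player scales the picture
# horizontally by the pixel aspect ratio (PAR).

stored_width, height = 960, 720   # what's actually in the file (DVCPRO HD 720p)
display_width = 1280              # what the viewer sees

par = display_width / stored_width
print(f"PAR = {par:.3f}")   # 1.333 (anamorphic, non-square pixels)
print(f"{stored_width}x{height} displays as {round(stored_width * par)}x{height}")
```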


Mixing it up


Editors routinely have to deal with a mix of frame rates, image sizes and aspect ratios, but ultimately this all has to go to tape or distribution through the funnel of the two accepted HD broadcast formats (720p/59.94 and 1080i/59.94), plus good old-fashioned NTSC and/or PAL. For instance, if you work on a TV or film project being mastered at 1920x1080p/23.98, you need to realize several things. Few displays support native 23.98 (24P) frame rates, so you will ultimately have to generate not only a 23.98p master videotape or file, but also “broadcast” or “air” masters. Think of your 23.98p master as a “digital internegative”, which will be used to generate 1080i, 720p, NTSC, PAL, 16×9 squeezed, 4×3 center-cut and letterboxed variations.


Unfortunately your NLE won’t totally get you there. I recently finished some spots in 1080p/23.98 on an FCP system with a KONA2 card. If you think the hardware can convert to 1080i output, guess again! Changing FCP’s Video Playback setting to 1080i really tells the FCP RT engine to do the conversion in software, not in hardware. The ONLY conversions done by the KONA hardware are those available in the primary and secondary format options of the AJA Control Panel. In this case, only the NTSC downconversion gets the benefit of hardware-controlled pulldown insertion.


OK, so let FCP do it. The trouble with that idea is that, yes, FCP can mix frame rates and convert them, but it does a poor job of it. Instead of the correct 2:3:2:3 cadence, FCP uses the faster-to-calculate 2:2:2:4. The result is an image that looks like frames are being dropped, because every fourth frame is held twice as long, resulting in a noticeable visual stutter. In my case, the solution was to use Apple Compressor to create the 1080i and 720p versions and to use the KONA2’s hardware downconversion for the NTSC Beta-SP dubs. Adobe After Effects also functions as a good software conversion tool.
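
To see why the 2:2:2:4 shortcut stutters, it helps to expand both cadences side by side. A small sketch, counting fields per frame as the cadence numbers above describe:

```python
# Expanding both cadences, counted in fields. Both spread four 24p
# frames across ten fields, but 2:2:2:4 holds every fourth frame twice
# as long -- the stutter described above.

def expand(frames, cadence):
    out = []
    for frame, count in zip(frames, cadence):
        out.extend([frame] * count)
    return out

frames = ["A", "B", "C", "D"]
print(expand(frames, [2, 3, 2, 3]))  # A A B B B C C D D D  (smooth)
print(expand(frames, [2, 2, 2, 4]))  # A A B B C C D D D D  (stutters on D)
```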


Another variation to this dilemma is the 720pn/29.97 (aka 30PN) of the P2 cameras. This is an easily edited format in FCP, but it deviates from the true 720p/59.94 standard. Edit in FCP with a 29.97p timeline, but when you change the Video Playback setting to 59.94, FCP converts the video on-the-fly to send a 60P video stream to the hardware. FCP is adding 2:2 pulldown (doubling each frame) to make the signal compliant. Depending on the horsepower of your workstation, you may, in fact, lower the image resolution by doing this. If you are doing this for HD output, it might actually be better to convert or render the 29.97p timeline to a new 59.94p sequence prior to output, in order to maintain proper resolution.


Converting to NTSC


But what about downconversion? Most of the HD decks and I/O cards you buy have built-in downconversion, right? You would think they do a good job, but when images are really critical, they don’t cut it. Dedicated conversion products, like the Teranex Mini, do a far better job in both directions. I delivered a documentary to HBO and one of the items flagged by their QC department was the quality of the credits in the downconverted (letterboxed) Digital Betacam back-up master. I had used rolling end credits on the HD master, so I figured that changing the credits to static cards and bumping up the font size a bit would improve things considerably. I compared the converted quality of these new static HD credits through FCP internally, through the KONA hardware and through the Sony HDW-500 deck. None of these looked as crisp and clean as simply creating new SD credits for the Digital Betacam master. Downconverted video and even lower-third graphics all looked fine on the SD master – just not the final credits.


The trouble with flat panels


This would be enough of a mess without display issues. Consumers are buying LCDs and plasmas; CRTs are effectively dead. Yet CRTs are the only devices that properly display interlacing – especially if you are troubleshooting errors. Flat panels all go through conversions and interpolation to display interlaced video in a progressive fashion. Going back to the original 720p versus 1080i options, I really have to wonder whether the rapid technology change in display devices was properly forecast. If you shoot 1080p/23.98, this often gets converted to a 1080i/59.94 broadcast master (with added 3:2 pulldown) and is transmitted to your set as a 1080i signal. The set then converts the signal. That’s the best-case scenario.


Far more often, the production company, network and local affiliate haven’t adopted the same HD standard. As a result, there may be several 720p-to-1080i and/or 1080i-to-720p conversions along the way. To further complicate things, many older consumer sets are native 720p panels and scale a 1080 image. Many include circuitry to remove 3:2 pulldown and convert 24fps programs back to progressive images. This is usually called the “film” mode setting. It generally doesn’t work well with mixed-cadence shows or rolling/crawling video titles over film content.


The newest sets are 1080p, which is a totally bogus marketing feature. These panels are designed for video game playback, not TV signals, which are simply frame-doubled for display. All of this mish-mash – plus the heavy digital compression used in transmission – makes me marvel at how bad a lot of HD signals look in retail stores. I recently saw a clip from NBC’s Heroes on a large 1080p set at a local Sam’s Club. It was far more pleasing on my 20” Samsung CRT at home, received over analog cable, than on the big 1080p digital panel.


Progress (?) marches on…


We can’t turn back time, of course, but my feeling about displays is that a 29.97p (30P) signal is the “sweet spot” for most LCD and plasma panels. In fact, 720p on most of today’s consumer panels looks about the same as 1080i or 1080p. When I look at 23.98 (24P) content as 29.97 (24p-over-60i), it looks proper to my eyes on a CRT, but a bit funky on an LCD display. On the other hand, 29.97 (30P) strobes a bit on a CRT, but appears very smooth on a flat panel. Panasonic’s 720p/59.94 looks like regular video on a CRT, but 720p recorded as 30p-over-60p looks more film-like. Yet both signals actually look very similar on a flat panel. This is likely due to the refresh rates and image latency of an LCD or plasma panel as compared to a CRT. True 24P is also fine if your target is the web, where a file can be displayed at true 24fps without pulldown. Remember, though, that as video, many flat panels cannot display 23.98 or 24fps frame rates without pulldown being added.


Unfortunately there is no single best solution. If your target distribution is the web or primarily flat panel display devices (including projectors), I highly recommend working strictly in a progressive format and a progressive timeline setting. If interlacing is involved, then make sure to deinterlace those clips or even the entire timeline before your final delivery. Reserve interlaced media and timelines for productions that are intended predominantly for broadcast TV using a 480i (NTSC) or 1080i transmission.


By now you’re probably echoing the common question, “When are we going to get ONE standard?” My answer is that there ARE standards – MANY of them. This won’t get better, so you can only prepare yourself with more knowledge. Learn what works for your system and your customers and then focus on those solutions – and yes – the necessary workarounds, too!


Does your head hurt yet?


© 2009 Oliver Peters

Avid ScriptSync – Automating Script Based Editing

Script continuity is the basis of organizing any dramatic television production or feature film. The script supervisor’s so-called lined script provides editors with a schematic of the coverage available for each scene in the script and is the basis for the concept of script based editing. As a scene is filmed, the supervisor writes the scene and take number at the dialogue line on the script page where the shot starts and then draws a vertical line down through the page, stopping at the point where the director calls “cut”. As the director films various takes for master shots, close-ups and pick-ups, each one is indicated on that page with a scene/take number and a corresponding vertical line.


Script based editing for nonlinear systems has its origins in Cinedco’s Ediflex. To prepare dailies, assistant editors used a process called Script Mimic. They would draw numbered horizontal lines across the script at every sentence or paragraph of dialogue. Once dailies were available, the assistant would enter timecodes that corresponded to this script breakdown for each scene and take. Ediflex used a unique lightpen-driven interface and a screen layout similar to an edit decision list. Clicking on the on-screen intersection of a vertical (scene/take) and horizontal (dialogue line) entry let the editor instantly zero in on the exact line of dialogue from any given take loaded by the assistant.
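
Conceptually, the lined script becomes a two-dimensional lookup – scene/take on one axis, dialogue line on the other. A toy Python model makes the idea concrete; the names, data and structure here are hypothetical, not Ediflex’s or Avid’s actual internals:

```python
# Toy model of the lined-script lookup: each take maps its covered
# dialogue lines to the timecode where that line begins. Hypothetical
# data and names -- not Ediflex's or Avid's actual internals.

takes = {
    "12A-1": {3: "01:02:10:05", 4: "01:02:18:22", 5: "01:02:31:10"},
    "12A-2": {3: "01:05:02:00", 4: "01:05:09:14", 5: "01:05:20:08"},
}

def cue_up(take, dialogue_line):
    """Return the timecode for a dialogue line in a take, if covered."""
    return takes.get(take, {}).get(dialogue_line, "not covered")

print(cue_up("12A-2", 4))   # jump straight to dialogue line 4 of take 2
print(cue_up("12A-1", 9))   # this take's coverage never reached line 9
```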


After the demise of Cinedco, the intellectual property of Ediflex’s Script Mimic ended up in the hands of Avid Technology. This formed the basis of Avid’s own Script Integration feature, first introduced in 1998 as a function within the Media Composer and Film Composer product family. The script based toolset has continued to be developed ever since and is available in both Avid Media Composer and Avid Xpress Pro software. Since this is a patented technology, Avid is the only nonlinear editing company to offer this feature, and no competitor has anything even remotely close.


Script Based Editing Becomes Faster Than Ever


To date, Avid script based editing has generally stayed in the domain of episodic television shows and feature films. These are productions that budget the time and money for assistant editors, who in turn take over the responsibility of getting dailies ready for the editor so he or she can take advantage of these tools. Until recently this has been a time-consuming process. A year ago, Avid released ScriptSync as part of the Avid Media Composer 2.7 software (not included with Avid Xpress Pro). ScriptSync uses voice recognition technology licensed from Nexidia to automate the match of a media clip with the text of the script.


Here’s a quick overview. To use script based editing you first have to import the script. This has to be an ASCII text file with the document formatting maintained. Most film and TV writers use Final Draft to write their scripts and this application already has an “export for Avid” function. Inside the Media Composer interface, open the script bin and corresponding clip bin. Highlight a section of the script with the dialogue for those clips and then drag-and-drop one or several clips onto the highlighted section of the script. Now the script bin is updated to display the same vertical lines drawn through the text as you would see in a script supervisor’s lined script. In addition, if there are portions of the dialogue that are off-camera for a character, the software lets you highlight those dialogue lines and add the same sort of squiggly notation for that sentence or paragraph as you’d see in the supervisor’s hand-written notations.


Now the true magic happens. Once you’ve established the link between the text and the media, highlight the clips and select ScriptSync from the pulldown menu. At this point voice recognition analysis kicks in. According to Avid’s explanation, phonetic characters are generated for the text in the script and these are matched to the waveforms of the audio tracks. There are various preference settings that can be adjusted, which will affect the results. For example, first pick one of the nine languages that are recognized so far. You can select from audio tracks A1, A2 or both, in the case where different speakers are separated onto different channels. Lastly, there are settings to skip or ignore certain text conditions, like capital letters, which might be used for character names or scene descriptions in the script.


Several clips can be analyzed simultaneously at a rate far faster than real-time. Once this is completed, each vertical line descending from a media clip will have a series of nodes at each line of dialogue. By simply clicking on one of these points, the editor has instant access to that exact line of dialogue in any one of the applicable clips. A process that used to take hours has literally been reduced to minutes and is probably one of the greatest productivity gains of any new NLE feature to come along in years.


Avid ScriptSync In The Real World


Avid’s script based editing is a tool that many experienced editors have never used, but it’s also one that other editors simply can’t live without. I had a chance to explore this with Brian Schnuckel and Zene Baker, two film and television editors who rely on it for their projects. Schnuckel has most recently been editing Just Jordan, a Nickelodeon sitcom now in its second season. In the first season, this was a single-camera show and the assistant editor handled script preparation manually. Season two is a multi-camera show shot in two days. One of the biggest challenges for script based editing is ad libs or dialogue changes.


According to Brian, “When there are relatively simple changes, like a few words that are different, it’s not too bad and ScriptSync is smart enough to skip over these and catch up to the right point in the dialogue. However, it’s tougher when whole lines of dialogue are different. Then my assistant has to sync these areas by hand again.” The Avid software does permit you to cut, copy and paste changes directly in the script bin, but you can only work with lines or paragraphs, not individual words. Since you cannot undo these changes, Avid recommends making such changes in a word processor and then pasting the new text into the script bin.
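
The “skip over and catch up” behavior Schnuckel describes is characteristic of sequence alignment in general. As a toy illustration – using plain text alignment with Python’s difflib, emphatically not Nexidia’s phonetic engine – small insertions or substitutions still leave plenty of matching runs to re-sync on:

```python
# Toy illustration of re-syncing around dialogue changes, using plain
# text alignment (difflib) -- NOT Nexidia's phonetic matching engine.
import difflib

scripted = "i never said you took the money".split()
spoken   = "i never said that you stole the money".split()

matcher = difflib.SequenceMatcher(a=scripted, b=spoken)
for block in matcher.get_matching_blocks():
    if block.size:
        print("re-synced on:", " ".join(scripted[block.a:block.a + block.size]))
# Matching runs ("i never said", "you", "the money") anchor the sync;
# wholly rewritten lines leave fewer anchors -- the hard case above.
```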


Schnuckel continued, “Restarts are the biggest problem. Avid is working on ways to tell the software to ignore certain areas, but for the time being, these issues have to be fixed by hand. Depending on the production, these fixes offset the gains offered by ScriptSync’s automation, so you might not end up saving as much time as you’d hoped.” Surprisingly, ScriptSync doesn’t have much of a problem sifting through less-than-pristine audio. Editors even report that there’s little or no issue with actors who speak English with a heavy foreign accent. In spite of a few issues, Schnuckel reports, “I’ve really come to rely on this feature and would have to change my whole workflow if I were editing with another system.”


Zene Baker is currently cutting a low-budget indie feature with the working title of She Lived. He reports his preference for Avid Media Composer over Apple Final Cut Pro, because “there’s less to worry about and it’s easier to be your own assistant.” Baker is cutting She Lived on what he describes as a “poor man’s Unity”: two Media Composer systems connected via Ethernet, each working with a set of duplicate media files. Baker explained his experience with script based editing. “I was familiar with the old manual way prior to ScriptSync and found it to be very time-consuming, but I tried it on a few short projects and liked it. I have the luxury of an assistant editor on She Lived, as well as receiving digital dailies on hard drives. This frees up some of the more tedious operations an assistant would normally be busy with and allows her to prepare the material more thoroughly with Avid’s script based editing and ScriptSync. The weakest area is still with restarts and script page changes. Feature films that are shot rigidly according to the script work the best and comedy, with its ad libs, is still the toughest.”


Working Smarter


Both editors pinpointed the same software weaknesses, such as restarts, but Baker suggested that first subclipping the takes that had restarts was a good workaround. Another tip he offered was to create numerous bins with a smaller number of scenes in each bin. “It just gets to be too much data for the system to handle efficiently if you try to work in a single master script bin with all the clips tied to the film script. Instead, import the script into several bins and then just work with one to five scenes in each bin.”


Although the software continues to evolve, both Baker and Schnuckel pointed out one key advantage. As Brian put it, “It makes you look smarter! The session just goes more smoothly when you’re in the room with the director and every take is at your fingertips.” Zene added, “If you don’t have an assistant, you’d really have to weigh the advantages against the schedule. You spend more time on the front end, but you really make it up on the back end. It’s a real time-saver when people are in the room. I just love the feature of highlighting a section of dialogue and quickly being able to see and hear every bit of coverage for that line.”


Generally, script based editing pops up among scripted TV drama and film editors, but it’s a great tool for other productions, too. For example, documentary and reality television productions typically transcribe all of the spoken raw footage, such as interviews. This type of footage is a natural for script based editing and Avid’s ScriptSync. Just think – with one click you can find any word or sentence within a lengthy interview and, better yet, the script bin works with Media Composer’s internal Find and Find Next commands. Avid’s script based editing and ScriptSync form a clear advantage over the competition, so if you cut on Avid NLEs and have never tried it, you’re only the next project away from making this a key part of your workflow.


Written by Oliver Peters for Videography magazine (NewBay Media, LLC)

Posting The Closer


Powerful and well-crafted original television dramas are no longer limited to the “big three” networks or HBO. Viewer hits can be found all over the dial. One such success story is TNT’s The Closer, which has clocked in as ad-supported cable’s Number One series of all time. Golden Globe-winner Kyra Sedgwick returns for season three as the offbeat investigator and interrogator, Deputy Police Chief Brenda Leigh Johnson. The Closer comes to TNT from The Shephard/Robin Company in association with Warner Bros. Television. It is executive-produced by Greer Shephard, Michael M. Robin and James Duff. The various partners in this team have helped bring to the small screen such provocative shows as The D.A., nip/tuck, The Agency and NYPD Blue.


The new season is off and running, and the editors are already in the midst of the sixth show in a fifteen-show run. The Closer is one of a handful of high-profile shows cut on Apple’s Final Cut Pro editing software. Instead of renting Avid systems, which is the normal Hollywood business model, Shephard/Robin opted to purchase their own Final Cut editing systems. Six workstations are used by the show’s three editors (Eli Nilsen, Mike Smith and Butch Wertman) and their three assistants. Nilsen typifies today’s modern editor. She graduated from AFI just in time to bypass physical film cutting (other than in school) and has only ever edited professionally on nonlinear systems. Working up through the ranks as an assistant, she eventually got her break on NYPD Blue, where she was promoted from assistant to editor. Along the way, she earned credits on such diverse productions as Roger Corman’s TV series The Black Scorpion and South Park – The Movie. She even cut two feature films in her native Norway.


During a break in post, Nilsen was able to discuss her experiences on The Closer. She described the typical post schedule to me, “Each hour-long episode films in about seven or eight days. It’s shot on 35mm – often with two cameras. The show is transferred to HD and the editors receive DVCAM copies of the dailies. My editor’s cut is due about four or five days after production is wrapped. Then I get about the same with the director and another one to two weeks to finish off the producer, studio and network notes. In total, it takes about four weeks to edit The Closer, which is about the same amount of time I had on NYPD Blue.” Unlike other shows, the six NLEs on The Closer are not connected to a shared storage network, such as an Avid Unity system. Eli explained, “Each editor and assistant has their own separate workstation. Since we aren’t really sharing footage between the editors, local storage works just fine. Each editor is working on a different show with unique footage. When the dailies come in, my assistant [Susan Demskey-Horiuchi] captures them to her local drives and then ‘sneakernets’ those to me. Common elements, like music and sound effects cues, are cloned onto a set of duplicate drives for each cutting room.”


Editing Challenges That Make the Show


Like most TV shows, The Closer has its own share of special editing challenges. I asked Eli to elaborate. “Our show uses an ensemble cast, so sometimes there are six or seven actors talking in a scene. There’s overlapping dialogue, so as the editor you have to get the right dynamic between the characters and still be able to get the story across. It’s a sound editing challenge, but you also want to make sure that you maintain the right pace as you go between the different takes and angles. In addition, this is a handheld show, which is often filmed with two cameras. When they shoot coverage on the set or on location, the camera doesn’t hold a static shot of the main character, because of the handheld nature of this show. It would be too boring if the camera stayed locked down on one character. Since the camera is moving, I try to use that to my advantage to keep the editing fluid – using the camera movement to motivate the cut.”


And did this style and amount of coverage add to the workload? “The Closer uses eight or nine different directors throughout the season. They each have their own style, but, of course, try to stay consistent with the look of the show. Like any television series, some directors roll more film than others, so there are days where I’ll have over three hours of dailies. Those are only the transferred circle takes, which amount to about a third to a half of the total negative that’s exposed. So on those days, it obviously takes a lot longer just to review the takes than on days when we only have an hour of dailies. One of the FCP features I put to use for this show is multi-camera editing. I used that in Avid a lot, but I think Apple has even improved it a bit in their implementation.”


Making The Move To Final Cut Pro


Nilsen was joined in the interview by Sheelin Choksey, the show’s Co-Producer who is primarily responsible for post. I asked Choksey to explain how the decision was made to use Final Cut on The Closer. “The Shephard/Robin Company had used Final Cut Pro with great success on nip/tuck. Michael Robin, one of our three executive producers, is very post-savvy and he’s really responsible for convincing the studio that it was okay. We went through the growing pains with Final Cut on nip/tuck and had a lot of direct contact with Apple. They were very responsive, so we really love FCP. I really feel Final Cut does a better job with designing and cutting sound. As for picture, it is far superior, since we don’t work with low resolution video, which one often does on the Avid.  There, rough cuts generally look and sound very ‘temp’, but with Final Cut Pro, producers and executives often share the opinion that the rough cut is almost good enough to air.”


Nilsen explained in greater detail how she handles audio, “I always build up my cut with a full mix of sound effects and music – even the editor’s cut. Jimmy Levine, our composer, has built up a library of cues from the past two seasons that we can also use to create a temporary score. We communicate with him early, so often I’ll send him a working version of a scene and he’ll start scoring to that. Susan [assistant editor] is often building up sound effects on a show for me, using her workstation, while I continue cutting on mine. The location production mixer usually mixes the overlapping dialogue to one track, but if I need to isolate certain dialogue lines, I can get the split tracks if I need them. I find that Final Cut makes it really easy to dial in the mix. Of course, this mix is just for screening, so when I’m done the sequence is sent to our mixer as an OMF file.” Technicolor Sound Services loads these into Pro Tools and then the sound department edits the final sound effects and rebuilds the dialogue tracks from the original recordings.


A Unique Approach To Post


One unique aspect to the Shephard/Robin approach is that mixing has been brought in-house. Choksey expanded on this, “The mixer works for Technicolor, but he’s assigned to this show and works out of our offices here at Raleigh Studios. We set up a small mixing room using Pro Tools and a Pro Console. The mixer takes about two days to predub the show and then another two days with the producers, for the final mix. We’ll take the final mix on hard drives over to Technicolor for the layback to the master. Instead of mixing in a large, film-style dubbing stage, Michael Robin and Michael Weiss, the show’s Producer, wanted the mix to be a ‘near field experience’. The Closer is a TV show, so it isn’t being seen in a theater, but in living rooms. We want to be able to hear it in an environment similar to that of our viewers.” As an editor, Nilsen really liked this approach, “I love the fact that the mix happens next door. I can sit in on the mix and it gives me a chance to make sure nothing was missed.”


Right now, the final high-definition online editing is done at Encore Hollywood, complete with a daVinci tape-to-tape color grading pass. In fact, the HD-D5 tapes are conformed and mastered in a linear suite using edit decision lists (EDLs) from Final Cut Pro. Keeping an eye on the future, Sheelin let me in on some of their plans. “We are really interested in Apple’s new Final Cut Studio 2 and are considering bringing the finishing and color grading in-house, too. It hasn’t been decided yet, but that might be something we test on the last couple of episodes in this season.”


Nilsen is an editor who’s made the transition from Avid to Final Cut Pro, so I asked her for some personal impressions. “First of all, it’s great because Final Cut is so affordable,” she said. “This gives you a lot of freedom. The Closer has six systems that each cost only about $10,000, which, in total, is less than Avid rental would typically cost for a season. More important to me is that I can afford to own a system at home. I’m a young mother, so sometimes I’ll take the drives home and work on an episode there. This gives me a chance to spend more time with my kids and that’s very liberating. In addition, the assistants have greater access to the project and can cut some scenes on their own, giving them a way to hone their own skills.”


The Shephard/Robin Company currently is in development on a new series for Warner Bros. called State of Mind. Like nip/tuck and The Closer, State of Mind will also be edited on Final Cut Pro. Shephard/Robin hopes to follow the same winning formula, by keeping as many of the resources as possible under one roof.


Written by Oliver Peters for Videography magazine (NewBay Media, LLC)