HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a standard for reliability and performance, exemplified by its Z-series of workstation towers. HP has sought to extend what it calls the “Z experience” to other designs, such as mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second-generation model of the Z1 series.

Most readers will associate the all-in-one concept with the Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit that cannot be upgraded by the user (except for RAM) and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure a Z1 G2 from a wide range of components, and the display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5 and three Xeon models. There are a variety of storage and graphics card choices, and the system supports up to 32GB of RAM. You may also choose between a Touch and a non-Touch display. The Touch option adds a glass overlay and supports finger or stylus interaction with the screen. Non-Touch screens have a matte finish, while Touch screens are glossy. You also have a choice of operating systems, including Windows 7, Windows 8 and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a quad-core Xeon E3 (3.6GHz), 16GB of RAM, an optical drive and the NVIDIA K4100M graphics card. For storage, I selected one 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs that were set up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promo discount.

An important new feature is support for Thunderbolt 2 via an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that module, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single 57-pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as folded completely flat for shipping and for access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to unlock the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counterbalance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom center of the back of the chassis; others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to operate in a standing or high-chair position looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display completely down to the desk surface, which put the bottom of the screen lower than my stationary 20” Apple Cinema Displays.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable for me, since I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as a mouse or a pen and tablet – even when using an iPad stylus. In Photoshop – an application that seems like a natural fit for Touch – only menu and navigation operations worked, not the drawing tools. While the Touch option wasn’t that interesting to me, I did like the screen that comes with it. It’s glossy, which gives your images nice density, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system functioned well, and its “flat” design philosophy is much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set up your preferences. If you prefer to pin application shortcuts to the Windows taskbar or the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, launching and operating any application was very fast. Naturally the difference from a cold start on the Z1 G2, as compared to my 2009 Mac Pro with standard 7200RPM drives, was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor display, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer-level nomenclature for settings. For instance, I found no way to accurately set a 6500K color temperature or a 2.2 gamma level, based on how the sliders were labeled. Some of the NVIDIA software controls didn’t appear to work at all.

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make some direct comparisons with the same applications and media available on my 2009 eight-core Mac Pro. Its configuration included dual quad-core Xeon processors (2.26GHz), 28GB of RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly an “apples-to-apples” match, the pairing provides a logical benchmark for the type of machine a new Z1 G2 customer might be upgrading from.

In typical side-by-side testing with edited, single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 with a 7-layer, 4K timeline. The V1 track was a full-screen, base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task in just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration, and the superior performance of the NVIDIA K4100M card in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved the layers in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes, respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D Warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped at medium display quality, but at full quality, frames started to drop at V6. When I added a drop shadow to the PIP clips, frames were dropped starting at V4 for full quality and V9 for medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. Like any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, it’s very quiet and the heat output feels less than from my Mac tower. In these various tests, I never heard any fans kick into high. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The Hobbit

Peter Jackson’s The Hobbit: An Unexpected Journey was one of the most anticipated films of 2012. It broke new technological boundaries and presented many creative challenges to its editor. After working as a television editor, Jabez Olssen started his own odyssey with Jackson in 2000 as an assistant editor and operator on The Lord of the Rings trilogy. After assisting again on King Kong, he next cut Jackson’s The Lovely Bones – the first feature film on which he was the sole editor. The director tapped Olssen again for The Hobbit trilogy, where, unlike on the Rings trilogy, he is the sole editor on all three films.

Much like the Rings films, all production for the three Hobbit films was shot in a single eighteen-month stretch. Jackson employed as many as 60 RED Digital Cinema EPIC cameras rigged for stereoscopic acquisition at 48fps – double the standard rate of traditional feature photography. Olssen was editing the first film in parallel with the principal photography phase. He had a very tight schedule that allowed only about five months after the production wrapped to lock the cut and get the film ready for release.

To get The Hobbit out on such an aggressive schedule, Olssen leaned hard on a post production infrastructure built around Avid’s technology, including 13 Media Composers (10 with Nitris DX hardware) and an ISIS 7000 with 128TB of storage. Peter Jackson’s production facilities are located in Wellington, New Zealand, where active fibre channel connections tie Stone Street Studio, Weta Digital, Park Road Post Production and the cutting rooms to the Avid ISIS storage. The three films combined total 2,200 hours (1,100 hours x two eyes) of footage – the equivalent of 24 million feet of film. In addition, an Apace active backup solution with 72TB of storage was installed, which could take over immediately if the ISIS failed.

The editorial team – headed up by first assistant editor Dan Best – consisted of eight assistant editors, including three visual effects editors. According to Olssen, “We mimicked a pipeline similar to a film project. Think of the RED camera .r3d media files as a digital negative. Peter’s facility, Park Road Post Production, functioned as the digital lab. They took the RED media from the set and generated one-light, color-corrected dailies for the editors. 24fps 2D DNxHD36 files were created by dropping every second frame from the files of one ‘eye’ of a stereo recording. We used 24fps timecode, with the 48fps frame pairs distinguished by a period instead of a colon. For example, frame A would be 11.22.21.13 and frame B would be 11:22:21:13. This was a very natural solution for editing and a lot like working with single-field media files on interlaced television projects. The DNxHD files were then delivered to the assistant editors, who synced, subclipped and organized clips into the Avid projects. Since we were all on ISIS shared storage, once they were done, I could access the bins and the footage was ready to edit, even if I were on set. For me, working with RED files was no different than a standard film production.”
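
As an aside for the technically curious, that dual-cadence timecode scheme is easy to model. Here’s a purely illustrative Python sketch (my own construction, not the production’s actual tooling), assuming non-drop-frame 24fps counting:

    def label_48fps_frame(frame48, fps=24):
        # Label 48fps stereo material with 24fps-style timecode: the A
        # frame of each pair uses periods, the B frame uses colons.
        frame24, is_b = divmod(frame48, 2)
        f = frame24 % fps
        s = (frame24 // fps) % 60
        m = (frame24 // (fps * 60)) % 60
        h = frame24 // (fps * 3600)
        sep = ":" if is_b else "."
        return sep.join("%02d" % v for v in (h, m, s, f))

Feeding in consecutive 48fps frame numbers yields pairs like 11.22.21.13 followed by 11:22:21:13 – exactly the A/B pattern Olssen describes.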

df_hobbit_2Olssen continued, “A big change for the team since the Rings movies is that the Avid systems have become more portable. Plus the fibre channel connection to ISIS allows us to run much longer distances. This enabled me to have a mobile cart on the set with a portable Media Composer system connected to the ISIS storage in the main editing building. In addition, we also had a camper van outfitted as a more comfortable mobile editing room with its own Media Composer; we called it the EMC – ‘Editorial Mobile Command’. So, I could cut on set while Peter was shooting, using the cart and, as needed, use the EMC for some quick screening of edits during a break in production. I was also on location around New Zealand for three months and during that time I cut on a laptop with mirrored media on external drives.”

The main editing room was set up with a full-blown Nitris DX system connected to a 103” plasma screen for Jackson. The original plan was to cut in 2D and then periodically consolidate scenes to conform a stereo version for screening in the Media Composer suite. Instead they took a different approach. Olssen explained, “We didn’t have enough storage to have all three films’ worth of footage loaded as stereo media, but Peter was comfortable cutting the film in 2D. This was equally important, since more theaters displayed this version of the film. Every few weeks, Park Road Post Production would conform a 48fps stereo version so we could screen the cut. They used an SGO Mistika system for the DI, because it could handle the frame rate and had very good stereo adjustment tools. Although you often have to tweak the cuts after you see the film in a stereo screening, I found we had to do far less of that than I’d expected. We were cognizant of stereo-related concerns during editing. It also helped that we could judge a cut straight from the Avid on the 103” plasma, instead of relying on a small TV screen.”

The editorial team was working with what amounted to 24fps high-definition proxy files for the stereo 48fps RED .r3d camera masters. Edit decision lists were shared with Weta Digital and Park Road Post Production for visual effects, conform and digital intermediate color correction/finishing at 2K resolution. Based on these EDLs, each unit would retrieve the specific footage needed from the camera masters, which had been archived onto LTO data tape.

The Hobbit trilogy is a heavy visual effects production, which had Olssen tapping into the Media Composer toolkit. Olssen said, “We started with a lot of low resolution, pre-visualization animations as placeholders for the effects shots. As the real effects started coming in, we would replace the pre-vis footage with the correct effects shots. With the Gollum scenes we were lucky enough to have Andy Serkis in the actual live action footage from set, so it was easy to visualize how the scene would look. But other CG characters, like Azog, were captured separately on a performance capture stage. That meant we had to layer separately-shot material into a single shot. We were cutting vertically in the timeline, as well as horizontally. In the early stages, many of the scenes were a patchwork of live action and pre-vis, so I used PIP effects to overlay elements to determine the scene timing. Naturally, I had to do a lot of temp green-screen composites. The dwarves are full-size actors and for many of the scenes, we had to scale them down and reposition them in the shot so we could see how the shots were coming together.”

As with most feature film editors, Jabez Olssen likes to fill out his cut with temporary sound effects and music, so that in-progress screenings feel like a complete film. He continued, “We were lucky to use some of Howard Shore’s music from the Rings films for character themes that tie The Hobbit back into The Lord of the Rings. He wrote some nice ‘Hobbity’ music for those. We couldn’t use too much of it, though, because it was so familiar to us! The sound department at Park Road Post Production uses Avid Pro Tools systems. They also have a Media Composer connected to the same ISIS storage, which enabled the sound editors to screen the cut there. From it, they generated QuickTime files for picture reference and audio files so the sound editors could work locally on their own Pro Tools workstations.”

Audiences are looking forward to the next two films in the series, which means the adventure continues for Jabez Olssen. On such a long-term production many editors would be reluctant to update software, but not this time. Olssen concluded, “I actually like to upgrade, because I look forward to the new features. Although, I usually wait a few weeks until everyone knows it’s safe. We ended up on version 6.0 at the end of the first film and are on 6.5 now. Other nonlinear editing software packages are more designed for one-man bands, but Media Composer is really the only software that works for a huge visual effects film. You can’t underestimate how valuable it is to have all of the assistant editors be able to open the same projects and bins. The stability and reliability is the best. It means that we can deliver challenging films like The Hobbit trilogy on a tight post production schedule and know the system won’t let us down.”

Originally written for Avid Technology, Inc.

©2013 Oliver Peters

Post Production Mastering Tips

The last step in commercial music production is mastering. Typically this involves making a recording sound as good as it possibly can through the application of equalization and multiband compression. In the case of LPs and CDs (remember those?), this also includes setting up the flow from one tune to the next and balancing out levels so the entire product has a consistent sound. Video post has a similar phase, which has historically been in the hands of the finishing or online editor.

That sounds so sweet

The most direct comparison between the last video finishing steps and commercial music mastering is how filters are applied in order to properly compress the audio track and to bring video levels within legal broadcast specs. When I edit projects in Apple Final Cut Pro 7 and do my own mixes, I frequently use Soundtrack Pro as the place to polish the audio. My STP mixing strategy employs tracks that route into one or more subgroup buses and then a master output bus. Four to eight tracks of content in FCP might become twenty tracks in STP. Voice-over, sync-sound, SFX and music elements get spread over more tracks and routed to appropriate subgroups. These subgroups then flow into the master bus. This gives me the flexibility to apply specific filters to a track and have fine control over the audio.

I’ll usually apply a compressor across the master bus to tame any peaks and beef up the mix. My settings involve a low compression ratio and a hard limit at -10dB. The objective is to keep the mix levels reasonable so as to preserve dynamic range. I don’t want to slam the meters and drive the signal hard into compression. Even when I do the complete mix in Final Cut, I will still use Soundtrack Pro simply to compress the composite mix, because I prefer its filters. When you set the reference tone to -20dB, then these levels will match the nominal levels for most digital VTRs. If you are laying off to an analog format, such as Betacam-SP, set your reference tone to -12dB and match the input on the deck to 0VU.
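
For those who like to see the math, here’s a rough sketch of that dynamics strategy as a static waveshaper (my illustration only – it ignores attack and release and is not Soundtrack Pro’s actual DSP; the -20dB compression threshold is my assumption, while the -10dB ceiling matches the settings above):

    import numpy as np

    def master_bus_dynamics(x, thresh_db=-20.0, ratio=2.0, ceiling_db=-10.0):
        # x: float samples in -1..1; gentle 2:1 compression above the
        # threshold, then a hard limit at the ceiling (-10 dBFS)
        level_db = 20 * np.log10(np.abs(x) + 1e-12)
        over = level_db > thresh_db
        comp_db = np.where(over, thresh_db + (level_db - thresh_db) / ratio, level_db)
        out_db = np.minimum(comp_db, ceiling_db)
        return np.sign(x) * 10 ** (out_db / 20)

With these numbers, a 0dBFS peak lands exactly at the -10dB ceiling, while anything below the compression threshold passes through untouched – which is the point: tame the peaks, preserve the dynamic range.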

Getting ready for broadcast

The video equivalent is the broadcast safe limiting filter. Most NLEs have one, including Avid Media Composer and both old and new versions of Final Cut. This should normally be the last filter in the chain of effects. It’s often best to apply it to a self-contained file in FCP 7, a higher track in Media Composer or a compound clip in FCP X. Broadcast specs will vary with the network or station receiving your files or tapes, so check first. It’s worth noting that many popular effects, like glow dissolves, violate these parameters. You want the maximum luminance levels (white peaks) to be limited to 100 IRE and chrominance to not exceed 110, 115 or 120, depending on the specs of the broadcaster to whom you are delivering. In short, the chroma should stay within the outer ring of a vectorscope. I usually turn off any RGB limiting to avoid artifacts.

It’s often a good idea to reduce the overall video levels by about five percent prior to the application of a broadcast safe filter, simply so you don’t clip too harshly. That’s the same principle I’ve applied to the audio mix. For example, I will often first apply a color correction filter to slightly lower the luminance level and reduce chroma. In addition, I’ll frequently use a desaturate-highlights or desaturate-lows filter. As you raise midrange or highlight levels and crush shadows during color correction, the chroma is also driven higher and/or lower accordingly. Reds, blues and yellows are most susceptible, so it’s a good idea to tone down chroma saturation above 90 IRE and below 20 IRE. Most of these filters let you feather the transition range and the percentage of desaturation, so play with the settings to get the most subtle result. This keeps the overall image vibrant, but still legal.
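
Here’s a minimal sketch of what such a legalizing chain does, assuming full-size (4:4:4) 8-bit Y'CbCr planes as NumPy arrays – the specific constants are illustrative, not any NLE’s actual filter code:

    import numpy as np

    def legalize(y, cb, cr, max_sat=112.0):
        # clamp luma to the legal 16-235 range
        yf = np.clip(y.astype(float), 16, 235)
        # pull overall chroma excursions back toward neutral (128)
        sat = np.hypot(cb.astype(float) - 128, cr.astype(float) - 128)
        scale = np.where(sat > max_sat, max_sat / np.maximum(sat, 1e-6), 1.0)
        # feather extra desaturation above ~90 IRE (213) and below ~20 IRE (60)
        feather = np.clip((yf - 60) / 16, 0, 1) * np.clip((213 - yf) / 16, 0, 1)
        scale *= 0.6 + 0.4 * feather   # keep at least 60% chroma at the extremes
        cb2 = 128 + (cb.astype(float) - 128) * scale
        cr2 = 128 + (cr.astype(float) - 128) * scale
        return yf.astype(np.uint8), cb2.astype(np.uint8), cr2.astype(np.uint8)

The feathered rolloff is the part worth copying: a hard chroma clip at the boundary is exactly the kind of abrupt transition these filters are meant to avoid.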

Let me interject at this point that what you pay for when using a music mastering specialist are the “ears” (and brain) of the engineer and their premium monitoring environment. This should be equally true of a video finishing environment. Without proper audio and video monitoring, it’s impossible to tell whether the adjustments being made are correct. Accurate speakers, calibrated broadcast video monitors and video scopes are essential tools. Having said that, though, software scopes and modern computer displays aren’t necessarily inaccurate. For example, the software scopes in FCP X and Apple’s ColorSync technology are quite good. Tools like the Blackmagic Design UltraScope, HP DreamColor displays or Apple Cinema Displays provide accurate monitoring in lower-cost situations. I’ve compared the FCP X Viewer on an iMac to the output displayed on a broadcast monitor fed by an AJA IoXT and found that the two match surprisingly well. Ultimately it comes down to trusting an editor who knows how to get the best out of any given system.

Navigating the formats

Editors work in a multi-standard world. I frequently cut HD spots that run as downconverted SD content for broadcast, as well as at a higher HD resolution for the internet. The best production and post “lingua franca” format today is 1080p/23.976. This format fits a sweet spot for the internet, Blu-ray, DVD and modern LCD and plasma displays. It’s also readily available in just about every camera at any price range. Even if your product is only intended to be displayed as standard definition today, it’s a good idea to future-proof it by working in HD.

If you shoot, edit and master at 1080p/23.976, then you can easily convert to NTSC, 720p/59.94 or 1080i/29.97 for broadcast. The last step for many of my projects is to create deliverables from my master file. Usually this involves creating three separate broadcast files in SD and two HD formats using either ProRes or uncompressed codecs. I will also generate an internet version (without bars, tone, countdown or slate) that’s a high-quality H.264 file in the 720p/23.976 format. Either .mov or .mp4 is fine.

Adobe After Effects is my tool of choice for these broadcast conversions, because it does high-quality scaling and adds proper cadences. I follow these steps (a rough sketch of the 2:3 pulldown cadence from step C appears after the list).

A) Export a self-contained 1080p/23.976 ProResHQ file from FCP 7 or X.

B) Place that into a 720×486, 29.97fps After Effects D1 composition and scale the source clip to size. Generally this will be letterboxed inside of the 4×3 frame.

C) Render an uncompressed QuickTime file, which is lower-field ordered with added 2:3 pulldown.

D) Re-import that into FCP 7 or X using a matching sequence setting, add the mixed track and format it with bars, tone, countdown and slate.

E) Export a final self-contained broadcast master file.

F) Repeat the process for each additional broadcast format.
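
For reference, the 2:3 cadence that step C adds can be sketched in a few lines of Python (conceptual only – After Effects handles the actual field rendering):

    def add_2_3_pulldown(frames):
        # expand each group of four 23.976p frames (A, B, C, D) into ten
        # fields - five interlaced 29.97i frames - using the 2:3 cadence:
        # AA BB BC CD DD
        fields = []
        for frame, count in zip(frames, [2, 3, 2, 3] * (len(frames) // 4)):
            fields.extend([frame] * count)
        # pair consecutive fields into interlaced frames
        return list(zip(fields[0::2], fields[1::2]))

    print(add_2_3_pulldown(list("ABCD")))
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]

Four film frames become five video frames, which is how 23.976fps material fills out the 29.97fps broadcast rate.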

Getting back there

Archiving is “The $64,000 Question” for today’s digital media shops. File-based mastering and archiving introduces dilemmas that didn’t exist with videotape. I recommend always exporting a final mixed master file along with a split-track, textless submaster. QuickTime files support multi-channel audio configurations, so building such a file with separate stereo stems for dialogue, sound effects and music is very easy in just about any NLE. Self-contained QuickTime movies with discrete audio channels can be exported from both FCP 7 and FCP X (using Roles).

Even if your NLE can’t export multi-channel master files, export the individual submixed elements as .wav or .aif audio files for future use. In addition to the audio track configuration, remove any titles and logos. By having these two files (master and submaster), it’s very simple to make most of the future revisions you might encounter without ever having to restore the original editorial project. Naturally, one question is which codec to use for access in the future. The preferred codec families these days are Avid DNxHD, Apple ProRes, uncompressed, OP1a MXF (XDCAM) or IMX. FCP editors will tend towards ProRes and Avid editors towards DNxHD, but uncompressed is very viable with the low cost of storage. For feature films, another option to consider would be image sequences, like a string of uncompressed TIFF or DPX files.

Whichever format you standardize on, make multiple copies. LTO data tape is considered the best storage medium, but for small files, like edited TV commercial masters, DVD-ROM, Blu-ray and XDCAM media are likely the most robust. This is especially true in the case of water damage.

The typical strategy for most small users who don’t want to invest in LTO drives is a three-pronged solution.

A) Store all camera footage, elements and masters on a RAID array for near-term editing access.

B) Back-up the same items onto at least two copies of raw SATA or SSD hard drives for longer storage.

C) Burn DVD-ROM or BD-ROM copies of edited master files, submasters, project files and elements (music, VO, graphics, etc.).

A properly polished production with audio and video levels that conform to standards is an essential aspect of delivering a professional product. Developing effective mastering and archiving procedures will protect the investment your clients have made in a production. Even better, a reliable archive routine will bring you repeat business, because it’s easy to return to the project in the future.

Originally written for DV magazine/Creative Planet/NewBay Media, LLC

©2012 Oliver Peters

Levels – Avid vs. FCP

One of the frequent misconceptions between Avid and Final Cut editors involves video levels. Many argue that FCP does not work within the proper video level standards, which is incorrect. This belief stems from the fact that FCP is based on QuickTime and permits a mixture of consumer and professional codecs. When used to play a file directly, QuickTime Player often presents the file differently than FCP does. QuickTime Player is trying to optimize the file to look its best on your computer monitor; however, it isn’t actually changing the file itself. Furthermore, two identical clips will appear different within each NLE’s interface. Avid clips look flatter and more washed out inside Media Composer. FCP clips will be optimized for the computer display and appear to have more contrast and a different gamma value. This is explained well by Janusz Baranek in this Avid Community thread.

Contrary to popular opinion, both NLEs work within the digital video standards for levels and color space – aka Rec. 601 (SD) and Rec. 709 (HD). Digital video levels are generally expressed using an 8-bit/256-step scale. The nominal black point is mapped to 16 and the white point to 235, which permits level excursions without clipping: 0-16 for shadow detail and 235-255 for highlight recovery. This standard was derived from both camera design and legacy analog NTSC transmission. On most waveform monitors digital 0, analog 7.5 IRE and 16 on this scale are all the same level. Digital 100 (700 millivolts on some scopes), analog 100 IRE and 235 on the scale are also equal. Higher and lower levels will be displayed on a waveform as video above 100/100IRE/235 and below 0/7.5IRE/16.
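
The mapping arithmetic itself is trivial. A quick sketch for 8-bit luma (chroma actually uses a slightly wider 16-240 range, which I’m ignoring here):

    def full_to_studio(v):
        # map a full-range ("full swing", 0-255) value into the
        # 16-235 ("studio swing") luma range of Rec. 601/709
        return round(16 + v * 219 / 255)

    def studio_to_full(v):
        # inverse mapping; values outside 16-235 fall outside 0-255 and clip
        return round((v - 16) * 255 / 219)

Full-swing 255 lands at 235 and full-swing 0 at 16, which is exactly the rescaling behavior you’ll see in the import examples below.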

I want to be clear that this post is not a right/wrong, good/bad comparison. It’s simply an exploration of how each editing application treats video levels, in an effort to help you see where adjustments can be made if you are encountering problems.

Avid Media Composer/NewsCutter/Symphony

Video captured through Avid’s i/o hardware is mapped to this 16-235 range. Video imported from the computer, like stills and animation, can have either a full range of 0-255 (so-called “full swing”) or a digital video range of 16-235 (so-called “studio swing”) values. Prior to AMA (Avid Media Access), Avid editors would determine these import values in the Media Composer settings, by selecting whether to import files with RGB values or 601/709 values. You can “cheat” the system by importing digital camera files with an expanded range (spreading the levels to a “full swing” of 0-255). Doing so may appear to offer greater color grading latitude, but it introduces two issues. First, all clips have to be color corrected to adjust the levels for proper output values (legal for broadcast). Second, some filters, like the BCC effects, clip rendered files at 16 and 235, thus defeating the original purpose.

It has now become a lot more complex in the file-based world. The files you import are no longer just stills and animation, but also camera and master files from a variety of sources, including other NLEs, like FCP – or HDSLRs, like the Canon 5D. Thanks to AMA in Media Composer 5, this is now taken care of automatically. AMA will import files at the correct levels based on the format. A digital graphic, like a 0-255 color bar test pattern, is imported at the full range without rescaling the color values from 0-255 to 16-235. A digital video movie from a Canon 5D will be imported with values fitting into the 16-235 range.

Because of the nature of how Avid handles media on the timeline, it is possible to have a full range (0-255) clip on the same timeline next to a studio range clip (16-235) and levels will be correctly scaled and preserved for each. Avid uses absolute values on its internal waveform (accessed in the color correction mode), so you are always able to see where level excursions occur above 235 (digital 100) and below 16 (digital 0).

I would offer one caveat about AMA importing. Some users have posted threads at the Avid Community Forums indicating inconsistencies in behavior. In my case, everything is working as expected on multiple systems and with various Canon HDSLR cameras, but others haven’t been so lucky. As they say, “Your mileage may vary.”

Apple Final Cut Pro

If you capture video into FCP using one of the hardware options and a professional codec (uncompressed, ProRes, IMX, DV/DV50/DV100), then the media files will have levels mapped to 601/709 values (16-235). From here, the waters get muddy, because the way in which those levels are handled in the timeline is based on your processing settings. This affects all imported files, as well, including graphics, animation and media files from cameras and other NLEs.

Confusion is compounded by FCP’s internal waveform monitor, which always represents video with a relative 0-100 percent scale. These display numbers do not represent actual video levels in any absolute sense. When you process in YUV, then the full display area of the waveform from top to bottom equals a range of 0-255. The “legal” digital video standard of 16-235 is represented by the area within the 0-100% markings of the scope. However, when you process in RGB, then the portion within the 0-100% marks represents the full 0-255 range. Yes – in an effort to make it simple, Apple has made it very confusing!

When you set the sequence processing to YUV, with “white” as “white”, then all timeline video is mapped to a “studio swing” range of 16-235. On the scope 0% = 16 and 100% = 235. If you import a “full swing” color bar pattern (0-255), the values will be rescaled by the sequence processing setting to fall into the 16-235 range.

When you set the sequence processing to YUV, with “white” as “superwhite”, you’ve extended the upper end of the range, so that the 16-235 scale now becomes 16-255. The 0-255 color bar pattern is now effectively rescaled to 16-255; however, so is any video as well. Digital video that used to peak at 100% will now peak at 120%.

The YUV processing issues are also affected by the 8-bit, versus “high-precision” rendering options. When you elect to process all video as 8-bit, excursions above 100% and below 0% caused by color correction will be clipped. If you change to “high-precision YUV”, then these excursions are preserved, because they fall within the 0-16 and 235-255 regions. Unfortunately, certain effects and transition filters will still clip at 0% and 100% after rendering.

One way to fully protect “full swing” 0-255 levels is to work in RGB processing. A 0-255 color bar pattern will be correctly displayed, but unfortunately all video is spread to the full range, as well. This would mean that all clips would have to be color corrected to adjust for proper video levels. The only way that I’ve found for FCP to display both a 0-255 and a 16-235 clip on the same timeline and maintain correct levels is to apply a color correction filter to adjust the levels on one of these clips.

For general purposes, the best way to work with FCP is to use the ProRes family of codecs and set your sequence settings for YUV processing, white=white and high-precision rendering. This offers the most practical way of working. The only caveat to this is that any “full swing” file will be rescaled so that levels fall into the 0%-100% (16-235) “studio swing” range. If you need to preserve the full range, then FCP’s color correction filters will allow you to expand the range. The levels may appear to clip as you make the adjustment, but the rendered result will be full range.

Real world examples

I’ve done some quick examples to show how these level issues manifest themselves in actual practice. It’s important to understand that, for the most part, the same clip would appear the same in either Media Composer or Final Cut as viewed on a broadcast monitor through output hardware. It will also look the same (more or less) when displayed on a computer screen using each app’s full-screen preview function.

The following screen grabs from my tests include a 0-255 color bar chart (TIFF original) and a frame from an H.264 Canon 5D clip. The movie frame represents a good spread from shadow to highlights. I imported the files into both Avid Media Composer 5 (via AMA) and Apple Final Cut Pro 7. The FCP clips were rendered and exported and then brought into MC5 for comparison. The reason to do this last step was so that I could check these on a reasonably trustworthy internal scope, which displayed an 8-bit range in absolute values. It is not meant to be a direct comparison of how the video looks in the UI.

Imported into Final Cut. ProResHQ with YUV processing. White as white. Note that the peak white of both images is 100%.

Imported into Final Cut. ProResHQ with YUV processing. White as SuperWhite. Note that peak white of both images exceeds 100%.

Imported into Final Cut. ProRes4444 with RGB processing. Note the boundary limits at 0% and 100%.

Imported into Media Composer 5 using AMA. Note that the color bars are a 0-255 range, while the Canon clip is 16-235.

FCP7 YUV export, imported into MC5 via AMA. Note that the color bar pattern has been rescaled to 16-235 and is no longer full range.

FCP7 YUV export with SuperWhite values, imported into MC5 via AMA. Note that the color bar pattern has been rescaled to 16-255 and is no longer full range. It has a higher top-end, but black values are incorrect. This also alters the scaling values of the levels for the Canon clip. Color correction filters would have to be applied in FCP for a “sort of correct” level match between the bars and the Canon clip.

FCP7 RGB export, imported into MC5 via AMA. Note that the color bar pattern exhibits the full 0-255 range. The Canon clip has also been rescaled to 0-255. Color correction filters would have to be applied in FCP to the Canon clip to bring it back into the correct relative range.

I have revisited the YUV settings in FCP7. This is a ProResHQ sequence rendered with high-precision processing. I have applied a color corrector to the color bars, expanded the range and rendered. Note the regions above 100% and below 0%.

FCP7 YUV export (color correction filter applied to the color bars), imported into MC5 via AMA. Note that the color bar pattern spreads from 0-255, while the Canon clip is still within the standard 16-235 range.

©2010 Oliver Peters

Codec Smackdown

Modern digital acquisition, post and distribution wouldn’t be possible without data rate reduction, aka compression. People like to disparage compression, but I dare say that few folks – including most post production professionals – have actually seen much uncompressed content. In fact, by the time you see a television program or a digitally-projected movie, it has passed through at least three or four different compression algorithms – i.e. codecs.

Avid Media Composer and Apple Final Cut Pro dominate the editing landscape, so the most popular high-end HD codecs are the Avid DNxHD and Apple ProRes 422 codec families. Each offers several codecs at differing levels of compression, which are often used for broadcast mastering and delivery. Apple and Avid, along with most other NLE manufacturers, also natively support other camera codecs, such as those from Sony (XDCAM-HD, HD422, EX) and Panasonic (DVCPRO HD, AVC-Intra). Even these camera codecs are being used for intermediate post. I frequently use DVCPRO HD for FCP jobs and I recently received an edited segment as a QuickTime movie encoded with the Sony EX codec. It’s not a question of whether compression is good or bad, but rather, which codec gives you the best results.

(Images from Olympus camera, prior to NLE roundtrip; resized from original.)

I decided to test some of these codecs to see the results. I started with two stills taken with my Olympus C4000Z – a 4MP point-and-shoot digital camera. These images were originally captured in-camera as 2288-pixel-wide JPEGs at the best quality setting and then – for this test – converted to 1920×1080 TIFFs in Photoshop. My reason for doing this instead of using captured video was to get the best starting point. Digital video cameras often exhibit sensor noise and the footage may not have been captured under optimum lighting conditions, which can tend to skew the results. The two images I chose are of the Donnington Grove Country Club and Hotel near Newbury, England – taken on a nice, sunny day. They had good dynamic range and the size reduction in Photoshop added the advantages of oversampling – thus, very clean video images.

I tested various codecs in both Avid Media Composer 4.0.5 and Apple Final Cut Pro 7. Step one was to import the images into each NLE. In Avid, the conversion occurs during the import stage, so I set my import levels to RGB (for computer files) and imported the stills numerous times in these codecs: 1:1 MXF (uncompressed), DNxHD145, DNxHD220, DNxHD220x, XDCAM-EX 35Mbps and XDCAM-HD422 50Mbps. In Final Cut Pro, the conversion occurs when files are placed on the timeline and rendered to the codec setting of that timeline. I imported the two stills and placed and rendered them onto timelines using these codecs: Apple 8-bit (uncompressed), ProRes LT, ProRes, ProRes HQ, DVCPRO HD and XDCAM-EX 35Mbps. These files were then exported again as uncompressed TIFFs for comparison in Photoshop. For Avid, this means exporting the files with RGB levels (for computer files) and for FCP, using the QuickTime Conversion – Still Image option (set to TIFF).

Note that in Final Cut Pro you have the option of controlling the import gamma settings of stills and animation files. Depending on the selection (source, 1.8, 2.20, 2.22) you choose, your video in and back out of Final Cut may or may not be identical to the original. In this case, choosing “source” gamma matched the Avid roundtrip, whereas using a gamma setting of 2.2 resulted in a darker image exported from FCP.

You’ll notice that in addition to various compressed codecs, I also used an uncompressed setting. The reason is that even “uncompressed” is a media codec. Furthermore, to be accurate, compression comparisons need to be done against the uncompressed video image, not the original computer still or graphic. There are always going to be some changes when a computer file is brought into the video domain, so you can’t fairly judge a compressed video file against the original photo. Had I been comparing video captured through a hardware card, then obviously I would only have uncompressed video files as my cleanest reference images.

I lined up the exported TIFFs as Photoshop layers and generated comparisons by setting the layer mode to “difference”. This generates a composite image based on any pixel value that is different between the two layers. These difference images were generated by matching a compressed layer against the corresponding Avid or FCP uncompressed video layer. In other words, I’m trying to show how much data is lost when you use a given compressed codec versus the uncompressed video image. Most compression methods disproportionately affect the image in the shadow areas. When you look at a histogram displaying these difference results, you only see levels in the darkest portion of an 8-bit scale. On a 0-255 range of levels, the histogram will be flat down to about 20 or 30 and then slope up quickly to a spike at close to 0.

This tells you that the largest difference is in the darkest areas. The maximum compression artifacts are visible in this range. The higher quality codecs (least compressed), exhibit a smaller histogram range that is closer to 0. The more highly-compressed codecs have a fatter range. This fact largely explains why – when you color grade highly compressed camera images – compression artifacts become quite visible if you raise black or gamma levels.

The resulting difference images were then adjusted to show artifacts clearly in these posted images. By adjusted, I mean changing the levels range by dropping the input white point from 255 to 40 and the output black point from 0 to 20. This is mainly for illustration and I want to reiterate that the normal composite images DO NOT look as bad as my adjusted images would imply. In fact, if you looked at the uncorrected images on a computer screen without benefit of a histogram display, you might think there was nothing there. I merely stretched the available dynamic range for demonstration purposes.
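
If you’d like to reproduce the comparison without Photoshop, here’s a small sketch using the Python Pillow library; the filenames are hypothetical stand-ins for the exported TIFF layers:

    from PIL import Image, ImageChops

    uncompressed = Image.open("uncompressed_export.tif").convert("RGB")
    compressed = Image.open("dnxhd220x_export.tif").convert("RGB")

    # the equivalent of Photoshop's "difference" layer mode
    diff = ImageChops.difference(uncompressed, compressed)

    # stretch the near-black differences so artifacts become visible:
    # input white point 255 -> 40, output black point 0 -> 20
    stretched = diff.point(lambda v: min(255, 20 + round(v * 235 / 40)))
    stretched.save("difference_stretched.tif")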

Of these various codecs, the Apple DVCPRO HD codec shows some extreme difference results. That’s because it’s the only one of these codecs that uses horizontal raster scaling. Not only is the data compressed, but the image is horizontally squeezed. In this roundtrip, the image has gone from 1920-pixels-wide (TIFF) to 1280 (DVCPRO HD) back to 1920 (exported TIFF). The effects of this clearly show in the difference image.

There are a couple of other things you may notice, such as level differences between the Avid and Apple images and between each of these and the originals. As I said before, there will always be some differences in this sort of conversion. Plus, Apple and Avid do not handle color space, level and gamma mapping in the same way, so a round trip through each application will yield slightly different results. Generally, if 2.2 gamma is selected for imported stills, the Apple FCP image will have a bit more contrast and somewhat darker shadow areas when compared to Avid on a computer screen – even when proper RGB versus Rec. 709 settings are maintained for Avid. This is mainly a result of the various QuickTime and other conversions going on.

If I were to capture video with Avid DX hardware on the Media Composer and AJA, Matrox or Blackmagic hardware on FCP – and compare these images on a video monitor and with scopes – there would likely be no such visible difference. When I used “source” gamma in FCP, the two matched each other. Likewise, when you review the difference images below, 2.2 gamma in this case resulted in a faulty difference composite between the FCP uncompressed image and the original photo. The “source” gamma version more closely resembles the Avid result and is the right setting for these images.

The take-away from these tests should be that the most important comparisons are those that are relative, i.e. “within species”. In other words, how does ProRes LT compare to ProRes HQ or how does DNxHD 145 compare to DNxHD 220x? Not, how an Avid export compares with a Final Cut export. A valid inter-NLE comparison, however, is whether Avid’s DNxHD220x shows more or less compression artifacts than Apple’s ProRes HQ.

I think these results are pretty obvious: higher-data-rate codecs (less compression) like Apple ProRes HQ or Avid DNxHD 220x yield superb results. Lower-data-rate codecs (more compression) like XDCAM-EX yield results that aren’t as good. I hope that arming you with some visible evidence from these comparisons will help you better decide which post trade-offs to make in the future.

(In case you’re wondering, I do highly recommend the Donnington Grove for a relaxing vacation in the English countryside. Cheers!)

©2010 Oliver Peters

Mixing formats in the edit

The sheer mix and volume of formats to deal with today can be mind-boggling. Videotape player/recorders – formerly a common denominator – are a vanishing breed. Post facilities still own and use VTRs, but operations at the local market level, especially in broadcast, are becoming increasingly tapeless. Clearly, once the current crop of installed VTRs become a maintenance headache or are no longer an important cog in the operation, they won’t be replaced with another shiny new mechanical videotape transport from Sony or Panasonic.

It all starts with the camera, so the driving influence is the move to tapeless acquisition – P2, XDCAM-HD, GFcam, AVC-HD and so on. On the bright side, it means that the integration of another format will cost no more than the purchase of an inexpensive reader, rather than a new VTR to support that format. Unfortunately this will also mean a proliferation of new formats for the editor to deal with.

The term format needs some clarification when it comes to tapeless media, like P2. First, there is the codec used for the actual audio and video content (essence). That essence is defined by the compression method (like DVCPRO HD or AVC-Intra), frame size (SD or HD), pixel aspect ratio and frame rate. The essence is encapsulated into a file wrapper (MXF), which holds the essence and metadata (information about the essence). Lastly, in the P2 example, the files are written to a physical transport medium (the P2 card itself) using a specific folder and file hierarchy. Maintaining this folder structure is critical so that an NLE can natively recognize the media once it’s copied from the card to a hard drive.
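
One way to keep those three layers straight is to model them as nested data; this is just an illustration of the terminology, not a formal MXF schema:

    from dataclasses import dataclass, field

    @dataclass
    class Essence:                 # the audio/video content itself
        codec: str                 # e.g. "DVCPRO HD" or "AVC-Intra"
        frame_size: str            # e.g. "1280x720" (HD) or "720x480" (SD)
        pixel_aspect: float        # e.g. 1.0 for square pixels
        frame_rate: float          # e.g. 59.94

    @dataclass
    class Wrapper:                 # the MXF file: essence plus metadata
        essence: Essence
        metadata: dict = field(default_factory=dict)

    @dataclass
    class P2Card:                  # the physical transport medium
        folder_tree: list          # the folder/file hierarchy that must be
        clips: list                # preserved for the NLE to see the media

Breaking a format down this way makes it clear why two files with the same codec can still behave differently in an NLE: the wrapper and folder structure matter as much as the essence.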

Nonlinear editing systems have been built around a specific media structure. Avid Media Composer uses OMF and MXF. Apple Final Cut Pro is based on QuickTime. In theory, each can ingest a wide range of tapeless file formats, but the truth is that they only work well with a much narrower range of optimized media. For instance, DVCPRO HD is handled well by most NLEs, but H.264 is not. You can toss a mix of formats onto a common timeline, but the system is internally operating with specific settings (codec, frame size and frame rate) for that timeline.

These settings are established when you first create a new project or a new sequence, depending on the application. Any media on the timeline that deviates from these settings must either be scaled and decompressed on-the-fly by the real-time effects engine of the application – or must be rendered – in order to see full-quality playback.  Most systems are optimized for NTSC, PAL, 720p and 1080i frame sizes. Even Final Cut Pro – touted as resolution independent – works best at these sizes and effectively tops out at 2K film sizes. All the desktop NLEs freely allow you to mix SD and HD content on a timeline, but the rub has been a mix of differing frame rates. FCP could do it, but Media Composer wouldn’t. That barrier disappeared with Avid’s introduction of the Mix & Match feature in the Media Composer 4.0 software. Now, if you edit a native 23.98p clip into a 29.97i timeline, all of the leading editing applications will add a pulldown cadence to the 23.98p clip for proper 29.97i playback.

When editing a project that has a mix of SD and HD sources and formats, it is best to select a timeline or project setting that matches the predominant format. For instance, if 75% of your media was shot using a Panasonic VariCam at 720p/59.94, then you’d want to use a matching timeline preset, so that the 720p footage wouldn’t require any rendering, except for effects. In this example, if the other 25% was NTSC legacy footage from Betacam-SP, you’d need a system equipped with a capture card capable of ingesting analog footage. The Beta-SP footage could be upconverted to HD during capture using the hardware conversion power of the card. Alternatively, it could be captured as standard definition video, edited onto the timeline and then scaled to fill the HD frame. Betacam-SP clips captured as standard definition video would ultimately be rendered to match the 720p/59.94 settings of the timeline.

Until recently, Avid systems transcoded incoming media into an Avid codec wrapped as an MXF file. This creates media files that are optimized for the best performance. Final Cut would let you drag and drop any QuickTime file into the FCP browser without a transcode, but non-QuickTime files had to be converted or rewrapped as QuickTime MOV files. These frontrunners were upstaged by applications like Grass Valley EDIUS and Sony Vegas Pro, which have been able to accept a much wider range of media types in their original form. The trend now is to handle native camera codecs without any conversion. Apple added the Log and Transfer module to Final Cut and Avid added its Avid Media Access (AMA). Both are plug-in architectures designed for native camera media and form a foundation for the use of these files inside each NLE.

Final Cut’s Log and Transfer is recommended for importing P2, RED, XDCAM and other media, but it still doesn’t provide direct editing support. Even natively-supported codecs, like REDCODE and AVC-Intra, must first be wrapped as QuickTime files. When clips are ingested via Log and Transfer, the files are copied to a target media drive and, in the process, rewrapped as QuickTime MOV file containers. It’s Apple’s position that this intermediate transcode step is a safer way to handle camera media, without the potential for unrecoverable file corruption that can occur if you work directly with the original media.

If you want true native support – meaning the ability to mount the hard drive or card containing your raw media and start editing at full resolution – then the Avid Media Composer family, Grass Valley EDIUS and Adobe Premiere Pro provide the broadcaster with the strongest desktop solutions. All three recognize the file structure of certain camera formats (like P2), natively read the camera codec and let you use the media as an edit source without the need to transcode or copy the file first. These APIs are evolving and are dependent on proper media drivers written by the camera manufacturers. Not all applications handle every format equally well, so select a system that’s appropriate for you. For example, P2 using the DVCPRO HD or AVC-Intra codec is becoming widely supported, but Panasonic’s AVCCAM has less support. Sony hit snags with XDCAM-EX support for Final Cut Pro when Apple upgraded the Mac OS to 10.6 (“Snow Leopard”). Fortunately these issues are short-lived. In the future it will be easier than ever to mix taped and tapeless camera media of nearly any format with little negative impact.

Written for NewBay Media and TV Technology magazine

©2009 Oliver Peters

AJA Ki Pro

If you thought that there were already more than enough tapeless recording devices on the market from Focus Enhancements, Edirol and Convergent Design, you would be only partially right. The AJA Ki Pro sparked a lot of enthusiasm at NAB 2009. While it clearly offers cameramen many benefits, it also provides some opportunities for the world of post production.

The Ki Pro was developed by AJA, but like the Io and the IoHD before it, the internal software was co-developed with Apple. Ki Pro approaches tapeless field production from an NLE-friendly, rather than camera-native, design. It records QuickTime movies using embedded versions of Apple’s ProRes 422 and ProRes 422HQ codecs. As a result, you can open these files directly from the hard drive using any QuickTime compliant application, as long as the ProRes codecs are installed on your computer.

As an aside, the name Ki Pro stems from the Asian concept of ki or chi. This is a term for the life force or inner power of all living beings and plays a large part in the philosophies of many types of martial arts.

Configuration

The AJA Ki Pro uses a small, lightweight form factor. It’s about the size of a very large paperback book and can be attached in the field to various camera rigs. The standard package (MSRP $3,995) includes the Ki Pro device, a 250GB removable hard drive and AC power adapters for the Ki Pro and the drive, for when it is detached. Optional accessories include larger capacity drives, solid state storage and a cage and rail system called the Exoskeleton. The latter is a bracket and mount to install the Ki Pro onto a camera rig or tripod and then to attach a small camera to that Exoskeleton system.

Think of the Ki Pro as a recording device that’s built around a version of the AJA FS1 format converter. This means that you can not only record in native 525i, 625i, 720p, 1080i or 1080PsF, but you can also up/down/cross-convert a signal to one of these formats on input or output. The front panel gives you access to transport controls, menu functions and mix levels for the analog inputs. The back panel holds a series of input and output connectors for HDMI, SDI, component analog and composite video. There are also unbalanced RCA and balanced XLR analog audio connectors with a mic/line/phantom power switch. Finally, there are other interface connections, including timecode in/out, a 9-pin serial port, 1394a, 1394b and Ethernet.

The Ki Pro includes a removable, Mac-formatted 250GB hard drive, which docks to the Ki Pro over a custom multi-pin connector. It can also be connected externally to any computer with a FireWire 800 (1394b) port. The Ki Pro front panel sports two ExpressCard|34 memory slots for optional future recording to card-based media.

In the field

I found the Ki Pro to be extremely well thought out. You can run it in the field off battery power, or off the AC adapter if you have shore power. The system can be controlled from the front panel, over a LAN or wirelessly through an access point like an AirPort base station. This means you can control it remotely from a laptop, or even an iPhone or iPod Touch, via a web browser. The latter might come in handy if you have a Ki Pro mounted at the end of a camera crane.

The record settings (format, clip name, conversion, timecode values and so on) are set by an operator using the front panel controls or one of the remote methods. The menu is easy to navigate once you get the hang of it, but it’s easier to work from the web interface. I tested it through my home router without any issues. Plug the Ki Pro’s IP address into the URL bar and you have access to all of its settings (and operational control) using Firefox, Safari or any other standard browser.
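
As a rough illustration of how open this control path is, the sketch below fetches the Ki Pro’s built-in web page over plain HTTP, just as a browser does when you type in the unit’s IP address. The address is a placeholder, and anything beyond the root page would need AJA’s documentation.

```python
# Minimal sketch: reach a Ki Pro's built-in web interface over the LAN.
# The IP address is a placeholder for whatever your router assigned;
# any endpoint beyond the root page would be hypothetical.
import urllib.request

KIPRO_IP = "192.168.1.50"  # placeholder address

# Request the same page a browser shows when you enter the IP as a URL.
with urllib.request.urlopen(f"http://{KIPRO_IP}/", timeout=5) as resp:
    print(resp.status)        # 200 if the unit is reachable
    print(resp.read()[:200])  # first bytes of the control page HTML
```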

As an editor, I appreciate the thought put into naming conventions. Unlike the cryptic methods used by camera manufacturers, the Ki Pro lets you assign reel IDs and clip or scene numbers in an EDL- and script-compatible manner. Typically, all recordings on one drive would share the same reel number, from 001 to 999. Recordings can be designated as clips or scenes with appended alphabetical values and take numbers. Once you assign the initial values, subsequent recordings automatically increment the take number until the operator makes a change. Your first recorded file might be labeled SC12ATK1, the next would be SC12ATK2 and so on. When you mount the drive on your computer, it shows up on the desktop as 001 (or whichever reel number was assigned).
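
As an illustration, here is a small sketch that parses a clip name like SC12ATK1 into scene, alpha and take values. The pattern is my own reading of the convention described above, not an official AJA specification.

```python
# Illustrative parser for the clip/scene naming convention described above,
# e.g. SC12ATK1 -> scene 12, alpha "A", take 1. The pattern is an assumption
# based on the examples in this article, not an AJA specification.
import re

CLIP_PATTERN = re.compile(r"^SC(?P<scene>\d+)(?P<alpha>[A-Z]?)TK(?P<take>\d+)$")

def parse_clip_name(name: str) -> dict:
    m = CLIP_PATTERN.match(name)
    if not m:
        raise ValueError(f"Not a Ki Pro-style clip name: {name}")
    return {
        "scene": int(m.group("scene")),
        "alpha": m.group("alpha"),
        "take": int(m.group("take")),
    }

print(parse_clip_name("SC12ATK1"))  # {'scene': 12, 'alpha': 'A', 'take': 1}
print(parse_clip_name("SC12ATK2"))  # the auto-incremented next take
```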

Actual use

At the time of this review, shipping units like my evaluation Ki Pro run 1.0 software, so not all functions are enabled yet. For example, I couldn’t start/stop recordings from a camera. AJA is planning an October firmware update that will enable such automatic recording. You will be able to roll the camera, and if it provides SDI embedded timecode or has LTC timecode output, the Ki Pro will start recording when it sees the timecode value change and stop when the value stops changing.
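
The planned logic is easy to picture: watch the incoming timecode and toggle recording whenever it starts or stops advancing. The sketch below is purely illustrative; read_timecode, start_recording and stop_recording are hypothetical stand-ins for what AJA’s firmware will do internally.

```python
# Illustrative logic only: how timecode-triggered recording could behave.
# read_timecode, start_recording and stop_recording are hypothetical
# stand-ins; a real implementation would also debounce over several frames.
import time

def follow_timecode(read_timecode, start_recording, stop_recording):
    recording = False
    last_tc = read_timecode()
    while True:
        time.sleep(1 / 30)                  # poll roughly once per frame
        tc = read_timecode()
        if tc != last_tc and not recording:
            start_recording()               # timecode began advancing: camera rolled
            recording = True
        elif tc == last_tc and recording:
            stop_recording()                # timecode froze: camera stopped
            recording = False
        last_tc = tc
```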

Another function I like is auto-format-sensing. Whatever signal comes into the Ki Pro is automatically recorded as the native format, unless up/down/cross-conversion is assigned. The exception is 23.98PsF media. To properly record these files, the operator must change the Record Type from Normal to PsF. I was able to test this with SDI from a Sony EX-3 and it worked as advertised. In a future update, AJA plans to provide VFR support, as in its KONA and IoHD products. This means you would be able to record the output of a Panasonic VariCam and the Ki Pro would recognize the variable speed flags.

AJA started development of Ki Pro long before Apple released its new Final Cut Studio, which included additional ProRes codecs. It is likely that AJA will eventually expand the recording options to include other ProRes codecs; however, the Ki Pro is a single-stream 4:2:2 SDI device, which makes it unlikely that the current model will support the new high-end ProRes 4444 codec. Personally, I have no problem with this, because Ki Pro is intended to be a mastering device on par with high-quality videotape. ProRes 422, at 147Mbps, equates to the data rate of HDCAM, while ProRes 422HQ, at 220Mbps, is close to HD-D5. In its present form, the Ki Pro already delivers outstanding visual quality, matching or surpassing all other HD camcorder recordings.
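
Those data rates translate directly into record time on the standard 250GB drive. As a rough worked example, using the quoted video-only rates and decimal gigabytes, and ignoring audio and filesystem overhead:

```python
# Rough record-time estimate for the 250GB drive, using the quoted
# video-only data rates. Decimal units; audio and overhead ignored.
DRIVE_GB = 250

for codec, mbps in [("ProRes 422", 147), ("ProRes 422HQ", 220)]:
    gb_per_hour = mbps / 8 * 3600 / 1000   # Mbit/s -> GB per hour
    hours = DRIVE_GB / gb_per_hour
    print(f"{codec}: {gb_per_hour:.0f} GB/hr, about {hours:.1f} hours per drive")

# ProRes 422:   66 GB/hr, about 3.8 hours per drive
# ProRes 422HQ: 99 GB/hr, about 2.5 hours per drive
```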

One of the big benefits of Ki Pro is that it extends the life of cameras that have good image technology but weak recording systems. Many Panasonic VariCam owners aren’t keen to change to newer P2 cameras, since their tape-based VariCams still create very compelling images. Adding a Ki Pro and recording the camera’s full-raster, uncompressed HD-SDI output as native 720p or converted 1080i means that there’s a lot of life left in those VariCams. Another example is Canon’s XL H1, a great camera burdened with a 25Mbps HDV recording mechanism. Ki Pro adds a superior recording system to that camera.

Post production

All of the above makes Ki Pro a great recording product, but the real beauty is for Final Cut editors. Simply eject the 250GB drive, connect it to your computer via FireWire 800 and it mounts on the desktop. All files are contained within a single AJA folder. You can copy those files to your local drive or edit directly from the Ki Pro drive. If you want to edit directly, simply import the AJA folder into the FCP browser and the clips are immediately available. I received a “media not optimized” prompt on my MacBook Pro, but I didn’t see that same message on a Mac Pro tower. This is a result of how FCP’s Dynamic RT technology indexes performance on these two different computers. Nevertheless, various HD clips in both ProRes 422 and ProRes 422HQ played fine from the Ki Pro’s removable drive on both the laptop and the workstation.

The AJA Ki Pro offers other advantages away from the field. Since up/down/cross-conversion is built-in, simply cable the Ki Pro to nearly any monitor and you can play out audio and video. I was even able to connect HDMI to my living room flat panel and see the high-def video from the Ki Pro. Since the drive uses standard Mac formatting, you can also copy compatible QuickTime ProRes files from the computer back into the AJA folder on the drive. Once the drive is docked back into the Ki Pro, these files can be played out through the video spigots as if they were recorded by the Ki Pro. In addition, the front panel will display the file name, even if it doesn’t conform to the clip/scene naming convention used by the Ki Pro.

(Note: According to AJA, this is not yet officially supported, due to some remaining audio work, and it will be fully implemented in a future update. Also coming in the future is support for the I/O of up to 8 channels of audio over embedded SDI and HDMI.)

This last situation brings up some interesting possibilities. Many small shops are resisting the need to purchase HD VTRs, which can potentially cost more than their entire edit system. If you need to deliver a high definition videotape master (HDCAM, HDCAM-SR, HD-D5, etc.), the Ki Pro could be used as an intermediate source. Copy the show to the Ki Pro drive and then take the complete unit to a facility that owns the necessary deck. Connect the Ki Pro to the VTR using SDI and dub from the Ki Pro to videotape. Granted, it’s two steps, but the cost of the Ki Pro, the service and the tape stock is a lot less than owning a high-end VTR for only infrequent use. Several days’ rental of an HDCAM-SR deck alone would pay for the Ki Pro.

In closing, it’s important to note that although the ProRes codecs are optimized for Final Cut, this doesn’t mean Ki Pro recordings are limited to Final Cut Pro. If you run Adobe’s CS4 products on a Mac, then ProRes and ProRes HQ files open and can be used in both Premiere Pro and After Effects. (Final Cut Studio or a ProRes QuickTime component must also be installed to enable this.) The same goes for Media 100. These files can also be imported into Avid Media Composer, but will be transcoded into DNxHD media upon import. (That might change down the road, if Avid adds drivers for QuickTime files to its Avid Media Access API.) Finally, Apple offers a free, playback-only Windows QuickTime component for ProRes files. This enables you to open and play ProRes-encoded movies on PCs with QuickTime installed.

On the whole, AJA’s Ki Pro is a versatile product that has quite a few useful applications in the field, the studio and in post. AJA has earned a stellar support reputation, which goes a long way towards pushing the Ki Pro ahead of the competition. If you’ve been looking for a tapeless acquisition device that was designed with post in mind, then look no further. The AJA Ki Pro is it.

© 2009 Oliver Peters

Written for NewBay Media LLC and DV magazine