More 4K

I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and, in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be treated the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.

A lot of arguments have been made that 4K cameras using a single CMOS sensor with a Bayer-style color filter pattern don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and it’s why Sony uses an 8K sensor (F65) to deliver a 4K image.

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, then workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what, if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.
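
To make the difference between those two paths concrete, here’s a rough sketch – not the code of any particular NLE – of what happens when a 4K frame is scaled to HD first and then blown back up, versus punching into the native frame at 1:1 pixels. It assumes Pillow and a hypothetical UHD source frame called frame_4k.png.

```python
from PIL import Image

HD = (1920, 1080)
src = Image.open("frame_4k.png")              # hypothetical 3840 x 2160 source frame

# Path 1: the trap -- the NLE scales the 4K frame down to HD first,
# then the editor blows that HD result back up 200% and crops the center.
hd_first = src.resize(HD, Image.LANCZOS)
blown_up = hd_first.resize((3840, 2160), Image.LANCZOS)
left, top = (blown_up.width - HD[0]) // 2, (blown_up.height - HD[1]) // 2
scaled_then_cropped = blown_up.crop((left, top, left + HD[0], top + HD[1]))

# Path 2: a true punch-in -- crop an HD window straight out of the native 4K frame.
left, top = (src.width - HD[0]) // 2, (src.height - HD[1]) // 2
native_crop = src.crop((left, top, left + HD[0], top + HD[1]))

scaled_then_cropped.save("scaled_then_blown_up.png")
native_crop.save("native_crop.png")
```

Viewed side by side, the first result should be the softer of the two – exactly the trap described above.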

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparing 4K scaled to HD, but the result really depends on the scaler that you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.
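
If you want to see how much the resampling algorithm alone matters, here’s a quick sketch – just Pillow, not Resolve or Instant 4K – that upscales an HD frame to UHD with three different filters. The file name is hypothetical.

```python
from PIL import Image

src = Image.open("frame_hd.png")              # hypothetical 1920 x 1080 frame
UHD = (3840, 2160)

# Nearest-neighbor, bilinear and Lanczos roughly bracket the "fast" to
# "high quality" range of scalers; compare the three outputs at 1:1 zoom.
for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("lanczos", Image.LANCZOS)]:
    src.resize(UHD, method).save(f"uhd_{name}.png")
```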

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes 4444 file, a converted 1080 ProRes 4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes 4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you superior results to the original (crisper hair), but a blow-up suffers when you are using a weaker codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes 4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet, much of this looks pretty good, doesn’t it?

Everything I talked about, regarding blowing up HD by up to 120% or more, still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a 1.2 factor, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than to do it in-camera.

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.

©2014 Oliver Peters

HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a reliability and performance standard, characterized by the Z-series of workstation towers. HP has sought to extend what they call the “Z experience” to other designs, like mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second generation model of the Z1 series.

Most readers will associate the all-in-one concept with an Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit, which cannot be upgraded by the user (except for RAM), and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure their Z1 G2 from a wide range of components. The display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5 and three Xeon models. There are a variety of storage and graphics card choices and it supports up to 32GB of RAM. You may also choose between a Touch and non-Touch display. The Touch screen adds a glass overlay and offers finger or stylus interaction with the screen. Non-Touch screens have a matte finish, while Touch screens are glossy. You have a choice of operating systems, including Windows 7, Windows 8 and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a Xeon E3 (3.6GHz) quad-core, 16GB of RAM, an optical drive and the NVIDIA K4100M graphics card. For storage, I selected one 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs that were set up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promo discount.

An important new feature is support for Thunderbolt 2 with an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single, 57 pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as completely flat for shipping and access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to unlock the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counter-balance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom, center of the back of the chassis. Others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to operate in a standing or high-chair position looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display completely down to the desk surface, which put the bottom of the screen lower than my stationary 20” Apple Cinemas.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable for me, as I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as you’d get with a mouse or pen and tablet – even if using an iPad stylus. In Photoshop – an application that seems natural for Touch – only menu and navigation operations worked, but no drawing tools. While I found the Touch option not to be that interesting to me, I did like the screen that comes with it. It’s glossy, which gives you nice density to your images, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system to function well and the “flat” design philosophy is much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set up your preferences. If you prefer to pin application shortcuts to the Windows task bar or on the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, the launch and operation of any application was very fast. Naturally the difference from a cold start on the Z1 G2, as compared to my 2009 Mac Pro with standard 7200RPM drives, was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor display, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer-level nomenclature for settings. For instance, I found no way to accurately set a 6500K color temperature or a 2.2 gamma level, based on how the sliders were labelled. Some of the NVIDIA software controls didn’t appear to work at all.

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make some direct comparisons with the same applications and media available on my 2009 eight-core Mac Pro. Its configuration included dual Xeon quad-core processors (2.26GHz), 28GB RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly “apples-to-apples”, the comparison does provide a logical benchmark for the type of machine a new Z1 G2 customer might be upgrading from.

In typical, side-by-side testing with edited, single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 with a 7-layer, 4K timeline. The V1 track was a full-screen, base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task in just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration, and the superior performance of the NVIDIA K4100M card in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My 6-layer After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved them in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped in the medium display quality, but in full quality frames started to drop at V6. When I added a drop shadow to the PIP clips, frames were dropped starting at V4 for full quality and V9 for medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. Like any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, it’s very quiet and the heat output feels less than from my Mac tower. In these various tests, I never heard any fans kick into high. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached and clients will start demanding 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite a lot different than working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.
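
The arithmetic behind that “reframing space” point is simple enough to spell out. Here’s a quick sketch of the margins a 4096 x 2304 acquisition frame leaves around each of the two deliverables mentioned above – nothing more than subtraction on the sizes in this article.

```python
acq_w, acq_h = 4096, 2304                      # recommended acquisition size

deliverables = {"cinema 4K": (4096, 2160), "UltraHD": (3840, 2160)}

for label, (w, h) in deliverables.items():
    print(f"{label}: {acq_w - w} px of horizontal and {acq_h - h} px of vertical slop")
# cinema 4K: 0 px of horizontal and 144 px of vertical slop
# UltraHD: 256 px of horizontal and 144 px of vertical slop
```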

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

That brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding their own color interpretation. It also keeps the quality highest, avoiding further decompression/recompression cycles, as well as the variability of different debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted this to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the UHD ProRes HQ example above (16GB for 3 minutes), a 90-minute feature is 30x that clip. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes 4444 – never mind uncompressed – your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x more storage demands than 24p.
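
To put some rough numbers on that, here’s a back-of-the-envelope estimator built from the sample clip above. The per-clip sizes are the measured ones from my test; everything else is simple scaling, so treat the output as ballpark figures, not quotes.

```python
clip_minutes = 3
gb_per_clip = {                       # measured sizes of the ~3-minute test clip
    "REDCODE raw": 7,
    "ProRes 4444": 27,
    "ProRes HQ (UHD)": 16,
    "Uncompressed 10-bit": 116,
    "DPX 10-bit": 173,
}

feature_minutes = 90
fps_factor = {"24p": 1.0, "30p": 30 / 24, "60p": 60 / 24}   # 60p = 2.5x the frames of 24p

for codec, gb in gb_per_clip.items():
    per_min = gb / clip_minutes
    for rate, factor in fps_factor.items():
        print(f"{codec:20s} {rate}: ~{per_min * feature_minutes * factor:,.0f} GB per feature")
```

The ProRes HQ UHD line at 24p lands right on the 480GB figure mentioned above; the uncompressed and DPX lines show why facilities plan storage in tens of terabytes.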

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
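
For a sense of why offloading takes so long, the data rate implied by those Odyssey figures is easy to work out:

```python
capacity_gb = 2 * 512        # two 512GB SSDs
record_minutes = 62          # Canon 4K raw at 24fps, per the figures above

gb_per_minute = capacity_gb / record_minutes
mb_per_second = gb_per_minute * 1000 / 60
print(f"~{gb_per_minute:.1f} GB per minute, roughly {mb_per_second:.0f} MB/s sustained")
# ~16.5 GB per minute, roughly 275 MB/s sustained
```

At rates like that, a full 1TB offload with verification can easily eat an hour or more of someone’s day on set, depending on the card reader and destination drives.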

Naturally there are ways to make 4K post efficient and less painful than it might otherwise be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, the way DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Pablo Rios and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Offline to Online with Premiere Pro or Final Cut Pro X

Most NLE makers are pushing the ability to edit with native camera media, but there are still plenty of reasons to work in an offline-to-online editing workflow. Both Apple Final Cut Pro X and Adobe Premiere Pro CC make it very easy to do this.

Apple Final Cut Pro X

Apple built offline/online right into the design of FCP X. The application can internally transcode optimized media (such as converting GoPro files to ProRes) and proxy media. Proxy media is usually a half-sized version using the ProRes Proxy codec. There’s a preference toggle to switch between original/optimized or proxy media, with FCP X taking care of making sure all transforms and effects are applied properly between both selections.

What most folks don’t know is that you can “cheat” this system. If you import media and choose to copy it into your Event folder, then source media is stored in the Original Media folder within the Event folder. If you create proxies, those files are stored in the Transcoded Media – Proxy Media folder within the Event folder. It is possible to create and place these folders via the Finder. You just have to be careful about the exact names and locations. Once you do this, it is possible, via the Finder, to copy camera media and edit proxies directly into these folders. For example, your DIT might have created proxies for you on location, using Resolve.

Once you launch FCP X, it will automatically find these files. The main requirement is that file names, timecode and duration are identical between the two sets of files. If X properly recognizes the files, you can easily toggle between original/optimized and proxy with the application behaving correctly. If you are unsure of creating these folders in the first place, then I suggest setting these up within FCP X by importing and transcoding a single bogus clip, like a slate or camera bars. Once the folders are set by FCP X, delete this first clip. DO NOT mix the workflows by importing/transcoding some of the clips via FCP X and then later altering or replacing these clips via the Finder. This will completely confuse X. With these few caveats, it is possible to set up a multi-user offline-online workflow using externally-generated media, while still maintaining control via FCP X.
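
If you want to script that Finder-level copy, here’s a minimal sketch of the idea. The Event path and source volumes are hypothetical, the subfolder names are the ones described above, and (per the update below) this approach only applied through FCP X 10.0.9.

```python
import shutil
from pathlib import Path

event = Path("/Volumes/Media/Final Cut Events/MyEvent")   # hypothetical Event folder
originals_dir = event / "Original Media"
proxy_dir = event / "Transcoded Media" / "Proxy Media"

camera_files = sorted(Path("/Volumes/Shuttle/CameraOriginals").glob("*.mov"))
proxy_files = sorted(Path("/Volumes/Shuttle/ResolveProxies").glob("*.mov"))

# File names, timecode and duration must match between the two sets,
# or FCP X won't pair them when you toggle proxy playback.
for f in camera_files:
    shutil.copy2(f, originals_dir / f.name)
for f in proxy_files:
    shutil.copy2(f, proxy_dir / f.name)
```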

UPDATE: With the FCP X 10.1 update, you must generate proxies with FCP X. Externally-generated proxies do not link as they did up to 10.0.9.

Adobe Premiere Pro CC

A more customary solution is available to Adobe editors thanks to the new Link and Locate feature. A common scenario is that editors might cut a spot in an offline edit session using proxy edit media – such as low-res files with timecode “burn-ins”. Then the camera files are color corrected in an outside grading session and rendered as final, trimmed clips that match the timeline clip lengths, with a few seconds of “handles”. Now the editor has to conform the sequence by linking to the new high-res, graded files.

With Premiere Pro CC you’d start the process in the normal manner by ingesting and cutting with the proxy files. When the cut is locked, create a trimmed project for the sequence, using the same handle length as the colorist will use. This is created using the Project Manager and you can select the option to make the clips Offline. Next, send an EDL or XML file for your locked cut, plus the camera media, to the colorist.

Once you get the graded files back, open your trimmed Premiere Pro project. All media will be offline. Select the master clips and pick the Link Media option to open the Link Media dialogue window. Using the Match File Properties settings, set the parameters so that Premiere Pro will properly link to the altered files. Sometimes file names will be different, so you will have to adjust the Link and Locate parameters accordingly, by deselecting certain matching options. For example, you might want a match strictly by timecode, ignoring file names.

Press Locate and navigate to the new location of the first missing file and relink. Normally all other clips in the same relative path will automatically relink, as well. Now you’ve got your edited sequence back, except with media populated by the final, high-quality files.

©2013 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where the digital scan of a film frame at 4K is considered full resolution and a 2K scan to be half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080) while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference padded with black pixels.

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction to large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF).  Depth of field is a function of aperture and focal length. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. Because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve and thus makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Bladerunner-esque demo film produced by Stargate Studios, along with a new film called Möbius shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

RED Post – the Easy Way III

If you’ve read some of my past articles about RED, you know I’m not a huge fan of “native” editing using the camera raw files as source clips. I find that an offline/online workflow is still best for smoothly editing RED projects, while still retaining access to the raw color data during the finishing process. Previously I discussed an easy workflow for Apple Final Cut Pro and Color users, but this isn’t the only solution. As you know, Avid Media Composer 5 and Adobe Premiere Pro CS5 have both integrated support for RED’s camera raw files. In this post, I’m going to discuss a couple of ways to use these tools in a non-native fashion.

Option A:  Avid Media Composer 5 offline-online RED workflow

Thanks to AMA and RED’s camera SDK, Media Composer 5 offers access to RED’s .R3D files. You can import camera files and adjust the source color settings from within the NLE’s interface. You can either edit directly from these files or transcode them to Avid media for a smoother and faster editing experience. Here is a short step-by-step explanation of a Media Composer-based workflow.

Step 1. Access/import RED .R3D files via AMA (Avid Media Access). Camera clips will open inside Media Composer bins, complete with camera metadata.

Step 2. If you want to change the levels/gamma/exposure/balance of the file by altering the camera raw data, then open the Source Settings for each clip and adjust the video.

Step 3. Adjust the clip framing by opening the bin Reformat column and set the option for each clip (center cut, letterboxed, etc.). Remember that your RED clips may have a 2:1 aspect ratio, but your Avid sequence will be either HD 16:9 or SD 16:9 / 4:3.

Step 4. Set the Media Creation render tab to a video resolution of DNxHD36 with a Debayer quality of “quarter”. Since the objective is a good rough cut – not “finishing” – this quality setting is more than adequate for editing and screening your creative edits.

Step 5. Transcode all source clips. This process runs at close to real-time on a fast machine. When transcoding is done, close all AMA bins and do not use them during the edit. You’ll edit with the transcoded media only.

Step 6. Edit as normal until you get an approved, “locked” picture.

Step 7. Now it’s time to switch to “finishing”. Move or hide all Avid media (the transcoded DNxHD36 clips) by taking them out of the Avid MediaFiles/MXF/1 folder(s) on your media hard drive(s). You could also delete them, but it’s safer not to do that unless you really have to. Best to simply move them into a relabeled folder. Once you’ve done this, your edited sequence will appear with all media off-line.

Step 8. Open the AMA bins (with the .R3D files) and relink the edited sequence to the AMA clips. Make sure the “Allow relinking of imported/AMA clips by Source File name” option is NOT checked in the Relink dialogue window. When relinking is completed, the sequence will be repopulated with AMA media, which will be the native, camera raw .R3D files. If you want to change the raw color data at this point, you will need to change each source clip and then refresh the sequence to update the color for clips that appear within the timeline.

Step 9. Change the Media Creation settings to a higher video resolution (such as DNxHD 175 X) and a Debayer quality of “full”.

Step 10. Consolidate/transcode your sequence. This will create new Avid media clips at full quality that are only the length of the clips as they appear in the cut, plus handles. Since a transcode using a “full” Debayer setting will be EXTREMELY SLOW, make sure you set very short handle lengths. (Note: If you have a Red Rocket card installed, Avid supports hardware-assisted rendering to accelerate the transcoding of RED media.)

Step 11. Finish all effects and color grading within the NLE as you normally would.

Option B:  Apple FCP / Automatic Duck / Adobe CS5 workflow

You might be asking, why not just edit in Final Cut Pro or Premiere Pro? The hitch is that Final Cut doesn’t support 4K files, and Premiere Pro has a good native workflow, but not a good offline-online workflow, for RED files. FCP users clearly outnumber Premiere Pro users among professional film and video editors; however, both After Effects and Premiere Pro offer some interesting finishing options. In fact, a number of feature films have used both for all or part of the finishing process. A combination of Apple and Adobe tools creates some interesting scenarios for RED post. (Note: Automatic Duck Pro Import AE 5.0 is required.)

Step 1. Ingest your RED .R3D clips into Final Cut Pro using Log and Transfer. Set the preferences to use ProRes Proxy (NOT “native”). Set the color to “as shot”. This requires that the RED plug-in for FCS has been installed. (Refer to the previous article for a more in-depth explanation of this first step.) Please note that it is important to do this with the R3D files and not to start by simply dragging the in-camera-generated H, M or P QuickTime reference files into the FCP browser. Many RED users erroneously consider these to be “proxy” edit files. They are not. They are reference files at different resolutions/sizes that are linked to the R3D files and do not work correctly in this process.

Step 2. Edit normally in FCP until the cut is “locked”.

Step 3. Export an XML of your Final Cut sequence. I prefer using Automatic Duck’s free XML exporter and have had more reliable results with it, but the built-in FCP XML exporter will also work.

Step 4. Launch Adobe After Effects CS5. (Pro Import AE 5 works with CS3 and CS4, too, but you need to use an Adobe CS version compatible with native RED files.) Import the XML file using Pro Import AE 5. Make sure your Automatic Duck preferences are set to “Replace proxy footage with .R3D files.” The result will be an After Effects timeline with settings that match the Final Cut Pro sequence settings, except that all the clips will now be linked to the original camera files.

Step 5. Since the ProRes Proxy files were most likely 2K files, and the newly relinked camera files are the original 4K size, you will need to reset the scale value of each clip in the composition. This reframes the shot to fit inside the 2K frame, just as they did in FCP. Or you can creatively reframe the shots, since you have all the “bleed” of the full 4K frame. Alternatively, you can change the After Effects composition setting to match the 4K size.

At this point you could completely finish the project in After Effects, and there are a number of folks who would advocate that. From my point-of-view, After Effects is a compositing tool, rather than a DI or editing application. With the changes in Premiere Pro CS5, my druthers would be to get the media into that application. I’m only using After Effects as a conduit between Final Cut Pro and Premiere Pro in this process.

You could go from After Effects to Premiere Pro via Adobe’s Dynamic Linking, but I’d rather not. That simply nests the After Effects composition as a single clip on the Premiere Pro timeline. I want the shots available as individual timeline clips, so follow these steps.

Step 6. Launch a new Premiere Pro CS5 project and select a new sequence setting from one of the RED presets, such as a 4K timeline.

Step 7. Highlight all of the .R3D clips in the After Effects composition and Copy.

Step 8. Switch to the Premiere Pro sequence window and Paste. All of the RED clips will now fill up the Premiere Pro sequence. At this point you should have a native 4K sequence with .R3D camera raw media. Corresponding master clips will show up in the Premiere Pro project window.

Step 9. To change the camera raw color settings of the .R3D files, open a clip from the project window and alter its source settings. These changes will automatically update that clip on the timeline.

Step 10. Finish effects and color grading as desired. If you are using this process with the intent of sending files to a DI house for film finishing, then your settings and any grading should be very neutral to allow for maximum latitude at the next stage.

Step 11. Export media. A big selling point of Premiere Pro CS5 to RED users is that it allows you to export DPX image sequences, in addition to all of the standard media options. DPX is the preferred format of most high-end DI solutions, like Quantel Pablo, Autodesk Lustre, etc. Premiere Pro CS5 is one of the few desktop solutions that enables an export of full-resolution 4K DPX files from the edited timeline.

OK, I’ve given you a lot to chew on. In three articles on RED post, I’ve covered quite a few ways to finish RED-acquired projects. Don’t get overwhelmed. Remember that you don’t have to use them all. Simply pick the one that’s best for you and have fun.

©2010 Oliver Peters

RED Post – the Easy Way II

The RED camera company has succeeded in shaking up the industry and getting all other camera manufacturers to rethink what a digital cinema camera should be. This year, the ARRI Alexa presents the first serious challenge by another system designed around a camera raw workflow. Although RED maintains a resolution advantage, which will increase with the forthcoming Epic, there are many other reasons producers might opt for an Alexa, a Panavision Genesis, a Panasonic VariCam/3700/2700/3000 or a Sony F23/F35/F900/F800.

One of the strategic errors that I feel RED made was to emphasize resolution over workflow. By doing so, their innovative approach was tagged early on by detractors as difficult and time-consuming. It’s actually rather straightforward with a lot of versatility and can be adapted to many different production needs. Unfortunately, no matter how easy it has become today, RED will continue to battle this perception issue. This is exacerbated by RED itself, who has never provided good documentation for its products, especially the post production tools. A byproduct of the “perpetual beta” mode in which the company operates.

Native vs. non-native

I haven’t been a big fan of dealing with the camera raw files during editing, opting instead to pre-grade/render/export the camera files first into an edit-friendly format. If you search through the RedUser forum, you’ll find plenty of posts pointing out that the preferred feature film workflow is to export flat-looking DPX files for conforming and grading in DI systems like daVinci, Pablo and Lustre. This is a common workflow for DI and digital acquisition. I’ve demonstrated some of the latitude such a flat image can offer, even though it isn’t camera raw any longer.

Apple and Assimilate were among the first to provide access to RED’s raw color data. Since then, RED developed an SDK that has allowed many other NLE manufacturers access to the raw data through this spec. Now others, like Avid and Adobe, can open and manipulate RED files based on the camera raw data. This gives editors wide latitude over how the image can look, without being stuck with a “baked in” camera image as a starting point. It’s like editing from transferred film, yet having access to the original negative in the NLE. I’ve recently reviewed Avid Media Composer 5 and Adobe Premiere Pro CS5 and spent some time testing this out. Both do a very good job with native RED files, but my conclusion is still that an offline/online editing methodology works best for complex, long-form productions.

FCP’s Log and Transfer

Last year, I edited 90% of my projects with Final Cut Pro, so I’ve decided to revisit Apple’s “native” RED workflow with a fresh eye. FCP does not let you work directly with the actual .R3D camera files. Instead, RED files are imported via FCP’s Log and Transfer module. Here you have two options: a) import as native REDCODE (the .R3D file is copied and rewrapped with a QuickTime container); or b) import/transcode to an edit-friendly codec, like one of the ProRes codecs. During Log and Transfer, you may select one of several colorimetry presets or “as shot”. Once imported into FCP, you can’t access the source settings (as in Media Composer or Premiere Pro). Instead, the workflow is designed around Apple Color, where the tools are provided to once again access the camera raw color data.

A lot of the RED appeal is over the fact that the camera records 4K images. 4K refers to a frame size of 4096 x 2048 pixels (2:1 aspect ratio). The RED One camera is capable of various frame sizes, but 4K appeals to indie filmmakers as some sort of Holy Grail. That’s in spite of the fact that most feature film DI is done at 2K sizes and some films are even posted using HD video (1920×1080) as an intermediate step. Avid Media Composer 5 limits you to an HD frame, while Adobe Premiere Pro CS5 and After Effects CS5 will let you work at 4K. FCP doesn’t allow 4K, so the effective workaround is to downsample the 4K RED images to 2K (2048×1024). FCP and Color deal with this image size quite effectively and I/O hardware like the AJA KONA3 includes presets for 2K images. I like the idea of 4K at the camera, but I’m perfectly okay with 2K and HD in post.

Size and debayering

The downsample issue is confusing, because it affects image size and debayering – the process that turns raw data into RGB video. Unfortunately, RED hasn’t provided clear information as to what is really happening. The rule of thumb is that 2K images are downsampled as 1:1, while larger images use a 2:1 ratio. Since you have no control over the debayering settings in either Final Cut or Color, the belief expressed by some users is that RED’s own post tools, like REDCINE-X, yield better image quality. I haven’t seen anything that’s an issue in my own testing and some of the threads at RedUser would indicate that the results are comparable in head-to-head testing. You’ll have to judge for yourself.

If you are planning to post via this workflow, then it’s important to think about the right image size before production starts. If you shoot at 4K 2:1 (4096×2048), the resulting 2K 2:1 image (2048×1024) in FCP will either have to be center-cut (a blow-up with some cropping on the edges) to fit an HD (1920×1080) frame – or it will have to be displayed with a letterbox mask.

Color scales the 2K image in the Geometry room as it renders. Since the majority of producers using this workflow are mainly interested in a proper HD image (1920×1080), I would recommend that the original footage be recorded in either 4K 16:9 (4096×2304) or 4K HD 16:9 (3840×2160), aka “quad HD”. The former gives you a little wiggle room for minor reframing, while the latter is an even multiple and will provide the most accurate downsampled image.
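
Here’s the math on those framing choices – a little sketch that runs each recording size through the 2:1 downsample described above and shows how the result maps onto a 1920×1080 frame. The numbers are just arithmetic on the sizes mentioned in this article.

```python
sources = {
    "4K 2:1":     (4096, 2048),
    "4K 16:9":    (4096, 2304),
    "4K HD 16:9": (3840, 2160),
}
hd_w, hd_h = 1920, 1080

for name, (w, h) in sources.items():
    half_w, half_h = w // 2, h // 2            # the 2:1 downsample applied to 4K sources
    fill = max(hd_w / half_w, hd_h / half_h)   # scale needed to fill HD (>1.0 means a blow-up)
    print(f"{name}: downsamples to {half_w} x {half_h}, needs {fill:.3f}x scale to fill HD")
```

The 2:1 frame needs a slight blow-up (about 1.05x) to fill HD, 4K 16:9 leaves a little pad in both directions for reframing, and 4K HD 16:9 lands exactly on 1920×1080 – which matches the recommendation above.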

RED step-by-step with Final Cut Studio

Let’s take a look at the recommended Apple Final Cut Studio/RED workflow using an offline/online approach and camera raw files. Experienced RED owners who use FCP will be very familiar with this workflow. It’s also clearly described in RED’s FCP whitepaper. On the other hand, if you are about to approach your first RED project and have some trepidation about post, then this is for you. I’ll assume that you didn’t plunk down five grand for a RED Rocket accelerator card and don’t have the budget for a high-end finishing facility using Assimilate Scratch, Quantel Pablo, Avid DS or similar tools. In short, you are looking for the best way to leverage Apple Final Cut Studio and get the most out of your RED files.

Step 1: Download and install the RED Final Cut Studio Installer. This adds the QuickTime codec and the support modules for Final Cut Pro and Color. (The whitepaper is also included in this download.)

Step 2: Copy the RED camera files to your local hard drive array for editing. Back-up the files to other archive media and store in a secure location. (Avoid any illegal characters – like slashes, number signs, etc. – when you label folders.)

Step 3: Start a new FCP project. Use FCP’s Log and Transfer module to import the RED camera files. Set the L&T preferences to a target format of ProRes Proxy. Apply a color preset, like “daylight” if desired or leave “as shot”. This preset will be applied globally to all clips imported in this session.

Step 4: Edit your sequences as you normally would do. If you need to apply certain “looks” to satisfy the producer or client, use the FCP color correction tools for a temporary adjustment. Remember that this is offline editing. The goal is a good rough cut and ultimately an approved, “locked” picture cut.

Step 5: Once the cut is “locked”, use FCP’s Media Manager to generate a version of the final sequence for finishing. Run Media Manager and “create offline” to generate a new FCP project. Set the desired target sequence settings – most likely ProRes HQ or ProRes 4444 (1920×1080 24p 48kHz). Set handle lengths as desired.

Step 6: Open the new media-managed FCP project. Open the Log and Transfer tool. Change the L&T preferences to “native” and “as shot”. Select the master clips (media is currently off-line) and batch capture. The corresponding portions of these RED clips will now be re-imported as native files.

Step 7: Select the final sequence and “Send to Color”. Remember that all of the Color compatibility considerations still apply. Long sequences should be first broken down into shorter sequences. Speed ramps should be “baked in”. In short, do all the usual pre-flight preparation required by the FCP-Color roundtrip.

Step 8: Thanks to the RED Installer, Color has now gained a RED tab in the Primary In room. Camera raw adjustments include gamma, colorspace, temperature, tint, gains, ISO and more. This is similar to making camera raw adjustments to digital still photos in Photoshop. All clips with the native REDCODE codec can be modified by these settings. These changes are on a clip-by-clip basis, but you can copy-and-paste or drag the Primary In settings from one clip to multiple clips.

The rest of the color grading steps follow standard Color operation. Adjust the Geometry settings as desired, render and send back to FCP. There are no raw OLPF (optical low-pass filtering) controls for detail enhancement or sharpening within the RED tab. If you feel that the image is slightly soft, then apply some sharpening within the Color FX room.

It doesn’t really make a lot of difference whether you follow this approach or prep the files first and never return to the native .R3D files. Both methods work and result in great images. It really boils down to what works for you. The process isn’t as hard as people make it out to be. Jump in, test a bit first and then you’re ready to rock!

©2010 Oliver Peters