RED Post – the Easy Way

A commercial case study

Ever since the RED Digital Cinema Camera Company started to ship its innovative RED One camera, producers have been challenged to find the best way to post produce its footage. Detractors have slammed RED for a supposed lack of post workflows. This is simply wrong, since there are a number of solid ways to post RED footage. The trouble is that there isn’t a single best way, and the path you choose differs depending on your computing platform, NLE of choice and destination. Many RED proponents over-think the workflow and insist on full 4K, native camera raw post. In my experience that’s unnecessary for 99% of all projects, especially those destined for the web or TV screens.

Camera RAW

The RED One records images using a Bayer-pattern color filter array CMOS sensor, meaning that the data recorded is based on the intensity of red, green or blue light at each photosite on the sensor. Standard video cameras record images that permanently incorporate (or “bake in”) the colorimetry of the camera as the look of the final image. The RED One stores the colorimetry recorded in the field (white balance, color temperature, ISO rating, etc.) only as metadata, which can be nondestructively manipulated or even discarded completely in post. Most high-end DSLR still cameras use the same approach and can record either a camera raw image or a JPEG or TIFF that has the camera colorimetry “baked” into the picture. Shooting camera raw stills with a DSLR requires an application like Apple Aperture, Adobe Photoshop Lightroom or a similar image processing tool to generate final, color-corrected images from the stills you have shot.

Likewise, camera raw images from RED One require electronic processing to turn the Bayer pattern information into RGB video. Most of the typical image processing circuitry used in a standard HD video camera isn’t part of RED One, so these processes have to be applied in post. The amount of computation required means that this won’t happen in real-time and applying this processing requires rendering to “bake” the “look” into a final media file. Think of it as the electronic equivalent of a 35mm film negative. The film negative out of the camera rarely looks like your results after lab developing and film-to-tape transfer (telecine). RED One simply shifts similar steps into the digital realm. The beauty of RED One is that these steps can be done at the desktop level if you have the patience. Converting RED One’s camera raw images into useable video files involves the application of de-Bayering, adding colorimetry information, cropping, scaling, noise reduction and image sharpening.
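
To make the de-Bayer step a little more concrete, here’s a toy sketch in Python. It assumes a simple RGGB sensor layout and collapses each 2×2 cell into one RGB pixel, conceptually similar to a half-resolution de-Bayer. It is in no way RED’s actual algorithm.

import numpy as np

def debayer_half(mosaic):
    """mosaic: 2D array of raw sensor values in an RGGB Bayer layout.
    Returns an RGB image at half the mosaic's resolution."""
    r  = mosaic[0::2, 0::2]            # red photosites
    g1 = mosaic[0::2, 1::2]            # green photosites (even rows)
    g2 = mosaic[1::2, 0::2]            # green photosites (odd rows)
    b  = mosaic[1::2, 1::2]            # blue photosites
    g  = (g1 + g2) / 2.0               # average the two greens
    return np.dstack([r, g, b])

raw = np.random.rand(8, 8)             # stand-in for a tiny sensor crop
print(debayer_half(raw).shape)         # (4, 4, 3)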

Native workflow

I am not a big believer in native RED workflows, unless you post with an expensive system like Avid DS, Assimilate Scratch or Quantel. If you post with Apple Final Cut Studio, Adobe Creative Suite or Avid Media Composer, then the native workflow is largely a pain in the rear. “Native” means that you are working with some sort of reference or transcoded file during the creative editorial process. Because you are still dragging along 4K’s worth of data, playback tends to be sluggish at the exact point where an editor really wants to rock-n-roll. When you move to the online editing (finishing) phase, you have to go through extra steps to access the original camera raw media files (.R3D) and do any necessary final conversions. When cutting “native,” not all of the color metadata used with the file is recognized, so you may or may not see the DP’s intended “look” during the offline (creative) editing phase. For example, curves values aren’t passed in the QuickTime reference file.

In some cases, such as visual effects shoots, native post is totally impractical. As the editor (or later the colorist), you may determine one color setting for the video files, but the visual effects artist creates a different result, because he or she is also working natively with a set of camera raw files. You can easily end up in a situation where the effects shots don’t match the standard shots. Not only do they not match, but it will be difficult to make them match unless you go back to the camera raw information, which isn’t possible with final, rendered effects shots. For these and many other reasons, I’m not keen on the native workflow and will discuss an alternative approach.

Commercial post

I just wrapped up two national spots for Honda Generators with area production company Florida Film & Tape. Brad Fuller (director/director of photography) shot with a RED One and I worked the gig as editor, post supervisor and colorist. The RED One can be set for various frame rates, aspect ratios and frame sizes, and until recently most folks have been shooting at 4096×2048 – a 2:1 aspect ratio. Early camera software builds had issues with 16×9, but that appears to have been fixed, so our footage was recorded as 4096×2304 at 23.98fps. That’s a 16×9 slice of the sensor using 4096 pixels (4K) as the horizontal dimension.
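
For anyone checking the math, the numbers work out like this (a quick Python scratchpad, nothing more):

w, h = 4096, 2304
print(w / h, 16 / 9)            # both 1.777..., so the frame really is 16x9
print(w // 2, h // 2)           # 2048 x 1152, a 16x9 2K frame
print(w / 1920, h / 1080)       # ~2.13x oversampled relative to 1080p in each dimension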

As an aside, there is plenty of discussion on the web about pixel dimensions versus resolution. Our images looked fine at 2K and HD because of the benefit of starting with an oversampled image and downsampling it to a smaller size. When I actually extract a 4K TIFF for analysis and look at points of fine detail, like the texture on an actor’s face or blades of grass, I see a general, subtle “softness” that I attribute to the REDcode wavelet compression. It’s comparable to the results you get from many standard digital still photo cameras when viewed at 1:1 pixels (a 100% view). I don’t feel that full-size 4K stills look as nice as images from a high-end Nikon or Canon DSLR for print work; but that’s not my destination for this footage. I’m working in the TV world and our final spots were to be finished as HD (both 1080i and 720p) and NTSC (480i letterboxed). In that environment, the footage holds up quite well when compared with a 35mm film, F900 or VariCam commercial shoot.

The spots were shot on a stage and on location over the course of a week. The camera’s digital imaging tech (DIT) set up camera files on location and client, agency and director/DP worked out their looks based on the 720p video tap from the RED One to an HD video monitor. As with most tapeless media shoots, the media cards from the camera were copied to a set of two G-Tech FireWire drives as part of the on-set data wrangling routine. At this point all media was native .R3D and QuickTime reference files generated in-camera. The big advantage of the QuickTime reference files – and a part of the native workflow that IS quite helpful – is the fact that all folks on the set can review the footage. This allowed the client, agency and director to cull out the selected clips for use in editing. Think of it exactly like a film shoot. These are now your “circle” or “print” takes. Since I’m the “lab” in this scenario, it becomes very helpful to boil down the total of 250 clips shot to only 50 or so clips that I need to “process”.

Processing

This approach is similar to a film shoot with a best-light transfer of dailies, final correction in post and no retransferring of the film. The Honda production wrapped on a Friday and I did my processing on Saturday in time for a Monday edit. This is where the various free and low-cost RED tools come into play. RED Digital Cinema offers several post applications as free downloads. In addition, a number of users have developed their own apps – some free, some for purchase. My first step was to select all the RED clips in Clipfinder. This is a free app that you can use to a) select and review all RED media files in a given volume or folder, b) add comments to individual files and c) control the batch rendering of selected files.

The key application for me is RED Alert. The RED One generates color metadata in-camera and RED Alert can be used to review and alter this metadata. It can also be used to export single TIFF, DPX or rendered, self-contained QuickTime media files, as well as to generate new QuickTime reference files. The beauty is that updating color metadata or generating new reference files is a nearly instantaneous process. Since I am functioning in the role of a colorist at this point, it is important that I communicate what I am doing with the DP and/or director to make sure I don’t undo a look painstakingly created during the shoot.

With all due respect to DPs and DITs everywhere, I’m skeptical that the look everyone liked on an HD monitor during the shoot is really the best setting for an optimal result in post. There have been a number of evolving issues with RED One over successive camera builds. People have often ended up with less-pleasing results than they thought they were getting, simply because what they thought they were seeing on set wasn’t what was being recorded.

Three factors affect this: Color Space, Output LUT and ISO settings. Since color settings are simply metadata and don’t actually affect the raw recording, these are all just different ways to interpret the image. Unfortunately that’s a double-edged sword, because each of these settings has a lot of options that drastically change how the image appears. They also affect what you see on location and, if adjusted incorrectly, can cause the DP to under- or overexpose the image. My approach in post is generally to ignore the in-camera data and create my own grade in RED Alert. On this job, I set Color Space to REDspace and the Output LUT (look-up table) to Rec 709, the color space for HD video. From what I can tell, REDspace is RED’s modified, punchier version of Rec 709. These settings essentially tell RED Alert to interpret the camera raw image with REDspace values and convert those to Rec 709. Remember that my destination is TV anyway, so Rec 709 is really all I’m interested in at the end.
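
For reference, the Rec 709 transfer curve itself is published in ITU-R BT.709. The snippet below is just that standard encoding step applied to linear light values; how REDspace shapes the image before that point is RED’s own recipe and isn’t modeled here.

def rec709_oetf(light):
    """ITU-R BT.709 opto-electronic transfer function for linear light in 0-1."""
    if light < 0.018:
        return 4.5 * light
    return 1.099 * (light ** 0.45) - 0.099

for light in (0.0, 0.018, 0.18, 0.5, 1.0):
    print(light, round(rec709_oetf(light), 4))   # 18% gray lands at roughly 0.41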

Some folks recommend the Log settings, but I disagree. Log color settings are great for film and are a way of truncating a wider dynamic range into less space by “squeezing” the portion of the light values pertaining to highlights and shadows. The fallacy of this for TV – especially if you are working with FCP or Media Composer – is that these tools don’t employ log-to-linear image conversion, so there’s really no mathematically-accurate way to expand the actual values of this compressed dynamic range. Instead, I prefer to stay in Rec 709 and work with what I see in front of me.
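
To illustrate what that missing log-to-linear step would do, here is a generic log curve, deliberately not REDlog, Cineon or any real camera’s math, with an assumed three-decade range. Encoding squeezes highlights into the upper code values; the matching expansion is what FCP and Media Composer don’t apply.

import math

DECADES = 3.0   # assumed dynamic range squeezed into the 0-1 log signal

def lin_to_log(x):
    x = max(x, 10 ** -DECADES)                   # clip deep shadows to the curve's floor
    return (math.log10(x) + DECADES) / DECADES

def log_to_lin(y):
    return 10 ** (y * DECADES - DECADES)

mid_gray = 0.18
encoded = lin_to_log(mid_gray)
print(encoded, log_to_lin(encoded))              # ~0.75 in log, back to 0.18 linear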

ISO is another much-discussed setting. The RED One is nominally rated as ISO 320 (default). I really think it’s more like 200, because RED One doesn’t have the best low-light sensitivity. When you compare it with available-light shots from the Canon EOS 5D Mark II (for example, stills from Reverie), the Canon will blow away the RED One. The RED One images are especially noisy in the blue channel. You can bump up the ISO setting as high as 2000, but if you do this in camera (and don’t correct it in post), it really isn’t as pleasant as “pushing” film or even using a high-gain setting on an HD video camera.

On the other hand, there are some very nice examples of corrected low-light shots over at RedUser; however, additional post production filtering techniques were used to achieve these cleaner images. Clean-up in post is certainly no substitute for better lighting during the shoot. In reasonably well-lit evening shots, an ISO of 400 or 500 in RED Alert is still OK, but you do start to see noise in the darker areas of the image.

Pre-grading

The rub in all of this, when working with RED Alert, is that you have no output to a video display or scopes by which to accurately judge the image. You see it on your computer display, which is notoriously inaccurate. That’s an RGB display set to goodness-knows-what gamma value!  The only valid analysis tool is RED Alert’s histogram – so learn to use it. Since I am working this process as a “pre-grade” with the intent of final color grading later, my focus is to create a good starting point – not the final look of the shot. This means I will adjust the image within a safe range. In the case of these Honda spots, I increased the contrast and saturation with the intent that my later grading would actually involve a reduction of saturation for the desired appearance. Since my main tool is the histogram, I basically “stretched” the dynamic range of the displayed image to both ends of the histogram scale without clipping highlights or crushing shadows. I rendered new media and didn’t use the QuickTime reference files for post, which allowed me to apply a slight S-curve to my images. RED Alert lets you save grading presets, so even though you can only view one clip at a time, you can save and load presets to be applied to other clips, such as several takes of the same set-up.
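
Here’s the general shape of that pre-grade as a sketch: a percentile-based levels stretch plus a gentle S-curve. The percentile and strength numbers are arbitrary illustration values, and this is not a model of RED Alert’s internals.

import numpy as np

def pre_grade(img, low_pct=0.5, high_pct=99.5, s_amount=0.05):
    """img: float image scaled 0-1. Percentile and strength values are illustrative."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    x = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)   # stretch to fill the histogram
    return x - s_amount * np.sin(2.0 * np.pi * x)            # slight S-curve: darker shadows, lifted highlights

frame = np.random.rand(1080, 1920) * 0.6 + 0.2               # a flat, low-contrast stand-in frame
graded = pre_grade(frame)
print(frame.min(), frame.max(), graded.min(), graded.max())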

Clipfinder and RED Alert work beautifully together. You can simply click on a clip in Clipfinder and it will open in RED Alert. Tweak the color settings and you’re done. It’s just that simple, fast and easy. The bad news is that these tools are Intel Mac only – nothing for PowerPC Macs. If you are running Windows, then you have to rely on RED Cine for these same tasks. RED Cine is a stripped-down version of Scratch and has a lot of power, but I don’t find it as fast or straightforward as the various Mac tools.

Rendering media files

My premise is not to work within the native flow, so I still have to render media files that I’m going to use for the edit. There is no easy way around this, because the good/fast/cheap triad is in effect. (You can only pick two.) If you are doing this at the desktop level, you can either buy the most fire-breathing computer you can afford or you can wait the amount of time it takes to render. Period!

The Mac RED tools require Intel Macs, but my client owns a G5-based FCP suite. To work around this, I processed the RED files at another FCP facility nearby that was equipped with a quad-core Mac Pro. I rendered the files to ProResHQ, which the faster G5s can still play, even though the codec is optimized for Intel processors. In addition, our visual effects artist was using After Effects on a PC. His druthers were for uncompressed DPX image sequences, but once Apple released its QuickTime decoder for ProRes on Windows, he was able to work with the ProResHQ files without issue on his PC.

My Saturday was spent adjusting color on the 50 circle takes and then I let the Mac Pro render overnight. You can render media files in RED Alert, Clipfinder or RED Rushes (another free RED application), but all three are actually using RED Line – a command-line-driven rendering engine. Clipfinder and RED Rushes simply provide a front-end GUI and batch capabilities so the user doesn’t have to mess with the Mac command line controls. At this point, you set cropping, scaling and de-Bayer values. Choices made here become a trade-off between time and quality. Since I had a bit of time, I went with better quality settings, such as the “half-high” de-Bayer value. This gives you very good results in a downsampled 2K or HD image, but takes a little longer to render.
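
The batch idea is simple enough that you could script it yourself. The sketch below is only a generic stand-in: the command template is a placeholder, not actual RED Line syntax, so you’d substitute the real tool and its documented options.

import subprocess
from pathlib import Path

# Placeholder command template -- NOT real RED Line options.
CMD_TEMPLATE = ["/path/to/render_tool", "--in", "{clip}", "--out", "{outdir}"]

def batch_render(clip_dir, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for clip in sorted(Path(clip_dir).rglob("*.R3D")):
        cmd = [arg.format(clip=clip, outdir=out) for arg in CMD_TEMPLATE]
        print("rendering", clip.name)
        subprocess.run(cmd, check=True)        # render clips one after another

# batch_render("/Volumes/RED_MEDIA", "/Volumes/RENDERS")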

OK, so how much longer? My 50 clips equaled about 21 minutes of footage. This was 24fps (23.98) footage and rendering averaged about 1.2 to 1.5fps – about 16:1. Ultimately several hours, but not unreasonable for an overnight render on a quad-core. Certainly if I were working with one of the newest, maxed out, octo-core Intel Xeon “Nehalem” Mac Pros, then the rendering would be done in less time!
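
The back-of-the-envelope math looks like this:

frames = 21 * 60 * 23.976                 # ~21 minutes of 23.98fps footage
for rate in (1.2, 1.5):                   # observed render speeds in frames per second
    hours = frames / rate / 3600
    print(f"{rate} fps -> {hours:.1f} hours, about {23.976 / rate:.0f}:1 versus real time")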

On Sunday morning I checked the files and met with the director/DP to review the preliminary color grade. He was happy, so I was happy and could dupe a set of the files to hand off to the visual effects artist.

The edit

I moved back to the client’s G5 suite with the ProResHQ media. As a back-up plan, I brought along my MacBook Pro laptop – an Intel machine – just in case I had to access any additional native .R3D files during the edit. Turns out I did. Not for the edit, but for some extra plate shots that the effects artist needed, which hadn’t been identified as circle takes. Out came the laptop for a quick additional render. Like most tapeless media shoots, clips are generally short. My laptop rendered at a rate of about 0.8fps – not really that shabby compared to the tower. Rendering a few additional clips only took several minutes and then we were ready to rock.

I cut these spots on Apple Final Cut Pro, but understand that there’s nothing about this workflow that would have been significantly different using another NLE, especially Avid Media Composer. In that case, I would have simply rendered DNxHD files or image sequence files, instead of ProResHQ. Since I had rendered 1920×1080 ProResHQ files, I was cutting “offline” with finished-quality media. No real issues there, even on the G5. Our spots were very simple edits, so the main need was to work out the pacing and the right shots to lock picture and hand off clips for visual effects. All client review and approval was done long distance using Xprove. Once the client approved a cut, I sent an EDL to the visual effects artist, who had a duplicate drive full of the same ProResHQ media.

Finishing and final grade

The two spots each used a distinctly different creative approach. One spot was a white limbo studio shoot. The camera follows our lead actor walking in the foreground and activity comes to life in the background as he passes by. The inspiration for the look was a Tim McGraw music video in which McGraw wears a white shirt that is blown out and slightly glowing. Spot number two is all location and was intended to have a look reminiscent of the western Days of Heaven. In that film the colors are quite muted. In the white limbo spot, the effects not only involved manipulating the activity in the background, but creating mattes and the bloom effect for our foreground talent. Ultimately the decision was made to have a totally different look to the color and luminance of our foreground actor and the background elements and background actors. That sequence ended up with five layers to create each scene.

Spot number two wasn’t as complex, but required a rig-removal in nearly every scene. With these heavy VFX components, it seems obvious to me that working with native RED camera files would have been totally impractical. The advantage to native, camera raw files in grading is supposed to be that you have a greater correction range than with standard HD files. I had already done most of that, though, in my RED Alert “pre-grade”. There was very little advantage in returning to the native files at this point.

Another wrinkle in our job was the G5. In Apple’s current workflow, you only have direct native access to .R3D files in Apple Color. Most G5s didn’t have graphics display cards up to the task of working with ProResHQ high-def files and Color. I ran a few tests to see if that was even an option and Color just chugged! Instead, I did my final grades in FCP using Magic Bullet Colorista, which was more than capable for this grading. Furthermore, the white limbo spot required different grading on different video tracks and interactive adjustment of grading, opacity and blend modes. The background scene was graded with a lower luminance level and colors were desaturated and shifted to an overall blue tone. Our lead foreground actor was graded very bright with much higher saturation and natural color tones. In the end, it would have been hard to accomplish what I needed to do in Color anyway. FCP was actually the better environment in this case, but After Effects would have been the next best alternative.

Framing

One big advantage to RED is the ability to work with oversized images. I rendered my files at 1920×1080, but I did have to reframe one of our hero product shots. In that case, I simply re-rendered the file as 2K (2048×1152) and positioned it inside FCP’s 1920×1080 timeline. Again, this was a quick render on the laptop to generate the 2K ProResHQ clip.

DPs should consider this something that works to their advantage. When RED footage was commonly shot only at a 2:1 aspect ratio, there was some “bleed room” factored in for repositioning within a 16×9 project. Since shooting in 16×9 now means a 1:1 relationship of the camera file to the edited frame, DPs might actually be best off shooting with a slightly looser composition. This would allow the 4096×2304 file to be rendered to 2K (2048×1152) and the final position to be adjusted in the NLE. Final Cut Pro, Quantel, Premiere Pro, Autodesk Smoke and Avid DS can all handle 2K files. I understand that DPs might be reluctant to leave the final framing to someone else, but the fact of the matter is that this happens with every film-to-tape transfer of a 35mm negative. It’s easily controlled through proper communication, the use of registration/framing charts on set and ALWAYS keeping the DP in the loop.
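
The numbers behind that suggestion:

src_w, src_h = 2048, 1152             # 16x9 2K render from the 4096x2304 original
tl_w, tl_h = 1920, 1080               # HD timeline
print(src_w - tl_w, src_h - tl_h)     # 128 x 72 pixels of slide room
print((src_w - tl_w) / tl_w)          # roughly 6.7% extra width for repositioning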

Needless to say, most commercials still run as 4×3 on many TV stations and networks, so DPs should frame to protect for 4×3 cropping. This way “center-cut” conversions of HD masters retain the important part of the composition. Many shots composed for 16×9 will work fine in 4×3, but certain shots, like product shots, probably won’t. To avoid problems on the distribution end, compose your shots for both formats when possible and double shoot a shot when it’s not practical. The alternative is to only run letterboxed versions in standard def, but not every client has control of this down the line.
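
For the record, here’s how much of a 16×9 frame survives a 4×3 center cut:

hd_w, hd_h = 1920, 1080
cut_w = hd_h * 4 // 3                  # 1440 pixels wide
print(cut_w, hd_h)                     # the 1440 x 1080 center-cut region
print((hd_w - cut_w) // 2)             # 240 pixels cropped off each side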

Click to see the finished spots.

Final thoughts

The RED One is an innovative camera that has many converts on the production side. It doesn’t have to become a magilla in post if you treat it like digital “film” and design an efficient workflow that accommodates processing, editing, VFX and grading. I believe the honeymoon is waning for RED (in a good way). Serious users are leaving much of the unabashed enthusiasm behind and getting down to brass tacks. They are learning how to use the camera and the post tools in the most efficient and productive manner. There are many solutions; pick the one that’s best for you and stick with it.

Click here for additional RED-related posts on DigitalFilms.

Follow these links to some of the available RED resources:

Clipfinder
Crimson
Cineform
Rubber Monkey Software
R3D Data Manager
Imagine Products
RED’s free tools
MetaCheater
Assimilate
Avid
Quantel
Autodesk
Adobe

©2009 Oliver Peters