RED Post – the Easy Way


A commercial case study

Ever since the RED Digital Cinema Camera Company started to ship its innovative RED One camera, producers have been challenged to find the best way to post produce its footage. Detractors have slammed RED for a supposed lack of post workflows. That’s wrong, since there are a number of solid ways to post RED footage. The trouble is that there isn’t a single best way, and the path you choose differs depending on your computing platform, NLE of choice and destination. Many RED proponents over-think the workflow and insist on full 4K, native camera raw post. In my experience that’s unnecessary for 99% of all projects, especially those destined for the web or TV screens.


Camera RAW

The RED One records images using a Bayer-pattern color filter array CMOS sensor, meaning that the data recorded is based on the intensity of red, green or blue light at each sensor pixel location. Standard video cameras record images that permanently incorporate (or “bake in”) the colorimetry of the camera as the look of the final image. The RED One stores colorimetry data recorded in the field for white balance, color temperature, ISO rating, etc. only as a metadata software file that can be nondestructively manipulated or even discarded completely in post. Most high-end DSLR still cameras use the same approach and can record either a camera raw image or a JPEG or TIFF that would have camera colorimetry “baked” into the picture. Shooting camera raw stills with a DSLR requires an application like Apple Aperture or Adobe Photoshop Lightroom or other similar image processing tools to generate final, color-corrected images from the stills you have shot.

Likewise, camera raw images from RED One require electronic processing to turn the Bayer pattern information into RGB video. Most of the typical image processing circuitry used in a standard HD video camera isn’t part of RED One, so these processes have to be applied in post. The amount of computation required means that this won’t happen in real-time and applying this processing requires rendering to “bake” the “look” into a final media file. Think of it as the electronic equivalent of a 35mm film negative. The film negative out of the camera rarely looks like your results after lab developing and film-to-tape transfer (telecine). RED One simply shifts similar steps into the digital realm. The beauty of RED One is that these steps can be done at the desktop level if you have the patience. Converting RED One’s camera raw images into useable video files involves the application of de-Bayering, adding colorimetry information, cropping, scaling, noise reduction and image sharpening.
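To make the de-Bayering step concrete, here’s a toy sketch (Python with NumPy) of the general idea: the sensor records one color per photosite, and post has to rebuild full RGB by interpolation. This naive bilinear version is for illustration only – RED’s actual de-Bayer algorithms are far more sophisticated, and the noise reduction and sharpening steps mentioned above aren’t modeled here.

```python
import numpy as np

def make_bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: keep one color sample per photosite."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def debayer_bilinear(mosaic):
    """Naive de-Bayer: rebuild each RGB channel by averaging its known
    samples over a 3x3 neighborhood (edges wrap, for simplicity)."""
    h, w = mosaic.shape
    masks = np.zeros((3, h, w), dtype=bool)
    masks[0, 0::2, 0::2] = True   # R
    masks[1, 0::2, 1::2] = True   # G
    masks[1, 1::2, 0::2] = True   # G
    masks[2, 1::2, 1::2] = True   # B
    out = np.zeros((h, w, 3))
    for c in range(3):
        samples = np.where(masks[c], mosaic, 0.0)
        weights = masks[c].astype(float)
        ssum = np.zeros((h, w))
        wsum = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ssum += np.roll(samples, (dy, dx), axis=(0, 1))
                wsum += np.roll(weights, (dy, dx), axis=(0, 1))
        out[..., c] = ssum / np.maximum(wsum, 1e-9)
    return out

# A flat gray frame survives the round trip, since there is no detail to alias
rgb = np.full((8, 8, 3), 0.5)
rec = debayer_bilinear(make_bayer_mosaic(rgb))
```

Even this crude version hints at why real footage goes slightly soft: two thirds of every channel is interpolated, not measured.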


Native workflow

I am not a big believer in native RED workflows, unless you post with an expensive system, like Avid DS, Assimilate Scratch or Quantel. If you post with Apple Final Cut Studio, Adobe Creative Suite or Avid Media Composer, then the native workflow is largely a pain in the rear. “Native” means that during the creative editorial process you work with reference files that still point back to the original camera raw media, rather than transcoded copies. Because you are still dragging along the full 4K’s worth of data, playback tends to be sluggish at the exact point where an editor really wants to rock-n-roll. When you move to the online editing (finishing) phase, you have to go through extra steps to access the original, camera raw media files (.R3D) and do any necessary final conversions. When cutting “native”, not all of the color metadata used with the file is recognized, so you may or may not see the DP’s intended “look” during the offline (creative) editing phase. For example, the application of curves values isn’t passed in the QuickTime reference file.

In some cases, such as visual effects shoots, native post is totally impractical. As the editor (or later the colorist), you may determine one color setting for the video files; but the visual effects artist creates a different result, because he or she is also working natively with a set of camera raw files. You can easily end up in a situation where the effects shots don’t match the standard shots. Worse, it will be difficult to make them match, because doing so means going back to the camera raw information – which isn’t possible with final, rendered effects shots. For these and many other reasons, I’m not keen on the native workflow and will discuss an alternative approach.


Commercial post

I just wrapped up two national spots for Honda Generators with area production company, Florida Film & Tape. Brad Fuller (director/director of photography) shot with a RED One and I worked the gig as editor, post supervisor and colorist. The RED One can be set for various frame rates, aspect ratios and frame sizes and until recently, most folks have been shooting at 4096×2048 – a 2:1 aspect ratio. Early camera software builds had issues with 16×9, but that appears to have been fixed, so our footage was recorded as 4096×2304 at 23.98fps. That’s a 16×9 slice of the sensor using 4096 pixels (4K) as the horizontal dimension.

As an aside, there is plenty of discussion on the web about pixel dimensions versus resolution. Our images looked fine at 2K and HD because of the benefits of the oversampled starting point and downsampling that to a smaller size. When I actually extract a 4K TIFF for analysis and look at points of color detail, like the texture on an actor’s face or blades of grass, I see a general, subtle “softness” that I attribute to the REDcode wavelet compression. It’s comparable to the results you get from many standard digital still photo cameras when viewed at 1:1 pixels (a 100% view). I don’t feel that full-size 4K stills look as nice as images from a high-end Nikon or Canon DSLR for print work; but, that’s not my destination for this footage. I’m working in the TV world and our final spots were to be finished as HD (both 1080i and 720p) and NTSC (480i letterboxed). In that environment, the footage holds up quite well when compared with a 35mm film, F900 or VariCam commercial shoot.

The spots were shot on a stage and on location over the course of a week. The camera’s digital imaging tech (DIT) set up camera files on location and client, agency and director/DP worked out their looks based on the 720p video tap from the RED One to an HD video monitor. As with most tapeless media shoots, the media cards from the camera were copied to a set of two G-Tech FireWire drives as part of the on-set data wrangling routine. At this point all media was native .R3D and QuickTime reference files generated in-camera. The big advantage of the QuickTime reference files – and a part of the native workflow that IS quite helpful – is the fact that all folks on the set can review the footage. This allowed the client, agency and director to cull out the selected clips for use in editing. Think of it exactly like a film shoot. These are now your “circle” or “print” takes. Since I’m the “lab” in this scenario, it becomes very helpful to boil down the total of 250 clips shot to only 50 or so clips that I need to “process”.


Processing

This approach is similar to a film shoot with a best-light transfer of dailies, final correction in post and no retransferring of the film. The Honda production wrapped on a Friday and I did my processing on Saturday in time for a Monday edit. This is where the various free and low-cost RED tools come into play. RED Digital Cinema offers several post applications as free downloads. In addition, a number of users have developed their own apps – some free, some for purchase. My first step was to select all the RED clips in Clipfinder. This is a free app that you can use to a) select and review all RED media files in a given volume or folder, b) add comments to individual files and c) control the batch rendering of selected files.

The key application for me is RED Alert. The RED One generates color metadata in-camera and RED Alert can be used to review and alter this metadata. It can also be used to export single TIFF, DPX or rendered, self-contained QuickTime media files, as well as to generate new QuickTime reference files. The beauty is that updating color metadata or generating new reference files is a nearly instantaneous process. Since I am functioning in the role of a colorist at this point, it is important that I communicate what I am doing with the DP and/or director to make sure I don’t undo a look painstakingly created during the shoot.

With all due respect to DPs and DITs everywhere, I’m skeptical that the look everyone liked on an HD monitor during the shoot is really the best setting to get an optimal result in post. There have been a number of evolving issues with RED One over successive camera builds. People have often ended up with less-pleasing results than they thought they were getting, simply because what they saw on set wasn’t what was being recorded.


Three factors affect this: Color Space, Output LUT and ISO settings. Since color settings are simply metadata and don’t actually affect the raw recording, these are all just different ways to interpret the image. Unfortunately that’s a double-edged sword, because each of these settings has a lot of options that drastically change how the image appears. They also affect what you see on location and, if adjusted incorrectly, can cause the DP to under or overexpose the image. My approach in post is generally to ignore the in-camera data and create my own grade in RED Alert. On this job, I set Color Space to REDspace and the Output LUT (look-up table) to Rec 709. The latter is the color space for HD video. From what I can tell, REDspace is RED’s modified and punchier version of Rec 709. These settings essentially tell RED Alert to interpret the camera raw image with REDspace values and convert those to Rec 709. Remember that my destination is TV anyway, so Rec 709 is really all I’m going to be interested in at the end.
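For the curious, the Rec 709 curve itself is published math. This little function is an illustration of the standard Rec 709 opto-electronic transfer an Output LUT is aiming the image at – it is not RED Alert’s internal code.

```python
def rec709_oetf(lin):
    """Rec. 709 opto-electronic transfer function: scene-linear light
    in [0, 1] to a video code value. Linear near black, a ~0.45-power
    curve everywhere else."""
    if lin < 0.018:
        return 4.5 * lin
    return 1.099 * lin ** 0.45 - 0.099

mid_gray = rec709_oetf(0.18)  # 18% gray lands around code 0.41
```

That mid-gray result is why properly exposed, ungraded Rec 709 images look the way they do: scene mid-tones sit just above the middle of the signal range.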

Some folks recommend the Log settings, but I disagree. Log color settings are great for film and are a way of truncating a wider dynamic range into less space by “squeezing” the portion of the light values pertaining to highlights and shadows. The fallacy of this for TV – especially if you are working with FCP or Media Composer – is that these tools don’t employ log-to-linear image conversion, so there’s really no mathematically-accurate way to expand the actual values of this compressed dynamic range. Instead, I prefer to stay in Rec 709 and work with what I see in front of me.
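To show what a proper log pair involves, here is the classic Cineon film-scan convention as an example (RED’s own log curves differ in the details). The point is that a log encode is only half the story – the NLE must apply the matching expansion, which FCP and Media Composer don’t.

```python
import math

def lin_to_cineon(lin):
    """Cineon-style log encode: roughly 300 code values (out of 1023)
    per decade of exposure, with code 685 pinned to reference white."""
    return 685 + 300 * math.log10(max(lin, 1e-6))

def cineon_to_lin(code):
    """The matching log-to-linear expansion a finishing tool would
    need to apply before the values mean anything photometrically."""
    return 10 ** ((code - 685) / 300)

round_trip = cineon_to_lin(lin_to_cineon(0.18))  # recovers 0.18
```

Skip the `cineon_to_lin` half, as those NLEs effectively do, and the "squeezed" highlights and shadows stay squeezed – which is exactly the objection above.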

ISO is another much-discussed setting. The RED One is nominally rated as ISO 320 (default). I really think it’s more like 200, because RED One doesn’t have the best low-light sensitivity. When you compare it with available-light shots from the Canon EOS 5D Mark II (for example, stills from Reverie), the Canon will blow away the RED One. The RED One images are especially noisy in the blue channel. You can bump up the ISO setting as high as 2000, but if you do this in camera (and don’t correct it in post), it really isn’t as pleasant as “pushing” film or even using a high-gain setting on an HD video camera.

On the other hand, there are some very nice examples of corrected low-light shots over at RedUser; however, additional post production filtering techniques were used to achieve these cleaner images. Clean-up in post is certainly no substitute for better lighting during the shoot. In reasonably well-lit evening shots, an ISO of 400 or 500 in RED Alert is still OK, but you do start to see noise in the darker areas of the image.


Pre-grading

The rub in all of this, when working with RED Alert, is that you have no output to a video display or scopes by which to accurately judge the image. You see it on your computer display, which is notoriously inaccurate. That’s an RGB display set to goodness-knows-what gamma value!  The only valid analysis tool is RED Alert’s histogram – so learn to use it. Since I am working this process as a “pre-grade” with the intent of final color grading later, my focus is to create a good starting point – not the final look of the shot. This means I will adjust the image within a safe range. In the case of these Honda spots, I increased the contrast and saturation with the intent that my later grading would actually involve a reduction of saturation for the desired appearance. Since my main tool is the histogram, I basically “stretched” the dynamic range of the displayed image to both ends of the histogram scale without clipping highlights or crushing shadows. I rendered new media and didn’t use the QuickTime reference files for post, which allowed me to apply a slight S-curve to my images. RED Alert lets you save grading presets, so even though you can only view one clip at a time, you can save and load presets to be applied to other clips, such as several takes of the same set-up.
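Conceptually, that histogram "stretch" is simple levels math. Here’s a sketch of the idea; the percentile end-points are my own illustrative choice, not a RED Alert control.

```python
import numpy as np

def pre_grade_stretch(img, low_pct=0.5, high_pct=99.5):
    """Stretch the displayed range toward both ends of the histogram.
    Percentile end-points protect isolated highlight/shadow pixels
    from being the thing that limits the stretch."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

# A flat, low-contrast ramp (0.2-0.6) now fills the full 0-1 range
flat = np.linspace(0.2, 0.6, 1000)
stretched = pre_grade_stretch(flat)
```

The mid-tones stay centered while the ends of the histogram move out to the rails – a good starting point for the real grade later, which is the whole idea of the pre-grade.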

Clipfinder and RED Alert work beautifully together. You can simply click on a clip in Clipfinder and it will open in RED Alert. Tweak the color settings and you’re done. It’s just that simple, fast and easy. The bad news is that these tools are Mac Intel only. Nothing for Power PCs. If you are running Windows, then you have to rely on RED Cine for these same tasks. RED Cine is a stripped down version of Scratch and has a lot of power, but I don’t find it as fast or straightforward as the various Mac tools.


Rendering media files

My premise is not to work within the native flow, so I still have to render media files that I’m going to use for the edit. There is no easy way around this, because the good/fast/cheap triad is in effect. (You can only pick two.) If you are doing this at the desktop level, you can either buy the most fire-breathing computer you can afford or you can wait the amount of time it takes to render. Period!

The Mac RED tools require Intel Macs, but my client owns a G5-based FCP suite. To work around this, I processed the RED files at another FCP facility nearby that was equipped with a quad-core Mac Pro. I rendered the files to ProResHQ, which the faster G5s can still play, even though this codec is optimized for Intels. In addition, our visual effects artist was using After Effects on a PC. His druthers were for uncompressed DPX image sequences, but once Apple released its QuickTime decoder for ProRes on Windows, he was able to work with the ProResHQ files without issue on his PC.

My Saturday was spent adjusting color on the 50 circle takes and then I let the Mac Pro render overnight. You can render media files in RED Alert, Clipfinder or RED Rushes (another free RED application), but all three are actually using RED Line – a command-line-driven rendering engine. Clipfinder and RED Rushes simply provide a front-end GUI and batch capabilities so the user doesn’t have to mess with the Mac command line controls. At this point, you set cropping, scaling and de-Bayer values. Choices made here become a trade-off between time and quality. Since I had a bit of time, I went with better quality settings, such as the “half-high” de-Bayer value. This gives you very good results in a downsampled 2K or HD image, but takes a little longer to render.

OK, so how much longer? My 50 clips equaled about 21 minutes of footage. This was 24fps (23.98) footage and rendering averaged about 1.2 to 1.5fps – about 16:1. Ultimately several hours, but not unreasonable for an overnight render on a quad-core. Certainly if I were working with one of the newest, maxed out, octo-core Intel Xeon “Nehalem” Mac Pros, then the rendering would be done in less time!
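That 16:1 figure is just the frame-rate ratio, and the overnight estimate falls right out of it (using the best-case 1.5fps observed on this job):

```python
# Back-of-the-envelope render time for the Honda job
fps_source = 23.98        # shooting frame rate
render_fps = 1.5          # observed best-case render speed on the quad-core
minutes_of_footage = 21   # the 50 circle takes

frames = minutes_of_footage * 60 * fps_source   # ~30,000 frames
hours = frames / render_fps / 3600              # between 5 and 6 hours
ratio = fps_source / render_fps                 # render time vs. real time, ~16:1
```

At the slower 1.2fps end of the range it stretches toward seven hours – either way, comfortably an overnight job.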

On Sunday morning I checked the files and met with the director/DP to review the preliminary color grade. He was happy, so I was happy and could dupe a set of the files to hand off to the visual effects artist.


The edit

I moved back to the client’s G5 suite with the ProResHQ media. As a back-up plan, I brought along my MacBook Pro laptop – an Intel machine – just in case I had to access any additional native .R3D files during the edit. Turns out I did. Not for the edit, but for some extra plate shots that the effects artist needed, which hadn’t been identified as circle takes. Whip out the laptop and a quick additional render. Like most tapeless media shoots, clips are generally short. My laptop rendered at a rate of about 0.8fps – not really that shabby compared to the tower. Rendering a few additional clips only took several minutes and then we were ready to rock.

I cut these spots on Apple Final Cut Pro, but understand that there’s nothing about this workflow that would have been significantly different using another NLE, especially Avid Media Composer. In that case, I would have simply rendered DNxHD files or image sequence files, instead of ProResHQ. Since I had rendered 1920×1080 ProResHQ files, I was cutting “offline” with finished-quality media. No real issues there, even on the G5. Our spots were very simple edits, so the main need was to work out the pacing and the right shots to lock picture and hand off clips for visual effects. All client review and approval was done long distance using Xprove. Once the client approved a cut, I sent an EDL to the visual effects artist, who had a duplicate drive full of the same ProResHQ media.


Finishing and final grade

The two spots each used a distinctly different creative approach. One spot was a white limbo studio shoot. The camera follows our lead actor walking in the foreground and activity comes to life in the background as he passes by. The inspiration for the look was a Tim McGraw music video in which McGraw wears a white shirt that is blown out and slightly glowing. Spot number two is all location and was intended to have a look reminiscent of the western Days of Heaven. In that film the colors are quite muted. In the white limbo spot, the effects not only involved manipulating the activity in the background, but creating mattes and the bloom effect for our foreground talent. Ultimately the decision was made to have a totally different look to the color and luminance of our foreground actor and the background elements and background actors. That sequence ended up with five layers to create each scene.

Spot number two wasn’t as complex, but required a rig-removal in nearly every scene. With these heavy VFX components, it seems obvious to me that working with native RED camera files would have been totally impractical. The advantage to native, camera raw files in grading is supposed to be that you have a greater correction range than with standard HD files. I had already done most of that, though, in my RED Alert “pre-grade”. There was very little advantage in returning to the native files at this point.


Another wrinkle in our job was the G5. In Apple’s current workflow, you only have direct native access to .R3D files in Apple Color. Most G5s didn’t have graphics display cards up to the task of working with ProResHQ high-def files and Color. I ran a few tests to see if that was even an option and Color just chugged! Instead, I did my final grades in FCP using Magic Bullet Colorista, which was more than capable for this grading. Furthermore, the white limbo spot required different grading on different video tracks and interactive adjustment of grading, opacity and blend modes. The background scene was graded with a lower luminance level and colors were desaturated and shifted to an overall blue tone. Our lead foreground actor was graded very bright with much higher saturation and natural color tones. In the end, it would have been hard to accomplish what I needed to do in Color anyway. FCP was actually the better environment in this case, but After Effects would have been the next best alternative.


Framing

One big advantage to RED is the ability to work with oversized images. I rendered my files at 1920×1080, but I did have to reframe one of our hero product shots. In that case, I simply re-rendered the file as 2K (2048×1152) and positioned it inside FCP’s 1920×1080 timeline. Again, this was a quick render on the laptop to generate the 2K ProResHQ clip.

DPs should consider this as something that works to their advantage. When RED footage was commonly only shot at a 2:1 aspect ratio, there was some “bleed-room” factored in for repositioning within a 16×9 project. Since shooting in 16×9 now means a 1:1 relationship of the camera file to the edited frame, DPs might actually be best off to shoot with a slightly looser composition. This would allow the 4096×2304 file to be rendered to 2K (2048×1152) and then the final position would be adjusted in the NLE. Final Cut Pro, Quantel, Premiere Pro, Autodesk Smoke and Avid DS can all handle 2K files. I understand that DPs might be reluctant about leaving the final framing to someone else, but the fact of the matter is that this happens with every film-to-tape transfer of a 35mm negative. It’s easily controlled through proper communication, the use of registration/framing charts on set and ALWAYS keeping the DP in the loop.
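The reposition headroom is easy to quantify. A 2K render floated inside an HD timeline leaves a fixed number of pixels to slide in each direction:

```python
# How much reposition room a 2K render leaves inside an HD timeline
timeline_w, timeline_h = 1920, 1080   # HD project frame
render_w, render_h = 2048, 1152       # 2K render of the 4096x2304 original

slide_x = (render_w - timeline_w) // 2   # pixels of slide left or right
slide_y = (render_h - timeline_h) // 2   # pixels of slide up or down
```

That’s 64 pixels of horizontal trim and 36 vertical in either direction – not a re-frame, but enough to fix a composition without re-rendering.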

Needless to say, most commercials still run as 4×3 on many TV stations and networks, so DPs should frame to protect for 4×3 cropping. This way “center-cut” conversions of HD masters retain the important part of the composition. Many shots composed for 16×9 will work fine in 4×3, but certain shots, like product shots, probably won’t. To avoid problems on the distribution end, compose your shots for both formats when possible and double shoot a shot when it’s not practical. The alternative is to only run letterboxed versions in standard def, but not every client has control of this down the line.


Click to see the finished spots.

Final thoughts

The RED One is an innovative camera that has many converts on the production side. It doesn’t have to become a megillah in post if you treat it like digital “film” and design an efficient workflow that accommodates processing, editing, VFX and grading. I believe the honeymoon is waning for RED (in a good way). Now serious users are leaving much of the unabashed enthusiasm behind and are getting down to brass tacks. They are learning how to use the camera and the post tools in the most efficient and productive manner. There are many solutions, but pick the one best for you and stick to it.

Click here for additional RED-related posts on DigitalFilms.

Follow these links to some of the available RED resources:

Clipfinder

Crimson

Cineform

Rubber Monkey Software

R3D Data Manager

Imagine Products

RED’s free tools

MetaCheater

Assimilate

Avid

Quantel

Autodesk

Adobe

©2009 Oliver Peters

Scare Zone


Anatomy of posting a digital, indie feature film


This blog is named Digital Films and today’s post is definitely in keeping with that title. I wrapped up last year cutting an indie feature, called Scare Zone – a comedy/horror – or horror/comedy – film that is the brainchild of writer/director Jon Binkowski. Jon and I have worked on projects for years. His forte is creative design for theme parks and although he has written and directed a number of short films for park attractions, Scare Zone was his first full-length, dramatic feature film. The story takes place in a seasonal, Halloween-style, haunted house attraction. Our cast is an ensemble of young folks who’ve taken part-time jobs at the attraction for its short run; but, it turns out that someone is actually killing people at the Scare Zone.


Like all good low-budget films, Scare Zone benefited from good timing. Namely, that Jon was able to mount the production at Universal Studios Florida right after their annual Halloween Horror Nights park events. Some of the attraction sets are constructed in the soundstages and Scare Zone was able to take advantage of these during the window between the end of Halloween Horror Nights and the time when the sets were scheduled to be destroyed for another year. One key partner in this endeavor was area producer, Ben Kupfer, who produced and co-edited Scare Zone. As a low-budget film targeted for DVD distribution, Jon and Ben opted to shoot the film digitally, relying on two Sony XDCAM-EX1 and EX3 cameras for the look of this film.



Straight to the cut


Scare Zone started production in November and I’ll have to say that I have never worked on a film this fast before. I signed on as editor and colorist and started my first cut a week or so after taping commenced. Since this was a digital feature, we opted to cut this film natively (using the original compressed HD format from the camera) and not follow the more standard offline/online approach. Each day’s worth of shooting from the A and B cameras – housed on SxS cards from the EX cameras – was backed up to two Western Digital drives at the end of the day. Imagine Products’ ShotPut software was used, because this offered copy and verification to multiple drives and the ability to add some improved organization, such as adding tape name prefixes to files. Once files were backed up, the drives were sent over to the cutting room and media loaded to our storage array.


Although I’ve cut a lot of long form projects with Apple Final Cut Pro, this was actually the first dramatic feature (not including documentaries) that I’ve cut start-to-finish on FCP. Since I was cutting natively, we used the standard XDCAM import routine, which imports and rewraps the 1920×1080, 23.98fps, 35Mbps VBR MPEG2 files from the EX cameras into QuickTime media files. We also recorded double-system broadcast WAVE files for back-up audio, but only accessed these for a few lines – notably, when the footage was shot on GlideCam and the camera was untethered from any mics. In our native workflow, I was always cutting with final-quality footage and the quad-core Mac Pro had no trouble keeping up with the footage.


Both cameras accrued about 26 hours of combined raw footage. I finished my first cut in the equivalent of 15 working days. Basically, I was done in time for the wrap party! I have to point out that I’ve never cut a film this quickly and although FCP, tapeless media and/or native editing might have been a factor, I certainly have to extend kudos to an organized shoot, good directing and a talented cast. Jon did a lot of cutting in his head as he directed. As an editor, I prefer that a director not do too much of this, but I generally found that I had as much coverage as needed on Scare Zone. Our ensemble cast was on the money, which limited the number of takes. I rarely had more than four takes and most were good. This meant solid performances and good continuity, which lets a film like this almost cut itself. Since the storyline has to progress in a linear fashion over a 3-day period, there wasn’t a huge need to veer from the chronology of the original script.



Locking the picture


After turning in a solid first cut, Jon and Ben took a pass at it. Ben is also an experienced editor, so this gave them a chance to review my cut and modify it as needed. My mantra is to cut tighter, so a lot of their tweaks came in opening up some of the cuts. This was less of a stylistic difference and more because Jon’s pace was musically driven. Jon already had a score in his head, which required some more breath in certain scenes. In addition, a few ad hoc changes to the script had been made on the set. I was cutting in parallel to the production, so these changes needed to be incorporated, since they weren’t in my cut. In the end, after a few weeks of tweaking and some informal, “friends and family” focus screenings, the picture was locked – largely reflecting the structure of the first cut.


A locked picture meant we could move on to music, visual effects, sound design/mix and color grading. I tackled the latter. Working in a native form meant we could go straight to finishing – no “uprez” step required. I’ve had my ups and downs with Apple Color, but it was an ideal choice for Scare Zone. I split my timeline into 6 reels that were sent to Color for grading. Splitting up the timeline into reels of fewer than 200 edits is a general recommendation for long form projects sent to Color. Once in Color, I set up my project to render in ProResHQ. Color renders new media and these rendered clips with “baked in” color grading become the linked video files when you roundtrip back to FCP. Thus your final, graded FCP timeline will be linked to the Color renders and not the original camera files. This effectively gives you an additional level of redundancy, because you have duplicated the clips in the actual cut, in addition to the files imported from the camera.



Grading


The Sony EX cameras can be preset to a number of different Cine-Gamma style settings. At the front end, Ben and DP Mike Gluckman decided on a setting that was generally brighter and flatter than the desired, final look. This is a preset intended as an optimal starting point when post-production grading is to be used. It gives you the advantage of using the lower cost EX cameras in much the same way as you would use Sony’s expensive F900 or F23 CineAlta cameras. Unfortunately, there was no budget for film lens adapters or prime lenses, so standard video zooms were used. Nevertheless, the look was very filmic, given the rather tight quarters of our haunted attraction sets.


During grading, I generally brought levels down, making most of the film darker and less saturated. Typically, I would set a slight S-curve value in Color’s Primary In “room” as my first basic setting. This would increase the contrast of the picture and counteract the flatness of the Cine-Gamma setting used in camera. Effectively you are working to increase dynamic range in a way similar to film negative. This is a key issue, because the MPEG2 compression used by Sony in its XDCAM and EX cameras is not kind when you have to increase gain and gamma. By doing so, you tend to raise the noise floor and at times start to see the compression artifacts. If you start brighter and lower the “pedestal”, “lift” or “black” settings as you grade, you will end up with a better look and don’t run the risks caused by raising gamma. This is in keeping with the “expose to the right” philosophy that most digital shooters try to adhere to these days.
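For those who like to see the math, here’s one simple way to express a gentle S-curve. This is an illustration of the shape, not Color’s actual Primary In curve – the blend strength and the smoothstep basis are my own choices.

```python
def s_curve(x, strength=0.6):
    """Gentle S-curve on a 0-1 signal: deepens tones below mid-gray,
    brightens tones above it, and leaves black, white and mid-gray
    pinned in place."""
    smooth = x * x * (3 - 2 * x)        # smoothstep: steepest slope at 0.5
    return (1 - strength) * x + strength * smooth
```

Because the end-points stay pinned, contrast increases without clipping – which is exactly the counter-move against a flat Cine-Gamma starting point.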


The proof of the look for us was at the first official screening held in one of Universal’s digital HD theaters. Scare Zone was encoded to Blu-ray and run on a 2K Christie projector for our cast, crew, families and investors. The grading done to the Panasonic reference monitor held up quite well on a large theater screen. In the end, Scare Zone went from the first day of shooting to this “premiere” in about 3 ½ months. This is easily 1/3 the time that most films take for the same processes. Like all films, the next phase is sales and distribution – often the hardest. For now though, everyone involved was very happy with the positive response enjoyed at these screenings. Mixing up horror and comedy can be quite dicey, but judging by the audience reactions, it seems to have been pulled off! In any case, Scare Zone is another example to show that digital production and desktop tools have come of age, when it comes to entertaining the traditional film audience.


Click here for more on Scare Zone.


© 2009 Oliver Peters

Music Video Fun


The internet is a great place for discovering new contacts. Music video post lets you extend your creativity. This intersection brought me in contact with G.No, a fellow Final Cut Pro enthusiast. G.No is a French R-n-B artist, whose soulful, Latin hip-hop riffs complement his moniker of “The Latin Bird”. After a few e-mail exchanges, I was off and running to color-grade three of his current music videos – long distance, to boot!

 


 

Someone 2 Luv Me Before and After Frames


Two of the videos had already been edited, and the third was left to me. The direction here was to make them look better than they started and to try to make each look different from the others. Like many such videos, the footage was shot with a consumer/prosumer digital camera, so the first objective was to achieve a less video-like – and more film-like – “look”. There’s plenty of inspiration at places like YouTube, so after G.No offered suggestions of other videos that he liked, I had a sense of the direction to take.


blg_mv4

 

En Mi Vida Before and After Frames


I have written about grading in Final Cut Pro before, and this project fit that bill. Basic color-grading was done with the FCP 3-way and Red Giant Software’s Magic Bullet Colorista plug-ins. Both can give you good results, but Colorista offers an additional exposure control that works a lot like the exposure slider in photo processing applications. Colorista also lets you mask areas of the image and lighten, darken or change the balance inside or outside of the windowed area. You can use several instances of Colorista in order to treat various areas separately within the frame. I also use the Face Light plug-in for the same reason. My main purpose with Face Light is to brighten faces, as the name implies.


blg_mv3

blg_mv8

 

Someone 2 Luv Me Before and After Frames plus filter pane


Tools like Colorista allow you to shape the lighting of a flatter image, but one other useful tool is a vignette filter. There are variations of this filter available in different packages, but the general purpose is to darken the outer edge of the image. This mimics a distortion that lens manufacturers work hard to eliminate. Used creatively, this further helps to shape the look of a scene. Not all vignette filters work the same, because many use blend modes to darken the image’s edges. Using a “darken” or “multiply” mode rather than “normal” yields different results that change with the brightness of the scene itself.
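Why the blend mode matters is easier to see as pixel math. The sketch below, in Python/NumPy, builds a simple radial falloff mask and applies it in “multiply” mode; it is an illustration of the general technique, not the internals of any particular vignette plug-in, and the mask shape and `strength` value are my own assumptions:

```python
import numpy as np

def vignette_mask(h, w, strength=0.6):
    """Radial falloff mask: 1.0 at the center, darker toward the edges."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalized distance from center (corners land near 1.0).
    dist = np.sqrt(((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2) / np.sqrt(2)
    return 1.0 - strength * np.clip(dist, 0.0, 1.0)

def apply_multiply(image, mask):
    """'Multiply' blend: the darkening scales with scene brightness,
    so a bright edge loses more absolute level than a dark one."""
    return image * mask[..., None]

frame = np.full((4, 6, 3), 0.8)          # a bright gray frame, 0-1 range
darkened = apply_multiply(frame, vignette_mask(4, 6))
```

Because multiply scales the underlying pixel values rather than laying a fixed dark layer over them, the same vignette reads differently on a bright scene than on a dark one, which is exactly the behavior described above.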

 

The last little technique to essentially “re-light” a shot is the subtle use of chromatic glow effects (like FCP bloom or one of Joe’s Filters). Use these to diffuse highlights and cause them to glow. Along these same lines, I will also use selective focus or soft spot effects (Magic Bullet Looks or Joe’s Filters) to blur the outer edges of a scene and keep the center sharp. One benefit of chromatic glows, when used subtly, is to brighten facial highlights. When I shift the midtones in the 3-way to a more reddish complexion, using a chromatic glow effect brings back more highlights and added definition to facial areas. The reason for applying this mix of effects is primarily to draw the eye to the central point of the image, which is typically our singer, in the case of these videos.
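Glow effects of this kind are commonly built from a blurred copy of the image blended back over the original in “screen” mode. As a rough sketch of that idea in Python/NumPy, assuming a crude box blur and a single channel in the 0–1 range (FCP’s bloom and Joe’s Filters certainly do something more refined):

```python
import numpy as np

def box_blur(channel, radius=1):
    """Very simple box blur via padded neighborhood averaging."""
    padded = np.pad(channel, radius, mode="edge")
    out = np.zeros_like(channel)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + channel.shape[0],
                          radius + dx : radius + dx + channel.shape[1]]
    return out / (2 * radius + 1) ** 2

def glow(channel, amount=0.5):
    """Screen-blend a blurred copy over the original.

    Screen mode brightens: result = 1 - (1 - a)(1 - b), so the
    highlights bloom and diffuse while blacks stay black -- the
    'glowing highlights' behavior described above.
    """
    blurred = box_blur(channel) * amount
    return 1.0 - (1.0 - channel) * (1.0 - blurred)
```

The key property is that screen blending can only brighten, and it brightens most where the image is already bright, which is why a subtle glow lifts facial highlights without milking up the shadows.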


blg_mv5

 

En Mi Vida Before and After Frames


Since the main outlet for these videos is the web, each clip on all three videos was also deinterlaced from the original interlaced PAL format. Nattress deinterlace filters fit that need. This even benefits the videos for TV and DVD use, because all frames become progressive in appearance, which reads as a more filmic cadence. Last but not least, all three videos were polished off with a letterbox mask for the faux-widescreen look. This mask meant that most shots had to be repositioned for optimal framing.
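The simplest way to picture deinterlacing is to keep one field and rebuild the other’s scan lines by interpolation. The Python/NumPy sketch below does exactly that and nothing more; it is an illustration of the basic principle, not how the Nattress filters (which are far more sophisticated) work internally:

```python
import numpy as np

def deinterlace_keep_upper(frame):
    """Deinterlace by keeping the upper field (even lines) and
    rebuilding the odd lines as the average of their neighbors.

    This removes the comb artifacts of interlaced motion at the
    cost of some vertical resolution; real deinterlacers are
    motion adaptive and much smarter about it.
    """
    out = frame.astype(float)
    h = frame.shape[0]
    for y in range(1, h, 2):                 # odd (lower-field) lines
        above = out[y - 1]
        below = out[y + 1] if y + 1 < h else out[y - 1]
        out[y] = (above + below) / 2.0
    return out
```

Run on a frame where the two fields disagree (the classic “comb” on motion), the rebuilt frame takes on the upper field’s values throughout, which is why the result looks progressive.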

 

The first two videos were filmed (or should I say “taped”) during a trip to Venezuela. Someone 2 Luv Me got a treatment that was richer looking, more saturated and generally softer than it started. Tackling En Mi Vida required a different approach. Much of this video took place in the hotel room, so I opted for a look reminiscent of old, distressed Ektachrome reversal film from the 1960s. Choosing a different toolset, I did almost all of the color-grading for this video within Magic Bullet Looks. This filter runs inside FCP, but when you modify any parameters, you enter Looks’ own unique user interface. Tools are grouped by steps in the camera, processing and/or post chain. You can get pretty elaborate ganging up a series of complementary processes all within this one filter.


blg_mv9

 

En Mi Vida frame in Looks interface

 

In addition, I added some film damage effects for grain, scratches and dirt courtesy of Boris Continuum Complete. Typically, I’ll use these effects for a few shots. Be mindful that applying them to a lot of clips makes the project file balloon. My original 2MB FCP project grew to about 50MB, largely through the effects added to this second timeline.

 

One point worth noting is that it’s OK to do interesting things to the picture in hip-hop music “just because”. Producers and engineers commonly add vinyl record effects like scratches and pops to a digital mix. In these videos random flashes were added to the picture to visually accentuate some of the music beats. In the client’s cut, these were 1-2 frame cuts of white. I changed these white flashes into glow dissolve transitions for a more organic look. This style was continued throughout all three videos.


blg_mv11

 

Buenas Noches frame with After Effects CS4 cartoon effect

 

The third video, Buenas Noches, called for a different touch. The storyline was boy meets girl; they enjoy a day in Paris during the Christmas season; and meet again the next evening. Looking for something completely different, I tried the new Adobe After Effects CS4 cartoon filter. I was striving for a look reminiscent of the feature film A Scanner Darkly. It was certainly an interesting look, but not a winner with my client. Taking another swipe at this idea, I tried a similar look using the CHV silk & fog filter, set to the borders mode. Again – an interesting look – but still not right.


blg_mv12

blg_mv13

 

Buenas Noches frame with CHV silk & fog filter plus filter pane


blg_mv10

 

Buenas Noches frame in Color interface

 

blg_mv14

 

Buenas Noches frame with filter pane in FCP

 

Once we discarded the effects-driven looks, it was back to attaining a style through grading alone. With that in mind, I decided to use Apple Color on Buenas Noches. The intent was a somewhat desaturated look. I liked what I got out of Color, but neither my client nor I was as happy with the result – for this video – as I’d hoped. I have to agree that the element missing in all three attempts was the sense of romance that a day in Paris should evoke.


blg_mv6

blg_mv7

 

Buenas Noches before and after frames with final effects


blg_mv15

 

Buenas Noches frame with final effects plus filter pane


Back to the drawing board, using the same approach as on the first video: back to FCP with a witches’ brew of filters. In this case, the color correction standbys (3-way and Colorista), plus chromatic glow, vignette, Face Light and others. I also made use of FCP’s compound blur, which – when used in a small increment – adds nice diffusion to the image. In addition, I decided to crank up the chroma saturation big time! In the end, both of us were very happy with the results.


blg_mv16

 

En Mi Vida frame with final look


blg_mv17

 

En Mi Vida frame with final look


blg_mv18

 

Someone 2 Luv Me frame with final look


blg_mv19

 

Buenas Noches frame with CHV silk and fog look


blg_mv201

 

Buenas Noches frame with After Effects CS4 cartoon look


blg_mv21

 

Buenas Noches frame with final look


blg_mv22

 

Buenas Noches frame with final look


blg_mv23

 

Buenas Noches frame with final look


Color grading music videos is about emotion and style, not about fixing the image. It’s all about the “look” – not about right or wrong. And it’s about having fun getting there.

 

For more on G.No, check out Gardelino.com as well as his posts on YouTube (or here) and on MySpace.


© 2009 Oliver Peters