Magic Bullet Colorista II

Red Giant Software’s engineers have been busy this year expanding the Magic Bullet franchise. Products have included versions for Photoshop and the iPhone, as well as variations of the ever-popular Looks. This line of innovative color correction tools got its start with Colorista, a custom 3-way color correction plug-in for Apple Final Cut Pro, Motion and Adobe After Effects. Colorista is a deceptively simple grading tool, used by many editors who like the added power over other built-in correction filters.

Red Giant has released Magic Bullet Colorista II, a highly enhanced follow-up to the original. Colorista II is designed to work with Apple Final Cut Pro, Adobe After Effects CS5 and, for the first time, Premiere Pro CS5. No other NLEs or Motion, yet. The original filter featured a standard design of three color/level wheels, augmented by exposure and saturation controls plus a power mask for vignettes. By stacking multiple instances of Colorista, an editor could grade shots with much of the same power as in more advanced grading products, like Apple Color.

Three grading stages in a single filter

Colorista II takes it up several notches by providing three stages of color correction in a single filter – divided into primary, secondary and master sections. Each section has controls for shadow/midrange/highlight color balance and levels, plus exposure, density (contrast) and saturation. A couple of new basic tools have been added, including a single auto balance control, which adjusts both white and black balance in one step, and a highlight recovery tool.
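The shadow/midrange/highlight model described above is commonly implemented as a lift/gamma/gain operation. As a rough illustration only (this is not Red Giant's code, and the parameter names are my own), here is how such a correction behaves on a single normalized channel value:

```python
# Hedged sketch: a generic lift/gamma/gain model approximating what a
# 3-way shadow/midrange/highlight correction does to one channel value
# in the 0.0-1.0 range. Illustrative only -- not Colorista II's actual
# math or parameter names.

def three_way(value, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a basic lift/gamma/gain correction to one channel value.

    lift shifts the shadows, gain scales the highlights and gamma
    bends the midtones while leaving black and white pinned.
    """
    v = value * gain + lift          # gain scales, lift offsets
    v = min(max(v, 0.0), 1.0)        # clamp to the legal range
    return v ** (1.0 / gamma)        # gamma warps the midtones

# A midtone-only adjustment: black and white stay put, mids rise.
print(three_way(0.0, gamma=1.5))  # shadows unchanged: 0.0
print(three_way(1.0, gamma=1.5))  # highlights unchanged: 1.0
print(three_way(0.5, gamma=1.5))  # midtones brightened
```

Stacking one such stage per section (primary, secondary, master) is essentially what lets a single filter behave like a multi-stage grading pipeline.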

What sets Colorista II apart is a new 8-vector HSL control in the primary and master sections. If you’ve used Adobe Lightroom 3, then this will be familiar. Want a bluer sky? Push the blue dot on the saturation/hue color wheel outward and blues become richer. Orange coincides with skin tones. If you want to brighten a person’s face, adjust the orange dot on the lightness wheel and faces become brighter. You can also enable a Skin Overlay grid (taken from Magic Bullet Mojo) to steer you in the right direction of matching a cinematic skin tone. Another addition that’s bound to be popular is master curves. There you can adjust the S-curve characteristics of RGB as well as red, blue and green individually.
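The S-curve behind a master curves control is easy to picture in code. The sketch below uses a smoothstep-style blend, which is one common way to build such a curve; it is an assumption for illustration, not Colorista II's implementation. Applied to the combined RGB signal it adds overall contrast; applied to red, green or blue alone it tints shadows and highlights in opposite directions.

```python
# Hedged sketch of an S-curve tone adjustment -- the idea behind a
# master curves control, not Red Giant's implementation. A smoothstep
# curve pushes shadows down and highlights up, increasing contrast,
# while the midpoint and endpoints stay pinned.

def s_curve(value, strength=1.0):
    """Blend a channel value (0.0-1.0) toward a smoothstep S-curve."""
    curved = value * value * (3.0 - 2.0 * value)   # classic smoothstep
    return value + strength * (curved - value)

print(s_curve(0.25))  # shadows pushed down
print(s_curve(0.75))  # highlights pushed up
print(s_curve(0.5))   # midpoint pinned at 0.5
```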

Enhanced secondary control

Colorista II still has power masks for rectangular and elliptical vignettes, but now there are two – in the secondary and master sections. These masks can be used individually or in a combined manner, similar to the way you can add or subtract selections in Photoshop. Colorista II adds a very accurate color keyer as part of its secondary correction tools. The keyer opens in its own GUI, where you can select a color and then expand or reduce the range. The keyer is interactive with the masks, giving you more precise control to include or exclude regions from your secondary correction.
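Combining masks by adding or subtracting coverage, Photoshop-style, can be sketched in a few lines. The mask shapes and values below are purely illustrative; this is the general per-pixel idea, not Red Giant's code.

```python
# Hedged sketch: combining two masks by adding or subtracting their
# per-pixel coverage (0.0-1.0), the way selections combine in
# Photoshop. Shown on a single row of pixels; illustrative only.

def add_masks(a, b):
    """Union: a pixel is covered if either mask covers it."""
    return [min(1.0, x + y) for x, y in zip(a, b)]

def subtract_masks(a, b):
    """Difference: remove the second mask's region from the first."""
    return [max(0.0, x - y) for x, y in zip(a, b)]

inner = [0.0, 1.0, 1.0, 1.0, 0.0]   # elliptical vignette (illustrative)
strip = [0.0, 0.0, 1.0, 0.0, 0.0]   # rectangular mask (illustrative)

print(add_masks(inner, strip))       # union of the two regions
print(subtract_masks(inner, strip))  # inner region minus the strip
```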

Anyone familiar with Lightroom’s Clarity control will recognize Pop – another new secondary feature. Pop is a localized contrast control. Crank the slider to the right and edge contrast is enhanced as a “glow dark” effect, which makes the image appear crisper. Move the slider to the left and you get the appearance of highlight glows. The image will be softer, so if used very subtly, then it’s a helpful tool to smooth out facial textures. It works much like a “silk & fog” filter. One last little touch is that all tools with a custom GUI, like a color wheel, can also be adjusted using a numeric entry or a slider.
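A localized contrast control of this kind generally works by comparing each pixel with a blurred copy of the image. The sketch below shows that unsharp-mask-style idea on a one-dimensional row of pixels; it resembles what Pop and Clarity do, but it is an assumption for illustration, not Red Giant's algorithm.

```python
# Hedged sketch of localized contrast: blur the image, then push each
# pixel away from (positive amount) or toward (negative amount) its
# local average. This is the common unsharp-mask idea that Pop and
# Clarity resemble -- not Red Giant's code. Shown on a 1-D pixel row.

def box_blur(row, radius=1):
    """Average each pixel with its neighbors within the radius."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def pop(row, amount):
    """amount > 0 crispens edges; amount < 0 softens toward a glow."""
    blurred = box_blur(row)
    return [max(0.0, min(1.0, p + amount * (p - b)))
            for p, b in zip(row, blurred)]

edge = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]   # a soft edge in the image
print(pop(edge, 1.0))    # edge contrast enhanced (crisper)
print(pop(edge, -1.0))   # detail pulled toward the local average
```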


Wow! That’s a lot, but how is it to work with? When you install Colorista II, you also get the latest version of the original Colorista filter (including Colorista-Sliders). This is to maintain compatibility with previous projects. Since Colorista II is so drastically different, your existing effects cannot be “promoted” to Colorista II; therefore, you still need this updated version. Colorista II and Colorista 1.2 have been optimized for stability (including CS5 64-bit support), so you should remove older versions of Colorista.

I tested Colorista II in Final Cut Pro 7, After Effects CS5 and Premiere Pro CS5. The filter works in all three applications, but I did encounter differences in responsiveness. First, Colorista II works best inside After Effects, which has an API that is most conducive to plug-ins with custom GUIs. I found After Effects to have the most direct control. Move a slider or dot on a color wheel and the image changed immediately. No lag on the control or as the image updated.

Of the three applications, Premiere Pro CS5 was the least responsive when moving positions on a color wheel. This is instantly obvious when comparing against Premiere Pro’s built-in correction filters, which are very responsive. According to Red Giant, Premiere Pro’s API doesn’t work well with third-party custom filter interfaces. Adobe’s engineers can go outside of the bounds of the API with internal filters, but third-party developers can’t. If you are a Premiere Pro CS5 editor, I would recommend using Colorista II within After Effects and then bringing that clip back into Premiere Pro through Adobe’s Dynamic Link.

Performance in Final Cut Pro is similar to Colorista version 1. If you don’t push the controls too quickly, the interface will keep up with you. The big difference between After Effects and Final Cut is that After Effects’ image will actively update as you move a control. FCP updates the image (and videoscopes) after you stop pushing a control. This is less responsive than FCP’s built-in 3-way color correction filter, but once you get a feel for it, you can grade images quite quickly with Colorista II. According to Red Giant, not all Final Cut users experience this lag, and they are working with Apple to release a patch that dramatically improves performance. Of course, I’m primarily talking about the responsiveness of the color wheels and the HSL controls when you use their GUIs. Much of this speeds up if you use the sliders or numerical entries instead.

Additional thoughts

In the week since Colorista II has been on the market, I’ve seen a number of forum questions about it and the core issue many have is “why?” If you own Final Cut Studio, you already have a great color correction tool in Apple Color. Why do you need Colorista II, or for that matter, any other color correction plug-in? I use Color and like it a lot, but it’s not the right tool for every project. There’s a definite process you must go through to roundtrip media between FCP and Color. Extra media files are rendered by Color and you can’t make editorial changes to the timeline while working in Color. If you added any other FCP filters, you won’t see them while grading. Lastly, Color uses a very complex GUI that scares many potential users. For these and other reasons, many editors prefer to “grade in context”, by applying filters to clips on the FCP timeline. I have used Colorista, as well as other correction filters, to grade complete shows and even features all while staying in Final Cut.

Another consideration is After Effects. If you don’t own Final Cut, then you don’t own Color. A lot of folks like to do the “heavy lifting” in After Effects, including color correction. After Effects CS5 owners already have Synthetic Aperture Color Finesse 3, which likewise is a very powerful tool. It doesn’t have masks, like Colorista and Colorista II, but otherwise is a very advanced grading solution. Unfortunately, you have to use Color Finesse in its Full Interface mode to go beyond the basic controls, which takes you outside of After Effects. By using Colorista II, you keep all of its horsepower, while still being able to work with all of the other After Effects tools. It’s another case of staying “in context”.

In these examples, it’s not an either-or situation. Add as many tools to the kit as you can learn and afford to buy. The versatility of the secondary masking/keying, along with the many other controls Colorista II has to offer, is amazing. It introduces much of the power of a full-blown color correction application in a single filter. Red Giant has raised the bar again with Magic Bullet Colorista II.

By the way, here are some very nice before-and-after grading examples using Colorista II.

Written for Videography and DV magazines (NewBay Media LLC).

©2010 Oliver Peters

One bite at a time

… or, how to tackle large projects.

Anytime you start a complex, long-form project – whether it’s a feature film, documentary, TV show or corporate video – it can seem very overwhelming. With 30, 40, 80 or even more than 100 hours of footage, where do you start? The answer is to start at the beginning. Like the response to the old question, “How do you eat an elephant?” – it is best handled “one bite at a time.”

Unlike many other editors, I’m not a big one for scripts and transcripts. I like to work with the material that’s in front of me and refer to the scripts or transcripts as needed. Don’t get me wrong – transcripts and tools like Avid’s ScriptSync can be great – but they aren’t for everyone and not always at your disposal. So let’s look at tools your editing software offers to make life easier and more organized.

Editing tools to the rescue

Nearly all NLEs offer productivity features to make it easier to organize your footage. Best known and utilized are markers and bin columns. Most NLEs offer certain preset columns, like scene and take, but generally you may add custom columns of your own. For example, with file-based footage like P2, I tend to leave the name “as is” and add my own descriptions in a comments or description column. Once you get the bin organized with the columns most useful to you, hide the other non-essential data columns and save that view as your custom bin view. Each NLE works a bit differently in this regard, but most have a similar feature.

No editor can avoid reviewing all the footage. Nor should they! Just because the script supervisor listed the last take of a shot as the “circle take” doesn’t mean that’s really the best performance. Maybe the first take was better for the start of the shot. If the scene cuts around to other footage, it’s quite likely that you may use the start of take one and the ending of the last take to capture the best performance. You won’t know this until you’ve loaded ALL the shots, reviewed ALL the shots and MADE NOTES. That’s where using multiple comments columns can be very helpful.

When I cut a feature film, I’ll note the director’s choices as well as my own, plus comments. That way, a few weeks later when someone wants to know why I used a different take, I can refer back to the notes in a column to indicate what about the other take struck me at that time as being better. Sometimes it’s obvious by comparing the two, but sometimes it isn’t. Know why you made the editing decisions you made.

An NLE is nothing more than a large database. As such, there are many built-in tools to help you sort, search and find the necessary shots. The most obvious is a finder-style column sort. Highlight the column header and sort by ascending or descending values. Most do a single-column sort, but Avid Media Composer actually does a two-column sort. So for example, Scene might be a “primary” sort field and Take might be the “secondary”. Another fine Media Composer bin feature is Custom Sift. Set the criteria for one or more columns and Custom Sift will display the matches and hide the rest. When cutting a feature, I’ll have a Selects column and indicate preferred takes with an “X”. By sifting for any “X” in the Selects column, the bin will only display the few preferred takes instead of all the takes. Switch back to an un-sifted view and the bin shows all clips again.
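Since a bin really is a small database, the sort and sift behavior described above is easy to model. The clip names and column values below are made up for illustration; the code shows the general idea, not Avid's implementation.

```python
# Hedged sketch: an NLE bin behaving like a small database. The clip
# records, column names and the "X" selects convention mirror the
# Media Composer example in the text; the code is illustrative only.

bin_clips = [
    {"Name": "A012", "Scene": "12", "Take": "3", "Selects": ""},
    {"Name": "A010", "Scene": "12", "Take": "1", "Selects": "X"},
    {"Name": "A031", "Scene": "14", "Take": "2", "Selects": ""},
    {"Name": "A011", "Scene": "12", "Take": "2", "Selects": "X"},
]

# Two-column sort: Scene is the primary key, Take the secondary.
by_scene_take = sorted(bin_clips, key=lambda c: (c["Scene"], c["Take"]))
print([c["Name"] for c in by_scene_take])   # A010, A011, A012, A031

# Custom Sift: display only the rows matching the criteria.
sifted = [c for c in bin_clips if c["Selects"] == "X"]
print([c["Name"] for c in sifted])          # only the preferred takes
```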

Most NLEs offer Find commands. In the case of Premiere Pro, the top of each bin window contains a search field. Type in a value for a Spotlight-style search and the resulting finds are displayed. Final Cut Pro offers a Find window that can search your entire project for specific criteria in one or more columns. The matching results will be displayed in a separate window. Of course, how much can be found clearly depends on the time you took at the beginning to enter metadata, comments and notes.

Editing strategy

When I edit an unscripted documentary, made up largely of interviews, the approach I take is like a sieve. Pour in a lot at the top and keep refining until the right content comes out at the bottom. In this type of production, the editor is, in effect, writing the story through the selection and juxtaposition of soundbites. It’s critical that you watch and listen to everything. If you have 60 hours of interviews, then there simply is no way around concentrating on what those people said in those 60 hours. You have to find the gems, string them together in a coherent fashion and make sense out of apparent chaos.

Step one. The three NLE tools I use at this point are custom columns (notes, comments, etc.), markers to identify the good statements and subclips. I don’t use all of these on every project, but these are the first tools to use. For example, if I have an interview with a subject that covers an entire one-hour tape, I will typically ingest the complete one-hour tape as a single clip (timecode permitting) at a lower resolution, like DV25. In a column, I will describe the tape – subject and general topics mentioned. The next step is to review that clip, placing markers for good statements or creating a subclip for sections. Again, add notes, notes and more notes.

Step two is to start organizing each person’s comments. Edit a sequence of selected soundbites for each interview subject. At this point, I will include just about everything that seems moderately useful. Next, duplicate the sequences and start to whittle them down, keeping only the strongest statements. Since I’m working with a copy, I can always pull a clip from the longer sequence, if I decide to include a statement I had cut out. At the end of this process, I have a long and short sequence of selects for each subject.

Step three is to organize the sequences of people into new sequences by topic. As you’ve been listening to the comments, several themes will start to emerge. These may be predetermined – based on a set of questions that the interviewer was using – or it might come out of the natural on-camera discussion. Don’t get rid of your first set of selected sequences by person, in case you need to refer back to one of them. Depending on your NLE and style, build these by copy-and-pasting clips or by editing from one sequence into another.

When you are done, you will have a set of selected sequences for each topic, containing only the strongest soundbites from each person who discussed that matter. Naturally more than one person discussed the same topic and not everyone gave an equally strong, succinct or passionate response. So, you will need to drop some of the repeated answers in order to keep the best of the lot. As in the previous step, take a second pass at these sequences to whittle them down (keep both versions of course).

Step four is the point at which you can actually start building a show sequence with a story structure. Up to this point you have created at least four sets of selected sequences: interview selects (long and short) and topic selects (long and short). Depending on the amount of material and whether or not you had any help from a story producer or similar person, it might have taken days, weeks or months to reach this point. Now it’s time to take your topic selects and assemble a story, rearranging the topic clips so that the comments create a natural narrative outlining the facts. Some of these topic sequences will be very long, while others will be too short to include in the story. Decide how many tangents to explore within your program. To tell the best story – even in a documentary – you need to consider character development and story arc, just as in a dramatic, scripted production.

From here on out, it’s a matter of continually refining the rough cut until you get a locked picture. By using the strongest statements, your cut may too heavily favor a small sample of your interview subjects. If that’s the case, you’ll need to revisit the earlier sequences (by person or topic) in order to swap out some of the people for others. This way the finished cut will better hold the viewers’ attention. Once again, it becomes very important to have added good notes to your bins, sequence markers and so on, in order for the search and find functions to be of use in making such changes.

As a sidebar, one interesting approach to all of this can be found at Assisted Editing. You’ve reviewed the clips and entered all the metadata, but you just have “editor’s block” and want some help getting started. Then it’s time for First Cuts to come to the rescue. By applying some artificial intelligence to the equation, the First Cuts application will generate any number of versions for you, based on assigned parameters like length, themes and so on. It’s not meant to replace the editor, but merely to automatically generate a good first assembly as a starting point from which to build. It’s also a great way to vary the story line, since it can be easy to start going down a single road, get tunnel vision and lose sight of other ways of editing the story.

I started out by saying that I tend not to rely on transcripts, but it’s in this final process where transcripts can be a great help. For me the process is one of reviewing and discarding, so how do you best handle the situation when the producer says, “I think someone else said this better. Where is it?” Most transcripts are typed with a timecode value noted every few paragraphs. A search in Word will let you find the statement. Then use the closest timecode value as a means by which to find that general area in the source footage.
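The transcript trick above can be automated. The sketch below finds a phrase in a transcript and reports the nearest preceding timecode, so you know roughly where to look in the source footage. The transcript format and bracketed timecode style are assumptions for illustration; real transcripts vary.

```python
# Hedged sketch of the transcript lookup described in the text: find a
# phrase, then return the last timecode that appears before it. The
# transcript content and [HH:MM:SS:FF] format are made-up examples.

import re

transcript = """\
[01:02:10:00] So we started the company in a garage, like everyone.
It was chaos, honestly, but good chaos.
[01:07:42:00] The turning point was the second product launch.
That was when we finally understood our customers.
"""

def nearest_timecode(text, phrase):
    """Return the last timecode appearing before the phrase, or None."""
    pos = text.lower().find(phrase.lower())
    if pos < 0:
        return None
    codes = re.findall(r"\[(\d{2}:\d{2}:\d{2}:\d{2})\]", text[:pos])
    return codes[-1] if codes else None

print(nearest_timecode(transcript, "turning point"))  # 01:07:42:00
```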

If you are working with Avid ScriptSync, then this becomes a fairly instant process. It’s one of the reasons Avid editors who use that feature find it to be so essential.

It really doesn’t matter how you tackle a large project nor the specific tools that are best for you. The point is to be methodical and to make the best use of the tools at your disposal.

Click here for more documentary film editing tips.

©2010 Oliver Peters

Connections – looking back at the future

Maybe it’s because of The Jetsons or the 1964 World’s Fair or Disney’s Tomorrowland, but it’s always fun to look back at our past views of the future. What did we get right? What is laughable today?

I had occasion to work on a more serious future-vision project back in the 90s for AT&T. Connections: AT&T’s Vision of the Future was a 1993 corporate image video that was produced as a short film by Century III, then the resident post house at Universal Studios Florida. I was reminded of this a few years ago when someone sent me a link to the Paleo-Future blog. It’s a fun site that talks about all sorts of futuristic concepts, like “where are our flying cars?” Connections became a topic of a series of posts, including links to all sections of the original video.

The genesis of the video was the need to showcase technology, which AT&T had in the lab, in an entertaining way. It was meant to demonstrate the type of daily impact this technology might have in everyday life a few short years down the road. The project was spearheaded by AT&T executive Henry Bassman, who brought the production to Century III. We were ideally suited for this effort, thanks to our post and effects pipeline in sci-fi/fantasy television production (The Adventures of Superboy, Super Force, Swamp Thing, etc.) and our experience in high-value, corporate image projects. Being on the lot gave us access to Universal’s soundstages and working on these series put us together with leading dramatic directors.

One of these directors was Bob Wiemer, who had worked on a number of the episodes at Universal as well as other shows (Star Trek: The Next Generation, SeaQuest, etc.). Bassman, Wiemer and other principals, including cinematographer Glenn Kershaw, ASC, together with the crew at Century III formed the production and post team behind Connections. It was filmed on 35mm and intended to have all the production value of any prime time TV show. I was the online editor and part of the visual effects team on this show.

The goal of Connections was to present a slice-of-life scenario approximately 20 years into the future. Throughout the course of telling the story, key technology was matter-of-factly used. We are not quite at the 20-year mark, but it’s interesting to see where things have actually gone. In the early 90s, many of the showcased technologies were either in their infancy or non-existent. The Internet was young, the Apple Newton was a model PDA and all TV sets were 4×3 CRTs. Looking back at this video, there’s a lot that you’ll recognize as common reality today and a few things you won’t.

Some that are spot-on include seat-back airplane TVs, monitors that are 16×9 aspect ratio, role-playing/collaborative video games, and the use of PDAs in the form of iPhones, iPads and smart phones. In some cases, the technology is close, but didn’t quite evolve the way it was imagined – at least not yet. For example, Connections displayed the use of foldable screens on PDAs. Not here yet. It also showed the use of simultaneous translation, complete with image morphing for lipsync and accurate speech-to-text on screen. Only a small part of that is a reality. Video gamers interact in many role-playing games, even online, but they have yet to reach the level of virtual reality presented.

Nearly all depicted personal electronic devices demonstrate multimedia convergence. PDAs and cell phones merged into a close representation of today’s iPhone or Droid phone. Home and office computers and televisions are networked systems that tie into the same computing and entertainment options. In one scene, the father is able to access the computer from the widescreen TV set in his bedroom.

One big area that has never made it into practice is the way interaction with the computer was presented. The futurists at AT&T believed that the primary future interface with a computer would be via speech. They felt that the operating system would be represented to us by a customizable, personalized avatar. This was based on their extrapolation from actual artificial intelligence research. Think of Jeeves on steroids. Or maybe Microsoft’s Bob. Well, maybe not. So far, the technology hasn’t made it that far and people don’t seem to want to adopt that type of a solution.

The following are some examples of showcased technologies from Connections. Click on any frame for an enlarged view.

In the opening scene, the daughter (an anthropologist) is on a return flight from a trip to the Himalayas. She is on an in-flight 3-way call with her fiancé (in France) and a local artisan, who is making a custom rug for their wedding. This scene depicts videophone communications, 16×9 seat-back in-flight monitors with phone, movie and TV capabilities. Note the simultaneous translation with text, speech and image (lipsync) adjustment for all parties in the call.

The father (a city planner) is looking at a potential urban renewal site. He is using a foldable PDA with built-in camera and videophone. The software renders a CAD version of the possible new building to be constructed. His wife calls and appears on screen. Clearly we are very close to this technology today, when you look at iPhone 4, the iPad and Apple’s new FaceTime videophone application.

The son is playing a virtual reality, interactive role-playing game with two friends. Each player is rendered as a character within the game and displayed that way on the other players’ displays. Virtual reality gloves permit the player to interact with virtual objects on the screen. The game is interrupted by a message from mom, which causes the players to morph back into their normal appearance, while the game is on hold.

The mother appears in his visor as a pre-recorded reminder, letting him know it’s time to do his homework. The son exits the game. One of the friends morphs back into her vampire persona as the game resumes.

Mom and dad pick up the daughter at the airport. They go into a public phone area, which is an open-air booth, employing noise-cancelling technology for quiet and privacy in the air terminal. She activates and places the international call (voice identification) to introduce her new fiancé to her parents. This again depicts simultaneous translation and speech-to-text technology.

The mother (a medical professional) is consulting with a client (a teen athlete with a prosthetic leg) and the orthopedic surgeon. They are discussing possible changes to the design of the limb in a 3-way videophone conference call. Her display is the computer screen, which depicts the live feed of the callers, a CAD rendering of the limb design, along with the computer avatars from the doctor’s and her own computer. The avatars provide useful research information, as well as initiate the call at her voice request.

Mother and daughter are picking a wedding dress. The dress shop has the daughter’s electronic body measurements on file and can use these to display an accurate 3-sided, animated visual of how she will look in the various dress designs. She can interactively make design alterations, which are then instantly modified on screen from one look to the next.

In order to actually produce this shot, the actress was simultaneously filmed with three cameras in a black limbo set. These were synced in post and one wardrobe was morphed into another as a visual effect. By filming the one actress with three cameras, her motions in all three angles maintained perfect sync.

The father visits an advanced, experimental school where all students receive standardized instructions from an off-campus subject specialist. The in-classroom teacher assists any students with questions for personalized assistance. Each student has their own display system. Think of online learning mashed up with the iPad and One Laptop Per Child and you’ll get the idea.

I assembled a short video of excerpts from some of these scenes. Click the link to watch it or watch the full video at the Paleo-Future blog.

AT&T ultimately used the Connections short film in numerous venues to demonstrate how they felt their technology would change lives. The film debuted at the Smithsonian National Air and Space Museum as an opener for a showing of 2001: A Space Odyssey, commemorating its 25th anniversary re-release.

©2010 Oliver Peters

Adobe Premiere Pro CS5

Adobe is shipping its much-anticipated Creative Suite 5. The video applications are available either as single products or bundled in the Master Collection or Production Premium suite. Most video editors will be interested in the latter, because it includes Premiere Pro, OnLocation, Encore, After Effects, Photoshop Extended, Illustrator, Adobe Media Encoder, Soundbooth, Flash Catalyst and Flash Professional.

The big story is native 64-bit operation for all of the applications, which requires a 64-bit OS (Windows Vista, Windows 7 or Mac OS X “Snow Leopard”) running on a processor that supports 64-bit operation. The upside of this is much better performance, but the downside is that you’ll have to upgrade all of your plug-ins to 64-bit versions.

Concentration on performance

Adobe really homed in on performance. I’m running a late-2009 8-core (2.26GHz) Apple Mac Pro with 12GB RAM. The change from CS4 to CS5 provided noticeably faster launch times and in general, more responsiveness in all of the Adobe applications, but in particular, Premiere Pro.

There have been quite a few “under-the-hood” workflow improvements, but the general editing features have not significantly changed. If you liked Premiere Pro before, then you’ll really love CS5. If you weren’t a fan, then improved performance and the easy integration of RED and HDSLR footage might sway you. I’ve never had any real stability issues with Premiere Pro, but one complaint you often hear is that it doesn’t scale well to large, complex projects. I haven’t tackled a large job with CS5 yet, so I can’t say, but over all, the application “feels” much more solid to me than previous releases.

Accelerated effects

The highlights are the Mercury Playback Engine, more native file and camera support and accelerated effects. According to Karl Soule (Adobe Technical Evangelist, Dynamic Media), “The Mercury Playback Engine is made up of a number of different technologies that use the latest hardware in computers. The three main technologies are 64-bit native code, multicore optimization and GPU acceleration. 64-bit code means that Premiere can access more RAM than before and can process larger numbers much faster. Multicore optimization means that Premiere Pro will take full advantage of all cores in multicore CPUs, splitting processor threads so that the load is balanced and distributed evenly. GPU acceleration uses both OpenGL technology for display playback and [NVIDIA’s] CUDA-accelerated effects and filters for color correction, chromakeying and more.”

Sean Kilbride (NVIDIA Technical Marketing Manager) continues, “By moving core visual processing tasks in the Mercury Playback Engine to CUDA, the [Adobe] team was able to create highly efficient GPU accelerated functions with performance gains of up to 70 times.” Adobe has certified several CUDA-enabled NVIDIA graphics cards, including the Quadro FX 5800/4800/3800 series and the GeForce GTX 285.

Since the Mercury Playback Engine is more than just GPU-based hardware acceleration, you’ll see the benefits of increased performance even with other cards. Karl Soule points out that, “On my 17-inch MacBook Pro laptop, I can edit clips from my Canon DSLR camera natively, without any need to transcode the footage ahead of time. I can also play back somewhere between five to seven layers of formats like AVC-Intra with no problem.”

The Mercury Playback Engine is designed to accelerate certain effects (like color correction, the Ultra keyer or picture-in-picture layers) and formats (like RED or HDV) and in general, delivers more composited layers in real-time. As part of this redesign, the available Premiere Pro effects are marked with icons to let you know which offer hardware acceleration, 32-bit and/or YUV processing. I was able to test CS5 using both my stock GeForce 120 card, as well as a Quadro FX 4800 loaned by NVIDIA for this review. Clearly the FX 4800 offers superior performance, but it wasn’t shabby with the GeForce, either. For example, if most of your work consisted of “cuts-and-dissolves” projects shot on P2, then you’ll be very happy with a standard card.


Premiere Pro CS5 now hosts many new, native formats, so you may typically see a yellow or red line over a timeline, but rendering isn’t a “given”. A red render bar indicates a section that probably must be rendered to play back in real time at full frame rate. A yellow render bar indicates that it may not need to be rendered. If you are exporting to tape, you will need to render these sections; however, in most cases these sections will play smoothly enough to not interrupt your creative flow during editing.

Premiere Pro launches a version of Adobe Media Encoder when you choose to export the sequence to a deliverable file. It’s a full-featured encoder capable of compressing to a variety of formats for masters, web, BD/DVD and more. Mercury Playback kicks in here as well, because all rendering and encoding from Premiere Pro takes advantage of GPU-acceleration whenever possible. Depending on the format and the effects used, rendering with a CUDA-enabled card will be faster than one without this architecture. In order to maintain maximum quality, Premiere Pro CS5 encodes exported files by accessing the original source media. You have the option to use render files as part of the export, but generally these are considered temporary preview files.

A potpourri of formats

Some of the native formats handled by Premiere Pro CS5 include AVC-Intra, H.264, Apple ProRes and REDCODE camera raw. These formats all play smoothly under the right system requirements and Premiere Pro includes a number of corresponding project presets. (Some of these won’t be accessible in a trial mode.) Premiere Pro’s newfound performance doesn’t negate the need for a fast drive array, especially with native RED files.

I tested all of these formats with both the FX 4800 and the GeForce card. All played at least one stream in real-time on either card, but quality varied with the type of media. Premiere Pro throttles performance through its display resolution settings – typically full, half, quarter, etc. The FX 4800 clearly excelled with native RED 4K, playing more smoothly and at a higher resolution setting than the GeForce.
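To put that resolution throttling in perspective, here's a quick sketch (purely illustrative, in Python) of how many pixels each display setting asks the system to process per frame. Each step down quarters the workload, which is why a weaker GPU often needs one setting lower than the Quadro to hold real time:

```python
def playback_pixels(full_w=1920, full_h=1080, setting="half"):
    """Pixels decoded per frame at a given display-resolution setting.

    Premiere Pro's full/half/quarter settings divide each dimension,
    so every step down cuts the per-frame pixel count to a quarter.
    """
    divisor = {"full": 1, "half": 2, "quarter": 4}[setting]
    return (full_w // divisor) * (full_h // divisor)

# Full HD: 2,073,600 px per frame; half: 518,400; quarter: 129,600.
```

So the GeForce running at quarter resolution is processing one sixteenth of the pixels the Quadro handles at full.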

RED is a special case, of course, because thanks to the RED SDK, CS5 adds native control over the RAW colorimetry settings. You can actually edit a 4K sequence in Premiere Pro CS5! In fact, it’s less taxing to work in native 4K than to place the 4K media on an HD timeline, since less scaling is involved this way. Although you can work with native RED raw – and Premiere Pro handles it well – I wouldn’t really want to edit a project this way. First, going through the SDK doesn’t give you access to the curves control that you have in RED’s own software. Second, it’s still a bit touchy. I had problems playing this media with either card in full-screen mode. Lastly, you can change the raw setting by opening and adjusting the source settings for the file, but then the look updates very slowly within the Premiere Pro project. For RED, I’d still opt for an offline-online editing workflow.

Adobe has been working closely with the BBC to tightly integrate Premiere Pro with P2 media and metadata. AVC-Intra performance was especially impressive. This is a computationally-intensive codec, but even though I was playing from a striped pair (RAID-0) of FireWire 800 drives, 1080p/23.98 files (100Mbps AVC-I) played and scrubbed as if they were DV. Hybrid DSLRs like the Canon EOS 5D Mark II are hot, and Adobe has taken that into account with CS5. H.264 files from a Canon 5D or 7D play quite smoothly in Premiere Pro CS5, so even Final Cut Pro fans may find themselves using Premiere Pro as the first choice when working with these projects.

Premiere Pro’s Media Browser is a handy feature that lets you find and review native-format camera files on your drives. Navigate to P2, XDCAM or RED media folders and the browser uses each format’s specific folder/file hierarchy to hide the extraneous metadata and proxy folders associated with it.
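As a rough illustration of what a media browser does behind the scenes, here is a hypothetical Python sketch (not part of any Adobe tool) that walks a drive for P2-style card folders and surfaces only the video essence, skipping the metadata, audio and proxy subfolders. The folder names follow the published P2 card layout, where video clips live under CONTENTS/VIDEO:

```python
import os

def find_p2_video_clips(root):
    """Scan a directory tree for P2-style card folders and return just
    the video essence files (MXF), hiding everything else.

    Illustrative only: a real P2 card also carries AUDIO, CLIP (XML
    metadata), ICON and PROXY folders that a media browser hides.
    """
    clips = []
    for dirpath, dirnames, filenames in os.walk(root):
        # P2 keeps video essence under <card>/CONTENTS/VIDEO
        parent = os.path.basename(os.path.dirname(dirpath)).upper()
        if os.path.basename(dirpath).upper() == "VIDEO" and parent == "CONTENTS":
            clips.extend(
                os.path.join(dirpath, f)
                for f in sorted(filenames)
                if f.upper().endswith(".MXF")
            )
    return clips
```

The same idea applies to XDCAM or RED volumes, just with different folder conventions.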

Pushing the Mercury

I put the NVIDIA Quadro FX 4800 through its paces. I was easily able to build up eight layers of native RED media on an HD timeline, complete with accelerated color correction effects and 2D picture-in-picture layering. The timeline stayed yellow as long as I was in the GPU-hardware-accelerated render setting. Remember, these are native 4K RED camera raw files, so there’s a ton of scaling happening! Since I was only playing the media from my FireWire 800 stripe, the drives clearly couldn’t keep up for long playback, but it did work and would have been better with a beefy drive array. As a general rule, when I could play native RED files at half-resolution with the Quadro card, the GeForce would have to run the same file at quarter-resolution to get acceptable playback.

A more realistic experiment was six layers of Apple ProResLT (with effects on each layer). This played fine in full screen at half resolution using the FX 4800, but started to drop frames at full resolution. Another variation was a single ProResLT layer with four filters (fast color corrector, Gaussian blur, noise and brightness/contrast), which played fine in full resolution as a full screen image. The same clip had to be dropped to half-resolution with the GeForce card.

As an example of how well the FX 4800 handled AVC-Intra, I built up nine layers of a two-minute long 1080p/23.98 clip. This played at full-resolution without dropping frames for the full length of the clip. Only when I added an accelerated effect to each of the nine layers did it start to drop frames, requiring me to drop to half-resolution for error-free playback.
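Some quick arithmetic shows why those nine layers are also demanding on the storage side, assuming the 100 Mbps AVC-Intra class and ignoring container overhead:

```python
def aggregate_bandwidth_mb_s(layers, mbps_per_stream=100):
    """Rough disk bandwidth needed to play stacked video layers.

    100 Mbps per stream is the AVC-Intra 100 class; container and
    audio overhead are ignored, so treat this as a lower bound.
    """
    return layers * mbps_per_stream / 8  # megabits -> megabytes per second

# Nine AVC-I 100 layers: 112.5 MB/s of sustained reads.
```

That figure is close to what a FireWire 800 stripe can sustain, which makes the error-free playback all the more notable.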

Some bumps

One of the big selling points Adobe offers Final Cut and Avid editors is using Premiere Pro as a conduit into After Effects. Once inside Premiere Pro, Adobe’s Dynamic Link offers superb integration with After Effects. Like CS4, Premiere Pro CS5 can import XML and AAF files. In actual practice, I haven’t had good luck with this. I’ve never been able to successfully bring in a Media Composer sequence, and my success with Final Cut XML files has been spotty.

I was able to successfully import an FCP sequence only after I stripped out all effects filters, but then still had odd audio sync issues. The timeline clips were linked to ProResLT and AIFF files that were originally converted files from a Canon 5D camera and Zoom handheld audio recorder. Picture clips were perfectly positioned, but audio sync seemed to come from different sections of the audio files. Inexplicably, when I opened this same Premiere Pro project a day later, the sequence was perfectly in sync. The third day – back to random sync. My suspicion is that the double-system sound files from the Zoom might be the issue here. (EDIT: I did a little more digging and it seems that there is a known issue with Premiere Pro CS5 and AIFF files. Convert the audio files to WAVE format and it appears to fix this problem.)

Premiere Pro writes cache files for each piece of media, including database files and waveform caches. Adobe Media Encoder, Premiere Pro, Encore and Soundbooth share a common media cache database, so each of these applications can read from and write to the same set of cached media files. Premiere Pro also “conforms” all non-standard audio files to uncompressed 48kHz. This includes any compressed audio, like MP3 files, or audio with other sample rates. In the case of the handful of files I’ve been using for these tests, Premiere Pro has already consumed 1.5GB of space for conformed audio – and that’s merely from linking to files that already exist elsewhere on my hard drives. These files had 44.1kHz audio, requiring Premiere Pro to write new 48kHz audio files, which are used in the project. Generally 10GB of free space will be adequate for cache files and preview render files.
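The disk usage is easy to estimate. Assuming the conformed files are uncompressed 48 kHz at 4 bytes per sample (the exact internal sample format is my assumption, not something Adobe documents in the app), the arithmetic looks like this:

```python
def conform_size_bytes(duration_s, channels=2, sample_rate=48000,
                       bytes_per_sample=4):
    """Estimate disk space for conformed audio.

    Assumes uncompressed 48 kHz samples at 4 bytes each; the real
    conform format inside Premiere Pro may differ.
    """
    return duration_s * channels * sample_rate * bytes_per_sample

# One hour of stereo source audio: 1,382,400,000 bytes, about 1.38 GB.
```

At that rate, a little over an hour of stereo source material accounts for the 1.5GB I observed.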


I’ve barely scratched the surface, but you can see there’s a lot in Adobe Creative Suite 5. Aside from my few nitpicks, this is a very healthy upgrade that provides a number of feature enhancements, but truly delivers on performance. Premiere Pro’s Mercury Playback Engine contains over 30 image processing effects that take advantage of the NVIDIA GPU’s CUDA processing power, but you’ll enjoy a significant performance upgrade even with a non-CUDA graphics card.

If you’re choosing a nonlinear editor without any preconceived notions, then clearly Adobe is an outstanding choice on either a Windows or Mac workstation or laptop. In addition, vendors like AJA, Blackmagic Design and Matrox currently (or later this year) will provide CS5-compatible hardware support with their I/O products. Even if you’re happy with another NLE, you’ll find plenty of reasons to pick up CS5 Production Premium and add it to your toolkit.

Written for Videography and DV magazines (NewBay Media LLC).

©2010 Oliver Peters

Avid Media Composer 5

Avid is on a roll in 2010, highlighted by the purchase of Euphonix and the release of Media Composer version 5, its signature creative editing application. The company has been on an accelerated development pace for the Media Composer/NewsCutter/Symphony editing family, with recent releases adding such innovative features as AMA (Avid Media Access), Stereo 3D editing tools and Frame Rate Mix and Match. New features in version 5 (approximately the seventeenth generation of Media Composer) encompass expanded AMA support, the ability to work in RGB color space, in-context timeline editing tools and a redesigned audio framework. This is also the first Media Composer product to formally support third-party monitoring hardware.

AMA (Avid Media Access) expands

AMA is a plug-in API for camera manufacturers that lets Media Composer systems natively open and edit various acquisition formats, without the need to first transcode these files into MXF media. Earlier versions supported Panasonic P2, Sony XDCAM and Ikegami GFCAM media, but AMA in version 5 has become an even more open environment, supporting more native formats than most of the competition. New support has been added for Canon’s XF format and RED camera raw files. The biggest news, however, is that Avid has taken the initiative to natively support QuickTime media. This is vitally important, as Apple’s ProRes codec has been adopted for acquisition on several devices, including the AJA Ki Pro and the new ARRI Alexa digital camera. This openness extends to the H.264 files recorded by HD-capable DSLRs, like the Canon EOS 5D/7D/1D hybrid cameras.

ProRes, RED and H.264 editing was the first thing I tested. Avid’s recommended workflow is to use AMA as a way to cull selects before transcoding the media into the MXF format; however, the performance and stability indicate that it may be viable to stay in AMA for an entire project. To access AMA, you must link to an AMA volume, which can be a drive, folder or subfolder on your system. Unlike simply dragging a folder to your project window, Media Composer’s AMA imports all the camera metadata, where available, into a full-fledged Avid bin. The key difference is that the media is linked outside of Avid’s normal media databases. AMA-linked files are highlighted in yellow in the bin.

RED files come in through the RED SDK, so editors can manipulate the raw color metadata. As with other implementations of this SDK, the data access isn’t as deep as with RED’s own software (for example, no curves). Avid fits these files to an HD frame size at fixed parameters, so there is no adjustable control to scale or crop RED’s 4K images. Avid has added a new source-side reformat setting, so you have the option to use RED’s 2:1 aspect files with either a letterboxed or a center-cut framing inside a 16:9 HD frame.
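The arithmetic behind those two framing choices is straightforward. This hypothetical Python function (not an Avid API; the names are mine) works out the image size, letterbox bars and side cropping when a 2:1 source is fitted to a 1920x1080 raster:

```python
def fit_2to1_into_hd(mode, hd_w=1920, hd_h=1080, src_aspect=2.0):
    """Geometry of letterbox vs. center-cut framing for a 2:1 image
    in a 16:9 HD raster. Illustrative math only; Media Composer's
    actual scaler parameters are fixed and not user-adjustable.
    """
    if mode == "letterbox":
        # Fit the full width; the image is shorter than the raster.
        img_h = round(hd_w / src_aspect)
        bar = (hd_h - img_h) // 2
        return {"image": (hd_w, img_h), "bars_top_bottom": bar}
    elif mode == "center-cut":
        # Fill the full height; the sides are cropped away.
        img_w = round(hd_h * src_aspect)
        crop = (img_w - hd_w) // 2
        return {"image": (hd_w, hd_h), "cropped_each_side": crop}
    raise ValueError(mode)
```

So letterboxing yields a 1920x960 image with 60-pixel bars, while center-cut discards 120 pixels from each side of the scaled frame.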

On my 8-core Mac Pro (12GB of RAM, stock GeForce card), RED files played adequately at a draft (yellow) or medium (yellow-green) video quality setting, but not well at full quality (full green).

My conclusion – as with other native RED implementations – is that you wouldn’t want to edit a complex project using the camera raw files. I still contend that RED projects are best handled in a traditional offline/online editing workflow. Part of the Avid/RED story, however, is support for the RED Rocket card, a custom accelerator board. I couldn’t test that without a card, but Media Composer 5 is supposed to access this hardware (if installed) to vastly speed up transcoding RED files into MXF media.

Other formats provided a more pleasing experience. H.264 Canon files edited and played well, but were a bit clunky when scrubbing through the media files. Far more impressive was working with Apple ProRes media. Scrubbing, playing and editing these files was nearly as fluid as with Avid DNxHD media. It really does mean that you could record with an AJA Ki Pro, open the drive in Media Composer and start editing.

Next, I performed a basic layer test (five tracks – one background and four PIP layers). This dropped frames with five layers of ProRes, but had no problems playing in real-time when I used DNxHD media. This same layer test in Apple Final Cut Pro using the ProRes files performed as well as DNxHD media in Media Composer. The bottom line is that Media Composer 5 now handles ProRes files quite well, but you’ll still get a performance edge with media that is native to Avid.

Timeline enhancements

The biggest changes for veteran Avid editors are a new Smart Tool mode with drag-and-drop capabilities and a new audio track framework. Smart Tool offers contextual timeline editing functionality, for behavior more like Final Cut, Vegas Pro or Premiere Pro. When you hover over portions of a timeline track, Media Composer automatically enables certain segment editing modes. When you get close to a cut, a trim tool is automatically enabled. It’s easier to perform direct edits within the timeline without first entering a special mode. This behavior is optional, controlled by a new Smart Tool palette, thus giving editors two styles of working.

The audio side of Media Composer 5 has gained significant features from its Pro Tools sibling. I’ve harped on this for years, so kudos to the Avid designers and engineers involved in this effort. Media Composer now has both stereo and mono audio tracks and adds real-time, track-based audio plug-ins.

In fact, two Pro Tools plug-in formats are supported for the best of both worlds. Audiosuite filters can be applied to and rendered with individual clips (as before). Now real-time RTAS filters can also be applied to an entire audio track, as is common in DAW software, like Pro Tools. Up to five RTAS plug-ins can be applied per track. Real-time performance is based on the horsepower of your machine, so applying a handful of RTAS filters should be no problem, but if you had 16 tracks, each with five filters applied, it could be a different matter.
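To make the per-track limit concrete, here is a toy model in Python of a track that accepts at most five real-time inserts. The class and method names are mine for illustration, not part of any Avid API:

```python
class AudioTrack:
    """Toy model of Media Composer 5's track-insert limit:
    up to five real-time (RTAS) plug-ins per audio track."""

    MAX_RTAS_INSERTS = 5

    def __init__(self, name):
        self.name = name
        self.inserts = []

    def add_rtas(self, plugin_name):
        # Refuse a sixth insert, mirroring the five-per-track ceiling.
        if len(self.inserts) >= self.MAX_RTAS_INSERTS:
            raise RuntimeError(
                f"Track '{self.name}' already has "
                f"{self.MAX_RTAS_INSERTS} RTAS inserts")
        self.inserts.append(plugin_name)
```

Sixteen such tracks fully loaded would mean 80 simultaneous real-time filters, which is where CPU horsepower becomes the practical limit.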

Avid currently only qualifies the RTAS filters that ship with Media Composer 5. It’s up to the third-party developers to qualify their own RTAS filters for Media Composer. I encountered existing RTAS versions of my BIAS plug-ins that had been installed as part of a past BIAS Peak Pro installation. These were in an existing plug-ins folder (in the application support files) and showed up in the Media Composer 5 effects palette. Unfortunately they didn’t work correctly. My point is that you might unknowingly have some existing RTAS plug-ins installed on your system from other unrelated audio software. These filters may not be fully compliant and should be removed.

Third-party hardware

Avid Media Composer editors have been screaming for I/O options outside of Avid’s proprietary hardware solutions. Media Composer 5 opens that door ever so slightly with the qualification of Matrox’s MXO2 Mini as a monitoring solution. You can still operate with Avid hardware, including Mojo, Mojo SDI, Adrenaline, Mojo DX and Nitris DX, but the Mini addresses the needs of file-based workflows, where tape ingest is of little importance. Naturally, users hope this will be broadened to include full support of all Matrox, Blackmagic Design, AJA and MOTU products – not to mention the newly acquired Euphonix Artist Series controllers. For now, the Mini is a good first step.

I tested Media Composer 5 on my MacBook Pro with a Matrox MXO2 Mini and the system worked as advertised. The same Matrox MXO2 Mac drivers (1.9.2 or higher) work for both Apple Final Cut Pro and Avid Media Composer 5. Be sure to install (or re-install) the MXO2 software after Media Composer has been installed. If done correctly, a button on the timeline toolbar toggles between 1394 and MXO2. Select MXO2 and video output passes through the Mini. The MXO2 Mini features HDMI and analog (composite, S-video or component) output in both SD and HD formats. It will also up-, down- and cross-convert the video, but Media Composer 5 presently doesn’t support ingest through the Mini.

Avid’s own solutions, like the Avid Nitris DX hardware, do provide some performance boosts, thanks to hardware scaling of thin raster formats, like HDV and DVCPRO HD, along with hardware decoding of the DNxHD codecs. You will need a Nitris DX to take full advantage of Media Composer 5’s support of the HD RGB color space. If you are working with HDCAM-SR 4:4:4, a dual-link SDI connection (available on Nitris DX) is required for ingest.

Final thoughts

Avid provides both Mac OS and Windows installers with the same purchase, but OS requirements have tightened. Media Composer 5 will run on Windows XP (SP3, 32-bit), Vista (SP2, 64-bit) or Windows 7 (64-bit). Mac users must upgrade to “Snow Leopard” Mac OS 10.6.3. The boxed, retail version of Media Composer 5 ($2495) includes Production Suite (Avid DVD, Avid FX, BCC filters, Sorenson Squeeze and SmartSound Sonicfire Pro), worth $3800 MSRP if purchased individually. The download version of Media Composer 5 is less ($2295), with the option to purchase Production Suite separately ($295). Avid doesn’t market Media Composer 5 as a “studio”, “suite” or “collection”, but the total package offers more than just an editing tool.

Media Composer 5 is a powerful upgrade to an industry-standard NLE, but I hope that new attention will be paid to the interface on the next revision cycle. The Smart Tool palette was placed into the timeline window with no ability to hide or move it. The custom color options have been streamlined, but you can no longer alter button colors, shapes or highlights. Avid has added a UI brightness slider (similar to Adobe’s applications), but you don’t get lighter text on a darker background until the darkest setting. A medium grey background leaves you with dark text that’s hard to read. Interface design is important to many editors and customization has been a high point for Avid, so I was disappointed to see these changes.

There are plenty of other small and large improvements that I haven’t included: film-based metadata enhancements, support for AVCHD import, capture to the DVCPRO HD or XDCAM HD50 codecs over HD-SDI and e-mail notification of completed renders, just for starters. It’s clear that Media Composer 5 is a milestone software release for Avid. From native camera formats to RED and QuickTime support through AMA to unique editing tools, like Stereo 3D – this is a powerhouse post production solution. Avid editors will love it, but even those using competing tools are bound to look at Media Composer 5 with renewed interest. There is simply no other NLE that packs in as many creative features as Avid Media Composer 5.

Written for Videography and DV magazines (NewBay Media LLC).

©2010 Oliver Peters