Are we there yet?

The filmmaker’s guide to budgeting post

 

I’m often asked, “How much does it cost to produce a film?” That’s like asking, “How much does it cost to make a car?” You can’t answer that without a lot of specifics. Instead, I’m going to address the issue of posting a film in terms of time, since time is money. It has been said that posting a film is like having a baby – about 9 months. Others advise that a film is never finished – merely abandoned. There’s truth in both statements, but I’d like to break it down in detail. When I managed post facilities, one of my tasks was to budget the post for TV series and feature films for our clients. This has helped me develop some useful guidelines.

 

The estimates I’ve worked up are based on posting half-hour, single-camera, film-style dramatic television series. This formula will also apply to most small-budget films without extensive visual effects. In other words, a film like Garden State and not like The Dark Knight. In half-hour shows, you’re producing about 20-22 minutes of content and delivering a show a week. Each episode is shot in one week, the cut is locked in the second and the audio editing, mixing and finishing happen in the third. Typically, two or three editors alternate episodes, but the rest of the post crew, like the audio editors, are turning around a new show each and every week. Based on this experience, here are some guidelines for posting the basic 100-minute, small-budget feature film.

 

Telecine

 

Most films are still shot on FILM, so there’s a telecine (film-to-tape transfer) process involved. Most shoots generate about an hour of raw footage a day, which is processed overnight and transferred the next day. This is either a “one-light” (one setting for the whole reel based on a reference film) or “best light” (colorist provides some overall, but minimal, subjective adjustment to the scenes) transfer. If the budget permits, the colorist will also sync double-system sound, so that the dailies reels have sync sound and picture. Depending on these various factors, dailies transfers take between 1.5 and 4 times the running length of the footage. If you are shooting in the same town as the lab and transfer facilities, you’ll see dailies on videotape or DVD in time for lunch the day after the shoot. If it’s not in the same town, then figure at least 48 hours later.
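
To put that dailies math in concrete terms, here is a quick back-of-the-envelope sketch in Python. The figures are only the ballpark numbers mentioned above (an hour of footage a day, transfer ratios of 1.5x to 4x), so treat it as an illustration rather than a quote from any particular facility.

    # Rough dailies estimate using the ballpark figures above.
    footage_hours_per_day = 1.0    # roughly an hour of raw footage per shoot day

    for label, ratio in (("one-light", 1.5), ("best light with audio sync", 4.0)):
        room_hours = footage_hours_per_day * ratio
        print(f"{label}: about {room_hours:.1f} hours of telecine per shoot day")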

 

This process is reduced or eliminated when you shoot with an F900 or VariCam, but not if you shoot with the RED One. The workflow of dealing with camera raw digital images is a lot like shooting film. Prepping dailies and getting footage ready for the editor follows similar steps, such as a “best light” correction and rendering to an editable format. This rendering varies with the software and the processing horsepower, but figure 6-to-1 or higher. Assuming an hour of content a day, do-it-yourselfers will need to budget a separate, dedicated workstation simply to turn around RED files for the editor.
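
The same arithmetic explains why a dedicated render station makes sense for RED. The sketch below assumes an hour of content a day and the rough 6:1 rendering ratio noted above; your actual ratio depends on the software and the horsepower of the machine.

    # Estimate how long one day's camera raw footage ties up a render workstation.
    footage_hours_per_day = 1.0    # about an hour of content per shoot day
    render_ratio = 6.0             # roughly 6:1 (or worse) render time vs. running time

    render_hours = footage_hours_per_day * render_ratio
    print(f"about {render_hours:.0f} hours of rendering per shoot day")
    # At 6:1, one hour of dailies consumes most of a working day on a single
    # machine, so that machine can't also be the editor's workstation.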

 

Editing

 

The editing process involves three phases – the editor’s first cut, the director’s cut and the locked cut. On many films, the editor is cutting during the actual production, working a few days behind any given point in the script. This lets the editor flag story and continuity issues early, when it’s easy to shoot additional coverage. The disadvantage to this approach is that the editor and any assistants are on the payroll for the entire 30-60 day shooting schedule and must be moved and housed as part of any crew location move.

 

The alternative is for the editor to come on board when all the production is completed and start with a fresh, objective point-of-view. In either case, the editor’s goal is to construct a film based on the scripted story and the coverage that was shot. You have to be faithful to the script and the director’s initial instincts. Good editors will do the best job they can at this point to create a tight, polished first cut. Using my half-hour TV show yardstick, the first cut of a 2-hour film can often be delivered in about 5 weeks. If your first cut runs closer to 5 hours, there’s that much more footage to wade through and getting to a first cut will take correspondingly longer. If the editor has been cutting during the production, then he or she may be ready to show a first cut as early as a few weeks after the end of filming.

 

Next comes the director’s cut. Most directors will give an editor the space needed to get to a first cut and then sit in daily to get to the director’s cut. DGA rules give the director the right to take up to 10 weeks for the director’s cut. Some directors will take this and others won’t. It all depends on how much “help” a film needs during editing to make it better. If the producers are generally happy with the director’s cut, then the editing is more or less finished, except for some final polishing.

 

If the producers hate the director’s cut, or it runs much longer than the producers believe they can market, or focus group screenings point out some problems, then more editing takes place to get to the locked cut. It’s impossible to predict how long that will take; it tends to vary with the size of the “committee” that’s involved. Editing a feature can take anywhere between 8 weeks and 12 months, but for most small indies, budgeting 12-16 weeks (for a locked cut) is a pretty safe estimate.
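
Pulling the editorial phases together, a simple scheduling sketch shows how that 12-16 week figure falls out. The durations below are assumptions based on the typical ranges discussed above, not a formula; every film is different.

    # Rough editorial calendar for a small indie feature (assumed typical values).
    first_cut_weeks = 5            # editor's first cut of a roughly 2-hour film
    directors_cut_weeks = (4, 7)   # many directors take less than the full 10 DGA weeks
    polish_weeks = (3, 4)          # producer notes, screenings and final tweaks

    low = first_cut_weeks + directors_cut_weeks[0] + polish_weeks[0]
    high = first_cut_weeks + directors_cut_weeks[1] + polish_weeks[1]
    print(f"locked cut in roughly {low}-{high} weeks")   # lands in the 12-16 week range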

 

Audio

 

In the age of project studios, audio post on a film is vastly underestimated. A great picture will be dragged down by a mediocre track, but a great track will often help overcome visual problems, such as substandard image quality. Audio post consists of ADR (automatic dialogue replacement or “looping”), dialogue editing, “group loops”, sound effects editing, Foley sound effects recording, the music score and the final mix. Audio post normally starts with a “spotting session” after the cut is locked. Here, the producers, director and editor meet with the audio department and review the film from top to bottom, for the sole purpose of identifying specific sound requirements. Notes are generated that become the template for the next several weeks of work for the sound designers and audio editors.

 

ADR is used to re-record poor quality location dialogue lines. If the location wasn’t a challenging environment and the mixer was doing a good job, ADR will be minimal. Often ADR is done on location, but if not, it’s generally booked and recorded in a studio during post. It can be done before or after the cut is locked. A good rule of thumb is to budget about 10 days for all the actors in the film. The key to successful ADR is obviously lip-sync, but also matching the mics and sound quality of the actor’s lines when they were originally delivered on location.

 

“Group loops” – also called crowd “walla” – are recordings of a murmuring crowd. This can be kids in a school lunch room, soldiers in battle, background office voices or any other scene requiring the ambience of human background voices. It’s generally recorded in a studio and can usually be knocked out in a day. Sometimes these sounds are made up of nonsense words, but other times, there are specifics, like a PA announcement: “paging Dr. Smith, paging Dr. Smith…”

 

Dialogue editing is required to take the audio from the creative cut and get it ready for a mix. The dialogue editor will make sure all audio edits are smooth. Depending on the quality of the audio coming from the picture editor and the budget, the dialogue editor may go back to the original sound recordings and reload and sync those to maintain the best quality. Dialogue editors will also add ambiences and room tone recorded on location to help hide any mismatches between takes.

 

One big task is to split out all of the audio. This means that the voice for each character in a scene is isolated and moved to an individual track. This facilitates mixing, because the re-recording mixer might choose to equalize one voice differently from the others. The dialogue editor might start with 2 tracks of dialogue and end up delivering 8 to 24 tracks of dialogue for the final mix. In general, dialogue editing takes about 4-5 weeks for a 100-minute feature.

 

Sound effects editing or sound design is what makes a film like Star Wars, Apocalypse Now or WALL-E. The obvious part is enhancing any practical sounds from the actual location recording that weren’t sufficiently dramatic. Some are obvious, like car explosions, but others are more subtle – like the lapping of water on the side of a canoe. Sound editors use real location recordings, stock sound effects libraries and even unusual items to fill out a soundtrack. In the case of sci-fi and horror films, unusually-generated sounds are the norm. For example, the drone inside a laboratory, the zaps of a superhero and so on. Like dialogue editing, sound effects editing / sound design takes about 4-5 weeks as well.

 

Foley sound effects recording is often a two-person process, requiring a recording engineer / audio editor and a Foley “walker” or “artist”. Foley is the art of having humans create live sound effects in sync with the picture. Foley can overlap with the other sound design being done on the film, so it has to be made clear during the spotting session where the division of labor occurs. The obvious Foley examples are footsteps in a scene, or punches to a body in a fight, but others may include the rustle of clothing, a kiss or a drink being sipped. If your film is going international, then the Foley crew also has to re-create sound effects that already exist in the location recordings. This would be the case if those sounds were recorded under dialogue lines and would be lost when foreign language dialogue is added to replace the domestic dialogue tracks. A good Foley team (artist and editor) can cover all the Foley needed for a film in about 3 weeks.

 

The musical score can be the most memorable part of a film. This task falls to the composer, who will create an original score – or in some cases a music editor, when the score is made up of commercial recordings. The composer might be involved from the beginning or may step in once the cut is locked. Under the best of circumstances, a composer will deliver a complete score that is locked to picture and simply has to be placed and mixed. The worst case is when audio is delivered in various pieces and song elements and a sound editor has to edit and place these into the scenes. Assuming that the composer you pick uses his own project studio and doesn’t have to book a symphony orchestra, then you can expect about the same 4-5 week schedule as the other audio segments.

 

The mix is where it all comes together. In most films, each of the audio steps I’ve outlined is performed by different people. The benefit of this parallel processing is that, hopefully, 6 weeks after you’ve locked the cut, your film is ready to mix. Presumably during that time, the director and producers have heard, adjusted and approved the various sound elements (dialogue, effects, score), so that nothing will be a surprise when these are heard in the mix. The mix stage (also called the dubbing stage or re-recording stage) is there to blend and balance, not to make editorial changes. Of course, that DOES happen, so make sure that the mix stage you use allows for quick tweaks.
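
Because those audio departments work at the same time, the calendar is set by the longest single task, not by the sum of all of them. Here is a small sketch of that reasoning, using the rough durations from the sections above as assumed values:

    # Parallel audio post: the schedule is driven by the slowest department.
    tasks_weeks = {
        "dialogue editing": 5,
        "sound effects editing / sound design": 5,
        "Foley": 3,
        "score": 5,
    }
    parallel_weeks = max(tasks_weeks.values())      # everyone works simultaneously
    sequential_weeks = sum(tasks_weeks.values())    # what it would take one after another

    print(f"elements ready about {parallel_weeks} weeks after lock (vs. {sequential_weeks} weeks sequentially)")
    # Add a little slack for spotting, reviews and fixes and you land near the
    # 6-week figure above.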

 

Modern mixing is often done inside a workstation, like Pro Tools. The best of all worlds, however, is to have a Pro Tools-equipped mixing stage with the outputs of one or more workstations feeding a larger automated mixing console, such as a Digidesign ICON. Each portion of the soundtrack – dialogue, sound effects, music – can take up 24 or more tracks. It’s very easy to see how a film mixing console might need to accommodate over 100 individual tracks of sound elements. Most mixes are manned by 2 or 3 re-recording mixers, with the person responsible for the dialogue tracks taking the role of the lead mixer. A 3-mixer crew used to be the norm; however, mixes can be done by only one person, as well. Most modern rooms typically use 2 mixers. Television mixers can generally do a half-hour show in a day or two, so figure a couple of days per reel (20 minutes) for an indie feature. By this measure, you should estimate at least two weeks for the final mix. If there are other versions, like surround versus stereo, or “sanitized” dialogue versus R-rated, then budget at least three weeks of time.
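
The mix estimate itself is simple reel arithmetic. This sketch just works through the rule of thumb above (a couple of days per 20-minute reel); the exact day count will vary with the mixers and the material.

    import math

    # Mix time from the "couple of days per reel" rule of thumb.
    runtime_min = 100
    reel_min = 20
    days_per_reel = 2

    reels = math.ceil(runtime_min / reel_min)   # 5 reels for a 100-minute film
    mix_days = reels * days_per_reel            # about 10 working days
    print(f"{reels} reels, roughly {mix_days} working days (about two weeks) for the main mix")
    # Budget roughly another week if you also need alternate versions
    # (stereo vs. surround, "sanitized" dialogue, etc.).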

 

Finishing

 

When we left our picture, the cut had been locked, but that doesn’t mean it’s ready to deliver. During the time all the audio work is going on, you will also be finishing the picture portion of the film. Hopefully, this will all be done in time for the mix, which would allow you to mix to the real, final image. There are many steps in traditional film finishing (negative cutting, opticals, etc.) that I won’t cover here. Odds are these will take a bit longer than the time budgeted for sound. If you are doing a video finish or a DI, then the process is faster.

 

If you shot film, then high-quality transfers have to be made, either to videotape (HDCAM-SR, HD-D5) or as scanned files (2K, 4K DPX). Transferring just the selected footage will take at least a week. This footage will be ingested or loaded into a DI system and matched (conformed) to a reference of your locked cut. This process also takes about a week to load, check and adjust each shot. If you shot with an HD camera, like an F900, then the week of transfer/scanning can be skipped, but if you shot with a RED One, simply replace the film workflow with the camera raw workflow associated with RED.

 

Within the finishing system – which could be an NLE like Avid, FCP, Smoke or Quantel, or a DI system like Scratch, Nucoda or daVinci – you will handle color grading and versioning. Budget about a week of color grading and another week of rendering and exports. A simple film shot with an F900 could be banged out in a total of one week if you’re doing all the grading inside FCP or Avid, but don’t cut yourself short at the budgeting stage.
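
Added up, the finishing calendar for a film-originated project looks something like the sketch below. These are assumed, typical durations; as noted above, a simple F900 project graded entirely inside the NLE can compress the whole thing into about a week.

    # Rough picture-finishing calendar for a film-originated indie feature.
    steps_weeks = {
        "transfer / scan the selected footage": 1,   # skipped for HD-camera originals
        "conform to the locked cut": 1,
        "color grading": 1,
        "rendering and exports": 1,
    }
    total_weeks = sum(steps_weeks.values())
    print(f"about {total_weeks} weeks of picture finishing")
    # That fits comfortably inside the audio schedule, so the final image
    # should be ready in time for the mix.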

 

Hopefully, I’ve illuminated some of the pieces of the puzzle. Remember that the budget doesn’t end when the shooting wraps. You must have time and money left over to complete the project. Money is negotiable and you can often cut great deals that help you deliver a lot of quality on the screen. You can work the price, but don’t cut yourself short on time. Time equals quality more so than money!

 

©2008 Oliver Peters

Avid’s New Thinking and the DX Hardware

Right on schedule, Avid rolled out its new DX product line in June. This was previewed at customer events before and during NAB as the tangible part of the company’s “New Thinking” campaign. It is typified by new hardware, lower prices and a simplification of the core editing product line, including the end-of-life for most of its DNA (digital nonlinear accelerator) products, launched in 2003. With DX, Avid has abandoned the use of the FireWire bus and older PCI-X slots for the much wider data path offered by the PCI Express (PCIe) computing architecture favored by Apple and HP. The new editing family now consists of the Media Composer software, two hardware/software bundles (Mojo DX and Nitris DX) and the turnkey Symphony Nitris DX system.

 

In the box

 

Avid Media Composer 3.0 hit the street at a price of under $2500 – literally half off of the previous MSRP. As a software product, it begs direct comparison to the suites offered by Adobe and Apple. Media Composer previously shipped with some third party applications, but Media Composer 3.0 is a powerful collection that rivals the competing suites. Along with Avid’s own utilities like EDL Manager and FilmScribe, you get solutions for encoding (Sorenson Squeeze), standard-def and Blu-ray high-def DVD authoring (Avid DVD by Sonic), music scoring (SmartSound Sonicfire Pro with two library discs) and compositing (Avid FX and Boris Continuum Complete filters). Avid FX (originally offered as part of the optional Avid Toolkit) is the AVX plug-in version of Boris Red, so you get as much compositing horsepower as After Effects or Motion right inside the NLE host. Another important Avid application included is MetaFuze, which is designed to convert file-based media, such as DPX and TIFF image sequences, into Avid-compatible media. MetaFuze is a key application for editors using Media Composer in a DI environment.

 

Avid Media Composer 3.0 sports minor cosmetic changes to the interface and performance tweaks, along with two new effects – a real-time timecode generator and subtitle captioning. Neither of these counts as a big marketing bullet-point, but they will rate high with every editor, because they improve daily productivity. The most impressive change, however, is under the hood. The code to process the video effects pipeline has been rewritten and is now optimized for multi-core processing and the power of modern graphics cards. Avid operates in four video quality modes identified at the bottom of the timeline by a green and/or yellow button: full 10-bit, full, draft or best performance. The two highest settings are full-quality modes that you can use to master to tape, while the lower two are best when you need to preview a very complex effects composite and want to see maximum real-time playback, but with a softer image. It is important to understand that unlike other software NLEs, when a Media Composer real-time effect plays in the full quality mode, it is indeed just that and does not require rendering.

 

The DX hardware

 

The key new products are the Mojo DX and Nitris DX interfaces. Both use a 4-data-lane PCIe card installed into the workstation, which connects to the external break-out box with a tether cable. Avid offers an optional ExpressCard/34 laptop card, so both Mojo DX and Nitris DX can also be used in the field. Thanks to PCIe’s wider data path, Mojo DX and Nitris DX capture and play back most SD and HD formats over SDI/HD-SDI. Mojo DX offers digital I/O (plus analog audio monitoring) while Nitris DX sports a wider range of digital and analog connections. Both include an HDMI port for monitoring, freeing SDI/HD-SDI for VTRs or digital routers. A lot of attention was paid to the J-K-L transport ballistics and this new hardware feels as responsive as the old, reliable Meridien Avids. The FireWire latency of Adrenaline that drove many editors up a wall is a thing of the past. Nitris DX also includes an embedded DNxHD hardware codec, but on a fast machine both DX units will let you capture uncompressed or DNxHD-compressed high-def video. With a Mojo DX, the computer’s CPU has to carry the compression load for DNxHD, while Nitris DX offloads this function to its internal hardware.

 

Pick your raster

 

The most visible part of this new pipeline is the ability to select between standard and thin raster format settings. This toggle affects both the hardware and the software. For example, the standard display raster for 1080i is 1920×1080 pixels, but if you are shooting HDV, the actual recording is only 1440 pixels wide. In a project that exclusively uses native HDV footage – or for that matter XDCAM-HD or DVCPRO HD – you can choose to edit in the thin raster size of 1440×1080. When you do that, the entire video pipeline operates at that size, so no GPU or CPU cycles are wasted during real-time or rendered compositing, simply to expand 1440 pixels out to 1920 pixels and then back. Everything is done internally at the thin size and then sent to Mojo DX or Nitris DX, which employs hardware scaling to expand the outbound image back to a standard size of 1920×1080. This is most useful when your footage is predominantly one format. If you have a timeline mashing up different native formats and sizes, the hardware scaling offers less of an advantage, since the scaling of any mixed-format composites has to be done in the software first.
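
The savings are easy to quantify. This quick sketch just does the pixel math for a thin raster versus a full 1080 raster:

    # Pixels per frame: thin raster (HDV/DVCPRO HD/XDCAM-HD) vs. full 1080 raster.
    full_raster = 1920 * 1080   # 2,073,600 pixels
    thin_raster = 1440 * 1080   # 1,555,200 pixels

    savings = 1 - thin_raster / full_raster
    print(f"thin raster touches {savings:.0%} fewer pixels per frame")   # 25%
    # Every real-time or rendered composite processes a quarter fewer pixels,
    # with the DX hardware scaling back up to 1920x1080 only on output.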

 

Avid Media Composer integrates one of the best on-the-fly format conversions. You can freely intercut 480, 720 and 1080 formats (with compatible frame rates) on the same timeline and they will be scaled to a common target output – up, down or cross-converted – in real-time. You can even layer clips of different formats over each other with full quality playback! When you master to tape via HD-SDI or SDI out of Mojo DX or Nitris DX, the final scaling of a mixed-format timeline is handled by that hardware.

 

Real world performance

 

I ran a number of tests with various Symphony and Media Composer configurations on octo-core (3GHz) MacPro and HP workstations, as well as a custom quad-core (2.66GHz) GoFlex317HD laptop provided by 1 Beyond Systems. My tests included the typical layering benchmark, where you stack a bunch of tracks with 2D PIP effects and see when the system starts to drop frames. Most editors don’t spend a lot of time building flying boxes, so other tests were more true to life – mixing formats, adding color correction, transitions and titles. The DX hardware does not accelerate effects – only output scaling and DNxHD encode/decode in the case of Nitris DX. Therefore, results were pretty similar between the two units and even in a software-only mode, which means that most of the horsepower comes from the new programming and optimization.

 

Most of the systems played back full quality 1080i or 1080p content during the layer tests up to about the fifth track of video before dropping frames. Simple 2D PIP effects are a “friendly” test for a system that optimizes GPU power, but as expected, the performance drops when you add 3D DVE rotation or a drop shadow. A more accurate real-world test was one of the mixed format tests. In one 1080i HDV sequence, the system was able to handle a single track with real-time color-correction, dissolves and titles without dropping frames. Even the laptop let me add two alternating tracks of 720p/23.98 content with 2D PIP effects over a background of 1080p/23.98 and never dropped frames in the full quality mode. These results exceed the performance of Avid Media Composer Adrenaline or an Apple Final Cut Pro system with an AJA or Blackmagic Design card.

 

Real-time performance is nice, but even better is vastly improved rendering. Most of my timelines involved eight or nine layers for 30 to 40 seconds. All of these machines (MacPro, HP or GoFlex317HD laptop) took between one and two minutes to render each test, although render times naturally increased when 2D effects were changed to 3D. As with real-time performance, Media Composer 3.0 won the speed test when compared with similar sequences using Final Cut Pro 6.0.3 running on a quad-core (3GHz) MacPro.

 

Is it enough?

 

Avid is slugging it out for the hearts and minds of professional editors. The Avid DX equipment is a big improvement over DNA, but most of the power is in the software. You could argue that makes DX a pricey alternative to AJA, Matrox or Blackmagic Design hardware, but the value is in the total package. If you want to run hardware with Media Composer 3.0, you have to buy Avid hardware. If you own any viable editorial business, then the difference in the comparative costs over a three year period is inconsequential. So, if you prefer editing on an Avid, then there’s really no financial reason not to.

 

I’m less enthused about Symphony Nitris DX. On the plus side, Symphony is back on the Mac; but, for the first time in Symphony’s existence, there is no hardware advantage for the product over Media Composer. It’s the exact same Nitris DX hardware. Symphony adds Universal Mastering (the ability to generate NTSC or PAL from 480p/23.98 or 1080p/25 from 1080p/23.98 media files) and advanced color-correction with source-based and secondary grading. Both are nice features, but it’s hard to justify the nearly $14,000 difference between turnkey versions of Media Composer Nitris DX and Symphony Nitris DX.

 

The Avid DX products are very solid and hopefully the start of more great things to come. Aside from focusing on real-time performance, remember that Media Composer continues to offer many advanced features, like ScriptSync, an integrated (and newly improved) 3D DVE and robust media management. The changes in Media Composer code have laid the groundwork for even more features to be added at a swifter pace. Avid Media Composer remains the most feature-laden, powerful NLE on the market – at the most affordable price ever.

 

Written by Oliver Peters for Videography magazine and NewBay Media, L.L.C.

Encounters at the End of the World

Apple has made major strides in getting the professional film and video community to accept Final Cut Pro as a worthy tool, but what about other portions of Final Cut’s software suite? Notably Color, which was originally developed as Final Touch by Silicon Color and integrated by Apple into Final Cut Studio 2 in 2007. Is a desktop tool like Color up to the task of being the hero grading tool for a major film release helmed by a world-renowned director? I have written about Color in the past, including its use in film as Final Touch, but the recent release of Encounters at the End of the World has given me an opportunity to revisit the topic.

 

Encounters at the End of the World is the latest release by noted filmmaker Werner Herzog. The German-born director has produced, written and directed more than forty films and his creative efforts have extended into books and operas, as well. Herzog is the documentarian who brought us Grizzly Man, the story of Timothy Treadwell – a man who thought he could safely live among bears and in the end was eaten by one. In Encounters at the End of the World, which was sponsored by Discovery Films, Werner Herzog turns his eye towards the McMurdo Research Station, the largest settlement of humans in Antarctica. This isn’t a story about cuddly penguins, but rather, Herzog’s subjective look at life in Antarctica and the people who live and work there.

 

A Final Cut finish

 

The film was edited on Final Cut Pro by Joe Bini, who has worked with Herzog on numerous films, including Grizzly Man, Rescue Dawn and The Wild Blue Yonder. He is currently cutting Herzog’s latest, Bad Lieutenant: Port of Call New Orleans. Since the cut had been done in Final Cut Pro, it made the most sense to also finish the film using Final Cut. Through a client referral, that task landed at Alphadogs – a Burbank post house that specializes in Avid and Final Cut Pro finishing services for TV shows and indie features. Alphadogs also posted Tom Petty and the Heartbreakers: Runnin’ Down A Dream. One of the folks working on both of these projects was Brian Hutchings, a freelance LA colorist who handles grading in daVinci, Avid and Apple Color. You may not know it, but you’ve probably seen Brian’s work in the past as the colorist who graded the Oscar-opening tribute clips featuring Billy Crystal. Brian and I chatted recently about using Apple Color to grade Encounters at the End of the World.

 

Although this is a documentary “film”, Encounters at the End of the World generally consists of video sources. Hutchings explained the workflow, “This film uses a mixture of formats, but all the original content that Herzog’s crew shot in Antarctica was recorded in high-def on the Sony XDCAM-HD optical disc format. They felt that the harsh climatic conditions would really impede any other mechanical videotape camcorders and that XDCAM-HD was the best way to go. All the XDCAM-HD material was transferred to DVCPRO HD tape in order to get the file-based media into a more familiar tape-based workflow. Alphadogs handled the conform from Joe Bini’s offline edit, as well as titles, effects and the integration and reformatting of archival footage. The DVCPRO HD tapes were conformed in uncompressed 10-bit HD, which is what I worked with inside Color at Alphadogs. The final output was mastered to HDCAM-SR [1080p/23.98] for delivery.”

 

The director’s point of view

 

Werner Herzog is a director who presents his material from a certain perspective and as with any film, color grading becomes a way to help convey mood and attitude. Hutchings explained what it was like to work on this project. “Antarctica is really a world all its own and this is Herzog’s look at that world. My job is to try to help a director express his or her ideas. This was a maiden voyage with Color for both Bini and Herzog. Typically Joe would give me the guidelines for how scenes should look. He knows Herzog very well and that gave me a good starting point. I would then be on my own to do the color grading. Herzog would come in and review my work for a day and then I’d make the necessary tweaks. Since Bini knows Herzog so well, the starting point I was given by Joe would typically be very close.”

 

“One of the big differences compared to a daVinci,” Hutchings continued, “is that grading in Color requires rendering. You can’t immediately view the results in context in real-time with sync audio. Herzog understood this and was willing to offer the necessary flexibility. We generally followed the process I now encourage with all of my clients. That is, the client sits in for the correction, we render overnight and then review and make any changes the next day. daVinci is still better if you are on a time-stressed project, but Color can save you a lot of money if you can work within its boundaries. The real-time performance of a daVinci isn’t always required.”

 

Hutchings went on to explain about executing Werner Herzog’s vision, “My job is to visually support the vision of the director. Antarctica is a very harsh environment and color grading helped to enhance this mood. The underwater scenes within the film are spectacular and unprecedented, showing unusual ice cliffs and sea creatures. These are very unique images and Herzog really wanted to bring out the blues in these shots. They are really vibrant and I’m not sure I would have naturally gone in that direction; but, after I saw the whole film in context, I could really see what Herzog was going for. Another example of a different vision is inside the building where hydroponics are grown. They use special lights, which create a rather surreal environment. Rather than try to normalize these shots, we opted to play with the look instead – to end up with a result that was more visually interesting.”

 

Using Color’s tools

 

Apple Color uses a tabbed interface divided into Rooms for Primary color correction, Secondaries (up to eight layers), Color FX (filters) and Geometry (sizing and repositioning). No two colorists approach these tools in the same way, so I asked Brian about how he worked with Color. “I know a lot of editors who do color correction that like curves, but I’m a lift/gamma/gain-type of guy. Coming from my daVinci background, I do 99% of my work in the primaries and then use the secondaries for tweaking, like improving skin tones. Antarctica is not California, so you don’t want people to have cheeks that look too rosy! I use the tracking in Color like Power Windows in daVinci.”

 

Even though Encounters at the End of the World was ultimately recorded to film, Hutchings handled all the color grading in a standard video color space. Brian explained how he dealt with that, “I rely on scopes. These are truly essential as we transition from a CRT to an LCD world. I generally grade by the scopes, because I know how that information will translate to different displays. Then a properly calibrated monitor just becomes confirmation for me. I typically work in standard Rec. 601/709 video color space without any special look-up tables [LUTs]. Most labs have their own special LUTs to adjust for proper film-outs anyway. In the case of Encounters, I didn’t know while I was grading it that in the end it was going to be recorded out to film. The correction turned out well, but generally when I know something is going to film, I tend to be a bit more conservative in my grading, making sure not to crush the blacks or blow-out the highlights. This way you protect for shadow and highlight detail, which can still be adjusted before recording to film.”

 

Encounters at the End of the World was recorded to film at Fotokem from Alphadogs’ HDCAM-SR master. Fotokem made minor, final color adjustments for the proper film color space before recording to a film negative.

 

 

Tips for working with Color

 

Brian and I wrapped up the conversation with some tips to help producers get the most out of working with Color. Brian offered these suggestions, “I’ve worked with Color quite a bit now and have a recommended routine that I try to follow. As a daVinci colorist coming from tape, Color’s media management was something I had to get used to. Plus Color’s interface is very un-Mac-like. So now I use a sort of pre-flight scenario. You want to plan time at the front of the process to anticipate and fix any problems. For example, Color has trouble with Final Cut’s multi-clip sequences and shots with speed adjustments. These have to be fixed or ‘baked-in’ before sending the project to Color. You can’t run long sequences through Color. The 90 minute timeline for Encounters at the End of the World was broken into five separate reels.”

 

“Make sure that you budget enough time to prep, grade, render, review, tweak and re-render. As a freelancer it becomes a bit tougher to make sure this is properly scheduled. daVinci is real-time, so I can see the corrections and make sure they are right as we lay off to tape, but Color requires rendering. The producers need to budget the time so that I can check the project after the render and roundtrip back to Final Cut to make sure that all the correction actually ‘stuck’. Encounters at the End of the World was graded in a couple of weeks. Because the producers were flexible with the schedule, they were able to benefit from the advantages offered by a software-based, desktop solution.”

 

Written by Oliver Peters for Videography magazine and NewBay Media, LLC

Plug-ins to Enhance Your Creativity

Effects filters and plug-ins are a way to enhance your creativity and expand the toolset that you can offer your clients. Effects packages are the way to “pimp your NLE”. Here’s a look at three useful options.

 

A. Noise Industries FxFactory 2

 

Final Cut Pro introduced the ability for editors to create and modify effects using Apple’s FxBuilder, which can be found under the Tools menu. Most of the free or low cost FCP plug-ins available on the web were created using this method. When Apple changed to Mac OS X and in particular Tiger (Mac OS 10.4), they added a powerful built-in technology called Core Image, which could be used to benefit video applications. Core Image uses the OpenGL power of modern graphics cards to manipulate screen images in interesting ways and in real-time. A simple example of this in daily use is the small screen ripple that occurs when you add or remove a Widget on the desktop. That’s a Core Image effect. This type of built-in functionality has been expanded in Leopard (Mac OS 10.5) with the addition of Core Animation effects.

 

Factory Tools

 

Noise Industries decided to leverage Apple’s Core Image technology in 2004 as the architecture behind a new AVX effects plug-in package for Avid systems. When Avid made the move to Tiger the following year with the Mac versions of Avid Media Composer and Xpress Pro, Noise Industries was ready with its first product, Factory Tools – a unique set of sophisticated real-time effects built around Core Image. Until then, these types of complex effects (blurs, color effects, convolutions, distortions, etc.) would have required extensive rendering time.

 

More interesting than a new set of plug-ins, though, was that for the first time, Noise Industries was offering Avid editors a way to modify and even create new, custom effects using the tools available from Noise Industries and Apple. This is made possible by a free Apple developer application called Quartz Composer, which is a tool for creating custom screen effects, like the Widget ripple example. Quartz Composer is a simple, node-based application – not unlike a vastly simplified version of Shake or Combustion – that may be used to link together a series of effects nodes to create a new end result. Noise Industries used this existing mechanism and wrote additional code to turn these new Quartz Compositions into AVX-compliant Avid effects. Factory Tools is actually two elements: 1) Factory Floor – the overall software application, which is used to organize, create and control various plug-in packages; and 2) the effects filters themselves. Factory Floor is also the conduit to Quartz Composer. You can own or create all the effects you like, but without Factory Floor, they won’t run as AVX effects.

 

 

FxFactory

 

As Apple advanced and improved its professional applications within the Final Cut Studio bundle, it added the new FxPlug API. This came first to Motion and was later integrated into Final Cut Pro. FxPlug, like Core Image, takes advantage of the OpenGL graphics card performance to accelerate effects – often running up to real-time. When FxPlug made the move to Final Cut Pro, Noise Industries made the move as well, with the release of FxFactory – making them at the time the first plug-in developer to take advantage of this new, native architecture. A single installation of FxFactory installs the same set of filters into both Motion and Final Cut Pro, since both Apple applications integrate this common plug-in architecture. These run as native filters, transitions and generators in FCP and filters and generators in Motion.

 

Furthermore, in the latest builds, FxFactory now also installs these plug-ins into Adobe After Effects, so a single installation places a matching set of filters into three powerful applications. The apps share these plug-ins, so there is consistency between the three and the customer benefits by not having to purchase three sets of plug-ins. Noise Industries is up to the FxFactory 2.0.5 version. It is fully compatible with Tiger and Leopard, but a few of the plug-ins depend on additional Leopard-based technologies, so a small handful of effects will not be enabled on Macs running Tiger.

 

Like the Factory Tools AVX version, FxFactory is both an application to control and organize effects packages and a series of effects. Since the AVX and FxPlug architectures are not compatible, you need both Factory Tools and FxFactory and their associated effects packs if you intend to run both Avid Media Composer and Apple Final Cut Studio on the same computer. Effects packages are different between AVX and Apple FxPlug, because the AVX filters require more sophistication. Programmers have the option of adding ingredients to the effect that are missing inside the application itself. For example, Factory Tools AVX effects include Photoshop-style composite (blend) modes and geometry parameters (scale, position, rotation). FxFactory effects don’t require these, because composite modes and clip geometry already exist within Final Cut’s built-in tools.

 

Virtual Corporation

 

The model behind Factory Tools and FxFactory has encouraged outside designers to create new effects, which are made available for purchase through the Noise Industries website. If you search their site, you’ll even find quite a few free effects, like a plug-in to create 3D animations of planets. These require one of the Noise Industries products to run, but are otherwise self-contained. As a result, Noise Industries fits the definition of a virtual corporation, with key contributors from all over the world, including experienced visual effects artist Roger Bolton and noted Final Cut Pro editor Peter Wiggins. Bolton’s Organoptics FX pack is now included as part of Factory Tools Pro 2 and his CoreMelt effects are used in FxFactory 2.

 

Peter Wiggins launched iDustrial Revolution and first introduced Volumetrix, a light spill effect. This was quickly followed up with a set of four free effects: MultiSpace, iSight Live, Rack Focus and Opposites. MultiSpace acts much like an old two-channel Ampex ADO 3D digital video effects device. Two video planes can be moved and even intersected in 3D space. In the linear editing days, ADO hardware to achieve effects like these cost hundreds of thousands of dollars. Now you can do it on a laptop! A new free filter is CoverFlux – a multi-image effect similar to the iTunes Cover Flow transitions. Wiggins also introduced SupaWipe, which is a Final Cut transition package that lets you use a full screen graphic to wipe the screen from one piece of video to another.

 

New from CoreMelt is Bolton’s PolyChrome package. This includes 40 transition effects including particle dissolves, film blow-out wipes, page curl strips, exposure flashes and much more. The newest member of this collective is SUGARfx, which offers several effect generators targeted at TV promo-style production. These use a moving graphic theme into which you can drop your own images. For example, one motif is a moving filmstrip. Select the appropriate SUGARfx generator from the Final Cut pulldown menu and it appears as a single clip on your timeline. Simply open the effects editor and assign a picture folder from your hard drives as a source. Now the template is filled with your own images.

 

 

Taking the Spin

 

I’ve been using the various Noise Industries effects since the Avid introduction and find them to be some of the best effects on the market. They are clean, high-quality effects that are easy to use and that ran quickly even on my older PowerBook G4. FxFactory effects have become a staple when I run Final Cut on my current MacBook Pro. If you can only get one package for your NLE, this is a pretty good one to have. It’s not just the quality, but new effects keep coming out that extend the value of your investment in the software.

 

Since I had yet to try my hand at creating new ones, I felt it was time to “walk the walk” and build my own custom filters. It couldn’t have been simpler. All you have to do is launch FxFactory or Factory Floor and choose the option to Create FxPack or Create plug-in Library. From the application’s top menu, select Actions / Open Quartz Composer to enter Apple’s nodal development application. In Quartz Composer, you are offered a series of building blocks to create effects. All you have to do is drag-and-drop these blocks onto the canvas, link them together and decide which parameters to publish. Publishing makes it available to the operator so that the filter setting can be adjusted during actual use. The rest is just a matter of giving it a name and saving it inside FxFactory or Factory Tools. The simple effect I created combined Zoom Blur, Gamma Correction and Posterization into a single filter.

 

 

B. Boris Continuum Complete 5

 

Boris FX has a long history of supplying a spectrum of motion graphics products that support the widest range of editing and compositing hosts. They are on top of the latest technology trends, so you are likely to find a Boris FX product that not only works with your NLE, but is also current with its newest features. Boris Continuum Complete is a plug-in filter package designed to integrate with a variety of host architectures, including Adobe After Effects (and compatible products), Avid AVX, Apple FxPlug (Final Cut Pro and Motion) and Autodesk Sparks (Flint, Flame, Smoke, Fire and Inferno). More than a year ago, Boris FX introduced BCC 4.0, which added tracking and optical flow technology to its filters.

 

This year Boris Continuum Complete was updated to BCC 5.0, adding support for FxPlug and Avid AVX 2. This change allowed for a cleaner user interface within these hosts and the ability to process effects in 16-bit “deep color”, for improved image quality during YUV rendering. Two key new technologies in BCC 5.0 are the advanced BCC UpRez Filter to scale SD clips to HD and the BCC MatchMove Filter, which locks one clip to another via motion tracking. Other competing filters to BCC UpRez are often sold as standalone products and not included as part of a package like this.

 

BCC 5.0 ships with over 180 filters and many use mature motion tracking and optical flow technologies in non-obvious ways to improve the overall quality of the effect. The UI enhancements allow on-screen controls, also called “heads up displays” (HUD), in certain effects. For example, the new BCC Pan and Zoom Filter is used to add camera-style moves to still images (the so called “Ken Burns” effect). BCC Pan and Zoom gives you HUD graphics for start and end positions to easily preview the effect. Other BCC filters add HUD graphics for tracking points and geometry information. One handy aspect of most BCC filters is that they have built-in geometry controls (for scaling and positioning) and Boris FX’s Pixel Chooser. The latter lets you define the portion of the image to which the filter will be applied. In essence, each filter includes sophisticated DVE and masking controls.

 

No effects package upgrade would be complete without a set of cool new filters and BCC 5.0 doesn’t disappoint. In addition to those I’ve mentioned, you get such new filters as LED, Prism, Scanline, Damaged TV, Turbulence, Noise Map 2 and Color Choker. These offer advanced effects that would otherwise require serious compositing and are based on the use of OpenGL technology and the new engine developed for Noise Map 2. In fact, BCC 5.0 effects run faster and smoother than ever on modern computers and display cards. There is a filter group specifically designed as OpenGL effects, which run in real-time on most machines; but, nearly all of the standard filters display quickly and preview at a fast enough frame rate to see what result the filter is having.

 

 

C. 12 Inch Design LiveType Expansion Packs

 

Most Final Cut Pro users have had at least a fleeting exposure to LiveType, the motion graphics application acquired by Apple and included with versions of Final Cut Pro, Final Cut Studio and Final Cut Express. Editors generally use it for quick, flashy text animations, even if they haven’t made full use of its compositing abilities. Once Apple introduced Motion, corporate attention to LiveType waned, leaving its potential under-developed. Nevertheless, LiveType is still one of my favorites. I’ve used it to create fully animated DVD menus, as well as text animations for not only Final Cut Pro, but also other NLEs like Avid Media Composer.

 

FCP editors are familiar with the canned animation effects that can be applied to text and the custom LiveFonts, which feature preset 3D, 2D and cell-animated fonts. Originally LiveFonts were raster-based, but now, nearly all LiveType effects and LiveFonts are vector-based, which means you can blow them up to a larger size without degradation. LiveType projects can be dropped directly onto a Final Cut timeline and rendered along with other effects or they may be exported as self-contained QuickTime movies (with or without alpha channels) in any of the codecs available on your computer. The early plan was to nurture a development community that expanded the offerings of LiveType through the LiveType Central website. This is now part of the product line offered by 12 Inch Design, a company dedicated to producing and marketing template-driven, customizable motion graphics elements for editors and designers. 12 Inch Design offers a set of five LiveType Expansion Packs through LiveType Central.

 

Installing the LiveType Expansion Pack DVD-ROMs is simple. The first pack is installed to a user-defined drive location. Subsequent packs will be installed to the same location by pointing the installer to that same drive. The location of these new Expansion Packs is selected within the LiveType preferences, so once set, LiveType will find this new content each time it is launched. LiveType content is divided into LiveFonts, Fonts, Objects, Textures and Effects. Fonts come from the standard set available on your system drive, but the other four groups are comprised of content installed by LiveType or an Expansion Pack.

 

Most of the five 12 Inch Design Expansion Packs include some of each group, though one expansion pack might include more LiveFonts while another has more Textures. If you’ve never used LiveType beyond text animations, it’s helpful to know that it can be used as a full-fledged multi-track compositor. This includes full-screen background files along with other composited elements. To access such files, LiveType uses the Place command instead of more common interface terminology. When you Place an object, a dialogue box opens to let you find the file on your hard drive and then it is inserted onto a track of your compositing timeline. If you are unsure of how to start, there’s a wealth of templates for all types of content, from lower thirds to full-screen animations useful for opens, DVD menus and so on.

 

The 12 Inch Design LiveType Central Expansion Packs include a wide range of standard and unique items, such as swirling backgrounds, globes and even stock footage of people. There are also static and animated mattes which can be used in LiveType or exported as elements useful in other compositors, like After Effects or even Photoshop. Each element can be previewed in LiveType’s small media browser window. Some of the new and existing effects are particle-based pyro effects, like sparkles, magic wand pixie dust, fire and more. I do a lot of Central Florida theme park work and these preset effects became instantly productive once I installed my expansion packs. LiveType and the Expansion Packs can be used for NTSC, PAL and HD formats. Although some content, like the stock footage of people, is not vector information, it is saved at a large enough size that it may still work inside an HD composition. Many other design elements are stored at larger sizes than NTSC, so scaling isn’t a big issue.

 

Written by Oliver Peters for Videography magazine and NewBay Media, LLC