Film editing stages – Sound

Like picture editing, the completion of sound for a film also moves through a series of component stages. These normally start after “picture lock” and are performed by a team of sound editors and mixers. On small, indie films, a single sound designer/editor/mixer might cover all of these roles. On larger films, specific tasks are covered by different individuals. Depending on whether it’s one individual or a team, sound post can take anywhere from four weeks to several months to complete.

Location mixing – During original production, the recording of live sound is handled by the location mixer. This is considered mixing, because originally, multiple mics were mixed “on-the-fly” to a single mono or stereo recording device. In modern films with digital location recordings, the mixer tends to record what is really only a mixed reference track for the editors, while simultaneously recording separate tracks of each isolated microphone to be used in the actual post production mix.

ADR – automatic dialogue replacement or “looping”. ADR is the recording of replacement dialogue in sync with the picture. The actors do this while watching their performance on screen. Sometimes this is done during production and sometimes during post. ADR will be used when location audio has technical flaws. Sometimes ADR is also used to record additional dialogue – for instance, when an actor has his or her back turned. ADR can also be used to record “sanitized” dialogue to remove profanity.

Walla or “group loop” – Additional audio is recorded for groups of people. This is usually for background sounds, like guests in a restaurant. The term “walla” comes from the fact that actors were (and often still are) instructed to say “walla, walla, walla” instead of real dialogue. The point is to create a sound effect of a crowd murmuring, without any recognizable dialogue line being heard. You don’t want anything distinctive to stand out above the murmur, other than the lead actors’ dialogue lines.

Dialogue editing – When the film editor (i.e. the picture editor) hands over the locked cut to the sound editors, it generally will include all properly edited dialogue for the scenes. However, this is not prepared for mixing. The dialogue editor will take this cut and break out all individual mic tracks. They will make sure all director’s cues are removed and they will often add room tone and ambience to smooth out the recording. In addition, specific actor mics will be grouped to common tracks so that it is easier to mix and apply specific processing, as needed, for any given character.

Sound effects editing/sound design – Sound effects for a film come from a variety of sources, including live recordings, sound effects libraries and sound synthesizers. Putting this all together is the role of the sound effects editor(s). Because many have elevated the art by creating very specific senses of place, the term “sound designer” has come into vogue. For example, the villain’s lair might always feature certain sounds that are identifiable with that character – e.g. dripping water, rats squeaking, a distant clock chiming, etc. These become thematic, just like a character’s musical theme. The sound effects editors are the ones who record, find and place such sound effects.

Foley – Foley is the art of live sound effects recording. This is often done by a two-person team consisting of a recordist and a Foley walker, who is the artist physically performing these sounds. It literally IS a performance, because the walker does this in sync to the picture. Examples of Foley include footsteps, clothes rustling, punches in a fight scene and so on. It is usually faster and more appropriate-sounding to record live sound effects than to use library cues from a CD.

In addition to standard sound effects, additional Foley is recorded for international mixes. When an actor delivers a dialogue line over a sound recorded as part of a scene – a door closing or a cup being set on a table – that sound will naturally be removed when English dialogue is replaced by foreign dialogue in international versions of the film. Therefore, additional sound effects are recorded to fill in these gaps. Having a proper international mix (often called “fully filled”) is usually a deliverable requirement by any distributor.

Music – In an ideal film scenario, a composer creates all the music for a film. He or she is working in parallel with the sound and dialogue editors. Music is usually divided between source cues (e.g. the background songs playing from a jukebox at a bar) and musical score.

Recorded songs may also be used as score elements during montages. Sometimes different musicians, other than the composer, will create songs for source cues or for use in the score. Alternatively, the producers may license affordable recordings from unsigned artists. Rarely is recognizable popular music used, unless the production has a huge budget. It is important that the producers, composer and sound editors communicate with each other, to define whether items like songs are to be treated as a musical element or as a background sound effect.

The best situation is when an experienced film composer delivers all completed music that is timed and synced to picture. The composer may deliver the score in submixed, musical stems (rhythm instruments separated from lead instruments, for instance) for greater control in the mix. However, sometimes it isn’t possible for the composer to provide a finished, ready-to-mix score. In that case, a music editor may get involved, in order to edit and position music to picture as if it were the score.

Laugh tracks – This is usually a part of sitcom TV production and not feature films. When laugh tracks are added, the laughs are usually placed by sound effects editors who specialize in adding laughs. The appropriate laugh tracks are kept separate so they can be added or removed in the final mix and/or as part of any deliverables.

Re-recording mix – Since location recording is called location mixing, the final, post production mix is called a re-recording mix. This is the point at which divergent sound elements – dialogue, ADR, sound effects, Foley and music – all meet and are mixed in sync to the final picture. On a large film, these various elements can easily take up 150 or more tracks and require two or three mixers to man the console. With the introduction of automated systems and the ability to completely mix “in the box”, using a DAW like Pro Tools, smaller films may be mixed by one or two mixers. Typically the lead mixer handles the dialogue tracks and the second and third mixers control sound effects and music. Mixing most feature films takes one to two weeks, plus the time to output various deliverable versions (stereo, surround, international, etc.).

The deliverable requirements for most TV shows and features are to create a so-called composite mix (in several variations), along with separate stems for dialogue, sound effects and music. A stem is a submix of just a group of component items, such as a stereo stem for only dialogue. The combination of the stems should equal the mix. By having stems available, the distributors can easily create foreign versions and trailers.
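
The stems-plus-mix relationship is easy to illustrate. In this minimal Python sketch (toy sample lists standing in for real multichannel audio files), the composite mix is just the sample-by-sample sum of the stems, and dropping the dialogue stem yields the music-and-effects bed that a distributor would use for foreign dubbing:

```python
# Hypothetical stems: short lists of sample values for illustration only.
dialogue = [0.20, 0.10, -0.05, 0.00]
effects  = [0.05, -0.02, 0.10, 0.01]
music    = [0.10, 0.15, 0.05, -0.03]

# The composite mix is the sample-by-sample sum of the stems.
mix = [d + e + m for d, e, m in zip(dialogue, effects, music)]

# A distributor can rebuild a music-and-effects (M&E) version for
# foreign dubbing simply by omitting the dialogue stem.
m_and_e = [e + m for e, m in zip(effects, music)]
```

In practice the same arithmetic holds for full-length, multichannel stems delivered at the mix’s sample rate, which is why stems must be printed in sync and at unity against the composite.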

©2013 Oliver Peters

iZotope RX 2 Advanced

iZotope is known as a company that makes software and hardware, including high-quality plug-ins for mastering, noise reduction and audio restoration. Some of their tools come bundled with a number of applications, most notably Sony Sound Forge Pro, Adobe Audition CC and Premiere Pro CC. As with most plug-in developers, iZotope offers a nice family of effects that can be installed and run on a variety of audio and video host applications. In addition, iZotope also offers its own host application called RX 2. It runs as a standalone single track (mono or stereo) audio application that leverages the power of the iZotope DSP and forms a dedicated repair and mastering suite. RX 2 is ideal for any music, audio production or video post production challenge. It can read most standard audio files, but cannot directly work on an audio track embedded within a video file, like a QuickTime movie.

iZotope RX 2 comes in a standard and advanced version. Both include such modules as Denoiser, Spectral Repair, Declip, Declick, Decrackle, Hum Removal, EQ and Channel Operations. RX 2 Advanced also adds adaptive noise reduction, third-party plug-in support, a Deconstruct module, dithering, 64-bit sample rate conversion, iZotope’s Radius time and pitch control, as well as azimuth alignment for the restoration of poor recordings from audiotape. Of course, RX 2 is also useful as a standard file-based audio editor, with delete, insert and replace functions.

Both versions are engineered around sophisticated spectral analysis. The RX 2 display superimposes the spectral graph with the audio waveform and gives you a balance slider control to adjust their relative visibilities. If you’ve used any Adobe audio software that included spectral-based repair tools, like Soundbooth or Audition, then you already know how this works in RX 2. Frequencies can be isolated using the spectral display or unwanted noises can be “lassoed” and then corrected or removed. RX 2 also includes an unlimited number of undo steps and retains a current state history. When you return to the program it picks up where you left off. It also holds four temporary history locations or “snapshots”, which are ideal for comparing the audio with or without certain processing applied.
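
Conceptually, a spectral display works by slicing the audio into short frames and plotting the magnitude of each frame’s Fourier transform against time. This simplified Python sketch (a single frame, with none of the windowing or overlapping that real tools use) shows how a pure tone concentrates its energy in one frequency bin – the kind of bright spot you would lasso in RX 2:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of the discrete Fourier transform of one audio frame
    (only the first half of the bins, up to the Nyquist frequency)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A toy frame: a pure tone that completes two cycles in 8 samples.
frame = [math.sin(2 * math.pi * 2 * t / 8) for t in range(8)]
mags = dft_magnitudes(frame)
# All of the tone's energy lands in bin 2; the other bins stay near zero.
```

Stacking these per-frame magnitude columns side by side is what produces the time-versus-frequency picture that makes an isolated noise visually obvious.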

The iZotope RX 2 interface is designed for efficient operation with all available modules running down the right side of the window, as well as being accessible from the top menu. Click a module button and the specific iZotope plug-in window opens for that task. There you can make adjustments to the parameters or save and recall presets. Unlike a DAW application, the modules/plug-ins must be previewed from the plug-in window and then applied to process your audio file. You cannot add multiple modules and have them all run in real-time without processing the audio to a buffer first. That’s where the four temporary history buttons come in handy, as you can quickly toggle between several versions of applied effects on the same audio file for comparison. RX 2 includes a batch processor that can run in the background. If you have a group of modules to be applied to a series of audio files, simply set up a preset of those settings and apply them to the batch of files.

When you install the RX 2 package, the iZotope modules are also available as plug-ins within other compatible applications. For example, on my Mac Pro, these plug-ins show up and work within Final Cut Pro X. Now with RX 2 Advanced, it works the other way, too. Any AU, VST, RTAS or DirectX plug-in installed on your computer can be accessed from the RX 2 Advanced interface. In my case, that includes some Waves, Focusrite and Final Cut Audio Units effects filters. If I want to use the Waves Vocal Rider plug-in to smooth out the dynamics of a voice-over recording, I simply access it as a plug-in, select a preset or make manual adjustments, preview and process – just like with the native iZotope plug-ins.

RX 2 Advanced also adds an adaptive noise mode to the Denoiser module. This is ideal for noisy on-location production, where the conditions change during the course of the recording – for instance, an air conditioner cycling on and off within a single recorded track. Another unique feature in RX 2 Advanced is a new Deconstruct module. This tool lets you break down a recording into parts for further analysis and/or correction. For example, you can separate noise from desired tonal elements and adjust the balance between them.
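
iZotope’s actual Denoiser works spectrally and adapts its noise profile over time, but the underlying idea – measure the noise from a region that contains nothing else, then attenuate content at or below that level – can be sketched with a crude time-domain gate (hypothetical sample values, pure Python):

```python
def noise_gate(samples, noise_region, reduction=0.1):
    """Crude downward expander: estimate a noise floor from a region
    known to contain only noise, then attenuate everything below it."""
    floor = max(abs(s) for s in noise_region)
    return [s if abs(s) > floor else s * reduction for s in samples]

# Hypothetical samples: speech peaks ride above a low background hiss.
cleaned = noise_gate([0.5, 0.02, -0.4, 0.01], noise_region=[0.03, -0.02])
# Loud samples pass through untouched; anything at or below the measured
# hiss level is reduced by the gate.
```

A real denoiser applies this per frequency band with smooth gain curves, which is why it can pull hiss out from under dialogue rather than just muting the pauses.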

iZotope’s RX 2 and RX 2 Advanced are one-stop applications for cleaning up bad audio. Some of these tools overlap with what you may already own, but if you need to do a lot of this type of work, then RX 2 will be more efficient and adds more capabilities. In September 2013, iZotope will release the RX 3 and RX 3 Advanced updates. iZotope’s algorithms are some of the best on the market, so sonic quality is never compromised. Whether you’re fixing poorly recorded audio or restoring archival material, RX 2 or RX 3 offer a toolkit that’s perfect for the task.

Originally written for Digital Video magazine

©2013 Oliver Peters

Sound Forge Pro for the Mac

Sony Creative Software has been the home for an innovative set of audio and video editing and mixing tools originally developed by Sonic Foundry. These include Vegas Pro, ACID and Sound Forge, which have traditionally been tightly integrated with the Windows operating system. On the other side of the fence, Mac OS has enjoyed a wide range of creative tools, especially for audio production and post. Until recently, BIAS Peak had been the go-to two-track audio editor and mastering tool for Mac-based audio engineers; but the company has apparently withdrawn from the market, leaving an opening for some new blood to step in. Enter Sony’s Sound Forge Pro for the Mac.

Sound Forge has been the tool of choice for Windows-based audio production and now Sony has made a strong entry into the Mac creative universe. Sound Forge Pro Mac 1.0 is a comprehensive tool for audio analysis, recording, editing, processing and mastering. Although it is thought of as a two-track editor, it can deal with multi-channel files with as many as 32 embedded channels, sample rates up to 192kHz and bit depths up to 64-bit float. Since most users are going to be limited by their I/O hardware, they will likely work with 24-bit, 48kHz stereo files. To be clear, it’s designed to edit and master single files and is not a multi-track digital audio workstation application for mixing.

Sound Forge can be used as a recording application if you have an input device on your system, such as the Avid/Digidesign Mbox2 Mini that I use. Sound Forge sports a clean user interface that will appeal to the professional. It might look a tad Spartan to some, since it bucks the current trend of dark, dimensional interfaces. In other words, it’s devoid of unnecessary “chrome”. The operation is very easy to learn, thanks to a tabbed window layout, easy-to-understand controls and menus and a good user guide.

Sound Forge Pro Mac comes with a set of Sony plug-ins, as well as the iZotope mastering suite filters. In addition, Sound Forge will support many third-party VST and Mac Audio Units plug-ins. I have a set of Focusrite Scarlett filters, the Waves OneKnob series and Waves Vocal Rider plug-ins installed on my Mac Pro, which all show up and work properly within Sound Forge. The iZotope set is superb, so for pristine audio quality, Sound Forge is as good as it gets. I applied a Declicker noise reduction filter to an old recording from a vinyl LP. This filter did one of the best jobs I’ve heard to remove and/or reduce the record pops and clicks without adding negative artifacts to the file.

Audio filters can be applied as a processing step – meaning the filter is set and previewed and then applied to alter the file. Sound Forge also includes a real-time plug-in chain. Stack up a series of filters in the chain window and tweak the adjustments. The order can be changed and saved as a preset for later use. Simply listen to the file in real-time with the filter chain applied. If you like the result, apply these settings in a “save as” function and the file will be rendered in a faster-than-real-time “bounce”. Some filters, like Timestretch, can only be applied as an effects process and won’t function as part of a real-time plug-in chain.

As an editing tool, Sound Forge lets the editor get down to the sample level. You can redraw waveforms with a pen tool in addition to the usual keyframed changes to parameters like the volume envelope. Unlike other audio editors, where volume and pan are part of the basic track window, Sound Forge gives you several ways to adjust volume. One way is to add a specific volume filter where you apply any audio keyframe adjustments. Another way is to create an event (a section of timeline) and drag the volume level up or down.

The audio editing tools are quite simplified. Select a range you want to remove, hit the delete key and you’ve made the edit. There’s even an edit preview function so you can hear what the edit will sound like before committing. To add space, insert silence. This methodology is a bit foreign to video editors used to the way NLEs handle audio tracks. Once you make an edit in Sound Forge this way, there’s no segment in the track or cut marks on the clip indicating where the edit had been made. If you split the track into events, however, then the track segments appear more familiar and you have the ability to trim, edit and slip clip segments and add crossfades at overlaps.

You can also mark up the file into regions, which may be separately exported. In the example I cited earlier of the old vinyl LP, I recorded each complete side as a single audio file. After audio clean-up in Sound Forge, the file would be broken into regions for each song on that LP side. These would finally be exported as separate regions to result in a new digital file for each individual song.
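
The region workflow is straightforward to picture: mark start/end points across the long recording, then write each span out as its own file. A toy Python sketch (hypothetical sample indices standing in for real audio data):

```python
def split_regions(samples, regions):
    """Split one long recording into per-song clips.
    regions: list of (start, end) sample index pairs."""
    return [samples[start:end] for start, end in regions]

# Stand-in for one recorded LP side; a real file would hold millions
# of samples, and region boundaries would come from markers set by ear.
lp_side = list(range(100))
songs = split_regions(lp_side, [(0, 40), (40, 75), (75, 100)])
```

Each returned span would then be written to disk as its own audio file, one per song.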

There are some missing elements in this 1.0 version. For example, Sound Forge doesn’t recognize most video files. I was able to open the audio track from an MP4 file, but not a QuickTime movie. There is no JKL transport control and no scrubbing. You can loop playback, but you cannot shuttle through the track with the mouse and hear either an analog or digital-style scrubbing sound. It’s real-time playback or nothing. The application is a good file conversion utility. If you need to generate high-quality MP3 files for clients, Sound Forge is definitely useful. Unfortunately there’s no batch conversion function. Another curious omission for an audio-centric tool is the lack of CD track layout and burning tools. I realize that we work in a file-based world, but when Adobe dropped the same tools from Audition, they ended up having to add them back in Creative Suite 6. Obviously users still feel that there’s a need for this.

Audio engineers and mixers can see the obvious benefit to another great audio tool for the Mac – especially with the demise of BIAS Peak and the end-of-life of Apple’s Soundtrack Pro. For video editors, it might be a bit more questionable. I find Sound Forge Pro to be a solid tool when you need to focus on audio-only tasks, like dialogue clean-up, noise reduction and voice-over recordings. Clients often request radio versions of the TV commercials I edit. Here again, working in a tool that’s optimized for the task is the right way to go. The lack of video support is a wrinkle, but it’s easy enough to export a WAV or AIF file from most NLEs. Then open that file in Sound Forge and work your magic.

Sony’s Sound Forge Pro Mac 1.0 is a solid first step to bring this application to Mac users. I haven’t had any hiccups with it, in spite of the fact that it’s a 1.0 product. If Sony expands on some of the missing items, this will become the go-to professional audio tool for Mac users, just as it has been for Windows.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters

PluralEyes 3

The concept of synchronizing clips by sound seems so obvious in retrospect, but when Bruce Sharpe showed his first version of PluralEyes at a small NAB booth, it struck many as nothing short of magic. The first version was designed to sync multiple consumer and prosumer video cameras by aligning their sound tracks in the absence of recorded timecode. With the unanticipated popularity of the HDSLR cameras, like the Canon EOS 5D Mark II in late 2009, PluralEyes gained a big boost. It became the easiest way to sync 5D clips with double-system audio recorded using low-cost devices, such as the Zoom H4n handheld digital audio recorder. PluralEyes expanded from a plug-in for Final Cut Pro to add the standalone DualEyes, used to sync double-system sound projects. In a very short time period, PluralEyes went from an unknown to a brand name synonymous with a product or process, much like Coke or Kleenex.

Now that Sharpe’s Singular Software products are part of the Red Giant Software family, PluralEyes is available as the new and improved, standalone PluralEyes 3 (currently in version 3.1). It encompasses all of the features of both the original PluralEyes and of DualEyes. This means that PluralEyes 3 supports two basic processes: a) synchronizing camera files with external audio, and b) synchronizing multiple cameras to each other or to a common sound track. This is all done by comparing the audio tracks against each other without the use of timecode, clapsticks or other common reference points.

PluralEyes 3 analyzes and matches audio waveform shapes to accomplish this, so without belaboring the obvious, all camera files have to include an audio track recorded in the same general environment. Since PluralEyes uses very good audio analysis tools and audio normalization to aid the process, the camera audio does not have to be pristine. The most common scenario is a high-quality audio recording as a separate digital audio file and camera audio that was recorded solely with the onboard mic. Naturally the cleaner this onboard recording is, the more likely that synchronization will be successful.
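
PluralEyes’ analysis is proprietary, but the core idea – slide one waveform against the other and keep the offset where they line up best – can be sketched as a naive cross-correlation. This pure-Python example uses tiny made-up signals; a real implementation works on long, normalized sample buffers with far smarter search strategies:

```python
def best_offset(reference, camera, max_lag):
    """Return the lag (in samples) that maximizes the correlation
    between the camera track and the reference recording."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(camera):
                score += r * camera[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A toy "clap" pattern captured at different start times on two devices.
reference = [0, 0, 0, 1.0, 0.8, -0.6, 0, 0, 0, 0]
camera    = [0, 0, 0, 0, 0, 1.0, 0.8, -0.6, 0, 0]

offset = best_offset(reference, camera, max_lag=5)  # camera lags by 2 samples
```

Here the camera’s audio trails the reference recorder by two samples, so the software would slip the camera clip by that offset to bring the two into sync.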

The new features of PluralEyes 3 include a brand new user interface, faster synchronization, NLE round-tripping support (Apple Final Cut Pro, Final Cut Pro X, Avid Media Composer and Adobe Premiere Pro) and direct exporting of new, synchronized media files. To synchronize double-system projects, simply drag your camera files into the interface’s camera section and the audio tracks into the audio section. PluralEyes 3 lets you create multiple bins as tabs across the top of the interface for use in organizing your files. For instance, you might want a separate bin for each camera or shoot date or location.

As you add the camera and audio clips to these sections, they will be lined up in ascending order within the lower timeline window. Once the timeline is filled, click “synchronize” and watch PluralEyes 3 do its magic. If the audio recording level is low, you can opt to level the audio (normalization) during this process. That will make it easier for successful matching, but it’s an extra step, so the total synchronizing process will take a little longer. Part of PluralEyes 3’s new interface is a 2-up view, which makes it possible to see how the audio tracks align. This view will aid you in adjusting sync if needed.

When synchronization is complete, PluralEyes 3 offers several export options. If you are sending these files to Premiere Pro, Final Cut Pro or Final Cut Pro X, simply export the appropriate XML version. You can choose to replace the camera audio tracks with the audio file’s track as part of this step. Then import that XML into the NLE you selected. When I ran this test with FCP X, the export options let me send two new Events (synchronized clips plus synchronized clips with replaced audio) and a new sequence (Project) representing the PluralEyes timeline. This timeline had both sets of audio channels turned on, so you’ll have to mute the camera tracks first if you intend to use this timeline.

A new feature is the ability to export new media files. For instance, if you want new clips where the high-quality audio has replaced the camera’s reference track, PluralEyes 3 will export these and write new media files. The advantage is that this approach is independent of your NLE choice, making the self-contained, synchronized files easy to migrate between systems.

PluralEyes 3 can also sync multiple cameras for a multi-camera edit session. First, start in the NLE by building a timeline with the clips for each camera placed on a separate video track. Video 1 = camera 1, video 2 = camera 2 and so on. Multiple broken clips from the same camera angle should be placed back-to-back on the same track. In the case of FCP X, group multiple clips from the same camera into a single secondary storyline, before proceeding to the next camera. Once you are done, export an XML file for that sequence. For Avid Media Composer projects, export an AAF file with the media linked and not embedded.

The XML or AAF file is then imported into PluralEyes 3. You’ll end up with a timeline that is populated with the different camera angles corresponding to your NLE sequence. Next, click “synchronize” and watch as PluralEyes realigns the camera clips by referencing the sound tracks against each other. The 2-up view is handy to compare two cameras (as well as their audio tracks) against each other, in case you have any question regarding their synchronization. Once this process is done, export a new XML or AAF from PluralEyes. Import that file into the NLE and you will have a timeline with camera clips rearranged in sync. This would represent what editors typically call a “sync map”. In the case of FCP X, the PluralEyes 3 export settings offer the option of exporting new events, as well as multicam clips. These can be used in FCP X’s standard multicam editing workflow. Open the FCP X angle viewer for access to editing between camera angles.

Red Giant’s PluralEyes 3 is a major advance over the original concept. It’s no longer tied to a single NLE, but is useful both in standalone and NLE-specific workflows. As editors deal with an ever-increasing, diverse spectrum of media sources, a tool like PluralEyes is an essential part of the kit. It was a no-brainer on day one, but even more so in this new and improved version.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters

Adobe Creative Suite 5.5

Adobe’s development efforts have been running full bore with several Creative Suite updates in close succession during recent years. 2011 is no exception, with the launch of Adobe Creative Suite 5.5, announced at NAB. This is a point-five release that concentrates mainly on the video products, which are offered in the Production Premium software collection.

In addition to various improvements and enhancements throughout the applications, Adobe CS5.5 signals some big changes. The first is that the Adobe Creative Suite is now available on both a purchase and a subscription basis. For the first time, Adobe customers may access the power of the Creative Suite tools on a low-cost monthly basis. This is designed to cover suite owners who might be ineligible for upgrade pricing, single product owners and new customers who may want to test the waters for a few months. The individual collections (Master, Design, Web and Production), as well as individual products, may be used with a one-year or month-by-month subscription.

The second big change is that Audition comes to the Mac platform. Audition is Adobe’s full-featured digital audio workstation software. As a Windows application, it was originally part of the collection prior to Adobe Premiere Pro’s return to the Mac. Audition will replace Soundbooth and once again be the audio tool within the Production Premium bundle. Unfortunately, if you really liked Soundbooth, it is now an end-of-life product.

Premiere Pro

I’m going to focus this first look on three of the core products in the Production Premium suite – Premiere Pro, After Effects and Audition. In CS5, Premiere Pro made a big splash with tons of native camera format support, 64-bit operation and the Mercury Playback Engine. For many, Mercury Playback seemed to come down to the CUDA processing technology of specific NVIDIA graphics cards. In reality, MPE is a lot more and not just a function of CUDA. The NVIDIA cards do indeed accelerate certain effects and a wider range of NVIDIA cards is now supported; however, Adobe has integrated additional CPU optimization throughout the product. My Mac Pro has the ATI 5870 card installed and I have no problems with RED, ProRes, AVC-Intra or other native camera formats.

Adobe’s optimization for Premiere Pro CS5.5 is most evident with plug-ins. Previously, filters with custom user interfaces, like Red Giant’s Magic Bullet Colorista II, were extremely sluggish in Premiere Pro. This is now a thing of the past, with Colorista II nearly as responsive as it is in After Effects. That’s super news for me, as I much prefer it to Premiere Pro’s built-in color correction tools. Another example is GenArts Sapphire – a plug-in package installed for After Effects, but which also works in Premiere Pro. When I drop one of these filters onto a 1920×1080 ProRes clip in the timeline, playback is still smooth at full resolution without dropping frames.

Premiere Pro has attracted the interest of the RED camera user community as one of the better desktop editing solutions for their needs. It’s one of the few that can actually work natively with the camera raw .r3d files at a full 4K timeline resolution. Premiere Pro CS5.5 uses the latest RED SDK, which gives Adobe editors access to RED’s “new color science” – meaning better color processing from old and new camera files. The beta version I tested did not yet handle files from EPIC – RED’s newest camera, but Adobe plans to support EPIC at launch, including 5K RED timelines. There will be a link to a software extension via the Adobe Labs website.

Another improvement is better XML and AAF translation when importing projects started in Apple Final Cut Pro and Avid Media Composer, respectively. That often hasn’t worked well for complex sequences when using Premiere Pro CS5; however, it appears to have been fixed in CS5.5. I ran into very few issues importing my XML sequence files from FCP7 into Premiere Pro. To be safe, I first removed filters and embedded Motion projects in Final Cut Pro, but the import worked perfectly with several different test projects. These sequences included video clips, audio files, text and graphics. Premiere Pro properly linked to all of these, plus translated dissolves, opacity changes, speed changes, placeholder text and audio keyframes. Likewise, I was able to move Media Composer sequences, with QuickTime media only linked via Avid Media Access, straight into Premiere Pro and from there into After Effects. This improved interoperability offers the possibility of some exciting alternative workflows for the future.

Audition

Soundbooth was a simplified, task-specific “lite” audio production and post tool. Audition is more sophisticated and can easily hold its own against competitors, like Avid Pro Tools and Apple Logic or Soundtrack Pro. Unlike some of these other applications, it’s designed specifically around audio production, editing and mixing and isn’t bloated with tons of audio loops and MIDI features. This makes it very streamlined for easy use by a video editor. It can be launched and run by itself or as part of a roundtrip with Premiere Pro CS5.5. There is also OMF support for audio post with projects started on Media Composer or Final Cut Pro systems.

Moving from Premiere Pro to Audition is simply a matter of right-clicking the sequence in the Premiere Pro project window and “sending to” an Audition multi-track session. All tracks open in Audition and retain the volume keyframing created in Premiere Pro. In Audition, feel free to edit, add filters and mix. To return, simply use the “Export to Adobe Premiere Pro” option. Completed mono, stereo or 5.1 surround mixes can also be exported as a mixdown, which may be re-imported into Premiere Pro (or another application) for final mastering and encoding.

Audition is designed as two applications in one – a multitrack editor/mixer and a clip-based audio editor. These are identified in the interface by the Waveform and Multitrack tabs. Opening a clip directly in Waveform (or clicking a timeline clip to send it to Waveform) opens a clip-based tool to clean-up, process or otherwise alter individual clips. This uses one of the best spectral view displays of any audio tool and permits Photoshop-style “healing” functions to eliminate unwanted background sounds. The Multitrack tab opens a multitrack timeline and mixer window where you would slip or trim clips, add cross-fades, adjust levels and add track-based filters.

Audition comes with a large selection of Adobe and iZotope filters and can access any VST and/or AU filters installed on your system. This includes those used by Final Cut Pro and additional plug-ins, like Focusrite Scarlett and BIAS. Audition is really a joy to use. It’s very responsive, thanks to a new code base that’s been optimized for multi-core, multi-processor systems. You can easily make on-the-fly changes to filters and other parameters without any hiccups as the timeline continues to play.

Adobe still has some unfinished business to round out Audition. There is no control surface hardware support, and I/O for the Mac is limited to units that are supported at the Mac system level. For example, my Avid Mbox2 Mini audio interface works fine for stereo monitoring. There is also no automation mixing. If you want to ride levels throughout a piece, you’ll either have to do that by rubberbanding keyframes for each track or use the mixer automation function within Premiere Pro.

Users can access Adobe’s Resource Central web service from within the Audition interface to download a wealth of sound effects and some music tracks. As yet there are no Soundbooth-style controls within Audition to modify the structure, length or arrangement of these music tracks. Presumably those will resurface in a later version of Audition.

After Effects

Adobe has been promoting its products heavily to Canon 5D/7D videographers, thanks to good native support for H.264 files. As we all know, these cameras suffer from rolling shutter artifacts, which often distort the image within the frame. An impressive new After Effects feature is the Warp Stabilizer. This filter combines standard stabilization of a moving shot with intraframe correction of the horizontal and vertical distortion caused by rolling shutter. Send a clip to After Effects from Premiere Pro using Dynamic Link and apply the Warp Stabilizer. The clip is analyzed in the background and can be tweaked as needed. The result will be much smoother than the original.

Adobe has also enhanced the z-space tools in After Effects by changing the lighting falloff behaviors in 3D space. Another addition is the new Camera Lens Blur effect, designed to more accurately simulate realistic shallow depth-of-field looks and rack-focus effects. Finally, After Effects CS5.5 now lets you set up a project using existing stereo 3D elements, or you can create a stereoscopic output from a 3D project that consists of layers of 2D elements. Start by creating a Stereo 3D Rig and position elements in z-space. After Effects provides the controls for stereo convergence, left/right-eye adjustments and stereo output.

Conclusion

I’ve covered the 30,000-foot view, but there’s more, including enhancements to Adobe Story, Flash Catalyst, Flash Pro and Adobe Media Encoder. Adobe Production Premium includes a complimentary one-year subscription to CS Live, Adobe’s “cloud” web site, which hosts Adobe Story. Scripts created in Story can now be directly imported into Premiere Pro from CS Live, without going through OnLocation first. These scripts can also be used to reconcile speech-to-text transcriptions created in Premiere Pro. I feel speech transcription is still a weak part of the application and fails to deliver on users’ expectations. Using a script to correct the transcription errors helps make it a functional feature.

The improvements in the three core applications I’ve discussed make this a worthwhile update for most users. Premiere Pro CS5.5 definitely feels more responsive than the CS5 version and Audition seals the deal if you want advanced audio tools at your disposal. I hope that mixer automation will make it into a software update for Audition without having to wait for CS6. If you’re new to Premiere Pro, this is the version to try. Interchange is good with Final Cut Pro, you can work with Media Composer or FCP7 keyboard layouts and it includes Ultra – one of the best green/blue-screen keyers in any NLE.

It’s quite refreshing to see steady performance improvements from one version to the next. An editor working with Premiere Pro has seen tangible performance boosts with each new Creative Suite release. Once again, Adobe has ratcheted development up a notch and it shows.

Written for Videography magazine (NewBay Media LLC).

©2011 Oliver Peters