Sound Forge Pro for the Mac


Sony Creative Software has been the home for an innovative set of audio and video editing and mixing tools originally developed by Sonic Foundry. These include Vegas Pro, ACID and Sound Forge, which have traditionally been tightly integrated with the Windows operating system. On the other side of the fence, Mac OS has enjoyed a wide range of creative tools, especially for audio production and post. Until recently, BIAS Peak had been the go-to two-track audio editor and mastering tool for Mac-based audio engineers; but the company has apparently withdrawn from the market, leaving an opening for some new blood to step in. Enter Sony’s Sound Forge Pro for the Mac.

Sound Forge has been the tool of choice for Windows-based audio production and now Sony has made a strong entry into the Mac creative universe. Sound Forge Pro Mac 1.0 is a comprehensive tool for audio analysis, recording, editing, processing and mastering. Although it is thought of as a two-track editor, it can deal with multi-channel files with as many as 32 embedded channels, sample rates up to 192kHz and bit depths up to 64-bit float. Since most users are going to be limited by their I/O hardware, they will likely work with 24-bit, 48kHz stereo files. To be clear, it’s designed to edit and master single files and is not a multi-track digital audio workstation application for mixing.

Sound Forge can be used as a recording application if you have an input device on your system, such as the Avid/Digidesign Mbox2 Mini that I use. Sound Forge sports a clean user interface that will appeal to the professional. It might look a tad Spartan to some, since it bucks the current trend of dark, dimensional interfaces. In other words, it’s devoid of unnecessary “chrome”. The operation is very easy to learn, thanks to a tabbed window layout, easy-to-understand controls and menus, and a good user guide.

Sound Forge Pro Mac comes with a set of Sony plug-ins, as well as the iZotope mastering suite filters. In addition, Sound Forge will support many third-party VST and Mac Audio Units plug-ins. I have a set of Focusrite Scarlett filters, the Waves OneKnob series and Waves Vocal Rider plug-ins installed on my Mac Pro, all of which show up and work properly within Sound Forge. The iZotope set is superb, so for pristine audio quality, Sound Forge is as good as it gets. I applied a Declicker noise reduction filter to an old recording from a vinyl LP. This filter did one of the best jobs I’ve heard at removing and/or reducing record pops and clicks without adding negative artifacts to the file.

Audio filters can be applied as a processing step – meaning the filter is set and previewed and then applied to alter the file. Sound Forge also includes a real-time plug-in chain. Stack up a series of filters in the chain window and tweak the adjustments. The order can be changed and saved as a preset for later use. Simply listen to the file in real-time with the filter chain applied. If you like the result, apply these settings in a “save as” function and the file will be rendered in a faster-than-real-time “bounce”. Some filters, like Timestretch, can only be applied as an effects process and won’t function as part of a real-time plug-in chain.

As an editing tool, Sound Forge lets the editor get down to the sample level. You can redraw waveforms with a pen tool in addition to the usual keyframed changes to parameters like the volume envelope. Unlike other audio editors, where volume and pan are part of the basic track window, Sound Forge gives you several ways to adjust volume. One way is to add a specific volume filter where you apply any audio keyframe adjustments. Another way is to create an event (a section of timeline) and drag the volume level up or down.
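The keyframed volume adjustment described above amounts to interpolating a gain value between keyframes and multiplying each sample by it. A minimal sketch (my own illustration, not application code):

```python
# Illustrative sketch: applying a keyframed volume envelope.
# Keyframes are (sample_index, gain) pairs; gain is linearly
# interpolated between them, like an NLE's rubber-band volume line.

def envelope_gain(keyframes, i):
    """Linearly interpolate the gain at sample index i."""
    if i <= keyframes[0][0]:
        return keyframes[0][1]
    for (x0, g0), (x1, g1) in zip(keyframes, keyframes[1:]):
        if x0 <= i <= x1:
            t = (i - x0) / (x1 - x0)
            return g0 + t * (g1 - g0)
    return keyframes[-1][1]

def apply_envelope(samples, keyframes):
    return [s * envelope_gain(keyframes, i) for i, s in enumerate(samples)]

# Fade a constant signal from full volume to silence over 4 samples.
faded = apply_envelope([1.0] * 5, [(0, 1.0), (4, 0.0)])
# faded == [1.0, 0.75, 0.5, 0.25, 0.0]
```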

The audio editing tools are quite simplified. Select a range you want to remove, hit the delete key and you’ve made the edit. There’s even an edit preview function so you can hear what the edit will sound like before committing. To add space, insert silence. This methodology is a bit foreign to video editors used to the way NLEs handle audio tracks. Once you make an edit in Sound Forge this way, there’s no segment in the track or cut marks on the clip indicating where the edit was made. If you split the track into events, however, then track segments appear more familiar and you have the ability to trim, edit, slip clip segments and add crossfades at overlaps.

You can also mark up the file into regions, which may be separately exported. In the example I cited earlier of the old vinyl LP, I recorded each complete side as a single audio file. After audio clean-up in Sound Forge, the file would be broken into regions for each song on that LP side. These would finally be exported as separate regions to result in a new digital file for each individual song.
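The same split-and-export idea is easy to picture in code. Here is a stdlib-only Python sketch, where the region boundaries and track names are invented for the example – Sound Forge does this interactively, not via a script:

```python
# Illustrative sketch: split one long recording (an "LP side") into
# separate per-song WAV files based on region boundaries in seconds.
import wave

RATE = 8000  # low sample rate keeps the demo file small

# Make a 2-second mono 16-bit test file standing in for one LP side.
with wave.open("side_a.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"\x00\x00" * RATE * 2)

def export_regions(src_path, regions):
    """Write each (name, start_s, end_s) region to its own WAV file."""
    paths = []
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        for name, start_s, end_s in regions:
            src.setpos(int(start_s * params.framerate))
            frames = src.readframes(int((end_s - start_s) * params.framerate))
            path = f"{name}.wav"
            with wave.open(path, "wb") as dst:
                dst.setparams(params)  # nframes is corrected on close
                dst.writeframes(frames)
            paths.append(path)
    return paths

songs = export_regions("side_a.wav", [("track01", 0.0, 1.5),
                                      ("track02", 1.5, 2.0)])
```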

There are some missing elements in this 1.0 version. For example, Sound Forge doesn’t recognize most video files. I was able to open the audio track from an MP4 file, but not a QuickTime movie. There is no JKL transport control and no scrubbing. You can loop playback, but you cannot shuttle through the track with the mouse and hear either an analog or digital-style scrubbing sound. It’s real-time playback or nothing. The application is a good file conversion utility. If you need to generate high-quality MP3 files for clients, Sound Forge is definitely useful. Unfortunately there’s no batch conversion function. Another curious omission for an audio-centric tool is the lack of CD track layout and burning tools. I realize that we work in a file-based world, but when Adobe dropped the same tools from Audition, they ended up having to add them back in Creative Suite 6. Obviously users still feel that there’s a need for this.

Audio engineers and mixers can see the obvious benefit to another great audio tool for the Mac – especially with the demise of BIAS Peak and the end-of-life of Apple’s Soundtrack Pro. For video editors, it might be a bit more questionable. I find Sound Forge Pro to be a solid tool when you need to focus on audio-only tasks, like dialogue clean-up, noise reduction and voice-over recordings. Clients often request radio versions of the TV commercials I edit. Here again, working in a tool that’s optimized for the task is the right way to go. The lack of video support is a wrinkle, but it’s easy enough to export a WAV or AIF file from most NLEs. Then open that file in Sound Forge and work your magic.

Sony’s Sound Forge Pro Mac 1.0 is a solid first step to bring this application to Mac users. I haven’t had any hiccups with it, in spite of the fact that it’s a 1.0 product. If Sony expands on some of the missing items, this will become the go-to professional audio tool for Mac users, just as it has been for Windows.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters

PluralEyes 3

The concept of synchronizing clips by sound seems so obvious in retrospect, but when Bruce Sharpe showed his first version of PluralEyes at a small NAB booth, it struck many as nothing short of magic. The first version was designed to sync multiple consumer and prosumer video cameras by aligning their sound tracks in the absence of recorded timecode. With the unanticipated popularity of the HDSLR cameras, like the Canon EOS 5D Mark II in late 2009, PluralEyes gained a big boost. It became the easiest way to sync 5D clips with double-system audio recorded using low-cost devices, such as the Zoom H4n handheld digital audio recorder. PluralEyes expanded from a plug-in for Final Cut Pro to add the standalone DualEyes, used to sync double-system sound projects. In a very short time period, PluralEyes went from an unknown to a brand name synonymous with a product or process, much like Coke or Kleenex.

Now that Sharpe’s Singular Software products are part of the Red Giant Software family, PluralEyes is available as the new and improved, standalone PluralEyes 3 (currently in version 3.1). It encompasses all of the features of both the original PluralEyes and of DualEyes. This means that PluralEyes 3 supports two basic processes: a) synchronizing camera files with external audio, and b) synchronizing multiple cameras to each other or to a common sound track. This is all done by comparing the audio tracks against each other without the use of timecode, clapsticks or other common reference points.

PluralEyes 3 analyzes and matches audio waveform shapes to accomplish this, so without belaboring the obvious, all camera files have to include an audio track recorded in the same general environment. Since PluralEyes uses very good audio analysis tools and audio normalization to aid the process, the camera audio does not have to be pristine. The most common scenario is a high-quality audio recording as a separate digital audio file and camera audio that was recorded solely with the onboard mic. Naturally the cleaner this onboard recording is, the more likely that synchronization will be successful.
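Without claiming anything about PluralEyes’ proprietary algorithm, the underlying principle of audio-based alignment can be shown with a toy brute-force cross-correlation: slide one track against the other and keep the offset where the two waveforms agree most strongly. All values here are invented for illustration:

```python
# Illustrative sketch of audio-based sync (not PluralEyes' algorithm):
# find the shift that maximizes the sum of sample products between a
# reference recording and a camera's scratch track.

def best_offset(reference, camera, max_shift=50):
    """Return the shift of `camera` that best matches `reference`."""
    def score(shift):
        return sum(reference[i + shift] * camera[i]
                   for i in range(len(camera))
                   if 0 <= i + shift < len(reference))
    return max(range(-max_shift, max_shift + 1), key=score)

# A tiny "clap" captured by both devices, offset by 3 samples.
ref = [0, 0, 0, 1, 5, 1, 0, 0, 0, 0]
cam = [1, 5, 1, 0, 0, 0, 0, 0, 0, 0]
offset = best_offset(ref, cam)   # the camera clip starts 3 samples early
```

Real tools work on much longer windows and normalize levels first – which is why, as noted above, a clean onboard recording makes a successful match more likely.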

The new features of PluralEyes 3 include a brand new user interface, faster synchronization, NLE round-tripping support (Apple Final Cut Pro, Final Cut Pro X, Avid Media Composer and Adobe Premiere Pro) and direct exporting of new, synchronized media files. To synchronize double-system projects, simply drag your camera files into the interface’s camera section and the audio tracks into the audio section. PluralEyes 3 lets you create multiple bins as tabs across the top of the interface for use in organizing your files. For instance, you might want a separate bin for each camera or shoot date or location.

As you add the camera and audio clips to these sections, they will be lined up in ascending order within the lower timeline window. Once the timeline is filled, click “synchronize” and watch PluralEyes 3 do its magic. If the audio recording level is low, you can opt to level the audio (normalization) during this process. That will make a successful match more likely, but it’s an extra step, so the total synchronizing process will take a little longer. Part of PluralEyes 3’s new interface is a 2-up view, which makes it possible to see how the audio tracks align. This view will aid you in adjusting sync if needed.

When synchronization is complete, PluralEyes 3 offers several export options. If you are sending these files to Premiere Pro, Final Cut Pro or Final Cut Pro X, simply export the appropriate XML version. You can choose to replace the camera audio tracks with the audio file’s track as part of this step. Then import that XML into the NLE you selected. When I ran this test with FCP X, the export options let me send two new Events (synchronized clips plus synchronized clips with replaced audio) and a new sequence (Project) representing the PluralEyes timeline. This timeline had both sets of audio channels turned on, so you’ll have to mute the camera tracks first if you intend to use this timeline.

A new feature is the ability to export new media files. For instance, if you want new clips where the high-quality audio has replaced the camera’s reference track, PluralEyes 3 will export these and write new media files. The advantage is that this approach is independent of your NLE choice, making the self-contained, synchronized files easy to migrate between systems.

PluralEyes 3 can also sync multiple cameras for a multi-camera edit session. First, start in the NLE by building a timeline with the clips for each camera placed on a separate video track. Video 1 = camera 1, video 2 = camera 2 and so on. Multiple broken clips from the same camera angle should be placed back-to-back on the same track. In the case of FCP X, group multiple clips from the same camera into a single secondary storyline, before proceeding to the next camera. Once you are done, export an XML file for that sequence. For Avid Media Composer projects, export an AAF file with the media linked and not embedded.

The XML or AAF file is then imported into PluralEyes 3. You’ll end up with a timeline that is populated with the different camera angles corresponding to your NLE sequence. Next, click “synchronize” and watch as PluralEyes realigns the camera clips by referencing the sound tracks against each other. The 2-up view is handy to compare two cameras (as well as their audio tracks) against each other, in case you have any question regarding their synchronization. Once this process is done, export a new XML or AAF from PluralEyes. Import that file into the NLE and you will have a timeline with camera clips rearranged in sync. This would represent what editors typically call a “sync map”. In the case of FCP X, the PluralEyes 3 export settings offer the option of exporting new events, as well as multicam clips. These can be used in FCP X’s standard multicam editing workflow. Open the FCP X angle viewer for access to editing between camera angles.

Red Giant’s PluralEyes 3 is a major advance over the original concept. It’s no longer tied to a single NLE, but is useful both in standalone and NLE-specific workflows. As editors deal with an ever-increasing, diverse spectrum of media sources, a tool like PluralEyes is an essential part of the kit. It was a no-brainer on day one, but even more so in this new and improved version.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters

AJA Video Systems T-Tap

Thunderbolt is the latest protocol for peripherals used by Apple on its computers to carry audio, video, data and power over a single cable. The protocol combines Apple’s DisplayPort technology and PCIe into a single connectivity path. The technology can be used to daisy-chain numerous devices, including storage, monitors and broadcast I/O hardware. Thunderbolt ports are currently available on Apple MacBook Air, MacBook Pro, Mac mini and iMac computers. Manufacturers, such as AJA Video Systems, Blackmagic Design and Matrox, have embraced Thunderbolt technology and produced a number of specific capture and output devices designed to be used with it.

The newest Thunderbolt unit to hit the market and start shipping is the T-Tap from AJA. The T-Tap follows AJA’s previously released Io XT, which places much of the power of AJA’s popular KONA cards into a Thunderbolt-enabled external unit. In spite of the fact that Io XT packs a lot of punch into a small, lightweight unit, there was a need for an even smaller product. Thus came the T-Tap, a small, robust, external adapter designed only for broadcast output and monitoring. Without input electronics, the size of the unit could be reduced to a palm-sized, metal-enclosed adapter. It is ideal for the editor who just needs to connect his laptop or iMac to an external monitor or recording deck.

The AJA T-Tap is a bus-powered, end-of-chain Thunderbolt product. This means it has to be last in a series of Thunderbolt devices. For example, if you used a Thunderbolt-enabled Promise Pegasus storage array, the T-Tap could be connected to the Pegasus’ looped Thunderbolt output port. Both storage and T-Tap would be connected in a serial path from the single port on a MacBook Pro. In the case of an iMac with dual Thunderbolt ports or the use of FireWire, USB3 or internal storage, the T-Tap would be directly connected to the Mac. Neither type of connection significantly impacts the performance of getting video out through the T-Tap due to Thunderbolt’s bidirectional 10Gbps throughput. Full bandwidth, uncompressed audio and video are sent over the single Thunderbolt cable to the T-Tap. In turn, it can be connected to any gear with SDI or HDMI connections (or both simultaneously). The T-Tap is capable of passing 10-bit, uncompressed SD, HD and even 2K (2048 x 1080) video with up to eight channels of embedded 24-bit digital audio.
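A quick back-of-the-envelope calculation (my own arithmetic, not an AJA specification) shows why a single Thunderbolt cable has headroom to spare for this signal:

```python
# Rough bandwidth estimate for 10-bit uncompressed HD video plus
# embedded audio, versus Thunderbolt's 10 Gbps. Active pixels only;
# real SDI links also carry blanking, so actual rates run higher.

width, height, fps = 1920, 1080, 30
bits_per_pixel = 20          # 10-bit 4:2:2 YUV averages 20 bits/pixel
video_bps = width * height * fps * bits_per_pixel
audio_bps = 8 * 24 * 48000   # eight channels of 24-bit / 48 kHz audio

total_gbps = (video_bps + audio_bps) / 1e9
# roughly 1.25 Gbps of payload against a 10 Gbps pipe
```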

Setting up the T-Tap configuration

I did my testing connecting the AJA T-Tap to an Apple 17” 2.2GHz Core i7 MacBook Pro running OS 10.8.2. The unit was directly connected to the laptop’s Thunderbolt port with Apple ProRes LT media playing from an external G-DRIVE mini. This was connected to the laptop and bus-powered using the FireWire 800 port. The T-Tap’s SDI output was connected to a TV Logic monitor, while the HDMI was simultaneously connected to a Panasonic plasma display.

Currently the T-Tap works with a variety of editing hosts, including Avid Media Composer/Symphony/NewsCutter (6.5/10.5), Apple Final Cut Pro 7/X and Adobe Premiere Pro CS6. AJA does not employ a unified installer, so the basic driver software package that you download enables the unit to work with Apple products. Use with Adobe or Avid systems requires the download of additional plug-ins. The package also includes a number of AJA utilities, such as software to output QuickTime media files through the T-Tap without launching any other NLE application. T-Tap does not yet work with Autodesk Smoke 2013, nor does it work with DaVinci Resolve, since Blackmagic Design restricts output to their own products.

Set-up of the T-Tap is controlled through the AJA Control Panel. If you’ve used AJA’s other products, like KONA cards, then you’ll be familiar with its use. I did my testing with FCP 7 and FCP X. Under the “legacy” version of Final Cut Pro, you have quite a lot of direct control over video output using FCP’s pulldown menu for playback and viewing. With FCP X, you have to set the desired format that matches your editing timeline in the AJA Control Panel, before launching FCP X. The T-Tap is intended for monitoring and synchronous output only, so there is no VTR control port on the unit. You can certainly “roll and record” a deck on-the-fly, but there is no software control for frame-accurate output from any of the NLEs to a recorder.

The T-Tap supports all of the popular video frame rates, as well as true 24, 30 and 60fps. Video can be interlaced, progressive or progressive segmented frame (PsF) and the HDMI output supports both YUV and RGB video (host software-dependent). Generally the video is simultaneously played through both the SDI and HDMI ports, but some frame rate/scanning combos, such as 23.98 PsF, will not be compatible with HDMI and will display only on the SDI output.

Working with projects

I found that some settings in the AJA Control Panel can override the host NLE’s setting. This is especially true of FCP X. For example, I was playing a 23.98 timeline, but had the T-Tap set to 29.97i output. It did display as a 1080i signal, but with a 2:2:2:4 rather than the proper 2:3:2:3 pulldown cadence added. However, when I played the same media in FCP 7 using a 1080p/23.98 sequence, but with the playback set to 29.97i in the FCP menu, the output and display used the correct pulldown cadence. This has less to do with the capabilities of the T-Tap and more to do with what the host software allows or takes advantage of. Although there is no secondary format (upconverted or downconverted) output, as with the KONA cards, you can set the primary playback to be downconverted from within the NLE software. Testing the same 1080p/23.98 sequence in FCP 7, I was able to change the playback settings to 720p/59.94 (downsampled and cadence inserted) and the T-Tap performed beautifully, just like a standard PCIe card.
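For readers unfamiliar with pulldown cadences, the mapping is easy to demonstrate: in a proper 2:3:2:3 cadence, four film-rate frames are distributed across ten interlaced fields. A small sketch of that mapping:

```python
# Illustrative sketch of 2:3 pulldown: four 23.98p frames become
# ten 59.94i fields in a repeating 2-3-2-3 cadence.

def pulldown_fields(frames):
    """Map progressive frames to interlaced fields, 2:3:2:3 cadence."""
    cadence = [2, 3, 2, 3]
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'] - 10 fields
```

A 2:2:2:4 output, by contrast, holds the fourth frame for four fields, which is visible as uneven motion.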

A few other niceties are worth noting. You can add timecode overlays from the sequence’s timecode (“burn-in” windows) to the output. HDMI protocol can be set for HDMI or DVI, when using an adapter for a DVI display. The RGB range can be set to full (0-1023) or SMPTE (64-940) levels. Settings can be saved or recalled from a Presets window.

Scrubbing, skimming and playback performance was very responsive with either of the Final Cut applications. Running this hardware combo and Apple ProRes LT media felt no less agile than running on a Mac Pro with fast storage and a PCIe card, like a KONA.

Overall, I found the T-Tap to be a great little unit, but there are a couple of things to make note of. First of all, the T-Tap doesn’t include its own Thunderbolt cable. This is typical of most manufacturers, but you’ll need to factor in another $49 (Apple) for a cable. The T-Tap is also very warm to the touch. There’s a lot of processing going on, of course, but it was considerably warmer than the G-DRIVE sitting right next to it. The AJA Io XT that I tested a few months ago felt cooler, but it is larger and uses a hard plastic case. A Thunderbolt device carries up to 10 watts of power, so metal was used in the smaller T-Tap to dissipate heat.

Sizing up the competition

I would be remiss if I didn’t address the competition. Both AJA Video Systems and Blackmagic Design often compete head-to-head by offering similar products. Usually the AJA products are a little more expensive and that’s just as true here. The T-Tap lists for $295, which is pretty low to start with, but of course, the Blackmagic Design UltraStudio Mini Monitor is less. The relative difference may sound like a lot, but since both cost so little, it really isn’t. Both companies make great products, but if either of these products is on your radar, then make sure you compare them apples-to-apples with the features that are most important to you. The AJA T-Tap does include some notable features that you may need. There’s 2K output, RGB support to an HP DreamColor display and a 3-year warranty. As someone who has dealt with AJA over the years on a few minor support issues, I can certainly attest to the fact that their customer response is one of the best in the business. That alone may be worth paying a few bucks more for.

As productions move into a file-based world and smaller, more capable computers can handle the load, we become less dependent on cards that you have to install into a workstation just to monitor your video. Yet, the need to provide broadcast quality monitoring and real-time output to recorders, routers, servers and switchers has not been diminished. AJA has been a key part of this migration with the right products to serve video professionals. The T-Tap continues as part of the legacy. It’s small and robust, yet packs a punch where it counts.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters

DaVinci Resolve Workflows


Blackmagic Design’s purchase of DaVinci Systems put a world class color grading solution into the hands of every video professional. With Resolve 9, DaVinci sports a better user interface that makes it easy to run, regardless of whether you are an editor, colorist or DIT working on set. DaVinci Resolve 9 comes in two basic Mac or Windows software versions, the $995 paid and the free Lite version. The new Blackmagic Cinema Camera software bundle also includes the full (paid) version, plus a copy of Ultrascope. For facilities seeking to add comprehensive color grading services, there’s also a version with Blackmagic’s dedicated control surface, as well as Linux system configurations.

Both paid and free versions of Resolve (currently at version 9.1) work the same way, except that the paid version offers larger-than-HD output, noise reduction and the ability to tap into more than one extra GPU card for hardware acceleration. Resolve runs fine with a single display card (I’ve done testing with the Nvidia GT120, the Nvidia Quadro 4000 and the ATI 5870), but requires a Blackmagic video output card if you want to see the image on a broadcast monitor.

Work in Resolve 9 generally flows left-to-right, through the tabbed pages, which you select at the bottom of the interface screen. These are broken into Media (where you access the media files that you’ll be working with), Conform (importing/exporting EDL, XML and AAF files), Color (where you do color correction), Gallery (the place to store and recall preset looks) and Deliver (rendering and/or output to tape).

Many casual users employ Resolve in these two ways: a) correcting camera files to send on to editorial, and b) color correction roundtrips with NLE software. This tutorial is intended to highlight some of the basic workflow steps associated with these tasks. Resolve is deep and powerful, so spend time with the excellent manual to learn its color correction tools, which would be impossible to cover here.

Creating edit-ready dailies – BMCC (CinemaDNG media)

The Blackmagic Cinema Camera can record images as camera raw, CinemaDNG image sequences. Resolve 9 can be used to turn these into QuickTime or MXF media for editing. Files may be graded for the desired final look at this point, or the operator can choose to apply the BMD Film preset. This log preset generates files with a flat look comparable to ARRI Log-C. You may prefer this if you intend to use a Log-to-Rec709 LUT (look up table) in another grading application or a filter like the Pomfort Log-to-Video effect, which is available for Final Cut Pro 7/X.

Step 1 – Media: Drag clip folders into the Media Pool section.

Step 2 – Conform: Skip this tab, since the clips are already on a single timeline.

Step 3 – Color: Make sure the camera setting (camera icon) for the clips on the timeline is set to Project. Open the project settings (gear icon). Change and apply these values: 1) Camera raw – CinemaDNG; 2) White Balance – as shot; 3) Color Space and Gamma – BMD Film.

Step 4 – Deliver: Set it to render each clip individually, assign the target destination and frame rate and the naming options. Then choose Add Job and Start Render.

The free version of Resolve will downscale the BMCC’s 2.5K-wide images to 1920×1080. The paid version of Resolve will permit output at the larger, native size. Rendered ProRes files may now be directly imported into FCP 7, FCP X or Premiere Pro. Correct the images to a proper video appearance by using the available color correction tools or filters within the NLE that you are using.

Creating edit-ready dailies – ARRI Alexa / BMCC (ProRes, DNxHD media)

Both the ARRI Alexa and the Blackmagic Cinema Camera can record Apple ProRes and Avid DNxHD media files to onboard storage. Each offers a similar log gamma profile that may be applied during recording in order to preserve dynamic range: Log-C for the Alexa and BMD Film for the Blackmagic. These profiles facilitate high-quality grading later. Resolve may be used to properly grade these images to the final look as dailies are generated, or it may simply be used to apply a viewing LUT for a more pleasing appearance during the edit.

Step 1 – Media: Drag clip folders into the Media Pool section.

Step 2 – Conform: Skip this tab, since the clips are already on a single timeline.

Step 3 – Color: Make sure the camera setting for the clips on the timeline is set to Project. Open the project settings and set this value: 3D Input LUT – ARRI Alexa Log-C or BMD Film to Rec 709.

Step 4 – Deliver: Set it to render each clip individually, assign the target destination and frame rate and the naming options. Check whether or not to render with audio. Then choose Add Job and Start Render.

The result will be new, color corrected media files, ready for editing. To render Avid-compatible MXF media for Avid Media Composer, select the Avid AAF Roundtrip from the Easy Setup presets. After rendering, return to the Conform page to export an AAF file.

Roundtrips – using Resolve together with editing applications

DaVinci Resolve supports roundtrips from and back to NLEs based on EDL, XML and AAF lists. You can use Resolve for roundtrips with Apple Final Cut Pro 7/X, Adobe Premiere Pro and Avid Media Composer/Symphony. You may also use it to go between systems. For example, you could edit in FCP X, color correct in Resolve and then finish in Premiere Pro or Autodesk Smoke 2013. Media should have valid timecode and reel IDs to enable the process to work properly.

In addition to accessing the camera files and generating new media with baked-in corrections, these roundtrips require an interchange of edit lists. Resolve imports an XML and/or AAF file to link to the original camera media and places those clips on a timeline that matches the edited sequence. When the corrected (and trimmed) media is rendered, Resolve must generate new XML and/or AAF files, which the NLE uses to link to these new media files. AAF files are used with Avid systems and MXF media, while standard XML files and QuickTime media are used with Final Cut Pro 7 and Premiere Pro. FCP X uses a new XML format that is incompatible with FCP 7 or Premiere Pro without translation by Resolve or another utility.

Step 1 – Avid/Premiere Pro/Final Cut Pro: Export a list file that is linked to the camera media (AAF, XML or FCPXML).

Step 2 – Conform (skip Media tab): Import the XML or AAF file. Make sure you have set the options to automatically add these clips to the Media Pool.

Step 3 – Color: Grade your shots as desired.

Step 4 – Deliver: Easy Setup preset – select Final Cut Pro XML or Avid AAF roundtrip. Verify QuickTime or MXF rendering, depending on the target application. Change handle lengths if desired. Check whether or not to render with audio. Then choose Add Job and Start Render.

Step 5 – Conform: Export a new XML (FCP7, Premiere Pro), FCPXML (FCP X) or AAF (Avid) list.

The roundtrip back

The reason you want to go back into your NLE is for the final finishing process, such as adding titles and effects or mixing sound. If you rendered QuickTime media and generated one of the XML formats, you’ll be able to import these new lists into FCP7/X or Premiere Pro and those applications will reconnect to the files in their current location. FCP X offers the option to import/copy the media into its own managed Events folders.

If you export MXF media and a corresponding AAF list with the intent of returning to Avid Media Composer/Symphony, then follow these additional steps.

Step 1 – Copy or move the folder of rendered MXF media files into an Avid MediaFiles/MXF subfolder. Rename this copied folder of rendered Resolve files with a number.

Step 2 – Launch Media Composer or Symphony and return to your project or create a new project.

Step 3 – Open a new, blank bin and import the AAF file that was exported from Resolve. This list will populate the bin with master clips and a sequence, which will be linked to the new MXF media rendered in Resolve and copied into the Avid MediaFiles/MXF subfolder.

Originally written for DV magazine / Creative Planet Network

©2013 Oliver Peters