Avid Media Composer | First

They’ve teased us for two years, but now it’s finally out. Avid Technology has released its free nonlinear editing application, Media Composer | First. This is not dumbed-down, teaser software, but rather a partially-restricted version of the full-fledged Media Composer software and built upon the same code. With that comes an inherent level of complexity, which Avid has sought to minimize for new users; however, you really do want to go through the tutorials before diving in.

It’s important to understand who the target user is. Avid didn’t set out to simply add another free, professional editing tool to an increasingly crowded market. Media Composer | First is intended as a functional starter tool for users who want to get their feet wet in the Avid ecosystem, but then eventually convert to the full-fledged, paid software. That’s been successful for Avid with Pro Tools | First. To sweeten the pot, you’ll also get 350 sound effects from Pro Sound Effects and 50 royalty-free music tracks from Sound Ideas (both sets are also free).

Diving in

To get Media Composer | First, you must set up an Avid master account, which is free. Existing customers can also get First, but the software cannot be co-installed on a computer with the full version. For example, I installed Media Composer | First on my laptop, because I have the full Media Composer application on my desktop. You must sign into the account and stay signed in for Media Composer | First to launch and run. I did get it to work by signing in and then disconnecting the internet. There was a disconnection prompt, but nevertheless, the application worked, saved, and exported properly. It doesn’t seem mandatory to be constantly connected to Avid over the internet. All project data is stored locally, so this is not a cloud application.

Account management and future updates are handled through Application Manager, an Avid desktop utility. It’s not my favorite, as it’s unreliable at times, but it does work most of the time. The installer .dmg file takes a long time to verify when you open it. This seems to be a general Avid quirk, so be patient. When you first open the application, you may get a disk drive write permissions error message. On macOS you normally set drive permissions for “system”, “wheel”, and “everyone”. Typically I have the last two set to “read only”, which works for every other application except Avid’s. Therefore, if you want to store Avid media on your internal system hard drive, “everyone” must be changed to “read & write”.
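
If you’d rather confirm that setting from a script than from the Finder’s Get Info window, the macOS “everyone” class corresponds to the POSIX “others” permission bits. Here’s a minimal Python sketch that checks whether “everyone” has write access to a media location; the folder path is a hypothetical example.

```python
# Minimal check of the "everyone" (POSIX "others") write bit on a media folder.
# The path below is a hypothetical example -- substitute your own media location.
import os
import stat

media_path = "/Users/Shared/AvidMedia"  # hypothetical example path

mode = os.stat(media_path).st_mode
if mode & stat.S_IWOTH:
    print("'everyone' already has read & write access")
else:
    print("'everyone' is read only -- change it via Get Info in the Finder")
```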

The guided tour

The Avid designers have tried to make the Media Composer | First interface easy to navigate for new users – especially those coming from other NLEs, where media and projects are managed differently than in Media Composer. Right at the launch screen you have the option to learn through online tutorials. These will be helpful even for experienced users who might try to “out-think” the software. The interface includes a number of text overlays to help you get started. For example, there is no place to set project settings. The first clip added to the first sequence sets the project settings from there on. So, don’t drop a 25fps clip onto the timeline as your first clip, if you intend to work in a 23.98fps project. These prompts are right in front of you, so if you follow their guidance, you’ll be OK.

The same holds true for importing media through the Source Browser. With Media Composer you either transcode a file, which turns it into Avid-managed media placed into the Avid MediaFiles folder, or simply link to the file. If you select link, then the file stays in place and it’s up to the user not to move or delete that file on the hard drive. Although the original Avid paradigm was to only manage media in its MediaFiles hard drive folders, the current versions handle linking just fine and act largely the same as other NLEs.

Options, restrictions, and limitations

Since this is a free application, a number of features have been restricted. There are three biggies. Tracks are limited to four video tracks and eight audio tracks. This is actually quite workable; however, I think a higher audio track count would have been advisable, because of how Avid handles stereo, mono, and multichannel files. On a side note, if you use the “collapse” function to nest video clips, it’s possible to vertically stack more than just four clips on the timeline.

The application is locked to a maximum project size of 1920×1080 (Rec. 709 color space only) and up to 59.94fps. Source files can be larger (such as 4K) and you can still use them on the timeline, but you’ll have to pan-and-scan, crop, or scale them. I hope future versions will permit at least UltraHD (4K) project sizes.

Finally, Media Composer | First projects cannot be interchanged with full-fledged Media Composer projects. This means that you cannot start in Media Composer | First and then migrate your project to the paid version. Hopefully this gets fixed in a future update. If not, it will negatively impact students and indie producers using the application for any real work.

As expected, there are no 3D stereoscopic tools, ScriptSync (automatic speech-to-text/sync-to-script), PhraseFind (phonetic search engine), or Symphony (advanced color correction) options. One that surprised me, though, was the removal of the superior SpectraMatte keyer. You are left with the truly terrible RGB keyer for blue/green-screen work.

Nevertheless, there’s plenty of horsepower left. For example, FrameFlex handles resizing and Timewarp handles retiming, which is how 4K sources and off-speed frame rates are accommodated. Color correction (including scopes), multicam, IllusionFX, source setting color LUTs, AudioSuite, and Pro Tools-style audio track effects are also there. Transcoding allows for the use of a wide range of codecs, including ProRes on a Mac. 4K camera clips will be transcoded to 1080. However, exports are limited to Avid DNxHD and H.264 QuickTime files at up to 1920×1080. The only DNxHD export flavor is the 100Mbps variant at 29.97fps (80Mbps at 23.98fps), which is comparable to ProRes LT. It’s good quality, but not at the highest mastering levels.
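
To put those export bitrates into perspective, here’s a quick back-of-the-envelope calculation in Python (an illustration only; it ignores audio tracks and container overhead):

```python
# Rough storage estimate for the DNxHD export flavors mentioned above.
def gb_per_hour(mbps: float) -> float:
    """Convert a video bitrate in megabits per second to gigabytes per hour."""
    return mbps / 8 * 3600 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

for label, mbps in [("DNxHD 100 (29.97p)", 100), ("DNxHD 80 (23.98p)", 80)]:
    print(f"{label}: roughly {gb_per_hour(mbps):.0f} GB per hour of footage")

# DNxHD 100 (29.97p): roughly 45 GB per hour of footage
# DNxHD 80 (23.98p): roughly 36 GB per hour of footage
```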

Conclusion

This is a really good first effort, no pun intended. As you might expect, it’s a little buggy for a first version. For example, I experienced a number of crashes while testing source LUTs. However, it was well-behaved during standard editing tasks. If Media Composer | First files can become compatible with the paid systems and the 1080 limit can be increased to UHD/4K, then Avid has a winner on its hands. Think of the film student who starts on First at home, but then finishes on the full version in the college’s computer lab. Or the indie producer/director who starts his or her own rough cut on First, but then takes it to an editor or facility to complete the process. These are ideal scenarios for First. I’ve cut tons of short- and long-form projects, including a few feature films, using a variety of NLEs. Nearly all of those could have been done using Media Composer | First. Yes, it’s free, but there’s enough power to get the job done and done well.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Suburbicon

George Clooney’s latest film, Suburbicon, originated over a decade ago as a screenplay by Joel and Ethan Coen. Clooney picked it up when the Coens decided not to produce the film themselves. Clooney and writing partner Grant Heslov (The Monuments Men, The Ides of March, Good Night, and Good Luck) rewrote it, setting it in the 1950s and adding another story element. In the summer of 1957, the Myers, an African-American couple, moved into a largely white suburb in Levittown, Pennsylvania, setting off months of violent protests. The rewritten script interweaves the tale of the black family with that of their next-door neighbors, Gardner (Matt Damon) and Margaret (Julianne Moore). In fact, a documentary was produced about the historical events, and shots from that documentary were used in Suburbicon.

Calibrating the tone

During the production and editing of the film, the overall tone was adjusted as a result of the actual, contemporary events occurring in the country. I spoke with the film’s editor, Stephen Mirrione (The Revenant, Birdman or (The Unexpected Virtue of Ignorance), The Monuments Men) about this. Mirrione explains, “The movie is presented as over-the-top to exaggerate events as satire. In feeling that out, George started to tone down the silliness, based on surrounding events. The production was being filmed during the time of the US election last year, so the mood on the set changed. The real world was more over-the-top than imagined, so the film didn’t feel quite right. George started gravitating towards a more realistic style and we locked into that tone by the time the film moved into post.”

The production took place on the Warner Brothers lot in September 2016 with Mirrione and first assistant editor Patrick Smith cutting in parallel with the production. Mirrione continues, “I was cutting during this production period. George would come in on Saturdays to work with me and ‘recalibrate’ the cut. Naturally some scenes were lost in this process. They were funny scenes, but just didn’t fit the direction any longer. In January we moved to England for the rest of the post. Amal [Clooney, George’s wife] was pregnant at the time, so George and Amal wanted to be close to her family near London. We had done post there before and had a good relationship with vendors for sound post. The final sound mix was in the April/May time frame. We had an editing room set up close to George outside of London, but also others in Twickenham and at Pinewood Studios. This way I could move around to work with George on the cut, wherever he needed to be.”

Traveling light

Mirrione is used to working with a light footprint, so the need for mobility was no burden. He explains, “I’m accustomed to being very mobile. All the media was in the Avid DNxHD36 format on mobile drives. We had an Avid ISIS shared storage system in Twickenham, which was the hub for all of the media. Patrick would make sure all the drives were updated during production, so I was able to work completely with standalone drives. The Avid is a bit faster that way, although there’s a slight trade-off waiting for updated bins to be sent. I was using a ‘trash can’ [2013] Mac Pro plus AJA hardware, but I also used a laptop – mainly for reference – when we were in LA during the final steps of the process.” The intercontinental workflow also extended to color correction. According to Mirrione, “Stefan Sonnenfeld was our digital intermediate colorist and Company 3 [Co3] stored a back-up of all the original media. Through an arrangement with Deluxe, he was able to stream material to England for review, as well as from England to LA to show the DP [Robert Elswit].”

Music was critical to Suburbicon and scoring fell to Alexandre Desplat (The Secret Life of Pets, Florence Foster Jenkins, The Danish Girl). Mirrione explains their scoring process. “It was very important, as we built the temp score in the edit, to understand the tone and suspense of the film. George wanted a classic 1950s-style score. We tapped some Elmer Bernstein, Grifters, The Good Son, and other music for our initial style and direction. Peter Clarke was brought on as music editor to help round out the emotional beats. Once we finished the cut, Alexandre and George worked together to create a beautiful score. I love watching the scenes with that score, because his music makes the editing seem much more exciting and elegant.”

Suiting the edit tool to your needs

Stephen Mirrione typically uses Avid Media Composer to cut his films and Suburbicon is no exception. Unlike many film editors who rely on unique Avid features, like ScriptSync, Mirrione takes a more straightforward approach. He says, “We were using Media Composer 8. The way George shoots, there’s not a lot of improv or tons of takes. I prefer to just rely on PDFs of the script notes and placing descriptions into the bins. The infrastructure required for ScriptSync, like extra assistants, is not something I need. My usual method of organization is a bin for each day of dailies, organized in shooting order. If the director remembers something, it’s easy to find in a day bin. During the edit, I alternate my bin set-ups between the script view and the frame view.”

With a number of noted editors dabbling with other software, I wondered whether Mirrione has been tempted. He responds, “I view my approach as system-agnostic and have cut on Lightworks and the old Laser Pacific unit, among others. I don’t want to be dependent on one piece of software to define how I do my craft. But I keep coming back to Avid. For me it’s the trim mode. It takes me back to the way I cut film. I looked at Resolve, because it would be great to skip the roundtrip between applications. I had tested it, but felt it would be too steep a learning curve, and that would have impacted George’s experience as the director.”

In wrapping up our conversation, Mirrione concluded with this takeaway from his Suburbicon experience. He explains, “In our first preview screening, it was inspiring to see how seriously the audience took to the film and the attachment they had to the characters. The audiences were surprised at how biting and relevant it is to today. The theme of the film is really talking about what can happen when people don’t speak out against racism and bullying. I’m so proud and lucky to have the opportunity to work with someone like George, who wants to do something meaningful.”

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Mindhunter

The investigation of crime is a film topic with which David Fincher is very familiar. He returns to this genre in the new Netflix series, Mindhunter, which is executive produced by Fincher and Charlize Theron. The series is the story of the FBI’s Behavioral Science Unit and how it became an elite profiling team, known for investigating serial criminals. The TV series is based on the nonfiction book Mind Hunter: Inside the FBI’s Elite Serial Crime Unit, co-written by Mark Olshaker and John Douglas, a former agent in the unit who spent 25 years with the FBI. Agent Douglas interviewed scores of serial killers, including Charles Manson, Ted Bundy, and Ed Gein, who dressed himself in his victims’ skin. The lead character in the series, Holden Ford (played by Jonathan Groff) is based on Douglas. The series takes place in 1979 and centers on two FBI agents, who were among the first to interview imprisoned serial killers in order to learn how they think and apply that to other crimes. Mindhunter is about the origins of modern day criminal profiling.

As with other Fincher projects, he brought in much of the team that’s been with him through the various feature films, like Gone Girl, The Girl with the Dragon Tattoo, and Zodiac. It has also given a number of the team members the opportunity to move up in their careers. I recently spoke with Tyler Nelson, one of the four series editors, who was given the chance to move from the assistant chair to that of a primary editor. Nelson explains, “I’ve been working with David Fincher for nearly 11 years, starting with The Curious Case of Benjamin Button. I started on that as an apprentice, but was bumped up to an assistant editor midway through. There was actually another series in the works for HBO called Videosyncrasy, which I was going to edit on. But that didn’t make it to air. So I’m glad that everyone had the faith in me to let me edit on this series. I cut the four episodes directed by Andrew Douglas and Asif Kapadia, while Kirk Baxter [editor on Gone Girl, The Girl with the Dragon Tattoo, The Social Network] cut the four shows that David directed.”

Pushing the technology envelope

The Fincher post operation has a long history of trying new and innovative techniques, including their selection of editing tools. The editors cut this series using Adobe Premiere Pro CC. Nelson and the other editors are no strangers to Premiere Pro, since Baxter had cut Gone Girl with it. Nelson says, “Of course, Kirk and I have been using it for years. One of the editors, Byron Smith, came over from House of Cards, which was being cut on [Apple] Final Cut Pro 7. So that was an easy transition for him. We are all fans of Adobe’s approach to the entertainment industry and were onboard with using it. In fact, we were running on beta software, which gave us the ability to offer feedback to Adobe on features that will hopefully make it into released products and benefit all Premiere users.”

Pushing the envelope is also a factor on the production side. The series was shot with custom versions of the RED Weapon camera. Shots were recorded at 6K resolution, but framed for a 5K extraction, leaving a lot of “padding” around the edges. This allowed room for repositioning and stabilization, which are used a lot on Fincher’s projects. In fact, nearly all of the moving footage is stabilized. All camera footage is processed into EXR image sequences, in addition to ProRes files for “offline” editing. These ProRes files also get an added camera LUT, so everyone sees a good representation of the color correction during the editing process. One change from past projects was to bring color correction in-house. The final grade was handled by Eric Weidt on a FilmLight Baselight X unit, working from the EXR files. The final Netflix deliverables are 4K/HDR masters. Pushing a lot of data through a facility requires robust hardware systems. The editors used 2013 (“trash can”) Mac Pros connected to an Open Drives shared storage system. This high-end storage system was initially developed as part of the Gone Girl workflow and uses storage modules populated with all SSD drives.

The feature film approach

Unlike most TV series, where there’s a definite schedule to deliver a new episode each week, Netflix releases all of their shows at once, which changes the dynamic of how episodes are handled in post. Nelson continues, “We were able to treat this like one long feature film. In essence, each episode is like a reel of a film. There are 10 episodes and each is 45 minutes to an hour long. We worked it as if it was an eight-and-a-half to nine hour long movie.” Skywalker Sound did all the sound post after a cut was locked. Nelson adds, “Most of the time we handed off locked cuts, but sometimes when you hear the cleaned up sound, it can highlight issues with the edit that you didn’t notice before. In some cases, we were able to go back into the edit and make some minor tweaks to make it flow better.”

As Adobe moves more into the world of dialogue-driven entertainment, a number of developers are coming up with speech-to-text solutions that are compatible with Premiere Pro. This potentially provides editors a function similar to Avid’s ScriptSync. Would something like this have been beneficial on Mindhunter, a series based on extended interviews? Nelson replies, “I like to work with the application the way it is. I try not to get too dependent on any feature that’s very specific or unique to only one piece of software. I don’t even customize my keyboard settings too much, just so it’s easier to move from one workstation to another that way. I like to work from sequences, so I don’t need a special layout for the bins or anything like that.”

“On Mindhunter we used the same ‘KEM roll’ system as on the films, which is a process that Kirk Baxter and Angus Wall [editor on Zodiac, The Curious Case of Benjamin Button, The Social Network] prefer to work in,” Nelson continues. “All of the coverage for each scene set-up is broken up into ‘story beats’. In a 10 minute take for an interview, there might be 40 ‘beats’. These are all edited in the order of last take to first take, with any ‘starred’ takes at the head of the sequence. This way you will see all of the coverage, takes, and angles for a ‘beat’ before moving on to the group for the next ‘beat’. As you review the sequence, the really good sections of clips are moved up to video track two on the sequence. Then create a new sequence organized in story order from these selected clips and start building the scene. At any given time you can go back to the earlier sequences if the director asks to see something different than what’s in your scene cut. This method works with any NLE, so you don’t become locked into one and only one software tool.”

“Where Adobe’s approach is very helpful to us is with linked After Effects compositions,” explains Nelson. “We do a lot of invisible split screen effects and shot stabilization. Those clips are all put into After Effects comps using Dynamic Link, so that an assistant can go into After Effects and do the work. When it’s done, the completed comp just pops back into the timeline. Then ‘render and replace’ for smooth playback.”

The challenge

Certainly a series like this can be challenging for any editor, but how did Nelson take to it? He answers, “I found every interview scene to be challenging. You have an eight to 10 minute interview that needs to be interesting and compelling. Sometimes it takes two days to just get through looking at the footage for a scene like that. You start with ‘How am I going to do this?’ Somewhere along the line you get to the point where ‘This is totally working.’ And you don’t always know how you got to that point. It takes a long time approaching the footage in different ways until you can flesh it out. I really hope people enjoy the series. These are dramatizations, but real people actually did these terrible things. Certainly that creeps me out, but I really love this show and I hope people will see the craftsmanship that’s gone into Mindhunter and enjoy the series.”

In closing, Nelson offered these additional thoughts. “I’d gotten an education each and every day. Lots of editors haven’t figured it out until well into a long career. I’ve learned a lot being closer to the creative process. I’ve worked with David Fincher for almost 11 years. You think you are ready to edit, but it’s still a challenge. Many folks don’t get an opportunity like this and I don’t take that lightly. Everything that I’ve learned working with David has given me the tools and I feel fortunate that the producers had the confidence in me to let me cut on this amazing show.”

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

6 Below

From IMAX to stereo3D, theaters have invested in various technologies to entice viewers and increase ticket sales. With a tip of the hat to the past, Barco has developed a new ultrawide, 3-screen digital projection system, which is a similar concept to Cinerama film theaters from the 1950s. But modern 6K-capable digital cinema cameras make the new approach possible with stunning clarity. There are currently 40 Barco Escape theaters worldwide, with the company looking for opportunities to run films designed for this format.

Enter Scott Waugh, director (Act of Valor, Need for Speed) and co-founder of LA production company Bandito Brothers. Waugh, who is always on the lookout for new technologies, was interested in developing the first full-length feature film to take advantage of this 3-screen, 7:1 aspect ratio for its entire running time. But Waugh didn’t want to change how he intended to shoot the film strictly for these theaters, since the film would also be distributed to conventional theaters. This effectively meant that two films needed to come out of the post-production process – one formatted for the Barco Escape format and one for standard 4K theaters.

6 Below (written by Madison Turner) became the right vehicle. This is the true-life survival story of Eric LeMarque (played by Josh Hartnett), an ex-pro hockey player turned snowboarder with an addiction problem, who finds himself lost in the ice and snow of the California Sierra mountains for a week. To best tell this story, Waugh and company trekked an hour or more into the mountains above Sundance, Utah for the production.

To handle the post workflow and co-edit the film with Waugh, editor Vashi Nedomansky (That Which I Love Destroys Me, Sharknado 2, An American Carol) joined the team. Nedomansky, another veteran of Bandito Brothers who uses Adobe Premiere Pro as his axe of choice, has also helped set up Adobe-based editorial workflows for Deadpool and Gone Girl. Coincidentally, in earlier years Nedomansky had been a pro hockey player himself, before shifting to a career in film and video. In fact, he played against the real Eric LeMarque on the circuit.

Pushing the boundaries

The Barco Escape format projects three 2K DCPs to cover the total 6K width. To accommodate this, RED 6K cameras were used and post was done with native media at 6K in Adobe Premiere Pro CC. My first question to Nedomansky was this: why stay native? Nedomansky says, “We had always been pushing the boundaries at Bandito Brothers. What can we get away with? It’s always a question of time, storage, money, and working with a small team. We had a small 4-person post team for 6 Below, located near Sundance. So there was interest in not losing time to transcoding.

“After some testing, we settled on decked-out Dell workstations, because these could tackle the 6K RED raw files natively.” Two Dell Precision 7910 towers (20-core, 128GB RAM) with Nvidia Quadro M6000 GPUs were set up for editing, along with a third, less beefy HP quad-core computer for the assistant editor and visual effects. All three were connected to shared storage using a 10GigE network. Mike McCarthy, post production supervisor for 6 Below, set up the system. To keep things stable, they were running Windows 7 and stayed on the same Adobe Creative Cloud version throughout the life of the production. Nedomansky continues, “We kept waiting for the 6K to not play, but it never stopped in the six weeks that we were up there. My first assembly was almost three hours long – all in a single timeline – and I was able to play it straight through without any skips or stuttering.”

There were other challenges along the way. Nedomansky explains, “Almost all of the film was done as single-camera and Josh has to carry it with his performance as the sole person on screen for much of the film. He has to go through a range of emotions and you can’t just turn that on and off between takes. So there were lots of long 10-minute takes to convey his deterioration within the hostile environmental conditions. The story is about a man lost in the wild, without much dialogue. The challenge is how to cut down these long takes without taking away from his performance. One solution was to go against the grain – using jump cuts to shorten long takes. But I wanted to look for the emotional changes or a physical act to motivate a jump cut in a way that would make it more organic. In one case, I took a 10-minute take down to 45 seconds.”

When you have a film where weather is a character, you hope that the weather will cooperate. Nedomansky adds, “One of our biggest concerns going in was the weather. Production started in March – a time when there isn’t a lot of snow in Utah. Fortunately for us, a day before we were supposed to start shooting, they had the biggest ‘blizzard’ of the winter for four days. This saved us a lot of VFX time, because we didn’t have to create atmospherics, like snow in front of the lens. It was there naturally.”

Using the Creative Cloud tools to their fullest

6 Below features a significant number of visual effects shots. Nedomansky says, “The film has 1500 shots with 205 of them as VFX shots. John Carr was the assistant editor and visual effects artist on the film and he did all of the work in After Effects and at 6K resolution, which is unusual for films. Some of the shots included ‘day for night’ where John had to add star plates for the sky. This meant rotoscoping behind Josh and the trees to add the plates. He also had to paint out crew footprints in the snow, along with the occasional dolly track or crew member in a shot. There were also some split screens done at 6K right in Premiere Pro.”

The post schedule involved six weeks on-set and then fourteen more weeks back in LA, for a 20-week total. After that came sound post and grading (done at Technicolor). The process to correctly format the film for both Barco and regular theaters almost constituted posting two films. The RED camera image is 6144 x 2592 pixels, Barco Escape is 6144 x 864, and a 4K extraction is 4096 x 2160. Nedomansky explains, “The Barco frame is thin and wide. It could use the full width, but not height, of the full 6K RED image. So, I had to do a lot of ‘animation’ to reposition the frame within the Barco format. For the 4K version, the framing would be adjusted accordingly. The film has about 1500 shots, but we didn’t use different takes for the two versions. I was able to do this all through reframing.”
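
To visualize the reframing Nedomansky describes, here is a small Python sketch of the underlying crop math, using the frame sizes quoted above. The center offsets are hypothetical examples; in the actual edit the repositioning was animated shot by shot.

```python
# Illustrative crop math for the two deliverables, based on the frame sizes above.
SOURCE = (6144, 2592)   # RED camera image
BARCO  = (6144, 864)    # Barco Escape (three 2K screens wide)
DCI_4K = (4096, 2160)   # standard 4K extraction

def crop_window(source, target, cx=0.5, cy=0.5):
    """Return (left, top, width, height) of a target-sized window inside the
    source frame, positioned by normalized center coordinates (0.0 to 1.0)."""
    sw, sh = source
    tw, th = target
    return round((sw - tw) * cx), round((sh - th) * cy), tw, th

# The Barco frame uses the full 6144-pixel width, so only the vertical
# position has to be chosen (0.4 here is an arbitrary example).
print(crop_window(SOURCE, BARCO, cy=0.4))   # (0, 691, 6144, 864)

# The 4K version needs repositioning on both axes.
print(crop_window(SOURCE, DCI_4K))          # (1024, 216, 4096, 2160)
```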

In wrapping up our conversation, Nedomansky adds, “I played hockey against Eric and this added an extra layer of responsibility. He’s very much still alive today. Like any film of this type, it’s ‘based on’ the true story, but liberties are taken. I wanted to make sure that Eric would respect the result. Scott and I’ve done films that were heavy on action, but this film shows another directorial style – more personal and emotional with beautiful visuals. That’s also a departure for me and it’s very important for editors to have that option.”

6 Below was released on October 13 in cinemas.

Read Vashi’s own write-up of his post production workflow.

Images are courtesy of Vashi Visuals.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Sound Forge Pro Mac 3

There are plenty of modern tools that deal with audio, but sometimes you need a product with a very narrow focus to do the job without compromise. That’s where Sound Forge Pro fits in. Originally part of Sonic Foundry’s software development, the product, along with its siblings Vegas Pro and Acid, migrated to Sony Creative Software. Sony, in turn, sold off those products to German software developer Magix, where they appear to have found a good home. I recently tested Sound Forge Pro Mac 3, which is the macOS companion to Sound Forge Pro 11 on the Windows side. (Sound Forge Pro 12 is expected to roll out in 2018.) Both are advanced, multichannel audio editors, dedicated to editing, processing, and mastering individual audio files, as opposed to a DAW application, which is designed for mixing.

Although Magix’s other products are PC-centric, they’ve done a good job embracing and improving the Mac products. Sound Forge also comes in the Audio Studio version – a lower cost Windows product designed for users who don’t need quite as many features. There is no Mac equivalent for it yet. The former Mac version that was sold through Apple’s Mac App Store is no longer available. Naturally a product like Sound Forge Pro may require some justification for its price tag, since the application competes with great audio tools within most modern NLEs, like Final Cut Pro X, Premiere Pro, or Resolve. It’s also competing with Adobe Audition (included with a Creative Cloud subscription) and Apple Logic Pro X, which sports a lower cost.

Sound Forge Pro is primarily designed as a dedicated audio mastering application that also does precision audio editing. It works with multichannel audio files (up to 32 tracks) at bit depths of 24-bit, 32-bit, and 64-bit float, with sample rates up to 192kHz. It also works with video files, although it will only import the audio channels for processing. Sound Forge Pro for the Mac comes with several iZotope plug-ins, including Declicker, Declipper, Denoiser, Ozone Elements 7, and RX Elements. (The Windows version includes a slightly different mix of iZotope plug-ins.) That’s on top of Magix’s own plug-ins and any other AU plug-ins that might already be installed on your Mac from other applications. The bottom line is that you have a lot of effects and processing horsepower to work with when using Sound Forge Pro.

Even though Sound Forge Pro is essentially a single file editor, you can work with multiple individual files. Multiple files are displayed within the interface as horizontal tabs or in a vertical stack. You can process multiple files at the same time and can copy and paste between them. You can also copy and paste between individual channels within a single multichannel file.

As an audio editor, it’s fast, tactile, and non-destructive, making it ideal for music editing, podcasts, radio interviews, and more. For audio producers, it supports Red Book-compliant CD authoring. The attraction for video editors is its mastering tools, especially loudness control for broadcast compliance. Both Magix’s Wave Hammer and iZotope Ozone 7 Elements’ mastering tools are great for solving loudness issues. That’s aided by accurate LUFS metering. Other cool tools include AutoTrim, which automatically removes gaps of silence at the beginnings and ends of files or from regions within a file. There is also élastique Timestretch, a processing tool to slow down or speed up audio while maintaining the correct pitch. Timestretch can be applied to an entire file or simply to a section within a file. Effects tools and plug-ins are divided into those that require processing and those that can be played in real time. For example, Timestretch is applied as a processing step, whereas a reverb filter would play in real time. Processing is typically fast on any modern desktop or laptop computer, thanks to the application’s 64-bit engine.
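
For readers curious about what the loudness measurement itself involves, here’s a minimal sketch using the open-source pyloudnorm and soundfile Python libraries. These are not part of Sound Forge; the file name and the -24 LUFS target are hypothetical examples of the same kind of metering.

```python
# Sketch of integrated loudness (LUFS) metering, similar in spirit to the
# broadcast-compliance checks discussed above.
# Requires: pip install soundfile pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")      # hypothetical file name
meter = pyln.Meter(rate)                   # ITU-R BS.1770 style meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Common broadcast targets sit around -24 LUFS (ATSC A/85) or -23 LUFS (EBU R128).
target = -24.0
print(f"Offset from a -24 LUFS target: {loudness - target:+.1f} LU")
```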

Basic editing is as simple as marking a section and hitting the delete key. You can also split a file into events and then trim, delete, move, or copy & paste event blocks. If you slide an event to overlap another, a crossfade is automatically created. You can adjust the fade-in/fade-out slopes of these crossfades. The only missing item is the ability to scrub through audio in any fashion. So, no mouse scrub or JKL-key jogging with audible audio, as you’d normally find in an NLE application. That’s apparently there in the Windows versions, but not in this Mac version.

All in all, if audio is a significant part of your workload and you want to handle it in a better and easier fashion, then Sound Forge Pro Mac 3 is worth the investment.

Originally written for RedShark News.

©2017 Oliver Peters

SpeedScriber

Script-based video editing started with Ediflex. But it really came into its own when Avid created script integration as a way to cut dialogue-driven stories, like feature films, in Media Composer. The key ingredient is a written script or a transcription of the spoken audio. This is easy with a feature film that’s been acted according to defined script lines, but much harder with something freeform, like a documentary or news interview. In those projects, you first need a person or service to transcribe the audio into a written document – or simply cut without one and hunt around when you’re looking for that one specific sentence.

Modern technology has come to the rescue in the form of artificial intelligence, which has enabled a number of transcription services to offer very fast turnaround times from audio upload to a transcribed, speech-to-text document. Several video developers have tapped into these resources to create new transcription services/applications, which can be tied into several of the popular NLE applications.

Transcription for the three “A” companies

One of these new products is SpeedScriber, a transcription application for macOS and its companion service developed by Digital Heaven, which was founded by veteran UK editor and plug-in developer Martin Baker. To start using SpeedScriber, install the free SpeedScriber application, which is available from the Apple Mac App Store. The next steps depend on whether you just want to create transcribed documents, captioning files, or script integration for Avid Media Composer, Adobe Premiere Pro CC, or Apple Final Cut Pro X.

If you just want a document, or plan to use Media Composer or FCPX, then no other tools are required. For Premiere Pro CC workflows, you’ll want to download a panel installer for macOS or Windows from the SpeedScriber website. This integrates as a standard Premiere Pro panel and permits you to import transcription files directly into Premiere Pro. The SpeedScriber application enables roundtripping to/from Final Cut using FCPXML.

First, let’s talk about the transcription itself. It should generally be clip-based and not from edited timelines, unless you just want to document a completed project or create captions. When you launch SpeedScriber for the first time, you’ll need to create an account. This will include 15 minutes of free transcription time. The file length determines the time used. Billing for the service is based on time and is tiered, ranging from $.50/minute (30/60/120 minutes) down to $.37/minute (6,000 minutes). Minutes are pre-purchased and don’t expire.

Once your account is ready, drag and drop the file into the application or point the application to the file to import. Disable any unwanted audio channels, so that the transcription is based on the best audio channel within the file. Even if all channels are equal, disable all but one of them. Set the number of speakers and the language format, such as British, Australian, or American English. According to Baker, support for five European languages will be added in version 1.1. The service will automatically determine when speakers change, such as between an interviewer and the subject. It’s hard for the system to determine this with great accuracy, so don’t expect these speaker changes to be perfect.

The transcription experience

Accuracy of the transcription can be extremely good, but it depends on the audio quality that you’ve supplied. A clean interview track – well mic’ed and in a quiet room – can be dead-on with only a few corrections needed. Slower speakers who enunciate well result in greater accuracy. On the other hand, having several speakers in a noisy environment, or a very fast speaker with a heavy accent, will require a lot of correction – enough so that manual transcription might be better in those cases.

Once SpeedScriber has completed its automatic transcription, you can play the file to proof it and make any corrections to the text that are required. It’s easy to type corrections to the transcription within the SpeedScriber text editing window. When done, you can export the text in a number of different formats. I ran a test clip of a clear-spoken woman with well-recorded audio. She had a slight southern drawl, but the result from SpeedScriber was excellent. It also did a good job of ignoring speech idiosyncrasies, such as frequent “ums”. This eight-minute test clip only required about a dozen text corrections throughout.

If the objective is script integration into an NLE, then the process varies depending on brand. Typically such integration is clip-based, although multi-cam clips are supported. However, it’s tougher when you try to connect the transcription to a timeline. For example, I like to do cutdowns of interviews first, before transcribing, and that’s not really how SpeedScriber works best. In version 1.1, FCPX compound clips will be supported, so segments can be cut before transcription.

A clear set of tutorial videos is available in the support section of the SpeedScriber website.

Integration with NLEs

Media Composer is easy, because it already has a Script Integration feature. Import the text file that was exported from SpeedScriber as a new script into Media Composer and link the video clip to it. If you purchased Avid’s ScriptSync option, the clip can be lined up to sentences within the script automatically, thanks to ScriptSync’s speech analysis function. If you didn’t purchase this add-on, simply add sync points manually.

With Premiere Pro, select the clip, open the SpeedScriber panel, and import the corresponding transcription from it. The text appears in the Speech Analysis section of that clip’s metadata display. It will actually be embedded into the media file, so that the clip can be moved between projects complete with its transcription. You can view and use this text display to mark in/out points by words for accurate script-based selections. When you import the script and link it to a multi-cam clip, synced clip, or sequence, text will show up as markers and can be viewed in the markers panel. Premiere Pro is the only integration that can easily update existing speech metadata or markers. So you can start editing with the raw transcript and then update it later when corrections have been made. However, when I tested transcriptions on an edited sequence instead of a clip, it locked up Premiere Pro, requiring a Force Quit. Fortunately, when I re-opened the recovered project, the markers were there as expected.

The most straightforward approach seems to be its use with Final Cut Pro X. According to Baker, “This is the first Digital Heaven product with broad appeal by supporting Avid and Premiere Pro. But FCPX has ended up having the deepest integration due to the ability to drag-and-drop the Library, which was introduced in 10.3. So with roundtripping, SpeedScriber rebuilds the clip’s timeline without any need to export. Another advantage of the roundtripping is that SpeedScriber can read the audio channel status from the dropped XML, which is important for getting the best accuracy.”

There’s a roundtrip procedure with FCPX, but even without it, simply export an FCPXML from SpeedScriber. Import that into your Final Cut Pro X Library. The clip will then show a number of keyword entries corresponding to line breaks. For each keyword entry, the browser notes field will display the associated text, making it easy to find any dialogue. Plus, these entries are already marked as selections. When clips are edited into the sequence (an FCPX Project), the timeline index enables these notes to be displayed under the Tags section.

SpeedScriber shows tremendous potential to accelerate the efficiency of many spoken-word projects, like documentaries. Half the battle is trying to figure out the story that you want to tell, so having the text right in front of you makes this job easier. Applying modern technology to this challenge is refreshing and the constantly improving accuracy of these systems makes it an easy consideration. SpeedScriber is one of those tools that not only gets you home earlier, but also gives you the assurance that you can easily find the clip you are looking for in the proverbial haystack of clips.

©2017 Oliver Peters

A Light Footprint

When I started video editing, the norm was an edit suite with three large quadraplex (2”) videotape recorders, video switcher, audio mixer, B&W graphics camera(s) for titles, and a computer-assisted, timecode-based edit controller. This was generally considered an “online edit suite”, but in many markets, this was both “offline” (creative cutting) and “online” (finishing). Not too long thereafter, digital effects (ADO, NEC, Quantel) and character generators (Chyron, Aston, 3M) joined the repertoire. 2” quad eventually gave way to 1” VTRs and those, in turn, were replaced by digital – D1, D2, and finally Digital Betacam. A few facilities with money and clientele migrated to HD versions of these million dollar rooms.

Towards the midpoint in the lifespan of this way of working, nonlinear editing took hold. After a few different contenders had their day in the sun, the world largely settled in with Avid and/or Media 100 rooms. While a lower cost commitment than the large online bays of the day, these nonlinear editing (NLE) bays still required custom-configured Macs, a fair amount of external storage, along with proprietary hardware and monitoring to see a high-quality video image. Though crude at first, NLEs eventually proved capable of handling all the video needs, including HD-quality projects and even higher resolutions today.

The trend towards smaller

As technology advanced, computers became faster and more powerful, storage capacities increased, and software that required custom hardware evolved to work in a software-only mode. Today, it’s possible to operate with a fraction of the cost, equipment, and hassle of just a few years ago, let alone a room from the mid-70s. As a result, when designing or installing a new room, it’s important to question the assumptions about what makes a good edit bay configuration.

For example, today I frequently work in rooms running newer iMacs, 2013 Mac Pros, and even MacBook Pro laptops. These are all perfectly capable of running Apple Final Cut Pro X, Adobe Premiere Pro, Avid Media Composer, and other applications, without the need for additional hardware. In my interview with Thomas Grove Carter, he mentioned often working off of his laptop with a connected external drive for media. And that’s at Trim, a high-end London commercial editing boutique.

In my own home edit room, I recently set aside my older Mac Pro tower in favor of working entirely with my 2015 MacBook Pro. No more need to keep two machines synced up and the MBP is zippier in all respects. With the exception of some heavy-duty rendering (infrequent), I don’t miss using the tower. I run the laptop with an external Dell display and have configured my editing application workspaces around a single screen. The laptop is closed and parked in a BookArc stand tucked behind the Dell. But I also bought a Rain stand for those times when I need the MBP open and functioning as a second display.

Reduce your editing footprint

I find more and more editors working in similar configurations. For example, one of my clients is a production company with seven networked (NAS storage) workstations. Most of these are iMacs with few other connected peripherals. The main room has a 2013 “trash can” Mac Pro and a bit more gear, since this is the “hero” room for clients. If you are looking to downsize your editing environment, here are some pointers.

While you can work strictly from a laptop, I prefer to build it up for a better experience. Essential for me is a Thunderbolt dock. Check out OWC or CalDigit for two of the best options. This lets you connect the computer to the dock, and then everything else connects to that dock. One Thunderbolt cable to the laptop, plus power, leaves you with a clean installation and an easy-to-move computer. From the dock, I’m running a Presonus Audiobox USB audio interface (to a Mackie mixer and speakers), a Time Machine drive, a G-Tech media drive, and the Dell display. If I were to buy something different today, I would use the Mackie Onyx Blackjack interface instead of the Presonus/Mackie mixer combo. The Blackjack is an all-in-one solution.

Expand your peripherals as needed

At the production company’s hero room, we have the extra need to drive some video monitors for color correction and client viewing. That room is similarly configured as above, except with a Mac Pro and connection to a QNAP shared storage solution. The latter connects over 10Gb/s Ethernet via a Sonnet Thunderbolt/Ethernet adapter.

When we initially installed the room, video to the displays was handled by a Blackmagic Design UltraStudio device. However, we had a lot of playback performance issues with the UltraStudio, especially when using FCPX. After some experimenting, we realized that both Premiere Pro and FCPX can send a fullscreen, [generally] color-accurate signal to the wall-mounted flat panel using only HDMI and no other video i/o hardware. We ended up connecting the HDMI from the dock to the display and that’s the standard working routine when we are cutting in either Premiere Pro or Final Cut.

The rub for us is DaVinci Resolve. You must use some type of Blackmagic Design hardware product in order to get fullscreen video to a display when in Resolve. Therefore, the UltraStudio’s HDMI port connects to the second HDMI input of the large client display and SDI feeds a separate TV Logic broadcast monitor. This allows for more accurate color rendition while grading. With Media Composer, there were no performance issues, but the audio and video signal wants to go through the same device. So, if we edit in Avid, then the signal chain goes through the UltraStudio, as well.

All of this means that in today’s world, you can work as lightly as you like. Laptop-only – no problem. iMac with some peripherals – no problem. A fancy, client-oriented room – still less hassle and cost than just a few short years ago. Load it up with extra control surfaces or stay light with a keyboard, mouse, or tablet. It all works today – pretty much as advertised. Gone are the days when you absolutely need to drop a small fortune to edit high-quality video. You just have to know what you are doing and understand the trade-offs as they arise.

©2017 Oliver Peters