AJA T-Tap

 


The Thunderbolt protocol has ushered in a new era of easy connectivity for hardware peripherals. It allows users to deploy a single connection type to tie in networking, external storage, monitoring, and broadcast audio and video input and output. Along with easy connections, it has also enabled peripheral devices to become smaller, lighter and more powerful, thanks in part to advances in hardware and software. AJA Video Systems is one of the popular video manufacturers that has taken advantage of these benefits.

In many modern editing environments, the actual editing system has become extremely streamlined. All it really takes is a Thunderbolt-enabled laptop, all-in-one (like an iMac) or desktop computer, fast external storage, and professional monitoring – and you are good to go. For many editors, live video output is strictly for monitoring, as deliverables are more often than not files and not tape. Professional monitoring is easy to achieve using SDI or HDMI connections. Any concern for analog is gone, unless you need to maintain analog audio monitoring. AJA makes a series of I/O products to address these various needs, ranging from full-featured units down to simple monitoring devices. Blackmagic Design and AJA currently produce the lion’s share of these types of products, including PCIe cards for legacy installations and Thunderbolt devices for newer systems.

I recently tested the AJA T-Tap, which is a palm-sized video output device that connects to the computer using the Thunderbolt 2 protocol. It is bus-powered – meaning that no external power supply or “wall-wart” is needed to run it. I tested this on both a 2013 Mac Pro and a 2015 MacBook Pro. In each case, my main need was SDI and/or HDMI out of the unit to external monitors. Installation couldn’t be easier. Simply download the current control panel software and drivers from AJA’s website, install, and then connect the T-Tap. Hook up your monitors and you are ready. There’s very little else to do, except set your control panel configuration for the correct video/frame rate standard. Everything else is automatic in both Adobe Premiere Pro CC and Apple Final Cut Pro X, although you’ll want to check your preference settings to make sure the device is detected and enabled.

One of the main reasons I wanted to test the T-Tap was as a direct comparison with the Blackmagic products on these same computers. For example, the current output device being used on the 2013 Mac Pro that I tested is a Blackmagic Design UltraStudio Express. This contains a bit more processing and is comparable to AJA’s Io XT. I also tested the BMD MiniMonitor, which is a direct competitor to the T-Tap. The UltraStudio provides both input and output and offers an analog break-out cable harness, whereas the two smaller units are output-only, using SDI and HDMI. All three are bus-powered. In general, all performed well with Premiere Pro, except that the BMD MiniMonitor couldn’t provide output via HDMI. For unexplained reasons, that screen was blank. No such problem with either the T-Tap or the UltraStudio Express.

The real differences are with Final Cut Pro X on the Mac Pro. That computer has six Thunderbolt ports, which are shared across three buses – i.e. two connectors per bus. On the test machine, one bus feeds the two external displays, the second bus connects to external storage (not shared for maximum throughput), and the remaining bus connects to both the output device and a CalDigit dock. If the BMD UltraStudio Express is plugged into any connection shared with another peripheral, JKL high-speed playback and scrubbing in FCPX is useless. Not only does the video output stutter and freeze, but so does the image in the application’s viewer. So you end up wasting an available Thunderbolt port on the machine, if you want to use that device with FCPX. Therefore, using the UltraStudio with FCPX on this machine isn’t really functional, except for screening with a client. This means I end up disabling the device most of the time I use FCPX. In that respect, both the AJA T-Tap and the BMD MiniMonitor performed well. However, my subjective evaluation is that the T-Tap gave better performance in my critical JKL scrubbing test.

One difference that might not be a factor for most is that the UltraStudio Express (which costs a bit more) has advanced processing. This yields a smooth image in pause when working with progressive and PsF media. When my sequence was stopped in either FCPX or Premiere, both the T-Tap and the UltraStudio yielded a full-resolution, whole-frame image on the HDMI output. (HDMI didn’t appear to function on the MiniMonitor.) On the TV Logic broadcast display that was being fed via SDI, the T-Tap and MiniMonitor only displayed a field in pause, so you get an image with “jaggies”. The UltraStudio Express generates a whole frame for a smooth image in pause. I didn’t test a unit like AJA’s Io XT, so I’m not sure if the more expensive AJA model offers similar processing. However, it should be noted that the Io XT is triple the cost of the UltraStudio Express.

The elephant in the room, of course, is Blackmagic Design DaVinci Resolve. That application is restricted to only work with Blackmagic’s own hardware devices. If you want to run Resolve – and you want professional monitoring out of it – then you can’t use any AJA product with it. However, these units are so inexpensive to begin with – compared with what they used to cost – that it’s realistic to own both. In fact, some FCPX editors use a T-Tap while editing in FCPX and then switch over to a MiniMonitor or UltraStudio for Resolve work. The reason is the better performance of the AJA products with Final Cut.

Ultimately these are all wonderful devices. I like the robustness of AJA’s manufacturing and software tools. I’ve used their products over the years and have never been disappointed with their performance, or with service when it was needed. If you don’t need video output from Resolve, then the AJA T-Tap is a great choice for an inexpensive, simple, Thunderbolt video output solution. Laptop users who need to hook up to monitors while working at home or away will find it a great choice. Toss it into your laptop bag and you are ready to rock.

©2017 Oliver Peters

La La Land


La La Land is a common nickname for Los Angeles and Hollywood, but it’s also writer/director Damien Chazelle’s newest film. Chazelle originally shopped La La Land around without much success and so moved on to another film project, Whiplash. Five Oscar nominations with three wins went a long way to break the ice and secure funding for La La Land. The new film tells the story of two struggling artists – Mia (Emma Stone), an aspiring actress, and Sebastian (Ryan Gosling), a jazz musician. La La Land was conceived as a musical set in modern day Los Angeles. It harkens back to the MGM musicals of the 50s and 60s, as well as French musicals, including Jacques Demy’s The Umbrellas of Cherbourg.

One of the Whiplash Oscars went to Tom Cross for Best Achievement in Film Editing. After working as one of David O. Russell’s four editors on Joy, Cross returned to cut La La Land with Chazelle. Tom Cross and I discussed how this film came together. “As we were completing Whiplash, Damien was talking about his next film,” he says. “He sent me a script and a list of reference movies and I was all in. La La Land is Damien’s love letter to old Hollywood. He knew that doing a musical was risky, because it would require large scale collaboration of all the film crafts. He loves the editing process and felt that the cutting would be the technical bridge that would hold it all together. He wanted to tell his story with the language of dreams, which to Damien is the film language of old Hollywood cinema. That meant that he wanted to use iris transitions, layered montages and other old optical techniques. The challenge was to use these retro styles, but still have a story that feels contemporary and grounded in realism.”

Playing with tone and time

La La Land was shot in approximately forty days, but editing the film took nearly a year. Cross explains, “Damien is great at planning and is very clear in what he shoots and his intentions. In the cutting room, we spent a lot of time calibrating the movie – playing with tone and time. Damien wanted to start our story with both characters together on the freeway, then branch off and show Mia going through her day. We take her to a specific plot intersection and then flash back in time to Sebastian on the freeway. Then we move through his day, until we are back at the intersection where our two stories collide. Much like the seasons that our movie cycles through – Winter, Spring, Summer, Fall – we end up returning to this specific intersection later in the film, but with a different outcome. Damien wanted to set up certain timelines and patterns that the audience would follow, so that we could ricochet off of them later.”

As a musical, Tom Cross had to vary his editorial style for different scenes. He continues, “For Sebastian and Mia’s musical courtship, Damien wanted the scenes to be slower and romantic with a lot of camera moves. In Griffith Park, it’s a long unbroken take with rounded edges. On the other hand, the big concert with John Legend is cutty, with sharp edges. It’s fragmented and the opposite of romantic. Likewise, when they are full-on in love and running around LA, the cutting is at a fever pitch. It’s lively and sweeps you off your feet. Damien wanted to be careful to match the editing style to the emotion of each scene. He knew that one style would accentuate the other.”

La La Land was shot in the unusual, extra-wide aspect ratio of 2.55:1 to replicate CinemaScope from the 1950s. “This makes ordinary locations look extraordinary,” Cross says. “Damien would vary the composition from classic wides to very fragmented framing and big close-ups. When Sebastian and Mia are dancing, there’s head-to-toe framing like you would have in a Fred Astaire/Ginger Rogers film. During their dinner break up scene, the shots of their faces get tighter – almost claustrophobic – to be purposefully uncomfortable and unflinching. Damien wanted the cutting to be stripped down and austere – the opposite of what’s come before. He told me to play the scene in their medium shots until I punched into their close angles. And once we’re close, we have to stay there.”

Tricks and tools

The Avid Media Composer-equipped cutting rooms were hosted by Electric Picture Solutions in North Hollywood. Tom Cross used plenty of Media Composer features to cut La La Land. He explains, “For the standard dialogue scenes we used Avid’s Script Sync feature. This was very handy for Damien, because he likes to go over every line with a fine-tooth comb. The musical numbers were cut using multi-cam groups. For scenes with prerecorded music, multiple takes could be synced and grouped as if they were different camera angles. I had my assistant set up what I call ‘supergroups’. For instance, all the singers might be grouped into one clip. The instruments might be in another group. Then I could stack the different groups onto multiple video tracks, making it easy to cut between groups, as well as angles within the groups.”

In addition to modern cutting techniques, Cross also relies on lo-fi tools, like scene cards on a wall. Cross says, “Damien was there the whole time and he loves to see every part of the process. He has a great editor’s mind – very open to editorial cheats to solve problems, such as invisible split screen effects and speed adjustments. Damien wanted us to be very meticulous about lip sync during the musical scenes because he felt that anything less than perfect would take you out of the moment. His feeling was that the audience scrutinizes the sync of the singing in a musical more than spoken dialogue in a normal film. So we spent a lot of time cutting and manipulating the vocal track – in order to get it right. Sometimes, I would speed-ramp the picture to match the singing. Damien was also very particular about downbeats and how they hit the picture. I learned that while working with him on Whiplash. It has to be precise. Justin Hurwitz, our composer, provided a mockup score to cut with, and that was eventually replaced by the final music recorded with a 95-piece orchestra. Of course, when you have living, breathing musicians, notes line up differently from the mockup track. Therefore, we had many cuts that needed to be shifted in order to maintain the sync that Damien wanted. During our final days of the sound mix, we were rolling cuts one or two frames in either direction on the dub stage at Fox.”

Editors and directors each have different ways to approach the start of the cutting process. Cross explains, “I edited while they were shooting and had a cut by the time the production wrapped. It’s a great way for the editor to learn the footage and make sure the production is protected. However, Damien didn’t want to see the first cut, but preferred to have it on hand if we needed it. I think first cuts are overwhelming for most directors. Damien had the idea of starting at the end first. There’s a big end scene and he decided that we should do that heavy lifting first. He said, ‘at least we’ll have an ending.’ We worked on it until we got it to a good place and then went back and started from the beginning. It re-invigorated us.”

Tom Cross wrapped with these parting thoughts. “This was a dream project for Damien and it was my dream to be able to work with him on it. It’s romantic, magical and awe-inspiring. I was very lucky to go from a film where you get beaten down on the drums – to another, where you get swept off your feet!”

For more conversations with Tom Cross, check out Art of the Cut.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

2017 Technology Predictions


The next year will certainly be an interesting one. Not only because of the forces of innovation, but also those of politics. With the new President vowing to use the bully pulpit to entice, encourage or cajole US corporations to bring their offshore manufacturing back to the states, it seems pretty clear that companies in the media industries will be affected. The likely targets will be storage, camera and computer manufacturers. I presume that Apple will become the most visible and possibly vocal of these, but that remains to be seen.

At present, Apple is more of an engineering design and services company than a manufacturer. The exception is the Mac Pro. Given their volume and the expertise of suppliers like Foxconn, it’s hard to see how moving iPhone production to the US would be possible or at least cost-effective. However, low-volume products, like the 2013 Mac Pro model, are a better fit, which is why that product is assembled in Austin. But of course, there’s plenty of speculation that the “trash can” Mac isn’t long for this world. It’s sorely in need of a refresh and has been largely overshadowed by the new MacBook Pro models. Although I think from a business perspective Apple would just as soon drop it, the Mac Pro does have the advantage of servicing a market segment that Apple likes to be associated with – creative media professionals. If you add in the political climate, it’s a good counterpoint to say that Apple’s highest-end product is made here.

Factoring all that in, I predict that we’ll see at least one more iteration of the Mac Pro. I don’t expect a form factor change, but I would expect newer Xeon chips, when available, and a shift to the Thunderbolt 3 protocol, using the USB-C plugs. This way it will be compatible with the same peripherals as can be used by the new MacBook Pros. The same will be true of the next iMacs. I also expect to see at least one more version of the Mac Mini, as this provides a small package that many can use as a server machine. It will sport new Xeon or new Core i7 chips and Thunderbolt 3/USB-C ports. However, once these new machines hit the market, there are plenty of signs to predict that those products will be the last of their kind, leaving Apple to only make iMac and laptop form factors for their macOS products. That’s a couple of years out.

If tariffs and a change in trade agreements become public policy, then imported products will become more expensive than they have been. I see this having the greatest impact with cameras, as so many (nearly all) are produced by foreign companies, such as Sony, Canon and ARRI. This may well be a very positive development for a company like RED. If all of a sudden ALEXAs become a lot more expensive as compared with RED Epics, Weapons, etc., well then you just might see a shift in the sales numbers. Of course, a lot of this is just reading the tea leaves, but if politics were ever a driver, this would be the year that we’ll see it.

Another continuing trend will be mergers and acquisitions as weaker companies consolidate with stronger competitors. The ripest of these is Avid Technology. Their financial issues have spilled over into business news and it’s hard to see how they can dig themselves out of the current holes with such lackluster sales. The smart money predicts a breakup or sell-off. If this occurs, the predictions (with which I agree) would have Pro Tools going to Dolby and Media Composer – and maybe also storage – going to Blackmagic Design. The rest, including Interplay, the Media Central Platform and the Orad products, would go elsewhere or just be closed down.

The obvious question is why Blackmagic Design would want Media Composer. After all, they are already developing DaVinci Resolve into an NLE in its own right. By picking up Media Composer, they add a highly respected editing application to the portfolio and thus buy into an existing market share, just as they did in color correction. Once acquired, I’m pretty confident that Blackmagic’s software engineers, together with the staff retained from Avid, would quickly clean up and improve Media Composer from its current state. Only Blackmagic seems to have the will to suffer through the complaints that such a move may draw from loyalists. Avid editors are legendary in their reluctance to accept changes to the interface.

When it comes to nonlinear editing applications, I continue to see a rosy future for Adobe. Premiere Pro’s penetration is increasing in the world of entertainment, broadcast and corporate media, which has been Avid’s stronghold. While Avid is still strong in these areas, they seem to be selling to existing customers and not growing their base. Adobe, on the other hand, is pulling from Avid and Apple customers, plus new ones. While there was a lot of grousing about the Adobe subscription model, most users seem OK with it and are happy to be able to keep their software current with each Creative Cloud update. Likewise, Apple is doing well with Final Cut Pro X. Their market seems to be more individual users and “creative enthusiasts” than is the case for Adobe. In addition, FCP X also seems to be doing well internationally. Since Apple has another five years to go on its public commitment to FCP X development, I only see more growth for this application.

Apple has long held an outsized percentage of the creative market, as compared with its overall market share of all computers. However, it doesn’t take much sleuthing to see the enthusiasm expressed for the Microsoft Surface Studio. In my own travels, I see a lot of Surface tablets in regular use. So far, the ones I encounter are being used for general computing, but that will change. Since these devices run Windows, any application that can run under Windows will work. As the Surface line becomes more powerful, I fully expect to see creatives routinely running all of the Adobe apps, Media Composer, Resolve, Lightworks and others without any difficulty. Many users would love to cut the Apple cord, and I predict the Surface and Surface Studio are just the tools to enable that move. Add to that the innovative menu control knob that was introduced with Surface Studio and you can see that creative design thinking isn’t limited to Cupertino.

For storage products, I see two shifts. The first is the move to the Thunderbolt 3 protocol. If you’ve invested heavily in Thunderbolt 2 or USB 3 devices, technology has just leapfrogged you. While these products will continue to be useful and can be connected via legacy ports or docks and adapters, storage manufacturers will embrace Thunderbolt 3 for direct-attached products. The shared storage providers will continue down the 10-Gigabit and 40-Gigabit Ethernet route for a while, until Thunderbolt 3 networking really becomes viable. We aren’t there yet, but I can’t see why it won’t come soon. Right now, if you have two to ten users, a low cost shared storage environment is pretty easy to set up. The hitch is controlling the application permissions of the software being used. Avid had a lock on that, but there are now ways to enable Avid bin-locking for a few hundred bucks per seat. No need to buy expensive storage and pay annual support contracts any longer.

Along these lines is Adobe’s project sharing through Team Projects (currently in beta testing). Once they get the kinks ironed out, Team and Enterprise accounts will be able to work collaboratively and simultaneously on the same production. I see it as only a matter of time before Apple offers a similar capability with Final Cut Pro X. It certainly seems like all the hooks are there under the hood to make that possible. So maybe 2017 will be the year that project sharing comes to Final Cut users. Once both Adobe and Apple can offer reliable project collaboration in a fashion that rivals Avid, you’ll see an even greater shift to these editing tools and away from Media Composer within the film and broadcast editing communities.

As laptops grow in power, expect an even faster demise of the desktop workstation PC. More and more, people want to be mobile. Having a laptop connected to all the bells and whistles at your base station edit suite, yet being able to unplug and go where you need to be – that’s the future direction for a lot of post professionals. Wrapping this up, remember, these predictions are free and worth just what you paid for them!

Originally written for Digital Video magazine / Creative Planet Network

© 2016 Oliver Peters

BorisFX BCC 10


Boris Continuum Complete (BCC) by BorisFX is the epitome of a “Swiss Army knife” among plug-ins. Most editors will pick this package over others if they can only have one toolkit to cover a diverse range of picture enhancements. In the past year, BorisFX has upgraded this toolkit with new effects, expanded to add more NLE hosts, and integrated mocha’s Academy Award-winning planar tracking technology after the acquisition of Imagineer Systems. This set of plug-ins is now up to version BCC10. BorisFX has not only added new effects to BCC10, but also expanded its licensing options to include multi-host and subscription options.

Since many users now work with several NLEs, multi-host licensing makes a lot of sense. One purchase with a single serial number covers the installation for each of the various applications. There are two multi-host license versions: one for Avid/Adobe/Apple/OFX and a second that doesn’t include Avid. OFX licensing covers the installation for Blackmagic Design DaVinci Resolve, as well as Sony Vegas Pro for PC users.

What’s new in BCC10

Boris Continuum Complete version 10 includes over 230 effects within 16 different categories, like 3D Objects, Art Looks, Particles, Perspective and more. Each effect comes with numerous presets for a total of over 2,500 presets in all. There are plenty of new tools in BCC10, but the biggest news is that each effect filter integrates mocha planar tracking. BorisFX has always included Pixel Chooser as a way of masking objects. Now each filter also lets you launch the mocha interface right from inside the plug-in’s effect control panel. For example, if you are applying skin smoothing to only your talent’s forehead using the new BCC Beauty Studio, simply launch mocha, create a mask for the forehead and track the talent’s movement within the shot. The mask and track are saved within the plug-in, so you can instantly see the results.

A second big change is the addition and integration of the FX Browser. Each plug-in effect lets you launch the FX Browser interface to display how each of the various presets for that effect would look when applied to the selected clip. You can preview the whole clip, not just a thumbnail. FX Browser is also a standalone effect that can be applied to the clip. When you use it that way, then all presets for all filters can be previewed. While FX Browser has been implemented in past versions in some of the hosts, this is the first time that it’s become an integrated part of the BCC package across all NLEs.

BCC10 includes two new “studio” tools, as well as a number of new individual effects. BCC Beauty Studio is a set of tools in a single filter targeted at image retouching, especially the skin texture of talent. Photographers retouch “glamor” shots to reduce or remove blemishes, so Photoshop-style retouching is almost expected these days. This is the digital video equivalent. As with most skin smoothing filters, BCC Beauty Studio uses skin keying algorithms to isolate skin colors. It then blurs skin texture, but also lets the editor adjust contrast, color correction, and even add a subtle glow to image highlights. Of course, as I mentioned above, mocha masking and tracking is integrated for the ultimate control in where and how the effect is applied.

The second new, complex filter is BCC Title Studio. This is an integrated 3D titling tool that can be used based on templates within the effects browser or by launching the separate Title Studio interface. Editors familiar with BorisFX products will recognize this titling interface as essentially Boris RED right inside of their NLE. Not only can you create titles, but also more advanced motion graphics. You can even import objects, EPS and image files for 3D effects, including the addition of materials and shading. As with other BorisFX titling tools, you can animate text on and off the screen.

In addition to these two large plug-ins, BCC10 also gained nine new filters and transitions. These include BCC Remover (fills in missing pixels or removes objects using cloning) and BCC Drop-out Fixer (restores damaged footage). For the folks who have to deal with a lot of 4×3 content and vertical cell phone footage, there’s BCC Reframer. Unlike the usual approach where the same image is stretched and blurred behind the vertical shot, this filter includes options to stylize the foreground and background.

The trend these days is to embrace image “defects” as a creative effect, so two of the new filters are BCC Light Leaks and BCC Video Glitch. Each adds organic, distressed effects, like in-camera light contamination and corrupted digital video artifacts. To go along with this, there are also four new transitions, including a BCC Light Leaks Dissolve, Cross Glitch, Cross Zoom and Cross Melt. Of these, the light leaks, glitch and zoom transitions are about what you’d expect from the name; however, the melt transition is more unusual. In addition to the underlying dissolve between two images, there are a variety of effects options that can be applied as part of this transition. Many of these are glass, plastic, prism or streak effects, which add an interesting twist to this style of transition.

In use

The new BCC10 package works within the established hosts much like it always has, so no surprises there. The Boris Continuum Complete package used to come bundled with Avid Media Composer, but unfortunately that’s no longer the case. Avid editors who want the full BCC set have to purchase it. As with most plug-ins, After Effects is generally the best host when adjustment and manipulation of effects are required.

A new NLE to consider is DaVinci Resolve. Many are testing the waters to see if Resolve could become their NLE of choice. Blackmagic Design introduced Resolve 12.5 with even more focus on its editing toolset, including new, built-in effect filters and transitions. In my testing, BCC10 works reasonably well with Resolve 12.5 once you get used to where the effects are. Resolve uses a modal design with editing and color correction split into separate modes or pages. BCC10 transition effects only show up in the OFX library of the edit page. For filter effects, which are applied to the whole clip, you have to go to the color page. During the color correction process you may add any filter effect, but it has to be applied to a node. If you apply more than one filter, you have to add a new node for each filter. With the initial release of BCC10, mocha did not work within Resolve. If you tried to launch it, a message came up that this functionality would be added at a later time. In May, BorisFX released BCC10.2, which included mocha for both Resolve 12.5 and Vegas Pro. To use the BCC10 effects with Resolve 12.5 you need the paid Studio version and not the free version of Resolve.

BorisFX BCC10 is definitely a solid update, with new features, mocha integration and better GPU-based performance. It runs best in After Effects CC, Premiere Pro CC and Avid Media Composer. The built-in effects tools are pretty good in After Effects, Final Cut Pro X and Resolve 12.5 – meaning you might get by without needing what BCC10 has to offer. On the other hand, they are unfortunately very mediocre in Premiere Pro or Media Composer. If one of those is your editing axe, then BCC10 becomes an essential purchase to improve the capabilities of your editing application. Regardless of which tool you use, BCC10 will give you more options to stretch your creativity.

On a related note, at IBC 2016 in Amsterdam, BorisFX announced the acquisition of GenArts. This means that the Sapphire effects are now housed under the BorisFX umbrella, which could make for some interesting bundling options in the future. As with their integration of mocha tracking into the BCC effects, future versions of BCC and/or Sapphire might also see a sharing of compatible technologies across these two effects families. Stay tuned.

Originally written for Digital Video magazine / Creative Planet Network

©2016 Oliver Peters

The wait is over – FCP X 10.3

Amidst the hoopla on Oct. 27th, when Apple introduced the new MacBook Pro with Touch Bar, the ProApps team also released updates to Final Cut Pro X, Motion and Compressor. This was great news for fans, since Final Cut got a prime showcase slot in the event’s main stage presentation. Despite the point numbering, the bump from 10.2 to 10.3 is a full version change, just like in macOS, where 10.11 (El Capitan) to 10.12 (Sierra) is also a new version. This makes FCP X 10.3 the fourth iteration in the FCP X line and the eleventh under the Final Cut Pro brand. I’m a bit surprised that Apple didn’t drop the “X” from the name, though, seeing as it’s done that with macOS itself. And speaking of operating systems, this release requires 10.11.4 (El Capitan) or higher (Sierra).

If you already purchased the application in the past, then this update will be a free upgrade for you. There are numerous enhancements, but three features stand out among the changes: the new interface, the expanded use of roles for mixing, and support for a wider color gamut.

A new look for the user interface

The new user interface is darker and flatter, although for my taste it’s a bit too dark without any brightness sliders to customize the appearance. The dimensional style is gone, putting Final Cut Pro X in line with the aesthetics of iMovie and other Apple applications. Final Cut Pro X had fallen out of step with design trends in the years since it was first released. Reskinning the application with this new appearance brings it in line with the rest of the design industry.

The engineers have added workspaces and rearranged where certain controls are, though generally, panels are in the same places as before. Workspaces can be customized, but not nearly to the level of Adobe’s Premiere Pro CC. The most welcome of these changes is that the inspector pane can be toggled to full height when needed. In reality, the inspector height isn’t changed. It’s the width of the timeline that changes, toggling between covering and revealing the full inspector panel.

There are other minor changes throughout 10.3, which make it a much better application. For example, if you like to work with a source/record, 2-up viewer display, then 10.3 now allows you to play a source clip from inside the event viewer.

Magnetic Timeline 2 and the expansion of roles

Apple did a lot of work to rejigger the way the timeline works and to expand the functionality of roles. It’s even being marketed as Magnetic Timeline 2. Up until now, the use of roles in Final Cut has been optional. With 10.3, it’s become the primary way to mix and organize connected clips within the timeline. Apple has resisted adding a true mixing panel, instead substituting the concept of audio lanes.

Let’s say that you assign the roles of dialogue, music or effects to your timeline audio clips. The timeline index panel lets you organize these clips into groups according to their assigned roles, which Apple calls audio lanes. If you click “show audio lanes”, the various connected clips rearrange vertical position in the timeline window to be grouped into their corresponding lanes, based on roles. Now you have three lanes of grouped clips: dialogue, effects, music. You can change timeline focus to individual roles – such as only dialogue – which will minimize the size of all the other roles (clips) in the window. These groups or lanes can also be soloed, so you just hear dialogue without the rest, for example.

There is no submix bus to globally control or filter groups of clips, like you have in Premiere Pro or most digital audio applications. The solution in FCP X 10.3 is to select all clips of the same role and create a compound clip. (Other NLEs refer to this as “nesting”.) By doing so, all of the dialogue, effects and music clips appear on the timeline as only three compound clips – one for each role. You can then apply audio filters or adjust the overall level of that role by applying them to the compound clip.

Unfortunately, if you have to go back and make adjustments to an individual clip, you’ll have to open up the compound clip in its own timeline. When you do that, you lose the context of the other clips. For example, tweaking a sound effects clip inside its compound clip means that you would only hear the surrounding effects clips, without dialogue, music or the video. In addition, you won’t hear the result of filters or volume changes made at the top level of that compound clip. Nevertheless, it’s not as complex as it sounds and this is a viable solution, given the design approach Apple engineers have taken.

It does surprise me that they ended up with this solution, because it’s a very modal way of operating. This would seem to be anathema to the intent of much of the rest of FCP X’s design. One has to wonder whether or not they’ve become boxed in by their own architecture. Naturally, others will counter that this process is simplified due to the lack of track patching and submix matrices.

Wide color

The industry at large is embracing color standards that enable displays to reproduce more of the color spectrum that the human eye can see. An under-the-hood change with FCP X is the embrace of wide gamut color. I think that calling it “wide color” dumbs down the actual standards, but I guess Apple wants to keep things in plain language. In any case, the interface is pretty clear on the actual specs.

Libraries can be set up for “standard color” (Rec. 601 for SD and Rec. 709 for HD) or “wide color” (Rec. 2020). The Projects (sequences) that you create within a Library can be either, as long as the Library was initially set up for wide gamut. You can also change the setting for a Project after the fact. Newer cameras that record in raw or log color space, like RED or ARRI models, are perfectly compatible with wide color (Rec. 2020) delivery, thanks to post-production color grading techniques. That is where this change comes into play.

For the most part you won’t see much difference in normal work, unless you really crank up the saturation. If you do this in the wide color gamut mode, you can get pretty extreme and the scopes will display an acceptable signal. However, if you then switch the Project setting to standard color, the high chroma areas will change to a somewhat duller appearance in the viewer and the scopes will show signal clipping. Most current television displays don’t reproduce wide gamut color yet, so it’s not something most users need to worry about today. This is Apple’s way of future-proofing Final Cut and passing the cleanest possible signal through the system.

A few more things

Numerous other useful tools were added in this version. For example, Flow – a morphing dissolve – is useful for bridging jump cuts. Unlike Avid’s or Adobe’s variations, this transition works in real-time without analysis or rendering, because it morphs between two still frames. Each company’s approach has a slightly different appearance, but Flow definitely looks like an effect that will get a lot of use – especially with interview-driven productions. Other timeline enhancements include the ability to easily add and toggle audio fades. There’s simplified top and tail trimming. Now you can remove attributes and you can roll (trim) between adjacent, connected clips. Finally – a biggie for shared storage users – FCP X can now work with NAS systems that use the SMB protocol.

Having worked with it for over a week as I post this, I’ve found the application quite stable, even on a production with over 2,000 4K clips. Still, I wouldn’t recommend upgrading if you are in the middle of a production. The upgraded Libraries I tested did exhibit some flakiness that wasn’t there in freshly created Libraries. There’s also a technique to keep both 10.2 and 10.3 active on the same computer. Definitely trash your preferences before diving in.

So far, the plug-ins and Motion templates still work, but you’ll definitely need to check whether these vendors have issued updates designed for this release. This also goes for the third-party apps, like those from Intelligent Assistance, because 10.3 adds a new version of FCPXML. Both Intelligent Assistance and Blackmagic Design issued updates (for Resolve and Desktop Video) by the next day.

There are a few user interface bugs, but no show-stoppers. For instance, the application doesn’t appear to hold its last state upon close, especially when more than one Library is open. When you open it again the next time, the wrong Library may be selected or the wrong Project loaded in the timeline. It occasionally loses focus on the selected pane. This is an old bug that was there in previous versions: you are working in the timeline and all of a sudden nothing happens, because the application “forgot” which pane is supposed to have focus. Pressing Command-1 seems to fix this. Lastly, the audio meters window doesn’t work properly. If you resize it to be slimmer, the next time you launch FCP X the meters panel is large again, even if you updated the workspace with the smaller width. And sometimes the meters don’t display audio until you close and reopen the window.

In this round of testing, I’ve had to move around Libraries with external media to different storage volumes. This requires media relinking. While it was ultimately successful, the time needed to relink was considerably longer than doing this same task in other NLEs.

My test units are all connected to Blackmagic Design i/o hardware, which seems to retard performance a bit. With a/v output turned off within the FCP X interface, clips play right away without stuttering when I hit the spacebar. With the a/v output on, I randomly get stuttering on clips when they start to play. It’s only a minor nuisance, so I just turn it off until I need to see the image on an external monitor. I’ve been told that AJA hardware performs better with FCP X, but I haven’t had a chance to test this myself. In any case, I don’t see this issue when running the same media through Premiere Pro on the exact same computer, storage and i/o hardware.

Final Cut Pro X 10.3 will definitely please most of its fans. There’s a lot of substance and improvement to be appreciated. It also feels like it’s performing better, but I haven’t had enough time with a real project yet to fully test that. Of course, the users who probe a bit deeper will point to plenty of items that are still missing (and available in products like Premiere Pro), such as better media relinking, more versatile replace edit functions and batch exporting.

For editors who’ve only given it a cursory look in the past or were swayed by the negative social media and press over the past five years, this would be the version to re-evaluate. Every new or improved item is targeted at the professional editor. Maybe it’s changed enough to dive in. On the other hand, if you’re an editor who’s given FCP X a fair and educated assessment and just not found it to your liking or suitable for your needs, then I doubt 10.3 will tempt you. Regardless, this gives fans some reassurance about Apple’s commitment to professional users of their software – at least for another five years.

If you have the time, there are plenty of great tips here at the virtual Final Cut User Group.

The new Final Cut Pro X 10.3 user manual can be found here.

Click here for additional links highlighting features in this update.

Originally written for Digital Video magazine / Creative Planet Network

©2016 Oliver Peters

Tools for Dealing with Media

Although most editing application manufacturers like to tout how you can just go from camera to edit with native media, most editors know that’s a pretty frustrating way to work. The norm these days is for the production team to use a whole potpourri of professional and prosumer cameras, so it’s really up to the editor to straighten this out before the edit begins. Granted, a DIT could do all of this, but in my experience, the person being called a DIT is generally just someone who copies and backs up the camera cards onto hard drives to bring back from the shoot. As an editor you are most likely to receive a drive with organized copies of the camera media cards, but still with the media in its native form.

Native media is fine when you are talking about ARRI ALEXA, Canon C300 or even RED files. It is not fine when coming from a Canon 5D, DJI, iPhone, Sony A7S, etc. The reason is that these systems record long-GOP media without valid timecode. Most do not generate unique file names. In some cases, there is no proper timebase within the files, so time itself is “rubbery” – meaning, a frame of time varies slightly in true duration from one frame to the next.

If you remove the A7S .mp4 files from within the clutter of media card folders and take these files straight into an NLE, you will get varying results. There is a signal interpreted as timecode by some tools, but not by others. Final Cut Pro X starts all of these clips at 00:00:00:00, while Premiere Pro and Resolve read something that is interpreted as timecode, which ascends sequentially on successive clips. Finally, these cameras have no good way to deal with off-speed recordings – for example, a higher frame rate recorded with the intent to play it back in slow motion. You can do that with a high-end camera, but not these prosumer products. So I’ve come to rely on several software products heavily in these types of productions.

Step 1: Hedge for Mac

The first step in any editing is to get the media from the field drives onto the edit system drives. Hopefully your company’s SOP is to archive this media from the field in addition to any that comes out of the edit. However, you don’t want to edit directly from these drives. When you do a Finder copy from one drive to the next there is no checksum verification. In other words, the software doesn’t actually check to make sure the copy is exact without errors. This is the biggest plus for an application like Hedge – copy AND verification.

Hedge comes in a free and a paid version. The free version is useful, but copy and verify is slower than the paid version. The premium (paid) version uses a software component that they call Fast Lane to speed up the verification process so that it takes roughly the same amount of time as a Finder copy, which has no verification. To give you an idea, I copied a 62GB folder from a USB2.0 thumb drive to an external media drive connected to my Mac via eSATA (through an internal card). The process took under 30 minutes for a copy through Hedge (paid version) – about the same as it took for a Finder copy. Using the free version takes about twice as long, so there’s a real advantage to buying the premium version of the application. In addition, the premium version works with NAS and RAID systems.

The interface is super simple. Sources and targets are drag-and-drop. You can specify folders within the drives, so it’s not just a root-level, drive-to-drive copy. Multiple targets and even multiple sources can be specified within the same batch. This is great for creating a master as well as several back-up copies. Finally, Hedge generates a transfer log for written evidence of the copies and verification performed.
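The copy-and-verify idea is easy to model in code. The sketch below is not Hedge’s actual implementation (Hedge uses its own verification engine); it simply copies a file and confirms the destination’s hash matches the source, using Python’s standard library with SHA-256 as a stand-in checksum.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large media files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> str:
    """Copy src to dst, then re-read both files and compare hashes.
    Returns the digest, which could be written to a transfer log."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy2 preserves timestamps
    src_hash = file_hash(src)
    if file_hash(dst) != src_hash:
        raise IOError(f"verification failed for {src}")
    return src_hash
```

A plain Finder copy skips the second read-back, which is exactly the step that catches silent corruption.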

Step 2: EditReady

Now that you have your media copies, it’s time to process the prosumer camera media into something more edit-friendly. Since the camera-original files are being archived, I don’t generally save both the original and converted files on my edit system. For all intents and purposes, the new, processed files become my camera media. I’ve used tools like MPEG Streamclip in the past. That still works well, but EditReady from Divergent Media is better. It reads many media formats that other players don’t and it does a great job writing ProRes media. It will do other formats, too, but ProRes is usually the best format for projects that I work with.

One nice benefit of EditReady is that it offers additional processing functions. For example, if you want to bake in a LUT to the transcoded files, there’s a function for that. If you shot at 29.97, but want the files to play at 23.976 inside your NLE, EditReady enables you to retime the files accordingly. Since Divergent Media also makes ScopeBox, you can get a bundle with both EditReady and ScopeBox. Through a software conduit called ScopeLink, clips from the EditReady player show up in the ScopeBox viewer and its scopes, so you can make technical evaluations right within the EditReady environment.

EditReady uses a drag-and-drop interface that allows you to set up a batch for processing. If you have more than one target location or process chain, simply open up additional windows for each batch that you’d like to set up. Once these are fired off, all processes will run simultaneously. The best part is that these conversions are fast, resulting in reliable transcoded media in an edit-friendly format.
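If you prefer a scripted route, the same ProRes transcode step can be done with the open-source ffmpeg tool. EditReady’s own pipeline is proprietary; this sketch only builds the ffmpeg command lines (using its prores_ks encoder) for a folder of clips, rather than claiming to reproduce what EditReady does internally.

```python
from pathlib import Path

# ProRes profile indexes understood by ffmpeg's prores_ks encoder:
# 0 = Proxy, 1 = LT, 2 = 422 Standard, 3 = 422 HQ
PRORES_422_HQ = 3

def prores_command(src: Path, dst_dir: Path, profile: int = PRORES_422_HQ) -> list:
    """Build (but don't run) an ffmpeg command that transcodes one
    camera clip to ProRes in a QuickTime .mov container."""
    dst = dst_dir / (src.stem + ".mov")
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", str(profile),
        "-c:a", "pcm_s16le",          # uncompressed audio, edit-friendly
        str(dst),
    ]

# Queue a batch, one command per source clip, e.g.:
# batch = [prores_command(p, Path("transcodes"))
#          for p in sorted(Path("cards").glob("*.mp4"))]
# then run each with subprocess.run(cmd, check=True)
```

This assumes ffmpeg is installed on the system; the folder names are hypothetical placeholders.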

Step 3: Better Rename

The last step for me is usually to rename the files. I won’t do this with formats like ALEXA ProRes or RED, but it’s essential for 5D, DJI and other similar cameras. That’s because these cameras normally don’t generate unique file names. After all, you don’t want a bunch of clips that are named C0001 with a starting timecode of 00:00:00:00 – do you?

While there are a number of batch renaming applications and even Automator scripts that you can create, my preferred application is Better Rename, which is available in the Mac App Store. It has a host of functions to change names, add numbered sequences and append a text prefix or suffix to a name. The latter option is usually the best choice. Typically I’ll drag my camera files from each group into the interface and append a prefix that adds a camera card identifier and a date to the clip name. So C0001 becomes A01_102916_C0001. A clip from the second card would change from C0001 to A02_102916_C0001. It’s doubtful that the A camera would shoot more than 99 cards in a day, but if so, you can adjust your naming scheme accordingly.
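The prefix scheme described above is simple enough to script as well. This is a minimal sketch of the same camera/card/date naming convention, not Better Rename itself; the function names are my own.

```python
from pathlib import Path

def prefixed_name(filename: str, camera: str, card: int, date: str) -> str:
    """Build a unique clip name, e.g. C0001.MP4 from camera 'A',
    card 1, shot 10/29/16 becomes A01_102916_C0001.MP4."""
    return f"{camera}{card:02d}_{date}_{filename}"

def rename_card(card_dir: Path, camera: str, card: int, date: str) -> None:
    """Append the camera/card/date prefix to every clip on one card."""
    for clip in sorted(card_dir.iterdir()):
        if clip.is_file():
            clip.rename(clip.with_name(prefixed_name(clip.name, camera, card, date)))
```

Run once per card folder with the card number bumped each time, so clips from different cards can never collide.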

There you go. Three simple steps to bulletproof how you work with media.

©2016 Oliver Peters

Audio Splits and Stems in Premiere Pro

When TV shows and feature films are being mixed, the final deliverables usually include audio stems as separate audio files or married to a multi-channel video master file or tape. Stems are the isolated submix channels for dialogue, sound effects and music. These elements are typically called DME (dialogue, music, effects) stems or splits and a multi-channel master file that includes these is usually called a split-track submaster. These isolated tracks are normally at mix level, meaning that you can combine them and the sum should equal the same level and mix as the final composite mixed track.

The benefit of having such stems is that you can easily replace elements, like re-recording dialogue in a different language, without having to dive back into the original audio project. The simplest form is to have 3 stereo stem tracks (6 mono tracks) for left and right dialogue, sound effects and music. Obviously, if you have a 5.1 surround mix, you’ll end up with a lot more tracks. There are also other variations for sports or comedy shows. For example, sports shows often isolate the voice-over announcer material from an on-camera dialogue. Comedy shows may isolate the laugh track as a stem. In these cases, rather than 3 stereo DME stems, you might have 4 or more. In other cases, the music and effects stems are combined to end up with a single stereo M&E track (music and effects minus dialogue).
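“At mix level” has a concrete, testable meaning: sample for sample, the stems should add back up to the composite mix. Here’s a toy numeric illustration of that property, with made-up sample values:

```python
def sum_stems(*stems):
    """Sum per-sample values across stems. Stems delivered at mix level
    should add back up to the final composite mix."""
    return [round(sum(samples), 6) for samples in zip(*stems)]

# three short, hypothetical stereo-collapsed sample runs
dialogue = [0.20, 0.30, 0.10]
effects  = [0.05, 0.00, 0.15]
music    = [0.10, 0.10, 0.10]

full_mix = sum_stems(dialogue, effects, music)
```

Replace one stem (say, a foreign-language dialogue track at the same levels) and the summed result still matches the intended mix balance – which is exactly why stems make revisions cheap.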

Although this is common practice for entertainment programming, it should also be common practice if you work in short films, corporate videos or commercials. Creating such split-track submasters at the time you finish your project can often save your bacon at some point down the line. I ran into this during the past week. A large corporate client needed to replace the music tracks on 11 training videos. These videos were originally edited in 2010 using Final Cut Pro 7 and mixed in Pro Tools. Although it may have been possible to resurrect the old project files, doing so would have been problematic. However, in 2010, I had exported split-track submasters with the final picture and isolated stereo tracks for dialogue, sound effects and music. These have become the new source for our edit – now 6 years later. Since I am editing these in Premiere Pro CC, it is important to also create new split-track submasters, with the revised music tracks, should we ever need to do this again in the future.

Setting up a new Premiere Pro sequence 

I’m usually editing in either Final Cut Pro X or Premiere Pro CC these days. It’s easy to generate a multi-channel master file with isolated DME stems in FCP X, by using the Roles function. However, to do this, you need to make sure you properly assign the correct Roles from the get-go. Assuming that you’ve done this for dialogue, sound effects and music Roles on the source clips, then the stems become self-sorting upon export – based on how you route a Role to its corresponding export channel. When it comes to audio editing and mixing, I find Premiere Pro CC’s approach more to my liking. This process is relatively easy in Premiere, too; however, you have to set up a proper sequence designed for this type of audio work. That’s better than trying to sort it out at the end of the line.

The first thing you’ll need to do is create a custom preset. By default, sequence presets are configured with a certain number of tracks routed to a stereo master output. This creates a 2-channel file on export. Start by changing the track configuration to multi-channel and set the number of output channels. My requirement is to end up with an 8-channel file that includes a stereo mix, plus stereo stems for isolated dialogue, sound effects and music. Next, add the number of tracks you need and assign them as “standard” for the regular tracks or “stereo submix” for the submix tracks.

This is a simple example with 3 regular tracks and 3 submix tracks, because this was a simple project. A more complete project would have more regular tracks, depending on how much overlapping dialogue, sound effects or music you are working with on the timeline. For instance, some editors like to set up “zones” for types of audio. You might decide to have 24 timeline tracks, with 1-8 used for dialogue, 9-16 for sound effects and 17-24 for music. In this case, you would still only need 3 submix tracks for the aggregate of the dialogue, sound effects and music.

Rename the submix tracks in the timeline. I’ve renamed Submix 1-3 as DIA, SFX and MUS for easy recognition. With Premiere Pro, you can mix audio in several different places, such as the clip mixer or the audio track mixer. Go to the audio track mixer and assign the channel output and routing. (Channel output can also be assigned in the sequence preset panel.) For each of the regular tracks, I’ve set the pulldown for routing to the corresponding submix track. Audio 1 to DIA, Audio 2 to SFX and Audio 3 to MUS. The 3 submix tracks are all routed to the Master output.

The last step is to properly assign channel routing. With this sequence preset, master channels 1 and 2 will contain the full mix. First, when you export a 2-channel file as a master file or a review copy, by default only the first 2 output channels are used. So these will always get the mix without you having to change anything. Second, most of us tend to edit with stereo monitoring systems. Again, output channels 1 and 2 are the default, which means you’ll always be monitoring the full mix, unless you make changes or solo a track. Output channels 3-8 correspond to the stereo stems. Therefore, to enable this to happen automatically, you must assign the channel output in the following configuration: DIA (Submix 1) to 1-2 and 3-4, SFX (Submix 2) to 1-2 and 5-6, and MUS (Submix 3) to 1-2 and 7-8. The result is that everything goes to both the full mix, as well as the isolated stereo channel for each audio component – dialogue, sound effects and music.
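The routing described above can be written out as a small table, which makes it easy to sanity-check. This is only a model of the assignments for illustration – Premiere doesn’t expose its routing programmatically:

```python
# submix name -> list of output channel pairs it feeds
routing = {
    "DIA": [(1, 2), (3, 4)],   # dialogue: full mix + its own stem pair
    "SFX": [(1, 2), (5, 6)],   # effects:  full mix + its own stem pair
    "MUS": [(1, 2), (7, 8)],   # music:    full mix + its own stem pair
}

def feeds(channel_pair):
    """Return which submixes land on a given output channel pair."""
    return [name for name, pairs in routing.items() if channel_pair in pairs]

# Channels 1-2 carry everything (the composite mix),
# while each stem pair carries only its own submix.
```

So `feeds((1, 2))` lists all three submixes, while `feeds((5, 6))` returns only the effects submix – matching the “mix plus isolated stems” layout of the 8-channel export.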

Editing in the custom timeline

Once you’ve set up the timeline, the rest is easy. Edit any dialogue clips to track 1, sound effects to track 2 and music to track 3. In a more complex example, like the 24-track timeline I referred to earlier, you’d work in the “zones” that you had organized. If 1-8 are routed to the dialogue submix track, then you would edit dialogue clips only to tracks 1-8 – and the same for the corresponding sound effects and music tracks. Clip levels can still be adjusted as you normally would. But, by having submix tracks, you can adjust the level of all dialogue by moving the single DIA submix fader in the audio track mixer. This can also be automated. If you want a common filter added across an entire stem – like a compressor on all sound effects – simply assign it from the pulldown within that submix channel strip.

Exporting the file

The last step is exporting your split-track submaster file. If this isn’t correct, the rest was all for naught. The best formats to use are either a QuickTime ProRes file or one of the MXF OP1a choices. In the audio tab of the export settings panel, change the pulldown channel selection from Stereo to 8 channels. Now each of your timeline output channels will be exported as a separate mono track in the file. These correspond to your 4 stereo mix groups – the full mix plus stems. Now, in one single, neat file, you have the final image and mix, along with the isolated stems that can facilitate easy changes down the road. Depending on the nature of the project, you might also want to export versions with and without titles for an extra level of future-proofing.

Reusing the file

If you decide to use this exported submaster file at a later date as a source clip for a new edit, simply import it into Premiere Pro like any other form of media. However, because its channel structure will be read as 8 mono channels, you will need to modify the file using the Modify-Audio Channels contextual menu (right-click the clip). Change the clip channel format from Mono to Stereo, which turns your 8 mono channels back into the left and right sides of 4 stereo channels. You may then ignore the remaining “unassigned” clip channels. Do not change any of the check boxes.

Hopefully, by following this guide, you’ll find that creating timelines with stem tracks becomes second nature. It can sure help you years later, as I found out yet again this past week!

©2016 Oliver Peters