Viva Las Vegas – NAB 2018

As more and more folks get all of their information through internet sources, the running question is whether or not trade shows still have value. A show like the annual NAB (National Association of Broadcasters) Show in Las Vegas is both fun and grueling, typified by sensory overload and folks in business attire with sneakers. Although some announcements are made before the exhibits officially open – and nearly all are pretty widely known before the week ends – there still is nothing quite like being there in person.

For some, other shows have taken the place of NAB. The annual HPA Tech Retreat in the Palm Springs area is a gathering of technical specialists, researchers, and creatives that many consider the TED Talks for our industry. For others, the Cine Gear Expo in LA is the prime showcase for grip, lighting, and camera offerings. RED Camera has focused on Cine Gear instead of NAB for the last couple of years. And then, of course, there’s IBC in Amsterdam – the more humane version of NAB in a more pleasant setting. But for me, NAB is still the main event.

First of all, the NAB Show isn’t merely about the exhibit floor at the sprawling Las Vegas Convention Center. Actual NAB members can attend various sessions and workshops related to broadcasting and regulations. There are countless sidebar events specific to various parts of the industry. For editors, that includes Avid Connect – a two-day series of Avid presentations on the weekend leading into NAB; Post Production World – a series of workshops, training sessions, and presentations managed by Future Media Concepts; as well as a number of keynote presentations and artist gatherings, including SuperMeet, FCPexchange, and the FCPX Guru Gathering. These are places where you’ll rub shoulders with some well-known editors, colorists, artists, and mixers, learn about new technologies like HDR (high dynamic range imagery), and occasionally see new product features from vendors who might not officially be on the show floor with a booth, like Apple.

One of the biggest benefits I find in going to NAB is simply walking the floor and checking out the companies and products that might not get a lot of attention. These newcomers often have the most innovative technologies – the things you discover that were never on your radar before that week.

The second benefit is connection. I meet up again in person with friends that I’ve made over the years – both other users, as well as vendors. Often it’s a chance to meet people that you might only know through the internet (forums, blogs, etc.) and to get to know them just a bit better. A bit more of that might make the internet more friendly, too!

Here are some of my random thoughts and observations from Las Vegas.

__________________________________

Editing hardware and software – four As and a B

Apple uncharacteristically pre-announced their new features just prior to the show, culminating with App Store availability on Monday when the NAB exhibits opened. This included new Final Cut Pro X/Motion/Compressor updates and an official count of 2.5 million FCPX users. That’s a growth of 500,000 users in 2017, the biggest year to date for Final Cut. The key new feature in FCPX is a captioning function to author, edit, and export both closed and embedded (open) captions. There aren’t many great solutions for captioning and the best to date have been expensive. I found the Apple approach to be the best and easiest to use that I’ve seen. It’s well-designed and should save time and money for those who need to create captions for their productions – even if you are using another brand of NLE. Best of all, if you own FCPX, you already have that feature. When you don’t have a script to start from, manual or automatic transcription is required as a starting point. There is now a tie-in between Speedscriber (also updated this week) and FCPX that will expedite the speech-to-text function.

The second part of Apple’s announcement was the introduction of a new camera raw codec family – ProResRAW and ProResRAW HQ. These are acquisition codecs designed to record the raw sensor data from Bayer-pattern sensors (prior to debayering the signal into RGB information) and make that available in post, just like RED’s REDCODE RAW or CinemaDNG. Since this is an acquisition codec and NOT a post or intermediate codec, it requires a partnership on the production side of the equation. Initially this includes Atomos and DJI. Atomos supplies external recorders that capture the raw output from various cameras able to send raw sensor data out – currently their Shogun Inferno and Sumo 19 models. As this is camera-specific, Atomos must create the correct profile for each camera to remap that sensor data into ProResRAW. At the show, this included several Canon, Sony, and Panasonic cameras. DJI does this in-camera on the Inspire 2.

The advantage within FCPX is that ProResRAW is optimized for post, allowing more streams in real time. ProResRAW data rates (variable) fall between those of ProRes and ProRes HQ, while the less compressed ProResRAW HQ rates fall between ProRes HQ and ProRes 4444. It’s very early for this new codec, so additional camera and post vendors will likely add ProResRAW support over the coming year. It is currently unknown whether any other NLEs will be able to decode and play ProResRAW.

As always, the Avid booth was quite crowded and, from what I heard, Avid Connect was well attended by enthusiastic Avid users. The Avid offerings are quite broad and hard to encapsulate in any single blog post. Most, these days, are very enterprise-centric. But this year, with a new CEO at the helm, Avid’s creative tools have been reorganized into three strata – First, Standard, and Ultimate. This applies to Sibelius, Pro Tools, and Media Composer. In the case of Media Composer, there’s Media Composer | First – a fully functioning free version with minimal restrictions; Media Composer; and Media Composer | Ultimate – which includes all options, such as PhraseFind, ScriptSync, NewsCutter, and Symphony. The big difference is that project sharing has been decoupled from Media Composer. This means that if you get the “standard” version (just named Media Composer), it will not be enabled for collaboration on a shared storage network. That will require Media Composer | Ultimate. So Media Composer (standard) is designed for the individual editor. There is also a new subscription pricing structure, which places Media Composer at about the same annual cost as Adobe Premiere Pro CC (single-app license). The push is clearly towards subscription; however, you can still purchase and/or maintain support for perpetual licenses, though that info is a little harder to find on Avid’s store website.

Though smaller news, Avid is also launching the Avid DNxID capture/export unit. It is custom-designed for Avid by Blackmagic Design and uses a small form factor. It was created for file-based acquisition, supports 4K, and includes embedded DNx codecs for onboard encoding. Connections are via component analog and HDMI, and there is also an SD card slot.

The traffic around Adobe’s booth was thick the entire week. The booth featured interesting demos that were front and center in the middle of one of the South Hall’s main thoroughfares, generally creating a bit of a bottleneck. The newest Creative Cloud updates had preceded the show, but were certainly new to anyone not already using the Adobe apps. Big news for Premiere Pro users was the addition of automatic ducking, brought over from Audition, and a new shot-matching function within the Lumetri color panel. Both are examples of Adobe’s use of their Sensei AI technology. Not to be left out, Audition can now also directly open sequences from Premiere Pro. Character Animator had been in beta form, but is now a full-fledged CC product. For puppet control, Adobe also introduced the Advanced Puppet Engine for After Effects – a deformation tool to better bend, twist, and control elements.

Of course, when it comes to NLEs, the biggest buzz has been over Blackmagic Design’s DaVinci Resolve 15. The company has an extensive track record of buying up older products from struggling companies, reinvigorating the design, reducing the cost, and breathing new life into them – often for a new, wider customer base. Nowhere is this more evident than with Resolve, which has grown from a leading color correction system into a powerful, all-in-one edit/mix/effects/color solution. We had previously seen the integration of the Fairlight audio mixing engine. This year, Fusion visual effects were added. As before, each of these disparate tools appears on its own page with a specific UI optimized for that task.

A number of folks have quipped that someone had finally resurrected Avid DS. Although all-in-ones like DS and Smoke haven’t been hugely successful in the past, Resolve’s price point is considerably more attractive. The Fusion integration means that you now have a subset of Fusion running inside of Resolve. This is a node-based compositor, which makes it easy for a Resolve user to understand, since it, too, already uses nodes in the color page. At least for now, Blackmagic Design intends to also maintain a standalone version of Fusion, which will offer more functions for visual effects compositing. Resolve also gained new editorial features, including tabbed sequences, a pancake timeline view, captioning, and improvements in the Fairlight audio page.

Other Blackmagic Design news includes updates to their various mini-converters, updates to the Cintel Scanner, and the announcement of a 4K Pocket Cinema Camera (due in September). They have also redesigned and modularized the Fairlight console mixing panels. These are now more cost-effective to manufacture and can be combined in various configurations.

This was the year for a number of milestone anniversaries, such as the 100th for Panasonic and the 25th for AJA. There were a lot of new product announcements at the AJA booth, but a big one was the push for more openGear-compatible cards. openGear is an open hardware rack standard that was developed by Ross and embraced by many manufacturers. You can purchase the openGear version of a manufacturer’s product and then mix and match a variety of openGear cards in any openGear rack enclosure. AJA’s cards also offer DashBoard support – a software tool to configure and control the cards. There are new KONA SDI and HDMI cards, HDR support in the Io 4K Plus, and HDR capture and playback with the Ki Pro Ultra Plus.

HDR

It’s fair to say that we are all still learning about HDR, but from what I observed on the floor, AJA is one of the only companies with a number of hardware products that will allow you to handle HDR. This is thanks to their partnership with ColorFront, who is handling the color science in these products. The line-up includes the FS | HDR – an up/down/cross, SDR/HDR synchronizer/converter – which also includes support for the Tangent Element Kb panel. The FS | HDR was shown as a tech preview last year, but is a shipping product now. This year’s tech preview is the HDR Image Analyzer, which offers waveform and histogram monitoring at up to 4K/60fps.

Speaking of HDR (high dynamic range) and SDR (standard dynamic range), I had a chance to sit in on Robbie Carman’s (colorist at DC Color, Mixing Light) Post Production World HDR overview. Carman has graded numerous HDR projects and from his presentation – coupled with exhibits on the floor – it’s quite clear that HDR is the wild, wild west right now. There is much confusion about color space and dynamic range, not to mention what current hardware is capable of versus the maximums expressed in the tech standards. For example, the BT.2020 spec doesn’t inherently mean that the image is HDR. Likewise, there’s confusion over the facts that you must be working in 4K to also have HDR and that the set must accept the HDMI 2.0 standard.

High dynamic range grading absolutely requires HDR-compatible hardware, such as the proper I/O device and a display with the ability to receive metadata that turns on and sets its target HDR values. This means investing in a device like AJA’s Io 4K Plus or Blackmagic’s UltraStudio 4K Extreme 3. It also means purchasing a true grading monitor costing tens of thousands of dollars, like one from Sony, Canon, or Flanders. You CANNOT properly grade HDR based on the image of ANY computer display. So while the latest version of FCPX can handle HDR, and the iMac Pro screen features a high brightness (nits) rating, you cannot rely on this screen to see proper HDR.

LG was a sponsor of the show and LG displays were visible in many of the exhibits. Many of their newest products qualify at the minimum HDR spec, but for the most part, the images shown on the floor were simply bright and not HDR – no matter what the sales reps in the booths were saying.

One interesting fact that Carman pointed out was that HDR displays cannot be driven across the full screen at the highest value. You cannot display a full screen of white at 1,000 nits on a 1,000-nit display without causing damage. Therefore, automatic gain adjustments in the set’s electronics dim the screen. Only a small percentage of the image (maybe 20%) can be driven at full value before dimming occurs. Another point Carman made was that standard lift/gamma/gain controls may be too coarse to grade HDR images with finesse. His preference is to use Resolve’s log grading controls, because you can make more precise adjustments to highlight and shadow values.

Cameras

I’m not a camera guy, but there was notable camera news at the show. Many folks really like the Panasonic colorimetry for which the Varicam products are known. For people who want a full-featured camera in a small form factor, look no further than the Panasonic AU-EVA1. It’s a 4K, Super35, handheld cinema camera featuring dual ISOs. Panasonic claims 14 stops of latitude. It takes EF lenses and can output camera raw data. When paired with an Atomos recorder, it will be able to record ProResRAW.

Another new camera is Canon’s EOS C700 FF. This is a new full-frame model in both EF and PL lens mount versions. As with the standard (Super35) C700, it records ProRes or XF-AVC at up to 4K resolution onboard to CFast cards. The full-frame sensor offers higher resolution and a shallower depth of field.

Storage

Storage is of interest to many. As costs come down, collaboration is easier than ever. The direct-attached vendors, like G-Tech, LaCie, OWC, Promise, and others were all there with new products. So were the traditional shared storage vendors like Avid, Facilis, Tiger, 1 Beyond, and EditShare. But three of the newer companies had my interest.

In my editing day job, I work extensively with QNAP, which currently offers the best price/performance ratio of any system. It’s reliable, cost-effective, and provides reasonable JKL response cutting HD media with Premiere Pro in a shared editing installation. But it’s not the most responsive system, and it struggles with 4K media in spite of plenty of bandwidth – especially when the editors are all banging away. This has me looking at both Lumaforge and OpenDrives.

Lumaforge is known to many Final Cut Pro X editors, because the developers have optimized the system for FCPX and have had early successes with many key installations. Since then, they have also pushed into more Premiere-based installations. Because these units are engineered for video-centric facilities, as opposed to data-centric ones, they promise a better shared storage experience for video editing.

Likewise, OpenDrives made its name as the provider for high-profile film and TV projects cut on Premiere Pro. Last year they came to the show with their highest-performance, all-SSD systems. Those units are pricey and, therefore, don’t have broad appeal. This year they brought systems applicable to a wider user base, including spinning disk and hybrid products. All are truly optimized for Premiere Pro.

The cloud

In other storage news, “the cloud” garners a ton of interest. The biggest vendors are Microsoft, Google, IBM, and Amazon. While each of these offers relatively easy ways to use cloud-based services for back-up and archiving, if you want a full cloud-based installation for all of your media needs, then actual off-the-shelf solutions are not readily available. The truth of the matter is that each of these companies offers APIs, which are then handed off to other vendors – often for totally custom solutions.

Avid and Sony seem to have the most complete offerings, with Sony Ci being the best one-size-fits-all answer for customer-facing services. Of course, if review-and-approval is your only need, then Frame.io leads and will have new features rolled out during the year. IBM/Aspera is a great option for standard archiving, because fast Aspera up and down transfers are included. You get your choice of IBM or other (Google, Amazon, etc.) cloud storage. They even offer a trial period using IBM storage for 30 days at up to 100GB free. Backblaze is a competing archive solution with many partnering applications. For example, you can tie it in with Archiware’s P5 Suite of tools for back-up, archiving, and server synchronization to the cloud.

Naturally, when you talk of the “cloud”, many people interpret that to mean software that runs in the cloud – SaaS (software as a service). In most cases, that is nowhere close to happening. However, the exception is The Foundry, which was showing Athera, a suite of its virtualized applications, like Nuke, running on the Google Cloud Platform. They demoed it running inside the Chrome browser, thanks to this partnership with Google. The Foundry had a pod in the Google partners pavilion.

In short, you can connect to the internet with a laptop, activate a license of the tool or tools that you need, and then all media, processing, and rendering is handled in the cloud, using Google’s services and hardware. Since all of this happens on Google’s servers, only an updated UI image needs to be pushed back to the connected computer’s display. This concept is ideal for the visual effects world, where the work is generally done on an individual shot basis without a lot of media being moved in real-time. The target is the Nuke-centric shop that may need to add on a few freelancers quickly, and who may or may not be able to work on-premises.

Interesting newcomers

As I mentioned at the beginning, part of the joy of NAB is discovering the small vendors who seek out NAB to make their mark. One example this year is Lumberjack Systems, a venture by Philip Hodgetts and Greg Clarke of Intelligent Assistance. They were in the Lumaforge suite demonstrating Lumberjack Builder, which is a text-based NLE. In the simplest of explanations, your transcription or scripted text is connected to media. As you re-arrange or trim the text, the associated picture is edited accordingly. Newly-written text for voiceovers turns into spoken word media courtesy of the computer’s internal audio system and system voice. Once your text-based rough cut is complete, an FCPXML is sent to Final Cut Pro X for further finesse and final editing.

Another new vendor I encountered was Quine, co-founded by Norwegian DoP Grunleik Groven. Their QuineBox IoT device attaches to the back of a camera, where it can record and upload “conformable” dailies (ProRes, DNxHD) to your SAN, as well as proxies to the cloud via its internal Wi-Fi. Script notes can also be incorporated. The unit has already been battle-tested on the Netflix/NRK production of “Norsemen”.

Closing thoughts

It’s always interesting to see, year over year, which companies are not at the show. This isn’t necessarily indicative of a company’s health, but can signal a change in their direction or that of the industry. Sometimes companies opt for smaller suites at an area hotel in lieu of the show floor (Autodesk). Or they are a smaller part of a reseller or partner’s booth (RED). But often, they are simply gone. For instance, in past years drones were all the rage, with a lot of different manufacturers exhibiting. DJI has largely captured that market for both vehicles and camera systems. While there were a few other drone vendors besides DJI, GoPro and Freefly weren’t at the show at all.

Another surprise change for me was the absence of SAM (Snell Advanced Media) – the hybrid company formed out of Snell & Wilcox and Quantel. SAM products are now part of Grass Valley, which, in turn, is owned by Belden (the cable manufacturer). Separate Snell products appear to have been absorbed into the broader Grass Valley product line. Quantel’s Go and Rio editors continue in Grass Valley’s editing line, alongside Edius – as simple, middle, and advanced NLE products. A bit sad actually. And very ironic. Here we are in the world of software and file-based video, but the company that still has money to make acquisitions is the one with a heavy investment in copper (I know, not just copper, but you get the point).

Speaking of “putting a fork in it”, I would have to say that stereo 3D and 360 VR are pretty much dead in the film and video space. I understand that there is a market – potentially quite large – in gaming, education, simulation, engineering, training, etc. But for more traditional entertainment projects, it’s just not there. Vendors were down to a few, and even though the leading NLEs have ways of working with 360 VR projects, the image quality still looks awful. When you view a 4K image within even the best goggles, the qualitative experience is like watching a 1970s-era TV set from a few inches away. For now, it continues to be a novelty looking for a reason to exist.

A few final points… It’s always fun to see which computers are being used in the booths. Apple was again a clear winner – MacBook Pros and iMac Pros were all over the LVCC wherever creative products or demos were involved. eGPUs are of interest, with Sonnet being the main vendor. However, an eGPU is not a solution for every problem. For example, you will see more benefit by adding an eGPU to a lesser-powered machine, like a 13” MacBook Pro, than to one with more horsepower, like an iMac Pro. Each eGPU takes up one Thunderbolt 3 bus, so realistically, you are likely to add only one eGPU to a computer. None of the NLE vendors could really tell me how much of a boost their application would get from an eGPU. Finally, if you are looking for some great-looking, large OLED displays that are pretty darned accurate and won’t break the bank, then LG is the place to look.

©2018 Oliver Peters


Putting Apple’s iMac Pro Through the Paces

At the end of December, Apple made good on the release of the new iMac Pro and started selling and shipping the new workstations. While this could be characterized as a stop-gap effort until the next generation of Mac Pro is produced, that doesn’t detract from the usefulness and power of this design in its own right. After all, the iMac line is the direct descendant in spirit and design of the original Macintosh. Underneath the sexy, all-in-one, space grey enclosure, the iMac Pro offers serious workstation performance.

I work mostly these days with a production company that produces and posts commercials, corporate videos, and entertainment programming. Our editing set-up consists of seven workstations, plus an auxiliary machine, connected to a common QNAP shared storage network. These edit stations were a mix of old and new Mac Pros and iMacs (connected via 10GigE), with a Mac Mini for the auxiliary (1GigE). It was time to upgrade the oldest machines, which led us to consider the iMac Pros. The company picked up three of them – replacing two Mac Pro towers and an older iMac. The new configuration is a mix of three one-year-old Retina 5K iMacs (late 2015 model), a 2013 “trash can” Mac Pro, and three 2017 iMac Pros.

There are plenty of videos and articles on the web about how these machines perform, but the testers often use artificial benchmarks or only Final Cut Pro X. This shop has a mix of NLEs (Adobe, Apple, Avid, Blackmagic Design), but our primary tool is Adobe Premiere Pro CC 2018. This gave me a chance to compare how these machines stack up against each other in the kind of work we actually do. This comparison isn’t truly apples-to-apples, since the specs of the three different products differ somewhat. Nevertheless, I feel that it’s a valid real-world assessment of the iMac Pros in a typical, modern post environment.

Why buy iMac Pros at all?

The question to address is why someone should purchase these machines. Let me say right off the bat that if your main focus is 3D animation or heavy compositing using After Effects or other applications – and speed and performance are the most important factors – then don’t buy an Apple computer. Period. There are plenty of examples of Dell and HP workstations, along with high-end gaming PCs, that outperform any of the Macs. This is largely due to the availability of advanced NVIDIA GPUs for the PC, which simply aren’t an option for current Macs.

On the other hand, if you need a machine that’s solid and robust across a wide range of postproduction tasks – and you prefer the Mac operating ecosystem – then the iMac Pros are a good choice. Yes, the machine is pricey and you can buy cheaper gaming PCs and DIY workstations, but if you stick to the name brands, like Dell and HP, then the iMac Pros are competitively priced. In our case, a shift to PC would have also meant changing out all of the machines and not just three – making it even more expensive.

Naturally, the next thing is to compare price against the current 5K iMacs and 2013 Mac Pros. Apple’s base configuration of the iMac Pro uses an 8-core 3.2GHz Xeon W CPU, 32GB RAM, 1TB SSD, and the Radeon Pro Vega 56 GPU (8GB memory) for $4,999. A comparably configured 2013 Mac Pro is $5,207 (with mouse and keyboard), but no display. Of course, it also has the dual D-700 GPUs. The 5K iMac in a similar configuration is $3,729. Note that we require 10GigE connectivity, which is built into the iMac Pros. Therefore, in a direct comparison, you would need to bump up the iMac and Mac Pro prices by about $500 for a Thunderbolt 2-to-10GigE converter.

Comparing these numbers for similar machines, you’d spend more for the Mac Pro and less for the iMac. Yet, the iMac Pro uses newer processors and faster RAM, so it could be argued that it’s already better out of the gate in the base configuration than Apple’s former top-of-the-line product. It has more horsepower than the tricked-out iMac, so then it becomes a question of whether the cost difference is important to you for what you are getting.

Build quality

Needless to say, Apple has a focus on the quality and fit-and-finish of its products. The iMac Pro is no exception. Except for the space grey color, it looks like the regular 27” iMacs and is just as nicely built. However, let me quibble with a few things. First, the edges of the case and foot tend to be a bit sharp. It’s not a huge issue, but compared with an iPhone, iPad, or 2013 Mac Pro, the edges are just not as smooth and rounded. Secondly, you get a wireless mouse and extended keyboard. Both have to be plugged in to charge. In the case of the mouse, the cable plugs in at the bottom, rendering it useless during charging. Truly a bad design. The wireless keyboard is the newer, flatter style, so you lose the two USB ports that were on the previous plug-in extended keyboard. Personally, I prefer the features and feel of the previous keyboard, not to mention any scroll wheel mouse over the Magic Mouse. Of course, those are strictly matters of personal taste.

With the iMac Pro, Apple is transitioning its workstations to Thunderbolt 3, using USB-C connectors. Previous Thunderbolt 2 ports have been problematic, because the cables easily disconnect. In fact, on our existing iMacs, it’s very easy to disconnect the Thunderbolt 2 cable that connects us to the shared storage network, simply by moving the iMac around to get to the ports on the back. The USB-C connectors feel more snug, so hopefully we will find that to be an improvement. If you need to get to the back of the iMac or iMac Pro frequently, in order to plug in drives, dongles, etc., then I would highly recommend one of the docks from CalDigit or OWC as a valuable accessory.

5K screen

Apple spends a lot of marketing hype on promoting their 5K Retina screens. The 27” screens have a raw resolution of 5120×2880 pixels, but that’s not what you see in terms of image and user interface dimensions. To start with, the 5K iMacs and iMac Pros use the same screen resolution, and the default display setting (the middle scaled option) is 2560×1440 pixels. The top choice is 3200×1800. Of course, if you use that setting, everything becomes extremely small on screen. Conversely, our 2013 Mac Pro is connected to a 27” Apple LED Cinema Display (non-Retina). Its top scaled resolution is also 2560×1440 pixels. Therefore, at the most useable settings, all of our workstations are set to the same resolution. Even if you scale the resolution up (images and UI get smaller), you are going to end up adjusting the size of the application interface and viewer window. While you might see different viewer size percentage numbers between the machines, the effective size on screen will be the same.

Retina is Apple’s marketing name for high pixel density. This is the equivalent of DPI (dots per inch) in print resolutions. According to a Macworld article, iPhones from 4 to 5s had a pixel density of 326ppi (pixels per inch), while iMacs have 218ppi. Apple converts a device’s display to Retina by doubling the horizontal and vertical pixel count. More pixels are applied to any given area on the screen, resulting in smoother text, smoother diagonal lines, and so on. That’s assuming an application’s interface is optimized for it. At the distance that the editors sit from a 27” display, there is simply little or no difference between the look of the 27” LED display and the 27” iMac Retina screens.
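For what it’s worth, those pixel density figures fall out of simple arithmetic – the diagonal pixel count divided by the diagonal screen size in inches. A quick sketch in Python (the screen sizes are the advertised diagonals):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    # pixel density = diagonal pixel count / diagonal size in inches
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(5120, 2880, 27)))   # 5K iMac / iMac Pro panel -> 218
print(round(ppi(640, 1136, 4.0)))   # iPhone 5s -> 326
```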

Upgradeability

Future-proofing and upgrades are the biggest negatives thrown at all-in-ones, particularly the iMac Pros. While the user can upgrade RAM in the standard iMacs, that’s not the case with the iMac Pros. You can upgrade RAM in the future, but that must be done at a service facility, such as the Apple Store’s Genius service. This means that in three years, when you want the latest, greatest CPU, GPU, storage, etc., you won’t be able to swap out components. But is this really an issue? I’m sure Apple has user research numbers to justify their decisions. Plus, the thermal design of the iMac would make user upgrades difficult, unlike the older Mac Pro towers.

In my own experience on personal machines, as well as clients’ machines that I’ve helped maintain, I have upgraded storage, GPU cards, and RAM, but never the CPU – although I do know others who have upgraded Xeon models in their Mac Pro towers. Part of the dichotomy is buying what you can afford now and upgrading later, versus stretching a bit up front and then not needing to upgrade later. My gut feeling is that Apple is pushing the latter approach.

If I tally up the cost of the upgrades that I’ve made after about three years, I would already be part of the way towards a newer, better machine anyway. Plus, if you are cutting HD and even 4K today, then just about any advanced machine will do the trick, making it less likely that you’ll need to do that upgrade within the foreseeable life of the machine. An argument can be made for either approach, but I really think that the vast majority of users – even professional users – never actually upgrade any of the internal hardware from that of the configuration as originally purchased.

Performance testing

We ultimately purchased machines with the 10-core bump-up from the base configuration, feeling that this is the sweet spot within the iMac Pro product line (and was available immediately).

The new machine specs within the facility now look like this:

2013 Mac Pro – 3GHz 8-core Xeon/64GB RAM/dual D-500 GPUs/1TB SSD (Sierra)

2015 iMac – 4GHz 4-core Core i7/32GB RAM/AMD R9/3TB Fusion drive (Sierra)

2017 iMac Pro – 3GHz 10-core Xeon W/64GB RAM/Radeon Vega 64/1TB SSD (High Sierra)

As you can see, the tech specs of the new iMac Pros more closely match the 2013 Mac Pro than the year-old 5K iMacs. Of course, it’s not a perfect match for optimal benchmark testing, but close enough for a good read on how well the iMac Pro delivers in a real working environment.

Test 1 – BruceX

The BruceX test uses a 5K Final Cut Pro X timeline made up only of built-in titles and generators. The timeline is then rendered out to a ProRes file. This tests the pure application without any media and codec variables. It’s a bit of an artificial test and only applicable to FCPX performance, but still useful. The faster the export time, the better.

2013 Mac Pro – 26.8 sec.

2015 iMac – 28.3 sec.

2017 iMac Pro – 14.4 sec.

Test 2 – media encoding

In my next test, I took a 4½-minute-long 1080p ProRes file and rendered it to a 4K/UHD (3840×2160) H.264 (1-pass CBR 20Mbps) file. Not only was it being encoded, it was also being scaled up to 4K in the process. I rendered from and to the desktop, to eliminate any variables from the QNAP system. Finally, I conducted the test using both Adobe Media Encoder (using OpenCL processing) and Apple Compressor.
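For anyone who wants to replicate a comparable encode outside of either application, here is a rough equivalent using ffmpeg via Python. This is not what AME or Compressor do internally – just the same job description (upscale 1080p ProRes to UHD, single-pass 20Mbps H.264), with placeholder file names:

```python
import subprocess

# Upscale a 1080p ProRes source to UHD and encode 1-pass H.264 at ~20Mbps.
subprocess.run([
    "ffmpeg", "-i", "source_1080p.mov",
    "-vf", "scale=3840:2160:flags=lanczos",  # scale up to 4K/UHD
    "-c:v", "libx264", "-b:v", "20M",
    "-maxrate", "20M", "-bufsize", "40M",    # approximates constant bitrate
    "-pix_fmt", "yuv420p",
    "uhd_2160p.mp4",
], check=True)
```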

There were two noteworthy issues. The Compressor test was surprisingly slow on the Mac Pro. (I actually ran the Compressor test twice, just to be certain about the slowness of the Mac Pro.) The AME version kicked in the fans on the iMac.

Adobe Media Encoder

2013 Mac Pro – 6:13 min.

2015 iMac – 7:14 min.

2017 iMac Pro – 4:48 min.

Compressor

2013 Mac Pro – 11:02 min.

2015 iMac – 2:20 min.

2017 iMac Pro – 2:19 min.

Test 3 – editing timeline playback – multi-layered sequence

This was a difficult test designed to break during unrendered playback. The 40-second 1080p/23.98 sequence included six layers of resized 4K source media.

Layer 1 – DJI clips with dissolves between the clips

Layers 2-5 – 2D PIP ARRI Alexa clips (no LUTs); layer 5 had a Gaussian blur effect added

Layer 6 – native REDCODE RAW with minor color correction

The sequence was created in both Final Cut Pro X and Premiere Pro. Playback was tested with the media located on the QNAP volumes, as well as from the desktop (this should provide the best possible playback).

Playing back this sequence in Final Cut Pro X from the QNAP resulted in the video output largely choking on all of the machines. Playing it back in Premiere Pro from the QNAP was slightly better than in FCPX, with the 2017 iMac Pro performing best of all. It played, but was still choppy.

When I tested playback from the desktop, all three machines performed reasonably well using both Final Cut Pro X (“best performance”) and Premiere Pro (“1/2 resolution”). There were some frames dropped, although the iMac Pro played back more smoothly than the other two. In fact, in Premiere Pro, I was able to set the sequence to “full resolution” and get visually smooth playback, although the indicator light still noted dropped frames. Typically, as each staggered layer kicked in, performance tended to hiccup.

Test 4 – editing timeline playback – single-layer sequence

This was a simpler test using a standard workflow. The 30-second 1080p/23.98 sequence included three Alexa clips (no LUTs) with dissolves between the clips. Each source file was 4K/UHD and had a “punch-in” and reposition within the HD frame. Each also included a slight, basic color correction. Playback was tested in Final Cut Pro X and Premiere Pro, both from the QNAP system and from the desktop. Quality settings were increased to “best quality” in FCPX and “full resolution” in Premiere Pro.

My complex timeline in Test 3 appeared to perform better in Premiere Pro. In Test 4, the edge was with Final Cut Pro X. No frames were dropped with any of the three machines playing back either from the QNAP or the desktop, when testing in FCPX. In Premiere Pro, the 2017 iMac Pro was solid in both situations. The 2015 iMac was mostly smooth at “full” and completely smooth at “1/2”. Unfortunately, the 2013 Mac Pro seemed to be the worst of the three, dropping frames even at “1/2 resolution” at each dissolve within the timeline.

Test 5 – timeline renders (multi-layered sequence)

In this test, I took the complex sequence from Test 3 and exported it to a ProRes master file. I used the QNAP-connected versions of the Premiere Pro and Final Cut Pro X timelines and rendered the exports to the desktop. In FCPX, I used its default Share function. In Premiere Pro, I queued the export to Adobe Media Encoder set to process in OpenCL. This was one of the few tests in which the 2013 Mac Pro put in a faster time, although the iMac Pro was very close.

Rendering to ProRes – Premiere Pro (via Adobe Media Encoder)

2013 Mac Pro – 1:29 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:45 min.

Rendering to ProRes – Final Cut Pro X

2013 Mac Pro – 1:21 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:22 min.

Test 6 – Adobe After Effects – rendering composition

My final test was to see how well the iMac Pro performed in rendering out compositions from After Effects. This was a 1080p/23.98 15-second composition. The bottom layer was a JPEG still with a Color Finesse correction. On top of that were five 1080p ProResLT video clips that had been slomo’ed to fill the composition length. Each was scaled, cropped, and repositioned. Each was beveled with a layer style and had a stylized effect added to it. The topmost layer was a camera layer with all other layers set to 3D, so the clips could be repositioned in z-space. Using the camera, I added a slight rotation/perspective change over the life of the composition.

Rendering to ProRes – After Effects

2013 Mac Pro – 2:37 min.

2015 iMac – 2:15 min.

2017 iMac Pro – 2:03 min.

Conclusion

After all of this testing, one is left with the answer “it depends”. The 2013 Mac Pro has two GPUs, but not every application takes advantage of that. Some apps tax all the available cores, so more, but slower, cores are better. Others go for the maximum speed on fewer cores. All things considered, the iMac Pro performed at the top of these three machines. It was either the best or close/equal to the best.

There is no way to really quantify actual editing playback performance and resolution by any numerical factor. However, it is interesting to look at the aggregate of the six tests that could be quantified. When you compare the cumulative totals of just the iMac Pro and the iMac, the Pro came out 48% faster. Compared to the 2013 Mac Pro, it was 85% faster. The iMac Pro’s performance against the totals of the slowest machines (either iMac or Mac Pro depending on the test), showed it being a whopping 113% faster – more than twice as fast. But it only bested the fastest set by 20%. Naturally, such comparisons are more curiosity than anything else. Some of these numbers will be meaningful and others won’t, depending on the apps used and a user’s storage situation.
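For the curious, the aggregate arithmetic is easy to sketch using the six timed tests above (BruceX, both encodes, both timeline renders, and the After Effects render). Treat the output as illustrative – the exact percentages shift a bit depending on rounding and which tests you fold into the totals:

```python
# Times in seconds for: BruceX, AME encode, Compressor encode,
# Premiere Pro render, FCPX render, After Effects render.
times = {
    "2013 Mac Pro":  [26.8, 373, 662, 89, 81, 157],
    "2015 iMac":     [28.3, 434, 140, 149, 149, 135],
    "2017 iMac Pro": [14.4, 288, 139, 105, 82, 123],
}
totals = {machine: sum(t) for machine, t in times.items()}
pro = totals["2017 iMac Pro"]
for machine, total in totals.items():
    if machine != "2017 iMac Pro":
        print(f"iMac Pro vs {machine}: {100 * (total / pro - 1):.0f}% faster")
```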

I will say that installing these three machines was the easiest installation I’ve ever done, including connecting them to the 10GigE storage network. The majority of our apps come from Adobe Creative Cloud, the Mac App Store, or FxFactory (for plug-ins). Except for a few other installers, there was largely no need to track down installers, activation information, etc. for a zillion small apps and plug-ins. This made it a breeze and is certainly part of the attraction of the Mac ecosystem. The iMac Pro’s all-in-one design limits the required peripherals, which also contributes to a faster installation. Naturally, I can’t tell anyone if this is the right machine for them, but so far, the investment does look like the correct choice for this shop’s needs.

(Updated 6/22/18)

Here are two additional impressions by working editors: Thomas Grove Carter and Ben Balser. Also a very comprehensive review from AppleInsider.

©2018 Oliver Peters

A Light Footprint

When I started video editing, the norm was an edit suite with three large quadruplex (2”) videotape recorders, a video switcher, audio mixer, B&W graphics camera(s) for titles, and a computer-assisted, timecode-based edit controller. This was generally considered an “online edit suite”, but in many markets it served as both “offline” (creative cutting) and “online” (finishing). Not too long thereafter, digital effects (ADO, NEC, Quantel) and character generators (Chyron, Aston, 3M) joined the repertoire. 2” quad eventually gave way to 1” VTRs and those, in turn, were replaced by digital – D1, D2, and finally Digital Betacam. A few facilities with money and clientele migrated to HD versions of these million-dollar rooms.

Towards the midpoint in the lifespan of this way of working, nonlinear editing took hold. After a few different contenders had their day in the sun, the world largely settled in with Avid and/or Media 100 rooms. While a lower cost commitment than the large online bays of the day, these nonlinear edit (NLE) bays still required custom-configured Macs, a fair amount of external storage, and proprietary hardware and monitoring to see a high-quality video image. Though crude at first, NLEs eventually proved capable of handling all video needs, including HD-quality projects and even higher resolutions today.

The trend towards smaller

As technology advanced, computers became faster and more powerful, storage capacities increased, and software that required custom hardware evolved to work in a software-only mode. Today, it’s possible to operate with a fraction of the cost, equipment, and hassle of just a few years ago, let alone a room from the mid-70s. As a result, when designing or installing a new room, it’s important to question the assumptions about what makes a good edit bay configuration.

For example, today I frequently work in rooms running newer iMacs, 2013 Mac Pros, and even MacBook Pro laptops. These are all perfectly capable of running Apple Final Cut Pro X, Adobe Premiere Pro, Avid Media Composer, and other applications, without the need for additional hardware. In my interview with Thomas Grove Carter, he mentioned often working off of his laptop with a connected external drive for media. And that’s at Trim, a high-end London commercial editing boutique.

In my own home edit room, I recently set aside my older Mac Pro tower in favor of working entirely with my 2015 MacBook Pro. No more need to keep two machines synced up and the MBP is zippier in all respects. With the exception of some heavy-duty rendering (infrequent), I don’t miss using the tower. I run the laptop with an external Dell display and have configured my editing application workspaces around a single screen. The laptop is closed and parked in a BookArc stand tucked behind the Dell. But I also bought a Rain stand for those times when I need the MBP open and functioning as a second display.

Reduce your editing footprint

I find more and more editors working in similar configurations. For example, one of my clients is a production company with seven networked (NAS storage) workstations. Most of these are iMacs with few other connected peripherals. The main room has a 2013 “trash can” Mac Pro and a bit more gear, since this is the “hero” room for clients. If you are looking to downsize your editing environment, here are some pointers.

While you can work strictly from a laptop, I prefer to build it up for a better experience. Essential for me is a Thunderbolt dock – check out OWC or CalDigit for two of the best options. This lets you connect the computer to the dock, and then everything else connects to that dock. One Thunderbolt cable to the laptop, plus power for the computer, leaves you with a clean installation and an easy-to-move computer. From the dock, I’m running a PreSonus AudioBox USB audio interface (to a Mackie mixer and speakers), a Time Machine drive, a G-Tech media drive, and the Dell display. If I were buying today, I would use the Mackie Onyx Blackjack interface instead of the PreSonus/Mackie mixer combo, since the Blackjack is an all-in-one solution.

Expand your peripherals as needed

At the production company’s hero room, we have the extra need to drive video monitors for color correction and client viewing. That room is configured similarly to the set-up above, except with a Mac Pro and a connection to a QNAP shared storage solution. The latter connects over 10Gb/s Ethernet via a Sonnet Thunderbolt/Ethernet adapter.

When we initially installed the room, video to the displays was handled by a Blackmagic Design UltraStudio device. However, we had a lot of playback performance issues with the UltraStudio, especially when using FCPX. After some experimenting, we realized that both Premiere Pro and FCPX can send a fullscreen, [generally] color-accurate signal to the wall-mounted flat panel using only HDMI and no other video I/O hardware. We ended up connecting the HDMI from the dock to the display and that’s the standard working routine when we are cutting in either Premiere Pro or Final Cut.

The rub for us is DaVinci Resolve. You must use some type of Blackmagic Design hardware product in order to get fullscreen video to a display from Resolve. Therefore, the UltraStudio’s HDMI port connects to the second HDMI input of the large client display, and SDI feeds a separate TV Logic broadcast monitor for more accurate color rendition while grading. With Media Composer, there were no performance issues, but the audio and video signal wants to go through the same device. So, if we edit in Avid, the signal chain goes through the UltraStudio as well.

All of this means that in today’s world, you can work as lightly as you like. Laptop-only – no problem. iMac with some peripherals – no problem. A fancy, client-oriented room – still less hassle and cost than just a few short years ago. Load it up with extra control surfaces or stay light with a keyboard, mouse, or tablet. It all works today – pretty much as advertised. Gone are the days when you absolutely need to drop a small fortune to edit high-quality video. You just have to know what you are doing and understand the trade-offs as they arise.

©2017 Oliver Peters

Premiere Pro Workflow Tips

When you are editing on projects that only you touch, your working practices can be as messy as you want them to be. However, if you work on projects that need to be interchanged with others down the line, or you’re in a collaborative editing environment, good operating practices are essential. This starts at the moment you first receive the media and carries through until the project has been completed, delivered, and archived.

Any editor who’s worked with Avid Media Composer in a shared storage situation knows that it’s pretty rock solid and takes measures to assure proper media relinking and management. Adobe Premiere Pro is very powerful, but much more freeform. Therefore, the responsibility of proper media management and editor discipline falls to the user. I’ve covered some of these points in other posts, but it’s good to revisit workflow habits.

Folder templates. I like to have things neat, and one way to assure that is with project folder templates. You can use a tool like Post Haste to automatically generate a new set of folders for each new production – or you can simply design your own set of folders as a template layout and copy those for each new job. Since I’m working mainly in Premiere Pro these days, my folder template includes a Premiere Pro template project, too. This gives me an easy starting point that has been tailored for the kinds of narrative/interview projects that I’m working on. Simply rename the root folder and the project for the new production (or let Post Haste do that for you). My layout includes folders for projects, graphics, audio, documents, exports, and raw media. I spend most of my time working at a multi-suite facility connected to a NAS shared storage system. There, the folders end up on the NAS volume and are accessible to all editors.
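If you’d rather script the layout than use Post Haste, a folder-template generator takes only a few lines. This is a minimal sketch of the layout described above – the folder names, NAS paths, and template project file are this article’s examples, not a standard:

```python
import shutil
from pathlib import Path

SUBFOLDERS = ["Projects", "Graphics", "Audio", "Documents", "Exports", "Raw Media"]

def new_production(root, name, template_project=None):
    """Create the folder set for a new job and drop in a renamed project template."""
    job = Path(root) / name
    for sub in SUBFOLDERS:
        (job / sub).mkdir(parents=True, exist_ok=True)
    if template_project:  # copy the Premiere Pro template, renamed for this job
        shutil.copy2(template_project, job / "Projects" / (name + ".prproj"))

new_production("/Volumes/NAS/Jobs", "PAL0722_TravelShow",
               template_project="/Volumes/NAS/Templates/_template.prproj")
```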

Media preparation. When the crew comes back from the shoot, the first priority is to back up their files to an archive drive and then copy the files again to the storage used for editing – in my case, a NAS volume. If we follow the folder layout described above, then those files get copied to the production dailies or raw media folder (whatever you called it). Because Premiere Pro is very fluid and forgiving with all types of codecs, formats, and naming conventions, it’s easy to get sloppy and skip the next steps. DON’T. The most important things for proper media linking are consistent locations and unique file names. Without them, future relinking, moving the project into an application like Resolve for color correction/finishing, or other processes may fail to link to the correct files.

Premiere Pro works better when ALL of the media is in a single common format, like DNxHD/HR or ProRes. However, for most productions, the transcoding time involved would be unacceptable. A large production will often shoot with multiple camera formats (Alexa, RED, DSLRs, GoPros, drones, etc.) and generate several cards worth of media each day. My recommendation is to leave the professional format files alone (like RED or Alexa), but transcode the oddball clips, like DJI cameras. Many of these prosumer formats place the media into various folder structures or hide them inside a package container format. I will generally move these outside of this structure so they are easily accessible at the Finder level. Media from the cameras should be arranged in a folder hierarchy of Date, Camera, and Card. Coordinate with the DIT and you’ll often get the media already organized in this manner. Transcode files as needed and delete the originals if you like (as long as they’ve been backed up first).

Unfortunately, these prosumer cameras often use repeated, rather than unique, file names. Every card starts over with clip number 0001. That’s why we need to rename these files. You can usually skip renaming professional format files – it’s optional. Renaming Alexa files is fine, but avoid renaming RED or P2 files. However, definitely rename DSLR, GoPro, and DJI clips. When renaming clips, I use an app called Better Rename on the Mac, but any batch renaming utility will do. Follow a consistent naming convention. Mine is a descriptive abbreviation, month/day, camera, and card. So a shoot in Palermo on July 22, using the B camera, recorded on card 4, becomes PAL0722B04_. This is appended in front of the camera-generated clip name, so clip number 0057 becomes PAL0722B04_0057. You don’t need the year, because the folder location, general project info, or the embedded file info will tell you that.
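Any batch renaming utility can do this, but the prepend step is also trivial to script. A sketch of the convention just described – the card folder path and PAL0722B04_ prefix are the example from above:

```python
from pathlib import Path

def prepend_prefix(card_folder, prefix):
    """Prepend the job/date/camera/card prefix to each camera-generated clip name."""
    for clip in sorted(Path(card_folder).iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            clip.rename(clip.with_name(prefix + clip.name))
            # e.g. 0057.MOV becomes PAL0722B04_0057.MOV

prepend_prefix("/Volumes/NAS/Jobs/PAL0722_TravelShow/Raw Media/0722/B/Card04",
               "PAL0722B04_")
```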

A quick word on renaming. Stick with universal alphanumeric conventions in both the files and the folder names. Avoid symbols, emojis, etc. Otherwise, some systems will not be able to read the files. Don’t get overly lengthy in your names. Stick with upper and lower case letters, numbers, dashes, underscores, and spaces. Then you’ll be fine.
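If you want to check names against that convention automatically, a simple character whitelist does the job. A small sketch – the allowed set below follows the rules above, plus a dot for file extensions:

```python
import re

# Letters, numbers, dashes, underscores, spaces - plus a dot for extensions.
SAFE_NAME = re.compile(r"^[A-Za-z0-9 ._-]+$")

def is_safe(name: str) -> bool:
    return bool(SAFE_NAME.match(name))

assert is_safe("PAL0722B04_0057.MOV")
assert not is_safe("clip✈final?.mov")  # symbols and emoji fail the check
```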

Project location. Premiere Pro has several basic file types that it generates with each project. These include the project file itself, Auto-saved project files, renders, media cache files, and audio peak (.pek) files. Some of these are created in the background as new media is imported into the project. You can choose to store these anywhere you like on the system, although there are optimal locations.

Working on a NAS, there is no problem in letting the project file, Auto-saves, and renders stay in the same section of the NAS as all of your other media. I do this because it’s easy to back up the whole job at the end of the line and have everything in one place. However, you don’t want all the small, application-generated cache files to be there. While it’s an option in preferences, it is highly recommended to have these media cache files go to the internal hard drive of the workstation or a separate, external local drive. The reason is that there are a lot of these small files, and their traffic will tend to bog down the overall performance of the NAS. So set them to be local (the default).

The downside of doing this is that when another editor opens the Premiere Pro project on a different computer, these files have to be regenerated on that new system. The project will react sluggishly until this background process is complete. While this is a bit of a drag, it’s what Adobe recommends to keep the system operating well.

One other cache setting to be mindful of is the automatic delete option. A recent Premiere Pro problem cropped up when users noticed that original media was disappearing from their drives. Although this was a definite bug, the situation mainly affected users who had set the media cache to live alongside their original media files and had enabled automatic deletion. You are better off keeping the default location, but changing the deletion setting to manual. You’ll have to occasionally clean your caches manually, but this is preferable to losing your original content.

Premiere Pro project locking. A recent addition to Premiere Pro is project locking. This came about because of Team Projects, which are cloud-only shared project files. However, in many environments, facilities do not want their projects in the cloud. Yet, they can still take advantage of this feature. When project locking is enabled in Premiere Pro (every user on the system must do this), the application creates a temporary .prlock file next to the project file. This is intended to prevent other users from opening the same project and overwriting the original editor’s work and/or revisions.

Unfortunately, this only works correctly when you open a project from the launch window. Do not open a project by double-clicking the project file itself in order to launch Premiere Pro. If you open through the launch window, Premiere Pro will prevent you from opening a locked project file. However, if you open through the Finder, the locking system is circumvented, causing crashes and potentially lost work.
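As a belt-and-suspenders measure, you can check for a leftover lock file before touching a project in the Finder. This sketch assumes the .prlock shares the project’s base name – consistent with the behavior described above, but otherwise an assumption about Premiere’s internals:

```python
from pathlib import Path

def is_locked(project_file):
    """True if a .prlock sits next to the .prproj (assumed naming convention)."""
    return Path(project_file).with_suffix(".prlock").exists()

proj = "/Volumes/NAS/Jobs/PAL0722_TravelShow/Projects/PAL0722_TravelShow.prproj"
if is_locked(proj):
    print("Project appears to be open on another workstation - leave it alone.")
```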

Project layout templates. Like folder layouts, I’m fond of using a template for my Premiere Pro projects, too. This way all projects have a consistent starting point, which is good when working with several editors collaboratively. You can certainly create multiple templates depending on the nature and specs of the job, e.g. commercials, narrative, 23.98, 29.97, etc. As with the folder layout, I’ll often use a leading underscore with a name to sort an item to the top of a list, or start the name with a “z” to sort it to the bottom. A lot of my work is interview-driven with supportive B-roll footage. Most of the time I’m cutting in 23.98fps. So, that’s the example shown here.

My normal routine is to import the camera files (using Premiere Pro’s internal Media Browser) according to the date/camera/card organization described earlier. Then I’ll review the footage and rearrange the clips. Interview files go into an interview sources bin. I will add sub-bins in the B-roll section for general categories. As I review footage, I’ll move clips into their appropriate area, until the date/camera/card bins are empty and can be deleted from the project. Interviews will be grouped as multi-cam clips and edited to a single sequence for each person. This sequence gets moved into the Interview Edits sub-bin and becomes the source for any clips from this interview. I do a few other things before starting to edit, but that’s for another time and another post.

Working as a team. There are lots of ways to work collaboratively, so the concept doesn’t mean the same thing in every type of job. Sometimes it requires different people working on the same job. Other times it means several editors accessing a common pool of media, while working in their own discrete projects. In any case, Premiere does not allow the same sort of flexibility that Media Composer or Final Cut Pro editors enjoy. You cannot have two or more editors working inside the same project file. You cannot open more than one project at a time. This means Premiere Pro editors need to think through their workflows in order to effectively share projects.

There are different strategies to employ. The easiest is to use the standard “save as” function to create alternate versions of a project. This is also useful for keeping project bloat low. As you edit on a project over a long period, you build up a lot of old “in progress” sequences. After a while, it’s best to save a copy and delete the older sequences. But the best approach is to organize a structure to follow.

As an example, let’s say a travel-style show covers several locations in an episode. Several editors and an assistant are working on it. The assistant would create a master project with all the footage imported and organized, interviews grouped/synced, and so on. At this point each editor takes a different location to cut that segment. There are two options. The first is to duplicate the project file for each location, open each one, and delete the content that isn’t for that location. The second option is to create a new project for each location and then import media from the master project using Media Browser. This is Adobe’s built-in module that enables the editor to access files, bins, and sequences from inside other Premiere Pro projects. When these are imported, there is no dynamic linking between the two projects. The two sets of files/sequences are independent of each other.
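The first option – duplicating the master project once per location – is simple enough to script outside of Premiere Pro. A minimal sketch, with an entirely hypothetical project path and made-up location names:

import shutil
from pathlib import Path

master = Path("/Volumes/SAN/TravelShow/Ep101_Master.prproj")  # hypothetical

for location in ["Lisbon", "Porto", "Madeira"]:  # made-up segments
    copy = master.with_name(f"Ep101_{location}.prproj")
    if not copy.exists():  # never clobber an editor's working copy
        shutil.copy2(master, copy)
        print("Created", copy.name)

Each editor then opens his or her own copy and deletes the bins that don’t apply to that location.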

Next, each editor cuts their own piece, resulting in a final sequence for each segment. Back in the master project, each edited sequence can be imported – again, using Media Browser – for the purposes of the final show build and tweaks. Since all of the media is common, no additional media files will be imported. Another option is to create a new final project and then import each sequence into it (using Media Browser). This will import the sequences and any associated media files. Then use the segment sequences to build the final show sequence and tweak as needed.

There are plenty of ways to use Premiere Pro and maintain editing versatility within a shared storage situation. You just have to follow a few rules for “best practices” so that everyone will “play nice” and have a successful experience.

Click here to download a folder template and enclosed Premiere Pro template project.

©2017 Oliver Peters

Bricklayers and Sculptors

One of the livelier hangouts on the internet for editors to kick around their thoughts is the Creative COW’s Apple Final Cut Pro X Debates forum. Part forum, part bar room brawl, it started as a place to discuss the relative merits (or not) of Apple’s FCP X. As such, the COW’s bosses allow a bit more latitude than in other forums. Oddly enough, threads often derail into really thoughtful discussions about editing concepts.

Recently one of its frequent contributors, Simon Ubsdell, posted a thread called Bricklayers and Sculptors. In his words, “There are two different types of editors: Those who lay one shot after another like a bricklayer builds a wall. And those who discover the shape of their film by sculpting the raw material like a sculptor works with clay. These processes are not the same. There is no continuum that links these two approaches. They are diametrically opposed.”

Simon Ubsdell is the creative director, partner, and editor/mixer for London-based trailer shop Tokyo Productions. Ubsdell is also an experienced plug-in developer, having developed and/or co-developed the TKY, Tokyo, and Hawaiki effects plug-ins. But beyond that, Simon is one of the folks with whom I often have e-mail discussions regarding the state of editing today. We were both early adopters of FCP X who have since shifted almost completely to Adobe Premiere Pro. In keeping with the theme of his forum post, I asked him to share his ideas about how to organize an edit.

With Simon’s permission, the following are his thoughts on how best to organize editing projects in a way that keeps you immersed in the material and results in editing with greater assurance that you’ve made the best possible edit decisions.

________________________________________________

Simon Ubsdell – Bricklayers and Sculptors in practical terms

To avoid getting too general about this, let me describe a job I did this week. The producer came to us with a documentary that’s still shooting and only roughly “edited” into a very loose assembly – it’s the stories of five different women that will eventually be interwoven, but that hasn’t happened yet. As I say, extremely rough and unformed.

I grabbed all the source material and put it on a timeline. That showed me at a glance that there was about four hours of it in total. I put in markers to show where each woman’s material started and ended, which allowed me to see how much material I had for each of them. If I ever needed to go back to “everything”, it would make searching easier. (Not an essential step by any means.)

I duplicated that sequence five times to make sequences of all the material for each woman. Then I made duplicates of those duplicates and began removing everything I didn’t want. (At this point I am only looking for dialogue and “key sound”, not pictures which I will pick up in a separate set of passes.)

Working subtractively

From this point on I am working almost exclusively subtractively. A lot of people approach string-outs by adding clips from the browser – but here all my clips are already on the timeline and I am taking away anything I don’t want. This is for me the key part of the process because each edit is not a rough approximation, but a very precise “topping and tailing” of what I want to use. If you’re “editing in the Browser” (or in Bins), you’re simply not going to be making the kind of frame accurate edits that I am making every single time with this method.

The point to grasp here is that instead of “making bricks” for use later on, I am already editing in the strictest sense – making cuts that will stand up later on. I don’t have to select and then trim – I am doing both operations at the same time. I have my editing hat on, not an organizing hat. I am focused on a timeline that is going to form the basis of the final edit. I am already thinking editorially (in the sense of creative timeline-based editing) and not wasting any time merely thinking organizationally.

I should mention here that this is an iterative process – not just one pass through the material, but several. At certain points I will keep duplicates as I start to work on shorter versions. I won’t generally keep that many duplicates – usually just an intermediate “long version”, which has lost all the material I definitely don’t want. And by “definitely don’t want” I’m not talking about heads and tails that everybody throws away where the camera is being turned on or off or the crew are in shot – I am already making deep, fine-grained editorial and editing decisions that will be of immense value later on. I’m going straight to the edit point that I know I’ll want for my finished show. It’s not a provisional edit point – it’s a genuine editorial choice. From this point of view, the process of rejecting slates and tails is entirely irrelevant and pointless – a whole process that I sidestep entirely. I am cutting from one bit that I want to keep directly to the next bit I want to keep and I am doing so with fine-tuned precision. And because I am working subtractively I am actually incorporating several edit decisions in one – in other words, with one delete step I am both removing the tail from the outgoing clip and setting the start of the next clip.

Feeling the pacing and flow

Another key element here is that I can see how one clip flows into another – even if I am not going to be using those two clips side-by-side. I can already get a feel for the pacing. I can also start to see what might go where, so as part of this phase, I am moving things around as options start suggesting themselves. Because I am working in the timeline with actual edited material, those options present themselves very naturally – I’m getting offered creative choices for free. I can’t stress too strongly how relevant this part is. If I were simply sorting through material in a Browser/Bin, this process would not be happening or at least not happening in anything like the same way. The ability to reorder clips as the thought occurs to me and for this to be an actual editorial decision on a timeline is an incredibly useful thing and again a great timesaver. I don’t have to think about editorial decisions twice.

And another major benefit that is simply not available to Browser/Bin-based methods is that I am constructing editorial chunks as I go. I’m taking this section from Clip A and putting it side-by-side with this other section from Clip A, which may come from earlier in the actual source, and perhaps adding a section from Clip B to the end and something from Clip C to the front. I am forming editorial units as I work through the material. And these are units that I can later use wholesale.

Another interesting spin-off is that I can very quickly spot “duplicate material”, by which I mean instances where the same information or sentiment is conveyed in more or less the same terms at different places in the source material. Because I am reviewing all of this on the timeline and because I am doing so iteratively, I can very quickly form an opinion as to which of the “duplicates” I want to use in my final edit.

Working towards the delivery target

Let’s step back and look at a further benefit of this method. Whatever your final film is, it will have the length that it needs to be – unless you’re Andy Warhol. You’re delivering a documentary for broadcast or theatrical distribution, or a short form promo or a trailer or TV spot. In each case you have a rough idea of what final length you need to arrive at. In my case, I knew that the piece needed to be around three minutes long. And that, of course, throws up a very obvious piece of arithmetic that it helps me to know. I had five stories to fit into those three minutes – 180 seconds – which meant that the absolute maximum of dialogue that I would need would be just over 30 seconds from each story! The best way of getting to those 30 seconds is obviously subtractively.

I know I need to get my timeline of each story down to something approaching this length. Because I’m not simply topping and tailing clips in the Browser, but actually sculpting them on the timeline (and forming them into editorial units, as described above), I can keep a very close eye on how this is coming along for each story strand. I have a continuous read-out of how well I am getting on with reducing the material down to the target length. By contrast, if I approach my final edit with 30 minutes of loosely selected source material to juggle, I’m going to spend a lot more time on editorial decisions that I could have successfully made earlier.

So the final stage of the process in this case was simply to combine and rearrange the pre-edited timelines into a final timeline – a process that is now incredibly fast and a lot of fun. I’ve narrowed the range of choices right down to the necessary minimum. A great deal of the editing has literally already been done, because I’ve been editing from the very first moment that I laid all the material on the original timeline containing all the source material for the project.

As you can see, the process has been essentially entirely subtractive throughout – a gradual whittling down of the four hours to something closer to three minutes. This is not to say there won’t be additive parts to the overall edit. Of course, I added music, SFX, and graphics, but from the perspective of the process as a whole, this is addition at the most trivial level.

Learning to tell the story in pictures

There is another layer of addition that I have left out and that’s what happens with the pictures. So far I’ve only mentioned what is happening with what is sometimes called the “radio edit”. In my case, I will perform the exact same (sometimes iterative) process of subtraction to isolate the shots I want to keep from the entirety of the source material – again, this is obviously happening on a timeline or timelines. The real delight of this method is to review all the “pictures” without reference to the sound, because in doing so you can get a real insight into how the story can be told pictorially. I will often review the pictures having very, very roughly laid up some of the music tracks that I have planned on using. It’s amazing how this lets you gauge both whether your music suits the material and conversely whether the pictures are the right ones for the way you are planning to tell the story.

This brings me to a key point I would make about how I personally work with this method and that’s that I plunge in and experiment even at the early stages of the project. For me, the key thing is to start to get a feel for how it’s all going to come together. This loose experimentation is a great way of approaching that. At some point in the experimentation something clicks and you can see the whole shape or at the very least get a feeling for what it’s all going to look like. The sooner that click happens, the better you can work, because now you are not simply randomly sorting material, you are working towards a picture you have in your head. For me, that’s the biggest benefit of working in the timeline from the very beginning. You’re getting immersed in the shape of the material rather than just its content and the immersion is what sparks the ideas. I’m not invoking some magical thinking here – I’m just talking about a method that’s proven itself time and time again to be the best and fastest way to unlock the doors of the edit.

Another benefit is that although one would expect this method to make it harder to collaborate, in fact the reverse is the case if each editor is conversant with the technique. You’re handing over vastly more useful creative edit information with this process than you could by any other means. What you’re effectively doing is “showing your workings” and not just handing over some versions. It means that the editor taking over from you can easily backtrack through your work and find new stuff and see the ideas that you didn’t end up including in the version(s) that you handed over. It’s an incredibly fast way for the new editor to get up to speed with the project without having to start from scratch by acquainting him or herself with where the useful material can be found.

Even on a more conventional level, I personally would far rather receive string-outs of selects than all the most carefully organized Browser/Bin info you care to throw at me. Obviously if I’m cutting a feature, I want to be able to find 323T14 instantly, but beyond that most basic level, I have no interest in digging through bins or keyword collections or whatever else you might be using, as that’s just going to slow me down.

Freeing yourself of the Browser/Bins

Another observation about this method is how it relates to the NLE interface. When I’m working with my string-outs, which is essentially 90% of the time, I am not ever looking at the Browser/Bins. Accordingly, in Premiere Pro or Final Cut Pro X, I can fully close down the Project/Browser windows/panes and avail myself of the extra screen real estate that gives me, which is not inconsiderable. The consequence of that is to make the timeline experience even more immersive and that’s exactly what I want. I want to be immersed in the details of what I’m doing in the timeline and I have no interest in any other distractions. Conversely, having to keep going back to Bins/Browser means shifting the focus of attention away from my work and breaking the all-important “flow” factor. I just don’t want any distractions from the fundamentally crucial process of moving from one clip to another in a timeline context. As soon as I am dragged away from that, there is a discontinuity in what I am doing.

The edit comes to shape organically

I find that there comes a point, if you work this way, when the subsequence you are working on organically starts to take on the shape of the finished edit and it’s something that happens without you having to consciously make it happen. It’s the method doing the work for you. This means that I never find myself starting a fresh sequence and adding to it from the subsequences and I think that has huge advantages. It reinforces my point that you are editing from the very first moment when you lay all your source material onto one timeline. That process leads without pause or interruption to the final edit through the gradual iterative subtraction.

I talked about how the iterative sifting process lets you see “duplicates”, that’s to say instances where the same idea is repeated in an alternative form – and that it helps you make the choice between the different options. Another aspect of this is that it helps you to identify what is strong and what is not so strong. If I were cutting corporates or skate videos this might be different, but for what I do, I need to be able to isolate the key “moments” in my material and find ways to promote those and make them work as powerfully as possible.

In a completely literal sense, when you’re cutting promos and trailers, you want to create an emotional, visceral connection to the material in the audience. You want to make them laugh or cry, you want to make them hold their breath in anticipation, or gasp in astonishment. You need to know how to craft the moments that will elicit the response you are looking for. I find that this method really helps me identify where those moments are going to come from and how to structure everything around them so as to build them as strongly as possible. The iterative sifting method means you can be very sure of what to go for and in what context it’s going to work the best. In other words, I keep coming back to the realization that this method is doing a lot of the creative work for you in a way that simply won’t happen with the alternatives. Even setting aside the manifest efficiency, it would be worth it for this alone.

There’s a huge amount more that I could say about this process, but I’ll leave it there for now. I’m not saying this method works equally well for all types of projects. It’s perhaps less suited to scripted drama, for instance, but even there it can work effectively with certain modifications. As with every method, every editor will want to tweak it to their own taste and inclinations. The one thing I have found to its advantage above all others is that it almost entirely circumvents the problem of “what shot do I lay down next?” Time and again I’ve seen Browser/Bin-focused editors get stuck in exactly this way and it can be a very real block.

– Simon Ubsdell

For an expanded version of this concept, check out Simon’s in-depth article at Creative COW. Click here to read it.

For more creative editing tips, click on this link for Film Editor Techniques.

©2017 Simon Ubsdell, Oliver Peters

The Art of Motion Graphics Design

While many of us may be good directors, photographers, or editors, it’s not a given that we are also good graphic designers. Most editors certainly understand the mechanics and techniques of developing designs and visual effects composites, but that doesn’t by default include a tasteful sense of design. Combining just the right typeface with the proper balance within a frame can often be elusive, whereas it’s second nature to a professional graphic designer.

German motion designer and visual effects artist Timo Fecher aims to correct that, or at least expose a wider audience to the rules and tools that embody good design. Fecher has developed the Crossfeyer website, which promotes a free e-mail newsletter for online training. A key component of this is his free eBook Motion Graphics Design Academy – The Basics, which he is giving away to subscribers for the balance of this year. His intent is then to publish the book next year for purchase.

I’ve had a chance to read through an advance copy of the eBook. I find it to be an excellent primer for people who want to understand basic design principles. The chapters cover animation, shapes, composition, typography, and more.

Fecher spells out his goals for the book this way, “The Motion Graphics Design Academy is for people who want to learn more about the basics of design, animation, and project design. It’s for newcomers, graphic designers who want to add a new dimension to their art, everyone dealing with digital image processing, and especially all kinds of filmmakers who want to improve their movies, trailers, title sequences, video clips, and commercials. The goal of the eBook is to give its readers a profound background knowledge about design and animation principles and to improve their artistic skills. Software and plug-ins are changing constantly. But all that theory about storytelling, animation, color, typefaces, composition and compositing will stay the same.”

Like any learning tool, it won’t automatically make you a great artist, but it will give you the guidelines to create appealing design that will enhance your next production.

©2017 Oliver Peters

Tools for Dealing with Media


Although most editing application manufacturers like to tout how you can just go from camera to edit with native media, most editors know that’s a pretty frustrating way to work. The norm these days is for the production team to use a whole potpourri of professional and prosumer cameras, so it’s really up to the editor to straighten this out before the edit begins. Granted, a DIT could do all of this, but in my experience, the person being called a DIT is generally just someone who copies/backs up the camera cards onto hard drives to bring back from the shoot. As an editor you are most likely to receive a drive with organized copies of the camera media cards, but still with the media in its native form.

Native media is fine when you are talking about ARRI ALEXA, Canon C300 or even RED files. It is not fine when coming from a Canon 5D, DJI, iPhone, Sony A7S, etc. The reason is that these systems record long-GOP media without valid timecode. Most do not generate unique file names. In some cases, there is no proper timebase within the files, so time itself is “rubbery” – meaning, a frame of time varies slightly in true duration from one frame to the next.

If you remove the A7S .mp4 files from within the clutter of media card folders and take these files straight into an NLE, you will get varying results. There is a signal interpreted as timecode by some tools, but not by others. Final Cut Pro X starts all of these clips at 00:00:00:00, while Premiere Pro and Resolve read something that is interpreted as timecode, which ascends sequentially on successive clips. Finally, these cameras have no way to deal with off-speed recordings – for example, a higher frame rate recorded with the intent to play it back in slow motion. You can do that with a high-end camera, but not with these prosumer products. So I’ve come to rely on several software products heavily in these types of productions.
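If you want to see what a given camera file actually carries before trusting it, ffprobe (part of the free ffmpeg package) can report whether a timecode tag is present at all. A minimal sketch – the clip path is hypothetical, and on many prosumer files the tag will simply be absent, which is exactly the problem:

import subprocess

def report_timecode(path: str) -> None:
    # Ask ffprobe for a timecode tag on the first video stream, if any.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream_tags=timecode",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True)
    print(path, "->", result.stdout.strip() or "no timecode tag found")

report_timecode("/Volumes/Field01/A7S/C0001.MP4")  # hypothetical clip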

Step 1: Hedge for Mac

The first step in any edit is to get the media from the field drives onto the edit system drives. Hopefully your company’s SOP is to archive this media from the field in addition to any that comes out of the edit. However, you don’t want to edit directly from these drives. When you do a Finder copy from one drive to the next, there is no checksum verification. In other words, the software doesn’t actually check to make sure the copy is exact and without errors. This is the biggest plus for an application like Hedge – copy AND verification.
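Hedge’s exact verification method (and its Fast Lane component, mentioned below) is proprietary, but the underlying idea – read back what was written and compare checksums – can be sketched in a few lines of Python. This is purely an illustration of the concept, not a substitute for the app; the paths are hypothetical.

import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    # Hash the file in 1MB chunks so large camera files don't exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_copy(src: Path, dst: Path) -> None:
    # Copy, then re-read both files; mismatched hashes mean a bad copy.
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    if checksum(src) != checksum(dst):
        raise IOError(f"Verification failed for {dst}")

verified_copy(Path("/Volumes/Field01/A01/C0001.MP4"),
              Path("/Volumes/Media/Job/A01/C0001.MP4"))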

Hedge comes in a free and a paid version. The free version is useful, but copying and verifying is slower than in the paid version. The premium (paid) version uses a software component called Fast Lane to speed up the verification process, so that it takes roughly the same amount of time as a Finder copy, which has no verification. To give you an idea, I copied a 62GB folder from a USB 2.0 thumb drive to an external media drive connected to my Mac via eSATA (through an internal card). The process took under 30 minutes for a copy through Hedge (paid version) – about the same as it took for a Finder copy. The free version takes about twice as long, so there’s a real advantage to buying the premium version of the application. In addition, the premium version works with NAS and RAID systems.

The interface is super simple. Sources and targets are drag-and-drop. You can specify folders within the drives, so it’s not just a root-level, drive-to-drive copy. Multiple targets and even multiple sources can be specified within the same batch. This is great for creating a master as well as several back-up copies. Finally, Hedge generates a transfer log for written evidence of the copies and verification performed.

Step 2: EditReady

Now that you have your media copies, it’s time to process the prosumer camera media into something more edit-friendly. Since the camera-original files are being archived, I don’t generally save both the original and converted files on my edit system. For all intents and purposes, the new, processed files become my camera media. I’ve used tools like MPEG Streamclip in the past. That still works well, but EditReady from Divergent Media is better. It reads many media formats that other players don’t, and it does a great job writing ProRes media. It will do other formats, too, but ProRes is usually the best format for the projects that I work with.

One nice benefit of EditReady is that it offers additional processing functions. For example, if you want to bake a LUT into the transcoded files, there’s a function for that. If you shot at 29.97, but want the files to play at 23.976 inside your NLE, EditReady enables you to retime the files accordingly. Since Divergent Media also makes ScopeBox, you can get a bundle with both EditReady and ScopeBox. Through a software conduit called ScopeLink, clips from the EditReady player show up in the ScopeBox viewer and its scopes, so you can make technical evaluations right within the EditReady environment.

EditReady uses a drag-and-drop interface that allows you to set up a batch for processing. If you have more than one target location or process chain, simply open up additional windows for each batch that you’d like to set up. Once these are fired off, all processes will run simultaneously. The best part is that these conversions are fast, resulting in reliable transcoded media in an edit-friendly format.
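EditReady is a GUI application, so there’s nothing to script against here. But for a rough sense of what the transcode itself involves, here is a hypothetical analogue in Python driving the free ffmpeg encoder (not EditReady’s engine). ProRes profile 3 corresponds to 422 HQ; the folder path is made up.

import subprocess
from pathlib import Path

def to_prores(src: Path) -> None:
    # Transcode to ProRes 422 HQ video with uncompressed PCM audio.
    dst = src.with_suffix(".mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", "3",
        "-c:a", "pcm_s16le",
        str(dst),
    ], check=True)

for clip in Path("/Volumes/Media/Job/A01").glob("*.MP4"):  # hypothetical card copy
    to_prores(clip)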

Step 3: Better Rename

The last step for me is usually to rename the files. I won’t do this with formats like ALEXA ProRes or RED, but it’s essential for 5D, DJI, and other similar cameras. That’s because these cameras normally don’t generate unique file names. After all, you don’t want a bunch of clips that are named C0001 with a starting timecode of 00:00:00:00 – do you?

While there are a number of batch renaming applications and even Automator scripts that you can create, my preferred application is Better Rename, which is available in the Mac App Store. It has a host of functions to change names, add numbered sequences and append a text prefix or suffix to a name. The latter option is usually the best choice. Typically I’ll drag my camera files from each group into the interface and append a prefix that adds a camera card identifier and a date to the clip name. So C0001 becomes A01_102916_C0001. A clip from the second card would change from C0001 to A02_102916_C0001. It’s doubtful that the A camera would shoot more than 99 cards in a day, but if so, you can adjust your naming scheme accordingly.
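Better Rename is the comfortable way to do this, but the prefix step itself is easy to script in a pinch. A minimal sketch following the naming scheme above – the card folder path is hypothetical:

from pathlib import Path

def add_prefix(card_folder: str, prefix: str) -> None:
    # Turns C0001.MP4 into A01_102916_C0001.MP4, per the scheme above.
    for clip in sorted(Path(card_folder).iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            clip.rename(clip.with_name(prefix + clip.name))

add_prefix("/Volumes/Media/Job/CardA01", "A01_102916_")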

There you go. Three simple steps to bulletproof how you work with media.

©2016 Oliver Peters