Viva Las Vegas – NAB 2018

As more and more folks get all of their information through internet sources, the running question is whether or not trade shows still have value. A show like the annual NAB (National Association of Broadcasters) Show in Las Vegas is both fun and grueling, typified by sensory overload and folks in business attire with sneakers. Although some announcements are made before the exhibits officially open – and nearly all are pretty widely known before the week ends – there still is nothing quite like being there in person.

For some, other shows have taken the place of NAB. The annual HPA Tech Retreat in the Palm Springs area is a gathering of technical specialists, researchers, and creatives that many consider the TED Talks for our industry. For others, the Cine Gear Expo in LA is the prime showcase for grip, lighting, and camera offerings. RED Camera has focused on Cine Gear instead of NAB for the last couple of years. And then, of course, there’s IBC in Amsterdam – the more humane version of NAB in a more pleasant setting. But for me, NAB is still the main event.

First of all, the NAB Show isn’t merely about the exhibit floor at the sprawling Las Vegas Convention Center. Actual NAB members can attend various sessions and workshops related to broadcasting and regulations. There are countless sidebar events specific to various parts of the industry. For editors, that includes Avid Connect – a two-day series of Avid presentations on the weekend leading into NAB; Post Production World – a series of workshops, training sessions, and presentations managed by Future Media Concepts; as well as a number of keynote presentations and artist gatherings, including SuperMeet, FCPexchange, and the FCPX Guru Gathering. These are places where you’ll rub shoulders with some well-known editors, colorists, artists, and mixers, learn about new technologies like HDR (high dynamic range imagery), and occasionally see some new product features from vendors who might not officially be on the show floor with a booth, like Apple.

One of the biggest benefits I find in going to NAB is simply walking the floor, checking out the companies and products that might not get a lot of attention. These newcomers often have the most innovative technologies – the discoveries that were never on anyone’s radar prior to that week.

The second benefit is connection. I meet up in person with friends I’ve made over the years – other users, as well as vendors. Often it’s a chance to meet people that you might only know through the internet (forums, blogs, etc.) and to get to know them just a bit better. A bit more of that might make the internet friendlier, too!

Here are some of my random thoughts and observations from Las Vegas.

__________________________________

Editing hardware and software – four As and a B

Apple uncharacteristically pre-announced their new features just prior to the show, culminating with App Store availability on Monday when the NAB exhibits opened. This includes new Final Cut Pro X/Motion/Compressor updates and the official number of 2.5 million FCPX users. That’s a growth of 500,000 users in 2017, the biggest year to date for Final Cut. The key new feature in FCPX is a captioning function to author, edit, and export both closed and embedded (open) captions. There aren’t many great solutions for captioning and the best to date have been expensive. I found the Apple approach to be the best and easiest to use that I’ve seen. It’s well-designed and should save time and money for those who need to create captions for their productions – even if you are using another brand of NLE. Best of all, if you own FCPX, you already have that feature. If you don’t have a script to start from, then manual or automatic transcription is required as a starting point. There is now a tie-in between Speedscriber (also updated this week) and FCPX that will expedite the speech-to-text function.

The second part of Apple’s announcement was the introduction of a new camera raw codec family – ProResRAW and ProResRAW HQ. These are acquisition codecs designed to record the raw sensor data from Bayer-pattern sensors (prior to debayering the signal into RGB information) and make that available in post, just like RED’s REDCODE RAW or CinemaDNG. Since this is an acquisition codec and NOT a post or intermediate codec, it requires a partnership on the production side of the equation. Initially this includes Atomos and DJI. Atomos supplies external recorders that can capture the raw output from various cameras offering the ability to send raw data out externally. This currently includes their Shogun Inferno and Sumo 19 models. Because the raw data is camera-specific, Atomos must create the correct profile for each camera to remap that sensor data into ProResRAW. At the show, this included several Canon, Sony, and Panasonic cameras. DJI does this in-camera on the Inspire 2.

The advantage with FCPX is that ProResRAW is optimized for post, thus allowing for more streams in real-time. ProResRAW data rates (variable) fall between those of ProRes and ProRes HQ, while the less compressed ProResRAW HQ rates fall between ProRes HQ and ProRes 4444. It’s very early for this new codec, so additional camera and post vendors will likely add ProResRAW support over the coming year. It is currently unknown whether any other NLEs will support ProResRAW decode and playback.

As always, the Avid booth was quite crowded and, from what I heard, Avid Connect was well attended by enthused Avid users. The Avid offerings are quite broad and hard to encapsulate in any single blog post. Most, these days, are very enterprise-centric. But this year, with a new CEO at the helm, Avid’s creative tools have been reorganized into three strata – First, standard, and Ultimate. This applies to Sibelius, Pro Tools, and Media Composer. In the case of Media Composer, there’s Media Composer | First – a fully functioning free version with minimal restrictions; Media Composer; and Media Composer | Ultimate – which includes all options, such as PhraseFind, ScriptSync, NewsCutter, and Symphony. The big difference is that project sharing has been decoupled from Media Composer. This means that if you get the “standard” version (just named Media Composer), it will not be enabled for collaboration on a shared storage network. That will require Media Composer | Ultimate. So Media Composer (standard) is designed for the individual editor. There is also a new subscription pricing structure, which places Media Composer at about the same annual cost as Adobe Premiere Pro CC (single-app license). The push is clearly towards subscription; however, you can still purchase and/or maintain support for perpetual licenses, though that info is a little harder to find on Avid’s store website.

Though not as big a news item, Avid is also launching the Avid DNxID capture/export unit. It is custom-designed by Blackmagic Design for Avid and uses a small form factor. It was created for file-based acquisition, supports 4K, and includes embedded DNx codecs for onboard encoding. Connections are via component analog and HDMI, and there is also an SD card slot.

The traffic around Adobe’s booth was thick the entire week. The booth featured interesting demos that were front and center in the middle of one of the South Hall’s main thoroughfares, generally creating a bit of a bottleneck. The newest Creative Cloud updates had preceded the show, but were certainly new to anyone not already using the Adobe apps. Big news for Premiere Pro users was the addition of automatic ducking, brought over from Audition, and a new shot matching function within the Lumetri color panel. Both are examples of Adobe’s use of their Sensei AI technology. Not to be left out, Audition can now also directly open sequences from Premiere Pro. Character Animator had been in beta form, but is now a full-fledged CC product. And for puppet control, Adobe also introduced the Advanced Puppet Engine for After Effects – a deformation tool to better bend, twist, and control elements.

Of course, when it comes to NLEs, the biggest buzz has been over Blackmagic Design’s DaVinci Resolve 15. The company has an extensive track record of buying up older products whose companies weren’t doing so well, reinvigorating the design, reducing the cost, and breathing new life into them – often for a new, wider customer base. Nowhere is this more evident than with Resolve, which has now grown from a leading color correction system into a powerful, all-in-one edit/mix/effects/color solution. We had previously seen the integration of the Fairlight audio mixing engine. This year Fusion visual effects were added. As before, each one of these disparate tools appears on its own page with a specific UI optimized for that task.

A number of folks have quipped that someone had finally resurrected Avid DS. Although all-in-ones like DS and Smoke haven’t been hugely successful in the past, Resolve’s price point is considerably more attractive. The Fusion integration means that you now have a subset of Fusion running inside of Resolve. This is a node-based compositor, which makes it easy for a Resolve user to understand, since it, too, already uses nodes in the color page. At least for now, Blackmagic Design intends to also maintain a standalone version of Fusion, which will offer more functions for visual effects compositing. Resolve also gained new editorial features, including tabbed sequences, a pancake timeline view, captioning, and improvements in the Fairlight audio page.

Other Blackmagic Design news includes updates to their various mini-converters, updates to the Cintel Scanner, and the announcement of a 4K Pocket Cinema Camera (due in September). They have also redesigned and modularized the Fairlight console mixing panels. These are now more cost-effective to manufacture and can be combined in various configurations.

This was the year for a number of milestone anniversaries, such as the 100th for Panasonic and the 25th for AJA. There were a lot of new product announcements at the AJA booth, but a big one was the push for more OpenGear-compatible cards. OpenGear is an open source hardware rack standard that was developed by Ross and embraced by many manufacturers. You can purchase the OpenGear version of a manufacturer’s product and then mix and match a variety of OpenGear cards within any OpenGear rack enclosure. AJA’s cards also offer Dashboard support, which is a software tool to configure and control the cards. There are new KONA SDI and HDMI cards, HDR support in the IO 4K Plus, and HDR capture and playback with the Ki Pro Ultra Plus.

HDR

It’s fair to say that we are all still learning about HDR, but from what I observed on the floor, AJA is one of the only companies with a number of hardware products that will allow you to handle HDR today. This is thanks to their partnership with ColorFront, who is handling the color science in these products. One example is the FS | HDR – an up/down/cross, SDR/HDR synchronizer/converter – which also includes support for the Tangent Element Kb panel. The FS | HDR was a tech preview last year, but is a shipping product now. This year’s tech preview product is the HDR Image Analyzer, which offers waveform and histogram monitoring at up to 4K/60fps.

Speaking of HDR (high dynamic range) and SDR (standard dynamic range), I had a chance to sit in on Robbie Carman’s (colorist at DC Color, Mixing Light) Post Production World HDR overview. Carman has graded numerous HDR projects, and from his presentation – coupled with exhibits on the floor – it’s quite clear that HDR is the wild, wild west right now. There is much confusion about color space and dynamic range, not to mention what current hardware is capable of versus the maximums expressed in the tech standards. For example, the BT 2020 spec doesn’t inherently mean that an image is HDR. Nor is it widely understood that you must be working in 4K to also have HDR, and that the set must accept the HDMI 2.0 standard.

High dynamic range grading absolutely requires HDR-compatible hardware, such as the proper i/o device and a display with the ability to receive metadata that turns on and sets its target HDR values. This means investing in a device like AJA’s IO 4K Plus or Blackmagic’s UltraStudio 4K Extreme 3. It also means purchasing a true grading monitor costing tens of thousands of dollars, like one from Sony, Canon, or Flanders. You CANNOT properly grade HDR based on the image of ANY computer display. So while the latest version of FCPX can handle HDR, and an iMac Pro screen features a high nits rating, you cannot rely on this screen to see proper HDR.

LG was a sponsor of the show and LG displays were visible in many of the exhibits. Many of their newest products meet the minimum HDR spec, but for the most part, the images shown on the floor were simply bright, not true HDR – no matter what the sales reps in the booths were saying.

One interesting fact that Carman pointed out was that HDR displays cannot be driven across the full screen at their highest value. You cannot display a full screen of white at 1,000 nits on a 1,000-nit display without causing damage. Therefore, automatic gain adjustments in the set’s electronics dim the screen. Only a smaller percentage of the image (20% maybe?) can be driven at full value before dimming occurs. Another point Carman made was that standard lift/gamma/gain controls may be too coarse to grade HDR images with finesse. His preference is to use Resolve’s log grading controls, because you can make more precise adjustments to highlight and shadow values.

Cameras

I’m not a camera guy, but there was notable camera news at the show. Many folks really like the Panasonic colorimetry for which the Varicam products are known. For people who want a full-featured camera in a small form factor, look no further than the Panasonic AU-EVA1. It’s a 4K, Super35, handheld cinema camera featuring dual native ISOs. Panasonic claims 14 stops of latitude. It will take EF lenses and can output camera raw data. When paired with an Atomos recorder, it will be able to record ProResRAW.

Another new camera is Canon’s EOS C700 FF. This is a new full-frame model offered in both EF and PL lens mount versions. Like the standard (Super35) C700, this 4K cinema camera records ProRes or XF-AVC at up to 4K resolution onboard to CFast cards. The full-frame sensor offers higher resolution and a shallower depth of field.

Storage

Storage is of interest to many. As costs come down, collaboration is easier than ever. The direct-attached vendors, like G-Tech, LaCie, OWC, Promise, and others were all there with new products. So were the traditional shared storage vendors like Avid, Facilis, Tiger, 1 Beyond, and EditShare. But three of the newer companies had my interest.

In my editing day job, I work extensively with QNAP, which currently offers the best price/performance ratio of any system. It’s reliable, cost-effective, and provides reasonable JKL response cutting HD media with Premiere Pro in a shared editing installation. But it’s not the most responsive and it struggles with 4K media, in spite of plenty of bandwidth – especially when the editors are all banging away. This has me looking at both LumaForge and OpenDrives.

LumaForge is known to many Final Cut Pro X editors, because the developers have optimized the system for FCPX and had early successes with many key installations. Since then, they have also pushed into more Premiere-based installations. Because these units are engineered for video-centric facilities, as opposed to data-centric ones, they promise a better shared storage experience for video editing.

Likewise, OpenDrives made its name as the storage provider for high-profile film and TV projects cut on Premiere Pro. Last year they came to the show with their highest-performance, all-SSD systems. Those units are pricey and, therefore, don’t have broad appeal. This year they brought a few systems that are more applicable to a wider user base. These include spinning disk and hybrid products. All are truly optimized for Premiere Pro.

The cloud

In other storage news, “the cloud” garners a ton of interest. The biggest vendors are Microsoft, Google, IBM, and Amazon. While each of these offers relatively easy ways to use cloud-based services for back-up and archiving, if you want a full cloud-based installation for all of your media needs, then actual off-the-shelf solutions are not readily available. The truth of the matter is that each of these companies offers APIs, which are then handed off to other vendors – often for totally custom solutions.

Avid and Sony seem to have the most complete offerings, with Sony Ci being the best one-size-fits-all answer for customer-facing services. Of course, if review-and-approval is your only need, then Frame.io leads and will have new features rolled out during the year. IBM/Aspera is a great option for standard archiving, because fast Aspera uploads and downloads are included. You get your choice of IBM or other (Google, Amazon, etc.) cloud storage. They even offer a 30-day trial period using IBM storage with up to 100GB free. Backblaze is a competing archive solution with many partnering applications. For example, you can tie it in with Archiware’s P5 Suite of tools for back-up, archiving, and server synchronization to the cloud.

Naturally, when you talk of the “cloud”, many people interpret that to mean software that runs in the cloud – SaaS (software as a service). In most cases, that is nowhere close to happening. However, the exception is The Foundry, which was showing Athera, a suite of its virtualized applications, like Nuke, running on the Google Cloud Platform. They demoed it running inside the Chrome browser, thanks to this partnership with Google. The Foundry had a pod in the Google partners pavilion.

In short, you can connect to the internet with a laptop, activate a license of the tool or tools that you need, and then all media, processing, and rendering is handled in the cloud, using Google’s services and hardware. Since all of this happens on Google’s servers, only an updated UI image needs to be pushed back to the connected computer’s display. This concept is ideal for the visual effects world, where the work is generally done on an individual shot basis without a lot of media being moved in real-time. The target is the Nuke-centric shop that may need to add on a few freelancers quickly, and who may or may not be able to work on-premises.

Interesting newcomers

As I mentioned at the beginning, part of the joy of NAB is discovering the small vendors who seek out NAB to make their mark. One example this year is Lumberjack Systems, a venture by Philip Hodgetts and Greg Clarke of Intelligent Assistance. They were in the LumaForge suite demonstrating Lumberjack Builder, which is a text-based NLE. In the simplest of explanations, your transcription or scripted text is connected to media. As you re-arrange or trim the text, the associated picture is edited accordingly. Newly-written text for voiceovers is turned into spoken-word media, courtesy of the computer’s built-in system voice. Once your text-based rough cut is complete, an FCPXML is sent to Final Cut Pro X for further finesse and final editing.

Another new vendor I encountered was Quine, co-founded by Norwegian DoP Grunleik Groven. Their QuineBox IoT device attaches to the back of a camera, where it can record and upload “conformable” dailies (ProRes, DNxHD) to your SAN, as well as proxies to the cloud via its internal wi-fi system. Script notes can also be incorporated. The unit has already been battle-tested on the Netflix/NRK production of “Norsemen”.

Closing thoughts

It’s always interesting to see, year over year, which companies are not at the show. This isn’t necessarily indicative of a company’s health, but can signal a change in their direction or that of the industry. Sometimes companies opt for smaller suites at an area hotel in lieu of the show floor (Autodesk). Or they are a smaller part of a reseller or partner’s booth (RED). But often, they are simply gone. For instance, in past years drones were all the rage, with a lot of different manufacturers exhibiting. DJI has largely captured that market for both vehicles and camera systems. While there were a few other drone vendors besides DJI, GoPro and Freefly weren’t at the show at all.

Another surprise change for me was the absence of SAM (Snell Advanced Media) – the hybrid company formed out of Snell & Wilcox and Quantel. SAM products are now part of Grass Valley, which, in turn, is owned by Belden (the cable manufacturer). Separate Snell products appear to have been absorbed into the broader Grass Valley product line. Quantel’s Go and Rio editors continue in Grass Valley’s editing line, alongside Edius – as the entry-level, midrange, and advanced NLE products. A bit sad, actually. And very ironic. Here we are in the world of software and file-based video, but the company that still has money to make acquisitions is the one with a heavy investment in copper (I know, not just copper, but you get the point).

Speaking of “putting a fork in it”, I would have to say that stereo 3D and 360 VR are pretty much dead in the film and video space. I understand that there is a market – potentially quite large – in gaming, education, simulation, engineering, training, etc. But for more traditional entertainment projects, it’s just not there. Vendors were down to a few, and even though the leading NLEs have ways of working with 360 VR projects, the image quality still looks awful. When you view a 4K image within even the best goggles, the qualitative experience is like watching a 1970s-era TV set from a few inches away. For now, it continues to be a novelty looking for a reason to exist.

A few final points… It’s always fun to see what computers are being used in the booths. Apple is again a clear winner, with plenty of MacBook Pros and iMac Pros all over the LVCC wherever any sort of creative product or demo was being shown. eGPUs are of interest, with Sonnet being the main vendor. However, an eGPU is not a solution that solves every problem. For example, you will see more benefit by adding an eGPU to a lesser-powered machine, like a 13” MacBook Pro, than to one with more horsepower, like an iMac Pro. Each eGPU takes up one Thunderbolt 3 bus, so realistically, you are likely to add only one additional eGPU to a computer. None of the NLE vendors could really tell me how much of a boost their application would get with an eGPU. Finally, if you are looking for some great-looking, large OLED displays that are pretty darned accurate and won’t break the bank, then LG is the place to look.

©2018 Oliver Peters

Telestream Switch 4

Once Apple pulled the plug on QuickTime Player Pro 7, the industry started to look elsewhere for an all-purpose media tool that could facilitate the proper playback, inspection, and encoding of media files. For many, that new multipurpose application has become Telestream’s Switch, now in version 4. Telestream offers a range of desktop and enterprise media solutions, including Vantage, ScreenFlow, Flip4Mac, Episode, and others. Switch fills the role of a media player with added post-production capabilities, going far beyond other players, such as QuickTime Player or VLC.

Switch is offered in three versions: the basic Switch Player ($9.99), Switch Plus ($199) and Switch Pro ($499). Pricing for Plus and Pro covers the first year of support, which includes upgrades and assistance. There is also a free demo version with watermarking. All versions are available for both macOS (10.11-13) and Windows (7-10).

Playback support

The first attraction to Switch is its wide support of “consumer”, broadcast, and professional media formats and codecs. For Mac users, some of these are supported in QuickTime Player, too, but require a conversion step before you can play them. Not so with Switch. Of particular importance to editors will be the MPEG-2 and MXF variations. Some formats do require an upgrade to at least the Plus version, so check Telestream’s tech specs for specifics.

One area where Switch shines is file inspection. This has made it the go-to quality assurance tool at many facilities. File metadata is exposed, along with proper display and reporting of interlaced video. It supports JKL transport control and frame advance using the arrow keys. Since closed captioning is important for all terrestrial and set-top channel broadcasters, you must have a way to check embedded captions. QuickTime Player will only display a single track of embedded captions – and then, only the lower track. So, for example, if you have a file with both English and Spanish captions on CC1 and CC3, QuickTime Player will only display the English captions and won’t even let you verify that more captions are present. With Switch Plus and Pro, the full range of embedded channels is presented and you can check any of the caption tracks.

Switch Plus likely covers the needs of most users, but Pro adds additional functionality, such as metering for multi-channel audio and loudness compliance. Pro also lets you open up to sixteen different files for comparison. It is the only version that supports external monitoring through Blackmagic Design or AJA i/o hardware. Finally, Pro lets you QC DPP (Digital Production Partnership) files from the desktop and display AS-11 MXF metadata.

Content encoding

Beyond these powerful player and inspection functions, Switch Plus and Pro are also full-fledged media encoders. You can change metadata, reorder audio channels, and export a new media file in various formats. Files can be trimmed, cropped, and/or resized in the export. Do you have a ProRes master file and need to generate an MPEG-2 Transport Stream file for broadcast? No problem.

I had a situation where I received a closed caption master file of a commercial from the captioning facility. It needed to have the ends of the file (slate and black) trimmed to meet the delivery specs. Normally when you edit or convert a file with embedded captioning, it will break the captions on the new file. Not so with Switch. I simply set the in and out points, set my encode specs to video pass-through, and generated the new file. The encode (essentially a file copy in this case) was lightning fast and the captions stayed intact.

Switch Plus and Pro include publishing presets for Vimeo, YouTube, and Facebook. In addition, the Pro version also lets you create an iTunes Store package, necessary for compliance when distributing via the iTunes Store. Switch is a cross-platform application, but ProRes encoding support is generally limited to the Mac version. The iTunes Store package feature is the exception: ProRes asset creation is available to Windows users when creating the .itms files used by the iTunes Store.

Switch Plus or Pro might seem pricey to some when compared to Apple Compressor or Adobe Media Encoder; however, those encoders can’t do the precision media functions that Switch offers. Telestream has built Switch to be an industrial-grade media tool that covers a host of needs in a package that’s easy for anyone to understand. If you liked QuickTime Player Pro 7, then Switch has become its 21st century successor.

Originally written for RedShark News.

©2018 Oliver Peters

What’s up with Final Cut’s Color Wheels?

NOTE: The information presented here has been superseded by the release of FCPX 10.4.1 in April 2018. With that release the color wheels model has been changed. Please read the linked blog post for updated information.

Apple Final Cut Pro X 10.4 introduced new, advanced color correction tools to this editing application, including color wheels, curves, and hue vs. saturation curves. These are tools that users of other NLEs have enjoyed for some time – and which were part of Final Cut Studio (FCP 7, Color). Like others, my first reaction was, “Super! They’ve added some nice advanced tools, which will improve the use of FCPX for higher-end users.” But as I started to use the Color Wheels for real correction work, I quickly realized that something wasn’t quite right in how they operated. Or at least, they didn’t work in the way we’ve come to understand.

In trying to figure it out, I reached out to other industry pros and developers for their thoughts. Naturally this led to some spirited discussions at forums like those at Creative COW. However, other editors have noticed the same problems, so you can also find threads in the Facebook FCPX group and at FCP.co. It is certainly easy to characterize this as just another internet kerfuffle surrounding Apple’s “think different” approaches to FCPX. But those arguments fall flat when you actually try to use the tools as intended.

The FCPX Color Wheels panel includes four wheels – Master, Shadows, Midtones, and Highlights. The puck in the center of each wheel is a hue offset control to push hues in the direction that you move the puck. The slider to the right of the wheel controls the brightness of that range. The left slider controls the saturation. One of the main issues is that when you adjust luminance using one of these controls, the affected range is too broad. Specifically, in the case of the Midtones control, as you adjust the luminance slider up or down, you are affecting most of the image and not just the midrange levels. This is not the way this type of control normally works in other tools, and in fact, it’s not how FCPX’s Color Board controls work either.

“What’s the big deal?” you might ask. Fair enough. I see two operational issues. The first is that to properly grade the image using the Color Wheels, you end up having to go back-and-forth a lot between wheels, to counteract the changes made by one control with another. The second is that using the Midtones slider tends to drive highlights above 100 IRE, where they will be clipped if any broadcast limiting is used. This doesn’t happen with other color tools, notably Apple’s own Color Board.

A lot of the discussion focuses on luma levels and specifically the Midtones slider, since it’s easy to see the issue there. However, other controls are also affected, but that’s too much to dissect in a single post. Throughout this post, be sure to click on the images to see the full view. I have presented various samples against each other and you will only get the full understanding if you open the thumbnail (which is small but also cropped) to the full image. I have compared the effect using five different tools – the Color Board, the Color Wheels, a color corrector plug-in that I built as a Motion template using Motion effects, Rubber Monkey Software FilmConvert (the wheels portion only), and finally, the Adobe Lumetri controls in Premiere Pro.

I am using three different test images – a black-to-white ramp, a test pattern, and a demo video image. The ramp without correction will appear as a diagonal line (0-100 IRE) on the scope, which makes it easy to analyze what’s happening. The video image has definite shadow and highlight areas, which lets us see how these controls work in the real world. For example, if you want to brighten the area of the shot where the man is in the shadows, but don’t want to make the highlights any brighter, this would normally be done using a Midtones control. Be aware that these various tools certainly aren’t calibrated the same way and some have a greater range of control than others. The weakest of these is FilmConvert’s wheels, since this plug-in has additional level controls in other parts of its interface.

Color science models

In the various forum threads, the argument is made that Apple is simply using a different color science method or a different weighting of some existing models. That’s certainly possible, since not all color correctors are built the same way. The most common approaches are Lift/Gamma/Gain and Shadows/Mids/Highlights. Be careful with naming. Just because something uses the terminology of Shadows, Midtones, and Highlights does not mean that it also uses the SMH color science model. Many tools use the Lift/Gamma/Gain model, but in fact call the controls shadows (Lift), mids (Gamma), and highlights (Gain). Another term you may run across in some correction tools is Set-up. This is typically used for control of shadows (equal to Lift), but can also function as an offset control that raises the level of the entire image. Avid Symphony employs this solution. Finally, both Symphony and Adobe SpeedGrade use what has been dubbed a 12-way color corrector, where each range is further subdivided into its own subset of shadows, mids, and highlights controls.

An LGG model provides broad control of shadows and highlights, with the midtones control working like a curve that covers the whole range, but with the largest effect in the middle. An SMH model normally divides the levels into three distinct, precisely overlapping ranges – much like a three-band audio equalizing filter. A number of color correctors add a luma range control, which gives the user the ability to change how much of the image a specific range will affect. In other words, how broad is the reach of the shadows, mids, or highlights control? This is like a Q control in an audio equalizer, where you change the shape of the envelope at a certain frequency.
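
To make the EQ analogy concrete, here is a minimal Python sketch of how an SMH-style corrector might weight its three ranges. The math is purely illustrative – every product uses its own weighting curves – but it shows how each control only reaches part of the tonal scale, and how a luma range parameter behaves like an EQ’s Q.

def smh_weights(x, q=1.0):
    """Return (shadows, mids, highlights) weights for a normalized luma value x.
    Larger q narrows the shadow and highlight bands and widens the mids,
    the same idea as a Q control on an audio equalizer."""
    shadows = max(0.0, 1.0 - x * 2.0 * q)             # strongest near black
    highlights = max(0.0, (x - 1.0) * 2.0 * q + 1.0)  # strongest near white
    mids = max(0.0, 1.0 - shadows - highlights)       # bulges in the middle
    return shadows, mids, highlights

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(v, [round(w, 2) for w in smh_weights(v)])

A midtones move scaled by these weights bulges the middle of the levels while leaving absolute black and white untouched – the graceful S-curve behavior described below.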

Red Giant’s Magic Bullet Looks offers both color correction models with two different tools – the 4-way color corrector (SMH) and the Colorista color corrector (LGG). When you adjust the midrange control of their 4-way, the result is a graceful S-shaped curve to the levels on the waveform.

To study the effect of an LGG-based corrector, test the ramp. The shadows control (Lift) will raise or lower the dark areas of the image without changing the absolute highlights. The diagonal line of the ramp on the waveform essentially pivots, hinged at the 100 IRE point. Conversely, changing the highlights control (Gain) pivots the line, which stays pinned at 0 IRE (black). When you adjust the midtones control (Gamma), you create a curve in the line, which stays pinned at 0 and 100 IRE at either end. In this way you are effectively “expanding” or “compressing” the levels in the middle portion of your image without changing the position of your black or white points.
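
Here is a similar sketch of a Lift/Gamma/Gain transfer function, assuming the commonly published formulation (individual correctors vary in their exact math and ranges):

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Apply LGG to a normalized luma value x, where 0.0 = 0 IRE and 1.0 = 100 IRE."""
    x = x + lift * (1.0 - x)          # Lift pivots the ramp, hinged at 100 IRE
    x = max(0.0, x) ** (1.0 / gamma)  # Gamma bends a curve, pinned at 0 and 100 IRE
    return x * gain                   # Gain pivots the ramp, hinged at black

# The ramp test: raising Gamma expands the middle of the ramp,
# while 0 IRE and 100 IRE stay put - no clipping.
ramp = [i / 10.0 for i in range(11)]
print([round(lift_gamma_gain(v, gamma=1.4), 3) for v in ramp])

Run against the ramp, a midtones (Gamma) move behaves exactly as described: the end points hold while the middle expands or compresses. A control that instead offsets or scales the entire range – which is how the FCPX Midtones slider appears to behave – would push the top of the ramp past 100 IRE.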

How the various color correction tools react

Looking at the luma control for the Midtones, two things are clear. First, all of the other tools are using the LGG color science model. It’s not clear what the Color Wheels are using, but it isn’t SMH, as there is no bulge or S-curve visible in the scope. Second, the Color Wheels quickly drive the image levels into clipping, while the other tools generally keep black and white levels in place. In essence, the Midtones control affects the image more like a master or offset control would, than like a typical mids or Gamma control. Yet, clearly Apple’s Color Board controls adhere to the standard LGG model. The concern, of course, is clipping. In the test image of the man walking on the village street, the sunlit building walls on the opposite side of the street become overexposed and risk being clipped when the Color Wheels are used.

What about color? As a simple test, I next shifted the Midtones puck toward yellow. Bear in mind that the range of each of these controls is different, so you will see varying degrees of yellow intensity. Nevertheless, the way the control should work is that some pure black and white is preserved at the top and bottom of the video levels. All of these tools maintain that, except for the Color Wheels. There, the entire image turns yellow, effectively making the hue offset puck function more like a tint control.

One other issue to note is that the Color Wheels offer an extraordinarily wide control range. The hue offset control’s RGB intensity values go from 0 (center of the wheel) to 1023. However, the puck icon can only go to the rim of the wheel, which it hits at about 200. With a mouse (or numerical entry), you can keep going well past the stop of the wheel icon – five times farther, in fact. The image not only becomes very yellow in this case, but you can easily lose track of your control, since the GUI position is no longer relevant.

The working theory

The big question is why the Color Wheels don’t conform to established principles, when in fact the Color Board controls do. Until there is some further clarification from Apple, one possible explanation involves HDR. FCPX 10.4 introduced high dynamic range (HDR) features. One of the various HDR standards is Rec. 2020 PQ. In that color space, the 0-100 IRE limitations of Rec. 709 are expanded to 0-10,000 nits, with 0-100 nits being roughly the same brightness as we are used to with Rec. 709.
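
For reference, the PQ (SMPTE ST 2084) curve that underpins Rec. 2020 PQ can be expressed in a few lines of Python. The constants come from the published standard; notice how the traditional 0-100 nits range consumes roughly half of the signal scale, which is relevant to the waveform overlay comparison below.

# PQ constants per SMPTE ST 2084
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map absolute luminance (0-10,000 nits) to a normalized PQ signal value."""
    y = nits / 10000.0
    p = y ** m1
    return ((c1 + c2 * p) / (1.0 + c3 * p)) ** m2

print(round(pq_encode(100), 3))    # ~0.51 - traditional 100-nit white sits mid-scale
print(round(pq_encode(1000), 3))   # ~0.75
print(round(pq_encode(10000), 3))  # 1.0 - the 10,000-nit ceiling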

Looking at this image of the man walking along the street – where I’ve attempted to get a pleasing look with all of the tools – you’ll see that the Color Wheels in Rec. 709 don’t react correctly and will drive the highlights into a range to be clipped. However, in the bottom pane, which is the same image in Rec. 2020 PQ color space, the grade looks pretty normal. And, in practice, the Color Wheels controls work more or less the way I would have expected them to work. Yes, the same controls work differently in the different color spaces – properly in 2020 PQ and not in 709.

But why is that the case? I have no answer, but I do have a wild guess. Maybe, just maybe, the Color Wheels were designed for – or intended to only be used for – HDR work. Or maybe there’s a conversion or recalibration of the controls that hasn’t taken place yet in this version. If the tool is only calibrated for HDR, then its range and weighting will be completely wrong for Rec. 709 video. If you increase the Midtones luma of the ramp in both Rec. 709 and Rec. 2020 PQ, you’ll see a similar curve. In fact, if you overlay a screen shot of each waveform, placing the full Rec. 709 scope image over the bottom portion of the Rec. 2020 PQ scale, you’ll notice that they roughly align up to about 100 IRE/nits. It’s as if one is simply a slice out of the other.

Regardless of why, this is something where I would hope Apple will provide a white paper or other demonstration of what the best practices will be for using this tool effectively. If it isn’t intentional, and actually is a mistake, then I presume a fix will be forthcoming. In either case, put in your feedback comments to Apple.

A word about HDR

Over the course of testing this tool and this theory, I’ve done a bit of testing with the HDR color spaces in FCPX. If you want to know more about HDR, I would encourage you to check out these contrary blog posts by Stu Maschwitz and Alexis Van Hurkman. I tend to side with Stu’s point-of-view and am not a big fan of HDR.

The way Apple has implemented these features in Final Cut Pro X 10.4 is to allow the user to set and override color spaces. If you set up your project to be Rec. 2020 PQ (and set preferences to “show HDR as raw values”), then the viewer and a/v output (direct from the Mac, not through a hardware i/o device) are effectively dimmed through the Mac’s color profile system. When you grade the image based on the 0-10,000 nits scale, you’ll end up seeing an image that looks pleasing and essentially the same as if you were working in Rec. 709. However – and I cannot over-emphasize this – you are not going to be able to produce an image that’s truly compatible with Dolby Vision and that actually looks correct as HDR, unless you have the correct AJA i/o hardware and a proper display. And by display, I mean a top-end Dolby, Canon, or Sony unit, costing tens of thousands of dollars.

As I understand the PQ specs, the bulk of the higher range is for the highlights that are normally constrained or clipped in our current video systems. However, that 10,000 nits scale is weighted so that about 50% of the signal value is in the first 100 nits, making it of comparable brightness to the current 100 IRE. The rest of the range is for brighter information, like specular highlights. You don’t necessarily get more brightness in the shadow detail. Therefore, if you are grading a shot in FCPX in a 2020 PQ color space and you only have the computer display to go by, you’ll grade by eye as much as by scope. This means that to get a pleasing image, you will end up making the average appearance of the image brighter than it really should be. When this is viewed on a real HDR monitor, it will be painfully bright. Having a higher-nits computer display, like on the iMac Pro (up to 500 nits), won’t make much difference, unless maybe you crank the display brightness to its maximum (ouch!). “Mine goes to 11!”

Right now, HDR is the wild, wild west. If you are smart, you’ll realize that you don’t know what you don’t know. While it’s nice to have these new features in FCPX, they can be very dangerous in the wrong hands.

But that’s another matter. Right now, I just hope Apple (or one of the usual suspects, like Ripple Training, LumaForge, or Larry Jordan) will come out with more elaboration on the Color Wheels.

©2018 Oliver Peters

Stocking Stuffers 2017

It’s holiday time once again. For many editors that means it’s time to gift themselves with some new tools and toys to speed their workflows or just make the coming year more fun! Here are some products to consider.

Just like the tiny house craze, many editors are opting for their laptops as their main editing tool. I’ve done it for work that I cut when I’m not freelancing in other shops, simply because my MacBook Pro is a better machine than my old (but still reliable) 2009 Mac Pro tower. One less machine to deal with, which simplifies life. But to really make it feel like a desktop tool, you need some accessories along with an external display. For me, that boils down to a dock, a stand, and an audio interface. There are several stands for laptops. I bought both the Twelve South BookArc and the Rain Design mStand: the BookArc for when I just want to tuck the closed MacBook Pro out of the way in the clamshell mode and the mStand for when I need to use the laptop’s screen as a second display. Another option some editors like is the Vertical Dock from Henge Docks, which not only holds the MacBook Pro, but also offers some cable management.

The next hardware add-on for me is a USB audio interface. This is useful for any type of computer and may be used with or without other interfaces from Blackmagic Design or AJA. The simplest of these is the Mackie Onyx Blackjack, which combines interface and output monitor mixing into one package. This means no extra small mixer is required. USB input and analog audio output direct to a pair of powered speakers. But if you prefer a separate small mixer and only want a USB interface for input/output, then the PreSonus Audiobox USB or the Focusrite Scarlett series is the way to go.

Another ‘must have’ with any modern system is a Thunderbolt dock in order to expand the native port connectivity of your computer. There are several on the market, but it’s hard to go wrong with either the CalDigit Thunderbolt Station 2 or the OWC Thunderbolt 2 Dock. Make sure you double-check which version fits your needs, depending on whether you have a Thunderbolt 2 or 3 connection and/or USB-C ports. I routinely use both the CalDigit and OWC products. The choice simply depends on which one has the right combination of ports to fit your needs.

Drives are another issue. With a small system, you want small portable drives. While LaCie Rugged and G-Technology portable drives are popular choices, SSDs are the way to go when you need true, fast performance. A number of editors I’ve spoken to are partial to the Samsung Portable SSD T5 drives. These USB3.0-compatible drives aren’t the cheapest, but they are ultraportable and offer amazing read/write speeds. Another popular solution is to use raw (uncased) drives in a drive caddy/dock for archiving purposes. Since they are raw, you don’t pay for the extra packaging, power supply, and interface electronics with each drive, just to have it sit on the shelf. My favorite of these is the HGST Deskstar NAS series.

For many editors the software world is changing with free applications, subscription models, and online services. The most common use of the latter is for review-and-approval, along with posting demo clips and short films. Kollaborate.tv, Frame.io, Wipster.io, and Vimeo are the best known. There are plenty of options and even Vimeo Pro and Business plans offer a Frame/Wipster-style review-and-approval and collaboration service. Plus, there’s some transfer ability between these. For example, you can publish to a Vimeo account from your Frame account. Another expansion of the online world is in team workgroups. A popular solution is Slack, which is a workgroup-based messaging/communication service.

As more resources become available online, the benefits of large-scale computing horsepower are available to even single editors. One of the first of these new resources is cloud-based, speech-to-text transcription. A number of online services provide this functionality to any NLE. Products to check out include Scribeomatic (Coremelt), Transcriptive (Digital Anarchy), and Speedscriber (Digital Heaven). They each offer different pricing models and speech analysis engines. Some are still in beta, but one that’s already out is Speedscriber, which I’ve used and am quite happy with. Processing is fast and reasonably accurate, given a solid audio recording.

Naturally free tools make every user happy and the king of the hill is Blackmagic Design with DaVinci Resolve and Fusion. How can you go wrong with something this powerful and free with ongoing company product development? Even the paid versions with some more advanced features are low cost. However, at the very least the free version of Resolve should be in every editor’s toolkit, because it’s such a Swiss Army Knife application.

On the other hand, editors who need to learn Avid Media Composer need look no further than the free Media Composer | First. Avid has tried ‘dumbed-down’ free editing apps before, but First is actually built off of the same code base as the full Media Composer software. Thus, skills translate and most of the core functions are available for you to use.

Many users are quite happy with the advantages of Adobe’s Creative Cloud software subscription model. Others prefer to own their software. If you work in video, then it’s easy to put together alternative software kits for editing, effects, audio, and encoding that don’t touch an Adobe product. Yet for most, the stumbling block is Photoshop – until now. Both Affinity Photo (Serif) and Pixelmator Pro are full-fledged graphic design and creation tools that rival Photoshop in features and quality. Each of these has its own strong points. Affinity Photo offers Mac and Windows versions, while Pixelmator Pro is Mac only, but taps more tightly into macOS functions.

If you work in the Final Cut Pro X world, several utilities are essential. These include SendToX and XtoCC from Intelligent Assistance, along with X2Pro Audio Convert from Marquis Broadcast. Marquis’ newest is Worx4 X – a media management tool. It takes your final sequence and creates a new FCPX library with consolidated (trimmed) media. No transcoding is involved, so the process is lightning fast, although in some cases media is copied without being trimmed. This can reduce the media to be archived from TBs down to GBs. They also offer Worx4 Pro, which is designed for Premiere Pro CC users. This tool serves as a media tracking application, to let editors find all of the media used in a Premiere Pro project across multiple volumes.

Most editors love to indulge in plug-in packages. If you can only invest in a single, large plug-in package, then BorisFX’s Boris Continuum Complete 11 and/or their Sapphire 11 bundles are the way to go. These are industry-leading tools with wide host and platform support. Both feature mocha tracking integration and Continuum also includes the Primatte Studio chromakey technology.

If you want to go for a build-it-up-as-you-need-it approach – and you are strictly on the Mac – then FxFactory will be more to your liking. You can start with the free, basic platform or buy the Pro version, which includes FxFactory’s own plug-ins. Either way, FxFactory functions as a plug-in management tool. FxFactory’s numerous partner/developers provide their products through the FxFactory platform, which functions like an app store for plug-ins. You can pick and choose the plug-ins that you need when the time is right to purchase them. There are plenty of plug-ins to recommend, but I would start with any of the Crumplepop group, because they work well and provide specific useful functions. They also include the few audio plug-ins available via FxFactory. Another plug-in to check out is the Hawaiki Keyer 4. It installs into both the Apple and Adobe applications and far surpasses the built-in keying tools within these applications.

The Crumplepop FxFactory plug-ins now include Koji Advance, which is a powerful film look tool. I like Koji a lot, but prefer FilmConvert from Rubber Monkey Software. To my eyes, it creates one of the more pleasing and accurate film emulations around and even adds a very good three-way color corrector. This opens as a floating window inside of FCPX, which is less obtrusive than some of the other color correction plug-ins for FCPX. It’s not just for film emulation – you can actually use it as the primary color corrector for an entire project.

I don’t want to forget audio plug-ins in this end-of-the-year roundup. Most editors don’t feel too comfortable with a ton of surgical audio filters, so let me stick to suggestions that are easy-to-use and very affordable. iZotope is a well-known audio developer and several of its products are perfect for video editors. These fall into repair, mixing, and mastering needs. These include the Nectar, Ozone, and RX bundles, along with the RX Loudness Control. The first three groups are designed to cover a wide range of needs and, like the BCC video plug-ins, are somewhat of an all-encompassing product offering. But if that’s a bit rich for the blood, then check out iZotope’s various Elements versions.

The iZotope RX Loudness Control is great for accurate loudness compliance, and best used with Avid or Adobe products. However, it is not real-time, because it uses analysis and adaptive processing. If you want something more straightforward and real-time, then check out the LUFS Meter from Klangfreund. It can be used for loudness control on individual tracks or the master output. It works with most of the NLEs and DAWs. A similar tool to this is Loudness Change from Videotoolshed.

Finally, let’s not forget the iOS world, which is increasingly becoming a viable production platform. For example, I’ve used my iPad in the last year to do location interview recordings. This is a market that audio powerhouse Apogee has also recognized. If you need a studio-quality hardware interface for an iPhone or iPad, then check out the Apogee ONE. In my case, I tapped the Apogee MetaRecorder iOS application for my iPad, which works with both Apogee products and the iPad’s built-in mic. It can be used in conjunction with FCPX workflows through the integration of metadata tagging for Keywords, Favorites, and Markers.

Have a great holiday season and happy editing in the coming year!

©2017 Oliver Peters

HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a reliability and performance standard, characterized by the Z-series of workstation towers. HP has sought to extend what they call the “Z experience” to other designs, like mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second generation model of the Z1 series.

Most readers will associate the all-in-one concept with an Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit, which cannot be upgraded by the user (except for RAM), and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure their Z1 G2 from a wide range of components. The display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5, and three Xeon models. There are a variety of storage and graphics card choices and it supports up to 32GB of RAM. You may also choose between a Touch and non-Touch display. The Touch screen adds a glass overlay and offers finger or stylus interaction with the screen. Non-Touch screens have a matte finish, while Touch screens are glossy. You have a choice of operating systems, including Windows 7, Windows 8, and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a Xeon E3 (3.6GHz) quad-core, 16GB of RAM, optical drive and the NVIDIA K4100M graphics card. For storage, I selected one 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs that were set-up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promo discount.

An important, new feature is support for Thunderbolt 2 with an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam, and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single, 57-pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as laid completely flat for shipping and access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to unlock the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counter-balance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom center of the back of the chassis. Others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to operate in a standing or high-chair position looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display completely down to the desk surface, which put the bottom of the screen lower than my stationary 20” Apple Cinemas.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable for me, as I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as you’d get with a mouse or a pen and tablet – even when using an iPad stylus. Only menu and navigation operations worked in Photoshop – an application that seems natural for Touch – but no drawing tools. While I found the Touch option not to be that interesting for me, I did like the screen that comes with it. It’s glossy, which gives you nice density to your images, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system to function well and the “flat” design philosophy to be much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set up your preferences. If you prefer to pin application shortcuts to the Windows task bar or on the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, launching and operating any application was very fast. Naturally the difference from a cold start on the Z1 G2, as compared to my 2009 Mac Pro with standard 7200RPM drives, was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor panel, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer-level nomenclature for their settings. For instance, I found no way to accurately set a 6500K color temperature or a 2.2 gamma, based on how the sliders were labeled. Some of the NVIDIA software controls didn’t appear to work at all.

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make direct comparisons between the same applications and media on the Z1 G2 and on my 2009 eight-core Mac Pro. The Mac’s configuration included dual quad-core Xeon processors (2.26GHz), 28GB of RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly an “apples-to-apples” comparison, they do provide a logical benchmark for the type of machine a new Z1 G2 customer might be upgrading from.

In typical side-by-side testing with edited, single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 with a 7-layer, 4K timeline. The V1 track was a full-screen, base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task in just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration, and the superior performance of the NVIDIA K4100M card in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved the layers in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes, respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D Warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped at the medium display quality setting, but at full quality, frames started to drop at V6. When I added a drop shadow to the PIP clips, frames were dropped starting at V4 for full quality and V9 for medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. Like any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, plus QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, the unit is very quiet and the heat output feels lower than that of my Mac tower. In these various tests, I never heard the fans kick into high gear. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The Hobbit

Peter Jackson’s The Hobbit: An Unexpected Journey was one of the most anticipated films of 2012. It pushed new technological boundaries and presented many creative challenges to its editor. After working as a television editor, Jabez Olssen started his own odyssey with Jackson in 2000 as an assistant editor and operator on The Lord of the Rings trilogy. After assisting again on King Kong, he cut Jackson’s The Lovely Bones as the first feature film on which he was the sole editor. The director tapped Olssen again for The Hobbit trilogy, where, unlike on the Rings trilogy, he is the sole editor on all three films.

Much like the Rings films, all production for the three Hobbit films was shot in a single eighteen-month stretch. Jackson employed as many as 60 RED Digital Cinema EPIC cameras rigged for stereoscopic acquisition at 48fps – double the standard rate of traditional feature photography. Olssen was editing the first film in parallel with the principal photography phase, on a very tight schedule that allowed only about five months after the production wrapped to lock the cut and get the film ready for release.

To get The Hobbit out on such an aggressive schedule, Olssen leaned hard on a post production infrastructure built around Avid’s technology, including 13 Media Composers (10 with Nitris DX hardware) and an ISIS 7000 with 128TB of storage. Peter Jackson’s production facilities are located in Wellington, New Zealand, where fibre channel connections tie Stone Street Studio, Weta Digital, Park Road Post Production and the cutting rooms to the Avid ISIS storage. The three films combined total 2,200 hours (1,100 x two eyes) of footage – the equivalent of 24 million feet of film. In addition, an Apace active backup solution with 72TB of storage was installed, which could take over immediately if the ISIS failed.

The editorial team – headed up by first assistant editor Dan Best – consisted of eight assistant editors, including three visual effects editors. According to Olssen, “We mimicked a pipeline similar to a film project. Think of the RED camera .r3d media files as a digital negative. Peter’s facility, Park Road Post Production, functioned as the digital lab. They took the RED media from the set and generated one-light, color-corrected dailies for the editors. 24fps 2D DNxHD36 files were created by dropping every second frame from the files of one ‘eye’ of a stereo recording. To keep track of the paired 48fps frames, we used 24fps timecode with the difference between the two frames being a period instead of a colon – frame A would be 11.22.21.13 and frame B would be 11:22:21:13. This was a very natural solution for editing and a lot like working with single-field media files on interlaced television projects. The DNxHD files were then delivered to the assistant editors, who synced, subclipped and organized clips into the Avid projects. Since we were all on ISIS shared storage, once they were done, I could access the bins and the footage was ready to edit, even if I was on set. For me, working with RED files was no different than a standard film production.”
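That A/B timecode scheme is simple enough to express in code. Here’s a minimal sketch – purely illustrative, and not anything from the actual Hobbit pipeline – of how a 48fps frame index could map to the paired 24fps labels:

```python
# Sketch of the A/B timecode convention described above: two 48fps frames
# share the digits of one 24fps timecode; frame A uses periods, frame B colons.

def ab_timecode(frame48, fps=24):
    """Return the A/B timecode string for a 0-based 48fps frame index."""
    frame24, is_b = divmod(frame48, 2)      # two 48fps frames per 24fps frame
    ff = frame24 % fps
    ss = (frame24 // fps) % 60
    mm = (frame24 // (fps * 60)) % 60
    hh = frame24 // (fps * 3600)
    sep = ":" if is_b else "."              # A-frame: periods, B-frame: colons
    return sep.join(f"{v:02d}" for v in (hh, mm, ss, ff))

# The two frames of one 24fps edit frame share the same digits:
print(ab_timecode(0))   # 00.00.00.00  (frame A)
print(ab_timecode(1))   # 00:00:00:00  (frame B)
```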

Olssen continued, “A big change for the team since the Rings movies is that the Avid systems have become more portable. Plus, the fibre channel connection to ISIS allows us to run much longer distances. This enabled me to have a mobile cart on the set with a portable Media Composer system connected to the ISIS storage in the main editing building. In addition, we also had a camper van outfitted as a more comfortable mobile editing room with its own Media Composer; we called it the EMC – ‘Editorial Mobile Command’. So, I could cut on set while Peter was shooting, using the cart and, as needed, use the EMC for some quick screening of edits during a break in production. I was also on location around New Zealand for three months and during that time I cut on a laptop with mirrored media on external drives.”

The main editing room was set up with a full-blown Nitris DX system connected to a 103” plasma screen for Jackson. The original plan was to cut in 2D and then periodically consolidate scenes to conform a stereo version for screening in the Media Composer suite. Instead they took a different approach. Olssen explained, “We didn’t have enough storage to have all three films’ worth of footage loaded as stereo media, but Peter was comfortable cutting the film in 2D. This was equally important, since more theaters displayed this version of the film. Every few weeks, Park Road Post Production would conform a 48fps stereo version so we could screen the cut. They used an SGO Mistika system for the DI, because it could handle the frame rate and had very good stereo adjustment tools. Although you often have to tweak the cuts after you see the film in a stereo screening, I found we had to do far less of that than I’d expected. We were cognizant of stereo-related concerns during editing. It also helped that we could judge a cut straight from the Avid on the 103” plasma, instead of relying on a small TV screen.”

The editorial team was working with what amounted to 24fps high-definition proxy files for the stereo 48fps RED .r3d camera masters. Edit decision lists were shared with Weta Digital and Park Road Post Production for visual effects, conform and digital intermediate color correction/finishing at 2K resolution. Based on these EDLs, each unit would retrieve the specific footage needed from the camera masters, which had been archived onto LTO data tape.

The Hobbit trilogy is a heavy visual effects production, which had Olssen tapping deep into the Media Composer toolkit. Olssen said, “We started with a lot of low resolution, pre-visualization animations as placeholders for the effects shots. As the real effects started coming in, we would replace the pre-vis footage with the correct effects shots. With the Gollum scenes we were lucky enough to have Andy Serkis in the actual live action footage from the set, so it was easy to visualize how those scenes would look. But other CG characters, like Azog, were captured separately on a performance capture stage. That meant we had to layer separately-shot material into a single shot. We were cutting vertically in the timeline, as well as horizontally. In the early stages, many of the scenes were a patchwork of live action and pre-vis, so I used PIP effects to overlay elements to determine the scene timing. Naturally, I had to do a lot of temp green-screen composites. The dwarves are played by full-size actors, and for many of the scenes we had to scale them down and reposition them in the shot, so we could see how the shots were coming together.”

As with most feature film editors, Jabez Olssen likes to fill out his cut with temporary sound effects and music, so that in-progress screenings feel like a complete film. He continued, “We were lucky to use some of Howard Shore’s music from the Rings films for character themes that tie The Hobbit back into The Lord of the Rings. He wrote some nice ‘Hobbity’ music for those. We couldn’t use too much of it, though, because it was so familiar to us! The sound department at Park Road Post Production uses Avid Pro Tools systems. They also have a Media Composer connected to the same ISIS storage, which enabled the sound editors to screen the cut there. From it, they generated QuickTime files for picture reference and audio files so the sound editors could work locally on their own Pro Tools workstations.”

Audiences are looking forward to the next two films in the series, which means the adventure continues for Jabez Olssen. On such a long-term production many editors would be reluctant to update software, but not this time. Olssen concluded, “I actually like to upgrade, because I look forward to the new features. Although, I usually wait a few weeks until everyone knows it’s safe. We ended up on version 6.0 at the end of the first film and are on 6.5 now. Other nonlinear editing software packages are designed more for one-man bands, but Media Composer is really the only software that works for a huge visual effects film. You can’t overstate how valuable it is to have all of the assistant editors able to open the same projects and bins. The stability and reliability are the best. It means that we can deliver challenging films like The Hobbit trilogy on a tight post production schedule and know the system won’t let us down.”

Originally written for Avid Technology, Inc.

©2013 Oliver Peters

Post Production Mastering Tips

The last step in commercial music production is mastering. Typically this involves making a recording sound as good as it possibly can through the application of equalization and multiband compression. In the case of LPs and CDs (remember those?), this also includes setting up the flow from one tune to the next and balancing out levels so the entire product has a consistent sound. Video post has a similar phase, which has historically been in the hands of the finishing or online editor.

That sounds so sweet

The most direct comparison between the last video finishing steps and commercial music mastering is how filters are applied in order to properly compress the audio track and to bring video levels within legal broadcast specs. When I edit projects in Apple Final Cut Pro 7 and do my own mixes, I frequently use Soundtrack Pro as the place to polish the audio. My STP mixing strategy employs tracks that route into one or more subgroup buses and then a master output bus. Four to eight tracks of content in FCP might become twenty tracks in STP. Voice-over, sync-sound, SFX and music elements get spread over more tracks and routed to appropriate subgroups. These subgroups then flow into the master bus. This gives me the flexibility to apply specific filters to a track and have fine control over the audio.

I’ll usually apply a compressor across the master bus to tame any peaks and beef up the mix. My settings involve a low compression ratio and a hard limit at -10dB. The objective is to keep mix levels reasonable, so as to preserve dynamic range. I don’t want to slam the meters and drive the signal hard into compression. Even when I do the complete mix in Final Cut, I will still use Soundtrack Pro simply to compress the composite mix, because I prefer its filters. With reference tone set at -20dB, these levels will match the nominal levels of most digital VTRs. If you are laying off to an analog format, such as Betacam-SP, set your reference tone to -12dB and match the input on the deck to 0VU.
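To make those settings concrete, here is a minimal Python sketch of that master bus chain – a gentle low-ratio compressor feeding a hard -10dB limiter. The threshold, ratio and ceiling values mirror the description above, but the DSP is deliberately simplified and is not Soundtrack Pro’s actual processing:

```python
import numpy as np

def db_to_lin(db):
    """Convert a dBFS value to linear amplitude."""
    return 10.0 ** (db / 20.0)

def master_bus(samples, thresh_db=-20.0, ratio=2.0, ceiling_db=-10.0):
    """Low-ratio compression above thresh_db, then a hard limit at ceiling_db.
    samples: float audio in the -1.0..1.0 range."""
    thresh, ceiling = db_to_lin(thresh_db), db_to_lin(ceiling_db)
    mag = np.abs(samples.astype(np.float64))
    over = mag > thresh
    # gentle compression: reduce the overshoot by the ratio (2:1 here)
    mag[over] = thresh * (mag[over] / thresh) ** (1.0 / ratio)
    # brickwall limit so no peak ever exceeds the -10dB ceiling
    np.minimum(mag, ceiling, out=mag)
    return np.sign(samples) * mag

# A -20dB reference tone passes untouched; a full-scale peak is pinned at -10dB.
tone = db_to_lin(-20.0) * np.sin(np.linspace(0.0, 2.0 * np.pi * 1000.0, 48000))
print(master_bus(np.array([1.0])), master_bus(tone).max())
```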

Getting ready for broadcast

The video equivalent is the broadcast safe limiting filter. Most NLEs have one, including Avid Media Composer and both old and new versions of Final Cut. This should normally be the last filter in the chain of effects. It’s often best to apply it to a self-contained file in FCP 7, a higher track in Media Composer or a compound clip in FCP X. Broadcast specs vary with the network or station receiving your files or tapes, so check first. It’s worth noting that many popular effects, like glow dissolves, violate these parameters. You want the maximum luminance level (white peaks) limited to 100 IRE and the chrominance not to exceed 110, 115 or 120, depending on the specs of the broadcaster to whom you are delivering. In short, the chroma should stay within the outer ring of a vectorscope. I usually turn off any RGB limiting to avoid artifacts.
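Under the hood, the luminance half of such a filter is little more than a clamp into the legal range. Here’s a toy sketch, assuming 8-bit Rec. 709 video where 0 IRE sits at code value 16 and 100 IRE at 235 – an illustration of the idea, not any specific NLE’s implementation:

```python
import numpy as np

def broadcast_safe_luma(y_plane, black_ire=0.0, white_ire=100.0):
    """Clamp an 8-bit luma plane so white peaks never exceed white_ire."""
    code = lambda ire: 16.0 + (ire / 100.0) * (235.0 - 16.0)  # IRE -> code value
    return np.clip(y_plane, code(black_ire), code(white_ire)).astype(np.uint8)

frame = np.array([[4, 128, 254]], dtype=np.uint8)  # super-black, mid-gray, super-white
print(broadcast_safe_luma(frame))                  # [[ 16 128 235]]
```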

It’s often a good idea to reduce overall video levels by about five percent prior to applying a broadcast safe filter, simply so you don’t clip too harshly. That’s the same principle I’ve applied to the audio mix. For example, I will often first apply a color correction filter to slightly lower the luminance level and reduce chroma. In addition, I’ll frequently use a desaturate highlights or desaturate lows filter. As you raise midrange or highlight levels and crush shadows during color correction, the chroma is driven correspondingly higher and/or lower. Reds, blues and yellows are most susceptible, so it’s a good idea to tone down chroma saturation above 90 IRE and below 20 IRE. Most of these filters let you feather the transition range and the percentage of desaturation, so play with the settings to get the most subtle result. This keeps the overall image vibrant, but still legal.
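The feathered desaturation can also be reduced to a simple weighting curve. Here’s an illustrative sketch – the IRE limits, feather width and desaturation amount are placeholders you would tune by eye, not the parameters of any particular filter:

```python
import numpy as np

def desat_weight(luma_ire, lo=20.0, hi=90.0, feather=10.0, amount=0.75):
    """Per-pixel saturation multiplier: 1.0 leaves chroma alone; smaller
    values pull it toward gray as luma moves past the hi/lo limits."""
    hi_ramp = np.clip((luma_ire - hi) / feather, 0.0, 1.0)  # fade out above 90 IRE
    lo_ramp = np.clip((lo - luma_ire) / feather, 0.0, 1.0)  # fade out below 20 IRE
    return 1.0 - amount * np.maximum(hi_ramp, lo_ramp)

# Mid-tones untouched; highlights and shadows progressively desaturated.
print(desat_weight(np.array([50.0, 95.0, 105.0, 10.0])))  # [1. 0.625 0.25 0.25]
```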

Let me interject at this point that what you pay for when using a music mastering specialist are the “ears” (and brain) of the engineer and their premium monitoring environment. The same should be true of a video finishing environment. Without proper audio and video monitoring, it’s impossible to tell whether the adjustments being made are correct. Accurate speakers, calibrated broadcast video monitors and video scopes are essential tools. Having said that, software scopes and modern computer displays aren’t necessarily inaccurate. For example, the software scopes in FCP X and Apple’s ColorSync technology are quite good. Tools like the Blackmagic Design UltraScope, HP DreamColor displays or Apple Cinema Displays do provide accurate monitoring in lower-cost situations. I’ve compared the FCP X Viewer on an iMac to the output displayed on a broadcast monitor fed by an AJA IoXT and found that the two matched surprisingly well. Ultimately it comes down to trusting an editor who knows how to get the best out of any given system.

Navigating the formats

Editors work in a multi-standard world. I frequently cut HD spots that run as downconverted SD content for broadcast, as well as at a higher HD resolution for the internet. The best production and post “lingua franca” format today is 1080p/23.976. This format fits a sweet spot for the internet, Blu-ray, DVD and modern LCD and plasma displays. It’s also readily available in just about every camera at any price range. Even if your product is only intended to be displayed as standard definition today, it’s a good idea to future-proof it by working in HD.

If you shoot, edit and master at 1080p/23.976, then you can easily convert to NTSC, 720p/59.94 or 1080i/29.97 for broadcast. The last step for many of my projects is to create deliverables from the master file. Usually this involves creating three separate broadcast files – one SD and two HD – using either ProRes or uncompressed codecs. I will also generate an internet version (without bars, tone, countdown or slate) as a high-quality H.264 file in the 720p/23.976 format. Either .mov or .mp4 is fine.

Adobe After Effects is my tool of choice for these broadcast conversions, because it does high-quality scaling and adds proper cadences. I follow these steps (a rough command-line equivalent is sketched after the list).

A) Export a self-contained 1080p/23.976 ProResHQ file from FCP 7 or X.

B) Place that into a 720×486, 29.97fps After Effects D1 composition and scale the source clip to size. Generally this will be letterboxed inside the 4×3 frame.

C) Render an uncompressed QuickTime file, which is lower-field ordered with added 2:3 pulldown.

D) Re-import that into FCP 7 or X using a matching sequence setting, add the mixed track and format it with bars, tone, countdown and slate.

E) Export a final self-contained broadcast master file.

F) Repeat the process for each additional broadcast format.
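For those who would rather script steps A through C, ffmpeg can produce a rough equivalent. This is a sketch using a different tool, not what After Effects does internally – verify the cadence, field order and levels on a scope before delivery, and treat the file names as placeholders:

```python
import subprocess

# 1080p/23.976 master -> letterboxed 4x3 D1 with 2:3 pulldown at 29.97,
# written as uncompressed 10-bit (v210) QuickTime. With D1's non-square
# pixels, a 16:9 image letterboxed into 720x486 occupies about 364 rows.
subprocess.run([
    "ffmpeg", "-i", "master_1080p23.mov",
    "-vf", ("scale=720:364:flags=lanczos,"
            "pad=720:486:(ow-iw)/2:(oh-ih)/2,"
            "telecine=first_field=bottom"),   # adds 2:3 pulldown, lower field first
    "-c:v", "v210",
    "-c:a", "copy",
    "sd_d1_2997.mov",
], check=True)
```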

Getting back there

Archiving is “The $64,000 Question” for today’s digital media shops. File-based mastering and archiving introduce dilemmas that didn’t exist with videotape. I recommend always exporting a final mixed master file along with a split-track, textless submaster. QuickTime files support multi-channel audio configurations, so building such a file with separate stereo stems for dialogue, sound effects and music is very easy in just about any NLE. Self-contained QuickTime movies with discrete audio channels can be exported from both FCP 7 and FCP X (using Roles).

Even if your NLE can’t export multi-channel master files, export the individual submixed elements as .wav or .aif audio files for future use. Along with the split audio configuration, the submaster should be textless – remove any titles and logos. By keeping these two files (master and submaster), it’s very simple to make most future revisions without ever having to restore the original editorial project. Naturally, one question is which codec to use for future access. The preferred codec families these days are Avid DNxHD, Apple ProRes, uncompressed, OP1a MXF (XDCAM) and IMX. FCP editors will tend towards ProRes and Avid editors towards DNxHD, but uncompressed is very viable given the low cost of storage. For feature films, another option to consider would be image sequences, such as a string of uncompressed TIFF or DPX files.
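If the NLE can’t build the multi-channel file itself, ffmpeg can mux the textless picture and the stems into one self-contained QuickTime. A sketch, with hypothetical file names:

```python
import subprocess

# Mux a textless submaster with discrete stereo stems for dialogue,
# effects and music. All file names here are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "textless_video.mov",
    "-i", "stem_dialogue.wav",
    "-i", "stem_sfx.wav",
    "-i", "stem_music.wav",
    "-map", "0:v", "-map", "1:a", "-map", "2:a", "-map", "3:a",
    "-c:v", "copy",            # keep the video untouched
    "-c:a", "pcm_s24le",       # 24-bit PCM audio tracks
    "submaster_splits.mov",
], check=True)
```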

Whichever format you standardize on, make multiple copies. LTO data tape is considered the best storage medium, but for small files, like edited TV commercial masters, optical media – DVD-ROM, Blu-ray and XDCAM discs – are likely the most robust. This is especially true in the case of water damage.

The typical strategy for small shops that don’t want to invest in LTO drives is three-pronged.

A) Store all camera footage, elements and masters on a RAID array for near-term editing access.

B) Back-up the same items onto at least two copies of raw SATA or SSD hard drives for longer storage.

C) Burn DVD-ROM or BD-ROM copies of edited master files, submasters, project files and elements (music, VO, graphics, etc.).

A properly polished production with audio and video levels that conform to standards is an essential aspect of delivering a professional product. Developing effective mastering and archiving procedures will protect the investment your clients have made in a production. Even better, a reliable archive routine will bring you repeat business, because it’s easy to return to the project in the future.

Originally written for DV magazine/Creative Planet/NewBay Media, LLC

©2012 Oliver Peters