Luca Visual FX Film and Light Transitions

There are a number of effects that fall into the “grunge” or “distressed” category, including film grain, scratches and film defects, TV scanlines and light leaks. The last group is typically only available from stock footage houses as actual film clips, rather than as plug-in filter effects. They commonly consist of actual transfers of film reel run-outs, damaged film negative and so on. This makes it harder to use them for compositing, because the stock clips are usually full frame without a keyable alpha channel.

Luca Visual FX has added a new twist to this situation with the release of Film Effects and Light Transitions. These are available as two cost-effective effects packages totaling 63 effects. They can be purchased as one or both complete collections or as individual effects. As the names imply, the Film Effects collection is a set of clips that mimic light streaks, film reel light leaks and film leaders.

The Light Transitions are designed as light glow and streak effects. Both sets of effects may be used over full frame video for spice or as a transition effect to cover a cut between two video clips. Each transition effect generally has a few frames where the clip is full frame, which allows you to hide a cut underneath.

Luca Visual FX is already known in the FCP community for a series of Final Cut plug-ins, including Light Box, a set of 9 GPU-accelerated effects released through Noise Industries. What’s different about Film Effects and Light Transitions is that they are actual video clips in the QuickTime Animation (Millions+) format. The clips all carry an embedded alpha channel for keying. Since they are video clips and not filter plug-ins, they are compatible with any NLE or compositing application that can import QuickTime Animation files, which is just about everything. That’s great, because you aren’t locked into a specific NLE’s plug-in API, and a single purchase will work in most of your applications.

I tested these clips in Avid Media Composer, Apple Final Cut Pro, Adobe Premiere Pro, Apple Motion and Adobe After Effects. No issues or problems whatsoever. In the case of Avid, these files are transcoded upon import (Avid editors will need to invert the alpha), but that’s a very quick process, since the clips are all very short.

The standard way to apply an effects clip would be to add it to the next track above your video. When you use the clip as a transition, you’d line up the center point of the clip over the cut between the two video clips on the lower track. Since the Film Effects and Light Transitions clips are really a nest of a full color fill image plus an alpha signal, you can still modify the fill video.

For instance, you can alter the color or texture of the fill by adding a color correction, blur or other type of filter. Want to change the hue of the lights? Simply apply a color corrector to the fill image and shift the color wheel.

You can also use the alpha signal to cut a hole for a different fill image. For instance, in Avid Media Composer, simply open the nested clip and replace the fill with a different video clip. In Final Cut, place another video clip one track higher than the effects clip and set the Composite Mode of that new clip to Travel Matte – Alpha. Each of these procedures will let you use the Luca clip’s alpha channel to key the hole, yet fill it with completely different video.
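Under the hood, a travel matte is plain alpha compositing. Here is a rough sketch of the math, with small NumPy arrays standing in for video frames; this is purely illustrative and is not anyone's actual implementation:

```python
# Travel matte compositing sketch: the effect clip supplies the alpha,
# and any fill can be keyed in over the background.
import numpy as np

def travel_matte(fill, background, alpha):
    """Composite `fill` over `background` through `alpha` (values 0.0-1.0)."""
    a = alpha[..., np.newaxis]          # broadcast alpha across RGB channels
    return fill * a + background * (1.0 - a)

# Tiny 2x2 "frames": white fill over black background, half-strength matte
fill = np.ones((2, 2, 3))
background = np.zeros((2, 2, 3))
alpha = np.full((2, 2), 0.5)
out = travel_matte(fill, background, alpha)
print(out[0, 0])   # each channel lands at 0.5: 50% white over black
```

Replacing the fill in the nest, as described above, amounts to swapping the `fill` array while the alpha stays the same.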

In the case of Avid, you can also combine several clips to form a single fill clip within the nest. So, you aren’t limited to only using or replacing the fill, but could also blend the light clips with other video and then use that combination as a custom transition effect.

Since the Avid import step transcodes the media into Avid MXF, playback of the unrendered media is actually a bit smoother than inside FCP. Generally these files preview in real-time in most applications. Since they are short clips, rendering the Animation codec is only a matter of seconds.

All of these clips are 1920×1080, so they are big enough to cover all the HD and SD variations. Frame rates can be altered simply by applying variable speed modifiers if your application doesn’t automatically compensate for different frame rates. Like any other video clip, these can be resized and repositioned as needed. Another cool trick is to stack multiple clips.

For instance, if you like a combination of two different film effects, then simply stack the clips vertically on higher tracks: video on V1, effect A on V2 and effect B on V3.

Luca Visual FX’s Film Effects and Light Transitions collections are simply useful tools that most editors will want in their bag of tricks. There’s not a lot to describe and quite frankly, the demo reel tells you most of what you need to know. These effects are super-simple to use – unlike other filter packages with a ton of parameters that often make it hard to get the desired effect. Drag & drop or edit the clip to the timeline, make a few tweaks if you like and move on. Just as easy as that and you’ll add a great new look to your videos that clients will love.

©2010 Oliver Peters

Random Impressions – NAB 2010

I always enjoy the show – partly for the new toys – but also to hook up once again, face-to-face, with many friends in the business. I’m back home now and have had a day to decompress and make a few observations about the NAB convention.

First off, this was an extremely strong show for post. Tons of new versions of many of your favorite NLEs, color grading tools and other items. Second, the attendance was good. A bit more than last year – so still a “down” year compared with the peaks of a few years ago. Yet, I felt the floor density was higher than the 2009-to-2010 attendance numbers indicated. Thursday was still well-attended and not the ghost town I would have expected. So, on the purely subjective metric of how crowded the floor felt, I would have to say that daily averages were much better than in 2009.

If you want more specific product knowledge about what was on the floor, check out the various NAB reports at Videography, DV, TV Technology, Studio Daily, Post and Pro Video Coalition. I would encourage you to check out DV’s “(Almost) Live From the NAB Show Blog” – Part 1 and Part 2. The following thoughts fall under opinion and observation, so I’m bound to skip a lot of the details that you might really want to know.


It never ceases to amaze me when I see blog posts and forum comments that seem to expect Apple to pop up out of nowhere at the show with some amazing new version of Final Cut Studio. Have these folks been under a rock? Apple swore off trade shows several years ago and there’s no indication this policy has changed. They were never on the 2009 or 2010 exhibitors’ list and you can’t plan a 1500-3000 seat “user event” at an area ballroom without word getting out. So, I have no idea why people persist in this fantasy game.

The short term scenario is that it is unlikely that there’ll be a feature-laden new version of FCP/FCS any time soon. Maybe an incremental update like the “new” Final Cut Studio from last year, but I wouldn’t expect that until a few months down the road at the earliest. Or maybe not until 2011. Even if that doesn’t happen or even if the release strikes many as lackluster instead of awesome, it won’t change the breakdown of NLEs to any great degree. If you work with FCP today, you are getting the job done and probably relatively happy with the product. I don’t foresee any change in the product that would greatly alter that situation.

The more important news – as it pertains to NAB – is that Apple is doing a good job of attracting a number of new partners to its core technologies. Autodesk’s Smoke for Mac OS X is a good example, but they are just one of the over 300-strong developer community that constitutes the Final Cut ecosystem. A number of folks, such as ARRI, have licensed the ProRes codec, which is a pretty good endorsement of image quality, as well as workflow.


Certain versions tend to become milestones for a company’s software. I believe Media Composer 5 will be one of those. Avid renumbered versions with the release of Adrenaline several years ago, so this version 5 is really more like version 17. Numbers notwithstanding, other milestones for Media Composer had been the old version 5.x and version 7.x and I believe this newest release (targeted for June) will have just as much impact for Avid editors.

Media Composer 5 goes a long way towards keeping Avid editors in the fold and may even get some Avid-to-FCP “switchers” to come back. It adds limited 3rd party i/o hardware support, wider codec support (including RED and QuickTime through AMA), Pro Tools-style audio features and more FCP-like timeline editing functions. I highly doubt that it will really get any FCP diehards to convert, but it might pique the interest of those selecting their first high-end NLE. Down the road, I’ll have a proper review when it’s ready for actual use.

In addition to Media Composer 5, Avid also previewed its “editing in the cloud” concept. This is largely based on work already done by Maximum Throughput, which had been acquired by Avid. The demo looked pretty fluid, but I think it’s probably a number of years off. That’s OK as this was merely a technology preview; however, it does have relevance to large enterprises. The same concepts developed for editing over the internet clearly apply to editing on an internal companywide LAN or WAN system.

The direction that Avid seems to be taking here – along with its expansion of Interplay into a family of asset management products – sets them up to make the Professional Services department into an IBM-style corporate consultation service and profit center. In other words, if you are a large company or TV network and want to implement the “cloud” editing concept along with the necessary asset management tools, it’s going to take a knowledgeable organization to do that for you. Avid naturally has such expertise and is poised to leverage its internal assets into billable services. The small editing boutique may not have any interest in that concept, but if it makes Avid a stronger company overall, then I’m all for it.

Adobe Creative Suite 5

CS5 is just about here. It’s 64-bit and uses the Mercury Playback Engine. But will Premiere Pro really pick up steam as an NLE of choice? Like Media Composer, expect a real review in the coming months. I’ve used Premiere Pro in the past on paying gigs and didn’t have the sort of issues I see people complain about. These were smaller projects, so I didn’t hit some of the problems that have plagued Premiere Pro, which mainly relate to scalability. Although it’s not touted in the CS5 press info, it does appear to me that Adobe has done a lot of tweaking under the hood. This is related to the changes for 64-bit, so I really expect Premiere Pro CS5 to be a far better product than previous versions.

Whether that’s true or not is going to depend on your particular system. For example, much has been written about the Mercury Playback Engine. This is an optimization for the CUDA technology of specific high-end NVIDIA graphics cards. If you don’t have one of these cards installed, Premiere Pro shifts into software emulation. In some cases, it will be a big difference and in other cases it won’t. There’s lots of native codec and format support, but not all camera codecs are equal. Some are CPU-intensive, some GPU-intensive and others require fast disk arrays. If your system is optimized for DVCPRO HD, for example (older CPU, but fast arrays), you won’t see outstanding results with AVC-Intra, which is processor-intensive, requiring the newest, fastest CPUs.

There’s plenty in the other apps to sell editors on the CS5 Production Premium bundle, even if they never touch Premiere Pro. On the other hand, Premiere Pro CS5 is still pretty powerful, so editors without a vested interest in Avid, Apple or something else, will probably be quite happy with it.

One format to rule them all

With apologies to J. R. R. Tolkien, the hopes of a single media format seem to have been totally shattered at this NAB. When MXF and AAF were originally bounced around, the hope was for a common media and metadata format that could be used from camera to NLE to server without the need for translation, transcoding or any other sort of conversion.

I think that idea is toast, thanks to the camera manufacturers, who – along with impatient users – have pushed NLE developers to natively support just about every new camera format and codec imaginable. Since the software can handle it, we see NLEs evolving into a more browser-style format. This is the basis for how Premiere Pro and Final Cut Pro are structured. It is now becoming a model that others are embracing. Avid has AMA (a plug-in API for camera manufacturers), but you also see “soft import” in the Autodesk systems and “soft mount” in Quantel. All variations of the same theme. In fact, Apple is the “odd man out” in this scenario, forcing everything into QuickTime before FCP can work with it.

The three advanced formats that seem to have the broadest support today are Avid DNxHD, Apple ProRes and Panasonic AVC-Intra. To a lesser extent you can add AVCHD, Sony XDCAM (various flavors) and DV/DVCPRO/DV50/DVCPRO HD.

Stereo 3D

Just when we thought we had this HD thing figured out, the electronics manufacturers are pushing us into stereo 3D. There was plenty of 3D on the floor, but bear in mind that there are very few in the production community pushing to do this. It’s driven almost entirely by display manufacturers and studios looking to cash in on 3D theater distribution. I think we are headed for a 3D bubble that will eventually drop back into a niche, albeit a large niche for some.

Whether 3D is big or not doesn’t matter. It’s here now and something many of us will have to deal with, so you might as well start figuring things out. The industry is at the starting point and a lot is in flux. First off – the terminology. Walking around the floor there were references to Stereo 3D, S3D, Stereoscopic and so on. Or what about marketing slogans like Panasonic’s “from camera to couch”? Or Sony’s “make.believe”? Hmm… Did the marketing people really think that one through? New crew positions will evolve. Are you a “stereographer”? Or should you be called a “stereoscopist”?

I watched a lot of stereo 3D demos and I generally didn’t like most of them. Too much of 3D looks like a visual effect and not the way my eyes see reality. It also affects the creative direction. For instance, the clip of a Kenny Chesney 3D concert film, which was edited in a typical, fast-paced, rock-n-roll-style of cutting, was harder to adjust to than the nice slow camera moves from the Masters golf coverage.

I also observed that most 3D shots have an extremely deep depth-of-field – even more so in 3D than if you just looked at the shot in 2D. Shallow depth-of-field, like the gorgeous shots from the HDSLRs that everyone loves, doesn’t seem to work in 3D. I tended to pay attention to objects in the background, instead of the foreground, which I would presume is the opposite of what a director would have wanted. Many of the 3D shots felt like multi-planed pieces of animation. I have heard this referred to as “density zones” and it seems to be an anomaly of 3D shots. A lot of these shots simply had the effect of a moving version of the vintage View-Masters of the past.

Obviously a lot of companies will try to produce 3D content from archival 2D masters. To answer that need JVC showed a real-time 2D-to-3D converter, which was able to take standard programs and adjust shots on-the-fly using a set of sophisticated algorithms. This creates some interesting artifacts. First off, you have to interpolate the information so that alternating fields become left and right eye views. Viewing the result shows visible scanlines on an HD display. That seems to be a common problem with current 3D displays.

Second, there are errors in the 3D. Some of the computation is based on colors, which means that occasionally some objects are incorrectly placed due to their color. That part of an object (like a shirt or certain colors in a flag) will appear at a different point in Z-space compared to the rest of the object to which it is attached. My guess is that casual viewers will almost never see these things and therefore such products will be quite successful.

My whole take on this is that we simply don’t see real life the way that stereo 3D films force us to see. Many folks will disagree with me on this, including a number of scientists, but I feel that people largely view life in 2D. Your eyes converge on an object and focus (both physically and mentally) on that object. Other things are on the periphery, so you are aware of them, but not focused on them. When you want to look at something else, you change your attention and change your focus, much like a pan or tilt with a rack focus. By the same token, we don’t see the sort of extreme shallow depth-of-field caused by some lenses, but that somehow feels more natural. These issues may evolve as stereo 3D evolves, but for me, the most natural images were those that were closest to 2D. If that’s the case, then you have to conclude, “What’s the point?”

Disruptive technology

Blackmagic Design definitely generated the buzz this year. They bought the ailing DaVinci Systems company last year and promptly told everyone in the media that they had no intention of selling cheaper versions of these flagship systems. We now know that wasn’t true. It turns out that Blackmagic has once again been true to form – as everyone had initially thought – and brought a brand new Mac version of DaVinci Resolve to NAB at a very low price.

Upon acquiring DaVinci, Blackmagic decided to “end-of-life” all hardware products (like the DaVinci 2K), end all support contracts and focus on rebuilding the company around its flagship software products – Resolve (grading) and Revival (film restoration). They redesigned the signature DaVinci control surfaces to better fit into Blackmagic Design’s manufacturing pipeline. You can now purchase Resolve in three configurations: software-only Mac ($1K), software (Mac) with panels ($30K) or a Linux version with panels ($50K). Add to this the computer, high-end graphics cards and drives.

The software-only version will work with a panel like the Tangent Wave, so it will allow a user to create a color grading room with the “name brand” product at a ridiculously low price. This has plenty of folks on various forums pretty steamed. I suspect there will be three types of DaVinci products.

Customer A is the existing facility that upgrades from an older DaVinci to Resolve 7.0. These people will build a high-end room using a cluster of Linux towers. That’s not cheap, but will still cost far less than in the past.

Customer B will be the facility that wants to set up a less powerful “assist” station. It may also be the entrepreneurial colorist who decides to set up his own home system – either to branch out on his own – or to be able to work from home to avoid the commute.

Customer C – the one that scares most folks – is the shop that sets up a bare bones grading room around Resolve, just so they can say that they have a DaVinci room. There are obvious performance differences between Resolve on a Mac and a full-featured, real-time 2K-capable-and-more DaVinci suite, so the fear is that some folks will represent one as being the other.

No matter what, that’s the same argument made when FCP came out and also when Color arrived. Grant Petty (Blackmagic Design’s founder) has always been about empowering people by lowering the cost of entry. This is just another step in that journey. I think the real question will be whether owners who have set up Apple Color rooms will convert these to DaVinci. Color is good, but DaVinci has the brand recognition and there are plenty of experienced DaVinci colorists around. At an extra $1K for software, this might be an easy transition. Likewise for Avid shops. Media Composer’s and Symphony’s color correction tools are pretty long-in-the-tooth and those owners are looking for options. DaVinci makes a lot more sense for these shops than investing in the Final Cut Studio approach. Hard to tell at this point.

Digital cameras

RED had its RED Day event. I was registered, but blew it off. Too much other stuff to see and quite frankly, I have little or no interest in being teased by cameras that are yet to come (late or if ever). In my world, HDSLRs have far greater impact than RED One or Epic. Judging by the number of Canons and Nikons I saw being used on the floor for video coverage and podcasts, I’d have to say the rest of the world shares that experience.

The real news is that RED is no longer the only game if you want a digital cinematography camera. Sure there’s Sony and Panasonic, but more importantly there’s ARRI with the Alexa and Aaton with the Penelope-∆ (Delta). Both companies have a strong film pedigree and these new cameras coming this year and in 2011 will offer some options that will interest DPs. The Penelope is the oddest, in that it’s a hybrid film/digital camera using two interchangeable magazines – one for film and another that’s a digital back. It uses an optical viewfinder, so the sensor is attached to the digital magazine in precisely the same location as the film loop in the film magazine. This leaves it exposed when you swap magazines, but the folks at Aaton don’t see this as an issue, aside from occasional, simple cleaning. In reality, you probably won’t be swapping back and forth between film and digital on the same production.

In my opinion, where RED has gone wrong has been in placing resolution over workflow. No matter how smooth, native or fast current RED post workflow is, they will have a hard time shaking the common “slam” that their workflow is slow, hard or expensive. ARRI and Aaton offer somewhat lower resolution than RED, but they record both camera RAW and direct-to-edit formats. The Alexa records in ARRI RAW as well as ProRes, while Aaton uses DNxHD (for now) as its compressed file format. This means that the camera generates a file that is ready to edit in Avid or FCP straight from the shoot. If you are working in TV, that may be all you need. If you are doing a feature film, it becomes an offline editing format. The camera RAW file is preserved as a “digital negative”, which would be used for color grading and finishing. ARRI RAW is already supported by a number of systems, including Avid (with Metafuze) and Assimilate Scratch.

Pure magic

Last year I was “wowed” by Singular Software’s PluralEyes. This year it was GET from AV3 Software. GET is a phonetic search tool based on the same Nexidia technology that is licensed to Avid for Media Composer’s ScriptSync feature. Think of GET as Spotlight for speech. GET operates as a standalone application that can be used in conjunction with Final Cut Pro. It shouldn’t be thought of as just a plug-in.

The process is simple. First, index the media files that are to be reviewed. This only needs to happen once and the company claims that files can be indexed 200 times faster than real time. (ScriptSync’s indexing is extremely fast.) Once files are indexed, enter the search term into the GET search field and all the possible choices are located. Adjusting the accuracy up or down will increase or decrease the number of matching clips.

You can also do searches using multiple parameters, such as a search term plus a date or a reel number. Since the algorithms are phonetic, correct spelling is less important, as long as it sounds the same. GET includes its own player and clips imported into FCP will have markers at the matching points within the master clip. The shipping version of the product (in a few months) will also subclip the matching segments.
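GET’s engine is proprietary Nexidia technology, but the general idea of matching on sound rather than spelling can be illustrated with the classic Soundex algorithm – a far cruder scheme than anything Nexidia ships, shown here purely as a sketch of the concept:

```python
# Soundex: map a word to a letter-plus-three-digits code so that words
# that sound alike produce the same code despite spelling differences.
def soundex(word):
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    word = word.upper()
    result = word[0]                 # keep the first letter as-is
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:    # skip adjacent duplicates
            result += code
        if ch not in "HW":           # H and W do not reset the previous code
            prev = code
    return (result + "000")[:4]      # pad or truncate to four characters

print(soundex("Robert"))   # R163
print(soundex("Rupert"))   # R163 – different spelling, same phonetic code
```

A search built on codes like these would match “Rupert” when you typed “Robert”, which is the spirit of what the review describes, even if the real product is vastly more accurate.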

Other snapshots

There are a few other interesting things to mention.

CatDV from Square Box Systems has come along nicely. Many of my FCP friends have looked at this and characterize it as “what Final Cut Server should have been.” Check it out.

I ran into Boris Yamnitsky (Boris FX founder) at the show and he was more than happy to show me some of their upcoming release. Boris FX wasn’t officially exhibiting this year, but they are starting to roll out BCC 7, starting with the After Effects version (ready for CS5). It will include a number of key new features, like particles. What really caught my eye, though, was a color correction filter that combined functionality from both Colorista and Color. It’s a single layer color correction filter with 3 color wheels, but the twist is that you can apply masks with both inside and outside grades – all within the same instance of the filter.

Lastly, Lightworks is back. Well, it never actually left – just changed hands a few times. This placed it with EditShare after they acquired Geevs Broadcast last year. Rather than bang it out with the “A” NLE vendors, EditShare has opted to release it as open source and see what the development community can do for the product. It already has a small, loyal following among film editors and has a few, unmatched touches for collaborative editing. For instance, two editors can work on exactly the same sequence (not copies). One editor at a time has “record” control. As one makes changes, the other can see these updated on his own timeline!

See, I told you it was a fun year.

©2010 Oliver Peters

Glow as a color tool

I’m off to NAB, so here’s a quick tip that’s short and sweet. Video cameras often have a tough time with certain lighting and color balance situations and render an image with skewed colorimetry. This can yield skin tones that tend to be very monochromatic and pushed into the red-yellow-orange range. The new crop of HDSLRs, like the Canon EOS 5D Mark II, can be big offenders, as they normally produce images with high contrast and saturation. These issues can sometimes be fixed through color grading, including using secondary color correction. Sometimes, though, the correction simply shifts the orange tones toward magenta, leaving you with a cure that’s worse than the disease. In my opinion, the skin tone colors are less of an issue than the monochromatic range of color. In other words, warm tones may be fine if you can still achieve some highlights within the image that give you back a face with some dimensionality.




I recently cut a commercial for NYPD Pizzeria shot with the Canon 5D that exhibited some of these color issues. One way to achieve better highlights and a broader range of tones on skin is to use a glow filter. Here are some examples using the BorisFX BCC 6 AVX glow filter inside Avid Media Composer. There are numerous glow (or “chromatic glow”) filters on the market, including a number of freebies for FCP. BorisFX’s BCC 6 collection is bundled with new Media Composer purchases, so it makes a great fix-it tool in this situation.



The key to success is to really dial back on the default settings. For these shots, I started with the “White Luma Glow” preset. This simply adds a glow to the highlights without also adding color to the edges of the glow effect. Next, you need to adjust the threshold, intensity and radius sliders to taste. The objective is to latch onto the subtle skin highlights that do exist in the image and accentuate them, without making the glow so obvious that it looks like an effect. By doing so, you make the existing highlights a bit brighter and also change the color from yellow-orange to white.
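If you’re curious what those three sliders are actually doing, here’s a bare-bones luma glow sketched in Python/NumPy: threshold picks out the highlights, radius spreads them, and intensity controls how strongly they’re added back. A real BCC filter is far more refined; this only shows the mechanics, and every name here is my own:

```python
# Minimal luma-glow sketch: lift highlights and blend them back over the frame.
import numpy as np

def luma_glow(frame, threshold=0.7, radius=1, intensity=0.5):
    luma = frame.mean(axis=-1)                      # crude luminance estimate
    mask = np.where(luma > threshold, luma, 0.0)    # keep only the highlights
    # crude box blur, repeated `radius` times, to soften the highlight mask
    for _ in range(radius):
        mask = (mask + np.roll(mask, 1, 0) + np.roll(mask, -1, 0)
                     + np.roll(mask, 1, 1) + np.roll(mask, -1, 1)) / 5.0
    glow = mask[..., np.newaxis] * intensity
    return np.clip(frame + glow, 0.0, 1.0)          # add glow back, clamp

frame = np.full((4, 4, 3), 0.8)     # a flat, bright test "frame"
out = luma_glow(frame)
print(out[0, 0])                    # highlights lifted toward white
```

Note that adding the glow equally to all three channels is what pulls a yellow-orange highlight toward white, which is exactly the effect described above.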



The end result is a face that appears to have some better dimensional lighting. It can tend to make the face look a bit shiny, but I feel this is still preferable to a face that’s a single shade of color. Remember to be SUBTLE in using this effect!

©2010 Oliver Peters

Adobe Lightroom for video editors

Video editors and producers frequently have to deal with photos. This is especially true of many documentaries, where a large portion of the story consists of still images because no motion film or video exists of a given event. This requires a large collection of possible shots to be organized and prepared for the edit. The latter task often involves color correction, painting out defects (tears, dirt, scratches, etc.) and scaling/cropping to match the video format of the NLE.

There are plenty of tools to do these tasks and more often than not Adobe Photoshop is used. I’ve written before about Apple Aperture as a solution for this, but recently I’ve been turning more to Adobe Photoshop Lightroom 2. Both Aperture and Lightroom are great tools to use. For me, there’s no clear winner in this debate, but you can find plenty of passionate posts around the web by photographers and photo enthusiasts who extol the pros and cons of each application. Regardless, both offer powerful tools for a video editor who has to deal with stills. Apple just released Aperture 3 and Adobe currently has Lightroom 3 in public beta. Although these add new features, the general requirements that I will discuss are fine in either app’s 2.0 version.


Photoshop Lightroom and Aperture both work in the same general manner. You can view stills in a library or catalog, which is used as a form of asset management. You may choose to have the application handle all control of your stills and the locations where they are stored. Or, you may choose to do that organizing yourself at the finder level and then import these folders and files into the library. The application lets you work with high-res proxy files that link back to the unaltered original photos.

Changes made to these proxies are previewed by showing you a “live” update of the original at full resolution. Any alterations are only applied when a file is exported. This exported file is a copy with the adjustments “baked in”, so the original photo is always left unaltered. Obviously one key difference between the two applications is that Lightroom is a cross-platform solution, while Aperture is Mac-only. If you are on the Mac, then the choice of which to use is largely subjective for our purposes.

There are three things at the moment that appeal to me more in Lightroom than Aperture. First, I like that Adobe uses a terminology that’s consistent with the files and folders of the computer. I organize my images in folders on my hard drive. These can easily be imported into Lightroom as a folder and shown in a manner that maintains that order. Although Aperture allows essentially the same method, Apple prefers to hide the fact that you are looking at a folder on the hard drive, by organizing the photo folders according to “projects” and “albums”. Not a problem, but I just think that’s a way of dumbing things down, as well as unnecessarily mixing metaphors for the user. The second and third items for me are that Lightroom feels like there is better dual monitor support for the way I like to work and it is already a 64-bit application.

Lightroom layout

The Lightroom user interface is divided into five basic sections, which can be accessed via tabs in the upper right. These are Library, Develop, Slideshow, Print and Web. Library is where you see your catalog of assets. You can view the layout in several ways – grid, single image and others. Locations are on tabs down the left side, images in the middle and metadata on the right for the selected image. If you have two displays, then the selected image will be full-screen on the left monitor.

Develop is where you’d adjust, correct or alter the image. Pick an image from the filmstrip below and it loads into the center pane of the right monitor at one of the various, selectable proxy sizes. The same image is full-screen on the left monitor in either a “fit to screen” or a “1:1 pixel” display. The left portion of the right screen (your main working display), includes a navigator panel, presets and history. The image adjustment tools are on tabs down the right-hand side. I won’t go into any detail, since you can find plenty of in-depth tutorials around the web that discuss how these tools work. Suffice it to say that you have a powerful toolset for primary and secondary color-correction, stylistic effects, cropping, scaling and adjustment layer masking.

Slideshow offers you tools to control playback of a selected set of images on your desktop, complete with a presentation title. Print controls layouts for printing. Web does the same for displaying image collections on the web. Web choices include Flash, HTML gallery and Adobe Airtight display engines.

For the video producer

The toolset is great for fixing or giving a “look” to images, but the video producer is going to be most interested in how this makes life easier. That’s centered in three areas: cropping, metadata and export. Develop includes a cropping tool which can be restricted to certain ratios. If you want an image to fit neatly into the 16×9 of HD or 4×3 of SD, then set the constraints and the crop you draw will maintain this ratio. The same tool also allows freeform rotation – handy if you just need to move the image a few degrees clockwise or counter-clockwise to make the horizon level or correct for a badly angled tripod.
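The math behind a constrained crop is simple enough to sketch. This hypothetical helper (the function name and signature are mine, not Lightroom’s) returns the largest centered crop at a requested aspect ratio:

```python
# Largest centered crop of (width x height) pixels at a given aspect ratio.
def constrained_crop(width, height, aspect_w=16, aspect_h=9):
    target = aspect_w / aspect_h
    if width / height > target:          # source too wide: trim the sides
        crop_w, crop_h = round(height * target), height
    else:                                # source too tall: trim top/bottom
        crop_w, crop_h = width, round(width / target)
    x = (width - crop_w) // 2            # offsets that center the crop
    y = (height - crop_h) // 2
    return x, y, crop_w, crop_h

# A 4:3 photo scan cropped for an HD timeline
print(constrained_crop(4000, 3000))      # (0, 375, 4000, 2250)
```

That 4000×2250 crop then downscales cleanly to 1920×1080, which is the kind of one-step fit the constrained crop tool gives you.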

Photo organization is achieved through Smart Collections. Images can be tagged with additional metadata, such as key words and/or ratings. Smart Collection folders can be set up accordingly, so any images with the appropriate tag will automatically be filtered and pop up in the appropriate Smart Collection. A producer trying to cull 100 selected options from 1,000 possible images can easily tag the desired shots and automatically create a Smart Collection of the selects.

Once the images have been selected, then simply export one or more images for use in your NLE. Images can be exported from Library or Develop by right-clicking the image and choosing Export. Select a range of images to get more than one. This opens the export dialogue where you can select a preset or set new parameters for target export location, file format, size and color profile. You may also rename the exported file. So, exporting a batch of JPEGs – resized to 1920×1080 and labeled by project name and sequential number – is a simple one-step process. When the images are exported, any color correction, stylistic effects and cropping will be applied to the exported images.

©2010 Oliver Peters