Category Archives: Tips

Ice Mermaid premieres on KCTS

Ice Mermaid: Cold Resolve, the 42-minute documentary I made about Melissa Kegler’s quest to swim farther and colder than any American, is now streaming on KCTS. You can watch the trailer and stream the film via the KCTS 9 Passport app.

I’m thrilled that the film has received this distribution from PBS, and what’s more, I’ve recently learned that it has been selected for a major international film festival! More details on that coming soon in a separate post.

I first met Melissa a couple of years ago, when I was teaching a cinematography workshop at Seattle Film Institute. I wanted to give my students an opportunity to work with some high-quality footage, and I thought this subject might provide that in an environment that was accessible for the class. I found an open-water swimming group on Facebook, and the page owner, Oscar Brain, agreed to let us film him doing an early-morning swim at Golden Gardens. Here’s the little film we made from that:

While we were filming this, Oscar kept mentioning this person named Melissa Kegler. She’d swum the English Channel, around Manhattan, Catalina Island, on and on. That made me curious about her. After this short video was published on the group’s Facebook page, I noticed that Melissa liked it. So I reached out to her and asked if she’d be willing to be our second subject for the last half of the class. She agreed, and we made this little video together:

It’s worth mentioning that our inspiration for both of these videos was the film Nomadland. We definitely sought to mimic the early-morning light used so effectively in that film. Astute observers may even note the presence, in Oscar’s video, of the same lantern as the one carried by Frances McDormand in the Badlands campground scene.

While we were filming Melissa, she mentioned that she was kind of thinking about tackling the US record for ice swimming. I thought that sounded like something worth a longer project, and she agreed. So that was how the longer documentary project got started.

UW Medicine Cancer Fundraising Video

If there’s one video emblematic of the work I’ve done over the past 14 years, it’s probably this one, a fundraising video for cancer research. It’s work that I’m proud of. I’m lucky to have among my clients UW Medicine, an organization with a vast commitment to both treating this disease and researching its underlying causes and possible cures. I’m a pretty empathetic guy, but cancer has always been something that happened on the other side of the camera. Until now.

My cancer diagnosis

One day at the beginning of May I noticed a lump in my throat. I’ll spare you the details, but after a month of ruling out other things, I got the word: throat cancer. Mine stems from an HPV-16 infection when I was younger. I’m not alone in this – estimates are that at least 40 percent of the US population is positive for this virus, which is now preventable with a vaccine that wasn’t available in my day. My case is among the roughly 1 percent of those infections that develop into cancer.

The good news, my doctor told me, is that if I was willing to undergo a brutal regimen of chemotherapy and radiation, I had an 80 percent chance of beating it. Of course I took those odds. Most cancer patients are lucky to have a 50/50 chance.

It’s now the end of August, and my last day of treatment was August 28. I’m in my first full week of recovery from treatment. Along the way, radiation in my throat completely took away my ability to taste food. Chemo made me as sick as I imagined it would, and then some. But I kept my hair. Not my beard – radiation on my throat fried that. Losing my ability to taste took the biggest emotional toll. I’m a closet foodie, and love to cook. It came as a shock to me how disgusting eating is when everything tastes like cardboard. Right now, food is like poison. Luckily, I accepted the option of having a stomach tube surgically implanted at the beginning of treatment, because I’ve relied on it exclusively to get the calories my body desperately needs to survive this treatment. I’ve lost about 15 pounds, and it could have been far worse. They say I’ll get my ability to taste back in about a month.

My commitment

Cancer changes you. After all this suffering, I now know that I’m alive for a reason. Before cancer, I had a keen interest in making videos to support organizations in search of cures. But now, it’s different. Now, it comes from inside me. And it’s time for me to give it a voice.

After spending the past 15 years delivering projects in this space, I’m at the top of my game. And now, I’m doubling down. So if you work at a cancer research organization, hospital, or health care system that wants to unleash the power of storytelling, I want to partner with you. It’s personal for me now. And I can’t wait to see what we’d be capable of producing.

Let’s work together

I’m available and ready for assignments starting in mid-October 2023. For the video above, I traveled to Texas, and I’m happy to travel anywhere. I also speak decent Spanish. Please pass this post along to anyone you know who works in the cancer field.

My portfolio of related work: https://vimeo.com/danmccoomb

Contact info:

Dan McComb, dan@visualcontact.com, (206) 228-0780.

Call me, and let’s get to work making cancer a curable disease for more patients like me, and the extraordinary family in this video.

Interviews with natural light

I recently celebrated my 57th birthday. And the older and more experienced I get with a camera, the more attracted I am to shooting interviews in natural light. Maybe it’s just because I’m tired of all the work involved in setting up lights, grip and everything required to impose your vision on a scene. But I think it’s because, after all these years, I’ve finally realized that nature does it best.

Examples of natural light interviews

A couple of months ago I shot for a wonderful organization called Friendship Circle (not my client, but I was hired by the New York-based agency to DP most of the Seattle-based production). Take a look at these interview frames:

If I had this shot to do over, I’d add more negative fill to the camera-right side of her face, to give her face a little more dimensionality. It’s a little flat this way, but I love how creamy soft the natural light is.

The two images above are a mother and her daughter, who were interviewed in the same room. I simply reversed the camera angle between shots, to get a different look. By opening and partially closing the blinds, I was able to control the light enough to make it work.

The setup for the daughter’s interview

But shooting without lights doesn’t mean shooting without anything. If you examine the frame above, note the diffuse reflection in the upper camera-left side of her glasses. See that soft white glow? That’s the 4×4 bounce, shiny side up, that I placed to try and make that camera-left side of her face a little brighter, to get a bit of wrap.

On her mom’s frame, I needed to use the bounce on the camera-left (shadow) side of her face, because it would have been too dark without it. So depending on the situation, I’m either adding a little light or taking a little away. And this can be achieved without light fixtures, just by carefully placing the subjects in the frame.

Natural light (with a little help)

The edge light on this frame is a little too hot. The correct intensity is in the frame below.

In the two-shot of the couple, I’ve placed them so that they are lit by a large bank of windows in their living room, with the kitchen behind them. There are some lovely small lights in the background that add interest, and the window in the kitchen provides motivation for a light (but in this case, it doesn’t provide the amount of light I needed to separate them from the background). So I added one small light, an edge light as it’s called, just to make them pop off that background. The shot isn’t entirely naturally lit, then, although it would have worked fine without the addition. Adding it felt like a nice extra touch.

Because it wasn’t the key light, setting up this edge light (a Dedo DLED-7) was super easy – I just placed it on the floor, aimed upward with a parallel beam adapter. On a c-stand with a boom arm, I placed a #3 Cine Reflect Lighting System panel into the beam, which gives a nice soft light that is easy to focus without needing to set any flags. Done.

Naturally wrapping window light

I love this face. The way the natural light wraps around it in this shot, it’s like a painting. I would have had to set up a very large source (with a lot of flags to control the light spill) to achieve this look artificially. But with natural window light, with a little cutting and bouncing, it just works.

On another project, for UW Medicine, I shot a doctor interview last week in a conference room that had a long row of windows along a southern exposure (see frame above). Luckily for me, the interview was shot during the day when the sun was at a high enough angle that it wasn’t reaching too far into the room, which allowed me to use the windows as soft key sources of light (sky blue). Note that if the weather had been mixed, as is so often the case here in Seattle, with partial clouds and sun breaks, this wouldn’t have worked. The exposure would have fluctuated with the cloud movement.

The setup for frame above, using blinds to control light entering the conference room with bounced light on the fill side.

To make this shot work, I used a 4×4 floppy to darken the background to get good separation, and a 4×4 bounce on the shadow side of his face to lower the contrast. I also had to warm the shot a lot in post, as that sky blue is very cool light, and I wanted a warm vibe.

How might this same shot have looked in artificial light? I just happen to have a comparison, because I interviewed another doctor in the same conference room a year ago. Here’s how that looked:

Same location, virtually same camera angle, but with artificial light. Feels artificial and “sourcy” to me.

At the end of the day, working with natural light is all about seeing the light, and realizing what you can do with it with just a little shaping.

Finally, here’s the finished Friendship Circle piece:

Finish Strong ad by Ford features 6 of my Covid Nurse clips

I’m thrilled to have shot the first 6 clips of this TV spot from Ford, encouraging people to finish strong in the battle with Covid.

All of the clips originally appeared in the Covid Nurse video I made for UW Medicine last April, shot on the Canon C500 Mark II.

Big shout out to archival producer Stephan Michaels, who jumped through a lot of hoops for me to get this across the finish line.

Now we just have to get ourselves there. Finish strong, people!

How to shape natural light for cinematic outdoor interviews

“Lighting is a subtle craft,” I told my students yesterday. Then we headed to a baseball field to see what that looks like. Our assignment: to shape natural light into something cinematic for an outdoor interview using only two pieces of grip: a 4×4 foam-core bounce and an 8×8′ Scrim Jim Cine Frame.

Our first challenge was to choose a background. We placed our subject on the edge of the field, and slowly walked in a circle around him to observe how the light fell on him in relationship to the background.

People rarely look good in direct sunlight. It makes them squint. So we placed our subject with the sun behind him, as you can see from the shadows below.

By doing this, we penciled him out from the background with natural rim light. This works best when you can find a background that is at least a stop darker than your subject. In this case, we could do that very easily, because there were lots of trees. Trees absorb light. So here’s our starting frame:

1. The camera left side of his face is visibly darker because there is a line of trees off camera in front of him on that side, and open field on the other.

This first frame shows the importance of considering how nearby objects will impact your subject. If he had been standing in the middle of the field, his face would have been evenly illuminated. But because there is a line of trees behind us and camera left, the light is wrapping in from camera right which is exposed to open field.

Our goal here is to make the most of the tools we have to create a naturally lit interview that feels organic and makes our subject look great. So let’s start playing with our toys and see what each does for our shot.

2. Bounce 3/4  on camera left. 

Our first setup is to bounce light on the same side as the sun, 3/4 angle camera left. This evens out the light on his face, eliminating the dark areas created by the trees in front of him. But it’s pretty flat. Let’s move the bounce a little farther to the right…

3. Bounce under lens

Placing the bounce directly under the lens is a pretty common sight on film sets. It’s a great way to get a nearly invisible fill up into the eye sockets of your talent. But in this case, it still feels pretty flat.

4. Bounce 90° camera right

Placing the bounce on the camera right side gives us a nice dimensionality, but it doesn’t look organic, because the sun is coming from behind and to camera left. So it looks lit. No good. Let’s put down our bounce for a minute and see how the negative fill affects our shot.

5. Neg only

The neg by itself does a nice job of evening out the light on his face, but he’s now too dark overall. If we raise our exposure to compensate, the  background will get too hot, and we want to leave it alone at a stop under. So let’s bring back our bounce, on the same side as the sun (called “same-side fill”) and see what happens.

6. All in – bounce camera left and neg camera right.

Wow! This looks pretty good. Adding the bounce wraps the light of the sun around his face very naturally, and the neg on the other side gives us the 3-dimensionality that we’re always striving for in cinematic shooting. 

If anything, I’d say we could have raised the 4×4 bounce a little higher to do something about the shadow that’s forming a triangle between his camera right cheek and eye. 

What techniques do you use to shape available light for cinematic outdoor interviews?

How to choose the right hard drives for 4K video projects

Hard drives for 4K video

One of my students at Seattle Film Institute asked me a question the other day: “How do you choose hard drives for 4K video?”

Most beginning filmmakers are on tight budgets. So my short answer was: “Buy the cheapest drives you can afford to store your media, and the most expensive drive you can afford to edit it.”

Let’s unpack what that means in today’s technology landscape.

When I get a new 4K project, I buy two hard drives big enough to hold all project media. In my case, that’s generally 1 to 2 terabyte drives. At the end of each day of production, I’ll lay off the files to both simultaneously. I use Hedge, which enables me to have two backups of the media from the get-go.
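
If you’re not sure how big “big enough” is, a quick back-of-the-envelope calculation gets you there: data rate times record time. Here’s a minimal Python sketch; the 100 Mbps bitrate is purely illustrative, not a spec for any particular camera, so plug in your own codec’s actual rate.

    # Rough project-drive sizing: bitrate (Mbps) x record time -> gigabytes of media.
    bitrate_mbps = 100        # illustrative acquisition bitrate; substitute your codec's real rate
    hours_of_footage = 10     # total expected record time for the project

    gigabytes = bitrate_mbps / 8 * 3600 * hours_of_footage / 1000
    print(f"Plan for roughly {gigabytes:.0f} GB of media")   # ~450 GB in this example

Leave yourself some headroom beyond that number, since audio files, stills and exports usually end up on the same drives.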

Drives I recommend

The drive I have most frequently chosen for this is the Seagate 2TB Backup Plus Slim Portable External USB 3.0 Hard Drive. It currently costs $65. Blackmagic Disk Speed Test clocks it at 75 MB/s. That’s way too slow to edit 4K video on, of course, but we only need it for storage. The nice thing about USB 3.0 is that it’s compatible with just about any computer out there, both Windows and Mac. So if your client wants the files at any point, you can simply hand them the drive.

If you have a computer with USB-C, however, I recommend the aPrime ineo rugged waterproof IP-66 certified drives. For $95, you’re getting a drive you can drop in the water, with rubber bumpers to break its fall, and a built-in USB-C cable. These drives clock for me at around 110 MB/s read and write speeds. So for a little more money, you get drives that are a LOT more rugged and a little bit faster. 

Both the above drives are about the size of a typical iPhone. And that matters to me – they will (hopefully) live out their lives in a drawer. I like that they won’t take up much space.
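
To put some numbers behind “too slow to edit, fine for storage”: an editing drive has to sustain your codec’s data rate for every stream you play back. Here’s a rough sanity check in Python; the ~700 Mbps figure is an approximate data rate for one stream of UHD ProRes 422 HQ, used purely as an illustration.

    # Can a drive sustain a given codec data rate? Drive speed in MB/s, codec rate in Mbps.
    def can_sustain(drive_mb_per_s, codec_mbps, streams=1):
        required_mb_per_s = codec_mbps / 8 * streams
        return drive_mb_per_s >= required_mb_per_s

    print(can_sustain(75, 700))    # False: ~88 MB/s needed, so this drive is storage-only
    print(can_sustain(300, 700))   # True: enough headroom to actually edit from (see below)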

Small is the new big

So now let’s talk about the fun stuff – speedy editing drives. I used to rely on toaster-sized RAID drives to get the speed and reliability I needed for editing. But with SSD, that’s no longer the case. With solid state media, I have found speed, reliability AND the benefit of being able to take entire projects with me wherever I go. With this freedom, I find the only time I’m cutting at a desk is when I’m doing audio passes with studio monitors. I’ll connect my laptop to a larger monitor at various stages of the project. But even then, I tend to park myself all over the house. For example, the kitchen table, or on the coffee table in the living room.

My tried-and-true favorite 4K editing drive is currently the Samsung 1TB T5 Portable Solid-State Drive (Black). I get read/write tests of about 300 MB/s, which is more than fast enough to edit 4K video. This drive is the size of a business card (and only a little thicker). It is now available in a 2TB size for under $500, which seems like a bargain to me. But the landscape is changing.

Here comes Thunderbolt 3 

Most new Macs now support Thunderbolt 3. If you are one of the fortunate people who has one, I invite you to behold the Samsung 2TB X5 Portable SSD.

I hesitate to call it affordable at $1,400, but it gives wings to your 4K projects. I recently retired my late 2013 MacBook Pro and made the leap to a late 2018 MacBook Pro, so I finally have a computer that can keep up with such a beast.

I’ve been putting the X5 through its paces by editing multiple streams of  ProRes RAW 4K DCI on a project that weighs in at 1.7 TB. With everything loaded on the drive, here’s how the X5 performs:

X5 Write/Read times with drive 3/4 full
The X5 clocks even faster read and write times with Blackmagic Disk Speed Test

It’s interesting to note that even though this drive is lightning quick, it’s still not nearly as fast as the internal drive of the 2018 MacBook Pro:

Late 2018 MacBook Pro internal drive speed test

Putting it all together

What these numbers tell me is that to get the absolute best performance from Final Cut Pro X, you want to keep your FCPX Library file on your local hard drive. Then, store all of your media on the X5. Beyond speed, this has the added benefit of allowing automatic backups of your FCPX project files. To get near real-time backups, use an automatic cloud-based backup service like Backblaze. Because it runs in the background, Backblaze won’t slow you down at all, and you won’t have to remember to back up your project. Note, however, that Backblaze is not an efficient way to back up your media drives. But you’ve already got yourself covered there with those cheap backup drives.

For longer 4K projects like feature-length films, you’re of course still going to be living in the land of RAID when choosing hard drives for 4K video. But for the small projects, I find this 3-drive system, in which you back up your media on 2 cheap drives, and edit it on a single fast one, is a winning formula. 

What hard drives for 4K video are you using? 


To infinity and beyond: A fix for the focus issues with Contax-Zeiss and Metabones Speedbooster

Well, that sure was simple. Turns out the solution to fixing the infinity focus issue on vintage Contax-Zeiss with Metabones Speedbooster doesn’t involve using a Dremel tool. And no, you don’t have to recalibrate the lenses, either.

You just have to loosen a tiny set screw on the Metabones Speedbooster, and rotate its lens element (in my case, counter clockwise). Boom! Everything comes into focus.

Each of my vintage CY Zeiss lenses has a slightly different infinity focus point, however, so that means setting the Speedbooster’s infinity adjustment for the lens that is farthest off. This means the rest of the lenses now focus beyond infinity, which isn’t really a problem. All of my modern AF Canon lenses do that by design. But it does mean the witness marks are off to varying degrees.

There’s one other issue with this fix that affects my Sony FS5: rotating out the rear element on the Speedbooster causes it to protrude further (see image below). As a result, when the Speedbooster is screwed into the FS5, it makes contact around the edges of the ND filter mechanism. That’s not ideal, but it’s a slight contact, and the ND filter functions normally. So I’m going to chase the shallow look to infinity and beyond.

I love the vintage look of these lenses so much that, before I discovered this solution, I’d been shooting pretty much every interview I do with them. That hasn’t been an issue, because interviews never happen at infinity. But now that I have the full range of focus, I’m a heck of a lot more likely to shoot my b-roll with this glass, too.

Want to see for yourself how amazing this vintage look is? You can rent my infinitely focusable 5-lens set of Contax-Zeiss cine-mod lenses on ShareGrid Seattle for $60/day. Set includes modified Speedbooster for use with Sony E-mount cameras.

Teradek Serv Pro turns iPhones into superb client monitors

Teradek Serv Pro, ready for action

For smaller documentary film productions, a client monitor is a luxury that’s rarely on the table. Not only are they expensive, they’re complicated: first you need to set up a radio network, and then a monitor. It’s costly and time-consuming.

But Teradek has changed that equation with the Serv Pro, a small camera-mountable box that creates a wi-fi network that up to 10 iOS or Android devices can use to monitor video. It’s been a game-changer for me and my clients. Here’s why it’s my new favorite tool on location.

I’m often shooting projects where up to three client representatives are on location. Client reps like to have something to do when they’re there besides just watching you work, but they also need and want to stay out of my way. On larger sets, they’re usually clustered around the monitor. What the Serv Pro does is give them that experience: the chance to see what’s going on, and to give you feedback if they want, based on the image they’re seeing on their phone or tablet.

I was skeptical at first that the quality would be that good – I imagined having to spend time disclaiming the image, telling them that it would look better in post. But that turns out not to be the case. Not only is the image top-notch, but on an iPhone 7 or newer I find it’s as good as, or in some cases better than, the image coming to my Shogun and SmallHD monitors.

HD video monitoring with professional controls

Not only is the image bright and sharp, but all of the controls that you would expect to find on a SmallHD monitor are included in the free Vuer app! You read that correctly: you can apply a LUT, view peaking or zebras, even false color if you want. You can look at a vector scope, histogram, etc. But clients generally just want the image to look good. And Serv Pro has that covered, too.

I generally shoot in Slog, so I just email my clients a LUT before the project, then show them how to load it into the app on the morning of the shoot. It takes just a few clicks, and boom, the image they are seeing looks fantastic, as I send them a LUT that is designed for client viewing with slightly crushed blacks.

Shogun’s problem and solution

When I first started shooting with a Shogun Inferno, I was disappointed to find that I couldn’t get an HD signal out of the Inferno when shooting RAW. So that pretty much meant I couldn’t use the Serv Pro. But the latest firmware (version 9, which also supports ProRes Raw) has fixed this issue! So now it’s possible to record raw to the Shogun Inferno AND send an HD signal out to the Serv Pro.

Serv Pro attached with velcro to the back of a Wooden Camera battery plate for the Sony FS5.

Mounting the Serv Pro to the camera

It’s possible to attach the Serv Pro to the camera using 1/4″ mounting holes that are drilled into the unit in two places. But what I’ve found works great for me is to attach two strips of velcro, and use that to mount it to the flat back of a battery plate. This doesn’t cause the unit to overheat, and it works great to keep a low profile on the camera while shooting.

Battery life

It’s possible to power the Serv Pro from a smaller battery with a P-Tap, but I find that using full-size V-lock batteries is the way to go. A 98Wh battery will power my camera, Serv Pro, and Shogun for 2.5 hours.
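
If you want to estimate runtime for a different battery or load, the arithmetic is simple: capacity in watt-hours divided by total draw in watts. A quick sketch; the 39 W figure is just what my 2.5-hour experience on a 98Wh battery implies, not a measured spec for any of these devices.

    # Back-of-envelope runtime: battery capacity (Wh) / combined draw (W).
    battery_wh = 98
    total_draw_watts = 39     # camera + Serv Pro + Shogun combined (assumed, back-calculated)
    runtime_hours = battery_wh / total_draw_watts
    print(f"About {runtime_hours:.1f} hours per battery")   # ~2.5 hours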

The Serv Pro costs $1,800. I rent it to my productions for $105/day. My clients love it so much they call me before shoots to make sure I’m bringing it. So it’s become a standard line item in most of my productions, and I will have it paid off by the end of the year.

You can rent my Serv Pro and check it out for yourself! My kit, which includes HDMI, SDI and power cables in road case as pictured above, is available on ShareGrid for $105 for the day or weekend for productions in the Seattle area.

Rent my Teradek Serv Pro on ShareGrid for $105.

Do you provide monitoring for your clients?

Tentacle Sync gives timecode a good name

Tentacle Sync in “green” mode feeding timecode to a Sound Devices MixPre3

When I made my first film about 7 years ago, I was sort of lucky. There was this little software app called PluralEyes that enabled me to do dual-system sound without knowing the first thing about timecode. That was cool, and it served me rather well for a long time. But recently I’ve been edging out of this friendly zone into more dangerous territory where I’m getting hired by directors who expect me to provide files with timecode. Because, well, timecode.

But I’ve been pleasantly surprised to discover that, with the right tools, timecode doesn’t have to be scary. In fact, it has really sped up my post-production workflow. But I’m getting ahead of myself. First, a reminder of why timecode can suck.

Timecode is complicated

When a director hired me earlier this year for a timecode shoot, I did what any self-respecting DP would do: I hired a professional sound recordist. They usually come with everything needed for timecode – a recorder/mixer that generates timecode, and the “lockit” boxes that plug into cameras that accept timecode, for a complete solution. But this time, the recordist told me her lockit box was broken, so I’d have to provide something myself.

I was able to rent a lockit box from Lensrentals, and I brought it to the shoot assuming the sound recordist would know what to do with it. She responded as if I’d just handed her a snake. Fifteen minutes later, she allowed that she didn’t know how to make this particular box work. I took one look at the long row of cryptically labeled dip switches, and…told the director I’d record reference audio and he could use PluralEyes to sync the footage.

The look on his face told me I’d disappointed him.

So the next day, I did some research. Turns out there are a lot of solutions that claim to be “industry standard” on the market. Most of them are big, clunky, and expensive. For example, the lockit box I’d rented is about twice as big as a Sennheiser wireless mic receiver, and has to be somehow anchored to the camera. But my Sony FS5 doesn’t have a timecode in port, so how do I work around that? Finally, it needs to be externally powered somehow…

But there’s a better way. It’s called Tentacle Sync.

Tentacle Sync is simple

Tentacle makes a tiny plastic box (less than half the size of a Sennheiser G3 wireless mic receiver) that has built-in velcro for attaching to your camera. It doesn’t need external power, as it comes standard with a built-in lithium ion battery that will last for even the longest day of shooting.

Tentacle Sync doesn’t take up much space on a Sony FS5

There is only one switch on the Tentacle – on or off. All other controls are accessed through an app, which connects wirelessly over Bluetooth on either iOS or Android.

Each tentacle is named at the factory (or you can assign your own custom name) to tell them apart from each other

I use a Sound Devices MixPre 3 as my primary recorder when I’m not working with a pro recordist, and it does not have timecode built in. However, it supports timecode, meaning it will accept timecode from an external generator. In practice this means that the MixPre 3 is a fine timecode-generating recorder/mixer when paired with a Tentacle Sync. Used in this way, you need one Tentacle Sync unit for the recorder, and a Tentacle Sync box for each of the cameras you will be using on the shoot. A Tentacle Sync Sync E Timecode Generator with Bluetooth (Single Unit) sells for $289, but you can save money by purchasing them in a set. The Tentacle Sync Sync E Timecode Generator with Bluetooth (Dual Set) sells for $519.

I now own three of the boxes, and they allow me to pair two cameras with timecode generated by the third unit, which lives on the MixPre 3.

Tentacle Sync E comes with everything needed to get up and running, and more

How to configure a Tentacle Sync

Setting up the units is a breeze. Each comes with a unique name from Tentacle (and I’ve labeled mine so I can tell them apart easily). Press and hold the power switch until the light flashes green – that puts it in timecode-generator mode. Put this one on the recorder. Press and hold the others until the light turns blue – that puts them into timecode-receiving mode. Then briefly connect the master to each of the others, and that syncs the timecode.

The Tentacle Sync units come standard with a 1/8″ to 1/8″ TRS cable, which works great for pairing with the auxiliary input of the MixPre 3, which can be set to receive timecode. It also works well for sending timecode to DSLRs or mirrorless cameras via the audio jack.

Larger cameras like my Sony FS5 that don’t have a dedicated timecode in port can also receive timecode via one of the two audio channels, although you’ll need a Tentacle Sync Tentacle to 3-Pin XLR Cable (16″) to get the timecode out of the Tentacle box (which they sell).

I have a Shogun Inferno, which has a dedicated Sync port for Linear Timecode (LTC) in, and for that, you’ll need a Tentacle Sync Tentacle to BNC Cable (Straight, 16″) or Tentacle Sync Tentacle to BNC Cable (Right-Angle, 16″).

A professional sound recordist I work with frequently, Scott Waters, loves the Tentacle boxes. He owns the required Hirose to 1/8″ connector cable required to sync timecode from his Sound Devices 633 to each Tentacle Sync on my cameras.

Tentacle workflow

It’s one thing to get timecode sent to the audio port of your camera, but that’s a non-standard way of doing timecode. I’m not even sure DaVinci Resolve can read timecode on an audio channel if you’re using Resolve to batch-sync, as many productions do. So how do you take advantage of it?

Syncing is near instantaneous in Tentacle Sync Studio, and offers XML and file export options plus multicam mode

Enter Tentacle Sync Studio. This is drop-dead simple software designed with smart defaults, which looks for timecode on one of the audio channels of all media dropped into it. If it finds timecode in an audio channel, it automatically uses that to sync, rather than the timecode in the video file. Once you drop all of your footage and audio into the app, it automatically builds a sync map, which you can view to see which of your footage is synced.

You can export footage via XML into your NLE of choice, or you can export new clips with the good audio embedded in them. You can optionally keep your reference audio, but the default is to replace the reference with the clean audio (which is what I almost always want).

Shogun’s 2.5 frame delay

I use Tentacle Sync with my Shogun Inferno via a BNC cable, and it works great, with a small caveat. For some reason, there is a 2.5 frame offset in the timecode, meaning the audio needs to be slipped 2.5 frames forward in the timeline to be perfectly in sync. Luckily, the Shogun Inferno has a timecode preference that allows you to set a timecode offset, but it only works in 1-frame increments. So I’ve set mine to -2 frames, and it’s close enough.

Shogun Inferno’s timecode controls allow offsetting the TC signal. I find a -2 offset works best.
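
For the curious, here’s the arithmetic behind settling on -2. The frame rate below is just an example; use your project’s actual rate.

    # Residual sync error after correcting a 2.5-frame delay with a whole-frame offset.
    fps = 23.976
    observed_delay_frames = 2.5
    correction_frames = 2                 # the Shogun only offsets in whole frames
    residual_ms = (observed_delay_frames - correction_frames) / fps * 1000
    print(f"{residual_ms:.1f} ms of residual offset")   # ~20.9 ms, about half a frame

Half a frame of offset is small enough that, in practice, it reads as in sync.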

Tip: When sending timecode to your MixPre 3, if you use presets (and you should, because they are a huge timesaver), make sure you configure each preset to accept timecode via the aux in. I forgot to do that, and when I chose a new preset, it reset to the default (timecode in via HDMI). I was very sad when I discovered in the edit that most of my files contained no timecode.

Why use timecode on your projects instead of PluralEyes? Because it saves time. Syncing based on audio waveforms is very processor intensive, and if you’re working on a large project, it can literally take hours. Furthermore, it’s messy (creating folders full of temporary files), and not always accurate. Some files just never sync, forcing you to align them one at a time, a very cumbersome process.

With Tentacle Sync Studio, timecode syncing is almost instantaneous. And the Tentacle Sync boxes themselves are unobtrusive, and can be attached with velcro (included in the kit) to any size of camera.

Since I’ve discovered the joys of syncing with Tentacle, I pretty much use it on every project now. What about you? How do you keep your clips in sync?

Exploring Apple’s new ProRes Raw with Sony FS5

Apple’s ProRes Raw is the most exciting thing out of NAB this year. Fancy new LEDs that imitate police lights are cool, but they will be surpassed by even cooler lights within a year or two. ProRes Raw, however, is going to be with us for a long time. And for good reason.

I’ve been playing with ProRes Raw for the last few days, using my Sony FS5 and Shogun Inferno, bringing the footage into Final Cut Pro X to learn how it works and what it can offer me and my clients.

First of all, let me just say, I’ve never been a big fan of 4K. Few of my clients ask for it, and when they do, it slows me down in my post-production by a factor of three or four. So it’s expensive. Everything takes forever to render, and in the end, we invariably end up delivering HD anyway. So why shoot it?

Well, ProRes Raw just gave us some pretty compelling reasons.

  1. First, it doesn’t slow your editing down nearly as much as before. I found that by setting playback to “Better Performance” in FCPX, I was able to play back a ProRes Raw 4K timeline on my late 2013 MacBook Pro without dropping any frames.
  2. Second, to get ProRes Raw using the FS5 and Shogun, you MUST acquire in 4K. There’s no option to record a 2K signal, apart from recording in high frame rates. So it would take a lot longer to down-convert your footage and transcode to HD than to stay in 4K.
  3. Third, the options you get in post from acquiring raw in 4K are actually pretty cool, given that they no longer slow you down the way they used to.

So, if you could have a Tesla for the same price as a boring gas car, which would you buy? (Only, you’re going to have to wait a long time for that Tesla, and you can have ProRes today.)

Consider this:

  • No transcoding = no need to make dailies. Your raw files ARE your dailies – and you can change your mind about camera LUTs later without re-rendering 🙂
  • 12-bit color depth. Woot. (See the quick numbers after this list.)
  • File sizes aren’t much bigger than current flavors of ProRes
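
On that 12-bit point, here are the quick numbers on why two extra bits matter once you start pushing the image around in the grade:

    # Code values per channel at each bit depth.
    for bits in (10, 12):
        print(f"{bits}-bit: {2 ** bits} levels per channel")
    # 10-bit: 1024 levels per channel
    # 12-bit: 4096 levels per channel (4x finer gradation before banding creeps in)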

So how does it work?

To get started working on your ProRes Raw footage in FCPX, you need to show the “Info Inspector” and change the default view from Basic to General. This reveals the ProRes Raw-specific controls that allow you to debayer your footage.

RAW to Log Conversion controls which flavor of Log you want to debayer to (it’s also possible to choose None and work directly with RAW – more on that in a moment).

Camera LUT applies a LUT to the LOG signal before it’s sent out – akin to baking in a look on your recorder (but it allows you to change your mind later).

The first thing I wanted to test out was how the Shogun was interpreting the raw signal, and see if that was the same as the way Final Cut Pro X interprets it. Turns out, they’re different. And that has implications on how you shoot.

Shogun interprets the raw signal from a Sony FS5 like Slog-2 – only it’s not.

When you look at the raw signal on the Shogun, it doesn’t matter whether you send it Slog 3 or Slog 2 on your FS5 – the Shogun thinks it’s getting an Slog 2 stream, at least for the purpose of applying LUTs. It’s NOT an Slog2 stream, though – it’s raw sensor data. But Shogun is receiving metadata from the camera so that it can interpret it, and whatever it’s doing is in SLOG2. So when you drop on a LUT designed for Slog3, it looks all wrong (way too crunchy). So until we have more options in FCPX, it’s important to stick with using SLOG2 LUTs for monitoring on the Shogun.

When you bring your ProRes Raw footage into Final Cut Pro X, however, there’s no option to debayer the footage as SLOG2. Currently the only options for Sony cameras are Slog3 in two flavors – either S-Gamut3 or S-Gamut3.Cine.

You can also apply a camera LUT in FCPX at the same time you are debayering, but it’s important to note that if you do, it’s the same thing as applying a LUT at the beginning of your color pipeline, rather than at the end of it. Before ProRes Raw, this was generally the wrong place to do this, because if the camera LUT crunches the shadows or clips the highlights, you won’t be able to recover them when making adjustments to the clip later. But. And this is a big but: When you apply a camera LUT to ProRes Raw at the source, THERE IS NO RENDERING. Did you hear that correctly? No rendering.

Actually, with FCPX 10.4.1, it’s not only possible to apply a camera LUT to RAW, but you can also apply a camera LUT to any LOG footage.

Yep, you can play with LUTs to your heart’s content, find one that gives you a ballpark without clipping your highlights, and take it from there. So this really is a fantastic way to make dailies – but with the ability to change your mind later without incurring any rendering penalty. How cool is that?

But (a small one this time). Since you can’t shoot with SLOG3 LUTs using the Shogun (or more correctly, since the Shogun won’t interpret your raw footage as anything other than SLOG2), and since it doesn’t work to apply SLOG2 camera LUTs in FCPX because the footage is debayered as SLOG3, we’re at a bit of an impasse, aren’t we?

Matching Slog2 to Slog3

Turns out it’s not that difficult to overcome. It’s entirely possible to find an SLOG2 LUT for shooting that comes fairly close to matching what you’ll see off the sensor in SLOG3 in FCPX. I did this by placing my MacBook monitor on a stand next to my Shogun and trying on lots of different LUTs. By loading the SONY_EE_SL2_L709A-1.cube LUT onto the Shogun, I’m able to get pretty close to what I see when I open the clip in FCPX using the default Slog3 raw-to-log conversion and the default S-Log3/S-Gamut3 camera LUT. I do look forward, though, to FCPX supporting more options for debayering.

OK enough about LOG. Let’s roll up our sleeves and get straight to the real RAW, shall we? All you have to do is select None as the option on Raw to Log Conversion, and you get this:

You get an image that looks, well, raw. To work on it, you have to drop it into a timeline, and start applying color controls. That’s great and all, but now you have to deal with rendering. So right away you can see that there are some workflow reasons why you’d want to stick with Log. Having said that, you can do all the corrections you want to do and save them using “Save Effects Preset” so you can apply them in a single click later. Not the same as a LUT, but definitely a timesaver. But I digress.

I was blown away by how you can pretty much, within FCPX, do the same things I’m used to doing to an image in Photoshop. You can recover highlights until forever! But you do have to be careful – pulling highlights down has an impact on the rest of the image, so you have to jack a contrast curve something crazy if you want to recover highlights outside a window. But you can do it:

Recovered highlights in a bright window

You’ll notice that the brightest highlights went a little crazy there, but good news – you have pinpoint control over color saturation using hue-sat curves in FCPX. After a little playing around, I was able to calm it down like so:

Tamed highlights

The bottom line is you quickly find yourself spending a LOT of time fussing with things when you get into grading the raw RAW. But it sure is a neat thing to be able to do it when you want to.

I think I’m going to find that I stick with a Log workflow for now, but with the option to go straight into raw for some challenging shots, where I really want the kind of control that Photoshop gives a still photographer.

Blowing the image up to 200 percent, a little noise becomes apparent after all the pushing around:

Noise in 12-bit raw

But dare I say, it’s filmic noise, not ugly digital noise. That right there is the benefit of shooting 12-bit, most def. Shooting 10-bit 4K on the Shogun, you definitely have noise issues in the shadows; less so with 12-bit.

Here’s a side-by side comparison of 10-bit SLOG2 footage with LUT applied next to RAW 12-bit:

4K enlarged 200 percent

There’s currently no tool in FCPX raw controls akin to the denoise setting in Adobe Camera Raw tools. So to denoise this, you’ll need a plugin like the excellent Neat Video Denoiser. And that, my friends, is where the Apple ProRes party slows down a bit. You’ll definitely want to apply that filter as the very last step of any project, because the render times with denoising are painfully long.

But never mind a little noise that has to be enlarged to be seen. ProRes Raw has me all fired up. This codec looks like a game changer to me. More workflows will emerge quickly, supported by more software and hardware tools, such as cameras that shoot direct to ProRes Raw. In the meantime, Apple has handed us Sony FS5 owners a way to stay on the bleeding edge for the foreseeable future (without upgrading to the Sony FS5 MKII, thank you very much).

 
