
Timecode versus Sync: How They Differ and Why it Matters

Though many of us are familiar with cameras having two ports, a timecode in/out port and a genlock port, we may not consider why both might be needed. Perhaps we dismiss genlock as just a legacy holdover from the days before switchers had built-in frame synchronizers. The terms “sync” and “timecode” are often used interchangeably, and the fact that timecode can be used to sync devices only compounds this confusion. Sync (genlock) is like a beat that calls out when each field occurs (and, with tri-level, each line), while timecode indexes each frame (or the equivalent period of time for an audio recorder) so that it can be uniquely identified in post.
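To make the “index” half of that distinction concrete, here is a minimal sketch (in Python, using non-drop-frame counting for simplicity) of how a running frame count maps onto the familiar HH:MM:SS:FF label:

```python
# Minimal sketch: timecode is just a per-frame index.
# Non-drop-frame counting is assumed for simplicity.
def frames_to_timecode(frame_count: int, fps: int = 24) -> str:
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(86400))  # one hour of 24 fps footage: 01:00:00:00
```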

“… timecode alone, while sufficient to provide syncing points for post production, may not be a reliable way to keep devices in sync with each other.”

If we need to synchronize multiple cameras, or a camera and audio recorder, we tend to just use timecode. The problem is, timecode alone, while sufficient to provide syncing points for post production, may not be a reliable way to keep devices in sync with each other.
With timecode, unless the cameras or recorders are re-jam-synced frequently, their timecode will start to drift apart. The culprit is the quartz-crystal (or other piezoelectric) clocks used in many cameras and quite a few audio recorders, which are too imprecise for critical applications. Two different clocks may have two different opinions on how long a second is, causing recordings to drift apart over time. Depending on the camera or recorder, the timecode may drift by as much as a field in as little as 30 minutes. Even a one-field offset is enough for sound and picture to be visibly out of sync when the two elements are combined later, in post.
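A quick back-of-the-envelope calculation (assuming 59.94 fields per second; exact figures vary by format) shows just how little clock error it takes to lose a field in 30 minutes:

```python
# How accurate must a clock be to drift less than one field in 30 minutes?
field_duration = 1 / 59.94            # ~16.68 ms per field (59.94 fields/s assumed)
elapsed = 30 * 60                     # 30 minutes, in seconds
error_ppm = field_duration / elapsed * 1e6
print(f"~{error_ppm:.1f} ppm")        # ~9.3 ppm -- within ordinary quartz tolerances
```

Ordinary quartz oscillators are commonly specified in the tens of parts per million, so a field of drift in half an hour is entirely plausible.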
In light of this limitation, several responses come to mind:

Genlock the devices, forcing them to “fire” at the same instant
Rather than jam-syncing, constantly feed in timecode from a central source
Fix it in post, perhaps using sync software

What is genlock?
To start, it is worth revisiting genlock. Genlock (generator locking) appeared in the broadcast world as a way to synchronize multiple video sources, including cameras, VTRs, and external feeds, so that their field rates are all in phase with each other. This was required to prevent so-called “jumping,” an artifact that occurs when switching in live productions if all of the video sources are not in step with each other. For a long time it was necessary to genlock each device (camera or deck) by feeding it a black-burst or composite sync signal originating from a common house clock. These days, many switchers handle frame sync on their own by using a frame buffer that can hold the field (or frame, if the signal is progressive) momentarily until it falls into alignment with the program feed.

In the HD world genlock is still with us, but using a composite signal as a reference has largely been supplanted by tri-level sync, which emits pulses that clock both the frame rate and the line rate.
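As an illustration of the two rates involved, here is the arithmetic for 1080i59.94, which carries 1125 total lines per frame:

```python
# The two rates a tri-level reference clocks, illustrated for 1080i59.94
# (1125 total lines per frame).
frame_rate = 30000 / 1001             # ~29.97 frames per second
line_rate = 1125 * frame_rate         # ~33,716 lines per second
print(f"frame: {frame_rate:.3f} Hz, line: {line_rate:.0f} Hz")
```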
Because it synchronizes frames, genlock can be used to keep multiple devices from drifting apart. But it is tricky because, unlike with timecode, the camera or recorder must remain hardwired to the reference source at all times. On top of that, where cables of different lengths are used, each camera must be calibrated to account for the length of its cable run. In a fixed studio setting this isn’t a big problem, but it can be in the field. And it is especially problematic in cinema production, where the setup might change with every shot.
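To see why cable length matters, here is a rough sketch of the propagation delay involved (a coax velocity factor of about 0.7c is an assumption; real cables vary):

```python
# Rough propagation delay of a genlock reference through coax.
SPEED_OF_LIGHT = 3.0e8                # meters per second
VELOCITY_FACTOR = 0.7                 # assumed; depends on the cable
for length_m in (10, 50, 100):
    delay_ns = length_m / (SPEED_OF_LIGHT * VELOCITY_FACTOR) * 1e9
    print(f"{length_m:>3} m run -> ~{delay_ns:.0f} ns of delay to dial out")
```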
Incidentally, another application for which genlock is used is ensuring that both sensors on a two-camera 3D rig fire at the same time. A common misconception is that you can simply link the two cameras together, feeding the composite output of one camera into the genlock input on the second. While this works in theory, the risk of miscalibrating the receiving camera means it is a much safer option to use a separate sync box that is wired to both cameras with cables of identical length.
What about wireless timecode?
A popular, if dubious, way to sync multiple devices in the field is to continuously feed timecode into them using a wireless audio transmitter and receiver. Much as black burst is a type of composite signal, timecode such as SMPTE 12M LTC can be passed as an analog audio signal, and produces a well-known “telemetry” noise if played through speakers. Since we live in the Age of the App, there are also many app-driven solutions that promise timecode sync over Wi-Fi. This can work, but the inherent unreliability of wireless technologies comes into play; inexpensive wireless systems may even digitally process the signals, introducing delay. Also, when audio or video is being sent wirelessly, a loss of signal will be noticed right away, since people on set are watching for it. But if the timecode signal drops, the camera just reverts to its internal clock and the problem may not get noticed until it is too late. Plus, using audio hardware, not to mention a Wi-Fi network, rather than dedicated timecode hardware adds variables that may compound troubleshooting when things do (as they inevitably will) go wrong.
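To see why LTC travels so comfortably over audio gear, consider its data rate: SMPTE 12M packs 80 bits into every frame, biphase-mark coded, which keeps the resulting tones squarely in the audible band. A quick calculation:

```python
# LTC data rate: 80 bits per frame (SMPTE 12M), biphase-mark coded, so the
# highest tone it produces sits at roughly the bit rate itself.
BITS_PER_FRAME = 80
for fps in (24, 25, 30):
    bit_rate = BITS_PER_FRAME * fps
    print(f"{fps} fps -> {bit_rate} bit/s, tones up to ~{bit_rate} Hz")
```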
What about syncing in software?
If the footage is being recorded, it’s going to need to be edited anyway, so why not just use an application like Red Giant PluralEyes or Final Cut Pro X’s built-in facility to sync audio and video? If the material is sufficiently broken up, or if you are able to cut the clips into segments shorter than 30 minutes, this will probably work, at least within a half-frame margin of error.* But for an all-day live event, during which each camera is being recorded separately, this solution would create a lot of extra work. It would be nice, if timecode is available anyway, to be able to use it exclusively when syncing cameras and audio in post.
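For the curious, waveform-sync tools of this kind boil down to finding the offset that best lines up two scratch-audio tracks. A minimal sketch, assuming two mono numpy arrays already decoded at a common sample rate (the array names are hypothetical):

```python
# Minimal waveform-sync sketch: find the lag that maximizes the
# cross-correlation between a camera's scratch audio and the recorder's track.
import numpy as np

def find_offset_seconds(cam_audio: np.ndarray, rec_audio: np.ndarray,
                        sample_rate: int = 48000) -> float:
    corr = np.correlate(cam_audio, rec_audio, mode="full")
    lag = int(corr.argmax()) - (len(rec_audio) - 1)
    return lag / sample_rate          # shift rec_audio by this before cutting
```

Real products refine this with filtering and drift correction, which is exactly the part that gets hard on clips longer than 30 minutes.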

The ideal solution, when all cameras can’t be hardwired to a reference source as in a TV studio, is to connect each individually to a reliable sync device, such as an Ambient Recording Lockit Box or a Sound Devices recorder with sync output. These devices use high-precision, temperature-compensated, voltage-controlled crystal oscillator (TCVCXO) technology that boasts less than one frame of drift per day. For situations in which timecode has to be relayed wirelessly, there is also wireless hardware based on the same TCVCXO technology which, while not as reliable as a hardwired solution, is the next best thing, since it is optimized to send timecode data rather than natural sound in the frequency range of the human voice.
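Running the same back-of-the-envelope arithmetic as before (29.97 fps assumed) shows what “less than one frame of drift per day” implies about the oscillator:

```python
# Clock accuracy implied by "less than one frame of drift per day."
frame_duration = 1001 / 30000         # ~33.37 ms per frame (29.97 fps assumed)
seconds_per_day = 24 * 3600
accuracy_ppm = frame_duration / seconds_per_day * 1e6
print(f"~{accuracy_ppm:.2f} ppm")     # ~0.39 ppm, versus tens of ppm for plain quartz
```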
To sum up, true synchronization requires two things: a reliable timecode source to avoid drift, and genlock to make sure the fields or frames are hitting the same beat. In some cases, sync problems can be fixed in post. But this can be time consuming, and the comparatively small cost of getting it right on set may win out at the end of the day. The bad news is that, with the reduced need for genlock in broadcast, fewer and fewer cameras have sync ports, especially at the prosumer level. Eventually, we may be stuck with Wi-Fi but, hopefully, by then cameras will feature better internal clocks, so that an hour in camera A will match an hour in camera B, even if their timecodes aren’t perfectly aligned.
*A video field in NTSC areas lasts just over 1/60 of a second. Audio, meanwhile, is typically sampled at 48 kHz or 96 kHz. That’s 48 or 96 thousand samples per second versus roughly 60 fields per second for video. Unfortunately, most NLEs only let you adjust audio with a precision of one frame or, at best, one field. This means you will never get audio that wasn’t already in sync to line up precisely without resorting to dedicated tools.