Magazine Archive


Music & Pictures (Part 4)

We've now reached Part 4 where Robin Lumley surveys the technology of syncing sound and vision.

Musician, producer and film music composer Robin Lumley continues his six-part series designed to take the mystery out of writing and recording music for films and television.

Control room of West Heath Studios.

Last month, we discussed the ins and outs of actually setting about recording music to picture, albeit in a relatively primitive way; that is, 'wild' in film-makers' terms. In other words, recording the music to the supplied picture without any modern technology (SMPTE, MIDI, Q-Lock etc). If you were to re-read the February edition of this series, and also the article featuring Rod Argent and his 'Soldiers' series, you'd find that Rod had actually done most of his music to picture 'wild'. This month, I'm going to explain how to use the current state-of-the-art technology to record music, dialogue and sound effects, and cover the evolution of the various methods, which are, to all intents and purposes, different routes to the same end result.

Let's take it as read from last month's article that all the basic relationships between music writer and film director hold true, whether one is dealing with stone-age analogue equipment or the very best of modern technology.


Obtaining absolute accuracy of synchronisation between two different sets of spinning spools, be they video tape, multitrack recording tape or film, and being able at any moment to make them run together from point A to point B, repeatably over an infinite number of takes, requires some kind of device to weld them together electronically.

Some years ago, to give you a potted history, NASA, the American space exploration organisation, developed a system of tagging each individual frame of film or video tape with its own inherent number. For obvious reasons, this was an essential, advantageous idea, especially when later analysing pictures from the Moon, or information from various space probes, manned or unmanned. So a digital code was devised, ie. a code, generated by a 'black box', that could be burnt into each individual frame of picture.

This finally evolved into the SMPTE code, which has since become the standard timecode syncing system in the video/film world. The initials stand for the Society of Motion Picture and Television Engineers. (I'm aware that you may find some of this information not only in past issues of this worthy periodical, but in many others also.)

SMPTE is a digital code system which counts picture frames, seconds, minutes and hours. Its first use, so far as I am aware, was to make 48-track recording possible, by burning SMPTE code into one track of each of two 24-track tape recorders and running them in sync, which left 46 tracks free for audio. In the days of big budget recording during the late 1970s, this was fine, but current economic restraints have pushed the recording industry into other areas, outside the scope of these remarks. Of that, more later.
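To make the frames/seconds/minutes/hours counting concrete, here is a minimal sketch, written by me purely for illustration (it is not part of any real SMPTE hardware), which converts an absolute frame count into an SMPTE-style timecode and back. It assumes the 25 frames per second European (EBU/PAL) rate; American NTSC runs at roughly 29.97 fps and uses 'drop-frame' counting, which is omitted here.

```python
FPS = 25  # frames per second; EBU/PAL rate assumed for this sketch

def frames_to_timecode(total_frames):
    """Return an 'HH:MM:SS:FF' string for an absolute frame count."""
    frames = total_frames % FPS
    seconds = (total_frames // FPS) % 60
    minutes = (total_frames // (FPS * 60)) % 60
    hours = total_frames // (FPS * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def timecode_to_frames(tc):
    """Invert the above: 'HH:MM:SS:FF' -> absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 3600) + (m * 60) + s) * FPS + f

# One hour of 25 fps picture is exactly 90,000 frames:
print(frames_to_timecode(90000))  # -> 01:00:00:00
```

Because every frame carries a unique number like this, any two machines reading the code always know precisely where they are relative to each other.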

If SMPTE code were recorded onto one track of a video tape recorder and simultaneously onto one track of, say, a 24-track audio multitrack recorder, the two machines would then have a means to operate in complete synchronisation. This makes life very easy if one is attempting to record sound to picture, as the visual and audio components are irrevocably locked together, and performing music or whatever becomes an absolute doddle, no matter how many times one re-runs the tape/film or re-overdubs.


That's one way, but there are others... take the Audio Kinetics Q-Lock system, for example, which has virtually become the norm in large studio complexes for controlling the syncing of recorded sound to picture. It's a similar sync idea in that, say, a Studer 24-track can be linked to either a VTR or a film transport, and from one control panel the sound recording medium (in this case a 24-track tape machine) and the visual medium (VTR or film transport) will run forever locked together. So all the 'hit points' and their musical equivalents are fixed together whenever the machinery is run, making continuous overdubbing a very easy operation. But if, during the recording, a director wishes to change something in the picture (ie. perform a last-minute edit), locating the edit point can be difficult, because SMPTE is what we call a Longitudinal Timecode, or LTC. If you're inching your way along a tape to find an edit point, LTC is difficult to read, just like any other audio track when moving very slowly.

A system called the GTC Editon solves this problem quite nicely by using Vertical Interval Timecode, or VITC, which is recorded on two lines at the top of each video frame. Thus one can go from still-frame mode, inch along, or run at speeds up to 50 times normal playback without any slippage of synchronisation occurring at all. The Editon is also very simple to use, with fewer knobs and buttons to worry about, although it is not quite as accurate, specification-wise, as a Q-Lock.

The synthesizer suite at CTS Studio 4, Wembley, was purpose-built for soundtrack post-production work.

Meanwhile, the Americans have recently come up with another syncing system called Shadow, about which I know very little, as I've not seen one in use or read any reports. Apparently, it's a basic synchroniser designed to lock up any two machines, and several units can be slaved together to provide control over a larger number of machines, video and/or audio. Not to be outdone, the Japanese Otari company are bringing out their own version of a synchroniser, which is supposed to be very much cheaper.

So it seems that a runaway effect will occur, with all the tape machine manufacturers eventually producing their own sync systems, and it will be fun to see how the various products end up being interfaced. It's much like the situation with music synthesizers before MIDI: different manufacturers' products would not 'talk' to each other until MIDI was agreed upon as a standard by everyone concerned.


Over here in the UK, the BBC Engineering Design Department have been grappling with the sync problem in their own way for many years now, with a great degree of success. Some ten or twelve years ago, the BBC put together their own video dubbing room at TV Centre, Wood Lane. They called it Sypher, which stood for Synchronous Post-Dub with Helical Scan. It used a U-matic video cassette machine and a Studer 8-track multitrack tape recorder locked together, and I believe this original configuration is still in use, although the Beeb has made great technical strides since then, using 24-track machines.

The synchronisers themselves are also BBC-designed, using a separate sync device and controller for each additional slaved machine, be it audio or visual. Doing things this way is obviously more expensive than having a multiple-machine control system (where does your Licence Fee go?!), but there is greater flexibility of use, as you can join up any number of machines in any conceivable configuration according to the job that needs to be done.

Recently, two new microprocessor synchronisers have appeared, called MAXIM and FASOR; the first is intended for video dubbing operations, the second for locking tape machines to film transport systems. MAXIM includes something like 100 timecode memories, which are used for storing events such as start and stop cue points, or dropping equalisers and other outboard audio effects in and out again when required. Both systems use a pulse lock, taking timecode from a master machine, which sets up the sync wherever it is required. Should picture and sound become separated through a technical breakdown or for any other reason, a special re-sync device is fitted to re-establish the master timecode.
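The timecode-memory idea is easy to picture: each stored event pairs a cue point with an action, and the system fires the action as the master code counts past it. A small sketch of my own construction, assuming 25 fps (this is not MAXIM's actual firmware, and the event names are invented):

```python
FPS = 25  # EBU/PAL frame rate assumed for this sketch

def to_frames(tc):
    """'HH:MM:SS:FF' -> absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return (h * 3600 + m * 60 + s) * FPS + f

# Each 'memory' pairs a timecode with an action (hypothetical examples):
memories = [
    ("00:01:00:00", "EQ in"),
    ("00:01:30:12", "EQ out"),
]

def events_due(previous_tc, current_tc):
    """Return the actions whose cue points lie between two successive
    readings of the master code, ie. those that should fire now."""
    lo, hi = to_frames(previous_tc), to_frames(current_tc)
    return [action for tc, action in memories if lo < to_frames(tc) <= hi]

# As the master code rolls past the one-minute mark, the EQ drops in:
print(events_due("00:00:59:20", "00:01:00:05"))  # -> ['EQ in']
```

Comparing frame counts rather than matching exact timecodes means a cue is never missed, even if the machine reads the code in steps of several frames at a time.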

So you can see, there is a veritable plethora of electronics now available for making the music composer's and performer's life easy when playing to picture. Many of the large, established multitrack recording studios worldwide now feature a sync system in order to attract film and television post-production work: for example, Abbey Road, or the CTS complex at Wembley, which has facilities to take large symphony orchestras or complex synthesizer set-ups for film music recording. Many blockbuster movie soundtracks have been recorded there.


Studios generally are waking up to the fact that this area of work can be extremely lucrative. It can help to replace some of the down-time they've suffered over the past few years, as performers use hi-tech home equipment to save studio-time money in these days of smaller budgets and economic restraint.

Some new studios, like Bob Howes' West Heath facility in Hampstead, have been built specifically for audio-visual use from the outset, whilst others have had Q-Lock or similar retro-fitted. But after the initial fairly hefty price tag of equipment purchase and interfacing, attracting music-to-picture recording work is downhill all the way, as the amount of work needing to be done is steadily rising... lots more movies, television programmes and promo videos, for example.

Of course, only the best and most technically competent outfits will survive, and word-of-mouth between directors, producers and musicians will soon see to that. Very high competence is therefore necessary in the operation of such studios, as the artists will not be particularly bothered with the operational side of syncing; they'll just want it to work properly, smoothly and efficiently, without necessarily understanding every nut, bolt or microchip.

Naturally, being educated in the use of such equipment expands the creative horizons of the user. The more understanding one has of the operational capabilities of any equipment, the more creative its use will be, and it will be down to the operating staff at post-syncing studios to gently suggest to clients just what is possible. It's much like a purely audio studio when a new piece of outboard gear comes along: it's down to the engineer to help the client use it - at least at first.

It's all becoming very exciting, and I hope that some of you at least will have the opportunity to be let loose on it all. However, it's a cut-throat game, and making inroads is very hard. We'll have a look at some possible modes of entry next month.


Sound On Sound - Copyright: SOS Publications Ltd.
The contents of this magazine are re-published here with the kind permission of SOS Publications Ltd.


Sound On Sound - Mar 1986

Donated & scanned by: Bill Blackledge




Music & Pictures

Part 1 | Part 2 | Part 3 | Part 4 (Viewing) | Part 5 | Part 6

Feature by Robin Lumley


