Getting into Video (Part 1)
Is video a mystery to you? Would you like to dip a toe into its murky waters, or are you worried about getting out of your depth too quickly? In this new series David Mellor explains the essentials of synchronising audio to video, and finds out what it is like to upgrade a home studio to audio post-production standard.
Have you bought your satellite dish yet, and joined the most talked-about revolution in television's recent history? Probably not, simply because you're unsure of the quality of the programmes on offer at the moment. But despite this widespread uncertainty, the number of TV channels - terrestrial broadcast, satellite and cable - is on the increase, and this means that there will be increased opportunities in all areas related to TV and video production.
Although some industry die-hards would tell us that more programmes equals less quality, programme standards have every chance of continuing to improve. Why? Because there is a vast pool of untapped talent itching for the chance to get a piece of the action, and eager for the opportunities that the new TV channels will offer.
Most people are fairly aware of the importance of music in TV production; so much drama is acted out to the strains of background music that it makes you wonder how we get along without it in real life! Since the TV industry is expanding, it follows that there is going to be more business for people working in the field of providing audio for video, and there must be more than a few Sound On Sound readers amongst those intent on getting a share of this work - either as producers of music for TV programmes and commercials, or as audio post-production facility operators.
Audio post-production is where all the elements of a video soundtrack come together - dialogue, music and sound effects. The importance of sound effects is often underestimated by the lay person. The plain fact is that it is so damn difficult to get the action and dialogue right, that getting all the sound effects right at the same time would be impossible. In any piece of film that you see, dramatic or documentary, most of the sound effects will have been recorded separately (or taken from a sound effects library) and added to the soundtrack in post-production.
Of course, adding sound effects is not without its problems. The main one as far as the consumer is concerned is that the same effects tend to get over-used. The ultimate insult to the viewer, shown recently, was some footage of a very sparsely attended overseas football match. The almost non-existent crowd was heard to cheer with the full gusto of Wembley Stadium at Cup Final time - in English! If you listen closely to the soundtrack of news films, you will soon recognise the same effects cropping up time and time again. So much for journalistic standards!
On the benefit side of the equation, appropriate sound effects can really bring a dull piece of film or video to life. They can be every bit as effective as music, sometimes even more so.
When someone speaks in real life, you see their lips move at the same time as you hear their words. But when speech or anything else is recorded on film or video tape, vision and sound take different routes through the machinery. If they don't end up together in perfect sync, the way they started, the result will be a meaningless jumble of information to a viewer.
The earliest method of synchronising a sound track to film's moving image (or rather the earliest method of current relevance) is the use of an optical soundtrack. Many cinema films are still shown with optical soundtracks, because of the ease and comparative cheapness of duplication. The reproduction of sound via an optical soundtrack is very straightforward: to record the sound, a beam of light is projected onto unexposed film, the width of the beam being modulated by the incoming sound. On playback, a light is directed through the developed film onto a photo-sensitive cell, which converts the varying pattern of light back into sound.
It's not so much the recording process that is important, it's the physical layout of the system: the sound film is identical in dimensions to the picture film. This means that any length of picture film has a corresponding length of separate sound film, sprocket for sprocket - you could measure it out with a ruler. It is impossible for the sound film to get out of sync as long as it is driven via a mechanical or electrical link from the picture film. The initial sync point is determined by the familiar clapper board, which supplies both audio and visual markers.
When a film is edited, it is a straightforward matter to count off the same number of frames of sound film as are required for each piece of picture film. The final print will have the picture and sound track on the same length of film for convenience, with the optical soundtrack running in a strip down the side of the film frames.
More advanced than optical film recording is magnetic film recording. This system is still in common use as a production medium in the major film studios around the world, and is also used for high quality cinema presentation.
Magnetic film is exactly what it says - the same base material as photographic film (with the same dimensions) but coated with magnetic iron oxide rather than light-sensitive material. If the shoot is on 16mm film stock, then 16mm 'mag' film (as it is abbreviated) will be used. If it is on 35mm film, then the mag film employed is 35mm wide.
In conventional film production, every sound recording, whatever its origin, is transferred to mag film. This includes dialogue, music and sound effects. A typical full-length feature film might require several hundred separate reels of mag film for the production of its soundtrack. Multitrack recording and playback is possible by using a number of mag film recorders, all locked together in sync with the picture by reference to the sprocket holes in the film. This is actually a very elegant technique. Accurate synchronisation of dialogue and effects can be performed by shifting the mag film forwards or backwards by the appropriate number of sprockets - a very simple and easily understood method.
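The sprocket-shifting arithmetic above is simple enough to sketch in a few lines of code. This is purely illustrative - the function names are invented, not taken from any real editing tool - but it assumes the standard perforation counts: 35mm film has four sprocket holes per frame, 16mm has one.

```python
# Sketch of sprocket-based sync arithmetic. Perforation counts are the
# standard ones: 35mm film has 4 sprocket holes per frame, 16mm has 1.
# Function names are illustrative only.

PERFS_PER_FRAME = {"16mm": 1, "35mm": 4}

def frames_to_sprockets(frames, gauge):
    """Number of sprocket holes in a given number of picture frames."""
    return frames * PERFS_PER_FRAME[gauge]

def resync_offset(picture_frame, sound_frame, gauge):
    """How many sprockets to shift the mag film so sound matches picture.

    A positive result means the sound is early and the mag film must be
    advanced; a negative result means it must be pulled back.
    """
    return frames_to_sprockets(picture_frame - sound_frame, gauge)

# A sound effect landing 3 frames early on 35mm mag film
# needs a shift of 12 sprockets:
print(resync_offset(3, 0, "35mm"))   # 12
```

Note that on 35mm, with four perforations to the frame, the mag film can in principle be nudged in quarter-frame steps - finer than the picture itself can resolve.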
The main drawback of mag film is the sound quality. Mag recorders are prone to noise and azimuth errors. Fortunately - though I'm not sure whether that is the right word - the sound quality of the optical track to which the sound will eventually be transferred is actually inferior to that of mag film, so any loss in quality at the post-production stage is usually pretty well masked. Even so, it stands to reason that the higher the quality that can be maintained in the early stages of film or video sound production, the better the final result will be.
The course of development of video recording runs parallel to that of film, although it starts at a later date. Eventually, methods of editing film and video, and synchronising the soundtrack, will probably become completely computerised and pretty well identical. But at the moment, the film and video worlds are still largely separate.
Before video recording was invented, all TV programmes were broadcast live - therefore all the dialogue and effects had to be done in real time. Early video recorders were simply used to record a show, in a single live take, for later transmission. The next stage in development, which seems very obvious now, was to use the video tape as an active production tool. By editing the tape, it became possible to record a show in segments, with the aim of improving the quality of the final programme.
But back in the rock and roll years - the Fifties - editing video tape was not easy. The first, and most primitive, technique used was physical cutting and splicing of the tape. The difficulty with this method is that you can't see the picture that is encoded on video tape as you can on a length of film. An edit made in the wrong place would of course spoil the director's intentions. Also, if the splice was not made at exactly matching points in the video waveform, it would cause the TV monitor to lose sync, momentarily breaking up the picture. To avoid this problem, a liquid containing fine magnetic particles could be painted onto the tape to show up the structure of the TV signal, but this was still not as accurate as editing film.
In a later step towards easier video editing, a control pulse was recorded on to the video tape - a regular pulse on the tape intended to work like the sprocket holes in film. Dub edits could now be made by locking two video machines together and assembling the finished programme from a series of takes. Unfortunately, although the control pulses could keep the machines running in sync, they gave no indication of absolute positions on video tape. Editing was therefore still a very approximate technique.
A further refinement came in the late Sixties, when the Society of Motion Picture and Television Engineers devised a system whereby each frame of the TV picture could be allotted a unique identifying number - just like the frame numbers of a film. This was, and still is, known as SMPTE timecode. With timecode available, it was possible to devise video editing controllers with the ability to make joins accurate to the exact frame, and editing video tape then became as easy as editing film.
With timecode established for video editing, it was but a small step to synchronise audio machines to SMPTE timecode - or its European equivalent, EBU (European Broadcasting Union) timecode.
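Timecode labels each frame in the form HH:MM:SS:FF (hours, minutes, seconds, frames), so converting between a timecode value and an absolute frame count is straightforward arithmetic. Here is a minimal sketch for non-drop-frame code, where the frames-per-second figure would be 25 for EBU (European television) or 30 for non-drop SMPTE; the drop-frame numbering used for 29.97fps colour NTSC is deliberately left out for simplicity.

```python
# Minimal non-drop-frame timecode arithmetic. fps is 25 for EBU
# timecode or 30 for non-drop SMPTE. Drop-frame counting is omitted.

def timecode_to_frames(tc, fps):
    """Convert 'HH:MM:SS:FF' to an absolute frame number."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_timecode(n, fps):
    """Convert an absolute frame number back to 'HH:MM:SS:FF'."""
    f = n % fps
    s = n // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

print(timecode_to_frames("00:01:00:00", 25))   # 1500
print(frames_to_timecode(1501, 25))            # 00:01:00:01
```

It is this unique frame numbering that lets an editing controller, or an audio synchroniser, locate any point on the tape exactly - something the earlier control pulses could never do.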
Current video production technique requires only a guide audio track to be recorded onto the one-inch video master tape. Everything else can be added later. When the audio production is complete, it can be synced up to the original video, and the sound track 'laid back' onto the audio tracks of the video master, ready for transmission. Let's look at this procedure in more detail...
If we could follow a typical video production - say a TV commercial - from the end of shooting and video editing to a broadcast-ready state, we would observe the following processes:
1. The material on the one-inch master video tape, with timecode and a guide audio track, is copied onto a U-matic or VHS cassette. In the process, the timecode numbers are 'burned in' to the picture so that each frame is clearly numbered onscreen. The cassette is then taken to an audio post-production studio.
2. Dialogue is re-recorded by voice-over artists onto a multitrack tape recorder running in sync with the video cassette - timecode is recorded on one of the multitrack's audio tracks to enable the recording to be accurately synchronised to the visuals.
3. Suitable music is selected. This is transferred to quarter-inch audio tape and edited to the correct length. It is then transferred to the multitrack so that it starts and ends at the correct times. This could be done with reference to timecode, or by programming the synchroniser (which locks the video and multitrack tape recorders together) with an 'event' to start the stereo tape machine at the correct timecode value.
4. Sound effects are chosen from a sound effects library (of which there are several available on disc, tape or CD). Once again, these can be synchronised exactly to the picture using timecode, or a timecode-cued machine start.
5. The finished recording is mixed from the multitrack onto a stereo quarter-inch tape. The timecode on the multitrack is transferred at the same time onto a special centre track on the quarter-inch tape.
6. Back at the video facilities house, the stereo tape is laid back onto the master video, still referenced to the original timecode so that the soundtrack remains in sync with the visuals. Voila! Ready for broadcast.
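The 'event' programming mentioned in step 3 boils down to a simple idea: the synchroniser reads the incoming timecode from the video machine and fires a machine start the moment the code reaches a programmed value. The sketch below illustrates that logic only - the class and method names are invented for the purpose, not taken from any real synchroniser.

```python
# Illustrative sketch of timecode-triggered events: a list of
# (frame number, action) pairs is checked against each incoming
# timecode frame. Names are invented, not from any real synchroniser.

class EventList:
    def __init__(self):
        self.events = []   # list of (frame_number, action) pairs

    def program(self, frame, action):
        """Programme an action to fire at a given timecode frame."""
        self.events.append((frame, action))

    def chase(self, current_frame):
        """Called once per incoming timecode frame; fires any due
        events and returns how many fired."""
        due = [action for frame, action in self.events
               if frame == current_frame]
        for action in due:
            action()
        return len(due)

events = EventList()
events.program(750, lambda: print("start stereo machine"))

# Simulate the video machine rolling through timecode frames:
for frame in range(740, 760):
    events.chase(frame)
```

A real synchroniser also has to cope with tape machines taking time to lock up, so in practice the start command is issued a pre-roll ahead of the programmed point - but the comparison against timecode is the heart of it.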
In forthcoming articles in this series I shall describe this process in more detail, explain how you can set yourself up to carry out audio post-production work, and show how to record music to picture.
Feature by David Mellor