Integrating MIDI & The Studio
MIDI has revolutionised the way we make and record music. But it has developed at such an incredible pace that it is easy to lose sight of MIDI's many benefits. In this lengthy article, Craig Anderton provides a lucid explanation of how you can make MIDI work for you - whether you have a small home studio set-up or a full-blown professional system.
It used to be that the recording studio was used pretty much like a camera - as a way to chronicle live events. Entire albums were often recorded in an afternoon, with musicians sometimes not even getting the luxury of a second take - much less a fiftieth take! However, the advent of multitrack recording changed the character of the studio forever. Now musicians had the opportunity to experiment, and make the studio part of the compositional process. And if an experiment didn't work out - well, it's always possible to re-record over tape and, if necessary, splice out individual sections.
The multitrack recorder also shifted the musical balance of power away from the performing musician and towards the composer. Thanks to the multitrack studio, composers with even a little bit of instrumental virtuosity could work out sketches - or sometimes even completed versions - of compositions, without having to hire musicians to play what had been composed. A new breed of musicians embraced the studio as a place of virtually unlimited potential, starting with the Beatles, and continuing through Wendy Carlos (Switched-On Bach was the product of a multitrack studio and early Moog synthesizer), Larry Fast, Klaus Schulze, Jean-Michel Jarre, and many others. With the advent of sophisticated sound synthesizers and samplers, a musician with sufficient imagination and a decent studio can realise an orchestra's worth of sounds.
Now the studio is undergoing another massive change, in the form of MIDI (Musical Instrument Digital Interface). While MIDI was first envisioned as more or less a live performance tool, it has become truly pervasive in the music industry. Educators, live performers, hobbyists, and many others have found that MIDI is the answer to a variety of musical dreams, from simplified notation and transcription to streamlined stage setups. Yet when all is said and done, it may be in the studio that MIDI achieves its greatest impact - an impact which has yet to be fully realised, as those aspects of MIDI that relate most to the studio have only come into focus in the past couple of years.
MIDI is a musically-oriented language spoken by computers, and its vocabulary consists entirely of musically-related data. Recording engineers who don't have a strong musical background may approach MIDI with a certain amount of trepidation, but this is not really justified. In fact, part of MIDI's success is due to the fact that enterprising hardware and software engineers have made MIDI as 'transparent' as possible to the user; with very few exceptions, it is not necessary to delve into MIDI on a bits and bytes level. (A 'byte' is the basic unit of computer data, and is roughly analogous to a word in spoken language.) However, you will need to understand the basics of computer systems, as they form the basis of most MIDI-oriented studios. Fortunately, you'll be able to apply this knowledge to much more than MIDI, and with the continued proliferation of microprocessor-controlled devices in our daily lives, being computer literate is a definite advantage.
Once a computer gets into the act, MIDI starts to reach its fullest potential. Not only can the computer handle all kinds of detail work, it can also act as the 'brains' of your MIDI studio. Let's begin by covering some of the most important computer-related terms.
• Hardware is the computer itself along with any peripherals. Peripherals are devices that attach to the computer - printers, modems, external memory devices, MIDI interfaces - that increase its utility and provide specific functions.
• Software is a set of instructions that tells the computer how to execute a task or series of tasks. For example, software could instruct your computer to memorise every piece of information that comes into its MIDI interface, which is the basis of sequencing. Software can 'teach' your computer to do such things as control your mixer for automated mixdown, change an equaliser's settings at a specific point in a song, act as a word processor so that you can jot down lyrics or take notes on a session, and even do your accounting.
• An interface is the hardware/software link between your computer and the outside world. Since computers are general purpose devices, the interfaces included with most computers are designed for communicating with common peripherals like printers, terminals, modems, etc. The only personal computer that includes a MIDI interface as standard equipment is the Atari ST series, although add-on MIDI interfaces are available for just about any computer in existence. These add-on interfaces are required because MIDI does not use the same kind of signals as those used by printers or terminals; the MIDI interface, which connects between your MIDI system and the computer, translates MIDI messages into a form your computer can understand (and similarly translates computer messages into MIDI).
• Storage devices serve as the computer's memory. Any computer has a limited amount of onboard, scratchpad memory called 'RAM' (Random Access Memory). This is what remembers your moves for automated mixdown, or the notes played during a performance on a MIDI instrument. However, RAM has two limitations. First, when the power goes off, any data stored in RAM is lost. Also, it's not too hard to fill up the computer's internal memory, which brings us to mass storage devices.
• Mass storage devices let you transfer data in RAM to a more permanent medium. Floppy disks (which use magnetic media similar to audio tape, except that they store digital data rather than audio signals) are popular, low-cost mass storage devices that can hold up to about one million bytes (1 Megabyte) of data on a single disk. Newer designs are making floppies with 10 Megabytes of storage a practical reality, and we can expect to see these high-density floppies become commonplace in the years ahead. Until then, hard disk drives serve a similar function, but use a different technology in order to store anywhere from 10 to 360 Megabytes of data.
Certain aspects of MIDI are more pertinent to the studio than others, but first let's get an overview of the MIDI language by seeing what kind of data MIDI produces.
Suppose you have a MIDI keyboard with a MIDI Out connector on the back. As you play the keyboard, a continuous stream of data flows out of this connector, which represents your performance. Here are some of the most important messages that will be sent out of the MIDI port:
Note-On and Note-Off messages. When you press down on a key, a Note-On message specifies which note you played, and also (on suitably equipped keyboards) the dynamics, or 'velocity', of your playing. Similarly, lifting your finger off the key generates a Note-Off message.
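For the programmers among you, here is a rough sketch of what those messages look like at the byte level. This is an illustration rather than anything from a particular product; the note and velocity values are simply examples.

```python
# Minimal sketch of the raw bytes behind Note-On/Note-Off messages.
# Channels are numbered 0-15 internally (shown as 1-16 on most gear).

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note-On message."""
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Build a Note-Off message (a release velocity of 64 is a common default)."""
    return bytes([0x80 | channel, note, 64])

# Middle C (note number 60) played fairly hard on channel 1:
press = note_on(0, 60, 100)
release = note_off(0, 60)
```

A keyboard's MIDI Out is simply a stream of short messages like these, one after another, as fast as you play.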
Pitch bend. Moving the pitch wheel control on the keyboard generates data that represents the pitch wheel's position.
Controllers. A pitch wheel isn't the only real-time controller on the average synth: modulation wheels can add vibrato, and some keyboards provide aftertouch, where pressing down on a key after it reaches the bottom of its travel produces data that can be used to modify the sound (eg. introduce vibrato, or make the sound brighter). Yamaha makes some breath controllers that are pretty useful, and some synths also allow for pedal controllers. Controller information can be put to good use in the MIDI studio, as we'll soon see.
MIDI volume controller. Of the 64 available MIDI continuous controllers, controller 07 has been standardised as a master volume control on MIDI instruments equipped with this feature. This lets you do pseudo-automated mixdown, which we'll cover a little later in the article.
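To make the pseudo-automated mixdown idea concrete, here is a hypothetical sketch of a fade built as a series of controller 07 messages; the step count and levels are arbitrary examples.

```python
# A fade-out expressed as a run of MIDI volume controller (07) messages.

def volume_message(channel, level):
    """Control Change message: status 0xB0 | channel, controller 7, value 0-127."""
    return bytes([0xB0 | channel, 7, level])

def fade(channel, start, end, steps):
    """Generate a series of controller 07 messages that ramp the volume."""
    msgs = []
    for i in range(steps + 1):
        level = start + (end - start) * i // steps
        msgs.append(volume_message(channel, level))
    return msgs

# An 8-step fade from full volume to silence on channel 1:
fade_out = fade(0, 127, 0, 8)
```

Record a stream like this into a sequencer track, and the instrument fades itself on every playback.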
Program Change commands. When you change patches (sounds) on a synthesizer, this is duly noted in the MIDI data stream. One possible application is to change settings on a signal processor along with changes in your synth sound. For example, if you have an ethereal choir patch, some long echoes might be appropriate. But if you have a slap bass sound, then some tight echo might be the best effect for the job. Signal processors that respond to MIDI Program Change commands are extremely useful in the studio. Also note that Program Change commands can be used for 'snapshot-style' pseudo-automated mixdown (described later), particularly with units that don't implement MIDI volume controller 07.
Timing data. Although keyboards usually don't generate any timing data, drum machines, sequencers, and other rhythmically-oriented devices most certainly will. MIDI can handle a variety of synchronisation chores, from synchronising multiple MIDI devices to each other to synchronising MIDI devices to tape or to computers. This aspect of MIDI is also very important to the studio. When working in the studio, probably the most important MIDI timing data is the Song Position Pointer (SPP) message, which indicates how many sixteenth notes have elapsed since the beginning of a composition. Therefore, if a device that sends SPP data (the transmitter) is hooked up to a device that receives SPP data (the receiver), you can start the transmitter anywhere in a song and it will send data to the receiver describing where it is in that song. Within a second or so, the receiver will automatically locate itself to the same place.
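The Song Position Pointer message itself is pleasingly simple: a status byte followed by a 14-bit count of sixteenth notes, split across two data bytes. Here is an illustrative sketch (the helper for converting bar numbers is my own, assuming simple 4/4 time):

```python
# Building a Song Position Pointer message from a sixteenth-note count.

def song_position(sixteenths):
    """SPP message: 0xF2, then a 14-bit sixteenth-note count, LSB first."""
    return bytes([0xF2, sixteenths & 0x7F, (sixteenths >> 7) & 0x7F])

def bar_to_spp(bar, beats_per_bar=4):
    """Sixteenth-note count at the start of a given bar (bar 1 = song start),
    assuming four sixteenths per beat."""
    return (bar - 1) * beats_per_bar * 4

# Locate everything to the top of bar 33 of a 4/4 song:
msg = song_position(bar_to_spp(33))
```

When the transmitter sends this message, the receiver does the arithmetic in reverse and chases to the same spot.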
Alternate MIDI controllers. In the early days of synthesis, you had to be a keyboard player to access all this wonderful stuff. Fortunately, times have changed, and now there are special drum, wind, guitar, and even voice controllers that output MIDI data. While there are some limitations in using these devices compared to using a keyboard, they have opened up the world of MIDI to non-keyboard players.
The sequencer is the centrepiece of most MIDI studios. A sequencer is functionally similar to a tape recorder, but instead of storing audio data a sequencer stores digital data such as that produced by a keyboard or other MIDI controller. In a sense, the sequencer is like a hi-tech player piano. As you play a synthesizer, instead of punching holes in paper, you're punching data into RAM. On playback, that same data comes out of the computer's MIDI Out port and enters into the keyboard's MIDI In. Sequencers are available either as stand-alone units (Roland MC500, Yamaha QX5, etc) or as software programs available for just about any personal computer.
Like tape, sequencers are capable of multitracking. This is because the MIDI specification allows for 16 individual software channels, and data can be 'stamped' with a particular channel number. For example, you might record a bass part in a sequencer and have it play back over channel 1, then record a string part and have it play back over channel 2. On playback, you would set the synthesizer producing the bass sound to receive data from channel 1 only, and set the string synthesizer to receive data from channel 2 only.
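The channel 'stamp' actually lives inside each message: the low four bits of the status byte carry the channel number. A sketch (with made-up note data) shows how a receiver set to one channel effectively filters the stream:

```python
# The channel number is carried in the low four bits of the status byte.

def channel_of(message):
    """Return the message's channel, displayed as 1-16."""
    return (message[0] & 0x0F) + 1

recorded = [
    bytes([0x90, 36, 110]),   # bass Note-On, channel 1
    bytes([0x91, 64, 80]),    # string Note-On, channel 2
    bytes([0x80, 36, 64]),    # bass Note-Off, channel 1
]

# Each synth responds only to messages stamped with its own channel:
bass_part = [m for m in recorded if channel_of(m) == 1]
string_part = [m for m in recorded if channel_of(m) == 2]
```

This is all 'multitracking' amounts to in MIDI terms: one data stream, with each instrument picking out its own channel.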
Fortunately, sequencers can record far more than just Note-On and Note-Off data. Most can record controller data, aftertouch, pitch bend, and other data that adds expressiveness to a part. The ability of sequencers to record this data is also vital when working with some types of MIDI controlled signal processors, as described later.
MIDI can work in a traditional studio context and simply automate certain functions, expand the number of available tracks without having to add a tape recorder, and so on. To this way of thinking, MIDI is like an obedient (well, mostly obedient) servant. But perhaps the most interesting ramification is that using MIDI can lead to a whole new way of working with, and thinking about, music. Let's cover some typical MIDI applications, and how they relate to the context of the studio.
As mentioned earlier, a sequencer works similarly to a tape recorder by recording data from electronic musical instruments, then playing that data back into those instruments. This means that if we can synchronise the sequencer to a tape recorder so that they play back simultaneously and in sync, the sequencer can drive the electronic instruments, while the tape recorder records and plays back audio.
Here's an example of how this would work. Start off by creating a rhythm track on your sequencer. (You might prefer to use a drum machine synchronised to the sequencer, but let's assume that you're using the drum machine solely for its sounds, and that the data driving those sounds was recorded into the MIDI sequencer, either through MIDI-compatible drum pads or programmed via a keyboard synthesizer.) Now overdub a sequenced bass part, then play the chord structure on a keyboard. You might even want to add a synthesizer lead line, some little arpeggiated sound effects, or whatever.
Once the rhythm track is prepared, it's time to dedicate one tape recorder track to hold the sync track that drives the sequencer. While we can't cover everything there is to know about synchronisation in this article, one of the easiest sync methods involves using a special adaptor unit (such as the J.L Cooper PPS1, Tascam MTS30, etc) to record Song Position Pointer data (see above) as a series of audio tones on tape. On playback, you can start the tape anywhere, and the same adaptor will translate the audio tones back into SPP data; this data then drives the sequencer over MIDI and allows for auto-location. This is a great improvement over earlier sync methods, where you usually had to start a song from the beginning each time for reasons too complex and depressing (I can't tell you what a pain synchronisation was prior to SPP) to get into here.
As you play the tape, the sequencer will magically follow along and drive your synthesizers. You can now record vocals, acoustic guitars, and so on into the tape recorder as you listen to the sequenced parts. At this point, you don't really need to think about starting and stopping the sequencer; as you start and stop the tape, the sync convertor produces the appropriate commands to automatically start and stop the sequencer.
Figure 1 ties this all together. Here we have a conventional 8-track tape recorder with a sync-to-MIDI convertor that drives the computer/sequencer combination via its MIDI interface. The sequencer in turn drives a MIDI keyboard, a rack-mount MIDI expander unit, and a MIDI drum machine. The outputs from these devices, along with the tape recorder outputs, feed a conventional audio mixer (with lots of inputs to accommodate all that real-time MIDI gear). The output of the mixer would feed a mastering deck and monitoring system.
Using this type of approach offers many advantages compared to simply recording everything on tape. If your MIDI gear responds to MIDI volume controller messages, you can record that data in your sequencer and use it to control levels during playback. This is the equivalent of instant automated mixdown for your MIDI gear, and with no VCAs to degrade performance. Figure 2 graphically shows what this kind of controller track looks like. Each line represents a message for a specific volume level that is held until the next message appears. The large gap labelled 'constant volume' holds the level specified by the last message; the gap does not mean no level.
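The 'held until the next message' behaviour of Figure 2 is easy to model. In this sketch, a volume track is just a list of (time, level) pairs, with times in beats; the numbers are invented for illustration.

```python
# A controller track as (time-in-beats, level) pairs, sorted by time;
# each level is held until the next message arrives, as in Figure 2.

volume_track = [(0, 100), (16, 127), (32, 64)]

def level_at(track, beat):
    """The level in force at a given beat: the last message at or before it."""
    current = 0
    for time, level in track:
        if time <= beat:
            current = level
        else:
            break
    return current
```

So a query at beat 20 returns the level set at beat 16; the gap between messages does not mean silence, merely 'no change'.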
There are ways to imitate automated mixdown, even if your gear doesn't respond to controller 07 data. You can copy a synth program into several different memory locations, and set each one for a different volume (typically there will be some kind of overall level setting parameter for each program or 'patch'). You can then use Program Change commands to call up the program with the appropriate volume level.
Another advantage of synchronising sequencers to tape is greater fidelity. The output from the MIDI instruments never gets recorded on tape, and is therefore not subject to tape hiss, noise reduction, tape saturation, etc.
Figure 3 incorporates a synthesizer/sequencer combination to provide even more virtual tracks. The Ensoniq ESQ1, which includes a pretty sophisticated onboard sequencer that responds to Song Position Pointers, is synchronised to the same timing signals as the main sequencer. You can therefore record your ESQ1 parts into the ESQ1 sequencer, and free up your main sequencer to record lots of other tracks (the diagram shows only one MIDI expander box, but think of it as representing a bank of MIDI gear, all driven by the main sequencer). And since the ESQ1 operates completely independently of the main sequencer (timing data is not sent over any specific channel, but is 'globally' received), this strategy also provides a way of circumventing MIDI's 16-channel limitation, as the main sequencer can still send out signals on 16 different MIDI channels. Many keyboards, including Roland's D20, E-mu's Emax and Emulators, and Ensoniq's ESQ1, EPS, and SQ80, contain onboard sequencers that lend themselves to this approach. You could also synchronise a drum machine to this setup and gain even more tracks.
What if you are into acoustic, rather than electronic, instrumentation? As it so happens, MIDI is just as applicable to studios that undertake primarily acoustic recording. This is because MIDI sequencers can do a lot more than just play back notes.
A variety of programmable MIDI-compatible signal processors can store a collection of settings as a program, with each program selected by a MIDI Program Change command. Consider the setup shown in Figure 4. Here we record Program Change commands rather than notes into the sequencer, and send these changes to two different MIDI signal processors. Individual processors could, of course, be inserted in line with individual mixer channels to process just one instrument. You might want to use a programmable EQ, for example, to change timbre on a guitar part when it switches from lead to rhythm. Note that signal processors can also tune into any one of the 16 different MIDI channels, so you can record different Program Change commands on different MIDI channels in order to drive several signal processors independently of each other.
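As a sketch of the Figure 4 idea: a Program Change is only two bytes, and by stamping different channels you can address several processors independently. The program numbers below are pure invention, standing in for whatever 'lead EQ' and 'long reverb' settings you might have stored.

```python
# Addressing two MIDI signal processors independently via Program Change.

def program_change(channel, program):
    """Two-byte Program Change: status 0xC0 | channel, then program 0-127."""
    return bytes([0xC0 | channel, program])

# At the guitar solo, switch a (hypothetical) programmable EQ listening on
# channel 3 to its 'lead' curve, and a reverb on channel 4 to a longer program:
cue = program_change(2, 12) + program_change(3, 7)
```

Record a handful of cues like this into the sequencer, and every mix pass changes effects in exactly the same places.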
As useful as changing effects programs via Program Change can be, Lexicon, ART, Eventide, AKG, Korg and several other manufacturers make sophisticated signal processors where you can actually change parameters (delay time, equalisation, pre-delay, etc) under MIDI control using continuous controller commands (such as aftertouch, pitch bend, etc). You could therefore record continuous controller data in your sequencer which, when played back, could extend the delay time on certain key musical phrases, then revert to a shorter delay time for the rest of the track. Let's face it, most of us would love to have 14 hands (and the brain power to control them) so that we could tweak signal processor settings during the mix - now MIDI makes it possible.
To illustrate how one would take advantage of this, consider a situation where a sequencer is synchronised to tape. Let's say you want to vary the reverb time throughout the course of a song. First, you would set up the signal processor so that the delay time would respond to some specific MIDI data, perhaps note value (with higher notes giving shorter reverb times) or pitch bend, depending on whether you'd prefer to 'play' the reverb from a keyboard or from the pitch bend wheel. Put the sequencer in record mode, and roll the tape. Play the keyboard or pitch bend along with the tune, and record this data into the sequencer. On playback, the sequencer will play all your 'moves' back into the reverb.
MIDI can also provide automated mixdown capacities for audio signals. Again referring to Figure 4, note that we've inserted an automated mixdown unit (such as the Twister, J.L Cooper MixMate, Iota MIDI Fader, etc) in between the tape recorder outputs and mixer inputs. These units typically contain a bunch of VCAs that respond to some kind of MIDI message. Actually, all of the units mentioned above attack the MIDI automation problem in different ways; some use a dedicated sequencer or computer to record your mixing moves, some respond to controller data recorded into one or more tracks of your existing sequencer, and so on. Space does not permit a comprehensive discussion of the merits and drawbacks of these various approaches, but the bottom line is pretty clear: now you can automate your mixer for a fraction of the cost of buying a fully automated console.
As with driving signal processor parameters, the basic idea is for the sequencer to record your mixing moves, and send them to the VCAs during mixdown. Note that one big plus of this approach is easy editing - just as easy as editing notes on a sequencer, in most cases. But I do need to add an element of caution. There is so much more to a mix than adjusting levels that you'll seldom find you can come back to a mix several months later, insert a disk containing your moves into a computer, and pick up your mix from where you left off. Rather, automated MIDI mixing is most useful for memorising all those difficult moves - dropping a fader for a few milliseconds to kill one bad note, and so on. As with MIDI-controlled signal processing, the main purpose of MIDI mixing is to multiply the number of hands you have available.
By the way, using continuous controllers is not the only way to perform a mix. Akai's MPX820 does 'snapshot' mixes, with each mix setting called up by a Program Change command. However, the MPX820 does something really clever. Instead of having fader levels, EQ settings, and other programmable parameters jump radically to their new values upon receipt of a Program Change command, they fade to the new value over a programmable period of time. Figure 5 shows three track levels changing from one set of levels to another over a programmable time period.
Although so far we've talked mostly about controlling all this gear from a keyboard or other musical instrument, there are alternatives which may be more suited for recording applications. Also, you may want to control your system in real time, rather than storing everything in a computer.
The Yamaha MCS2, for example, represents one method of compact system control. This is a little 'do-all' control box that includes a pitch bend wheel, modulation wheel, two assignable controls, eight Program Change buttons, three assignable switches, and inputs for foot, switch, and breath controllers. The MCS2 takes up far less space than a keyboard, and can sit comfortably by your side when you sit down at a mixing console. There are also a number of MIDI footswitch units available for guitarists that do everything from sending out Program Change commands to transmitting strings of System Exclusive data. (One extreme example of using Sys Ex commands might be to send codes to a synth that would let you spell out messages in its display, like 'SOLO IN 4 BEATS' or whatever.) The mind boggles and, of course, so does the pocketbook if you get into this too far!
I've found some other uses of MIDI in the studio that are a little more esoteric. Since most studios seem to have a DX7 or other FM synth sitting around, and since FM synths generate sine waves, you can use this as a signal source for non-critical calibration procedures. Program a patch on your FM synth that is simply a single, pure, high-level sine wave. (A TX802 is your best synth choice since it has eight separate outputs; this lets you feed eight different sine waves, each controlled by a different MIDI channel, to eight different tape deck inputs.) Musically speaking, this is an incredibly boring sound. But drive the synth with a 'Tape Recorder Alignment Sequence', and you end up with a very useful way to check your gear. An alignment sequence will typically play back about 30 seconds each of various tones through the various outputs, which is just perfect for doing alignment on an 8-track tape deck. Try it yourself - it's a very handy technique.
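Such an alignment sequence is trivial to generate. The following sketch produces a timed event list - 30 seconds of tone per channel, one channel per tape track - with A440 (note 69) used as an example test tone:

```python
# Sketch of a 'Tape Recorder Alignment Sequence': one sustained sine-wave
# note per tape track, 30 seconds each, each on its own MIDI channel so a
# multi-output FM synth can feed eight tape inputs in turn.

def alignment_sequence(channels=8, note=69, seconds=30):
    """Return (time-in-seconds, message) pairs for the whole sequence."""
    events = []
    t = 0
    for ch in range(channels):
        events.append((t, bytes([0x90 | ch, note, 127])))           # tone on
        events.append((t + seconds, bytes([0x80 | ch, note, 64])))  # tone off
        t += seconds
    return events
```

Load the result into your sequencer once, and tape alignment becomes a one-button job.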
Another fun trick I learned from SOS editor Ian Gilby is to make up an instrument test sequence. This runs through all the MIDI note numbers, sends out varying ranges of controller data and aftertouch, and runs through Program Changes to test out an instrument's capabilities. You can also send out a single steady A=440Hz note to all your instruments, over all 16 channels, when you want to do tuning.
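The bones of such a test sequence might look like this sketch - step through every note number, then fire a tuning A on all 16 channels (the velocities are arbitrary):

```python
# Sketch of an instrument test sequence: exercise every MIDI note number,
# then send a steady tuning A (note 69, concert A=440Hz) on all 16 channels.

def note_test(channel=0):
    """Note-On/Note-Off pair for each of the 128 MIDI note numbers."""
    msgs = []
    for note in range(128):
        msgs.append(bytes([0x90 | channel, note, 100]))
        msgs.append(bytes([0x80 | channel, note, 64]))
    return msgs

def tuning_note():
    """One sustained A440 Note-On per channel, for tuning the whole rig."""
    return [bytes([0x90 | ch, 69, 100]) for ch in range(16)]
```

A fuller version would add controller sweeps and Program Changes, as described above, but the principle is the same.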
And of course, a computer can do a lot more than just control MIDI gear. As soon as I go into my studio, the first thing I do is turn on my Macintosh, and with good reason. Its usefulness is enhanced with 'desk accessory' programs such as MacWrite, which lets me take notes on sessions, patches, and lyrics; I also have a couple of nifty desk accessory programs from Austin Development. One is a MIDI Program Change transmitter; just call it up, specify a channel and Program Change number, and there you have it - there's no need to go over to your synth. The other is a TX81Z patch filer that saves and loads patches to or from my Yamaha TX81Z expander without having to quit the main program. And as you might expect, synth-specific voicing/librarian programs are extremely helpful when editing and organising your sound library.
MIDI has shown itself to be a remarkably resilient and flexible specification, and there are plenty more developments on the horizon. Recently, MIDI Timecode (MTC) became part of the official MIDI specification, and this may have a greater impact on the studio than any other part of the MIDI protocol.
The significance of MIDI Timecode becomes clearer if we first understand the significance of SMPTE timecode. SMPTE timecode, developed by the Society of Motion Picture and Television Engineers, is digital data that can be recorded on tape. This code identifies segments of tape on a frame-by-frame basis (the absolute number of frames per second varies, but in the USA the rate is 24 frames per second for film, and 30 frames per second for black and white video). Both audio and video devices can 'lock' to SMPTE timecode, and thereby maintain perfect synchronisation.
Although MIDI provides synchronisation, it is always related to music and expressed in measures and beats. However, video production is time-based - if you want to synchronise some sound effect with visuals of a spaceship going through a star gate, you'd identify the exact time (hours, minutes, seconds, and frames) at which the gate opens instead of specifying that it happens 436 measures, two beats from the beginning of the film. MIDI Timecode provides a way for MIDI instruments to react to SMPTE timecode data; one useful application would be to trigger sound effects stored in a sampler in accordance with an event list of SMPTE timecode 'cues'. In addition to specifying how SMPTE times should appear over MIDI, MIDI Timecode provides for a standardised way of exchanging event lists between pieces of gear.
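The difference between the two addressing schemes is easy to see in code. A SMPTE address reduces to an absolute frame count, which is the kind of number time-based gear cues against; the star-gate cue time below is an invented example.

```python
# Time-based addressing: a SMPTE hours:minutes:seconds:frames address
# reduces to an absolute frame count from the start of the programme.

def smpte_to_frames(hours, minutes, seconds, frames, rate=30):
    """Absolute frame number for a SMPTE address at the given frame rate."""
    return ((hours * 60 + minutes) * 60 + seconds) * rate + frames

# The moment the star gate opens, at 00:12:34:15 (30 frames per second):
cue_frame = smpte_to_frames(0, 12, 34, 15)
```

MIDI Timecode carries exactly this sort of address over the MIDI cable, so a sampler's event list can fire its sound effect on the right frame regardless of tempo or time signature.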
As of today, MIDI Timecode is very new and not yet used much in studios. However, in addition to the obvious applications mentioned above we can expect some other embellishments, such as signal processors that change settings or programs via MIDI Timecode, and perhaps even tape recorder control devices that could punch in, punch out, rewind, and so on using data received over the MIDI bus. Perhaps one day, tape recorders will have a MIDI connector on the back for just such uses.
What has even greater implications is that the MIDI studio is still a relatively recent development, and has yet to be exploited to anything close to its full potential. It will also re-define some roles, as the following story illustrates.
Recently, a record company with which I've worked had a project where the artist submitted two Macintosh disks of sequences, recorded in a MIDI studio using Mark Of The Unicorn's Performer sequencer and a modest collection of gear. The disks were taken to a studio containing a heavy complement of MIDI equipment, and the record producer acted more or less as the 'voice selector' - in other words, the disks contained the notes, but the sounds to go with those notes still needed to be chosen. Given that the studio had more gear than most individuals, this approach made a lot of sense. The artist could work out a complete, tested arrangement at home, then take advantage of some expensive sound generators for the final production.
In a case like this, the studio engineer's expertise has less to do with miking and acoustics than with knowing how to make all the MIDI gear work smoothly and efficiently (no small feat, I might add). Some of the parts were dumped to tape to allow for using the same sound generator for more than one part, but a lot of the instruments were recorded directly to the 2-track master.
This points up one of the paradoxes of the MIDI studio: projects cost less to do, yet the quality is often higher than a traditional studio. Most of the time-consuming pre-production and arranging work can be done in a small studio or even the composer's home, thus saving on studio bills. Essentially, the studio becomes a mixdown and instrument selection suite, with many of these tracks going directly onto the master to maintain high quality sound.
Tape is not yet obsolete by any means (although it will be should the cost of computer memory plummet); it's such a high density storage medium that it is the only cost-effective route to recording acoustic instruments. But once you've synchronised a MIDI sequencer to your recorder, got hold of some MIDI-controlled signal processors, and taken advantage of automated MIDI mixing, you'll never go back. MIDI is a powerful adjunct to any studio - and don't forget that the best is yet to come.