Mixing With MIDI (Part 2)
Mixing in the MIDI Age
This two-part feature covers both the art and technology of mixing. The first article on page 48, 'Mixing Essentials', covers audio mixing in general, from a more basic level, and applies to any sort of studio mixing task. The second part, 'Mixing With MIDI', starts on page 54 and covers techniques made possible by MIDI, specifically automated mixing and sending MIDI instruments direct to your master tape. Craig Anderton is your guide.
While the goal of mixing hasn't changed over the years, mixing technology has. Craig Anderton provides a rundown of the latest techniques you can use to help create the perfect mix.
MIDI has slowly but surely worked its way into all aspects of music. From the moment you first capture an idea in a MIDI sequencer to automating a mix via MIDI, this ubiquitous technology is transforming the studio as radically as it has transformed the world of composition and live performance. MIDI's contribution to mixing occurs in five main areas:
1. Virtual tracking. A MIDI sequencer synced to tape acts just like a multitrack recorder in that it stores parts for later playback, and instruments driven by the sequencer can go directly to your 2-track master (synchronised with any existing tape tracks). Virtual tracking places more demands on a studio, though; you generally need lots of inputs on your mixing desk to accommodate not just tape outputs, but instrument outputs. While a 12-channel mixer works fine with an 8-track recorder, all it takes is one drum machine with eight individual outputs and two stereo synthesizers to use up 12 inputs.
You'll also need more signal processors. With multitrack tape, you can record a bass track with, say, chorusing, then record a lead track using the same signal processor set for long echoes. With virtual tracks, where both the bass and lead sounds will feed the mixer simultaneously, in real time, you'll need a separate processor for each sound.
2. Automated mixdown via continuous controller 7. Most modern synthesizers allow for master volume control by MIDI continuous controller 7 messages (check the synth's MIDI implementation sheet to determine whether a particular instrument responds to controller 7 or not). Recording master volume controller messages into a sequencer lets your sequencer vary the mix without you having to touch any faders.
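On the wire, a controller 7 message is just three bytes: a Control Change status byte carrying the MIDI channel, the controller number (7), and the volume value. A minimal sketch of building one (the helper name is mine, not part of any sequencer's API):

```python
def cc7_message(channel, volume):
    """Build the three raw MIDI bytes for a master volume
    (continuous controller 7) message.
    channel: MIDI channel 1-16; volume: 0-127."""
    if not 1 <= channel <= 16:
        raise ValueError("MIDI channel must be 1-16")
    if not 0 <= volume <= 127:
        raise ValueError("controller values are 7-bit (0-127)")
    status = 0xB0 | (channel - 1)   # Control Change on this channel
    return bytes([status, 7, volume])

# Control Change on channel 1, controller 7, full volume:
print(list(cc7_message(1, 127)))   # → [176, 7, 127]
```

Any device that "responds to controller 7" is simply acting on bytes like these as they arrive.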
You can program controller 7 messages in several ways. A MIDI footpedal can work, but is generally not very precise. Assigning a synth's modulation wheel to controller 7 works well; just treat the mod wheel like a fader and vary the volume as needed. You can even record the notes on one track without worrying about controller messages during your performance, then overdub these messages on a separate track using a footpedal, mod wheel, or other controlling device. Once you've edited the controller message track to perfection, merge the two tracks together.
The down side of this degree of control is that controller messages eat up memory and reduce MIDI 'bandwidth' (ie. the maximum number of messages that can be sent down the MIDI cable at a given moment). Having continuously varying controller messages on multiple MIDI channels could push your sequencer dangerously close to the 'MIDI clog' condition, where there's more data than MIDI (or your processor) can adequately handle.
Therefore, it's sometimes best to simply insert individual controller messages at strategic points in the song. (Remember, such a message indicates a change in status, so a controller message remains in effect until the next controller message is received.) For example, if a part is supposed to be really loud for a chorus, insert a controller 7 value of 127 (the maximum possible) just before the chorus. This message should occur just before a note, since changing levels in the middle of a note can produce a nasty glitch. Then, if the volume needs to pull back for the verse, insert another controller 7 message at a lower level just before the verse begins. Keep inserting controller messages until your mix is complete.
While this approach does save sequencer memory and MIDI bandwidth, sometimes a smooth fade-in or fade-out is more appropriate. In this case, avoid sudden jumps in volume. Figure 1 shows a fade-in at low resolution, which appears to be just fine. However, if you look at the same fade-in at high resolution (Figure 2), you can see that there are several sudden level jumps, and these will often produce 'zipper noise' (ie. low-level, grainy glitches) in the output. For this situation, it's worth increasing the resolution and adding new controller values in between existing controller values to smooth out the fade (Figure 3). With a graphics-oriented sequencer such as Passport's Master Tracks Pro, you can simply draw in a new curve; with a list-oriented sequencer like Dr.T's KCS or MOTU's Performer, you should insert controller messages in between existing events in the event list.
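The smoothing described above is straightforward to express in code. Here's a minimal sketch that generates intermediate controller 7 events between two existing fade breakpoints; the tick-based timing and the helper name are illustrative assumptions, not any particular sequencer's event format:

```python
def smooth_fade(start_tick, start_val, end_tick, end_val, step_ticks):
    """Generate (tick, value) controller 7 events that linearly
    interpolate between two fade breakpoints, emitting a new event
    every step_ticks sequencer ticks.  Repeated values are skipped
    so we don't waste MIDI bandwidth on redundant messages."""
    if end_tick <= start_tick:
        raise ValueError("fade must move forward in time")
    events = []
    last_val = None
    tick = start_tick
    while tick <= end_tick:
        frac = (tick - start_tick) / (end_tick - start_tick)
        val = round(start_val + frac * (end_val - start_val))
        if val != last_val:          # only send when the level changes
            events.append((tick, val))
            last_val = val
        tick += step_ticks
    return events

# A fade-in from 0 to 127 over 480 ticks, one event every 48 ticks:
smooth_fade(0, 0, 480, 127, 48)
```

Choosing a larger step_ticks is the bandwidth-saving compromise: fewer messages, but coarser steps and more risk of audible zipper noise.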
3. MIDI mixing accessories. There are a variety of MIDI-controlled, automated mixdown packages for audio signals (such as the J.L. Cooper MixMate and Magi systems, Steinberg Mimix, Iota MIDI Fader, Twister, etc). These enable you to control gain-altering devices such as VCAs (voltage-controlled amplifiers: the electronic component in some mixing desks that enables electronic control of volume) via MIDI messages or with control signals from a dedicated sequencer, thus automating the mixdown process for acoustic tracks.
However, it's necessary to dispel a myth about automated mixdown: that you can come back to a mix a couple of weeks or months later, pop in a disk, and end up with a replica of your mix. Unfortunately, you rarely can - there are usually too many non-programmable variables (mixer EQ settings, instrument level controls, outboard signal processors, and so on). There are a couple of ways to move closer to the ideal, though. One is to take copious notes on the non-programmable settings in a system; another is to save synth and signal processor settings for a particular project as System Exclusive data. For example, I usually leave all my synth level controls up full to eliminate that particular variable, and save a SysEx 'snapshot' of all the MIDI gear in my studio. That way, when I load in an automated mix, I also load in all the SysEx data, including synth patches optimised specifically for the mix. However, it is still really difficult to eliminate all of the variables in a setup and obtain exact repeatability.
4. Multitimbral instruments and stereo. Multitimbral synthesizers can play several different sounds (or the same sound) over different MIDI channels and offer exceptional stereo and panning possibilities. Consider a synthesizer with multitimbral capabilities and two discrete audio outputs (ie. left and right). Since each MIDI channel will usually allow for independent controller 7 (volume) messages, if you drive the same sound over two different MIDI channels and assign one sound to the left audio output and one to the right, you can create panning by sending complementary controller 7 messages over the two different channels. As the left gets louder, the right gets softer, and vice versa. This is a simple, effective way to automate panning.
Devices that can restrict each 'instrument' in a multitimbral setup to a particular range of notes provide even more stereo options. (This is a common feature in Yamaha synthesizers, such as the TX802 and TX81Z.) To split a keyboard patch in stereo, assign one sound as two instruments, each responding to the same channel, then restrict the note ranges so that, for example, one instrument covers the range of C-2 through C3 and the other C#3 through G8. Pan the two instruments in stereo, and you'll hear notes up to and including C3 over one audio channel, and notes from C#3 upward over the other.
The Yamaha TX802, with eight individual audio outputs, lets you take the above trick even further. Assign the same sound to eight individual 'instruments', all on the same MIDI channel, and patch each individual output into a mixer. Pan these outputs across the stereo field (Figure 4). Now let's restrict the note ranges. Suppose a piano part covers a four-octave range, from C1 to C5. You might want to assign each instrument as follows: instrument 1, C1 to F#1; instrument 2, G1 to B1; instrument 3, C2 to F#2; instrument 4, G2 to B2; instrument 5, C3 to F#3; instrument 6, G3 to B3; instrument 7, C4 to F#4; instrument 8, G4 to C5. As the piano part plays, the lower notes will appear toward the left and the higher notes more toward the right. This method creates a pleasing stereo spread and requires neither delay lines nor elaborate miking techniques.
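The half-octave zoning above amounts to a simple mapping from note number to instrument number. A sketch, assuming the common convention that C1 is MIDI note 36 (Yamaha manuals sometimes number octaves differently, hence the parameter):

```python
def instrument_for_note(note, c1=36):
    """Map a MIDI note number to one of the eight TX802-style
    'instruments' in the half-octave split described above:
    each octave divides into a C-to-F# zone and a G-to-B zone,
    and the top C5 belongs to instrument 8."""
    semis = note - c1
    if not 0 <= semis <= 48:
        raise ValueError("note outside the C1-C5 piano range")
    if semis == 48:                  # the top C5 caps instrument 8's zone
        return 8
    octave, pos = divmod(semis, 12)  # pos 0-6 = C..F#, pos 7-11 = G..B
    return octave * 2 + (2 if pos >= 7 else 1)
```

With each instrument's output panned progressively further right, low notes land on the left of the stereo field and high notes on the right, exactly as the piano example describes.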
Many synthesizers now feature multitimbrality and multiple audio outputs. Take advantage of the creative options offered by these new features to create some really novel stereo effects.
5. Automated signal processing. Although you can always reach over and turn a knob or punch up a new preset when you want to alter a sound, it's difficult to do this exactly the same way over and over for every pass of the mix, and your hands will probably be occupied with other chores as well. MIDI-controlled signal processing comes in two main flavours: units that react solely to program change commands, changing from one memorised preset effect to another; and units that not only respond to program change commands, but allow some or all parameters to respond to MIDI continuous controllers, aftertouch, velocity, notes, and other MIDI data. Both approaches have some limitations. Changing programs while notes are playing often produces serious glitching, and sometimes changing patches even when no audio is going through the system causes strange noises. Furthermore, some MIDI devices will mute their outputs for a few seconds during a program change. As a result, it's best to insert program change commands during parts in the music where no notes are playing.
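Finding a safe spot for a program change can itself be automated: scan backwards from where you want the new patch until the sequence goes quiet. This brute-force sketch assumes a simple list of (start_tick, duration) note events rather than any real sequencer's file format:

```python
def safe_program_change_tick(notes, target_tick, margin=10):
    """Find the latest tick at or before target_tick where no note is
    sounding, so a program change can be inserted without glitching.
    notes: list of (start_tick, duration) pairs.  margin is how many
    ticks of silence to require before the change.  Returns None if
    the music never goes quiet early enough."""
    def sounding_at(t):
        return any(start <= t < start + duration
                   for start, duration in notes)

    for tick in range(target_tick, -1, -1):
        if not sounding_at(tick):
            # also require the preceding `margin` ticks to be silent
            if all(not sounding_at(t)
                   for t in range(max(0, tick - margin), tick)):
                return tick
    return None
```

A linear scan like this is slow for long sequences but makes the idea plain: the program change goes into an actual gap in the music, never under a sustaining note.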
When altering effects parameters over MIDI in real time, you'll generally find that some parameters (eg. delay feedback or resonance) change smoothly under MIDI control, while others (eg. delay time) may glitch or change in easily discernible, discrete steps. Experience is the best way to learn which parameters work effectively and which don't, and whether you can get away with changing a particular parameter while audio is passing through the unit.
As with life itself, MIDI mixing involves trade-offs. Programming changes in a mix is as time-consuming as programming a synthesizer, but it saves you from having to be an octopus, trying to manipulate and remember a zillion different mixing moves. Another aspect is more subtle. When mixing, generally you're so involved with trying to change the right controls at the right time that you don't really listen to the overall mix - you listen only to the cues that remind you to flick this switch or move that fader. Usually this means that you don't listen to your mix until it's actually finished! But if you use MIDI to automate your moves, you can listen more than tweak as the mix goes by, which can save you some time overall.
Possibly the worst implication of MIDI mixing is the dreaded 'infinite tweaking' syndrome. Current technology offers so many options that it is a real temptation to explore every single one, but at some point a cost/benefit analysis is necessary. If you want to spend six months creating a mix that hits 100 on a scale of 0 to 100, that's fine. If you can hit 95 by doing a two-day mix, though, it may not be feasible to try for 100.
A record producer once told me about the time he got tied up in the studio producing a single and ended up generating literally dozens of takes of the same song, each with minute differences that seemed important at the time. When he listened to the mixes a couple of weeks later, he really couldn't tell any difference between most of the versions. I often think of this story as I head for the four billionth pass on a mix, trying to get everything 'just right'. It's not worth spending studio time on making changes that nobody - not even you - will ever notice.
Another anecdote comes from Quincy Jones. While I was doing an interview with him a while ago, we were talking about recording synthesizers and sequencing, and he said that type of work was like "painting a 747 with a Q-Tip". Well, I must say that's a pretty fair description of what happens when you decide to tweak everything. Sometimes you have to know when to stop, not just for your own sanity but for the sake of the mix. A mix is just another kind of performance, and like most performances, if you work it over too much you can beat all the spontaneity out of it. A mix with a few rough edges, but done in the heat of passion, will often provide a much more exciting and pleasurable listening experience than a mix that is so perfect it's sterile. It's inexpensive insurance not to record over old mixes every time; if you think you've done a good mix, keep it, and record the next mix on a blank part of the tape. When you listen back the next day to all your collected mixes, you may find that one of the earlier mixes is the best.
I must confess that I seldom know when to stop. I'll sit there and tweak and tweak and tweak until everything sounds just right to my ears. But I'm learning to find that place where spontaneity and perfection intersect, and maybe someday I'll be able to hit that intersection every time. Meanwhile, I'm keeping my Q-Tips handy. Excuse me, but there's a 747 sitting in my studio that needs painting.
Craig Anderton is a musician, producer and Editor-in-Chief of Electronic Musician magazine. His latest album, 'Forward Motion', was released last year on the Sona Gaia label, distributed by MCA.
© 1989 Electronic Musician, (Contact Details). Reprinted with the kind permission of the publishers.