Vic Lennard considers the trials and triumphs of storing your synth patches as SysEx files within the sequences they're intended for, and concludes that MIDI really is your friend after all.
MIDI has already been put to a wide variety of uses for which it was never intended, but its usefulness is not exhausted yet. Let's look at how MIDI can be used to streamline your music.
THERE'S BEEN MUCH talk recently about using sequencer software and a computer to effectively "automate" a recording studio. Why make a multitrack tape recorder the most expensive purchase in a studio when all the tracks coming from MIDI units can be run in real time during mixdown, from a MIDI sequencer? It's certainly conceivable to produce practically master-quality recordings using just MIDI and a four-track recorder, as long as there are enough inputs available on the mixing desk for all MIDI devices running simultaneously and any additional signals - vocals, guitars, and any acoustic instruments. The only recording necessary is a timecode of some sort plus, of course, those non-MIDI sources.
Take this situation one step further. Patch changes can be stored on the sequencer and the correct sounds selected from each synth - as long as those sounds are stored in the same places the next time that song is loaded. This last condition is the crucial one.
Consider a studio with four or five songs on the go at the same time, perhaps for different clients, and with more than one bank of voices for each synth. Having put the required patch changes into a song, there is every chance that the banks will have been changed by the next time the song is loaded up. OK, the MIDI equivalent of a track sheet could be kept for each song, but even then the time wasted in loading up each synth with the necessary banks increases with the number of synths resident in the studio. In a MIDI-orientated studio, that could be substantial. Is there an alternative?
SYSTEM EXCLUSIVE COULD provide us with an answer. SysEx is the method used for sending parameter information down MIDI cables - information which, in this case, would be the parameters governing the voices onboard the MIDI devices. If the sequencer could be made to send out voice data to each MIDI device before the song begins, then there would be no need to load fresh patches into those units for each song.
Sounds good, but can it be done? Well, if a sequencer is recording and you press the patch change buttons on the front of a synth, there is every chance that these parameters will be sent to the computer, and can then be viewed as data in an event listing. Most synths manufactured over the last five years or so have this facility. The listing will look something like this: F0, ID, ..., F7, where ID is the identification code of the manufacturer, given in hexadecimal (base 16). For example, 41H is Roland's ID, 43H is Yamaha's and 2CH belongs to Audio Vertriebel-Peter Struven Gmbh. So now you know. Everything else you see is categorised as data, even though some bytes may relate to the model or device number.
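To make the structure of such a listing concrete, here is a minimal sketch in Python - a modern illustration rather than anything you'd have run on a computer of the day, and the function name and dictionary are my own:

```python
# Manufacturer IDs mentioned in the text (hexadecimal values).
MANUFACTURERS = {
    0x41: "Roland",
    0x43: "Yamaha",
    0x2C: "Audio Vertriebel-Peter Struven Gmbh",
}

def identify_sysex(message):
    """Return the manufacturer of a SysEx message, or None if it
    isn't a well-formed F0 ... F7 message."""
    if len(message) < 3 or message[0] != 0xF0 or message[-1] != 0xF7:
        return None
    return MANUFACTURERS.get(message[1], "unknown manufacturer")

# A made-up Roland-style message: F0 41 <data> F7.
print(identify_sysex([0xF0, 0x41, 0x10, 0x16, 0x12, 0xF7]))  # Roland
```

Everything between the ID and the closing F7 is simply "data" as far as the MIDI specification is concerned - it is up to the manufacturer what it means.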
So how about this for an idea? Leave a two- or four-bar count-in at the start of a song and connect the MIDI Out of the first synth to the MIDI In of the computer. Set the synth to a patch other than the required one, start recording on the sequencer and select the patch that you want. If nothing happens, check the input filter page. You will probably find that SysEx information is being filtered out - a default procedure for most sequencers. Stop recording and repeat the above steps for each synth, continuing to record from where you previously stopped. If only a two-byte message (looking something like "Cn PP") appears, this will be a standard patch change message, with "n" being the MIDI channel and "PP" being the patch number. Casio's CZ101 and Roland's MT32 are examples of synths which exhibit this kind of behaviour, and if you turn system exclusive "off" on Korg's M1 you will get a similar result.
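The distinction between a plain patch change and a SysEx dump in a recorded event listing can be sketched like this - Python used purely for illustration, and the function name is my own:

```python
def classify_event(event):
    """Classify a recorded MIDI event as described in the text:
    a two-byte Cn PP message is a plain patch change, while
    F0 ... F7 is System Exclusive voice data."""
    status = event[0]
    if 0xC0 <= status <= 0xCF and len(event) == 2:
        channel = (status & 0x0F) + 1   # MIDI channels are numbered 1-16
        return f"patch change: channel {channel}, patch {event[1]}"
    if status == 0xF0 and event[-1] == 0xF7:
        return f"SysEx: {len(event)} bytes"
    return "other"

print(classify_event([0xC0, 0x05]))        # patch change: channel 1, patch 5
print(classify_event([0xF0, 0x41, 0xF7]))  # SysEx: 3 bytes
```

If your synth only ever produces the first kind of message, it isn't sending its voice parameters, and the edit buffer trick described below won't work with it.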
Now reconnect your synths (using a MIDI thru box if more than two or three devices are being connected) and start the sequencer. Each module should receive the selected patch whether or not the memory protection is on, as the parameters are stored in what is called the edit buffer. This is the place in a synth's memory where a voice to be edited is copied to - and it can be used as an extra memory location. The benefit of this is that the patches sent by the sequencer will not affect those held in a synth's memory. This becomes obvious as soon as you select another patch - the one that was sent across from the sequencer disappears for good. Or at least until you run that particular sequence again.
"It's conceivable to produce practically master-quality recordings using just MIDI and a four-track recorder."
Problem solved? To a degree - but what happens if you want to use two different sounds on, say, a Roland D50, with one for the verses of a song and the other for the choruses? How about sending both sounds across to the sequencer using the method above, time transposing them to the correct position and then transmitting them back to the D50 where necessary? Unfortunately, it's not quite that easy. System exclusive data cannot be mixed in with other MIDI data, which means that a convenient gap has to be found in the music, and this can be difficult. Most sequencers are non-time sequential - that is to say, they work within patterns in drum-machine style or in parallel linear tracks - which makes it awkward to see precisely what is playing where. There are sequencers which use the equivalent of a single track, like Performer for the Apple Macintosh, and with this type of sequencer, gaps can be selected for inserting SysEx data. But for most of us mere mortals there is no such option - not on the Atari ST, anyway.
TO UNDERSTAND THIS situation, it is necessary to know a little more about MIDI. MIDI is a serial protocol, which means that only one byte can be sent at a time. Strictly speaking, this makes transmitting a chord impossible - what you are really hearing when you think you hear a chord over MIDI is a fast arpeggiation of that chord. Information is transmitted in "events", which can be made up of one, two or three bytes. A "note on" event is transmitted each time you press a key and uses three bytes, while a patch change event uses two bytes. One-byte events include MIDI timing clock and active sensing, which you'll be aware of if you read the article on implementation charts last month. The speed of MIDI is such that each byte takes 0.32 milliseconds to transmit, so a single note takes just under one millisecond. If at a particular point in a song there is a bass drum, snare drum, closed hi-hat, bass synth note and two other synths playing chords - perhaps 12 notes in all - this will take about 12 milliseconds to transmit through MIDI, and will encounter a further delay at the receiving sound module, which has to physically react and play the notes. If you heard the first and last notes separately, you would probably be aware of the delay, but since the arpeggiation is very fast, this degree of looseness is something you soon get used to when working with MIDI.
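The arithmetic behind that 12-note example can be spelled out in a few lines - Python used purely for illustration, and note that running status, which can shorten repeated note messages, is ignored here:

```python
MS_PER_BYTE = 0.32  # each MIDI byte takes 0.32 ms to transmit

def transmission_time(note_count):
    """Time (in ms) to send a burst of note-on events over MIDI,
    assuming a full three bytes per note."""
    return note_count * 3 * MS_PER_BYTE

# The 12-note example from the text:
print(round(transmission_time(12), 2))  # 11.52 - about 12 ms, as stated
```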
System exclusive information behaves completely differently. As long as it starts and finishes with the correct bytes (F0 ... F7) and carries the manufacturer's ID, it can contain any number of bytes - a bulk dump from an M1 runs to more than 64,000 bytes, which take about 18 seconds to send. Imagine trying to find a gap for that in a song. Some manufacturers split these dumps into smaller entities - Roland, for example, uses 256-byte blocks, allowing MIDI clocks or active sensing bytes to be inserted between blocks - while others (Yamaha for one) simply use a continuous data stream. Edit buffers are far too small to accommodate this amount of data.
Let's look at a specific example. The Roland D50 takes 468 bytes to transfer the parameters for a single sound. It is possible to calculate how many clocks, or ticks, this will take on any particular sequencer by using the following formula:
16 x T x R x B / 3000000
where T is the tempo in bpm, R is the resolution of the sequencer in clocks per quarter note (ppqn) and B is the number of bytes. For example, using Steinberg's Pro 24 where R=96, and a song with a tempo of 128 bpm, the D50 requires:
"The edit buffer is the place in a synth memory where a voice to be edited is copied to - and it can be used as an extra memory location."
16 x 128 x 96 x 468 / 3000000 = 31 clocks (approx)
On top of this, the reaction time adds about 20%, resulting in a final figure of 37 clocks. As a 16th note is 24 clocks, this should give you an idea of the problem - if the piece of music has 16th hi-hats throughout (obviously a SAW job) we have what is commonly called a no-go situation. The moral of this story is: try to transfer SysEx data during a song and you'll end up with an audible timing glitch.
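The formula and the 20% reaction-time allowance can be wrapped up as a small function - Python used purely for illustration, and the function name is my own:

```python
def sysex_clocks(tempo_bpm, ppqn, num_bytes, reaction_overhead=0.20):
    """Number of sequencer clocks a SysEx transfer occupies, using the
    formula from the text (16 x T x R x B / 3,000,000) plus roughly
    20% for the receiving module's reaction time."""
    clocks = 16 * tempo_bpm * ppqn * num_bytes / 3_000_000
    return round(clocks * (1 + reaction_overhead))

# The D50 example: 468 bytes at 128 bpm on a 96 ppqn sequencer (Pro 24).
print(sysex_clocks(128, 96, 468))  # 37 - longer than a 24-clock 16th note
```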
MANY OF THE sequencer packages for the Atari ST have extra hardware available to provide more than the standard number of MIDI Ins and Outs. C-Lab have Xport and Unitor, Hybrid Arts have MIDIplexer and Steinberg have SMP24, which is principally a SMPTE generator but also has discrete MIDI ports. In one possible application, this allows all SysEx to be sent via one of these extra ports, creating not one but two serial data flows. There are a variety of options at the receiving end. A MIDI merge box could be used, which would buffer the SysEx and/or the other MIDI data and ensure that performance data did not interrupt the SysEx flow. One such merger is the Philip Rees 2M. Unfortunately, one merger would be required for each MIDI module, which would be both impractical and expensive.
Another possibility is for manufacturers to add an extra MIDI In port specifically for handling SysEx data onto the existing In/Out/Thru configuration, and either merge the two internally, or use a logic switch which would allow only one of the Ins to be used at any one time. This is highly unlikely to happen.
Both of these methods share an inherent problem. The SysEx sent from the sequencer will appear at the MIDI In port of every synth and will fill up their input buffers, causing timing errors - unless each synth's operating system can recognise and ignore SysEx not meant for it as soon as the manufacturer's ID or model code has been read. That would mean examining a maximum of four bytes in the case of Roland, and three for practically everyone else - a delay of around one millisecond.
If this situation had been envisaged when MIDI first appeared, it's possible that steps could have been taken to accommodate it. Data blocks could have been limited to 32, 64 or 128 bytes, which could more easily be fitted in around performance data and would cause smaller delays. Perhaps an enterprising software writer would like to chance his arm at writing a program to sub-divide SysEx in this manner. In the meantime, try the edit buffer idea for running in real time. Most sequencers allow you to mix down an entire song onto one track, so you will be able to see where a convenient gap lies for the number of bytes you need to send. This figure can be obtained by sending a patch to the sequencer and counting the bytes in the event listing. That, at least, is simple. For the rest of the operation you'll need more patience and a little luck. While it's not perfect, MIDI can be used for many more purposes than those for which it was originally conceived. Remember, MIDI is your friend.
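As a sketch of what such a sub-division program might do, the following splits the data portion of a SysEx message into smaller blocks - Python, purely illustrative, and note that a real scheme would also need block numbering and checksums, which vary from manufacturer to manufacturer:

```python
def split_sysex(message, block_size=128):
    """Cut the data portion of an F0 ... F7 message into smaller
    blocks, each wrapped as its own SysEx message with the
    manufacturer's ID repeated. Addressing and checksums, which
    a real scheme would need, are omitted."""
    assert message[0] == 0xF0 and message[-1] == 0xF7
    manufacturer_id = message[1]
    data = message[2:-1]
    blocks = []
    for i in range(0, len(data), block_size):
        blocks.append([0xF0, manufacturer_id] + data[i:i + block_size] + [0xF7])
    return blocks

# A D50-sized dump (468 data bytes, dummy contents) under a Roland ID:
dump = [0xF0, 0x41] + [0] * 468 + [0xF7]
print(len(split_sysex(dump, 128)))  # 4 blocks of up to 128 data bytes
```

MIDI clocks or other performance data could then be slipped in between the smaller blocks, just as Roland's 256-byte blocking already allows.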
Feature by Vic Lennard