
The Transcontinental Midi Songwriting Shuffle

MIDI Composition via Computer

The tale of two songwriters on opposite coasts of America, using different computers and musical instruments, writing and producing a single without face-to-face collaboration until the final mixdown. Al Hospers reveals what happened.


Projects have a way of getting out of control, and Al Hospers and Craig Anderton ended up going a lot farther than they'd expected. Talking about MIDI one day, they realised that they had never read about anyone doing a long-distance, computerised, MIDI songwriting collaboration, so they decided to try the concept out and see what worked and what didn't.

Engineer Matt Hathaway (left) and the author, Al Hospers, listen to the 'Gibberfunk' tracks at MIDI City studios, New York.


I looked across the crowded control room and smiled. After all the hard work and sleepless nights, we were nearing our goal. High above the sleeping city we sped along at 30 ips behind a complex digital console. There were no windows in the control room, so my mind began to wander as the hour grew later and later. I drifted back to that fateful day when I drove up Greenbush Mountain to visit the Captain (Craig Anderton) at his retreat...

It all started when Craig played me his Phoneme sound disk for the Emulator II. It had an absolutely great demo sequence on it called 'Gibberfunk' that used nonsense syllables to produce an amusing, cartoony gibberish. It was looped, and the more I listened to it, the more I heard bass, drums, and keys along with it - not surprising, since I'm the kind of bass player who makes up parts to the rhythm of windshield wipers.

Craig had an Atari 520ST computer in his studio, and I had with me a copy of Dr. T's Keyboard Controlled Sequencer (KCS; yes, I do use what I create), so we booted it up, and I added simple bass and drum parts to 'Gibberfunk'. I took copies of the sequence and Phoneme disks home with me, and the project was underway.

At that point, it seemed like the collaboration would certainly present enough challenges: Craig mostly uses Passport's Master Tracks Pro for the Macintosh, and I use Dr. T's KCS for the Atari. We also had hardly any instruments in common: he has several high-end vintage instruments, and I have a collection of new, cheaper, trendy instruments. Our work was cut out for us.

Over dinner, we hatched the concept of adding sampled voices to the tune, and in keeping with its gibberish nature, agreed that samples of presidential candidates would be thematically consistent. The ground rules were set: the samples would constitute the words and melody of the song, and we would write the tune using two different sequencer programs running on two different computers and do all our collaboration via computer, modem, and fax; we would not physically work together on the tune (now called 'Election Gibberfunk') until mixdown time. We also decided that trying to sell the tune to a major record label would add an interesting element to the story.

LESSON 1: Have a definite plan mapped out for the collaboration; establish as many ground rules as possible. When you're separated physically, a detailed, shared vision of the end result is important.

On arriving home, I immediately set to work recording the rhythm tracks: two eight-bar loops with drums, bass, and clavinet that fit well with the 'gibbers' (synthesized speech). I sent the results to Craig.

He made a frustrating attempt to learn KCS, without a proper manual, in a couple of hours, then decided to go with his familiar environment: Mac and Master Tracks Pro. He copied my tracks from an Atari to the Mac in real time (Atari MIDI Out to Mac MIDI In) and cut, pasted, and transposed until the two eight-bar segments became a complete arrangement.

LESSON 2: You don't have to present a complete idea to your collaborator; sometimes a short, melodic fragment is enough to inspire the other person. Hand over your work before you get tired of it; it's important to keep the song moving back and forth.

A problem cropped up because all the Emulator II gibber sounds, originally intended for speech synthesis, were recorded at the same pitch. The solution was to pitch bend the entire keyboard by a fixed amount when the chords changed. Thanks to its graphic pitch bend editor, Master Tracks Pro really shines at this kind of task (this is one competitor who appreciates a good tool when he sees one).
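Master Tracks Pro's graphic editor did the actual work here, but the arithmetic behind a fixed whole-keyboard bend is simple: MIDI pitch bend is a 14-bit value (0-16383) centred on 8192. The Python sketch below is purely illustrative; the function names and the two-semitone default bend range are assumptions, not details from the sessions.

```python
def bend_value(semitones, bend_range=2):
    """14-bit MIDI pitch-bend value for a fixed semitone offset.

    bend_range is the receiving synth's full-scale bend setting in
    semitones; 8192 is the no-bend centre point.
    """
    value = 8192 + round(semitones / bend_range * 8192)
    return max(0, min(16383, value))  # clamp to the legal 14-bit range

def bend_bytes(semitones, bend_range=2):
    """Split the value into the LSB/MSB pair carried by a Pitch Bend message."""
    v = bend_value(semitones, bend_range)
    return v & 0x7F, (v >> 7) & 0x7F
```

Sending one such message per chord change retunes every gibber sample at once, which is exactly the effect described above.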

LESSON 3: With MIDI, there always seems to be a way around problems if you try hard enough to find a solution.

Whilst trying to flesh out the drum part, Craig ran into timing problems: the notes were off by varying degrees. This was strange, as everything I sent him was totally quantised. What neither of us realised was that the KCS tracks were at one clock resolution and the Master Tracks Pro tracks were at another. That mismatch caused problems that got worse before they got better, but those problems were still to come.

Burned out on cut/paste operations, and remembering Lesson 2, Craig sent back the tune as a Master Tracks Pro file and as a MIDI 'almost-standard' data file (the MIDI File standard hadn't been officially adopted yet). I was totally blown away with what I heard; my two eight-bar segments had turned into a really hot dance tune. I had hoped to transfer the file to the Mac version of KCS, but the program wasn't finished, so I followed Craig's example and played the file, in real time, Mac MIDI Out to ST MIDI In. It went uneventfully, but I forgot that the drums and some other parts were all on the same MIDI channel, and they got all mushed together. So, it was back to Master Tracks Pro to re-channelise as many parts as I could.

I then copied the merged sequence into KCS track mode, which placed all the MIDI channels on separate tracks. Craig had named his instruments in Master Tracks Pro, so I assigned my instruments to approximately corresponding sounds. With high expectations and properly assigned channels, I hit the play button. What a mess! The drums were all assigned to incorrect pitches on my machine, and there seemed to be some horrible timing problems. This didn't even sound like the stuff I sent to him, so I checked out the reference cassette Craig had sent along with the disk.

LESSON 4: Always send a reference audio cassette along with any data disks; it's the only reliable way of knowing what sound qualities are supposed to be assigned to different instruments.

It was obvious that the differences were induced by the sequence transfer, so I read through the Master Tracks Pro manual. MTP uses a resolution of 240 pulses per quarter note, but I had KCS set up at 96 ppqn. Thus, KCS was quantising everything played into it from Master Tracks Pro at a factor of 2.5 (no wonder the timing wasn't right). So I reset KCS to 240 ppqn, played the data in again, and the timing was fine, but the drum pitches were still a mess.
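The 96-versus-240 ppqn mismatch boils down to a single rescaling step, and it also shows why recording at the lower resolution quantises the finer timing away. A minimal Python sketch (the function name is an assumption for illustration):

```python
def convert_ticks(ticks, src_ppqn, dst_ppqn):
    """Rescale an event's start time from one clock resolution to another.

    When dst_ppqn is lower than src_ppqn, the rounding quantises any
    timing detail finer than the destination grid.
    """
    return round(ticks * dst_ppqn / src_ppqn)
```

For example, 240 ticks at 240 ppqn maps cleanly to 96 ticks at 96 ppqn, but an event at tick 241 lands on the same 96-ppqn grid point, losing the offset.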

LESSON 5: Use sequencers with the same timing resolution.

Fortunately, Craig sent the note assignments for his Emu SP1200 drum sounds, so I spent ten minutes setting up a pitch-map macro in Level II KCS that converted his drum mapping to that of my Roland D110 and back again. This made it a one-click operation to map the drum keys from one instrument to another, which turned out to be a real time-saver.
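A pitch-map macro of this kind is essentially a lookup table applied to every drum-track note. The Python sketch below uses hypothetical note numbers, since the article doesn't list the actual SP1200 or D110 maps:

```python
# Hypothetical note numbers for illustration only: kick, snare, hat, crash.
SP1200_TO_D110 = {35: 36, 38: 38, 42: 42, 49: 57}

# The return trip is just the inverted table (the mapping must be one-to-one).
D110_TO_SP1200 = {v: k for k, v in SP1200_TO_D110.items()}

def remap(notes, table):
    """Translate drum-key note numbers from one machine's map to another's."""
    return [table.get(n, n) for n in notes]  # unmapped notes pass through
```

Building the inverse table up front is what makes the "and back again" direction a one-click operation too.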



"'Compatible' instruments are not always so, due to different software revisions."


LESSON 6: Decide early on about drum assignments. It's a good idea to remap drum sounds so that equivalent drum sounds on different drum machines are mapped to equivalent MIDI note numbers.

I had added some extra melodic lines and drum fills, and although we had hoped to be transferring via MIDI Files, the Mac version of KCS didn't support them yet, and we also didn't have a way to get them from the Mac to the ST and back. Unfortunately, Craig wasn't set up for telecommunications with his Mac yet; that would have simplified matters.

LESSON 7: Modems are the great equaliser. No matter what kind of computers you're running, squeeze the data through a modem and it all looks the same. Anyone seriously considering songwriting collaboration via MIDI should invest in a modem.

At this point we thought the tune was pretty hot, and we felt compelled to follow it through to the end - actually put it on tape and send out demo tapes and promo copies. This meant taking a tune produced at home into a 'real' studio for mixdown and finding out what surprises would greet us there. But which studio?

Not too long after the MIDI spec was adopted, Bobby Nathan, co-owner of Unique Recording in New York, got the idea that you could create a tune at home in a computer, come into the studio, and play it back through the studio's high-end instruments, directly onto tape. This has become almost standard procedure for pop albums, but back then it was a radical and new idea. Bobby's MIDI studio, MIDI City (no relation to the music stores of the same name in Southern California), has since undergone numerous expansions and has been used by an endless stream of top artists. We figured if any studio was set up to handle our type of project, this would be the one. It was. What's more, given the studio's history, it seemed only fitting that we should cut our project where the whole concept of MIDI production started in the first place.

Bobby and I had been casual acquaintances through PAN (the Performing Artists telecommunications Network), and when I described what Craig and I had been planning, he was intrigued. We booked two evenings - one for tracking, one for mixing - starting at 8 pm. Most studios are on 24-hour schedules, so you can start almost any time of the day or night; often the rates at night, and for MIDI rooms, are reduced.

LESSON 8: All studio time is not created equal. Booking time during off-peak hours and using a room no larger or more capable than necessary will save you money.

Now it was time to gather the samples. There are a bunch of good samplers on the market, but from what Craig says, and from my limited experience, the combination of the Ensoniq EPS sampler, the Mac, and Blank Software's Alchemy editor makes for an absolutely awesome system - sort of a 'baby Emulator III.' When I received the first set of Craig's disks of edited politicians' voices, I was really excited by what he had done.

At this point, I had switched to the Macintosh version of Dr. T's KCS and pressured the programmer working on the project, Cobey Gatos, to add MIDI File conversion to the program in a hurry. But since I didn't have a SMPTE synchronisation box available for the Mac, I recorded the sequencer tracks on my 4-track cassette and played the samples onto tape by hand, sans sequencer! Actually, this came in handy, because when we went into the studio, we ended up changing some parts at the last minute, and I had to play some directly to tape.

LESSON 9: You should be able to play all the parts you sequenced, just in case. Sometimes, when played in context, a sequenced part won't have the right 'feel'. Playing the part by hand compensates for that.

Another difficulty was that the Mac KCS was not yet finished. I was using a Beta test copy to do all of the work on the tune, which turned many working sessions into debugging sessions. I can really appreciate the user who buys a piece of software and finds it to be full of problems. Still, every week the tune and the software got better.

Another problem occurred when using my EPS sampler to control the Roland D110: it would go completely nuts! A frantic call to Paul Young at Roland USA's service department solved the mystery: the D110 cannot respond to polyphonic aftertouch (which the Ensoniq EPS can generate), so we had to turn that option off at the EPS. (Paul also told me that the way to reset all of the D110's parameters is to turn it on while pressing 'Write' and 'Enter' at the same time.)

Some of the samples were words, and some were phrases. I started stringing them together into new phrases to make the speakers say things they hadn't said. For instance, we had three different speakers saying "I've been in," "I want," and "I am President," and another one saying "four more years." So I played the samples in a way that made it sound as if the speakers were saying: "I've been in four more years," "I want four more years," and "I am President four more years."

LESSON 10: When playing vocal samples, break them down into individual words. Entire sentences are difficult to phrase rhythmically, but you can trigger individual words right on the beat. The same principle applies to playing samples of background vocals.
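Triggering individual words "right on the beat" amounts to snapping each trigger time to a beat grid. As an illustrative Python sketch (the tempo, subdivision, and function name are assumptions, not anything from the sessions):

```python
def snap_to_grid(time_s, bpm=120, division=1):
    """Snap a trigger time (seconds) to the nearest beat subdivision.

    division=1 snaps to quarter notes, division=2 to eighths, and so on.
    """
    step = 60.0 / bpm / division
    return round(time_s / step) * step
```

At 120 bpm a word triggered at 0.49 s snaps to the half-second beat, which is why single words are so much easier to place than whole sentences.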

The first track (with the samples) I sent to Craig was pretty disjointed. He suggested thinking of the samples as if they were a melody line and using them in a verse/chorus structure. Once I started doing that, I made a lot more progress.

Around this time, a major glitch struck. We needed more samples, so I set up my video machine to record the Democratic Convention, but the VCR was broken, and we lost our best sampling opportunity. Craig is the kind of guy who figures everything happens for a reason, if one can just figure out what the reason is, and he decided this meant that we were to add a rap instead of just using sampled sounds. Craig enlisted Vanessa Else's help, and in one evening they sat down and wrote two verses of lyrics that really tied the whole thing together. To make up for the lost samples, Craig used his vocoder and speech synthesizer to create some amazing processed voices. In an all-night editing session that included several phone calls coast-to-coast, we completed the tune, about three days before the session at MIDI City.

LESSON 11: Sometimes glitches are opportunities. Don't panic when you run into a brick wall; either go around the wall or tunnel under it.

About this time, I realised that KCS for the Mac had not been tested fully enough with SMPTE, or even MIDI Song Pointers, to chance using it in the studio, so I transferred the track files over to the Atari, which I knew worked with SMPTE. Unfortunately, the file transfer between the Mac and the ST did not go smoothly.



"With high expectations and properly assigned channels, I hit the Play button... what a mess!"


LESSON 12: Never transfer files using MacBinary. ST-Talk (ST) and Red Ryder (Mac) communications programs worked fine together, and you can put both KCS and MIDI Files up on a service like PAN [or The Music Network - Ed.] from one machine, then download them on another (including the Amiga) with no problem. We also found a great data file conversion program from Amiga to ST or IBM called DOS-to-DOS, and an expansion board for your PC (by Central Point) that has a 3.5" disk drive and will enable the PC to read and write Macintosh disks.

If you're a computer software programmer, developer, or publisher, don't rely solely on beta-testers; use your products yourself, in real world situations. You'll never see them the same way again!

The last bit of pre-production happened on my way to New York. Listening to the tape, I realised it had no hook! It wasn't until the plane was landing that I came up with what we needed, and figured out a spoken introduction to tune the listener into the concept.

It was time for the session. Always get there early; I have never been to a recording session that started on time, but the first time you are not there, it will. Being early also sets the tone of the session: you let the studio personnel know you're taking things seriously and want them to run efficiently.

The session got pushed back to 10 pm, but this gave me, Craig, Vanessa, and David Karr (a consultant on the project) a chance to meet for dinner and discuss tracking strategies and scheduling. (I must say it was strange to meet face-to-face after all those months of disk swaps and phone calls.)

LESSON 13: Remember, you are paying for the session time, so use every minute carefully. Plan out the exact order in which you want to lay down the tracks. Try to schedule your session time realistically so that you're not in a constant panic about what's going to happen next. The more practice and planning you do prior to the session, the more self-assured and relaxed you will be at the real thing.

Next, the engineer got sick, pushing the session back to midnight. Craig is a night person, but I'm pretty much 9-to-5, and the waiting was getting to be nerve-wracking.

LESSON 14: Don't be upset if a session runs over into your time. Not only is it bad form, but you may run over into someone else's session sometime.

Prior to the session, we explained the idea behind the project to our engineer, Matt Hathaway, and second engineer Ken Quarterone. They reacted favourably to the concept, and we felt we were now all tuned to the same conceptual wavelength.

LESSON 15: Try to get together with the engineer a day or two before the session to give him/her an idea of what type of session to expect. It makes life easier for the engineer and puts everyone in the proper frame of mind.

I had previously asked for a copy of MIDI City's studio instrument list to determine which instruments we would have to bring or hire. It turned out that the only two common instruments were the Emulator II and the Oberheim DPX1 sample player, so I brought my Ensoniq EPS into the studio. Craig is a big fan of the EPS, and he sampled the sounds from his other synths into it, so he only had to bring a bunch of disks instead of a slew of instruments. We also MIDI'd the studio's D50 and DX7II to our synth sounds to fill them out a bit, even though we hadn't planned on using them.

LESSON 16: Use samplers to store synth sounds rather than carting a bunch of synthesizers around. Regarding synth patches, if the studio has some of the same instruments you normally use, bring your patches with you, either stored in a System Exclusive dump in your sequencer, or in an appropriate editor or librarian. Very often the studio instrument will not have the factory preset sounds you are counting on still being stored in it. Finally, take extra copies of everything, including your sequencer program disk (and get a backup if you don't have one).

Remember, if your whole tune is built around a particular sound on that old Crumar Orchestrator of yours, you're probably not going to find an Emulator II or D550 patch that will do the same job. Take the instruments that are vital to your sound with you into the session; otherwise you might spend all of your time searching for sounds instead of recording them.

I had also brought my Roland D110, a multi-purpose instrument containing drum as well as instrument sounds. This was fortuitous, since, through a misunderstanding, I had thought that MIDI City had an Emu SP1200 drum machine, when they actually had an SP12. Craig arrived with several disks of carefully tweaked SP1200 sounds, but there wasn't an SP1200 to be found in all of New York on such short notice. What to do? We used the D110 drum sounds and the hi-hat from the SP12. Fortunately, Matt, who did a great job throughout the entire session, spent a lot of time on the D110 drum sounds, and they ended up sounding just fine.

LESSON 17: Be flexible. Incompatibility problems are bound to arise, so prepare a backup plan. Craig could have transferred his SP1200 sounds over to an SP12 to cover all possible bases, but didn't think to do that (I bet he will next time). Do as much pre-production at home as you can.

Relying on the EPS almost turned out to be a serious problem, as Craig's had been recently updated, and his operating system wasn't compatible with mine. Fortunately, I had brought my own operating system disk.

LESSON 18: So-called 'compatible' instruments are not always so, due to different software revisions and such. Always bring any operating system disks with you, and when checking a studio's equipment, determine whether the software revisions are compatible. You may need to pull the system PROMs from your gear and temporarily install them in the studio's synthesizers for your session.



"Don't panic when you run into a brick wall; either go around the wall, or tunnel under it."


Once we had the gear sorted out, we striped SMPTE timecode on the tape with a Roland SBX80, loaded up KCS and the Phantom synchronising software into the ST, and crossed our fingers. Luckily, the system worked like a champ, and we laid down all the sequencer tracks in a couple of passes. Since I had the D110 editor in KCS's Multi Program Environment, it was easy to reassign the separate drums to different outputs and for the engineer to EQ each separately.

By this time, Craig was acting as producer while I played computer jockey; it was an ideal combination, as he blended and coaxed just the right sounds from each instrument.

LESSON 19: Decide on a division of labour before going into the studio. Divide up the tasks you do best, and stick with the plan. At studio rates, you can't afford any wasted time, motion, or effort.

Listening back to the monitor mix, we found that the clav part had some balance problems that made it very hard to record. Editing the sequence to reduce the velocity on selected four-bar sections solved the problem. I then retracked the part, using SMPTE.

LESSON 20: Different instruments have very different velocity curves. Sometimes it's worth switching to a similar sound on a different instrument to see if its velocity curve will match up with the curve of the instrument that was used as a master controller when recording the sequence. Also, be aware of how to 'compress' velocity on your sequencer, to help smooth out velocity variations. Usually, this is done by adding a fixed amount to all velocity values: the highest values 'clip' against the velocity limits, while the lower levels are brought up. This is essentially 'digital compression'.
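The velocity 'compression' described above is easy to express in code. A minimal Python sketch (the boost amount is an arbitrary example value):

```python
def compress_velocity(vel, boost=20):
    """'Digital compression' by fixed offset, as described in Lesson 20.

    Quiet notes come up by the full boost, while loud notes clip against
    the MIDI ceiling of 127, narrowing the overall dynamic range.
    """
    return min(127, max(1, vel + boost))
```

Applied over the selected four-bar sections, this is the same kind of edit that tamed the clav part, just expressed as a function instead of a sequencer operation.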

Now it was time for the vocals. I recorded lead vocal parts with Craig making suggestions, then Craig added some additional parts to thicken the texture.

We were finally down to recording the sampled voices on tape. Some of them were sequenced, so they went on with no problem, but a few of the other parts stubbornly refused to fit in the groove. I had upped the song tempo a few beats per minute, but Craig had prepared the sample lengths specifically to work at 120 bpm. Trouble. Phrases that had synced up perfectly with the beat were now off.

LESSON 21: It's often better to keep the tempo consistent throughout a project and make any final tempo changes when mixing down to 2-track by using the master recorder's variable speed control.
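Varispeeding the master changes pitch along with tempo, and the shift is easy to predict: a tempo ratio r moves pitch by 1200 x log2(r) cents, so a few bpm stays well under a semitone. A small Python sketch (the function name is an assumption for illustration):

```python
import math

def varispeed_cents(old_bpm, new_bpm):
    """Pitch shift, in cents, produced by varispeeding tape between tempos."""
    return 1200 * math.log2(new_bpm / old_bpm)
```

For example, nudging 120 bpm up to 123 bpm raises pitch by roughly 43 cents, which is audible but often acceptable on a dense mix.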

Fortunately, Craig, through his intimate knowledge of the EPS, was able to modify the problem samples so that they meshed with the new tempo.

Another problem was calling up the right samples at the right time for the right place in the track. (There were three disks of samples, but the EPS could hold only one disk's worth of sounds at a time.) Luckily, I had written out the whole piece on staff paper, and this proved to be a real time-saver.

A printout from Dr. T's Copyist program showing the 'Gibberfunk' score.


LESSON 22: Create a lead sheet for the tune before the session. I generally write this as a three-stave score with a rhythm sketch on the two bottom lines and melody and 'hit points' on the top, much like a standard piano vocal score. This gives me a place to indicate SMPTE cue points for the mix, and helps refresh my memory about what is supposed to happen on bar 53 when it's 4:45 am, and I'm cross-eyed!

For the mix, we graduated to Unique's Studio A to take advantage of the Solid State Logic automated mixing console. I listened to the rhythm section tracks and jotted down SMPTE hit times on my lead sheet. This proved to be a great tool for jumping to SMPTE cue spots when working with the SSL, and it saved us a lot of time. The mix was actually the most exciting part of the sessions for me; I could hear how it all fitted together and how the right tweaking could affect everything.

At 11 am, we walked out into the bright sunlight with a final half-inch master tape. The first thing we saw was a photo vendor on the corner with a full-size cutout of President Reagan that Craig and I just had to get our pictures taken with, considering that he had supplied some of the lead vocals! It was a perfect end to the session.

Of course, the story is still being written. When I took the tape to have cassette copies made, I found out that although it's common practice in New York to make dubs from half-inch tape, in Boston the standard is to use VCR, Beta, or ¼" tape. So I had to go to another studio and transfer our tape to VCR. The VCR dub came out brighter, but when the cassette copies were made, they were fine. And now we are in the same place that many of you are, waiting to hear from the A&R person. Who knows, by the time you read this article, you may hear 'Election Gibberfunk' on the radio.

Al Hospers is a graduate of the University of Florida in sculpture, and the University of Miami in jazz performance. A bassist for 20 years, he has performed with Buddy Rich and David Clayton-Thomas/Blood, Sweat and Tears, and has done session work in New York. For the last four years he has been Chief Executive Officer with Dr. T's Music Software Inc, in Boston.

© Copyright 1989, Electronic Musician magazine, (Contact Details). Reprinted with the kind permission of the Publishers.

MIDI STANDARD FILE FORMAT

The existence of MIDI has allowed the proliferation of musical computer software, but until recently there has not been a standardised method for storing this data. Although all MIDI software is recording the very same protocol, most of the existing software stores that data differently. Recently, the Standard MIDI File (SMF) format was approved by the MIDI Manufacturers' Association. SMF provides the ability to transfer files between programs that support the format, preserving the original sequences and their timings.
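The SMF header itself is tiny. As an illustrative Python sketch (not from the article), here is a builder for the 'MThd' chunk that opens every Standard MIDI File; the 96 ticks-per-quarter-note division is just an example value:

```python
import struct

def smf_header(fmt=0, n_tracks=1, division=96):
    """Build the 14-byte 'MThd' chunk that opens a Standard MIDI File.

    The chunk is the 4-byte id 'MThd', a 4-byte big-endian length
    (always 6), then three 16-bit fields: format (0, 1, or 2), the
    track count, and the ticks-per-quarter-note timing division.
    """
    return b"MThd" + struct.pack(">IHHH", 6, fmt, n_tracks, division)
```

Because the timing division is stored right in the header, a program reading the file can rescale event times to its own clock resolution, which is precisely the 96-versus-240 ppqn problem SMF was designed to eliminate.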



Sound On Sound - Copyright: SOS Publications Ltd.
The contents of this magazine are re-published here with the kind permission of SOS Publications Ltd.


Sound On Sound - May 1989

Donated & scanned by: Mike Gorman
