Technology On The Air
When Paul D. Lehrman was asked to use his technological expertise to help create a radio quiz show, he rose to the challenge — and discovered it was just about the best fun he could have standing up.
When do the worlds of hi-tech music and broadcasting meet? More often than you might think: when a MIDI studio is used to write a jingle or station logo, when a TV show is scored with a sequencer locked to videotape, or when samplers and hard disk systems are used to edit and trigger sound effects and dialogue on a radio or television production. But generally speaking, these are 'off-line' activities, which take place before the actual broadcast. Of course, many shows have live bands that use synthesizers, but these are generally played just like any other instrument — the more advanced capabilities of the technology are not used on the air.
But musical technology can be used to great advantage in a live broadcasting context, replacing and augmenting the turntables, CD players, tape decks, and cartridge machines of live radio. I was recently given an opportunity to design an experiment in that direction, stretching the capabilities of several aspects of MIDI and sampling, to produce a new kind of radio show which aired in Boston, Massachusetts, this Spring.
It started with a phone call in January from Mike Manning, a producer at one of the two main public radio stations in Boston, where I live, WGBH-FM. (See the sidebar for more about public radio in the US.) He said that the station was gearing up to produce a live phone-in musical game show. There would be a 'host' to talk to the callers, and someone to play musical examples, fill space, and comment aurally on the proceedings. They couldn't possibly afford to hire a real band, and they didn't want just a piano player, so my name came up, courtesy of an old friend who worked at the station. Could I put together a bunch of MIDI equipment which would allow me to play snippets of symphonies, operas, musical comedies, and big-band swing, at the drop of a hat? And also maybe throw in some sound effects and make wisecracks about the host and the guests, to make it sound like a late-night chat show? "Piece of cake," I replied.
Callers to the show would be asked various questions on musical subjects, such as "Who lived longer, Mozart or Schubert?", or "What composer's name, translated into English, would be 'Joe Green'?", or "Who wrote the lyrics to the Leonard Bernstein musical West Side Story?" (Answers: Mozart, Giuseppe Verdi, and Stephen Sondheim.) There would also be a section in which two callers would race against each other, pressing different buttons on their touch-tone phones if they came up with the answer to a question first. In addition, there was to be a weekly musical puzzle, which would consist of musical fragments combined in weird ways. Listeners would be encouraged to tape the puzzle, sort out and identify the fragments, and send in their answers by mail. The following show, we'd solve the puzzle and announce the winners. The prizes were small, but appropriate to the audience: concert tickets, CDs, coffee mugs, and T-shirts.
My job would be to play musical fragments behind the question-and-answer sections, illustrating some aspect of (and sometimes even giving away the answer to) each question. It would be a kind of 'nudge, nudge, wink, wink' to the radio audience — they could hear me, but the contestants could not. Since the subject matter changed rapidly, I needed to be able to switch instantaneously from one musical style and orchestral setup to another. I would also need sound effects — boos, cheers, rim shots, maniacal laughs, fanfares, car crashes, things falling down stairs — to comment on the proceedings, and these would often have to be played simultaneously with the music. The musical puzzles, thankfully, could be pre-produced, either as sequences or as finished recordings.
The project was to culminate in four pilot shows, broadcast over four Sundays in May. If the response was good, the show would be renewed in the autumn, but I was given no guarantees that this would happen. It was a strong incentive to be as imaginative as possible.
Before we went into production on the pilots, Mike, the producer, and executive producer Jon Solins wanted to run some auditions, to hear how I would perform under show conditions, and also to test out several different announcers (they also auditioned another possible musical director who, as it turned out, was a better keyboard player than I, but didn't have the MIDI and orchestration experience). For these auditions, I decided I would play all the cues directly from a keyboard. They sent me a list of possible questions ahead of time, and I came up with something appropriate for all of them — either a tune from my rather eclectic collection of sheet music, a score from the music library at the college where I teach, or something I dug up from memory.
I then created programs tailored for each cue, involving complex keyboard splits, on a Kurzweil 1000 PX module. For example, for a jazz tune behind a question about Miles Davis, I might have acoustic bass layered with ride cymbal in the bottom two octaves, a piano in the next two octaves, and a muted trumpet in the top octave. I would use my trusty old DX7 as a master keyboard, and assign a program change to each setup so I could move quickly among them. I didn't bother recording any sound effects for the auditions (although I did manage a convincing "awwww" using the 1000's chorus sample), figuring I could use a sampler for those later down the road.
The auditions went smoothly, but I found that keeping track of which questions needed which patches and which tune I was supposed to play, while thinking up appropriate wisecracks to add to the merriment, was a bit more than I could handle. So I thought it would be a good idea to pre-program the musical cues as much as possible, and find some playback engine that would let me access them randomly.
At the same time, Mike and Jon couldn't decide between two potential hosts, named Tony and Margaret, and so they hired both of them. This meant I would talk less on the air, and although at first my ego was slightly bruised, I soon realised it was for the better, as I could concentrate more on the music.
I found a simple but highly effective way to pre-program cues in Apple's ubiquitous HyperCard, using an Opcode program called MIDIplay. By means of extra HyperCard functions and commands (XFCNs and XCMDs), this lets you create programs or 'Stacks' that address MIDI instruments, play back Standard MIDI Files, and also offer a variety of real-time controls.
I sequenced each musical cue, complete with patch and tempo changes, using Passport Designs' Pro 5, and exported them as MIDI Files. Since I was planning to use the keyboard 'live' for certain cues and for sound effects, I decided Channel 1 would be my basic channel (that is, the one addressed by the keyboard), and my sequences would use only channels 2 and up. The K2000 has up to 900 internal programs arranged in nine banks, and there are a number of different ways to call them up over MIDI (unfortunately, a direct map — program change number x calls up program y — is not one of them). I decided to use a method which required two program changes at the beginning of every track: one to call up the bank, and the other the individual program. As long as I didn't start any sequences in the middle, which I didn't plan to do anyway, this would work fine.
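The two-message selection described above can be sketched in a few lines of Python. The precise mapping — a bank chosen by one Program Change, then the program within it by a second — is shown here with an illustrative numbering scheme (banks selected via program changes 100–109), since the article doesn't spell out which of the K2000's several selection methods was used; treat the numbers as assumptions.

```python
def k2000_select(bank, program, channel=1):
    """Build the two raw MIDI Program Change messages placed at the
    start of each sequence track: the first picks the bank, the
    second the program within it. The bank-via-program-change
    numbering (100 + bank) is an illustrative assumption, not the
    documented K2000 scheme. 0xC0 is Program Change status; the low
    nibble carries the MIDI channel (0 = channel 1)."""
    status = 0xC0 | (channel - 1)
    bank_msg = bytes([status, 100 + bank])     # first PC: select the bank
    prog_msg = bytes([status, program % 100])  # second PC: select the program
    return bank_msg, prog_msg

# e.g. bank 2, program 35, on channel 2 (channel 1 was reserved for the keyboard):
bank_msg, prog_msg = k2000_select(2, 35, channel=2)
```

Because both messages live at the very start of the track, starting a sequence mid-way would skip them — which is why the method only works if playback always begins at bar one, as noted above.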
I then created a HyperCard stack in which individual onscreen 'buttons' would call up the various MIDI Files from disk and then play them. Because neither sequence data nor HyperCard stacks require much memory, I could record the sequences at home using my Macintosh IIcx and put them onto a couple of floppy disks. I would bring these to the station for the broadcast, loading the files and the stack (into which the XCMDs had been installed), onto the station's Mac SE.
There were two slight problems: firstly, because the SE is a much slower computer, the stack's response time was quite a bit longer on the station's Mac, and I found I needed to anticipate each cue by about a second; secondly, playing back some of my MIDI Files crashed the computer. They weren't long or complex files, so I never did figure out why this was happening. In the end I didn't use those files, or I made different versions of them and ensured that these would play back correctly.
Besides giving me random access to as many files as I could fit buttons on a screen, MIDIplay provided a number of very valuable functions. I could stop a sequence at any time, and I could also, by creating a 'freeze' button, stop the sequence and hold the notes, the way a real bandleader would get his orchestra to pause while waiting for something to happen. This freeze button simply set the playback tempo to 1% of the file's nominal tempo (MIDIplay won't let you specify 0%). After the freeze, I could then tell the band to shut up, or if I wanted to resume playback, a 'resume' button set the tempo back to 100%.
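The arithmetic behind the freeze trick is worth making explicit: scaling the tempo to 1% stretches the time between sequence events a hundredfold, so sounding notes simply hold. A minimal sketch, with the function name and parameters my own:

```python
def event_delay(beats, bpm, tempo_percent):
    """Seconds between two sequence events 'beats' apart, at the
    file's nominal tempo (bpm) scaled by tempo_percent. A 'freeze'
    sets the scale to 1% (MIDIplay disallows 0%), so every gap
    stretches 100x and held notes sustain almost indefinitely;
    'resume' restores 100%."""
    effective_bpm = bpm * tempo_percent / 100
    return beats * 60 / effective_bpm

# One beat at 120 bpm: 0.5 s at full speed, 50 s when 'frozen' at 1%
normal = event_delay(1, 120, 100)
frozen = event_delay(1, 120, 1)
```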
Another MIDIplay feature is a 'master volume' command, which actually doesn't affect volume, but instead is a velocity scaler: it changes the velocity of the played notes from 0% to 150% of their original values. An on-screen 'slider' with a numerical readout allowed me to adjust this parameter in real time. Moving the slider would not affect any notes already playing, but would only change the velocity of subsequent notes, so in a slow-moving passage or over a long string pad, the response would be pretty slow, but it was certainly good enough for my purposes.
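The behaviour of that velocity scaler can be sketched as a per-message transform. The exact arithmetic is a hedged guess — only the 0–150% range is documented above — but the essentials are that already-sounding notes are untouched and results must stay within MIDI's 7-bit range:

```python
def scale_velocity(note_on, percent):
    """Apply a 'master volume' percentage (0-150) to the velocity
    byte of a MIDI Note On message, as MIDIplay's scaler does.
    Only messages passed through after the slider moves are
    affected -- notes already playing keep their old velocity.
    Results are capped at 127; a scaled velocity of 0 makes the
    message act as a Note Off, silencing the note entirely."""
    status, note, vel = note_on
    scaled = min(127, round(vel * percent / 100))
    return (status, note, scaled)

# A note recorded at velocity 80, with the slider pushed to 150%:
msg = scale_velocity((0x90, 60, 80), 150)   # velocity becomes 120
```

This also makes clear why the response felt slow over a long string pad: the scaler can only act on the next Note On, and in a sparse passage the next Note On may be seconds away.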
In addition, if I found a file that was recorded much too loud or too soft in relation to the rest of the files, I could include a master volume command right in the button that called it up, and thus bring it into line. Because HyperCard lets you change its programming literally as it is running, it was easy to make last-minute adjustments like this in the studio.
The 1000 PX was just fine for most of the musical stuff, but I still had the problem of sound effects to deal with. As I was resigning myself to extracting my Roland S750 sampler from its happy home in my studio and putting it into a portable (but very heavy) rack, a fascinating new instrument crossed my path: the Kurzweil K2000. I was asked to review it for an American pro-audio magazine, and after about an hour with it I realised that, with a few tweaks, here was the answer to all of my problems.
The K2000 has been reviewed extensively (and very favourably) in these pages, so I won't reiterate all of its features, but there was one of particular significance to my situation: its SCSI port. This allows samples to be imported from other sources via 'SMDI', the new high-speed digital transfer format designed by Peavey. It also allows those samples, as well as samples loaded in from floppy disk, to be stored on an external hard disk, from which they can be loaded much faster than from floppies.
This instrument could be my live orchestra (it has more ways to split its keyboard than you can shake a baton at), my sequenced orchestra (it is fully multitimbral), and my sound effects generator. It puts all this functionality into a box that happens to fit very snugly into my old DX7 road case (it even weighs a few pounds less than the former occupant!), and the only other hardware I would have to take to the station for each broadcast was a MIDI interface and an external hard drive. For the former I used an old self-powered Passport interface, and for the latter a Microtech 44MB Syquest removable.
Unfortunately the Kurzweil has no direct sampling inputs (an internal sampling option had been announced, but was still several months away), so I had to create the samples with something else and then transfer them digitally into the K2000. This eventually involved three steps. I first recorded the samples (courtesy of the Sound Ideas sound effects library which the station owns) from a CD player with an S/PDIF digital output into Digidesign's Sound Designer II, using their Audio Interface, which has an S/PDIF input (see Figure 1). Sound Designer II, however, does not support SMDI. Passport Designs, on the other hand, were at that moment working on making their Alchemy program SMDI-compatible, and the company sent me a Beta test version of the software that would work with the K2000. Since Alchemy supports Digidesign's file format, I could open the Sound Designer II files in Alchemy and then transfer them directly, without doing any conversions or using up any extra disk space.
SMDI is several orders of magnitude faster than the MIDI Sample Dump Standard — the longest file took less than 30 seconds to transfer. One problem I had was that if I wanted to send more than one sample at a time, Alchemy was not very cooperative, and would freeze after it finished sending the first one. This slowed me down only a little, and Passport assure me that the latest shipping version of Alchemy has this bug fixed.
The K2000 comes with 2MB of sample RAM as standard, but has room for up to 64MB, using standard Macintosh-compatible SIMMs (single in-line memory modules). The Roland S750 also uses Mac SIMMs (very smart marketing decisions on the part of both companies!), so instead of having to buy extra memory, I merely cannibalised the 16MB of extra RAM from my S750 and put it into the K2000. This was plenty for the job at hand. I also discovered that the K2000 uses a clever new type of SIMM holder which doesn't break nearly as easily as others — if you've ever tried to remove memory from a computer or sampler you'll know what I mean. Other manufacturers please take note!
The samples ranged from a flexitone (110k), to a New Year's Eve party (350k), to an orchestra tuning up complete with the sound of the conductor entering (2.64MB). I also sampled my own voice, saying "aww" and "no, no, no, no...", with a microphone, and sent those to the Kurzweil as well. All of the samples were mono — the K2000 can theoretically handle stereo samples, but the procedure of layering them and making sure they were perfectly in sync was more than I wanted to deal with at the time.
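Those file sizes translate into playing time in a straightforward way. The sample rate and word size are assumptions — the article doesn't state them — but 44.1kHz/16-bit mono matches the CD source material, and on that basis the figures come out plausibly:

```python
def sample_seconds(size_bytes, rate=44100, bytes_per_frame=2):
    """Approximate duration of a mono sample from its file size.
    The 44.1kHz rate and 16-bit (2-byte) word size are assumed,
    matching the CD sources the effects were recorded from."""
    return size_bytes / (rate * bytes_per_frame)

flexitone = sample_seconds(110 * 1024)           # roughly 1.3 s
tuning_up = sample_seconds(int(2.64 * 1024**2))  # roughly 31 s
```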
Assigning, transposing, and layering the samples within the K2000 are very straightforward procedures, and within a couple of hours I had a sound-effects 'map' on the keyboard (see Figure 2) which lasted me through all four shows.
When all was said and done, I transferred the complete K2000 setups — samples and programs — to a single Syquest removable hard disk cartridge. Actually, the process wasn't quite that linear: for fear of losing everything in some kind of system crash (which, thankfully, happened very rarely), I was continually loading samples from the Mac and storing them on the Syquest. Because the Kurzweil has only a single SCSI port, this meant using a SCSI A/B switch, with the line coming from the Mac on one side of the switch and the Syquest on the other. As with most complex SCSI setups, getting ID numbers and terminations straight took a bit of experimentation, but the final result was a stable setup.
One of my tasks was to produce the complex musical puzzles, for which listeners were invited to write in with the solutions. We needed three of them; there was no sense in presenting a puzzle on the last show, because we'd have no way to announce the winner! One was a 'quodlibet' which Mike, a concert pianist in a former life, had written. It was a chamber music-like piece which incorporated eight themes ranging from Haydn's Surprise Symphony, to the final cadence of Wagner's Tristan und Isolde, to a riff from 'Purple Haze'.
Since it was all down on paper, it was an easy task to play it into the sequencer, and assign the different parts to strings, winds, and brass on the K2000. Mike, who up to that point had only heard it played on a piano, thought it sounded great.
Another puzzle stemmed from one of my ideas. For the opening and closing themes of the show, I wrote some upbeat, fake Hollywood, disco-sounding stuff, complete with fanfares, stabs, and vamps, which used for its themes a variety of classical, jazz and show tunes. I had produced it in my studio, using Yamaha, E-mu, Roland and Kurzweil synths, and gave it to the station on DAT tape. Mike suggested I tighten it up, give it a definite beginning and end, and re-arrange it so I could play it from the K2000. This would be our second puzzle.
There were 14 tunes in the final version, and only two listeners managed to get all 14. They later became contestants on the show. As much fun as it was to create the puzzle, it was even more fun to explain it on the air. I broke the piece down into 14 small sequences, and emphasized the tune fragment in each one by raising its volume or velocity. I then loaded all 14 into Pro 5, and created a Playlist which would allow me to go from one to the next by pressing any Macintosh key. I felt this would be faster and more error-proof than going through HyperCard. So when it came to dissecting the piece on the air, I'd play a segment, identify it, hit a Mac key and play the next segment, and so on. We got it all down in one take.
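The Playlist idea above — an ordered set of sequences, with any keypress advancing to the next — is simple enough to sketch. The class and the filenames are my own invention, purely to illustrate the mechanism:

```python
class Playlist:
    """Minimal sketch of the Pro 5 Playlist workflow: an ordered
    list of sequence files, each keypress stepping to the next.
    Class name and file naming are illustrative, not Pro 5's own."""
    def __init__(self, files):
        self.files = list(files)
        self.index = -1
    def next(self):
        """Advance to the next sequence; None once the list is done."""
        self.index += 1
        if self.index >= len(self.files):
            return None
        return self.files[self.index]

# The 14 puzzle segments, stepped through one keypress at a time:
segments = Playlist([f"puzzle_{n:02d}.mid" for n in range(1, 15)])
first = segments.next()
```

The appeal over HyperCard buttons is exactly what the text says: there is only one possible action per keypress, so there is nothing to mis-click on the air.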
Oh yes — I should mention that when we got to this point in the preparation of the programs, it was decided that we would pre-record not just the puzzles, but the shows in their entirety. We had been hoping to do the phone-in segments live, but since none of us were experienced on-the-air phone jocks, we weren't confident that it would work. Pre-recording the games would allow us to edit them, and make sure that the programs moved along quickly without any dud callers. The contestants were real listeners who had called wanting to be on the show, and they did not know the questions before they played, but we had them on tape a few days before the show aired.
The third puzzle was the brainchild of Jon, our executive producer. He wanted to take the first notes of recordings of nine famous pieces — from Beethoven to the Beatles — and string them together. This would normally have entailed recording them all on little pieces of tape, making sure all the levels more or less matched and the segments were all faded and trimmed properly, and splicing them together by hand. Estimated time: about four hours. The station had no digital editing capability (a Sonic Solutions system was on order but hadn't yet arrived), so I volunteered my studio for the job.
Mike brought over the CDs. We recorded them digitally into Sound Designer II, normalised them, and trimmed the beginnings and ends so that each segment would sound smooth. We then imported these segments into Opcode's Studio Vision.
Although I had Pro Tools available, I felt Studio Vision was more suited to the job. Like Pro Tools, Studio Vision would let us move the segments around in time visually, so we could get just the right amount of space between them, but it would also let us adjust the volume of each segment independently with a single command. In Pro Tools, levels are set with the Pro Deck program, which is much more complicated than this task really required. Once we got everything lined up, we recorded the finished puzzle (also digitally) to DAT. Total elapsed time: 40 minutes.
The response to the four shows was not what you'd call overwhelming. After all, they were broadcast on Sunday afternoons just as the weather was turning nice — not exactly an ideal time to catch people sitting at home next to their radios. What response we did get however, was almost universally favourable, and lots of people called in wanting to embarrass themselves on the air.
Unlike commercial television, in which dozens of pilots a year are developed in the hope that one or two will become hits, pilot programs in public radio are relatively rare: WGBH has done only three in the last five years. Whether we are renewed in the autumn will depend not on how many people listened to the four shows, but on whether the station management feels they can get funding for it, and whether it can be shown to pay its way. If it is renewed, chances are the station will try to sell it to other public stations around the country, either on one of the existing public networks or on an ad hoc basis, and we could find ourselves with a national audience.
As I write, it's far too soon to tell whether any of this will happen, but this was certainly one of the most fun and creative gigs I ever had the privilege of landing. It also proved, as if further proof was needed, that music and MIDI technology have uses in the audio production field that most of us have yet to think of.
Feature by Paul D. Lehrman