Article Group:
MIDI Supplement - Part Two

Inside MIDI

Article from Electronics & Music Maker, June 1984

And following on from that, David Ellis takes an in-depth look at how the MIDI system works, and the sort of potential it encompasses.

Like it or not, MIDI looks set to become the universal synthesiser catchphrase for the eighties. But why has so little effort been made to inform musicians of what they'll realistically be able to get out of it? Inside MIDI attempts to redress the balance, with what we hope are straightforward explanations of what MIDI is, how it works, and what you should be able to do with it. Educational text by David Ellis.

Figure 7.

MIDI is a communications link. Baldly stated like that, it sounds about as interesting as any other bit of abstract computer jargon, but MIDI stands for 'Musical Instrument Digital Interface', which implies that the link must be such that the necessary ins and outs of hi-tech musical instruments and micros are efficiently communicated. Efficiency is a tricky thing to specify from a musical point of view, but the main aim of MIDI is to get all the necessary information across to the right place and at the right time. Imagine the following situation: a station platform full of prospective passengers, all seeking information on departure times and destinations of trains. There are two alternative methods of presenting this information. First, by having the traditionally misunderstandable station announcer read out the list of destinations, letter by letter and place by place, of, say, the 4.40 from Paddington, over an equally misunderstood Tannoy system; or secondly, by displaying each destination in turn on the destination board. Clearly, the first approach is tantamount to useless (the train will probably have left before you've finally interpreted what was being said), whilst the second stands a good chance of getting the message across.

Musically, things are pretty similar. For instance, one way of recalling a particular synth patch would be to have someone read out the parameters for multiple VCOs, ADSRs, VCFs, et al. as a long stream of numbers, whilst you do your best to update the controls as and when a relevant item in the stream breaks through the surface of your consciousness. Another way would be to record all the parameters on a sheet and then read them off and change controls at your own speed. The second would doubtless be more efficient, but both are painfully slow in comparison to what could be achieved by interfacing the synth sections with a processor and some memory, so that patch information can be stored in RAM and then retrieved at will to effect a change of voice immediately.

The problem with the human approach to re-patching a synth is that there tend to be a lot of controls and a lot of parameters, and that tends to conflict with our rather poor ability when it comes to putting the right object in the right pigeon-hole. The micro control approach, on the other hand, positively delights in making sure that the right parameter update goes to the right module in the synth, and is therefore a good deal more efficient. In this sort of computerised environment, efficient communication of data needs the right sort of labelling to ensure it gets to the right address, and as we'll see shortly, this forms the fundamental basis of MIDI's communication skills.

Serial v Parallel

As the last few paragraphs have been putting across with all the subtlety of a wet fish smacked around the face, there are two basic ways of passing on information: serial and parallel. The point about the serial method is that words get chopped up into their constituent bits, while the parallel method, on the other hand, makes sure that the entire word is presented all at once. Human beings are pretty serial when they're speaking, but from a musical point of view, we're a good deal more efficient. After all, when we play a chord on a keyboard, we don't finger one note at a time in a Chico Marx arpeggiated fashion; instead, we play all four or five notes at once. That's parallelism in a musical context for you.

Fortunately, computers don't really have any particular predilections one way or the other; they're quite happy to be employed to send out and receive information in either serial or parallel forms. However, both these alternatives need some way of connecting the sender with the receiver, and that invariably comes down to common-or-garden wires. Parallel input and output needs a separate wire to carry each part of the word that's being communicated (just like needing a number of fingers to play more than one note at once), but because micros operate digitally, the talk is of '0' and '1' rather than the 'O', 'X', 'F', and so on of the railway station announcer. These parts of the word are what we mean by 'bits'.

Now, unlike the many lengths of words found in the English language, the words zooming around the wires in the average home computer and most micro-controlled synthesisers are all of one length - eight characters or bits - and it's this word of uniform length that's termed a 'byte'. So, parallel communication needs a cable containing eight wires at the very least if it's to accommodate the whole length of the byte-size chat that micros are so fond of.

Serial input and output, on the other hand, adopts a more economical approach, and chops up the bytes into a stream of bits that, after sending down the serial line, then get reconstituted into their original format. Because we're now only concerned with sending one bit at a time, the original eight-lane information freeway gets reduced down to just a single lane. In practice, the communication needs to be bidirectional, so the single lane for serial traffic gets doubled up, but it's easy to see that the end result of a five-pin DIN plug at either end of a serial link is nothing like as troublesome (or expensive) as the 25-way 'D' connector needed with a bidirectional parallel link. The big problem with the serial approach is evident from the highway simile: one-lane traffic is going to be a damn sight slower and more frustrating than an eight-lane freeway if you're an important bit of information trying your darndest to get from A to Z. On top of that, there's still the small matter of processing the information to and from a bit stream at either end of the serial link, and that takes time, too...


Now, serial communications links were flitting bytes around the computer industry for a considerable while before the music industry cottoned on to the possibilities of computers and, not surprisingly, the other side have hit upon their own set of standards. The one that may ring a few bells, even if only because it's often included in the wording of adverts for micros, printers, and so on, is 'RS232'. Like most standards, RS232 has a particular way of approaching life: in a nutshell, it likes life in the fast lane. Speed in the serial communications business is described as having a certain 'baud rate' (meaning 'x' bits/sec), where, very roughly, the baud rate divided by ten equals the number of bytes being sent per second. The RS232 standard operates at 19.2kBaud, which is equivalent to a byte communication rate of about 2K per second. So, if your own particular idea of amusement was to send Hamlet (around 50,000 words, which would take up about 300K of storage space) from one micro on one side of a room to another micro on the other side via an RS232 link, the serial transfer would take around two-and-a-half minutes. Pretty quick, really: bet our Will would have been tickled pink...
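The sums above are easy to check for yourself. Here's a quick sketch in Python of the divide-by-ten rule and the Hamlet transfer time (the rule assumes one start and one stop bit per eight-bit word, as explained in the next section):

```python
# Back-of-the-envelope check of the article's RS232 arithmetic.
# Each serial word carries eight data bits plus a start and a stop bit,
# so byte throughput is roughly the baud rate divided by ten.

def bytes_per_second(baud_rate: int, bits_per_word: int = 10) -> float:
    """Approximate byte throughput of an asynchronous serial link."""
    return baud_rate / bits_per_word

def transfer_seconds(size_bytes: int, baud_rate: int) -> float:
    """Time taken to push size_bytes down the link."""
    return size_bytes / bytes_per_second(baud_rate)

rs232_rate = bytes_per_second(19_200)   # ~1920 bytes/sec - 'about 2K'
hamlet = 300 * 1024                     # ~300K of storage, as in the text
minutes = transfer_seconds(hamlet, 19_200) / 60
print(f"{rs232_rate:.0f} bytes/sec, Hamlet in {minutes:.1f} minutes")
```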

So where does the MIDI standard fit in as far as parallel vs. serial and speed considerations are concerned? Well, there's good news and bad. The bad news is that the link is serial. The good news is that MIDI is a right old Speedy Gonzales, with a 31.25kBaud rate for both receiving and sending musical data.


Earlier on, I said that the baud rate divided by ten equals the number of bytes being sent per second. But hang about. If the serial link is sending bits down the line at a certain baud rate, and eight bits make up a byte, surely the divider should be eight rather than ten? Well, in reality, serial communication isn't simply a matter of chopping up bytes in an electronic mincer and hoping that they get to their right destination. Actually, they're in desperate need of some help, which is where the two extra bits come in.

Returning to the railway analogy, catching the right train would have been much easier if the station announcer had been able to signify in some way when a particular word stopped and started. For instance, if the 4.40 from Paddington stopped at Reading, Didcot, and Oxford, her serial bit stream would have appeared as something like READINGDIDCOTOXFORD. The last thing you or your micro want is having to sort out where words begin and end, so the conventions of serial data transfer insist on a couple of extra bits at the beginning and end of every eight-bit word to signify 'start' and 'stop'. If British Rail serial transfer used '<' to start a word and '>' to stop it, then our long-suffering announcer would have had a much easier time, as she'd then have been able to come out with <READING><DIDCOT><OXFORD>.

And like any other serial link, MIDI is based on words. What's more, these share in common with the RS232 standard a word length of ten bits, with the first and the last signifying 'start' and 'stop', respectively. See Figure 1.

Figure 1.

Well, at first glance that all looks a bit frightening, but don't give up yet! First, the format diagram tells us that each serial word takes 320us to be transmitted. That follows directly from the 31.25kBaud rate that MIDI operates at, ie. 31.25kBaud = 31.25kbits/sec = 3125 words/sec = 320us/word. Still clear as mud? Oh well, work it out on your calculator. Then, since each serial word has ten bits to it, the TX time of 320us/word is divided ten ways, giving each bit a duration of 32us, and that's just as true for 'start' and 'stop' as it is for what comes in the middle. Note also that 'start' must be low (or '0') and 'stop' must be high (or '1') for words to flow smoothly.
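The framing in Figure 1 can be sketched in a few lines of code. This is only an illustration of the word format described above; the LSB-first ordering of the data bits is standard asynchronous-serial practice:

```python
# A sketch of how one data byte is framed into the 10-bit serial word of
# Figure 1: a low start bit, eight data bits sent D0 (LSB) first, and a
# high stop bit. At 31.25kBaud each bit lasts 32 microseconds.

BIT_US = 32  # microseconds per bit at 31.25kBaud

def frame_byte(data: int) -> list[int]:
    """Return the ten bits of one serial word, in the order sent."""
    assert 0 <= data <= 0xFF
    start = [0]                                      # start bit is low
    data_bits = [(data >> i) & 1 for i in range(8)]  # D0 first, D7 last
    stop = [1]                                       # stop bit is high
    return start + data_bits + stop

word = frame_byte(0b10010000)   # the status byte we'll meet shortly
print(word, "->", len(word) * BIT_US, "microseconds per word")
```

Ten bits at 32us apiece gives the 320us-per-word figure quoted above.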

So what's with the wadge of D0-D7 in the middle of the MIDI word? Well, each of these parts of the word constitutes a bit of data - eight bits in all. The numbering from 0 to 7 is a convention for setting the order of significance of the bits in a word, with D0 the least significant bit (LSB) and D7 the most significant bit (MSB). You can see what this means by taking a number - 100, let's say - and then altering one or other of the digits. Changing the last digit to '1' changes the number to 101, and value-wise that's not much different to the original number. Changing the second digit from '0' to '1' gives 110, which is a more significant change, and, obviously, doing the same with the first digit is even more significant still. Apologies for the return to kindergarten, but it's worth making the point.
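The same point holds in binary, where it matters for MIDI. A quick sketch:

```python
# Bit significance in binary: toggling D0 nudges the value by 1, while
# toggling D7 shifts it by 128 - hence 'least' and 'most' significant.

value = 100                  # 0b01100100

flip_d0 = value ^ (1 << 0)   # toggle the least significant bit
flip_d7 = value ^ (1 << 7)   # toggle the most significant bit

print(value, flip_d0, flip_d7)   # 100 101 228
```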

Where those bits eventually go in the synthesiser or computer that's involved in the MIDI link is a 64,000 dollar question that we'll get to later on, but we've also yet to consider how the serial bit stream gets converted into a suitable state for consumption at either end of the chain: this is where the hardware side of MIDI comes into play.

In fact, everything is taken care of by a special chip called an Asynchronous Communications Interface Adaptor, which turns data into serial words, and vice versa. When the micro or synth wants to transmit a byte of data, it makes its wishes known at the 'transmit' part of the ACIA. The chip then produces the requisite serial word, complete with 'stop' and 'start' bits, ready for sending down the line. When that word reaches the ACIA at the receive end, the chip automatically converts the word back into the original byte of data, whipping off the 'start' and 'stop' bits in the process. See Figure 2.

Figure 2.

This hardware is built into every synth that comes ready-equipped with the MIDI, but there are also a few additional bits and pieces built into the interface. First, some dividing circuitry to convert the micro's clock into a pulse that'll give the required TX/RX rate of 31.25kHz; and second, a device called an opto-isolator, that's inserted into the path of the incoming bit stream in order to prevent ground loops from rodgering the data, or the rather expensive ACIA chip from being accidentally treated to a dose of the National Grid. Not that it's occurred to me that a DIN socket is where the mains goes, but, then again, there are some right wallies around...

All in all, then, Sequential Circuits seem to have done a pretty fair job on the hardware side of the interface, though that doesn't mean to say you can afford to push your luck with the nice new MIDI synth you've just acquired. Remember also that as far as the micro side of the serial link is concerned, SCI haven't yet managed to get the necessary bods around the table to accept their standard. Come to that, not even all the synth manufacturers are going along with SCI (CBS/Fender being the main thorn in SCI's flesh), so it's hard to see micro manufacturers adopting MIDI with anything like equanimity, which is a pity.

So, unless the name on your new micro is Yamaha (the excellent new CX5, for instance), you'll have to add on extra hardware to do the other half of the interfacing job.

Fortunately, there are plenty of options in this direction. If you want something ready-built, there's the growing marketplace discussed in our MIDI and the Micro section later on, while for DIY fanatics, there'll soon be E&MM hardware designs for the Spectrum and BBC Model B home micros. Happy interfacing!


Returning to our harassed station announcer, let's suppose she had to announce the destinations of three different trains waiting at platforms 1, 2 and 3, as in Fig 3.

Figure 3.

Well, if everything was working tickety-boo, those are the destinations that'd go up on the indicator boards. But let's suppose there's a panic on in the control room (the station cat's threatening to jump onto the tracks, for instance). Given that, it'd hardly be surprising if the announcer mixed up the routing of the destinations to the platforms, as in Figure 4.

Figure 4.

What's happened, of course, is that the six crucial words intended for transmission to the waiting public on the platforms have done a bunk and gone off in the wrong direction. Not a happy state of affairs. The point is that those words would normally have a marker attached to them, so that when the instructions were sent off down the cable to the indicator boards on the different platforms, the markers would make sure that the destinations clicked with the boards they were intended for. In fact, if we were able to peer into the station announcer's serial bit stream, it should have gone something as follows:


What's happening here is that each word is prefaced with a tag to ensure that the right word goes to the right platform. If you'll pardon the analogy, that's precisely what happens with MIDI communications; like the average train passenger, MIDI banks on knowing where it's heading.

Channels and Notes

OK, let's switch tracks to the nitty-gritty of musical exchanges via MIDI. We're going to start off by considering the simplest and most essential side of musical life - notes. For the sake of argument, let's assume we've got three MIDI monophonic keyboards capable of velocity-sensing both on attack and release (an entirely theoretical beast, in fact!), to which we want to communicate the necessary info for the simple triads in Figure 5.

Figure 5.

To make things more interesting, we'll give each part of the chord to a different synth programmed with a different voice - a Moog-type bass on the bottom, slapped bass in the middle, and a cello on top, for instance. The MIDI words required for this reasonably simple task come under the heading of Channel Data, and for each note event going to a particular keyboard, there are actually three words involved. First, there's a word that defines the nature of the command and to which piece of MIDI equipment it's directed; second, there's the note value itself; and third, the attack velocity of that note. So, starting with the inverted B major chord, the words that'd be transmitted are illustrated in Figure 6(a).

Figure 6(a).

There's quite a lot that's noteworthy in this example. First, the verbal words of the announcer have been replaced by the numeric words of binary code. After all, there's nothing like honest-to-goodness numbers to avoid confusion and get the message across. Secondly, each word is still enclosed by 'start' and 'stop' signals, making up the 10-bit package that whizzes down the MIDI line every 320us. Taking a closer look at the first group of words aimed at producing a B on the cello-patched synth, we've got the parameters in Fig 8(a) to play around with.

Figure 8(a).

Now, it's the first word ('1001 nnnn') that's responsible for making sure there aren't any cock-ups on the routing front, and is known in MIDI parlance as the Status Byte. Make sure that's engraved on your cranium, because these tend to crop up with monotonous regularity! Such crucially important bytes need to be distinguished from their plain old data counterparts, and the way the MIDI communication protocol goes about this is by setting the MSB to '1' for a status byte and '0' for a data byte. So, by coming first in the MIDI data stream, status bytes make sure that the right command ('note on', in this case) gets directed to the right piece of MIDI equipment, and then the subsequent data bytes determine how that command is interpreted.

Let's continue from where we left off with a few more words (Figure 6(b)).

Figure 6(b).

This introduces the next command in our whistle-stop tour through the MIDI protocol - that for switching notes off - and the parameters are shown in Figure 8(b).

Figure 8(b).

Touch, Change and Mode

Next in the line-up of musical exchanges is that delicious bit of icing on the cake - after touch. The MIDI protocol includes two varieties of this: first, one that's note specific; and second, one that affects all the notes that are still on in a given channel. So, whereas the first enables subtle balancing acts in a held chord, the second treats all the notes equally. Their MIDI phrases are illustrated in Figures 8(c) and (d).

Figure 8(c).

Figure 8(d).

Now, touch is all very well if you're into physical contact and that sort of thing, but most of us have been weaned on bending control wheels, and fortunately MIDI provides plenty of potential in this direction, as illustrated in Figure 8(e).

Figure 8(e).

128 theoretical controls, each with 128 theoretical values, seems like an awful lot to swallow, but in fact it's nothing like as complicated as that - they're just options available to the manufacturer of a MIDI instrument, and far fewer are used in practice. For instance, the SCI SixTrak makes use of just 37 controls, of which nine simply toggle something on or off and the rest actually make some sort of graded change in the direction of coarse and fine VCO frequencies, filter cutoffs, and the host of other parameters deeply engrained in each and every synthesist's brain.

In addition, some of those 128 potential control points are kept for specific purposes such as Mode messages. 122 = keyboard control (which simply switches a synth's keyboard on ('local') or off ('remote')); 123 = all notes off (a command which many keyboards don't obey!); 124 = Omni off; 125 = Omni on (which results in a keyboard losing its ID and responding to the events on all 16 channels at once); 126 = Mono on/Poly off (where a monophonic line on one channel is played by the specific keyboard voice assigned to it, and the value of the second data byte sets how many monophonic channels will be received by a particular keyboard); and 127 = Poly on/Mono off (where the keyboard responds like a normal polyphonic synth to the notes on its assigned channel).
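Those reserved control numbers boil down to a simple lookup. A sketch, using the names from the paragraph above (how a given synth actually responds to each, if at all, is down to the manufacturer):

```python
# The Mode messages carved out of the 128 controller numbers.

MODE_MESSAGES = {
    122: "Local keyboard control on/off",
    123: "All notes off",
    124: "Omni off",
    125: "Omni on",
    126: "Mono on / Poly off",
    127: "Poly on / Mono off",
}

def describe_control(number: int) -> str:
    """Name a reserved Mode message, or flag an ordinary controller."""
    return MODE_MESSAGES.get(number, "ordinary controller")

print(describe_control(123))   # All notes off
print(describe_control(7))     # ordinary controller
```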

Two important points emerge from the 'Control change' aspect of the MIDI protocol. First, there's ample opportunity for expanding the current range of control options; and second, there's little likelihood (since the MIDI standard doesn't lay down the law on this count) that manufacturers are going to agree on which control is given which address. So, if you've plans to equate real-time changes in cutoff frequency on a Prophet T8 with the decaying modulation index of a DX7, and all from a non-specific sequencer running on the Spectrum, best of British luck!

Figure 6(c).

Figure 6(d).

Figure 6(e).

Figure 6(f).

So, continuing with the other two chords (Figure 6(c)) and after a quarter-note's duration (Figure 6(d)), we move on to the end of the piece (Figures 6(e) and (f)).

OK, that's a fairly simplistic example, but it goes to show how you can make dynamics work for you with MIDI - in this case, moving the cello line up in level against a diminuendo in the other two parts. The other point is that, by requiring both 'note on' and 'note off' commands, the protocol makes nuances of articulation quite straightforward. For instance, a staccato passage of quavers could be achieved by sending the 'note off' command after just a semiquaver duration and then leaving a gap of a further semiquaver before sending the 'note on' command for the next note in the sequence. Just like what's been going on for donkey's years in the way of 'gap time' on MC4s and the like.
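The staccato trick can be sketched as an event list. This is just an illustration of the timing scheme described above, with time measured in semiquaver units:

```python
# Staccato quavers via MIDI: for each quaver, send 'note off' after a
# semiquaver's duration, leaving a semiquaver of silence before the
# 'note on' for the next note.

SEMIQUAVER = 1   # one time unit
QUAVER = 2 * SEMIQUAVER

def staccato_events(notes: list[int]) -> list[tuple[int, str, int]]:
    """Return (time, event, note) tuples for a staccato quaver passage."""
    events = []
    t = 0
    for note in notes:
        events.append((t, "note_on", note))
        events.append((t + SEMIQUAVER, "note_off", note))  # cut note short
        t += QUAVER   # next note starts a full quaver later
    return events

for event in staccato_events([60, 62, 64]):
    print(event)
```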

On the other hand, if you want a legato, where one note replaces another without any intervening gap, it's clear that having to switch a note off before switching the next one on isn't going to be quite like the real thing. Fortunately, there is a way of getting a legato out of the MIDI protocol, but this involves going into a special mode of operation aimed more specifically at monophonic lines (and therefore called 'Mono' mode). The point about this mode is that it allows a note value to be changed without obliging you (or, more likely, the equipment) to send 'note off' commands - legato, in other words. We'll come back to the various MIDI modes shortly.


Having spent some time talking about the MIDI words needed to send playing instructions, we've still got to address (and I use that word with purpose) ourselves to the problem of making sure that the notes we've tagged with a status byte actually get to the synth they're intended for. Without that, it's a bit like getting off a train to meet someone you've never met before. The solution is to make sure that the person you're meeting is clearly identifiable from the crowd - a prior instruction to wear a red carnation, for instance.

MIDI does much the same by insisting that all the equipment on the receiving end of a MIDI transmission should have their own identification codes. Thus, a DX7 could be assigned the Channel 1 ID code, a SixTrak Channel 2, and a Poly 800 Channel 3. A '144' status byte zooming down the MIDI pipeline would then automatically get routed to the DX7 and effect a 'note on'; a '145' would go to the SixTrak, and a '146' to the Poly 800. I've chosen those three keyboards wisely, because they and the DX9 are the only keyboards available at the moment that allow the user to set the MIDI channel ID himself. The majority of the others are preset to Channel 1, a point that makes them incompatible (though not, of course, unusable) with sequencing software that allows individual tracks to be assigned to particular MIDI channels and, therefore, played with different voice programs (see Figure 7).
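The routing works because the low nibble of the status byte selects the channel. A sketch, using the three keyboards from the example above:

```python
# 144 (1001 0000) is 'note on' for channel 1, 145 for channel 2, and so
# on up to 159 for channel 16. Each receiver simply ignores status bytes
# whose channel nibble doesn't match its own ID.

CHANNEL_MAP = {1: "DX7", 2: "SixTrak", 3: "Poly 800"}  # user-assigned IDs

def route_note_on(status: int) -> str:
    """Name the instrument a 'note on' status byte is destined for."""
    assert 0x90 <= status <= 0x9F, "not a 'note on' status byte"
    channel = (status & 0x0F) + 1   # channels are numbered 1-16
    return CHANNEL_MAP.get(channel, f"no instrument on channel {channel}")

print(route_note_on(144))   # DX7
print(route_note_on(145))   # SixTrak
print(route_note_on(146))   # Poly 800
```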

Now, the daisy-chained example in the illustration isn't quite as straightforward as you'd imagine. For some reason or other, SCI saw fit not to equip the SixTrak with a MIDI Thru socket, so the Poly 800 isn't going to get its slice of action from that source (the Poly 800 lacks a Thru socket too, as it happens). Instead, you're obliged to use one of the extra MIDI Outs on the computer interface (assuming there is more than one - again, SCI make life difficult by providing only one on their Commodore 64 sequencer) for this purpose. Or, failing that, there's always Roland's MM4 box of tricks to turn one MIDI Out into four MIDI Thrus.

Bending and Programs

Finally, to cap the essentially musical side of MIDI relations, there are two more phrases to be encountered, one for pitch bending and the other for voice program changes (Figures 8(f) and (g)).

Figure 8(f).

Figure 8(g).

By now, you should be getting the gist of the MIDI language, but take your time; savour those words and bounce them around until status and data bytes gel into a nice, ordered queue. Because that really is the whole point - those bytes are being squirted down the MIDI pipeline like some endless tube of toothpaste with different flavour stripes for each of the 16 channels, and the whole success of their mission depends on everything going off in the right direction.

Well, that's not 100% true. Sometimes you might feel like mixing all the stripes up and consuming a nice, homogeneous whole - like putting all 16 MIDI channels in a food processor and dishing out a portion to each open-mouthed synth - and that's what the Omni mode is there for. Any item of equipment that's given the OK to go into Omni mode will then take in and digest note on/note off events sent in all 16 channels: just the thing if you want to parallel one keyboard with another and get a super-thick sound.

On the other hand, if you want your computer to control lots of synth voices with rather more in the way of individuality, then the Mono or Poly modes are where you should be heading. Poly is ideal for the sort of situation where you've a number of polyphonic parts that you'd like played by different synths (as in Figure 7, for instance). In contrast, Mono is ideal for those musicians of a more finickety bent who are after individual lines on individual channels being sent off to their own special voices.


How about the other side to MIDI's character - the System data? Well, there are really three sides to the system coin, namely System Exclusive, System Common, and System Real Time. System Exclusive is what you hear muttered about most of the time by the big manufacturers, as this provides the means of a more direct lifeline to the insides of MIDI equipment. For instance, on the SixTrak, SCI use this for loading and dumping stack and program data. There's also a particular System Exclusive instruction available which enables a MIDI transmitter (a computer, for instance) to actually switch the SixTrak to a particular ID channel, thereby freeing the musician from that irksome responsibility.

So, if every manufacturer followed suit, you might in the future be able to enter your parts into the micro, assign them to particular channels, and then enter the type of keyboard you want to play them. The micro would then recognise the keyboard and automatically instruct it to switch to the relevant ID channel. No longer would you be forced to search through the instruction manual to find out how to change IDs, or, horror of horrors, realise that you're stuck with the manufacturer's pre-assignment of IDs. Until then, though, we're obliged to wait patiently for manufacturers to release their System Exclusive data (though that for the DX7 and SixTrak is already available) in order to see what surprises (if any) they've got in store.
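The general shape of a System Exclusive frame is fixed, even though its contents aren't: a start byte, a manufacturer ID, the manufacturer's own payload, then an end byte. A sketch (the ID and payload below are purely illustrative):

```python
# System Exclusive framing: 0xF0, manufacturer ID, payload, 0xF7.
# What the payload means is entirely the manufacturer's business -
# which is the whole point of 'exclusive'.

SYSEX_START, SYSEX_END = 0xF0, 0xF7

def sysex(manufacturer_id: int, payload: list[int]) -> list[int]:
    """Wrap a payload in a System Exclusive frame."""
    assert all(0 <= b <= 127 for b in payload), "payload must be data bytes"
    return [SYSEX_START, manufacturer_id, *payload, SYSEX_END]

msg = sysex(0x01, [0x22, 0x00])   # hypothetical ID and payload
print(msg)                        # [240, 1, 34, 0, 247]
```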

Finally, System Common and System Real Time are really for the live musician. The former includes such commands as Song Position (meaning measure number), Song Select, and Tune Request. System Real Time, on the other hand, is pretty important, as its data bytes provide the wherewithal for keeping things in sync with a 24-pulse-per-quarter-note clock.
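The clock arithmetic is straightforward: at a given tempo, the interval between successive clock pulses is fixed. A sketch:

```python
# At 24 pulses per quarter-note, the gap between MIDI clock pulses is
# the length of a quarter-note divided 24 ways.

PULSES_PER_QUARTER = 24

def clock_interval_ms(bpm: float) -> float:
    """Milliseconds between successive MIDI clock pulses at a tempo."""
    quarter_ms = 60_000 / bpm   # one quarter-note, in milliseconds
    return quarter_ms / PULSES_PER_QUARTER

print(f"{clock_interval_ms(120):.2f} ms")   # 20.83 ms at 120 bpm
```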


So, after all these words, we're still left with the major question of what MIDI will do for the average musician.

If you're using MIDI simply as a means of syncing a keyboard with a drum machine and playing a couple of keyboards in parallel, you're likely to benefit greatly. MIDI is well suited to that purpose - provided, of course, that the manufacturers can get together to agree on what basic information should be sent and received. Moreover, once you get down to using the MIDI as a pipeline for the rather more ambitious and complex instructions encountered in a multitracked piece, put together with the assistance of some software running on a MIDI-equipped micro, the window opens to an exciting future of multi-timbral arrangements. However, that's not without some problems of inception.

For starters, New England Digital, Oberheim, and CBS/Fender weren't out of order when they criticised the slowness of MIDI - note events do take a finite time to get to where they're being directed. Try a simple bit of maths with the 320us that it takes to send each word of the phrase to switch a note on or off. Three words = 960us, or about 1ms. Switch on eight notes at once, and that takes around 8ms. Not too bad, but remember that's without adding on the frills and fancies of after touch, pitch bending, program changes, or whatever.
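That latency sum, in code form:

```python
# Each serial word takes 320 microseconds, and a 'note on' phrase is
# three words, so an eight-note chord ties up the line for nearly 8ms.

WORD_US = 320
WORDS_PER_NOTE_EVENT = 3   # status byte + note number + velocity

def chord_delay_ms(notes: int) -> float:
    """Time on the wire to switch on a chord of the given size."""
    return notes * WORDS_PER_NOTE_EVENT * WORD_US / 1000

print(chord_delay_ms(1))   # 0.96 ms per note
print(chord_delay_ms(8))   # 7.68 ms for an eight-note chord
```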

In fact, there also appears to be a measure of unwanted delay creeping into the proceedings even when a relatively small number of note events are being sent between keyboards in a daisy-chain situation. For instance, one experiment I've tried involved a SixTrak transmitting via two DX9s to a DX7. Since Yamaha's keyboards sensibly incorporate MIDI Thru, and since MIDI Thru is claimed to be a carbon-copy of what's presented to the keyboard at MIDI In, you'd expect a straight-down-the-line transmission from the SixTrak to the DX7. But far from it. To be fair, all the notes were there, but there was a quite noticeable delay between what was played on the SixTrak and what emanated from the DX7. Worrying. So, why do Yamaha keyboards add a delay factor between MIDI In and MIDI Thru? Confusing, isn't it?

But let's not carp too much at this stage. MIDI is still young and innocent and just waiting to be taught a trick or two. Personally, I think most of that's going to emerge once people start using computers in a big way with MIDI. There's no doubt that the MIDI language provides enough finesse to make some very interesting and powerful musical statements, but it has to be used intelligently - and that's where you come in!


Previous Article in this issue

Technical Introduction

Next article in this issue

MIDI and the Micro

Publisher: Electronics & Music Maker - Music Maker Publications (UK), Future Publishing.
