Composer's Desktop Project
The CDP is a non-profit-making co-operative formed by a group of professional composers and researchers at York University, whose intention was to create an affordable personal workstation - a 'desktop IRCAM' - on which to run powerful synthesis and signal processing programs. Richard Dobson reveals the power of this Atari ST-based system.
Readers may recall a review in last May's Sound On Sound of an innovative program called MIDIGrid. It was introduced as "...the first mainstream music software product from the Composer's Desktop Project." This was not quite the first time the CDP have been mentioned in SOS - indeed, the previous issue contained a description of a computer music conference at Keele University in which the CDP system figured very prominently - yet the article did not totally raise the veil of esoteric mystique that seems to cover both computer music in general, and the CDP in particular.
So who are, or what is, the CDP? To tell the whole story would take far too long, but in essence, the CDP is a non-profit-making co-operative formed by a group of professional composers and researchers at York University. Their intention was to create an affordable personal workstation - a 'desktop IRCAM' - on which to run the powerful synthesis and signal processing programs which were otherwise only available on expensive, and usually inaccessible, minicomputers and mainframes. Equally importantly, the system had to produce sound of professional quality.
Having identified the then new Atari ST as an ideal computer, they designed the SoundSTreamer as the first system component, which allows a Sony PCM AD/DA convertor (a stereo CD quality machine) to communicate, via the ST, with a hard disk. They also ported to the ST some powerful software packages developed at a number of centres prominent in the field of computer sound synthesis and research, such as the Computer Audio Research Laboratory (CARL) of the University of California, San Diego (UCSD), and the Massachusetts Institute of Technology.
That was in 1986. Since then CDP membership has grown substantially, with members as far away as America and Japan. Members are not only 'users' - many are playing a significant role in the development of new software and synthesis techniques. The majority of CDP members are associated one way or another with Universities and Colleges, whilst others (such as myself) are independent composers who simply relished the chance to explore the synthesis power that the CDP system offers. There are some who do not own a system but subscribe anyway, out of interest. The software catalogue is continually expanding, not only through CDP members' own efforts, but also through the acquisition of new programs from outside. Some may be long-established 'classics', and others may reflect state-of-the-art developments - read on for details!
The only item of hardware that CDP builds and markets itself is the SoundSTreamer interface, which enables two-way communication between a Sony PCM AD/DA convertor and a hard disk, via the ST. This simple description conceals some very clever engineering and programming designed to get over the fact that the Atari's cartridge port is theoretically 'read only'. It would be the archetypal 'black box', sporting no more than an on/off switch, a red light and a couple of sockets, were it not actually a rather homely shade of white.
A video cassette recorder, either Betamax or VHS format, connected to the PCM can be used for long-term archiving, or indeed as a CD quality digital recorder. The PCM requires a special interface board for this connection, which can be supplied and fitted by the York University Electronics Centre. The Sony PCM F1 is too small, and is not recommended, but any of the 501, 601 or 701 models will do very nicely. All are available quite cheaply these days. More recently, a further interface has been developed to allow DAT recorders to be connected to the SoundSTreamer. This is a more expensive option; the DAT recorder must have an AES/EBU interface, so the cheap Casio models cannot be used. CDP software now supports both 44.1kHz and 48kHz sample rates for PCM and DAT respectively, with toggles for 22.05kHz and 24kHz rates also.
CDP are hard at work adding compatibility with the MIDI File Standard, which will enable soundfiles (samples) to be transferred directly to a sampler and back. At present, I simply connect the PCM analogue output to the audio input of my Ensoniq EPS sampler, and I have been well pleased with the results, which are certainly no worse than copying from a CD.
As one might expect, hard disks need to be large. My 80Mb SCSI drive is, according to CDP, the smallest for serious work. 'Serious work' is here understood as research, and the creation of pieces of computer music. If your sole intention is to create relatively short sounds to use on a sampler, a lower capacity may well be perfectly adequate, and if you are really strapped for cash, you can use a RAM disk to store soundfiles before dumping to tape. My own ambitions embrace both applications, and my chosen combination of the CDP system and the EPS sampler constitutes, for me, a pretty powerful example of the best of both worlds.
On studying the CDP price list (see panel), you will probably be impressed by the low cost of the software by comparison with 'commercial' software. Much of it has been supplied by institutions virtually as public domain, and it is CDP's policy to make all software available at the lowest possible prices, allowing for any royalty payments that may be due. The one exception to this is MIDIGrid, which is priced as a commercial product.
Supplied with the SoundSTreamer is the Sound Filing System (SFSYS), a set of General Utilities, and the Graphic Desktop. SFSYS is, in fact, more than just a filing system - it is also a memory resident function library. With a 'C' compiler (the 'house' standard is Lattice, but CDP can also provide the public domain Sozobon compiler) these library functions, and their associated CDP 'C' header files, enable you to read from and write to the soundfile directory and thus write your own signal processing programs, or even a whole new synthesis program. CDP actively encourages individual projects such as these, and can even give you access to CDP source code if you sign a non-disclosure agreement.
CDP have recently issued a major upgrade, and as I write the new system manuals are still being finalised, a task complicated by the fact that they have to cover existing as well as new owners. To a novice user, the process of partitioning a hard disk for the GEM and soundfile partitions may seem daunting, but in fact the process is very straightforward. If you have ordered your hard disk through CDP, as I did, they can partition it to your requirements so that all you have to do is connect up and switch on. CDP's experience is mainly of Quantum drives running Supracorp software, and they do warn of possible 'unforeseen complications' if different drives are used. That said, CDP members have between them a wealth of experience in the use of different hard disks, so plenty of advice is available.
Once you have booted up your computer with a correctly partitioned hard disk, you are presented with the familiar GEM desktop, with icons for the GEM hard disk partitions. The soundfile partition is invisible to GEM and occupies the remainder of the disk, and is accessed solely through SFSYS-based commands.
The majority of the CDP programs are text-based. You type in commands, filenames and parameters, you write function files (see below), or score and orchestra files for CSOUND. The flexibility of the system makes it possible to access any program or file from any directory or subdirectory, on any drive. The command line interpreter (Command.Tos) in the Utilities folder, in effect a traditional pre-WIMP Disk Filing System, includes a comprehensive set of commands to change directories and to set pathnames, so that from the most remote subdirectory you can access all the programs and files you need simply by typing in their names. This 'glass teletype' environment may well seem old-fashioned by comparison with GEM, but is much quicker and certainly more convenient for 'serious work'.
A particularly useful facility is the ability to run a batch file. This is a user-written text file listing a series of commands, with the appropriate parameters and filenames; typing the name of the file (which must be given a '.bat' suffix) causes the commands in the file to be executed automatically in sequence. As the manual points out, this is useful for getting the computer to compose while you sleep - handy when some processes can take hours to complete.
The Graphic Desktop is essentially a graphic 'front end' to Command.Tos, complete with drop-down menus - although programs still have to be called by typing their name, together with input and output filenames and other parameters as each program requires. One advantage of the graphic desktop is the fact that command lines (which can often be very long - too long for an ordinary GEM window to accommodate) are preserved between calls. This saves a lot of typing when only a few characters need to be changed. When you want to record sound from the PCM unit, or review a number of soundfiles, the graphic desktop is undoubtedly more convenient than Command.Tos, and includes a number of extra facilities to make things easier.
The Play/Record screen (Figure 1) is a straightforward representation of a digital recorder, enabling start and end times for the soundfile to be set. Soundfiles can also be set to repeat several times - I have found this facility particularly useful when copying onto my EPS sampler, as the soundfile will repeat automatically while I set the record level and other functions on the EPS. Soundfiles can also be deleted from within the Play/Record screen. Altering the high/low sample rate or mono/stereo toggles plays soundfiles up or down an octave: a stereo soundfile played back in mono mode, for example, is not mixed down to mono; its interleaved samples are played in sequence, which drops the pitch by an octave.
A noteworthy feature is the length allowed for soundfile names. Soundfile subdirectories are represented by the use of long segmented names rather like standard pathnames. The segments are known as 'prefixes'; by presetting one or more prefixes with the command 'cdsf' (change soundfile directory), the menu can be directed to display only those soundfiles collected under the same prefix. This is clearly very handy when a hundred or more soundfiles may have been accumulated, whereas you may only be concerned with a dozen, say, at any given time.
ViewSF (Figure 2) displays a selected soundfile in resolutions ranging from one sample per pixel to 128 samples per pixel. At the highest resolution, you can use a cursor to find the value of any sample, and mark up to five pairs of edit points; these points are saved to a text file which can be used by the program's Cut function. ViewSF is designed for use with single channel soundfiles; if you are working with a stereo soundfile, the two channels are interleaved into a single display. If I want to cut out a section of a long soundfile, I use the Play/Record window to play it, altering the start and end times until the desired section is isolated. These times are then used on the 'cut' command line. The new soundfile can then be viewed and further detailed trimming done if necessary.
It is possible to copy soundfiles to the GEM partition of the hard disk, and vice versa - any GEM file, even a text file, can be copied to the soundfile partition. The system appends a default header, which can be altered later on. There is also a function in the 'Gemfiles' menu to append one file to another. It follows that it should be possible to exchange soundfiles between the CDP system and commercial synthesis/sample editing programs, but as I don't own any of the latter (too expensive!) I have not been able to confirm this.
Adsyn Draw is an uncomplicated yet powerful additive synthesis program, called up directly from the desktop 'Options' menu. It allows both amplitude and frequency envelopes for up to 64 partials to be drawn using the mouse (see Figure 3), with up to 50 breakpoints per envelope. Partials can be harmonic or inharmonic (ie. stretched or squeezed), or can all centre on the same frequency for elaborate detuning effects. The soundfile can be of any length up to the available capacity of the hard disk.
When the envelopes have been defined, clicking on the appropriate menu option causes the program to write an ASCII report file containing the breakpoint data from which the soundfile is finally created. The report file can be edited like any standard text file; more importantly, two or more files can be merged before compilation into a soundfile, thus overcoming the 64 partial limit.
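At heart, this kind of additive synthesis is just linear interpolation over breakpoints plus a sum of sinusoids. The Python sketch below shows the principle; the function names and data layout are my own invention, not CDP code, and a real implementation would of course be far more efficient.

```python
import math

def env(breakpoints, t):
    """Linearly interpolate an envelope value at time t from a list of
    (time, value) breakpoints, as drawn in an Adsyn-style envelope."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

def additive(partials, duration, sr=44100):
    """Sum a list of partials, each a pair (amp_env, freq_env) of
    breakpoint lists; returns one mono list of samples."""
    out = []
    phases = [0.0] * len(partials)
    for n in range(int(duration * sr)):
        t = n / sr
        s = 0.0
        for i, (amp_env, freq_env) in enumerate(partials):
            # advance each partial's phase by its current frequency
            phases[i] += 2 * math.pi * env(freq_env, t) / sr
            s += env(amp_env, t) * math.sin(phases[i])
        out.append(s)
    return out
```

Merging two Adsyn report files, as described above, simply concatenates two such lists of partials before the final rendering pass.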
Additive synthesis, by its very nature, involves the generation of large amounts of data. However, it is important to remember that with the open-ended structure of the CDP system, you can choose whatever synthesis technique is most suited to your purpose. Thus Adsyn Draw may be valuable for the creation of one component of a sound which draws on any number of different synthesis, mixing, editing and signal processing techniques.
So, let us suppose that we have created a number of soundfiles using Adsyn Draw, and have also recorded some sounds through the PCM - what can we do with them?
The whimsically titled Groucho suite (see panel) contains a wide and expanding range of programs for processing soundfiles. All editing is non-destructive, simply because there is no way you can 'alter' a soundfile as such - all programs create a new soundfile, and leave the original unmodified. It would take far too long to describe them all, and in any case the operation of many of them is self-explanatory, so I will concentrate on those which display features of particular interest.
'Function files' generally consist of time/value breakpoints (as many as you like) which control the principal parameter in the program. They are written using a text editor (such as microEmacs, or First Word Plus in ASCII mode), and primitive though they may appear to GEM fans, there are distinct advantages to this text-based approach (as opposed to any system based around a graphic interface). Firstly, problems arising from inadequate screen resolution (ie. in capturing extremes of time and frequency changes) are avoided. Secondly, and more importantly, the function files can contain arithmetic expressions as well as plain numeric data. Moreover, it is possible to use conditional expressions and macro definitions for constants and variables, as in a 'C' program, together with comments; the file is then passed through a standard 'C' preprocessor (included in the Utilities folder) which expands all the macros into the full form required by the program. It is therefore possible to create a set of related function files simply by redefining a macro definition or altering a conditional expression. The writing of a function file can thus be a significant part of the composition process.
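The idea of a function file whose values may be arithmetic expressions can be sketched in Python; the file layout shown here is a guess at the general shape (time and value per line), not CDP's actual syntax, and `eval` merely stands in for the C preprocessor pass described above.

```python
import math

# A hypothetical function file after preprocessing: each line holds
# "time value", where the value field may be an arithmetic expression.
FUNCTION_FILE = """\
0.0   440
1.0   440 * 2 ** (7 / 12)
2.0   440 * 2
"""

def parse_function_file(text):
    """Turn a text function file into (time, value) breakpoints,
    evaluating any arithmetic expression in the value field.
    eval() is used purely for illustration here."""
    points = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith('#'):
            continue                      # skip blanks and comments
        t, expr = line.split(None, 1)
        points.append((float(t), float(eval(expr, {"math": math}))))
    return points
```

The advantage claimed in the article falls out immediately: the second breakpoint is written as a musical interval (a fifth above 440Hz) rather than a pre-calculated number.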
Spect performs an FFT analysis of a soundfile. In its own way it is very versatile, generating numeric analysis data in a variety of formats. However, it is very old-fashioned - there are no 3D high resolution 'mountain' displays, only a crude 'glass teletype' column display, which is nevertheless useful for general checks on a soundfile or on filter performance. The numeric data can be redirected to a printer if necessary. A high resolution version is promised as part of a major graphics upgrade planned for this year.
Ftrans works like a varispeed tape recorder, albeit over the whole audio range (within the limitations of the sampling rate). Both portamento and stepped frequency changes can be specified. Without a function file it will apply a very precise transposition to the whole soundfile. The transposition factor can be entered either as a decimal ratio (eg. 0.5 to drop an octave) or in semitones and cents.
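The semitone/cent option is just the standard equal-tempered conversion to a speed ratio. A one-line Python sketch (my own helper, not the Ftrans code):

```python
def transpose_ratio(semitones, cents=0):
    """Convert a transposition in semitones and cents to the
    playback-speed ratio a varispeed-style process applies:
    one octave (12 semitones) doubles or halves the speed."""
    return 2 ** ((semitones + cents / 100) / 12)
```

So a drop of an octave gives the 0.5 ratio quoted above, and a rise of seven semitones gives roughly 1.498.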
Pan is notable for the fact that you can specify pan positions beyond the physical positions of the speakers - the signal level is reduced according to the 'inverse square' law. The manual points out that "the illusion is not complete, as no Doppler shift or filtering is used" - illustrating the importance that is attached to the movement of sounds in space, rather than simple stereo placement.
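One plausible reading of this scheme is sketched below: a conventional constant-power pan between the speakers, with an overall attenuation applied once the position moves beyond them. The constant-power law and the exact form of the attenuation are my assumptions; the article says only that the level falls off with an 'inverse square' law.

```python
import math

def pan_gains(position):
    """Return (left_gain, right_gain) for a pan position, with the
    speakers at -1 and +1. Inside that span a constant-power law is
    assumed; beyond it the level is attenuated by 1/d^2, following
    the 'inverse square' description. The real CDP law may differ."""
    p = max(-1.0, min(1.0, position))      # clamp to the speaker span
    theta = (p + 1) * math.pi / 4          # map -1..+1 to 0..pi/2
    left, right = math.cos(theta), math.sin(theta)
    d = abs(position)
    if d > 1.0:
        att = 1.0 / (d * d)                # inverse-square attenuation
        left, right = left * att, right * att
    return left, right
```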
Mixsf allows you to mix soundfiles. The function file (called a 'mixfile' in this case) lists the names of all files to be mixed, together with start times, number of channels, level and position in the stereo mix. In most other programs the function file is optional, but not for Mixsf. If one is not specified, the program prompts for parameters which will apply to the whole soundfile.
If you are creating sounds for use in samplers, you will naturally use the sampler's own looping facilities. The Groucho program loop is not conceived in terms of keyboard sampling, but as a creative tool in its own right. It creates a basic crossfade loop, with variable 'splice window' length, loop start position and loop length. However, two unusual facilities take this program into the realms of magic. 'Loopstep' tells the program to increment the starting point of the loop each time: if the increment is shorter than the length of the loop itself, the source file will be stretched as the loop progresses along it; if it is longer than the loop length, the file will be shrunk.
The 'Searchfield' parameter defines a length of soundfile, ahead of the starting point, within which random positions will be chosen for the extraction of segments of the length specified by the looplength parameter. You can also specify a loopstep parameter. The length of the splice window can now be used to determine how hard the attack of each segment is. This is such a wonderful idea (called 'brassage') that I cannot imagine why commercial samplers do not include it. The effect does of course depend, among other things, on the nature of the source soundfile - when applied to a speech sample, the result is a rhythmical glossolalia that could prove even more commercial than Paul Hardcastle's 'N-n-n-n-nineteen', in the right hands.
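The brassage idea described above can be sketched in a few lines of Python: pick segments at random from a searchfield that steps along the source, and splice them together with short crossfades. The parameter names follow the article, but the detailed behaviour of the real Groucho loop program is certain to differ.

```python
import random

def brassage(src, seg_len, splice, searchfield, loopstep, out_len, seed=0):
    """Minimal brassage sketch: extract seg_len-sample segments from
    random positions within a searchfield that advances by loopstep
    samples, overlapping them with linear splices of splice samples."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    pos, write = 0, 0
    while (write + seg_len <= out_len
           and pos + searchfield + seg_len <= len(src)):
        start = pos + rng.randrange(searchfield)  # random pick in the field
        seg = src[start:start + seg_len]
        for i, s in enumerate(seg):
            # ramp in and out over the splice window
            fade = min(1.0, i / splice, (seg_len - 1 - i) / splice)
            out[write + i] += s * fade
        write += seg_len - splice                 # overlap successive segments
        pos += loopstep                           # advance the searchfield
    return out
```

A loopstep shorter than the segment length stretches the source, exactly as described for the loop program itself.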
All of the filter programs in Groucho can go well beyond the one- to four-pole types available on most current instruments. The faster programs, such as fstatvar, can develop marked resonance peaks if pushed (maximum 'Q' is a mind-boggling 10,000), but lphp is impressively clean and free of undesired resonances. This is a 'no compromise' algorithm - a four-pole filter takes about 90 seconds to process one second of a soundfile. The powerful Fltbank program can take appreciably longer than this, if a large number of filters is specified. By contrast, the more simple eq program takes a blistering nine seconds to process one second of soundfile, and is more than adequate for simple tone control type operations.
One of the main reasons why some programs can take so long to carry out their calculations is that the processing uses high precision floating point techniques. This is to ensure that there is no loss of dynamic range, or signal degradation due to quantisation effects, however much sounds are mixed; only when processing is completed is the data converted to integer format. Soundfiles can in fact be stored in either integer ('shorts') or floating point ('floatsams') format. The advantage of the latter is that the consequences of integer numerical overflow can be avoided, and sounds can be rescaled before conversion to 'shorts'.
Fltbank can be used with or without a function file - in this case called a 'config. file'. The use of a file enables unevenly spaced filter tunings to be specified, otherwise the program will simply ask for the number of filters you want and the frequencies of the lowest and highest.
Allpass is something of an 'odd one out' program. An all-pass filter affects only the phase response of a signal, not the frequency response. Together with the comb filter, it is one of the primary building blocks of digital reverb algorithms, and even on its own it can produce useful reverberant effects. Comb filter effects can be achieved with the delay program. Both programs will need to be run several times to achieve specific reverb effects, though once you have settled on a particular combination it can be specified in a batch file, which can then be run as a simple command.
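The classic form of this building block is the Schroeder all-pass: a feedforward and feedback pair around a single delay line. A minimal Python sketch (the real Allpass program's parameters may be arranged differently):

```python
def allpass(x, delay, g):
    """Schroeder all-pass filter:
        y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]
    The magnitude response is flat; only the phase is smeared,
    which is why chains of these underpin digital reverbs."""
    y = []
    for n, xn in enumerate(x):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y.append(-g * xn + xd + g * yd)
    return y
```

Fed with a single impulse, the output is an exponentially decaying train of echoes spaced delay samples apart, which is what gives the 'useful reverberant effects' mentioned above.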
The delay program itself is fairly basic in comparison with commercial digital units (there is no modulation of delay time, for example), and works well enough, but the current version has a strange bug which causes certain delay times to be half as long as they should! CDP are aware of this and a 'fix' should be available shortly. The maximum delay time is determined by your computer's available memory - a good three seconds is available on an Atari 1040ST at the lower sample rate.
Incidentally, some readers may have been wondering why the set of programs is called Groucho. Well, the logic runs like this: from UCSD we get CARL; from CARL we get Marx; from Marx we get Groucho! I wonder what we could get from 'IRCAM'...
The phase vocoder program offers powerful (but time-consuming!) facilities for analysis/resynthesis and signal modification. It takes continuous Fast Fourier Transforms (FFTs) and stores the resulting data in a large analysis file on the soundfile partition. This file can be processed in various ways to create a new analysis file, which is then passed to the synthesis option of the phase vocoder. The process takes roughly an hour per second of soundfile, so it's worth running the program overnight (or whenever it is you sleep). The phase vocoder itself includes three facilities for signal transformation - timestretching, filtering and spectrum warping.
Timestretching involves the separation of time information from pitch information, enabling a sample to be stretched without altering the pitch, or transposed without altering the length. The FFT method is used in the phase vocoder because, as the manual (written by Trevor Wishart) says, "although these effects are more rapidly achieved with a harmoniser type of program for small degrees of stretching or compression, the phase vocoder retains much greater fidelity to the original source."
Any timestretching technique has to arbitrate between precision in time and precision in frequency. In the context of the FFT, this means that for accurate frequency resolution a long sample window must be analysed, but for fine temporal resolution a short window should be used. The phase vocoder resolves this dilemma not only by allowing different window lengths but also by overlapping the windows. The built-in default overlap seems to be one eighth of a window, giving an 'analysis rate' of over 340Hz for a window of 1024 samples. The acid test is a soundfile containing non-harmonic sounds and transient detail; all I can say is that, so far, I have failed to find a sound which it cannot handle accurately.
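The figure quoted above is easy to check: with windows advancing by one eighth of their length, the frame (analysis) rate is simply the sample rate divided by that hop.

```python
def analysis_rate(sr, window, overlap_factor=8):
    """Frame rate of an overlapped FFT analysis: successive windows
    advance by window/overlap_factor samples (the 'hop'), so frames
    arrive at sr / hop per second."""
    hop = window // overlap_factor
    return sr / hop
```

At 44.1kHz with a 1024-sample window the hop is 128 samples, giving 344.5 frames per second - the 'over 340Hz' quoted above.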
Filtering is achieved by eliminating analysis channels (representing frequency components) from the resynthesis. Components below or above specified thresholds can be cut - therefore the phase vocoder can be used as an extremely powerful low pass or high-pass filter. A second filtering option cuts odd or even channels only, and is useful for certain types of timbral mixing.
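Stripped of the phase vocoder's overlapped windowing, channel elimination is just this: transform, zero the unwanted bins (and their mirror images), and resynthesise. A brute-force single-frame Python sketch, using a naive DFT so it stays self-contained:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for short examples)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def spectral_lowpass(x, keep_bins):
    """Zero every analysis channel above keep_bins (and its mirror
    image) before resynthesis - channel elimination as a low-pass
    filter, minus the phase vocoder's overlapped windowing."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        if min(k, N - k) > keep_bins:   # keep DC and bins up to keep_bins
            X[k] = 0j
    return idft(X)
```

Cutting odd or even channels instead is the same loop with a different condition on k.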
Spectrum warping is a facility by which the spectral profile of the original sound can be applied to the transposed sound. The purpose of this is to preserve the pitch of the spectral formants which, for example, give a vowel sound its particular identity. Thus a vocal sample can be transposed (within reason, of course) without suffering the dreaded 'munchkinisation' or chipmunk effect. It will be interesting to see how long it is before a commercial sampler offers this answer to many a samplist's prayers.
Additional processing facilities are provided by a suite of spectral manipulation programs which are supplied with the phase vocoder. These were written by Trevor Wishart while working at IRCAM on his tape piece 'VOX-5', and permit you to split, shift or stretch the spectrum as a whole or in part. This can be done in a variety of ways, and again, you can retain the original formant shape if you wish. A particular application cited by Wishart (which I have yet to explore myself) involves stretching the spectrum of a vocal sound so that it takes on the inharmonic character of a bell. The program vocinte performs a continuous spectral interpolation between two source sounds - a trick used in 'VOX-5' to transform a vocal 'zzz' into a swarm of bees.
CSOUND is but the latest in an illustrious series of music synthesis languages developed by the computer synthesis pioneers Max Matthews (Bell Laboratories) and, later, Barry Vercoe (MIT). The latter was responsible for the single most popular and enduring language, MUSIC 11, written in 1973 for the PDP-11 computer. CSOUND is a new version written in the 'C' language, originally designed to run under UNIX on VAX minicomputers. To have it available on a humble micro such as the Atari ST is a remarkable achievement for the Composer's Desktop Project; it is also a tremendous opportunity for composers for whom a session at Paris' famous music research centre, IRCAM, must remain an idle dream.
CSOUND splits the synthesis process into two stages. The first entails the writing of an 'orchestra file' (see Figure 4a) containing descriptions of 'instruments' - these are in fact algorithms constructed using a range of basic building blocks, such as oscillators, filters and envelopes, collectively known as 'Unit Generators'. Each unit generator has a number of parameters defining such things as amplitude, frequency and the number of a wavetable. These parameters fall into three groups, according to how frequently they may change during the course of a note: 'i' variables are fixed (at 'initialisation time'); 'k' variables (such as envelope parameters) change at the 'control rate'; 'a' variables change at the audio rate. This distinction serves as a convenience to the composer, and to save unnecessary calculations. It is possible to make the control rate equal the audio sample rate so that everything can modulate everything else!
A 'score file' is associated with the orchestra file (see Figure 4b) which specifies not only the parameters for each 'note', but also the waveforms, transfer functions, and other function tables required by the instruments. The score file calls a library of 'GEN' function drawing programs; some use mathematical constructs (eg. polynomials for waveshaping or sums of sine waves for additive synthesis), others use breakpoint data, and one reads from an external soundfile, thus enabling sampled waveforms to be used by CSOUND instruments.
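The two central ideas here - GEN function tables and table-lookup oscillators - can be mimicked in a few lines of Python. This is only a sketch of the concepts in CSOUND's spirit, not its actual code or syntax:

```python
import math

def gen10(size, *amps):
    """In the spirit of CSOUND's GEN10: build a wavetable as a sum
    of harmonic sine partials with the given relative amplitudes."""
    return [sum(a * math.sin(2 * math.pi * (h + 1) * i / size)
                for h, a in enumerate(amps))
            for i in range(size)]

def oscil(amp, freq, table, dur, sr=44100):
    """Table-lookup oscillator: an a-rate signal read from a
    wavetable by a phase that advances freq*size/sr per sample."""
    size = len(table)
    out, phase = [], 0.0
    for _ in range(int(dur * sr)):
        out.append(amp * table[int(phase) % size])
        phase += freq * size / sr
    return out
```

An 'instrument' in the orchestra file is essentially a network of such unit generators, with the score supplying amp, freq and the table for each note.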
Mono or stereo soundfiles can also be read directly into an instrument, using the 'soundin' unit generator. This uses no parameters other than the file number, so direct amplitude and frequency modulation is not possible, but the full range of signal processing, arithmetical manipulation and mixing facilities can be used.
The most promising newcomer is FOF synthesis. 'FOF' stands for 'Forme d'Onde Formantique' - or, to give it its full English title, Time Domain Formant Wave Function Synthesis. A recent product of research at IRCAM by Xavier Rodet and others, and added to CSOUND by Michael Clarke of Huddersfield Polytechnic, FOF can achieve stunningly realistic imitations of the human singing voice, though it is capable of a much broader application. It is an example of a new generation of synthesis techniques drawing on physical rather than purely mathematical models - it is closely related to the technique of modelling of a vocal or instrumental sound by an impulse generator feeding a number of parallel filters, each tuned to a particular formant region. The important point is that the formant frequencies are independent of the fundamental frequency.
Each FOF unit generator synthesizes a formant region directly, by calculating what the output of such a formant filter would be. A great deal of independent control is possible over such things as the waveform and fundamental frequency of the impulse generator, and the bandwidth and centre frequency of the formant. The fundamental and formant frequencies can both be modulated at audio rates. The impulse waveform is taken from a function table, which can be as simple or as complex as you like. A formant region is created for each partial in the wavetable, so rich sounds can be created using just one FOF generator (though with some loss of control). This could be compared to the difference between FM using sine waves or more complex waveforms.
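A single FOF grain is, in essence, a sinusoid at the formant centre frequency shaped by a smooth attack and an exponential decay whose rate sets the formant bandwidth. The Python sketch below shows one such grain; the exact envelope shapes used by the IRCAM/CSOUND implementation differ in detail, and the parameter names are mine.

```python
import math

def fof_grain(formant_freq, bandwidth, rise, dur, sr=44100):
    """One formant-wave-function grain: a sine at the formant centre
    frequency, with a raised-cosine attack of 'rise' seconds and an
    exponential decay (rate ~ pi * bandwidth) that broadens the
    spectral peak as the bandwidth grows."""
    n_rise = int(rise * sr)
    out = []
    for n in range(int(dur * sr)):
        t = n / sr
        env = math.exp(-math.pi * bandwidth * t)   # decay sets bandwidth
        if n < n_rise:                             # smooth attack
            env *= 0.5 * (1 - math.cos(math.pi * n / n_rise))
        out.append(env * math.sin(2 * math.pi * formant_freq * t))
    return out
```

A FOF voice fires one such grain per period of the fundamental, overlapping them; that is why the formant frequency stays put while the fundamental moves.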
FOF has been hailed as the richest, most 'un-electronic' synthesis technique yet developed. It may be some time before it is available in a real-time instrument - it is computationally very demanding - but many musicians have, in fact, been using it indirectly for some time. Ensoniq have been using it since the ESQ1 to create preset wavetables, and other less candid manufacturers are surely doing likewise. That is despite the fact that the particular advantages of FOF are lost as soon as the waveform is transposed. The really interesting question is - what implementation are they using? An IRCAM program, or their own design? The commercial impact of FOF synthesis will be at least as strong as that of FM, so the competition to produce the first mass market version must be intense.
Closely related to FOF synthesis is the slightly older Granular Synthesis. Inspired by the principles of quantum physics, it builds a sound by the dense accumulation of very short (a few milliseconds) 'grains' of sound, a grain corresponding to the notion of the smallest acoustic event or 'quantum' that can be distinguished by the ear. It is particularly suited to the creation of streaming, textured sounds with a great deal of internal activity, and has been compared to modern computer graphic techniques for the generation of water, smoke and fire effects - the acoustic grain corresponding in this sense to the graphic pixel. As an analysis technique, it offers another method for timestretching, with unique possibilities for transforming the source sound. The tireless Richard Orton has provided a short CDP program (complete with 'C' source code) to generate the required thousands of CSOUND note commands, using Adsyn Draw to create the grains.
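The granular principle can likewise be sketched very simply: scatter short windowed sine grains at random onsets and frequencies into an output buffer. This is a toy illustration of the idea, not Orton's CDP program, and the parameters are my own.

```python
import math
import random

def granular(grain_dur, density, total_dur, freq_range, sr=44100, seed=0):
    """Minimal granular texture: 'density' Hann-windowed sine grains
    per second, each grain_dur seconds long, scattered at random
    onsets with frequencies drawn uniformly from freq_range."""
    rng = random.Random(seed)
    out = [0.0] * int(total_dur * sr)
    glen = int(grain_dur * sr)
    for _ in range(int(density * total_dur)):
        onset = rng.randrange(len(out) - glen)
        freq = rng.uniform(*freq_range)
        for i in range(glen):
            win = 0.5 * (1 - math.cos(2 * math.pi * i / glen))  # Hann window
            out[onset + i] += win * math.sin(2 * math.pi * freq * i / sr)
    return out
```

In the CSOUND approach described above, each grain would instead be one score 'note' - hence the thousands of note commands the helper program generates.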
CSOUND also contains unit generators for Linear Predictive Coding, a technique used widely in, but by no means limited to, speech synthesis. Like the phase vocoder, it is an analysis/resynthesis technique. Analysis of a source soundfile (by a separate CDP program) creates a set of information 'frames' containing 'predicted' filter (formant) coefficients which, applied (in the case of speech synthesis) to a band-limited pulse or white noise will recreate the original sound. More important for composers are the transformation techniques, which include timestretching, formant shifting and cross-synthesis.
It is impossible to sum up a system which is growing so rapidly. For providing raw material for samplers, the CDP system is clearly in a class of its own. It also offers unique resources to composers of ambient or transformational 'soundscapes' - which is what a great deal of 'serious' computer music is anyway. The time taken to calculate certain sounds will be a major drawback to many people, but as synthesis software packages (such as Digidesign's Turbosynth) become more popular, the advice 'it's worth the wait' will be heard more and more often. Steve Howell [SOS July 1989] has already said just that with reference to the Akai S1000's timestretching facility - it remains to be seen how long it takes for Akai to match the power of CDP's phase vocoder! Personally, I would rather use these techniques now, however time-consuming, than wait (for how long?) for them to appear on commercial instruments.
The limitations of the system are really those of the knowledge you don't have. The documentation is currently under revision, and is promised to include much more tutorial information; but inevitably, due to the sheer breadth of the subject, it will still make fairly generous assumptions about the level of users' knowledge. Therefore, the more background reading you can do the better. The CDP's current yearbook includes a comprehensive bibliography; together with the quarterly newsletters it is the principal medium for tutorial support. The sheer power of the CDP system sold it to me almost despite the documentation - though I did also have the benefit of a live demonstration. The best part was the sound of an Inter-City 125 leaving (I think) York station. Actually, it only started off as a train - after a while it was something else...
CDP, (Contact Details).