If computer music matters so much to us today, it is because it has gradually created tools that are radically changing the way we think about music. Its history, however, is short. It merges with the development of digital technologies: first computers, accompanied by the symbolic languages created for programming, then a whole host of inventions in digital technology. Fairly early in its history, computing proved mature enough to accommodate concerns of every kind, from accounting to scientific research, passing naturally through what interests us here: artistic creation.
It is doubtless here that we must distinguish what arises from computing itself from what belongs to the broader world of digital technology. Music draws its new resources amply from both fields. But since the realm of sound has now been converted to digital audio, the distinction is essential. Computer music was born from the meeting of musical concerns with the environment created by digital technologies and the specificity of the computer, on the one hand, and with the scientific disciplines that inform its research topics, on the other. While musical composition figures prominently, practically every other musical activity is represented as well. And musical research partly covers ground cleared by computer science, acoustics, signal processing, even cognitive psychology: computer music thus stands at the crossroads of several musical, scientific and technical fields.
But it is the recourse to the specific contributions of computer science that characterizes its approach. New conceptual tools are constantly supplied by artificial intelligence, embodied in languages such as Lisp or Prolog. They are put at once at the service of the musicologist or of computer-assisted composition. Research on real-time systems and interactive interfaces makes it possible to conceive new connections between the instrumentalist and the electronic universe.
The great stages of computer music
At the origin of computer music we find two types of activity, independent of one another. If these activities persist today, it is in a manner rather different from the original vision that gave rise to them. These two activities are musical composition and sound production. In both cases, the computer is responsible for producing the desired result, and the two are roughly contemporary. The first serious experiments in computer composition date from 1956: it was then that Lejaren Hiller computed a score using rules encoded as algorithms on the Illiac I computer at the University of Illinois. The result was the Illiac Suite for String Quartet, three movements of which were performed that year by the WQXR string quartet. In a famous book published in 1959 under the title "Experimental Music: Composition with an Electronic Computer", Hiller explains in detail the procedures he applied to the Illiac computer in order to produce the score of his string quartet.
To situate the period: it was also in 1956 that John McCarthy coined the term "artificial intelligence". One year later, Max Mathews, a researcher at Bell Telephone Laboratories in New Jersey, wrote the first digital sound-synthesis program, for the IBM 704 computer. Known today as Music I, it is the first of a great family of acoustic compilers; with it, a psychologist, Newman Guttman, generated a first study lasting 15 seconds, In the Silver Scale. It was in 1957 that the four movements of the Illiac Suite for String Quartet were published; the same year saw the birth of the primitive version of the famous programming language FORTRAN (FORmula TRANslator). Note that for the performance of Hiller's work by the WQXR quartet, it was Max Mathews who organized a recording, later issued by Bell Laboratories in 1960 on a disc entitled Music from Mathematics: even if the paths traced by these two pioneers are independent, one cannot say that they never crossed…
From these two nearly contemporary events, development continued, gradually, along the directions thus traced: composition and sound production. We shall follow their courses below. But a third path was not long in appearing, born of the same observation Hiller had made: the computer was above all, at that time, a formidable calculating machine. Indeed, before such machines appeared, the English word "computer" designated the employees charged with carrying out calculations. At the same time, and with a touch of fear, people readily spoke of "electronic brains". An artist could not approach the computer without a certain emotion, which no doubt explains the sometimes terrifying attraction that computing would exert on the artists of the following decades. But it was two scientists who were at the origin of these experiments: Hiller was trained as a chemist, while Mathews was already a famous researcher. This doubtless explains the remarkable methodologies they put in place, each on his own side, and with entirely independent aims.
At Bell Laboratories, Max Mathews wrote in 1957 the first digital sound-synthesis program for the IBM 704, a machine equipped with 4096 words of memory. Known today as Music I, it is the first of a great family. The Music III program (1960) introduced the concept of the modular instrument. The model Mathews imagined is inspired more by laboratory equipment or an electronic music studio than by acoustic instrument-making. The program offers a range of independent modules (unit generators), each in charge of an elementary function: an oscillator with programmable waveform, a signal adder, a multiplier, envelope and random-signal generators, and so on. The musician builds an "instrument" by interconnecting a selection of modules. The signals produced by the oscillators or generators are routed to other modules to be modified or mixed. Several instruments can be combined within an "orchestra", each instrument keeping its own identity. Contrary to what happens in the material world, there is no limit to the number of modules usable simultaneously, except perhaps the memory of the computer. The result of running the instrument is the progressive calculation of the sound as a sequence of numbers which, placed end to end, represent a complex sound wave. These numbers are called "samples". Today, the number of samples representing one second of sound is fixed at 44,100 per channel for consumer applications, and at 48,000 for the professional field.
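The unit-generator model can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Music III's actual code or module set: each function plays the role of a unit generator, and an "instrument" is simply a chain of them.

```python
import math

SAMPLE_RATE = 44100  # the consumer-audio rate mentioned above

def oscillator(freq, duration, sample_rate=SAMPLE_RATE):
    """Unit generator: a sine oscillator producing a list of samples."""
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

def envelope(samples, attack=0.01, release=0.05, sample_rate=SAMPLE_RATE):
    """Unit generator: a linear attack/release envelope applied to a signal."""
    n = len(samples)
    a, r = int(attack * sample_rate), int(release * sample_rate)
    out = []
    for i, s in enumerate(samples):
        gain = 1.0
        if i < a:
            gain = i / a
        if i >= n - r:
            gain = min(gain, (n - i) / r)
        out.append(s * gain)
    return out

def mix(*signals):
    """Unit generator: an adder combining several signals."""
    n = max(len(s) for s in signals)
    return [sum(s[i] for s in signals if i < len(s)) for i in range(n)]

# An "instrument" is a patch: oscillator -> envelope. Two instruments
# sounding together form a tiny "orchestra".
note_a = envelope(oscillator(440.0, 0.5))
note_e = envelope(oscillator(659.3, 0.5))
wave = mix(note_a, note_e)
print(len(wave))  # 22050 samples = half a second at 44.1 kHz
```

As in Mathews's model, nothing limits the number of generators except memory: adding a third or tenth oscillator is just another call to `mix`.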
Because of the relative slowness of the machines and the heaviness of the computations involved, the time needed to generate the sound wave is considerably longer than the duration of the sounds; such programs are said to operate in "deferred time". Originally, the sound waves computed in digital form were stored on digital tape as they emerged from the sample-computing arithmetic unit. This mode of sound production is called "direct synthesis". A "sound file" is thus created; once it is complete, the musician calls on a second program, charged with reading the sound file in real time and sending the samples to a digital-to-analog converter connected to an amplifier and loudspeakers.
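Direct synthesis in deferred time can be illustrated with Python's standard `wave` module: all the samples are computed first, then stored in a sound file that a separate player program would later feed to a converter. The tone and file name here are purely illustrative.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# Deferred time: compute every sample of a one-second 440 Hz tone first...
freq, duration = 440.0, 1.0
samples = [math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
           for i in range(int(duration * SAMPLE_RATE))]

# ...then write the finished "sound file" to disk.
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    f.writeframes(frames)

# A second program would read tone.wav in real time and send the samples
# to a digital-to-analog converter driving an amplifier and loudspeakers.
```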
To activate the orchestra, the musician must write a "score", in which all the parameters required by the modules of the instrument are specified. This score takes the form of a list of numbers or telegraphic codes, each "note" or event being the subject of a particular list. These lists are ordered in time.
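Such a score can be pictured as ordered lists of numbers. The field layout below is hypothetical, chosen only to illustrate the idea of one parameter list per event; it does not reproduce the actual Music-family score syntax.

```python
# Hypothetical layout: each event is one list of parameters:
# [start_time_s, instrument, duration_s, frequency_hz, amplitude]
score = [
    [0.0, "ins1", 0.5, 440.0, 0.8],
    [0.5, "ins1", 0.5, 494.0, 0.7],
    [1.0, "ins1", 1.0, 523.3, 0.9],
]

# The lists must be ordered in time before the orchestra reads them.
score.sort(key=lambda event: event[0])

total_duration = max(start + dur for start, _, dur, _, _ in score)
print(total_duration)  # 2.0 seconds
```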
But specifying every parameter is a laborious task, all the more so as musicians are not trained to assign measured values to the sonic dimensions they handle. To overcome this obstacle, languages to assist score writing were devised; the best known is Leland Smith's Score program (1972). Score is not an automatic composition program: it makes it possible to specify parameters using terms drawn from musical practice (pitches, dynamics, durations), to compute changes of tempo or dynamics automatically, and even to fill in sections with notes following a trajectory given by the composer.
The instrument-score model was firmly established with the arrival of Music IV (1962). This program gave rise to many variants, some of which still exist today. Among these descendants, let us mention Music 4BF (1966-67), of which a Macintosh version exists today (Music 4C, 1989), and especially Barry Vercoe's Music 360 (1968); this descendant of Music IV has the particularity of taking the form of a true programming language, which doubtless explains why it has become, with C-Music, the most widely used of the acoustic compilers. It was first adapted to Digital's PDP-11 minicomputer in 1973; then, entirely rewritten in the C language in 1985, it took the name Csound and was quickly ported to all kinds of computing platforms, including microcomputers such as the Atari, the Macintosh and the IBM PC. In 1969 came Music V, a program designed to facilitate the musical programming of instruments and scores; Music V is still widely used today, generally in the form of Richard Moore's adaptation, C-Music (1980).
The computer also achieved unquestionable success in a highly speculative field, musicological analysis. In the eyes of the interested public of the early Sixties, computing, still rather mysterious and inaccessible, opened the way to strange musical work: in composition, in musicology and finally, confined to Bell Laboratories, in sound production. A great musical upheaval of that decade was to come from the world of electronics, with the appearance in 1964 of modular synthesizers, called "analog" since they contain no digital electronics. Conceived independently by Paolo Ketoff (Rome), Robert Moog and Donald Buchla (United States), synthesizers answered the technological aspirations of many musicians, especially after the popular success of Walter Carlos's record Switched-On Bach, which truly made these instruments known to a wide audience. During this time, Mathews's program was adapted at other sites, such as the universities of New York, Princeton and Stanford.
Another application of the computer appeared with the piloting of analog instruments. The machine generates slowly varying signals that modify the settings of studio devices: oscillator frequencies, amplifier gains, filter cutoff frequencies. The first example of this approach, called "hybrid synthesis", was installed in 1970 at the Elektronmusikstudion of Stockholm, an independent foundation since 1969, financed by the Royal Academy of Music and placed under the direction of Knut Wiggen. There, a PDP-15/40 computer controlled twenty-four frequency generators, a white-noise generator, two third-octave filter banks, and ring modulators, amplitude modulators and reverberators. The originality of the Stockholm system lay in an extremely ergonomic operator console, on which the composer could specify synthesis parameters by sweeping a panel of figures with a metal stylus. Another studio worth citing is Peter Zinovieff's in London (1969), placed under the control of a Digital PDP-8 minicomputer, for which Peter Grogono wrote the control language Musys.
Another remarkable achievement is the Groove system (Generated Real-time Output Operations on Voltage-controlled Equipment, ca. 1969), conceived at Bell Laboratories by Max Mathews and Richard Moore. Groove is an instrument intended for the control of the interpretation parameters of a synthesis device. In this sense, it places the musician closer to the position of a conductor than of a composer or an instrumentalist, although one might consider that the electronic music composer must often take the conductor's position, directly interpreting the music being composed.
It is the mid-Seventies that mark the transition toward an inexorable widening of the reach of computer music, with the appearance of the microprocessor. A digital lutherie would gradually become possible with the design of complete computers on a single integrated circuit: microprocessors. The interface with the user also had to improve, and punch cards had to be replaced by a more interactive mode of input: the keyboard and the cathode-ray screen would win out.
The principle of hybrid synthesis continued to be applied throughout the Seventies, before being definitively supplanted by digital synthesizers at the dawn of the Eighties. In 1971 the American company Intel marketed the first microprocessor, the 4004 circuit, which permitted the design of genuine miniature computers: the Intellec 8 (built around the 8008 microprocessor of 1972), the Apple I, the Altair (1975), soon gathered under the name of microcomputers.
The musical experiment of the Groupe Art et Informatique de Vincennes (GAIV) illustrates this time of transition well. This team, founded at the University of Paris 8 by Patrick Greussay with a group of artists and architects, and known for publishing a bulletin disseminating the most recent research in art and computing, entrusted the musical coordination of its activities to the composer Giuseppe Englert. An Intellec 8, a microcomputer with eight-bit words driven by paper tape and a keyboard, was used for compositional activities and for research on musical formalization; English EMS VCS3 synthesizers were controlled by the microcomputer, via digital-to-analog converters charged with delivering control voltages in exchange for the binary data computed by interactive programs.
The second effect of the microcomputer's arrival was the design of "mixed synthesis": digital synthesizers, genuine computers dedicated to computing the sound wave in real time, placed under the control of a host computer. From the second half of the Seventies, several achievements of this type appeared; we may mention the work of James Beauchamp, Jean-François Allouis and William Buxton, among others, as well as Peter Samson's synthesizer (built by Systems Concepts for Stanford University's research center, CCRMA); the Synclavier of New England Digital Corporation, designed by Syd Alonso and Cameron Jones at the instigation of the composer Jon Appleton; the bank of oscillators designed in Naples, at the instigation of Luciano Berio, by Giuseppe di Giugno, who continued his work at Ircam (the 4A, 4B, 4C and 4X series) under the direction of Pierre Boulez; and, more recently, the Fly 30 of the Centro Ricerche Musicali in Rome. Note that with Ircam's 4X (1980), the term "synthesizer" disappears, replaced by "digital signal processor", which shifts the emphasis to the general-purpose nature of the machine.
The electronic instrument industry was not long in adapting to these developments. The first stage consisted in introducing microprocessors inside analog synthesizers (the Prophet synthesizers of the firm Sequential Circuits), charged with controlling the voltage-controlled modules; this is still "hybrid synthesis". The second stage soon followed: designing genuine, entirely digital musical instruments. This was the much-noticed arrival of the Synclavier II, then of the Fairlight.
The industrial field today consists first of the market for synthesizers and sound processors, and of the software that makes it possible to exploit them. Today, all synthesizers are digital and necessarily conform to the MIDI standard. The field of synthesizers is twofold: on the one hand, devices, often fitted with a keyboard, which offer a choice of preprogrammed sounds whose parameters can be varied through an elementary programming process; on the other, machines intended to reproduce sounds previously recorded and memorized, or stored in mass storage: samplers.
It should be noted that all these technologies are becoming accessible to the individual musician, within the framework of what is commonly called the "home studio".
But these machines, and a fortiori the home studio, do not function without suitable software: sequencers control the performance of a piece directly from a computer, while sound editors are intended for the processing, assembly and mixing of sound sequences. Other programs make it possible to write a score, a practice now common in music publishing. Finally, the machines can also be placed under the control of composition-assistance programs.
The most original feature of contemporary digital lutherie is the "workstation". To design a workstation is to gather programs of various kinds, intended for the analysis or synthesis of sound, the control of sound, or composition. These programs are integrated within a computing "environment" organized around a computer and its peripherals, intended for processing sound on line. This is the case of the plug-in boards which, coupled with software, make it possible to read "sound files" stored on disk in response to a command coming, for example, from a MIDI source. This approach, so new that it has not yet found a true name, is generally referred to as "direct-to-disk".
Musical representation
Since the computer, unlike the electronic music studio, demands a specification of the data, and thus a form of writing, the question of musical representation is a constant concern of the field. We shall consider two answers. The first illustrates an a priori compositional approach: that of Xenakis. The second, more general, is the MIDI standard.
Iannis Xenakis innovated with the design of the UPIC (Unité Polyagogique Informatique du CEMAMu). Conceived in the mid-Seventies, this system arose naturally from this composer's approach to sound synthesis: within the team he had assembled, initially named EMAMu (Équipe de Mathématique et d'Automatique Musicales, 1966), and with financing from the Gulbenkian Foundation, Xenakis had had a high-quality digital-to-analog converter built. The UPIC offers a complete composition environment whose end result is the sound synthesis of the composed page of music. Renamed CEMAMu (Centre de Mathématique et Automatique Musicales) in 1971, when premises were created to house its research, the team gathered around Xenakis designed a system allowing the composer to draw "time-pitch arcs" on a large architect's table, choosing for each arc a temporal trajectory, a waveform and a dynamic. The music is thus first represented in graphic form. The programs of the first UPIC were written for a Solar 16/65 minicomputer, connected to two magnetic-tape drives storing the programs and samples, a digital-to-analog converter, and a cathode-ray screen making it possible to display the waveforms and also to draw them with a graphic pen. To hear the page just drawn, the composer had to wait until the computer had finished calculating all the samples; sound generation was provided by the high-quality digital-to-analog converter. More recently, the UPIC has been redesigned for microcomputer, and runs without delay.
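The idea of a drawn "time-pitch arc" rendered into samples can be sketched as follows. This is only an illustration of the principle: the linear glide and the sine waveform stand in for the temporal trajectory and waveform the composer would actually draw on the UPIC table.

```python
import math

SAMPLE_RATE = 44100

def render_arc(t0, t1, pitch0, pitch1, amp=0.5, sample_rate=SAMPLE_RATE):
    """Render one time-pitch arc: the frequency glides linearly from
    pitch0 to pitch1 (Hz) between times t0 and t1 (seconds)."""
    n = int((t1 - t0) * sample_rate)
    samples, phase = [], 0.0
    for i in range(n):
        freq = pitch0 + (pitch1 - pitch0) * i / n   # linear trajectory
        phase += 2 * math.pi * freq / sample_rate   # integrate the phase
        samples.append(amp * math.sin(phase))       # sine stands in for the drawn waveform
    return samples

# A "page" of two arcs: a rising glide followed by a falling one.
page = render_arc(0.0, 1.0, 220.0, 440.0) + render_arc(1.0, 2.0, 440.0, 110.0)
print(len(page))  # 88200 samples: two seconds of audio
```

As with the original system, the page is only audible once every sample has been computed.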
Representing sound as a modifiable image is also the goal of the Phonogramme program, designed at the University of Paris 8 by Vincent Lesbros. In the manner of a sonogram, the program displays a spectral analysis as a drawing, which can be modified; the new representation can then be synthesized, either via MIDI or as a sound file, or even converted into a MIDI file.
One often hears the reproach today that the generation of young musicians who approach technology through the environment created around the MIDI standard is not satisfactorily aware of the past of computer music and its problems. But this is to forget that, in a sense, the birth of the MIDI standard occurred without any true filiation with the preceding stages of the field that came to be called computer music. The MIDI phenomenon is in no way an offshoot of that field.
The MIDI standard was developed in 1983 to allow several synthesizers to be piloted from a single keyboard; the messages are transmitted in digital form, according to a well-defined protocol. From the outset, MIDI is thus based on instrumental gesture: it is a way of representing not the sound, but the gesture of the musician playing a MIDI instrument. In 1983 appeared the first synthesizer fitted with a MIDI interface, the Prophet 600 from Sequential Circuits. What had not been foreseen, on the other hand, was the success this standard would quickly achieve: today it is used to interconnect all the machines of an electronic music studio, and even the lighting rigs of a stage.
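The gesture-based nature of MIDI is visible in its message format. Per the MIDI 1.0 specification, a Note On message is three bytes describing which key was struck, on which channel, and how hard; nothing in it describes the resulting sound.

```python
def note_on(channel, key, velocity):
    """Build a MIDI Note On message: status byte (0x90 | channel),
    then key number and velocity (each 0-127)."""
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, key, velocity])

def note_off(channel, key):
    """Note Off: status byte 0x80 | channel; release velocity set to 0 here."""
    assert 0 <= channel <= 15 and 0 <= key <= 127
    return bytes([0x80 | channel, key, 0])

# Middle C (key 60) struck fairly hard on channel 1 (index 0):
msg = note_on(0, 60, 100)
print(msg.hex())  # "903c64"
```

The receiving synthesizer is free to render this gesture with any timbre at all, which is precisely why MIDI represents the performance rather than the sound.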
The work undertaken from 1956 by Lejaren Hiller for the composition of the Illiac Suite for String Quartet marks at once the true birth of computer music and the anchoring of this field in research, applied in this case to automatic composition. The computer appeared then as a machine capable of handling the complex chains of operations that characterize the composition of ambitious musical works. This path was reinforced from 1958 by the French composer Pierre Barbaud, who founded in Paris the Groupe Algorithmique in association with the company Bull-General Electric and began his research on automatic composition; by the following year, Barbaud's first algorithmic work had been composed:
Imprévisibles nouveautés (Algorithme 1), with the collaboration of Pierre Blanchard. The Musicomp program of Lejaren Hiller and Robert Baker, from the same period, written for the Illiac computer after the composition of the Illiac Suite, made the University of Illinois one of the centers of computer music at that time. And when, in 1962, Iannis Xenakis created ST/10-1, 080262, a work written thanks to the stochastic program ST developed from 1958 on an IBM 7090 computer, composition by computer entered its golden age. In the Netherlands, Gottfried Michael Koenig wrote the composition program Project 1 in 1964, soon followed by Project 2 (1970). The computer-assisted composition of this period rests on mathematics and stochastics, drawing heavily on the resources of Markov processes (Hiller, Barbaud, Xenakis, Chadabe, Manoury).
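The Markov-process approach to stochastic composition can be sketched in a few lines: the next pitch is drawn according to probabilities conditioned on the current pitch. The transition table below is invented for illustration and does not reproduce the rules actually used by Hiller, Barbaud or Xenakis.

```python
import random

# Hypothetical first-order Markov transition table:
# current pitch -> list of (next pitch, probability).
TRANSITIONS = {
    "C": [("D", 0.5), ("E", 0.3), ("G", 0.2)],
    "D": [("C", 0.4), ("E", 0.6)],
    "E": [("D", 0.3), ("G", 0.7)],
    "G": [("C", 0.8), ("E", 0.2)],
}

def generate(start, length, seed=None):
    """Generate a melody of `length` pitches by walking the Markov chain."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(rng.choices(choices, weights=weights)[0])
    return melody

print(generate("C", 8, seed=1))
```

A fixed seed makes the output reproducible; changing it yields a different realization of the same stochastic process, which is the essence of the method.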
With the arrival of microcomputers a new tendency developed: assistance with composition, then computer-aided composition. The demiurgic program capable of generating a whole composition gave way to the model of an environment of software tools, each charged with addressing a precise musical problem. Among the principal ones, let us mention HMSL (Hierarchical Music Specification Language, 1985) at Mills College in California; Formes, created by Xavier Rodet; Esquisse and PatchWork, developed at Ircam at the instigation of Jean-Baptiste Barrière; and Experiments in Musical Intelligence by David Cope, at the University of California, Santa Cruz. These programs are open: they dialogue with the composer interactively, and they connect to the universe of MIDI devices. Except for M and Jam Factory, by Joel Chadabe and David Zicarelli, they are built with non-numerical languages drawn from the field of artificial intelligence, such as Forth and especially Lisp, which explains why they rest not on mathematics, as was the case for the first generation of computer-assisted composition, but on formal languages and generative grammars.
Real time: the computer and the instrumental universe
The Eighties saw the use of the computer develop in the concert situation; thanks to the arrival of real-time digital synthesizers or, more generally, digital sound processors, and of real-time control languages, the conditions were ripe for revisiting that older territory of twentieth-century music: live electronic music. In most cases, the first task is to devise a means of connecting the computer and its computing power to devices for synthesizing or processing sound, with, if possible, the interaction of musicians. Répons (1981), by Pierre Boulez, by integrating the processing procedures into the writing itself, showed how the computer had become an instrument, fully integrated into the orchestra. In the wake of this work appeared research on the computer's tracking of the instrumentalist's playing, an operation known as "score following". Let us mention the contributions of Roger Dannenberg on automatic accompaniment and on languages providing the conditions for computer-instrument communication; those of Max Mathews, first with the Groove system, then more recently with his work on the "Radio Drum" and the simulation of the conductor's baton; and those of Miller Puckette with the Max program.
This is why there has also been interest in giving orchestral instruments this capacity, by fitting them with sensors that allow the computer to follow the performance (flute, vibraphone, etc.). The whole musical industry is concerned with this tendency, although the process to be used has not yet been settled: will it be electromechanical (physical sensors placed at strategic points on the instrument, conductive membranes, etc.), or will it be necessary to resort to on-the-fly analysis of the sounds themselves to determine their pitch, spectral structure and mode of playing?
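The second option, on-the-fly analysis of the sound itself, can be sketched with a simple autocorrelation pitch estimator; real score-following systems are considerably more elaborate, but the principle of finding the lag at which a signal best resembles a shifted copy of itself is the same.

```python
import math

SAMPLE_RATE = 44100

def estimate_pitch(samples, sample_rate=SAMPLE_RATE, fmin=80.0, fmax=1000.0):
    """Estimate the pitch of a signal by autocorrelation: test every lag
    in the plausible range and keep the one with the highest correlation."""
    lo = int(sample_rate / fmax)   # smallest lag to test
    hi = int(sample_rate / fmin)   # largest lag to test
    best_lag, best_score = lo, float("-inf")
    for lag in range(lo, hi + 1):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# A 220 Hz test tone: the estimate should land very close to 220 Hz.
tone = [math.sin(2 * math.pi * 220.0 * i / SAMPLE_RATE) for i in range(2048)]
print(estimate_pitch(tone))
```

The integer-lag grid limits precision (here to about half a hertz around 220 Hz), which is one reason practical systems refine the estimate by interpolation.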
The community organizes itself
The maturing of computer music was accompanied by the community of musicians and researchers taking charge of itself. Gradually, an awareness of belonging to a field emerged. International congresses appeared, followed later by local conferences. The communications presented there are published in proceedings available to the whole community. These meetings also give rise to concert presentations, which tend to weld even more strongly the consciousness of a new field, with both scientific and artistic components. This was the beginning of the International Computer Music Conferences (ICMC). In 1978, an organization was born to facilitate communication and to assist in the running of the congresses, the Computer Music Association, which in 1991 became the International Computer Music Association (ICMA). The organizers seek to hold the congress one year in North America, and the following year on another continent. These congresses have seen the ICMA take a growing role in assisting the local organizers and in distributing the publications resulting from these meetings, going as far as commissioning works to be performed during the ICMC (ICMA Commission Awards, 1991).
Another vehicle welding the sense of belonging to a common field is the Computer Music Journal. Launched in California in 1977, it was taken over by MIT Press from volume 4 (1980). The journal has established itself as the reference among the field's scientific publications. The ICMA publishes a newsletter, Array, which has become a much-appreciated organ of information and discussion on current topics in computer music. The Dutch review Interface, which in 1994 became the Journal of New Music Research, regularly publishes articles on computer music. In Canada, Musicworks, guided by Gayle Young, provides information on a broad range of concerns of the new musics. In France, the Ircam publications InHarmoniques, then Les Cahiers de l'Ircam, open their columns to the aesthetic, theoretical and critical considerations that accompany the irruption of new technologies into the arts.
In 1991 the journal Leonardo, published by the International Society for the Arts, Sciences and Technology, founded in 1968 by Frank Malina, launched, under the direction of Roger Malina, the Leonardo Music Journal, which brings a full vision of musical practice related to new technologies, thanks in part to the compact disc published with each issue. More theoretical, the review Languages of Design (http://www.linereview.com), edited by Raymond Lauzzana, is interested in formalization in artistic procedures and grants a broad place to computer music. To these traditional supports of information is added direct communication between musicians and researchers by means of computer networks, allowing instantaneous electronic mail. Finally, the need for faster communication has given birth to electronic journals, distributed over networks such as the Internet; freed from the constraints of production, printing and routing, they allow the same kind of access to information as the databases which, likewise, are multiplying in computer music.