A Brief History of Computer Music (1994)

“I dream of instruments obedient to my thought and which with their contribution of a whole new world of unsuspected sounds, will lend themselves to the exigencies of my inner rhythm.” (Hansen, 316) When Edgard Varese spoke these words in 1937, he had no idea that he was presaging one of the greatest movements in modern musical history: computer music. While the importance of taped and analogue electronic music is not to be underestimated, the invention and dissemination of the digital system (whether in microcomputer, instrument, or hybrid form) is the greatest single step in the realization of Varese’s dream. If only he had lived to see that dream come to fruition! Like many composers throughout music history, Varese had grown weary of the tonal, textural, and timbral limitations of acoustic instruments. Even the acoustic music purists yearned for the ability to realize a work immediately: before the advent of electronic instruments, the only way to hear a multi-instrument piece was to have an orchestra close at hand, a costly and inefficient solution available only to the very wealthy or the very famous. These capabilities (and many more that were never considered possible) are now available to almost any composer with modest funds and space. Of course, the mostly digital system of today has been a long time in the making.

In the half century prior to 1945 (a date recognized by most modern music scholars as the beginning of the electronic era) there seemed to be a lull in the development of new musical instruments. “Innovations in new musical technology, especially the creation of new instruments…have been a normal feature of Western musical history, moving hand in hand with the expansion of compositional resources. Such parallel development is understandable, since extensions in musical language often require new instruments for adequate realization…” (Morgan, 461). While extensions in musical language had flourished through the end of the 19th century and the beginning of the 20th (Romanticism, Impressionism, Expressionism, chromaticism, atonality, serialism, and chance composition all being excellent examples), the development of new instruments seemed to come to a standstill. That is not to say that new instruments were not invented during this period, but none had the capabilities necessary to complement the new compositional developments. For instance, there is a surprising entry in the records of the United States patent office dated 1897. The patent, registered in the name of Thaddeus Cahill, describes an “electrically based sound generation system, subsequently known as his Dynamophone or Telharmonium, the first fully developed model being presented to the public in 1906…” (Manning, 1). When the proportions of the device are considered, 200 tons in weight and nearly 60 feet in length, it is easy to see why this invention passed into obscurity. It is slightly more difficult to understand the enigmatic demise of such devices as Lev Termen’s Theremin (1924), capable of performing a continuous range of pitches by altering the frequency of an electronic oscillator; Maurice Martenot’s ondes martenot (1928), also capable of a continuous frequency range, but with improved pitch control and greater timbral variety; and the Trautonium (1930), a keyboard-controlled instrument. These instruments were all capable of creating sounds that were previously unimaginable. Why, then, did these first electronic inventions fail? For two reasons. First, all of these “instruments were quite primitive in both construction and sound producing capacity. Moreover there was as yet no efficient means for storing, transforming, and combining sounds.” (Morgan, 462) More importantly, the music world was not quite ready for instruments that so radically changed the traditional concept of the musical instrument. Even the groundbreaking composers remained committed to the equal-tempered Western pitch system and the traditional musical instruments. “While the above devices concern the electronic synthesis of sound, other pre-1945 activities focused on the electronic manipulation of sounds already extant.” (Schwartz, 109) In the 1930s the electric phonograph was frequently employed for compositional and performance purposes. Several important composers took advantage of this new medium: Paul Hindemith, Ernst Toch, and Darius Milhaud were some of the earliest experimenters. All three used variable-speed turntables capable of creating distortion and remarkable collage effects. Varese used the turntable in a series of very noisy compositions of great originality, but stopped composing around this time, no longer interested in seeking new sounds from conventional instruments. In John Cage’s Imaginary Landscape No. 1 (1939), a variety of turntable speeds were used to manipulate laboratory test signals, and the results were mixed with muted piano and cymbal for live radio broadcast. The work of these composers and their use of nontraditional instrumentation was beginning to gain widespread acceptance. More importantly, these early attempts at composition using electromechanical means paved the way for the electronic advances that followed.

Although the major advancements in electronic and computer music occurred primarily in the United States, the first steps were taken almost simultaneously in France and Germany. The analogue tape recorder had been perfected by this time, and it played a substantial part in the creation and distribution of music from this point forward. In 1948 a young engineer for French National Radio, Pierre Schaeffer, began producing taped recordings of natural sounds: locomotives, wind, thunder, and a variety of others. More importantly, the sounds were transformed in several ways. “The transformations included editing out portions of the sound, varying the playback speed, playing the sounds backward (tape reversal), and combining different sounds (overdubbing).” (Morgan, 463) Schaeffer’s first performance of this work, in Paris in October of 1948, was significant as one of the first public presentations of music performed entirely without live musicians. He called the music musique concrete, because the sounds were concrete, sonorous objects that could be plastically manipulated, and not “abstract.” The West German Radio Corporation in Cologne was the site of similar experiments at roughly the same time. The works composed there by Herbert Eimert and Werner Meyer-Eppler had a decidedly experimental slant. The composers were less interested in creating atmospheric sounds (like Schaeffer) and more concerned with sounds created in the studio itself. Several extremely important new sounds were recorded, the first being a simple sine tone, free of overtones, produced by an electronic oscillator. “The Cologne studio also had noise generators (capable of producing a thick band of frequencies within a given range), ring modulators, filters, and reverberators. The ring modulators allowed one tone to modulate the amplitude of another, producing complex sidebands (sum and difference tones), and the resulting sonority could then be filtered to control timbres.” (Schwartz, 113) Here were the true beginnings of electronic music! The verification of the importance of these new discoveries is provided by Karlheinz Stockhausen. After a year-long visit to Paris, where he worked with Schaeffer in the French studio, Stockhausen returned to Cologne. The works he composed during this period, Elektronische Studien I and II (1953 and 1954), were the first uses of these technologies in compositions of a more systematic, intellectual approach. (Studie II was, in fact, the first electronic composition to be formally notated.) The upper section of the score for Studie II (see figure 1), calibrated from 100 to 17,200 Hz, refers to pitch and timbre. The individual pitches used in the composition are chosen from a scale of 81 steps with a constant interval ratio of the twenty-fifth root of 5 (about 1.066), and 193 mixtures are constructed from them. The heavy horizontal lines indicate the high and low frequencies of the first sound mixture, to which another overlapping mixture is added. The two horizontal lines in the middle of the page indicate the duration of the sounds in terms of centimeters of tape moving at a specified speed. The triangular shapes at the bottom indicate volume in decibels. Not long after the publication of this work the term musique concrete was absorbed into the more popular term “electronic music,” and by the late 1950s it had fallen from popular usage, probably owing to the rapidly increasing popularity of all things electronic.
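The numbers in this description are easy to check, and the ring-modulation principle quoted above is a one-line trigonometric identity, sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)], which is why only sum and difference tones survive. The Python sketch below (illustrative only; the 440 Hz and 110 Hz input tones are arbitrary choices, not values from the Cologne studio) verifies both the span of Stockhausen’s 81-step scale and the sidebands a ring modulator produces.

```python
import numpy as np

# Studie II scale: 81 steps at a constant ratio of 5**(1/25), starting at 100 Hz.
# The top step lands near the score's 17,200 Hz upper calibration.
ratio = 5 ** (1 / 25)
scale = [100 * ratio ** k for k in range(81)]
print(f"step 1: {scale[0]:.0f} Hz, step 81: {scale[-1]:.0f} Hz")  # ~100 Hz ... ~17,247 Hz

# Ring modulation: multiplying two sine tones leaves only the sum and
# difference frequencies; both original tones are suppressed.
sr = 44100
t = np.arange(sr) / sr                      # one second of samples
f1, f2 = 440.0, 110.0                       # arbitrary input tones for illustration
ring = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

# The spectrum peaks at 330 Hz (440 - 110) and 550 Hz (440 + 110).
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(ring), 1 / sr)
print("sideband peaks near:", freqs[spectrum > spectrum.max() * 0.5].round())
```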
The recognition gained by the works of Stockhausen, as well as interest in the instruments he had exploited in their production, had a profound influence on worldwide research combining music and technology. Edgard Varese, excited by the prospect of new sonic potential, accepted an invitation from Schaeffer to come to Paris and resume composing, primarily because no comparable facilities existed in America. The results of this foray were not wholly satisfactory, a combination perhaps of three factors: the relatively short period spent in preparation, the limitations of the equipment, and the immense practical problems which confront any composer encountering a complex studio for the first time. (Manning, 92) Nevertheless, Varese produced Deserts, and in doing so attracted a considerable amount of attention to electronic music. The first major American presentation of Varese’s new work took place on 30 November 1955, at Town Hall, New York. “This work could not have occurred at a more appropriate time, for the interest of institutions in supporting electronic music was just being kindled.” (Manning, 93) The Rockefeller Foundation paved the way for an increase in research by funding an investigation into the state of studio facilities in the US and overseas. The investigators, Otto Luening and Vladimir Ussachevsky, found that the studios abroad were well advanced; in sharp contrast, very limited progress had been made in America during this time. They did find that several American institutions were attempting to use computers for musical composition, but with only moderate success. The most important research of this period (unbeknownst to the electronic musicians of the time) was a project that had begun at Bell Telephone Laboratories in New Jersey, one that would lead to the first digital synthesis of sound. Nevertheless, the investigators were disturbed by the lack of research possibilities in America. This lack of promise drove Luening and Ussachevsky to take the initiative and approach the authorities of Columbia University with plans for an electronic music research facility. Their proposal met with a favorable response and a grant was awarded. Soon thereafter, upon completion of their worldwide survey, the two received a $175,000 grant that established the Columbia-Princeton Electronic Music Center. At its heart was the room-sized programmable synthesizer built by the Radio Corporation of America: RCA’s Mark I had appeared in 1955, and its improved successor, the Mark II, was installed at the new center in 1959.

Figure 1 – An Excerpt from the Score for Stockhausen’s Studie II

Research in the US escalated and a variety of institutions (mostly academic) took an active part in computer music research. One of the first composers to take advantage of the new RCA equipment was Milton Babbitt. Babbitt’s first electronic work, Composition for Synthesizer (1961), was the fruit of a seemingly effortless transition from his strictly ordered style of instrumental writing to an electronic equivalent. (Manning, 113) These machines eliminated not only the need for tape manipulation but also the laborious task of interconnecting numerous electronic components: “For Babbitt, the RCA synthesizer was a dream come true for three reasons. First, the ability to pinpoint and control every musical element precisely. Second, the time needed to realize his elaborate serial structures was brought within practical reach. Third, the question was no longer ‘What are the limits of the human performer?’ but rather ‘What are the limits of human hearing?’” (Schwartz, 124) The publicity accorded to the Columbia-Princeton facility by the presence of Babbitt and numerous others (Ussachevsky, Luening, El-Dabh, and Berio, to name a few) increased the interest in electronic music even further. The introduction of the voltage-controlled synthesizer in the mid-1960s was the result of research at numerous academic institutions and commercial firms. Dr. Robert Moog (Moog), probably the best-known synthesizer developer, Donald Buchla (Buchla), Paul Ketoff (SynKet), and the ARP Corporation all introduced voltage-controlled, often keyboard-oriented synthesizers at around the same time. The commercial potential of these small, portable, simplified systems was tremendous.
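“Voltage control” here means something quite specific: a control voltage sets an oscillator’s pitch through an exponential converter. In the convention Moog popularized, each additional volt raises the pitch by one octave. A minimal sketch of that mapping follows; the 0 V = 440 Hz reference point is an arbitrary assumption for illustration, not a fixed standard.

```python
# Voltage-controlled oscillator pitch, Moog-style one-volt-per-octave convention:
# each additional volt doubles the frequency. The 0 V = 440 Hz reference is an
# arbitrary choice for this sketch.
def cv_to_freq(volts: float, ref_hz: float = 440.0) -> float:
    return ref_hz * 2.0 ** volts

for v in (0.0, 1.0, 2.0, -1.0, 1 / 12):   # 1/12 V = one equal-tempered semitone
    print(f"{v:+.3f} V -> {cv_to_freq(v):8.2f} Hz")
```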

As the 1960s came to a close, a number of key composers did much to further the cause of electronic music. The electronic instruments of this time were monophonic; that is, they could produce only a single melodic line at a time, which had to be recorded and combined with other recorded lines to make music. In this manner, complex textures could be obtained by layering several different melodic or rhythmic lines. (Hansen, 362) Varese, Stockhausen, and Babbitt remained prominent in the field, and a variety of newcomers to electronic techniques, Cage, Subotnick, and especially Carlos, who made the term “synthesizer” a household word, helped to augment the expanding library of electronic works. Morton Subotnick’s work was the first to take full advantage of the capabilities of these smaller synthesizers. Silver Apples of the Moon (1967) was the first newly composed work relying on voltage control to gain widespread attention, and the first such work intended specifically for recording. (Schwartz, 126) The piece was commissioned by Nonesuch Records and realized on a Buchla synthesizer. It was followed by several other recorded works, including The Wild Bull (1968) and Touch (1969). John Cage used the new instruments in his legendary collaborations with the dancer and choreographer Merce Cunningham throughout the 1960s. Wendy Carlos provided the most dramatic impetus to public acceptance with the phenomenal success of Switched-On Bach (1968), a commercial recording featuring virtuosic arrangements for Moog synthesizer of compositions by J.S. Bach. (Morgan, 470) Even during this time of great compositional resourcefulness, many composers longed for further developments. The primary concern of this contingent was an artificiality of sound that was difficult to avoid with analogue instruments. Fortunately, looking just a few years ahead, it was possible to see an almost separate revolution in the making: the birth of the digital system. “Although computer music was born in the 1950s, it was not until the mid-1970s that digital technology began to rival concrete techniques and voltage controlled synthesis in widespread usage. The decades that followed saw an exponential growth in computer science and an equally remarkable expansion of its musical applications.” (Schwartz, 135)

In order to understand the state of computers and digital systems in the 1970s, it is first necessary to look back to the 1950s. In 1957 Max Mathews, then an engineer at the Bell Telephone Laboratories in New Jersey, began experimenting with the computer to generate and manipulate sound. It was at this lab that the first computer program capable of sound generation was developed: Mathews’ MUSIC, which evolved over the following years into the widely used MUSIC4. By today’s standards, the mainframe computer on which Mathews’ program ran was big, awkward, slow, and extremely expensive to operate. In spite of all this, several composers were enthusiastic about the new development. James Tenney, Godfrey Winham, Hubert Howe, J.K. Randall, Gottfried Michael Koenig, and Barry Vercoe all worked with Mathews’ system. A few years later Howe, Randall, and Winham started a computer music facility at Princeton University using a modified version of MUSIC4 running on an IBM mainframe. Eventually Max Mathews left Bell Labs for Stanford. By the mid-1960s research had shifted from the large corporations to the major universities of America. Leading the way were Princeton and Stanford (of course), and the University of Illinois. At the same time that Max Mathews was experimenting with computer-generated sound, University of Illinois scientist and composer Lejaren Hiller was using the computer to a very different end. “…Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly.” (Schwartz, 347) Hiller’s first such piece was the Illiac Suite (1957) for string quartet. Although not entirely successful, the work served to further research in artificial intelligence and computer-assisted composition. “The foregoing would seem to imply that Europe played no important role in the first two decades of computer music, and indeed,…little happened there until 1969.” (Schwartz, 350) It was during this year that Frenchmen Jean-Claude Risset and Pierre Boulez joined together to battle the skepticism on that continent toward computer music. The French government eventually conceded, and by 1976 IRCAM was founded: “A vivid indication of the growing importance of computer technology in the field of contemporary music is provided by the Institut de Recherche et de Coordination Acoustique/Musique in Paris. …Under the general direction of Pierre Boulez and funded by the French government, IRCAM is a large and active research organization devoted to the scientific study of musical phenomena and to bringing together scientists and musicians to work on common interests.” (Morgan, 477) From then until the present, IRCAM has remained one of the most prestigious and richly endowed centers for computer music research and composition in the world, and one of the few not aligned with a university. (Schwartz, 350) Composers from all over the world have worked there, including John Chowning and Max Mathews. While research dealing with the computational and sound-generating capabilities of computers was booming, the development of instruments had taken a back seat.
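Hiller’s idea, generate material at random and keep only what satisfies stylistic rules, can be conveyed in a few lines. The sketch below is a deliberately simplified generate-and-test illustration, not a reconstruction of the actual Illiac Suite routines; the two rules shown (no leap larger than a fifth, stay within an octave span) are arbitrary stand-ins for Hiller’s counterpoint rules.

```python
import random

random.seed(7)  # reproducible illustration

# Generate-and-test composition in the spirit of Hiller's experiments:
# propose random notes, accept only those that pass simple "style" rules.
def acceptable(melody: list, candidate: int) -> bool:
    if melody and abs(candidate - melody[-1]) > 7:        # no leap beyond a fifth
        return False
    span = melody + [candidate]
    if max(span) - min(span) > 12:                        # keep within one octave
        return False
    return True

melody = []
while len(melody) < 16:
    note = random.randint(60, 84)                         # MIDI key numbers, C4..C6
    if acceptable(melody, note):
        melody.append(note)

print(melody)
```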

As the 1970s concluded and computers became smaller, faster, and cheaper, emphasis was redirected toward the development of instruments that utilized this improved technology. “The work of John Chowning at Stanford has proved particularly significant in this context.” (Manning, 223) During the late 1960s Chowning had been experimenting with frequency-modulated sound, applying to audible tones the same modulation technique that radio broadcasters use to transmit their signals (a minimal sketch of the method follows this paragraph). His discoveries were exciting but not commercially viable at the time. After evaluating his discovery and two of his subsequent compositions, the Stanford authorities, apparently more interested in commercially viable professors, turned down his application for tenure! The calculations required to perform FM synthesis were so complex that most US instrument manufacturers couldn’t understand the concept, much less see the viability of mass production. It wasn’t until the late 1970s that a firm was willing to commercially market FM. That firm was Yamaha. “Turning FM synthesis from a software algorithm that ran on mainframes into chips that powered a commercial synthesizer took seven years.” (Johnstone, 58) From the Yamaha point of view, the wait paid off. In 1983, Yamaha introduced the DX-7, the first commercially successful all-digital synthesizer. Priced under $2000, the keyboard was a huge success, selling more than 200,000 units, ten times more than any synthesizer before or since. Just prior to the release of the DX-7, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions to other instruments and to the increasingly prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). “This important communications standard, the result of an agreement reached by all major manufacturers in 1983, has made it possible to adopt a modular approach to the construction of comprehensive mixed digital systems, easily expanded to accommodate new developments.” (Manning, 257) The electronic composer was now provided with not only an inexpensive instrument, but also the capacity to control it. “This technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer (e.g., an Apple Macintosh) to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.” (Schwartz, 359) Thus, Varese’s dream of absolute control had finally been achieved: a single individual could now control an entire studio’s worth of gear without leaving his or her seat. Moreover, with easily acquired software, note sequences played on a keyboard or other MIDI-capable instrument could be digitally recorded and played back by the computer, then stored and randomly accessed at a later date. Numerous useful programs were soon developed for compositional purposes. The most important of these is probably MAX, an object-oriented music programming environment capable of a variety of “artificially intelligent” processes. It was developed at IRCAM specifically for the Apple Macintosh series of computers by programmer Miller Puckette and later refined by David Zicarelli. “Perhaps the only MIDI resource ideally suited to experimental composition, MAX allows even musicians with no programming expertise to create an infinite variety of custom made MIDI output devices and routines using simple on-screen graphic displays. One can continually invent and reinvent a studio full of phantom hardware at will, limited only by the composer’s imagination.” (Schwartz, 361)
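Chowning’s technique can be written in a single line: a carrier sine wave whose phase is modulated by a second sine wave, y(t) = sin(2π·fc·t + I·sin(2π·fm·t)), where the modulation index I determines how many sidebands (at fc ± k·fm) carry audible energy. The Python sketch below is a minimal single-operator illustration of that formula; the frequency and envelope values are arbitrary assumptions, and an actual DX-7 voice chains six such operators in configurable “algorithms.”

```python
import numpy as np

def fm_tone(fc: float, fm: float, index: float, dur: float = 1.0, sr: int = 44100):
    """Single-operator FM: y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t)).

    The modulation index controls sideband richness: sidebands appear at
    fc +/- k*fm, with significant energy out to roughly k = index + 1.
    """
    t = np.arange(int(dur * sr)) / sr
    # A decaying index gives the bright-attack/mellow-decay quality of many
    # classic FM patches (an arbitrary envelope choice for this sketch).
    envelope = np.exp(-3.0 * t)
    return np.sin(2 * np.pi * fc * t + index * envelope * np.sin(2 * np.pi * fm * t))

# A 2:1 modulator-to-carrier ratio yields odd harmonics only, a hollow,
# clarinet-like spectrum; integer ratios in general give harmonic tones.
tone = fm_tone(fc=220.0, fm=440.0, index=5.0)
```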

The final result of the new developments in computer music is the ability of the computer to participate in live performance. Electronic works could now be performed precisely in a live setting, and interaction with human performers was effortless. A new wave of compositions, written by composers most of whom grew up with some semblance of synthesizers and computers around them, utilizes MIDI and programs such as MAX to realize a vast array of new ideas. Furthermore, a variety of new instruments have been developed. Gary Nelson, codirector of the TIMARA (Technology in Music and the Related Arts) program at Oberlin College, developed the MIDI horn in 1985. This instrument, which made no sound of its own, was interfaced with a DX-7 and provided breath control of volume and a familiar key layout that almost any woodwind player could manipulate with a minimum of practice. In a slightly different vein, Dexter Morrill, a Colgate University professor of music, developed a computer program that could recognize the pitches of an acoustic instrument. His program (written in the artificial intelligence language LISP) could reroute incoming MIDI data to synthesizers and effects boxes, allowing a solo instrumentalist to sound like a complete orchestra (a byte-level sketch of this rerouting idea follows this paragraph). Morrill’s Sketches for Invisible Man (1989) employed the technique, allowing the performer to improvise while the computer provided intelligent accompaniment. Morton Subotnick quickly became one of the most avid proponents of real-time interactive composition. (Schwartz, 363) His work Hungers (1987) takes advantage of an entire studio’s worth of MIDI equipment, all of it controlled by a series of MIDI commands from a Macintosh computer, which in turn responds to live input from a MIDI keyboard. Another work, In Two Worlds (1987), was written for solo saxophone, orchestra, and WX-7 (Yamaha’s sax-like MIDI controller). Here the computer serves as the orchestra, reacting, just like a real one, to a MIDI baton wielded by the conductor. “Still in the forefront of computer music research, Max Mathews has updated his concept with his Radio Drum; stick movements are sensed by antennae and converted to MIDI signals, whose effects on electronic instruments are freely defined by the performer through the MAX program.” (Schwartz, 364) Another innovator in the area of MIDI control is Tod Machover. Machover first came to prominence as one of the leading IRCAM research directors of the late 1970s. The composer developed the “hyperinstrument” concept utilizing Macintosh computers and a variety of “data gloves.” These devices were worn on the hand of the conductor or the wrist of the performer(s) and could control synthesizers in a variety of ways. Machover’s work Begin Again Again… (1991) was written for cellist Yo-Yo Ma (wearing the wrist device) and an array of electronic gear. As the cellist performs his portion of the piece the entire system reacts, transforming his movements into a full orchestral accompaniment. Rounding out the lot of recent composers is the inimitable Pierre Boulez. Boulez is one of the few electronic musicians still attempting to combine live musicians with larger computer systems.
His work Repons was realized at IRCAM using their 4X synthesizer, a large computer, and Boulez’s Ensemble InterContemporain, “known for its astonishing virtuosity and precision under his direction.” (Schwartz, 365) During the performance of the piece the ensemble plays a variety of instruments whose sounds are routed through the computer, which responds by putting them through various transformations and playing them back.
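At the byte level, the MIDI messages these systems exchange are tiny: a note-on is one status byte (0x90 plus a channel number) followed by a key number and a velocity, each 0–127. A program like Morrill’s can “reroute” a performance simply by rewriting those bytes before forwarding them. The sketch below illustrates the idea with a hypothetical fan-out that sends each incoming note, transposed, to several synthesizer channels; the intervals and channel assignments are invented for illustration, and Morrill’s actual LISP program is not reconstructed here.

```python
# A MIDI note-on message is three bytes: status (0x90 | channel), key, velocity.

def note_on(channel: int, key: int, velocity: int) -> bytes:
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

def reroute(message: bytes, voicing=((0, 0), (1, 12), (2, -12))) -> list:
    """Fan one incoming note-on out as (channel, transposition) pairs."""
    status, key, velocity = message            # iterating bytes yields integers
    return [note_on(ch, key + interval, velocity) for ch, interval in voicing]

incoming = note_on(channel=0, key=60, velocity=100)   # middle C from the soloist
for msg in reroute(incoming):
    print(msg.hex(" "))   # these bytes would be written to the MIDI output port
```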

All of the works mentioned above are a testament to the advances of music technology during this century. Composing used to mean sitting at a piano with pencil and manuscript paper and working for days, only to find, once the orchestra got together to perform the piece, that it wasn’t exactly what was expected. Computer music changed this forever. Although none of the advances in music technology came quickly, come they did. “Composing computer music used to mean laboring for months on a mainframe to produce a seemingly random assemblage of bleeps and bloops that would be taped and replayed in performance. Now computers jam.” (Neuwirth, 80) Even the acoustic music purist cannot deny the ease with which compositions can be realized on the latest equipment. “The lure of computer technology has been its potential to analyze instrumental and vocal sounds and to recreate them with complete dictatorial control over the outcome and without the vagaries or expense of live performance.” (Schwartz, 366) As for the future, one thing is certain: from the very beginnings of computer music, every stage of development has seemed revolutionary until rendered obsolete, and the same will one day happen to today’s technology, as it has already happened to yesterday’s. Of course, better technology never guarantees better music, and to the outsider it often seems to yield just the opposite. But arguments from the fringe cannot stop the vision of the believers. While the validity of the electronic medium may always be denounced, its advantages become clearer every day.

References

Hansen, Peter S. 1969. An Introduction to Twentieth Century Music. Allyn and Bacon.

Johnstone, Bob. 1994. “Wave of the Future.” Wired Magazine 2(3).

Manning, Peter. 1985. Electronic and Computer Music. Clarendon Press.

Morgan, Robert P. 1991. Twentieth-Century Music. W. W. Norton.

Neuwirth, Robert. 1993. “Binary Beat.” Wired Magazine 1(5).

Schwartz, Elliott, and Daniel Godfrey. 1993. Music Since 1945: Issues, Materials, and Literature. Schirmer Books.