Artificial Composers: Tools of the Modern Musician or Affront to Human Creativity?

By Jonathan P. Farrell
2015, Vol. 7 No. 03

On August 13, 2014, YouTube user CGPGrey posted Humans Need Not Apply, an informative video detailing the trajectory of automated technology and its implications for the job market and human employment. After describing the superiority of automation over humans in low-skill, white-collar, and professional job markets, Grey (2014) addresses artistic specialists: “But perhaps you are unfazed because you’re a special creative snowflake. Well, guess what: you’re not that special” (11:15). He then proceeds to debunk the myth that creativity is exclusively human, and does so to the music of emerging composer Emily Howell. Howell, of course, is a new kind of composer: an artificial one.

Howell was developed by composer and music professor David Cope in the 1990s and completed in 2003. Her works have been celebrated as “rich and emotional” (De Lange, 2012) by some, and criticized by others as “bland, arpeggiated compositions… like a technically skilled child trying to ape Beethoven or Bach, or like Michael Nyman on a bad day: fine for elevators but not for the concert hall” (Ball, 2012). These comments assume an innate personhood in the composer, raising a fundamental question: should Emily Howell, an artificial composer, actually be considered and credited as a composer?

This question is a divisive one. Upon hearing pieces from one of Cope’s creations at a concert, “one music lover allegedly told Cope he had ‘killed music’ and tried to punch him” (De Lange, 2012). Clearly, people are sensitive to CGP Grey’s sobering argument regarding the superiority of artificial intelligence and the insignificance of humanity. But maybe artificial composers like Emily Howell are not replacement composers, but new tools that change the music-making process. This paper explores that idea: artificial composers like Emily Howell are not bona fide composers, but extensions of music technology.

Artificial Composers: What Are They?

Before artificial composers can be explained as a form of music technology, one must first understand what they are. An artificial composer is a computer program that composes music based on an algorithm—a set of rules and instructions that the program relies on when stringing together a composition. Algorithmic composition is not new. Famous human composers have produced musical works using algorithms: composers like Mozart and Haydn contributed to Musikalisches Würfelspiel, “a musical dice game… in which players rolled dice to determine which of 272 measures of music would be played in a certain order” (Blitstein, 2010); fourteenth-century composer Guillaume de Machaut adhered to specific lengths for interlinked pitches and durations when composing (Muscutt, 2007, p. 10); and modern composer Iannis Xenakis used probability equations to create works (Blitstein, 2010).
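By way of illustration, a dice-game composition of this kind can be reduced to a few lines of code. This is only a minimal sketch: the measure labels below are placeholders, not the historical table, which mapped every possible roll in every bar to a pre-written measure of music.

```python
import random

# Hypothetical measure table: one pre-written measure per (bar, dice roll).
# The real Musikalisches Wuerfelspiel tables held actual notated music here.
MEASURE_TABLE = {
    bar: {roll: f"measure_{bar}_{roll}" for roll in range(2, 13)}
    for bar in range(1, 17)  # a 16-bar piece
}

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def compose_piece():
    """Pick one pre-written measure per bar according to the dice."""
    return [MEASURE_TABLE[bar][roll_two_dice()] for bar in MEASURE_TABLE]

if __name__ == "__main__":
    print(compose_piece())
```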

Even specific genres can be seen algorithmically: blues music, for example, is typically played using a three-chord progression (the tonic, four, and five chords in a given musical scale) and is often sung in a specific lyrical arrangement (one line sung twice, followed by a second line). These two patterns often overlap, with the first line sung over the tonic chord, the repeated line over the four chord, and the final line over the five chord. Most blues music follows this format. From Robert Johnson’s Sweet Home Chicago to Van Halen’s Ice Cream Man, even blues musicians adhere to algorithms to create music.
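That blues “algorithm” can be written out in a few lines. The twelve-bar layout and the placeholder lyric lines below are illustrative assumptions; the paragraph above specifies only the three-chord progression and the repeated-line lyric scheme.

```python
# One common twelve-bar arrangement of the three-chord (I-IV-V) blues pattern.
TWELVE_BAR_BLUES = ["I", "I", "I", "I",
                    "IV", "IV", "I", "I",
                    "V", "IV", "I", "I"]

def blues_verse(line_a, line_b):
    """Pair lyrics with chords: line A over I, line A again over IV, line B over V."""
    return [(line_a, "I"), (line_a, "IV"), (line_b, "V")]

if __name__ == "__main__":
    print(TWELVE_BAR_BLUES)
    for lyric, chord in blues_verse("first line (sung twice)", "second, answering line"):
        print(f"{chord:>2}: {lyric}")
```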

Algorithms for artificial composers can work in multiple ways. David Cope has created two artificial composers, and they follow two popular algorithmic methods. The first method involves parsing through musical examples for patterns, rules, and similarities, and using this information to create original content. One can “[feed] in musical scores and out [pops] new material in the composer's style” (De Lange, 2012), much as one might read all of Shakespeare’s sonnets to understand the similarities in their underlying structures, themes, and content, and then use those rules to create a Shakespearean-style sonnet. The second method uses associative networks that receive external musical input, devise a unique response, and receive external feedback on whether or not the generated output is acceptable. Ryan Blitstein (2010) explains this process when talking about Emily Howell:

Instead of spitting out a full score, [Howell] converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.

In this method, the program uses the external feedback to revise the connections in its network and so create music that conforms to the human’s standards. The approach is similar to a computer playing Twenty Questions: to guess correctly what the player is thinking of, the computer needs the player’s answers to narrow its output down to the desired result. This process is completely dependent on the human for proper function.
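Blitstein’s description leaves Howell’s internals unspecified, so the sketch below is only a schematic guess at the shape of that exchange: hypothetical “association weights” link a musical prompt to possible replies, and a yes or no answer nudges a weight up or down.

```python
from collections import defaultdict

# Hypothetical association weights: how strongly a prompt is linked to a reply.
# Approval strengthens a link; disapproval weakens it.
weights = defaultdict(lambda: 1.0)

def respond(prompt, candidates):
    """Answer a musical 'question' by picking the most strongly weighted reply."""
    return max(candidates, key=lambda c: weights[(prompt, c)])

def feedback(prompt, reply, approved):
    """A yes/no from the human revises the network: reinforce or penalize the link."""
    weights[(prompt, reply)] *= 1.5 if approved else 0.5

if __name__ == "__main__":
    candidates = ["phrase_A", "phrase_B", "phrase_C"]
    for _ in range(5):
        reply = respond("opening_motif", candidates)
        # Stand-in for the human's judgment: only phrase_B is acceptable here.
        feedback("opening_motif", reply, approved=(reply == "phrase_B"))
    print(respond("opening_motif", candidates))  # the exchange settles on phrase_B
```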

These are Cope’s algorithms of choice. Other algorithmic approaches to artificial composition will be discussed later.

Cope’s Creations

David Cope’s creations are some of the most well-known cases of artificial composers, with Emily Howell being his most successful iteration. Howell was programmed to develop her own style based on input from Cope and his judgments about whether her output was acceptable. Howell’s compositions therefore follow the associative-network approach and emerge from an active process of approval by her creator. Howell’s predecessor, EMI (short for Experiments in Musical Intelligence, pronounced “Emmy”), was created to overcome “a mean case of composer’s block” (Cheng, 2009) and was less interactive, following the parsing approach: EMI imitated famous composers like Bach and Mozart by analyzing thousands of their pieces and finding the similarities within them.
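Cope has not published EMI as anything so simple, but the general “analyze examples, recombine what they share” idea behind the parsing approach can be suggested with a toy Markov chain over note-to-note transitions, assuming scores are represented as plain lists of note names.

```python
import random
from collections import defaultdict

def learn_transitions(scores):
    """Scan example scores and record which note tends to follow which."""
    transitions = defaultdict(list)
    for score in scores:
        for current, following in zip(score, score[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """String together new material using only note-to-note moves seen in the examples."""
    piece = [start]
    while len(piece) < length and piece[-1] in transitions:
        piece.append(random.choice(transitions[piece[-1]]))
    return piece

if __name__ == "__main__":
    examples = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "C"]]
    print(generate(learn_transitions(examples), start="C"))
```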

EMI followed (and sometimes deliberately broke) the rules it extracted in order to create faithful Bach-esque works, but according to Cope (1997), “its best efforts often [achieved] less than satisfactory results” (p. 20). Radio host Fred Child feels similarly about Emily Howell’s pieces, commenting, “If I hadn't known that [it was composed by a computer program], I might've thought it's the work of some not-so-great composer who is trying some interesting things, but not quite getting it” (Rocheleau, 2010). It is clear that people see some disconnect between the works of flesh-and-blood composers and those of David Cope’s experiments.

Music Technology and Modern Musicians

It is also important to understand what music technology is, what it does, and where it exists in relation to modern music making. On a general level, music technology comprises the pieces of hardware and software that process sound. Functions like delay, chorus, artificial reverb, flanging, looping, and amplification are all effects achieved by pieces of music technology, usually through foot pedals or software. The ability to cut, delete, and rearrange sound files is also provided by pieces of music technology—usually DAWs (Digital Audio Workstations).

Most pieces of music technology take some form of audio input and process it so that a different product comes out—whether that change is subtle or drastic depends on the process. Software programs such as Max/MSP/Jitter bridge the gap between making music and computer programming, allowing the user to build intricate chains of audio manipulation. Users can employ audio input as a trigger for other audio to start, stop, or change; they can also create numbers, patterns, or algorithms to drive a user-built, customized synthesizer. These processes are as open-ended as traditional computer programming and have become integral to contemporary music making.
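Max/MSP patches are built visually rather than typed, so the following is only a loose textual analogue of the “audio input as a trigger” idea: a hypothetical amplitude gate that starts a second sound when the incoming signal gets loud enough and stops it when the signal falls quiet. The sample values and threshold are invented for illustration.

```python
def gate_events(input_samples, threshold=0.3):
    """Emit start/stop events for a backing sound based on input loudness."""
    playing = False
    events = []
    for t, amplitude in enumerate(input_samples):
        if abs(amplitude) >= threshold and not playing:
            events.append((t, "start_backing_loop"))
            playing = True
        elif abs(amplitude) < threshold and playing:
            events.append((t, "stop_backing_loop"))
            playing = False
    return events

if __name__ == "__main__":
    print(gate_events([0.0, 0.1, 0.5, 0.6, 0.2, 0.05]))
```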

Contemporary musicians use music technology to expedite the music-making process. Prior to the development of artificial reverb, for example, musicians had to record in specific locations in order to achieve the desired reverberation levels. If a musician wanted a lot of reverb, (s)he would have to record in a large, closed-off space, such as a church. With the development of artificial reverb, however, a musician need only sing into a microphone and apply a software-generated reverb effect to create a believable facsimile of the reverberation the church would have provided.
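What a software-generated reverb effect does can be suggested with a deliberately crude sketch: feed delayed, quieter copies of the signal back into itself so that a dry recording picks up an artificial tail. Real reverbs (convolution with a measured impulse response, or networks of comb and all-pass filters) are far more sophisticated; the delay length and decay factor below are arbitrary assumptions.

```python
def simple_reverb(dry, delay_samples=2205, decay=0.4):
    """Add a decaying series of echoes to a dry signal (a single feedback delay)."""
    wet = list(dry) + [0.0] * delay_samples * 4   # leave room for the tail
    for i in range(delay_samples, len(wet)):
        wet[i] += decay * wet[i - delay_samples]  # each echo feeds back on itself
    return wet

if __name__ == "__main__":
    click = [1.0] + [0.0] * 10000           # a single impulse, roughly 0.23 s at 44.1 kHz
    tail = simple_reverb(click)
    print(tail[0], tail[2205], tail[4410])   # 1.0, then progressively quieter echoes
```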

Similarly, the ability to record instruments separately and combine them in a software program later allows musicians to create a coherent collaboration without having to be in the same room at the same time. These technological advancements allow musicians a massive amount of flexibility, autonomy, and independence when creating music.

Take, for example, street performer Benjamin Stanford/Dub FX (2013). With an effects board, a looping pedal, and a microphone, Stanford can create a whole band’s worth of music using only his mouth, rather than needing a full band or multiple instruments. In his October 10th, 2008 street performance, he sings one line for the rhythm and another few for the melody, then processes and loops them to create a backing track for his song Love Someone. He then modifies his vocal track in real time, adding and subtracting effects from his singing while turning the previously recorded tracks on and off as needed. Stanford uses music technology to create his pieces more quickly, easily, and efficiently—and the effects are evident.
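The looping pedal’s basic trick, layering each new phrase on top of a repeating recording, can be modeled in a few lines. The sample values below are arbitrary stand-ins for Stanford’s recorded rhythm and melody lines, not data from the performance.

```python
def overdub(loop, new_layer):
    """Mix a newly recorded layer into the existing loop, sample by sample."""
    length = max(len(loop), len(new_layer))
    loop = loop + [0.0] * (length - len(loop))
    new_layer = new_layer + [0.0] * (length - len(new_layer))
    return [a + b for a, b in zip(loop, new_layer)]

if __name__ == "__main__":
    rhythm = [0.5, 0.0, 0.5, 0.0]        # beatboxed rhythm line
    melody = [0.0, 0.3, 0.0, 0.3]        # hummed melody line
    backing = overdub(rhythm, melody)    # the loop the live vocal sits on top of
    print(backing)                       # [0.5, 0.3, 0.5, 0.3]
```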

Music Tech and David Cope

Music technology is especially important when considering the role of artificial composers. In an interview with Keith Muscutt (2007), David Cope was asked how he felt about EMI’s music competing with that of real-life composers. He replied:

I do not perceive this as a machine-versus-human issue, but simply as a human-with-pen-and-paper versus human-with-electric-slide-rule issue. I—as a human being—create music using my software tools. …My programs follow my instructions in producing what would have taken me several hundred or even a thousand times longer to create. (p. 13)

His answer shows that he views EMI not as an autonomous entity, but as an extension of his own composition process. In other words: Cope provides the input, EMI uses that input to create musical pieces, and Cope gives the final word—yea or nay.

Emily Howell serves a similar function for Cope. According to the AfterMatter’s Theo Caplan (2012), “Cope feels that the program is essentially an extension of him: he is just teaching her his biases, and she is speeding up the job for him,” showing that Cope views Howell as a cog in the composer’s machine. He functions as the originator of the content and has the final say on whether Howell’s output is sufficient. If it isn’t, he can tweak Howell’s algorithm to produce the result he desires, not the one Emily desires.

If anything, it seems that Cope views his artificial composers as a way to quickly churn out music, and to do so in a new and exciting way. His creations take care of the music processing, allowing Cope to “create hundreds [of pieces] in minutes” (Muscutt, 2007, p. 12) and choose the best piece. EMI and Emily are tools that Cope uses to expedite his composition process, just as Dub FX uses his effects board and looping pedal to expedite the creation of a finished piece.

While Cope’s artificial composers may be producing their own combinations of sound, he is in control of the music’s starting point and end result. It is also worth noting that, as their creator, Cope can offer an enlightening perspective on the role of his creations. It seems that while Emily Howell is marketed as a person, Cope believes that “computers simply obey the dictates of their programmers. …Computers are tools—nothing more, nothing less” (Muscutt, 2007, p. 19), and these tools are pieces of music technology.
