The University of Sheffield, 4th July
The day began with a keynote talk given by Professor Peter Nelson (Music, University of Edinburgh). Professor Nelson started by discussing the great changes and improvements in computation over the last 30 years, and how these changes have affected the relationship of the composer with the computer. Where computers were once large, slow, prone to errors, and had difficult user interfaces, they now offer a much more interactive user experience, are frequently connected to a network, and are invariably capable of immense computational speed with relatively low error rates – features conducive to computer music making.
Professor Nelson then asked what the implications of these changes are for music making, identifying four salient areas for discussion – 1. The Instrument, 2. Territory, 3. Work, and 4. Agent. On the first topic, Professor Nelson noted that historically the instrument was seen as a necessary component of music making, and so posed the question of whether a computer can be an instrument. Additional questions raised in this section were whether we must submit to the rhetorical trope of having control and domination over an instrument in order to produce music, and whether we can bypass the hand’s control over the instrument and receive data directly from the brain (possibilities that may be realised using computers and modern technology).
In section 2 (territory), Professor Nelson argued that auditory communication shapes our imagination, and found it implausible that anyone can imagine un-experienced sounds. Computers allow us to discover sounds at a greater rate than ever before, and so expand not only musical territory but also that covered by the imagination. As an example, he spoke of the influence the space race had on music making – the beeps of satellites and other unusual electronic sounds made their way into music making and powered the ‘experience of the aether (space)’ for many individuals.
Section 3 (work) considered the computer as a labour-saving device, and questioned how an instrument contains the notion of labour in its constitution. This may be grounds for why some may be reluctant to allow that a computer is an instrument. An instrument must be worked on in order to achieve a high degree of musical ability, whereas the same output may be achieved with less effort on a computer, thanks to the computer’s user-friendly interface (albeit with a loss of the organic nature of the laboured skill on an instrument). Section 4 briefly explored the general relationship between a human and an inanimate object, a relationship which covers both instrument and computer.
Professor Andy Hamilton (Philosophy, University of Durham) gave the second keynote talk. In it, he considered the possibility of non-musical soundart, which has arisen from the 20th century’s rejection of the notion that only instrumental sounds (and voice) of determinate pitch count as music. Professor Hamilton asserted that ‘music is the art of tones’ (tones in the sense of relatively determinate sounds produced by humans, not necessarily works based on traditional tonal schemes), and, interestingly, that music is not the only art of sound, for there is an emergent non-musical soundart comprised of non-tonal sounds. He terms this position ‘non-universalism’, and holds that music is a point on the spectrum of soundart, differing from non-musical soundart in the preponderance of tonal material.
Professor Hamilton noted, however, that a piece composed solely of a single pure sine-tone may not qualify as music, since music characteristically involves impure tones as a consequence of intentional human production via voice or instrument. Likewise, a piece may involve no tones at all, as in percussive music (such as a piece for maracas or tam-tam), yet nevertheless qualify as music. With this admission, Professor Hamilton distinguished two categories of soundart (sound-design): first, ‘significant sound-design’, such as the tones produced by mobile phone ring-tones, car horns, or door chimes; second, ‘non-significant sound-design’, such as the sound produced by a fountain, car engine, or jackhammer.
After lunch, Dr. Adam Stansbie (Music, University of Sheffield) started the first discussion session with the question of whether there really is such a thing as a ‘tradition’ in music. There has been a constant evolution in the development of music-making and musical practice, but it is not clear whether this evolution necessitates a tradition. The discussion then focused particularly on whether the ‘work’ concept is usefully applied to human+computer music, wherein the composer creates a type (i.e. a set of instructions), and performances count as potentially variable tokens of this type. Although the algorithmic and chance processes employed in human+computer music seem to undermine the notion of the work, it was also argued that abandoning this concept threatens our capacity to make sense of what it is to understand the music, or of what the composer has actually achieved. During this session we had a live performance by James Surgenor (University of Sheffield) that incorporated spoken word, clarinet, and computer elements. The idea behind the performance was to have an improvised duet of sorts with the computer, where the computer seemingly imitated the output of the clarinet but modified its tonal attributes, such as pitch and timbre, as well as rhythm. After the performance we discussed when we would say that the music started – when James stood in front of the microphone and initiated the computer program, or when he started playing the clarinet? Similarly, how vital was it to the identity of the performance that it involve genuine interaction between James’ playing and the computer program in the moment, rather than simply a pre-recorded set of sounds? In many ways, the performance was compared to an illusion, like a ventriloquist act, while James also insisted that performing is not lying.
After a short break, our second discussion session began, on the theme of the value of music. Initiated by Dr. Tom Cochrane (Philosophy, University of Sheffield), it considered four main ways we may value music: for its aesthetic value, expressive value, cognitive value, and moral value. We then discussed how these values interact to give us the overall value of music. For example, the aesthetic enjoyment of music seems strongly related to how much we understand it – we can enjoy music that is neither boringly predictable nor overly complex. The ways in which human+computer music can challenge the understanding of the listener are thus of great relevance to this value. Similarly, when computer music composers defer agency to algorithmic and probabilistic processes, questions are raised about whether this threatens the value of musical expression. Typically, for composers to express themselves (and in ideal cases to achieve some kind of self-realisation), they must consciously bring into being music that reflects in some way their individual state of mind. At the very least, appreciation of the composer’s craft requires that we have a clear sense of the success or failure of intent.
We then had the final live performance of the day, given by Michael Quinn & Fergal Dowling of the Dublin Sound Lab, with Dowling operating the computer and Quinn playing various complex phrases on the piano, thereby providing the sound input for the computer. The discussion focused on the value of the real-time interaction between the two performers, given that the sound quality might perhaps have been superior had two acoustic instruments been used instead. The social factors of both performing and listening to the music were widely agreed to be crucial to its value. The day ended with the discussion continuing over gin and tonic, and then a meal at a restaurant with the day’s participants.
The workshop was kindly supported by the University of Sheffield via the Arts & Humanities PGR Forum.
Note from the organisers:
We took several risks with this workshop. First, we invited keynote speakers to talk about the subject without asking what they might say. Second, we left much of the available time free for discussion, not knowing who would participate or what they would contribute. Third, we chose a date that was potentially problematic, in that it sat soon after two major conferences, one on music philosophy and the other on human+computer music.
That being said, we felt that the workshop was a success, as much for what it did not achieve as for what it did. The keynotes were stimulating (and at times controversial), the discussions lively, and the live performances thought-provoking. It was clear that those coming from music had much to gain from existing philosophical methods, and those coming from philosophy had ample opportunities to challenge and expand their conceptions of music in light of the forms that human+computer music might take. Certainly, the interchange was refreshing.
What we did not achieve was a thorough investigation of the area: it was obvious that there is far too much to discuss in a day (or perhaps even a week or a month). Two significant gaps stand out. First, we barely touched on music involving significant amounts of synthesised sound (the two live examples both used samples of the human performer rather than constructing their own bleeps and whistles). Second, we barely managed to interrogate the challenge that the increased sonic possibilities of this music pose to the common notion of music as exemplified by the Western classical paradigm.
Thus there is much work still to do. We hope in the near future to build on this promising start, with outline plans for more events next year leading to a major publication (an edited book or themed journal issue).
Thanks to the University of Sheffield for the funding, and thanks to the RMA MPSG committee for their guidance in preparing for the workshop.