Held at the Department of Music, University of Sheffield, Wednesday 27 May 2015
Fundamental issues raised by the production and reception of this music are often obscured in the literature by a focus on technical details of system construction or function. Meanwhile, philosophical work on music is typically focused on acoustic instrumental/vocal works, and arguably has yet fully to engage with the challenges raised by current movements in human+computer music. This is especially the case when human+computer music does not conform to the established work-concept and/or pitch-based structures.
Organisers:
Dr. Adam Stansbie and Mark Summers, Department of Music, University of Sheffield
Tom Hewitt, Department of Music, Open University
Dr. David Roden, Department of Philosophy, Open University
Workshop schedule and paper abstracts
10.30 Arrival/coffee
11.00 Paper session 1:
Entangled Network Space – the fuzzy space where music is
Tom Hewitt, Open University
Imagined performances in electroacoustic music
Robert Bentall, Leeds College of Music
Surfaces, systems, senses, social circumstances
Owen Green, University of Edinburgh
12.30 Lunch at a nearby restaurant
1.30 Paper session 2:
Do we need robust audio interfacing based on psychoacoustic principles of hearing?
Amy Beeston, University of Sheffield
Are computational composers really creative?
Valerio Velardo, University of Huddersfield
Textility of live code
Alex McLean, University of Leeds
3.00 Coffee
3.30 Keynote discussion session:
Human+computer music performed live by Pete Furniss (clarinets)
Session chaired by David Roden, Open University
5.30-6ish Gin & tonic
Do we need robust audio interfacing based on psychoacoustic principles of hearing?
Amy V. Beeston
Department of Computer Science, University of Sheffield
For human+computer music, sound has always been the output modality for the machine. Increasingly, sound is now also used as an input modality. The audio signal received at the microphone may subsequently (i) undergo signal processing to produce transformed audible sound layers and/or (ii) be used to derive specific items of control information. While the latter procedure does not directly create sonic material to listen to, it can offer structural influence over a live performance, for instance in the organisation of time-based score-following procedures (Orio et al., 2003), or in crafting fluid interactions between networks of control variables (Di Scipio, 2003).
Despite its increased prevalence, there is still incomplete understanding of the microphone as a sensor for real-world signals. Specific recommendations for summarising signal properties in perceptually relevant dimensions (e.g. Peeters et al., 2011) are based on databases of conventional instrumental sound rather than electroacoustic sound, and thus may underestimate the importance of certain timbral properties. Moreover, there are few methods to reliably target ‘interesting’ variations in the input signal, or to otherwise account for perceptually irrelevant signal variability arising from microphone characteristics or placement, background noise sources, or environmental acoustics. The current paper therefore asks whether a machine-listening approach motivated by psychoacoustic principles of hearing might eventually lead to a robust audio interface for performances involving acoustic instruments and live electronics, and what the advantages (or disadvantages) might be.
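To make the kind of control-data derivation described above concrete, the following minimal sketch (added here for illustration; it is not drawn from the paper itself) computes a per-frame level and spectral centroid from a mono microphone signal, and expresses the level relative to a running reference so that offsets caused by microphone gain or placement are partly discounted. All function names, parameter values and the test mapping are illustrative assumptions, not the approach the paper proposes.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a mono signal into overlapping analysis frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def control_features(x, sr=44100, frame_len=1024, hop=512):
    """Derive simple control data (relative level in dB, spectral centroid in Hz)
    from a microphone signal, one value pair per analysis frame."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)

    level_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-10)
    centroid = (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-10)

    # Crude robustness step: express level relative to a signal-wide reference,
    # so that constant offsets from microphone gain/placement are discounted.
    return level_db - np.median(level_db), centroid

if __name__ == "__main__":
    # Hypothetical usage with a synthetic stand-in for microphone input.
    sr = 44100
    t = np.arange(sr) / sr
    test_tone = 0.1 * np.sin(2 * np.pi * 440 * t)
    rel_level, centroid = control_features(test_tone, sr)
    print(rel_level[:3], centroid[:3])
```

Whether such hand-crafted normalisation suffices, or whether genuinely psychoacoustically motivated machine listening is required, is precisely the question the paper raises.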
References:
• Di Scipio, Agostino (2003). “‘Sound is the interface’: from interactive to ecosystemic signal processing.” Organised Sound 8(3), 269-277.
• Orio, Nicola, Serge Lemouton, and Diemo Schwarz (2003). “Score following: State of the art and new developments.” In Proc NIME, Singapore, 36-41.
• Peeters, Geoffroy, Bruno L. Giordano, Patrick Susini, Nicolas Misdariis, and Stephen McAdams (2011). “The timbre toolbox: Extracting audio descriptors from musical signals.” JASA 130(5), 2902-2916.
Imagined performances in electroacoustic music
Robert Bentall
Leeds College of Music
In this paper, I will examine some of the issues that have arisen from my practice-led research within the field of electroacoustic composition. These works, although fixed-media in format, are primarily made up of instrumental recordings, comprising both directed materials and improvisations. Ivory Terrace (2013) makes use of the bass trombone, Sauntering (2013) uses the violin, Two Movements (2014) uses the concertina, and Summer Anthem (2013) presents the mandolin as the sole source material. The compositional process relies on the re-organisation of these instrumental materials into a new structure, thus creating an imagined performance. Improvisation plays a key part in human+computer music of both live and fixed-media varieties, despite being diametrically opposed to the perceived rigidity of music technology. Recent multi-channel works, as well as those of Strniša (1) and Perkins (2), use pointillistic spatial techniques to simulate novel ensembles (e.g. a mandolin sextet); spatial loudspeaker arrays thus allow for instrumental combinations that would be difficult to assemble in everyday life. It seems logical to make use of music technology to enhance instruments in ways that a human cannot perform: for instance, adding notes lower or higher than are playable, time-stretching notes, or sounding a hexaphonic chord on an instrument with four strings. However, equally pertinent is the idea of de-hyper-instrumentalisation proposed by Climent (3): most of the sounds within an electroacoustic work should be perceived as performable, even if they are technologically mediated. This raises the issue of whether instrumental virtuosity is seen as less necessary than technological virtuosity in human+computer music – is a well-developed piece of software or a strong command of processing tools more valuable than what can be done on an instrument?
1. Particles of Accordeon, 2014.
2. Axe, 2010.
3. Term coined with regard to Koorean Air (2010) for Violin and Live Electronics.
Surfaces, systems, senses, social circumstances
Owen Green
University of Edinburgh
It is not uncommon to come across the complaint that, as an academic discipline, computer/electronic music spends rather too much time discussing ‘the technology’ rather than ‘the music’. Whilst this is justified, I shall suggest that it is not a problem that is simply overcome: given the plurality of disciplinary and musical commitments at work in the current milieu there isn’t a waiting set of lingual and conceptual tools to which we can turn and start discussing ‘the music’ with any hope of widespread comprehension. The challenge seems especially acute in live electronic and improvised musicking. Whilst we may have patches of inherited vocabulary from allied practices, there doesn’t appear to be a way of getting at the musicality of what goes on without recourse to a discussion of the concrete social and material circumstances of production / reception, to the occasional frustration of those who contend that an adequate discourse should be available from examination of the sonic surface alone.
Can a philosophy of technology help us here? In particular, can a Feenbergian approach that seeks to resolve the tension between abstract instrumentality on the one hand, and lived practice on the other, be of help to us researchers in live electronics who need to account for both building and playing in our practice? Feenberg’s work gets us to a point of better understanding the ways in which the technical, socio-cultural and sensual are intertwined, and helps us map the issues involved in getting better at discussing ‘the’ music. With some additional support from anthropology (Born, Ingold) and pragmatism (Shusterman), I suggest that we can do better, but that it needs to be a cooperative effort.
Entangled Network Space – the fuzzy space where music is
Tom Hewitt
Open University
Here are a musician, a computer and a work of music
And they are easy to tell apart, aren’t they?
In this paper I will suggest otherwise. Taking as a starting point the Extended Mind hypothesis of Clark and Chalmers (2010) and Derrida’s discussion of the nature of an artwork’s frame in Parergon (1979), I will suggest that the boundaries between people, their tools and their artistic artefacts are far fuzzier (pace Kosko 1993) than we usually imagine.
I will draw on Derrida and the broadly “connectionist” philosophies of Deleuze, Guattari, Latour, DeLanda, Vitale and Hodder to propose a metaphysical space – Entangled Network Space (ENS) – where the interactions and connections between people and things can be explained and understood. I will borrow Derrida’s description of the parergon, i.e., that which perhaps is, or is not, part of the work, and conjoin that discussion with a description of what I call the paraprosopon, i.e., that which perhaps is, or is not, part of the person. It is in this Entangled Network Space that our ever-shifting prosoponal encounters with the ergonal things in the world ultimately have meaning.
References:
• Clark, A. and D. J. Chalmers (2010). “The Extended Mind.” In Menary, R. (ed.), The Extended Mind. Cambridge, Mass.: MIT Press, 27-42.
• Derrida, J. (1979). “The Parergon.” October 9, 3-41.
• Kosko, B. (1994). Fuzzy Thinking. London: Flamingo (HarperCollins).
Textility of live code
Alex McLean
University of Leeds
Live coding is a practice involving live manipulation of computation via a notation (see e.g. Collins et al., 2003). While the notation is written and edited by a human, it is continually interpreted by a computer, connecting an abstract practice with live experience. Furthermore, live coding notations are higher order, where symbols do not necessarily represent single events (e.g. notes), but compose together as formal linguistic structures which generate many events. These two elements make live code quite different from the traditional musical score; a piece is not represented within the notation, but in changes to it. Rather than a source of music, the notation becomes a live material, as one component in a feedback loop of musical activity.
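By way of illustration only (this sketch is not from the abstract, and uses Python rather than an actual live coding language such as TidalCycles), the higher-order character of live code might be caricatured as follows: a single token expands into several timed events, and it is the succession of edits to the text, rather than any one version of it, that constitutes the piece. The mini-notation and its symbols are invented for the example.

```python
def pattern(spec):
    """Expand a tiny higher-order notation into timed events.
    A token stands for a class of events, not a single note:
    'bd*4' yields four bass-drum events within one step; '~' is a rest."""
    events = []
    for step, token in enumerate(spec.split()):
        if token == "~":
            continue
        name, _, reps = token.partition("*")
        reps = int(reps) if reps else 1
        for r in range(reps):
            events.append((step + r / reps, name))  # onset within the cycle
    return events

# The "piece" lives in the succession of edits, not in any single text:
code_v1 = "bd*4 ~ sn hh*2"
code_v2 = "bd*2 cp sn hh*4"   # a live edit replacing the running pattern

for version in (code_v1, code_v2):
    print(version, "->", pattern(version))
```

Here the notation never contains the music; each edit re-programs a generator whose unfolding output is the audible result.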
There are many ways to approach live coding, but for the present discussion I take the case study of an Algorave-style performance (Collins and McLean, 2014), for its keen focus on movements of the body contrasted with abstract code and the fixed stare of the live coding performer. In this, the live coder must enter a hyper-aware state, in creative flow (Csikszentmihalyi, 2008). They must listen, acutely aware of the passing of time and the structure as it unfolds, literally counting down to the next point at which change is anticipated and (potentially) fulfilled via a code edit. In the dance music context this point is well defined, with all in the room aware of its approach. The coder must also be aware of physical energy, the ‘shape’ of the performance (Greasley and Prior, 2013). All this is on top of the cognitive demands of the programming language: manipulating the code while maintaining syntactical correctness.
The philosophical question this raises is: how (in the spirit of Small, 1998) does this musical activity model, allow us to reflect upon, and perhaps reimagine the human relationship with technology in society? Can we include wider perspectives in this view by drawing upon Neolithic approaches to technology, such as the warp-weighted loom (Cocker, 2014)?
References:
• Csikszentmihalyi, M. (2008). Flow: the psychology of optimal experience. HarperCollins.
• Cocker, E. (2014, January). Live notation – reflections on a kairotic practice. Performance Research Journal 18 (5).
• Collins, N. and A. McLean (2014). Algorave: A survey of the history, aesthetics and technology of live performance of algorithmic electronic dance music. In Proceedings of the International Conference on New Interfaces for Musical Expression.
• Collins, N., A. McLean, J. Rohrhuber, and A. Ward (2003). Live coding in laptop performance. Organised Sound 8 (03), 321-330.
• Greasley, A. E. and H. M. Prior (2013). “Mixtapes and turntablism: DJs’ perspectives on musical shape.” Empirical Musicology Review 8(1), 23-43.
• Small, C. (1998). Musicking: The Meanings of Performing and Listening. Wesleyan University Press.
Are computational composers really creative?
Valerio Velardo
University of Huddersfield
In the last few decades, the introduction of computers into the compositional process has radically changed the music landscape. Computers have been used as a means to support composers by providing novel musical ideas. The interplay between machines and humans has led to new hybrid creative systems which transcend the traditional notion of the composer. A more radical way of using computers in music removes the role of humans from the generation process entirely. Indeed, there already are computational systems capable of generating complex pieces autonomously. For example, EMI is able to convincingly create music in the style of a target composer, and Iamus composes contemporary orchestral music, some of its works having been recorded by the London Symphony Orchestra. The development of these artificial creative systems raises a number of philosophical questions that have not yet been answered. Can we define these systems as creative? If so, what are the features which make them creative? Should artificial composers be constrained to the generation of humanlike music only? What will be the effect of computational systems on music in the long term? Although there are several theories in philosophy and psychology which aim to explain the compositional process in music, none accounts for the presence of artificial creative systems.
In this paper, I address these questions by introducing the concept of General Creativity. I use this notion as the basis for a formalised theoretical framework which provides the necessary tools for: (i) univocally describing any music creative agent, whether human or nonhuman; (ii) studying societies of music creative agents; and (iii) characterising different forms of creativity. I also argue that by letting artificial systems explore new musical styles which go beyond human comprehension, we can derive a better understanding of the rules governing human music.
In our report on last year’s workshop we stated that the day ‘was a success, as much for what it did not achieve as for what it did.’ The feeling then was that there were any number of burning musico-philosophical issues relating to human+computer music which simply could not be given an airing within the limited confines of a single day’s workshop. With these thoughts in mind, the call for papers for this second workshop shifted its focus away from the technical mechanics of human+computer music’s inception, composition and performance (conferences on such topics being legion) towards debate about the aesthetics (broadly construed) of these musics. We invited contributors to consider the philosophical aspects of musics outside the conventional work-concept paradigm of Western Art Music. And we were not disappointed.
The first part of the workshop was structured conventionally, with two sessions of paper presentations. The day was then rounded off with a live musical performance by the clarinettist Pete Furniss, followed by a discussion, chaired by Dr David Roden, of the philosophical issues raised by the performance. The programme for the day and the paper abstracts can be found here.
My paper, Entangled Network Space – The fuzzy space where music is, started proceedings. Taking a view on the metaphysical possibility spaces described by writers including Deleuze, Guattari, Latour, Hodder, Vitale and De Landa, and invoking the ‘fuzzy logic’ of Kosko, I questioned whether the ‘assemblages’ which we usually describe as ‘persons’, ‘minds’, ‘computers’, ‘musical works’, and so on, are really quite as discrete as ordinarily supposed. I concluded that they are not discrete and that the dynamic, diachronic activities within the possibility space mean that they are ontologically fuzzy and entangled. The script and slides for the talk can be found here.
Robert Bentall’s paper, Imagined performances in electroacoustic music, examined aspects of ‘virtuosity’ between musicians using ‘conventional’ instruments and those using ‘technology-mediated’ instruments. In using technologies which allow, for example, the sounding of a hexachord on what would conventionally be a four-stringed instrument, or the creation of infeasible ensembles, are we listening to a ‘disembodied extension of human capabilities’? Robert introduced Climent’s notion of ‘de-hyper-instrumentalisation’: the thought that the sounds produced within an electroacoustic performance ought to be, in principle, performable, even though they are technologically mediated. I was struck in particular by his use of the term ‘unimprovisation’, the practice of musicians using improvised samples as part of the palette of sounds in further composition. This question of normativity raises many issues concerning the path of current and future performance practice and organological ontologies.
Owen Green gave us Surfaces, systems, senses, social circumstances. His contention was that there is no ‘waiting set of lingual and conceptual tools’ to enable us to discuss ‘the music’ simpliciter, given the ‘plurality of disciplinary and musical commitments at work in the current milieu’. Owen said that consideration of the musical surface alone makes difficult (if not impossible) the development of any adequate discourse of the praxis of electronic musicking, to use Small’s terminology. He acknowledged the relevance of the assemblages described in my paper when discussing the importance of ‘the concrete social and material circumstances of production / reception’ of these musics; such assemblages, he said, ‘enlarge the frame of what we consider to be technology’. He suggested that Richard Shusterman’s bridging of the pragmatic / continental divide might help us here.
There was a great deal of heated debate and conversation over a splendid (and most un-conference-like) lunch at a nearby Turkish restaurant. All of the delegates expressed their satisfaction.
The afternoon paper session began with Amy V. Beeston’s Do we need robust audio interfacing based on psychoacoustic principles of hearing? Amy began by pointing out that the human ear/mind can compensate for the surroundings in which a sound source is produced in under a second (probably in virtue of our brains’ massive parallel processing capacities), whereas even the best current technology cannot ‘learn’ to do this in under several hours. If we are ever to use the microphone (or other sound input mechanism) to control the dynamics of mediated electronic performance, then ought we to consider applying our biological psychoacoustic principles to these technological tools? My question would be: would the development of these ‘intelli-mics’ have relevance to the issue of agency in musical performance?
Next was Valerio Velardo with his paper Are computational composers really creative? He pricked up our ears with the fairly bold claim that a computer (Iamus) is a better composer than Mozart! Valerio began by explaining how Iamus is an autonomous compositional system, before going on to ask whether such systems can be considered to be creative. He proposed the concept of General Creativity to explore the ontologies of human / human-machinic hybrid / machinic creativity. Further, he gave us a schematic nested ontology space, in which musica humana is a subset of musica mechanica, which is itself a subset of musica mundana. I took these categories to represent the possibility spaces of, respectively, all possible human-composed musics, the much larger (but machine-tractable) space of machine-composed musics and, finally, the intractable but possible space of all musics. This latter space, at least in its outer fringes, must (of computational necessity) be some transcendent Platonic realm which need not concern us. Valerio also considered the corpus of machine-only musics, i.e., musical artefacts composed by machines and understood only by machines. Valerio’s paper received an animated response from the audience throughout.
Our final paper was Textility of live code by Alex McLean. He described the production of music through changes written in real time to computer code. Such code is, according to Alex, a meta-order object, in which individual components (unlike, e.g., a crotchet in a conventional score) might trigger a number of lower-order musical events. In this sense, the code is a ‘live material’ and part of a feedback assemblage in an iterative process of musical activity. Apart from the constraining nature of the real-time decision-making involved, Alex pointed out that changing digital inputs in order to vary outputs is nothing new, giving us the example of weaving patterns on looms from the Neolithic period to the present day.
Our final session was a musical performance / discussion session. We were fortunate to hear two pieces by the clarinettist Pete Furniss, the first on clarinet and the second on bass clarinet. Since one aim of this workshop was to move beyond technical descriptions and commentary, I will refrain here from such commentary myself. Suffice it to say that Pete improvised his clarinet output, which, via microphone input, was mediated, moderated and mashed around by a computational process involving the manipulation of his input signal and the addition of synthesized elements. The product of this manipulation was output through loudspeakers, complementing his playing. These speaker sounds in turn provided feedback material which further influenced his improvisatory playing. And the effect on an uninitiated auditor such as myself? The performance struck me as a musical duet between the observed clarinettist and some acousmatic partner.
The discussion session following Pete’s performance was co-ordinated by David Roden, who, in his introductory remarks, tied some of the phenomenology of Pete’s performance into aspects of the day’s earlier papers, particularly the topics of assemblages and of agency in human+computer performance. Pete noted that, whilst he knows that the sounds generated by the system are not the result of action by an intelligent agent, it nonetheless feels to him as though he is collaborating in an improvisatory co-performance with another live agent. Certainly, that is the effect which I perceived as a lay listener. Pete has installed a ‘cut-out’ pedal in the system so that he can occasionally mute the system-produced sound in order to take back an element of control. He says that he is very aware, during performance, of being part of a performer-clarinet-software-hardware assemblage. There was a discussion about what it would mean for a machinic ‘collaborator’ to possess real agency, and about what criteria would need to be applied in order to tell – a kind of musical Turing test. David asked Pete about these dynamic interactions between performer and system; Pete is very aware of them. Certainly, as an observer/auditor it was possible to see the haptic effects that certain system sounds seemed to induce in Pete (from hunched shoulders to smiles). There is much further work to be done in this area, not least in terms of the epistemology and ontology of such ‘works’ and of performance philosophy more generally.
The day was rounded-off with a cocktail session and snacks in the foyer of the Jessop Building.
We are in discussions with an academic publisher about producing a volume of the proceedings of the workshop. We also hope to set up a page on our website of suggested reading on these topics, such an online resource being singularly lacking at present. There are plans to run a further workshop next year.
The organisers would very much like to thank the RMA Music and Philosophy Study Group for the opportunity to run this workshop under their aegis.
We are grateful for financial support from CHASE, one of the new AHRC doctoral training partnerships, and from the University of Sheffield Arts and Humanities PGR Forum.
Follow our Twitter updates at Φ H+C Mus
And finally, many thanks to my co-organisers, Mark Summers, Adam Stansbie and David Roden, for their impeccable management of the day.
Tom Hewitt, PhD Student and CHASE Scholar, Department of Music, The Open University