How artificial intelligence is changing music

Irish musician Martin Clancy’s new book explores the implications of AI for our musical future

The final round of this year’s AI Song Contest featured an oddly dissonant ode to coffee, some boppy Eurovision-esque ditties, a gently French melody, and a host of more genre-defying anthems. The competition is styled on the Eurovision, although it is open to entries from around the world, and the songs are entirely composed by computers. Listening to the finalists, and the ultimate winner – Thailand’s Yaboi Hanoi with Asura Deva Choom Noom (Enter Demons & Gods); at aisongcontest.com – I found myself wondering whether anyone had cheated by adding a little helping human hand, and where the boundaries blur.

Machines have been facilitating music-making since instruments were invented, but computer technology is a seismic shift on a par with the advent of sound recording and electrical amplification. While AI is currently causing future-shock rumblings, the influence of computers on our music has been felt for a while. People of the right vintage, myself included, will remember the camera teams on Top of the Pops trying, from the late 1970s on, to make two blokes at a pair of keyboards look exciting, as synth-pop started to dominate the charts: memorably, Kraftwerk and Soft Cell; more forgettably, the likes of Howard Jones and Nik Kershaw.

The advent of Auto-Tune in 1997 meant anyone could become a pop star, and frequently did, as people with the most minor claims to celebrity could release tracks on which it sounded like they could sing. Even singers of established talent use the pitch-correcting program, either to help them out on off days, or to create a new vocal sound – as on Cher’s 1998 hit Believe, often credited with mainstreaming the software. Later, Radiohead would use it on their album Amnesiac to depersonalise the vocal sound in Packt Like Sardines.

For Irish musician Martin Clancy, the way musicians use Auto-Tune to create something new is part of a rich lineage of invention. “Look at the history of the piano,” he says. “All the technological developments, and all the patents that were created along the way to it becoming what we see now as a piano. The only difference with AI,” he continues, “is the speed of change.” The Dublin-based artist, who was part of In Tua Nua in the 1980s, has just brought out a book exploring these themes and their implications.

Artificial Intelligence and Music Ecosystem is a collection of essays, edited by Clancy, exploring everything from robotics and ethics to music in XR (extended reality) and issues around intellectual property and the law. Clancy also appears as an author, and as interviewer with artist Holly Herndon and music industry executive Scott Cohen. Published by Routledge, the book owes its origins to Clancy’s own PhD work exploring the financial and ethical implications of AI in music, and as such most chapters have plenty of references, lots of further reading and very few pictures.

Looks can be deceptive, however, and at whatever level you approach it, it’s a fascinating read. Take Miller Puckette, whose Max visual programming tool for music allows you to (basically) build your own desktop music suite. His chapter opens by inviting the reader to see from his references section – or lack thereof – that this is not a piece of scholarly writing. Instead, he incisively dissects how the boundaries between designer, composer and performer, and between researcher and realiser, have broken down, and how the results of these new hybrid labours are “not so much a fixed entity, such as a piece of software, but as a creative practice”. This, he says, “becomes folk knowledge […] part of a more general culture”.

Ditch, on Puckette’s advice, the idea that using electronics to imitate traditional instruments “sounds like using margarine instead of butter…” and imagine instead that “if you can make a synthesiser sound like a traditional musical instrument, you can, for instance, put it in dialogue with an acoustic one”.

These observations touch on a key idea: while technology is a tool that can generate music, it is the human hand and mind that turn its outputs into art. And, for now, whatever claims may be made, the outputs of AI are still no more than iterations – admittedly faster and more complex ones – of what human programmers have put in.

The interesting thing is still, as Clancy puts it, “the ability for humans to kind of kick that technology around a bit. Think about someone like Jimi Hendrix. He took feedback and turned it into an art form. But the concern is that the speed of change is now so fast. It’s not a question of what is right or wrong.”

Of more pressing concern, says Clancy, is what it all means for employment. He is not just referring to the streaming model of music consumption, which has eroded the income of most recording artists and, in the process, inevitably pushed up prices for concert tickets; he also means the array of highly skilled, but mostly invisible, behind-the-scenes talents that have previously gone into music production.

Ideas surrounding AI in music sit at that fascinating crux between the resistance to change that, Clancy agrees, is a fundamental part of our psyche, and our other fundamental desire to explore new things. As I explore links, such as Herndon’s TED Talk – in which she uses AI to show how you can sing through the voice of another person, or even sing simultaneous duets with yourself, or partner with anyone you choose, from Pavarotti to Pharrell Williams – my mind keeps wanting to find reasons why it’s not right. “Don’t think of it that way,” Clancy suggests. “It’s more that it’s deeply strange.”

In pursuit of this idea, he suggests I look up the clip of the deepfake Simon Cowell by Metaphysic on America’s Got Talent, broadcast earlier this year. Watching it, I wonder if, alongside the obvious oddness of two Cowells in a TV studio, another layer of disquiet comes from the almost imperceptible signals my brain is receiving that something is off. It’s a bit like the way the millisecond delay in a Zoom session means you can never quite achieve the intimacy of a real face-to-face conversation.

Fascinating in conversation, Clancy wears his evident intelligence and erudition lightly. He quotes eclectic sources, from poetry to philosophy, from avant-garde film-makers to popular culture, without you ever thinking you’re being quoted at. Instead the effect is of having your own curiosity sparked and rewarded, over and over again. From Hendrix we switch to Abba. He hasn’t yet seen Abba Voyage, the avatars-in-concert spectacular currently packing in crowds in London.

Abba are, I imagine, the best – perhaps the only – band to launch, and popularise, what will undoubtedly become a core strand of the in-concert experience. Abba are, for a start, not dead, so you don’t have that creepy feeling of seeing reanimated corpses, or the emotional whack of being not quite face-to-face with a lost idol like David Bowie. Abba themselves have also spawned so many touring tribute acts that their essential authenticity has already been played around with in our minds, and audiences are already softened up to the hyperreality of the whole idea.

Still, I wonder, if it’s all pre-programmed, how can an avatar concert catch the energy of a spontaneous live performance, where the band are feeding off, and giving back, the energy of the crowd? Clancy reminds me that in Abba Voyage the avatars are backed by a live band; but, he adds, there are also algorithms that can respond to the sound of a crowd and bring the energy up a notch, or tone it down, as required.

There is so much to be gained from this book and from the concepts it explores, and Clancy is an excellent guide. The ideas generated by and implicated in AI and music seep out into such a broad range of areas: there’s the difference between intelligence and simply being stuffed with coded learnings; there’s authenticity and reproduction, plagiarism and reinterpretation. Many of the things we call AI today are actually heavily marketed code: computation in place of understanding, machines trained to respond rather than to take and build on.

And perhaps that idea of taking and building is where there’s hope. Even though corporations have by now hoovered up so many musical resources, patents, distribution channels and copyrights that “fewer than 1 per cent of musicians can make a living from streaming”, the fact remains that even before there was a capitalist model of ownership of music and its production, people had the impulse to create. We may be able to tour a million dead artists forever, but that’s not going to stop people wanting to make new art. Clancy smiles at this idea, and admits that alongside all the technological tools at his disposal (such as Ableton Live), he is presently engaged in learning to play the trumpet. What needs to be fixed isn’t the fact of AI augmenting our repertoire of instruments, but the insidious processes of financial ownership and control.

The book opens with French theorist and economist Jacques Attali describing the processes by which abstract entities are transformed into artefacts, and thence into sources of profit. Music has been on this journey for centuries, while, Attali argues, AI has the potential to do the same for our memories, our consciousness – whatever that may be. These are ideas that are important to explore, before, as Clancy puts it, “the horse has bolted”.

Music offers a test case, says Clancy, and a space to think about the economic, environmental and human impacts of the epic expansion of AI into our lives. “It may produce wisdom,” he says, with characteristic intelligence. He goes on to quote legendary Hollywood screenwriter William Goldman: “It’s something I teach my students all the time: that ‘nobody knows anything’.” It’s a thought echoed by Puckette: “Since about the mid-1960s, with the introduction of the commercial modular synthesiser, there has been an explosion of invention […] And yet I do not think we really know, at the bottom, what we are doing. I doubt we ever will. My job remains a pretty interesting one.” So too is the book.

Artificial Intelligence and Music Ecosystem, edited by Martin Clancy, is published by Focal Press/Routledge, martinclancy.eu

Gemma Tipton

Gemma Tipton contributes to The Irish Times on art, architecture and other aspects of culture