AI Is Changing How We Think About Music
Artificial intelligence is rapidly revolutionizing art—and it’s got far-reaching implications for both working musicians and listeners.
Last fall, more than 40 years after John Lennon was killed, the world got to hear the final Beatles song to feature his voice. The track, “Now and Then,” recorded on a tape recorder in the late 1970s and shelved for decades, originally sounded too rough to release—until technology caught up. And oh, how technology has caught up. With sound source separation—a form of artificial intelligence—producers pulled Lennon’s voice out of the dusty mix and placed it into a polished soundscape. That’s how, in 2023, we can still get “new” Beatles songs.
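To get a feel for what source separation does, here is a deliberately simplified sketch in Python: two sine tones stand in for a voice and an instrument, and a hand-built frequency mask pulls one back out of the mix. The neural systems used on “Now and Then” learn far more sophisticated masks from data; the frequencies, variable names and threshold below are illustrative assumptions, not anything from the actual production.

```python
import numpy as np

# Conceptual sketch of masking-based source separation.
# Real AI separators learn masks with neural networks; this
# hand-written mask only shows the core idea.

sr = 8000                                  # sample rate (Hz), chosen for the demo
t = np.arange(sr) / sr                     # one second of audio
voice = np.sin(2 * np.pi * 220 * t)        # stand-in "voice" at 220 Hz
piano = np.sin(2 * np.pi * 880 * t)        # stand-in "instrument" at 880 Hz
mix = voice + piano                        # the "dusty mix"

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
mask = freqs < 500                         # keep only the low "voice" band
voice_est = np.fft.irfft(spectrum * mask, n=len(mix))

# For pure tones on exact FFT bins, the recovered signal
# matches the original almost perfectly.
error = np.max(np.abs(voice_est - voice))
```

Because the two tones sit in separate frequency bands, a simple mask cleanly splits them; a real recording, where voice and instruments overlap in frequency, is exactly why learned models are needed.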
Different sets of AI tools are also how, in 2023, we got “Heart on My Sleeve,” a phony viral song made with deepfaked versions of Drake’s and The Weeknd’s voices (crafted with timbre-transfer programs). And it’s how musicians like Holly Herndon could, in 2022, create Holly+, an AI version of her own voice, which convincingly covered Dolly Parton’s “Jolene.” Herndon also used a singing AI called Spawn on her 2019 album Proto, and continues to experiment with the possibilities the developing technology opens up.
Many artists use AI programs to write, mix and master their music—and to perform it. But not all artists fully embrace the tech. Performing rights organization ASCAP commissioned a survey of 2,000 music creators and found that half of all respondents saw AI as a threat to their livelihood, and that eight in 10 believe AI needs to be better regulated. Though AI music is not a new idea, it carries a lot of new implications—and a lot of possibilities. “I think it is going to be a rollercoaster,” Herndon said in an email interview with Paste. “I am most excited to see what people do once they have the tools in their hands, which is happening more and more.”
AI has been used in music composition since the early 1960s, when rule-based systems followed hand-coded algorithms to write original songs in specific styles. Since then, training data and systems have improved, leading to the deep-learning algorithms and generative AI models of recent years. Deep-learning models, which learn patterns from large bodies of example data rather than following explicit rules, are responsible for most of AI’s recent significant advancements.
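The rule-based approach can be captured in a few lines of Python: a handful of hand-written constraints (start and end on the tonic, move by small steps) generates a melody in a "style," with no training data in sight. This toy is purely illustrative, it does not correspond to any specific historical system, and the scale, rules and function names are invented for the example.

```python
import random

# A toy rule-based "composer," in the spirit of early algorithmic
# systems: style is encoded as explicit hand-written rules rather
# than patterns learned from data. (Illustrative only.)

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C-major scale degrees

def compose(length=8, seed=0):
    rng = random.Random(seed)               # seeded for repeatability
    melody = ["C"]                          # rule 1: begin on the tonic
    while len(melody) < length - 1:
        prev = SCALE.index(melody[-1])
        # rule 2: move by at most two scale steps (wrapping at the octave)
        step = rng.choice([-2, -1, 1, 2])
        melody.append(SCALE[(prev + step) % len(SCALE)])
    melody.append("C")                      # rule 3: end on the tonic
    return melody
```

A deep-learning composer inverts this design: instead of a programmer writing the rules, the model infers them statistically from thousands of existing pieces.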
Roger Dannenberg, a composer, musician and professor of computer science, art and music at Carnegie Mellon University, has been studying computers and music for four decades. (He also co-created Audacity, the beloved audio-editing app.) In those 40 years, he’s seen a variety of phases and developments in AI and music. Expect plenty more on the way, he says. “I don’t think deep learning is the end of AI. I think it’s actually the beginning,” Dannenberg continues. “I think that in five or 10 years, we’ll look back and say, ‘Oh, that was just another phase in the development of AI.’ Whatever technique comes next is going to probably build on deep networks, but it’s going to fix a lot of problems they have, or maybe transcend deep networks to solve what are currently the difficult problems.”
While the latest versions of AI might not be embedded into music creation quite yet, it’s only a matter of time before they’re commonplace, Herndon adds. “Musicians and artists have been using automated tools to make music for decades and may not even think about it,” she says. “Generative AI, deep learning, is not a new idea, but what is new is that it has become very sophisticated and easy for people to use.”
Can you tell when music has been made with AI? Not really—at least, not in the ways it’s being used right now, according to Dannenberg. “If you hear anything that sounds good, it’s almost certainly not entirely made just with AI; a lot of human work was involved,” he says. “But that’s the problem, because elements can be generated by AI, and then carefully selected, manipulated and worked with by hand by a skilled musician or producer. The limitations of AI are compensated for by human production.”
For a little taste of a completely AI-created song, tune in to the single “Betrayed By This Town” by AI singer/songwriter Anna Indiana. Listen: It’s not good. Without any human involvement, you get some amalgamation of training data—in this case, basic, cookie-cutter instrumentals paired with weirdly patterned, tinny vocals. Can we expect that to get better? Yeah, probably.
Meanwhile, other applications of AI could create new sounds entirely. Berklee College of Music professor and researcher Akito Van Troyer has used AI tech to design new instruments, and hopes for more new sounds to surface in future developments. “Lots of the people who use AI tools for music-making end up just making the same thing that existed before—and I don’t blame them,” Van Troyer says. “The AI technology we have today is based on machine learning, which means you’re training these AI models based on existing data. What it can produce is basically imitations of what it learns is good. Hopefully, we’ve got to come up with techniques to actually create new kinds of sounds and music that leads to musical innovation, beyond just the technological innovation.”
Can AI music be copyrighted? Sort of. Songs made entirely with AI (like the ones made by Anna Indiana) cannot currently get copyright protection in the United States. The music must have human authorship in some way to be eligible for copyright, according to Miriam Lord, the U.S. Copyright Office’s associate register of copyrights and director of public information and education. Last year, the Copyright Office granted roughly 100 copyright registrations for works that used AI, each evaluated on a case-by-case basis, Lord says.
That situation is developing. Last year, the Office announced its AI Initiative to give musicians, thinkers and tech companies a space to comment and help shape policy. More than 10,000 organizations and individuals provided input, which will inform a report on copyright and AI to be released over the course of this year, clarifying some of the muddy waters around AI and music copyright. “We want to find a way forward that promotes rather than inhibits the development of this exciting and valuable technology, while also ensuring that human creativity continues to thrive,” Lord explains. “This is not the first time in its long history that the Office has grappled with technological change and its impact on copyright law, and it won’t be the last.”