AI is Changing How We Think About Music

Artificial intelligence is rapidly revolutionizing art—and it’s got far-reaching implications for both working musicians and listeners.

Last fall, more than 40 years after John Lennon was killed, the world got to hear the final Beatles song featuring his voice. The track, “Now and Then,” recorded in the late 1970s on a tape recorder and shelved for decades, originally sounded too rough to release—until technology caught up. And oh, how technology has caught up. With sound source separation—a form of artificial intelligence—producers pulled Lennon’s voice out of the dusty mix and placed it into a polished soundscape. That’s how, in 2023, we could still get a “new” Beatles song.
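
The production team’s exact tools aren’t public, but open-source stem separators give a rough feel for what the technique does. Here is a minimal sketch using the freely available Spleeter library; the file names are placeholders, and the real restoration involved far more specialized work:

```python
# A minimal stem-separation sketch using the open-source Spleeter library.
# This is a stand-in for the bespoke tools used on "Now and Then," not the
# actual production workflow. File paths below are placeholders.
from spleeter.separator import Separator

# Load a pretrained two-stem model: vocals vs. everything else.
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav into the output directory.
separator.separate_to_file("old_demo.mp3", "separated/")
```

A pretrained model like this listens to the full mix and writes out a vocal track and an accompaniment track, a rough analogue of the operation that let producers isolate Lennon’s voice from a decades-old recording.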

Different sets of AI tools are also how, in 2023, we got “Heart on My Sleeve,” a phony, viral song made with deepfaked versions of Drake’s and The Weeknd’s voices (crafted with timbre transfer programs). And it’s how musicians like Holly Herndon could, in 2022, create Holly+, an AI version of her own voice, which convincingly covered Dolly Parton’s “Jolene.” Herndon also used a singing AI called Spawn on her 2019 album Proto, and continues to experiment with new possibilities around developing technology.

Many artists make use of AI programs to write, mix and master their music—and to perform it. But not all artists fully embrace the tech. Performing rights organization ASCAP commissioned a survey of 2,000 music creators and found that half of respondents see AI as a threat to their livelihood, and that eight out of 10 believe AI needs to be better regulated. Though AI music is not a new idea, it does have a lot of new implications—and a lot of possibilities. “I think it is going to be a rollercoaster,” Herndon said in an email interview with Paste. “I am most excited to see what people do once they have the tools in their hands, which is happening more and more.”

AI has been used in music composition since the early 1960s, when rule-based systems used hand-coded algorithms to write original pieces in specific styles. Since then, training data and techniques have improved, leading to the deep-learning algorithms and generative AI models of recent years. Deep-learning models, which are designed to imitate human decision-making and knowledge, are responsible for most of the significant recent advancements in AI.
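
As a toy illustration of how style imitation works at its simplest (this is not any particular historical system), a program can learn which note tends to follow which in an existing melody, then generate a new melody by sampling from those patterns. Today’s deep-learning models learn far richer statistics from far more data, but the underlying move of modeling existing music and sampling something new from the model is the same.

```python
import random
from collections import defaultdict

# Toy style imitation: learn note-to-note transitions from a "training"
# melody, then sample a new melody in the same style. Purely illustrative.
training_melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]

transitions = defaultdict(list)
for current, following in zip(training_melody, training_melody[1:]):
    transitions[current].append(following)

def generate(start="C", length=16):
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1]) or [start]  # fall back if no data
        melody.append(random.choice(options))
    return melody

print(" ".join(generate()))
```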

Roger Dannenberg, a composer, musician and professor of computer science, art and music at Carnegie Mellon University, has been studying computers and music for four decades. (He also co-developed Audacity, a beloved audio editing app.) In those 40 years, he’s seen a variety of phases and developments in AI and music. Expect plenty more on the way, he says. “I don’t think deep learning is the end of AI. I think it’s actually the beginning,” Dannenberg says. “I think that in five or 10 years, we’ll look back and say, ‘Oh, that was just another phase in the development of AI.’ Whatever technique comes next is going to probably build on deep networks, but it’s going to fix a lot of problems they have, or maybe transcend deep networks to solve what are currently the difficult problems.”

While the latest versions of AI might not be embedded into music creation quite yet, it’s only a matter of time before they’re commonplace, Herndon adds. “Musicians and artists have been using automated tools to make music for decades and may not even think about it,” she says. “Generative AI, deep learning, is not a new idea, but what is new is that it has become very sophisticated and easy for people to use.”

Can you tell when music has been made with AI? Not really—at least, not in the ways it’s being used right now, according to Dannenberg. “If you hear anything that sounds good, it’s almost certainly not entirely made just with AI; a lot of human work was involved,” he says. “But that’s the problem, because elements can be generated by AI, and then carefully selected and manipulated in, and worked with, by hand, by a skilled musician or producer. The limitations of AI are compensated for by human production.”

For a little taste of a completely AI-created song, tune in to the single “Betrayed By This Town” by AI singer/songwriter Anna Indiana. Listen: It’s not good. Without any human involvement, you get some amalgamation of training data—in this case, basic, cookie-cutter instrumentals paired with weirdly patterned, tinny vocals. Can we expect that to get better? Yeah, probably.

Meanwhile, other applications of AI could create new sounds entirely. Berklee College of Music professor and researcher Akito Van Troyer has used AI tech to design new instruments, and hopes for more new sounds to surface in future developments. “Lots of the people who use AI tools for music-making end up just making the same thing that existed before—and I don’t blame them,” Van Troyer says. “The AI technology we have today is based on machine learning, which means you’re training these AI models based on existing data. What it can produce is basically imitations of what it learns is good. Hopefully, we’ve got to come up with techniques to actually create new kinds of sounds and music that leads to musical innovation, beyond just the technological innovation.”

Can AI music be copyrighted? Sort of. Songs made entirely with AI (like the ones made by Anna Indiana) cannot currently get copyright protection in the United States. The music must have human authorship in some way to be eligible for copyright, according to Miriam Lord, the U.S. Copyright Office’s associate register of copyrights and director of public information and education. Last year, the Copyright Office granted roughly 100 copyright registrations for works that used AI, each evaluated on a case-by-case basis, Lord says.

That situation is developing. Last year, the Office announced its AI Initiative to give musicians, thinkers and tech companies a space to comment and help shape policy. More than 10,000 organizations and individuals provided input, which will inform a report on copyright and AI to be released over the course of this year, clarifying some of the muddy waters around AI and music copyright. “We want to find a way forward that promotes rather than inhibits the development of this exciting and valuable technology, while also ensuring that human creativity continues to thrive,” Lord explains. “This is not the first time in its long history that the Office has grappled with technological change and its impact on copyright law, and it won’t be the last.”

Can AI programs use copyrighted music to train? TBD on that one. Currently, there’s a lot of discussion around the precedent set by using copyrighted works to train large language models. In late December, the New York Times sued Microsoft and OpenAI for copyright infringement, alleging that millions of NYT articles were used to train AI chatbots like ChatGPT without permission or payment. The lawsuit could have major implications for the use of copyrighted materials in AI training—and that extends to music, too. However, the use of copyrighted music is already legally tricky in commonplace ways, such as in sampling, Van Troyer points out.

“You can generate music based on existing music. Isn’t that kind of like a new form of sampling?” Van Troyer says. “I think the debate is going to keep going on, and I’ll be interested in where this goes. But we honestly haven’t even solved the problem of sampling, speaking from the legal perspective. I think it’s gonna take a really, really long time to figure out if using AI to make the music touches the copyright or not.”

Some organizations, like ASCAP, have emphasized songwriters’ and composers’ rights while weighing decisions around copyright and AI music. The organization’s board of directors put together six creator-focused principles in response to AI and has used them as a guiding light for advocacy work on consent-based issues in AI training models. “One of our main principles is ‘Human Creators First,’” says Nick Lehman, ASCAP’s chief strategy and digital officer, in an email interview with Paste. “As the source of all creativity, we must prioritize the rights of, and compensation for, our human members. The way we see it, when technology is used in a way that takes aim at human creators, ignores their rights, or devalues their work, that is not innovation.”

ASCAP doesn’t consider AI models’ use of copyrighted music for training to be fair use, but it has demonstrated a collective licensing model that could allow AI companies to license members’ music for training. While the legality of training on copyrighted material remains unsettled, some organizations have already helped creators “opt out” of AI training in the meantime, keeping their works out of training datasets. Herndon and her team created the Spawning program to do just that—mainly focusing on visual art, to start. “Spawning can currently offer opt-outs for anything that has a URL, but music has not been the first priority,” Herndon says. “I know the guys there have been working on it, and did a project to help voice actors opt-out their voices.”

Some streaming sites, like SoundCloud, let musicians upload music made with AI tools through integrations with select programs. At the same time, SoundCloud says it aims to prevent AI companies from crawling the platform and using its music catalog as training data. Spotify, meanwhile, readily uses AI for its AI DJ feature, many of its playlist recommendations and even its individualized, year-end “Spotify Wrapped” roundups.

While AI has made leaps and bounds in some ways, in other ways it’s struggling to catch on to certain elements of music-making, Dannenberg says. Part of that has to do with how machine learning programs work from large data sets. “There’s a bias built into machine learning systems, that you kind of average over all the data to figure out what the truth is,” Dannenberg says. “You can look at this statistically—the distribution within a song is not typical of the distribution overall, across all songs. That makes every song very distinctive. That was just the opposite of what ChatGPT does; it’s steering all of its responses to echo sentences and knowledge that appeared in the past.”
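
Here’s a toy illustration of that statistical point, using made-up lyric fragments rather than real data: within a single song, one hook word can account for a third or more of the lyric, while across a pooled catalog no single word comes close to dominating. A model tuned to match the pooled average will tend to miss exactly that kind of within-song repetition.

```python
from collections import Counter

# Made-up lyric fragments standing in for a catalog of songs (illustrative only).
songs = [
    "hold on hold on hold on to me tonight",
    "dance dance dance all night dance all night long",
    "river take me home take me home take me home",
]

def top_word_share(text):
    """Fraction of all words accounted for by the single most common word."""
    counts = Counter(text.split())
    return counts.most_common(1)[0][1] / sum(counts.values())

for i, song in enumerate(songs, 1):
    print(f"song {i}: top word covers {top_word_share(song):.0%} of the lyric")

# Pool everything together: the most common word now covers a much smaller share.
print(f"pooled catalog: top word covers {top_word_share(' '.join(songs)):.0%}")
```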

In other words, while individual musicians tend to create music that stands out from the fray, AI programs might end up creating music based on the average of all songs. “This idea that songs have a lot of repetition and have very sparse vocabularies is not something that appears in machine generated music—at least in the stuff we’ve looked at,” Dannenberg continues. “But now that we know that, and now that other machine learning people know that, maybe that can be something [we fix].”

Subjectivity in how art is created and perceived is also hard for current AI models to capture. “For example, when someone uses an image model and asks for something ‘beautiful,’ we all tap into the same cheesy definition of beautiful in the model,” Herndon says. “Over time I think there is a lot more research to be done to help translate subjective perception through models. Giving artists the power to craft more subjective models should be a goal for art, as one of the functions of art is to translate subjectivity.”

So what might come next with AI? Well, expect more content—a flood of it. Some musicians and theorists predict that AI will produce an endless stream of creations derived from previously produced media. Herndon calls the AI age “a new paradigm of the internet,” with good and bad—and fascinating—aspects. Sound scary? She doesn’t think so. “I don’t have a desire to return back to some romantic reactionary state or something, but I do feel that once our feeds are saturated with infinite media, people will begin to desire something very new,” Herndon notes. “AI tools offer some ideas to imagine whatever that something else is.”

One thing that won’t change, Herndon thinks, is people’s desire for live, human performances outside of the digital realm. Meanwhile, applications of AI within music—as in any art form—are far-reaching. Dannenberg says some of his students are working on AI programs that could take a person’s voice and render it singing in a foreign language, or performing a song with an advanced technique. He sees educational value in some AI programs, especially for budding vocalists. Van Troyer sees young musicians using new AI tools as just another musical tool, similar to instrument performance or production skills—and he’s excited about the resulting diversification in music. “AI tools are going to support people in different levels, from novice music-makers to professional musicians,” Van Troyer says. “It’s nothing to be absolutely afraid of; it’s just going to be part of your repertoire of music-making, and it’s just going to make you more musically creative, I think.”


Annie Nickoloff is a reporter and editor based in Cleveland, Ohio.
