Pitch-perfect protection

Juan Carlos Quintero was working as a music supervisor for a television show when he got his first inkling that artificial intelligence could cause complications for musicians and other artists.
The show's executive producer asked him about the possibility of generating a score with AI instead of hiring a composer. "He wasn't thinking about the creative standpoint," says Quintero, professor of practice and director of music business and communications in the University of Tennessee's Natalie L. Haslam College of Music. "He was thinking about a computer creating music that could be inserted into his shows and generate royalties for him as the copyright owner."
The executive producer walked back his suggestion when Quintero explained the importance of the human element in music creation. "When you give notes to AI, you cannot have the creative conversations that you would have with a composer about your vision for the music. You can't have drinks with your laptop and discuss that, you know?"
With recent advancements in technology, AI companies can now train their generative models to create high-quality musical compositions. But first, those systems have to learn to compose music by scraping huge amounts of existing music data from online platforms and streaming services—often without the original artists' permission. Unscrupulous model owners can then pass AI-generated music off as their own and profit from it, leaving the actual creators without attribution or monetary compensation for their original work.
In 2024, Assistant Professor of Electrical Engineering and Computer Science Jian Liu, Ph.D. student Syed Irfan Ali Meerza, and Lehigh University faculty member Lichao Sun set out to help artists like Quintero protect their music from generative AI. The team developed HarmonyCloak, a tool that hampers the data-scraping process to prevent AI models from reproducing an artist's unique sound.

"Given how AI has disrupted other creative fields—like visual art, where artists have seen their styles mimicked without permission—I wanted to explore a way to protect musicians from a similar fate," says Meerza.
To do this, HarmonyCloak capitalizes on the human ear's inability to detect sounds that are extremely quiet or that fall outside certain frequency ranges. HarmonyCloak embeds such sounds in music data, making the data unlearnable to generative AI models. The noises are imperceptible to human listeners, so the music's original sound quality is preserved, but the embedded sounds cloak the music. When AI tries to create music based on that protected data, the resulting composition comes out as incoherent gibberish.
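To give a rough sense of the idea, the toy sketch below adds a perturbation roughly 60 dB quieter than the music itself, a level far below what a listener would notice. This is only an illustration of the "quiet noise" principle, not HarmonyCloak's actual algorithm: the real tool shapes its perturbation adversarially against generative models and hides it using psychoacoustic properties of hearing, whereas this sketch simply uses scaled random noise. The `cloak` function and the -60 dB level are hypothetical choices for the example.

```python
import numpy as np

SR = 44_100  # CD-quality sample rate (Hz)

def cloak(audio: np.ndarray, level_db: float = -60.0) -> np.ndarray:
    """Add random noise scaled to `level_db` below the signal's RMS level.

    Toy stand-in for an imperceptible perturbation; the real tool would
    craft the added signal adversarially, not draw it at random.
    """
    signal_rms = np.sqrt(np.mean(audio ** 2))
    noise = np.random.default_rng(0).standard_normal(audio.shape)
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # Rescale so the noise sits exactly `level_db` below the signal.
    noise *= signal_rms * 10 ** (level_db / 20) / noise_rms
    return audio + noise

# A one-second 440 Hz tone as a stand-in for a piece of music.
t = np.arange(SR) / SR
music = 0.5 * np.sin(2 * np.pi * 440 * t)
cloaked = cloak(music)

# The added perturbation is ~60 dB quieter than the music.
delta = cloaked - music
ratio_db = 20 * np.log10(np.sqrt(np.mean(delta ** 2)) /
                         np.sqrt(np.mean(music ** 2)))
print(round(ratio_db, 1))  # → -60.0
```

At -60 dB the perturbation carries one millionth of the music's power, which is why the cloaked file sounds identical to a human while still differing, sample by sample, from the original.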
The researchers have already received hundreds of requests from musicians and composers interested in trying HarmonyCloak. Based on feedback from artists, students, and audiophiles, the team is working on refining the tool with the goal of making it available for free later this year.
"I feel the entire music industry has been waiting for a tool like this for a long time because they all realize the problem with current generative AI models," Liu says. "We have been working very hard to release the tool as a service, so users can upload their music to our website and then download the cloaked version of their music."
The researchers also hope HarmonyCloak can be used to teach students how to protect copyrighted material from AI, a topic Quintero discusses regularly in the classroom.
"Technology can create a smoke screen so we're unable to see who an artist really is," Quintero says. "That's a big discussion in class—authenticity. When we talk about copyrights, the students' understanding shifts based on their knowledge of the stakeholders. Who owns a song, and what does it mean to a copyright owner when that song is on YouTube, on the radio, in a restaurant?"

As AI grows more sophisticated, artists, AI proponents, and policymakers grapple with how to balance creativity, technology, and ethics. Music artists, their representatives, and record labels have begun to sue companies for training AI models on copyrighted recordings or for offering platforms that create music based on user prompts. In 2024, Tennessee enacted the Ensuring Likeness Voice and Image Security (ELVIS) Act to provide legal protection for artists' voices against unauthorized AI replication.
AI is challenging traditional copyright law in two main ways, says Assistant Professor of Law Nick Nugent. First is the question of whether AI-generated works can be copyrighted. Although US copyright law doesn't explicitly state that only humans can create copyrightable works, that's how courts and the US Copyright Office have interpreted it.
In Naruto v. Slater, for example, a court held that photos taken by a monkey were not entitled to copyright protection based on the US Copyright Office's policies that it will not register works produced by nature, animals, or plants. Those same policies also exclude works produced by machines or mechanical processes that operate without creative input from humans.
AI takes this issue into a new frontier. Although copyright doesn't cover purely machine-generated creations, human applicants may be able to obtain protection by demonstrating that the work contains significant amounts of their own personal creativity.
Nugent explains the implications this emerging area of law could have in the realm of music. "If a user merely instructed an AI tool to create new music, that resulting music would likely not be copyrightable for lack of a human author. But if the user selected and arranged various AI-generated elements and/or engaged in an iterative prompting and refinement process with the AI tool, then those actions might provide the threshold level of human input to qualify the AI-generated music for copyright protection."
The second way in which AI is challenging copyright law concerns the doctrine of fair use. AI companies must train their models on vast amounts of data, a process that often involves copying copyrighted content without authorization. The fair use doctrine may permit such copying, particularly when the use transforms the work and doesn't harm the copyright owner's market.

But music copyright holders argue that even transformative uses by AI can harm their markets, particularly because copyright protects an artist's recordings but not the artist's voice or style itself, leaving AI free to clone both to make new music.
"Music artists are concerned that if AI can be asked to create a pop song in the style of Beyoncé, users will have less incentive to pay money for actual Beyoncé music," Nugent says. "Soon, some fear, countless musicians will be out of a job because AI will do just as good of a job for a fraction of the price, or for free—but only after training on the works of true human composers, songwriters, and artists."
Most copyright doctrines were developed long before the advent of generative AI, when such issues could not even be imagined. Now artists and legal experts face the reality of having to find ways to protect their content from generative technology, even if they can't rely on existing intellectual property laws. One strategy, Nugent says, is to release music only on streaming platforms whose terms and conditions explicitly prohibit users from consuming their music for AI training. This enables artists to use contract law to protect themselves in situations that copyright law may not cover.
HarmonyCloak offers music artists another powerful, practical way to defend their creative and economic interests against generative AI. At the same time, the tool leaves room for AI technology to flourish.
"There's nothing wrong with AI models," Liu says. "The problem is when AI companies train their models on copyrighted music without permission."
Quintero agrees. "The incredible new AI technology should be embraced, but it also has a lot of implications in terms of someone's ability to have a career, make a living, and remain active in their industry."
For Meerza, HarmonyCloak represents an exciting chance to make a meaningful impact on the world while still a Ph.D. candidate. "The opportunity to contribute to a solution that could protect creative expression—while also pushing the boundaries of adversarial techniques in AI—is incredibly rewarding. It reinforces why I pursued research in the first place: to address emerging challenges at the intersection of technology and society."
Provided by University of Tennessee at Knoxville