A North Carolina musician pleaded guilty on Thursday in what federal prosecutors call the first criminal case involving AI-assisted music streaming fraud in the United States.
The case could spark more conversation about how artificial intelligence is used and the role the government might play in regulating it.
Michael Smith admitted he used fake songs and fake listeners to steal millions in royalties from legitimate artists, according to the U.S. Attorney’s Office for the Southern District of New York. He pleaded guilty to a single count of conspiracy to commit wire fraud and agreed to forfeit over $8 million. He faces up to five years in prison.
Smith was accused of using AI tools to create hundreds of thousands of low-cost songs. He set up more than 1,000 bot accounts on services like Spotify, Apple Music, Amazon Music, and YouTube Music that were programmed to stream songs on repeat, thereby generating revenue.
Smith estimated that his automated network could get more than 660,000 plays per day, translating into over $1 million in annual royalties. He ran the scam between 2017 and 2024 before a royalty watchdog flagged suspicious activity in his catalog and halted payments.
Rolling Stone reported that Smith spread his faux streams across many tracks and services to make it “more difficult to detect.” A distributor flagged him for possible fraud. Smith claimed in an email that “there is absolutely no fraud going on whatsoever!”
Yet, at the same time, he was emailing partners, saying, “We need to get a TON of songs fast to make this work around the anti-fraud policies these guys are all using now.”
This case is part of a growing problem with AI-generated music. As the Rolling Stone report explained, the practice harms flesh-and-blood artists because streaming services pay them from a shared pool based on total plays. That means Smith’s fake songs “stole millions in royalties that should have been paid to musicians, songwriters, and other rights holders whose songs were legitimately streamed.”
Some experts estimate that as much as 10 percent of all streams could be fake, which costs the industry billions of dollars per year.
Law enforcement officials say Smith’s case is an early example of how AI-enabled fraud is affecting streaming platforms, as scammers use these tools to churn out vast libraries of content and then deploy bots or click farms to manufacture “listens” at scale.
From The Hollywood Reporter:
Streaming fraud has been a rampant issue in the music industry for years, a problem only exacerbated by AI now that fraudsters can quickly generate thousands of songs to flood the zone on streaming services like Spotify and Apple Music. The French music streaming service Deezer previously reported that it’s seeing 60,000 AI songs uploaded to its platform every day, further noting that as much as 85 percent of streams on those tracks are fraudulent.
As The Hollywood Reporter exclusively reported in February, Apple Music doubled its penalties for those caught engaging in streaming fraud, with the company saying AI’s impact on fraud was a factor in the decision.
Meanwhile, the music industry is struggling to figure out how to treat music generated by AI. The technology can already mimic human voices and compositional styles well enough that casual listeners might mistake a fake artist for a real one, though attentive listeners can still usually tell the difference. And the technology is in its infancy; with further advances, it could become much harder to distinguish from the real thing.
The question is: What happens when people aren’t sure whether their favorite artist is even human?

