Fraudulent Streaming Plays: How AI and Bots Are Disrupting the Music Industry

6 minute read


The music industry is facing a new threat: the use of artificial intelligence and bots to artificially inflate plays on streaming platforms like Spotify, Apple Music, or Deezer. This fraudulent practice, which aims to maximize royalty revenues, raises major ethical and economic questions.


The rise of AI fraud in music streaming

With the advent of generative AI tools such as Suno or Udio, music creation has become more accessible. However, this democratization has a downside: some exploit these technologies to produce thousands of tracks at low cost, often of mediocre quality, solely to generate fraudulent royalties. According to Deezer, approximately 18% of the tracks uploaded daily to its platform, more than 20,000 per day, are entirely generated by AI. Although these tracks represent only 0.5% of total plays, nearly 70% of their streams come from bots or streaming farms rather than human listeners.
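Deezer's figures also imply a striking total upload volume. A quick back-of-the-envelope check, using only the numbers cited above:

```python
# Arithmetic check on the Deezer figures quoted in this article:
# if ~20,000 fully AI-generated tracks represent 18% of daily uploads,
# the implied total number of tracks uploaded per day is:

ai_tracks_per_day = 20_000
ai_share = 0.18  # 18% of daily uploads

total_daily_uploads = ai_tracks_per_day / ai_share
print(round(total_daily_uploads))  # roughly 111,000 tracks per day
```

In other words, the platform is absorbing on the order of a hundred thousand new tracks every day, which gives a sense of the scale at which any fraud detection must operate.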

An emblematic case illustrates the scale of the problem. In the United States, Michael Smith, a 52-year-old man, allegedly generated hundreds of thousands of AI songs between 2017 and 2024. Using 1,040 bot accounts distributed across 52 cloud services, he allegedly accumulated $10 million in royalties through billions of fraudulent plays on platforms like Spotify, Apple Music, and YouTube. This type of fraud, described as the “first criminal case” involving AI in music streaming, highlights the flaws in current systems.


The consequences for artists and the industry

These practices have a devastating impact on legitimate artists. Royalties, an essential source of income for musicians, are diluted by fraud. According to the IFPI, the International Federation of the Phonographic Industry, fraudulent streaming “steals money that should go to authentic artists”. Independent musicians, already facing fierce competition, are particularly affected. For example, Australian musician Paul Bender discovered that fraudulent AI-generated tracks had been attached to his Spotify profile, damaging his reputation and income.

Furthermore, the proliferation of AI-generated music complicates the discovery of authentic talent. Recommendation algorithms, influenced by artificial plays, can favor fraudulent content at the expense of original creations. This creates a vicious cycle where the visibility of honest artists decreases, while fraudsters thrive.


Responses from streaming platforms

Faced with this crisis, platforms are responding. Deezer, for example, has taken radical measures, introducing a detection system it says can identify 100% of content generated by tools like Suno and Udio. Tracks identified as fraudulent are excluded from editorial playlists and algorithmic recommendations, and their royalties are blocked. Since June 2025, Deezer has also displayed “AI-generated content” labels to inform listeners.

Other platforms, such as Spotify, acknowledge the problem but adopt a more nuanced approach. Gustav Söderström, co-president of Spotify, defends AI as a tool that “enhances creativity” by enabling more people to produce music. However, he insists on the need to respect copyright, a red line for the platform. Despite this, fraud detection systems remain imperfect, as fraudsters adopt subtle strategies, such as slightly inflating plays on many tracks to escape detection.
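The evasion strategy described above can be illustrated with a minimal sketch. All numbers and thresholds here are invented for illustration and do not reflect any platform's actual detection rules:

```python
# Sketch (hypothetical threshold): why a naive per-track play threshold
# misses "low and slow" inflation spread across many tracks.

DAILY_THRESHOLD = 1_000  # assumed per-track daily alert threshold

def flagged_tracks(plays_per_track):
    """Return the tracks whose daily play count exceeds the threshold."""
    return [t for t, plays in plays_per_track.items() if plays > DAILY_THRESHOLD]

# Concentrated fraud: 100,000 fake plays on a single track is easily flagged.
concentrated = {"track_0": 100_000}

# Distributed fraud: the same 100,000 plays spread over 200 tracks,
# 500 each, so every individual track stays under the threshold.
distributed = {f"track_{i}": 500 for i in range(200)}

print(flagged_tracks(concentrated))  # ['track_0']
print(flagged_tracks(distributed))   # []
```

The same total volume of fraudulent plays goes entirely undetected once it is thinly distributed, which is why platforms must look beyond individual track counts.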


Legal and ethical challenges

The fight against AI fraud raises complex questions. From a legal perspective, AI tools like Suno and Udio face lawsuits for copyright infringement. Major record labels, such as Universal Music Group, Warner Music Group, and Sony Music Group, accuse these platforms of exploiting the works of renowned artists, from Chuck Berry to Mariah Carey, without authorization. These disputes could redefine the rules governing AI use in music.

Ethically, the use of bots to manipulate plays raises the question of transparency. As Alexis Lanternier, CEO of Deezer, points out, AI is beneficial when used by artists, but becomes problematic when exploited by bots or bad actors. The need to protect creators while encouraging technological innovation remains a major challenge.


Toward a more secure future for music streaming

To stem this scourge, the music industry must intensify its efforts. Platforms could invest in more advanced detection technologies, capable of identifying abnormal behaviors at scale. Furthermore, increased collaboration between streaming services, digital distributors, and judicial authorities could help dismantle organized fraud networks.
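One direction such detection could take is aggregating behavior at the account level rather than per track. The sketch below is a hedged illustration with invented data and thresholds, not any platform's actual system:

```python
# Hedged sketch (invented thresholds): flagging bot-like accounts by
# combining two account-level signals that are hard to fake cheaply.

MAX_HUMAN_PLAYS_PER_DAY = 600  # assumed ceiling: ~20 hours of 2-minute tracks
MIN_ARTIST_DIVERSITY = 0.05    # assumed floor: distinct artists / total plays

def is_suspicious(daily_plays, artists_played):
    """Flag an account-day as bot-like if it plays more than a human
    plausibly could, or loops a tiny set of artists all day."""
    if daily_plays > MAX_HUMAN_PLAYS_PER_DAY:
        return True
    diversity = len(set(artists_played)) / max(daily_plays, 1)
    return diversity < MIN_ARTIST_DIVERSITY

# A human-looking account: 40 plays spread across 15 artists.
human = is_suspicious(40, [f"artist_{i % 15}" for i in range(40)])

# A bot-farm account: 500 plays looping just 2 artists.
bot = is_suspicious(500, [f"artist_{i % 2}" for i in range(500)])

print(human, bot)  # False True
```

Real systems would combine many more signals (IP clustering, play timing, skip behavior) and machine-learned models, but the principle is the same: abnormal aggregate behavior is easier to spot than any single inflated track.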

Finally, raising listener awareness about the importance of supporting authentic artists is crucial. By valuing human creations and reporting fraudulent content, platforms can restore trust in the music streaming ecosystem.


The use of AI and bots to defraud streaming platforms represents a growing threat to the music industry. While players like Deezer are taking promising measures, the battle is far from won. By combining technological innovation, legal regulation, and awareness, the industry can protect artists and guarantee a fair future for online music.


