An alarming report circulated online in the first week of 2021: Spotify had reportedly pulled “some 750,000 songs” off its platform due to evidence of streaming fraud, according to an entertainment attorney, who added that “the vast majority” of the songs “appear[ed] to have used Distrokid for distribution.” Furious artists ripped into both the streamer and the distributor on social media, claiming that they had never been involved with fraudulent streams and that they didn’t know why their music had been taken down.
Spotify disputed both of those assertions, however, saying that the number of tracks removed was far fewer than 750,000 and that music from a variety of distributors was impacted. DistroKid founder Philip Kaplan wrote, “These takedowns were distributor-agnostic and affected music from all distributors (not just DistroKid).” Despite these rebuttals, the initial claims continued to circulate for months.
More than two years later, a variation of this episode played out again. On May 1, Boomy, a music-tech company that allows users to create songs with help from artificial intelligence tools, posted to its Discord that new uploads were paused and “certain catalog releases” had been pulled from Spotify due to “potentially anomalous activity.”
The Boomy post was measured, the response less so: The company’s statement was initially viewed as confirmation that AI was causing more trouble amid a wave of anti-AI sentiment in the music industry. Then the narrative changed: Spotify said the “anomalous activity” was related to streaming fraud, not the fact that Boomy’s tools rely on artificial intelligence. And a few days later, it turned out that Spotify had pulled down more music, unconnected to Boomy or AI, due to evidence of manipulation as part of a routine sweep.
The whole incident now seems to be less about any one company and more a natural part of streaming services’ ongoing efforts to prevent fraud from impacting payouts on their platforms. (Spotify has consistently said over the years that “stream manipulation is an industry-wide issue that” it treats “very seriously.”)
These episodes show the challenge of accurately reporting on the murky world of streaming fraud, where even the most basic information — how many tracks were impacted, what criteria were used to determine they were manipulated and how those figures compare to overall fraud levels — is often kept out of reach by tech companies. But combating streaming fraud is a never-ending game of whack-a-mole that takes place, to varying degrees, across all streamers and all distributors, and focusing on any single mole can obscure the larger context. As Christine Barnum, chief revenue officer at the distributor CD Baby, recently told Billboard, “nobody’s immune” to this type of fraud.
Boomy says roughly 7% of the music it had on Spotify was pulled down because those songs were targeted with bot activity in April; after a short pause, Boomy users were able to resume uploading new songs to Spotify as of May 5. For comparison’s sake, a Deezer executive said last year that “7% of the volume of daily streams [on Deezer] is now detected as fraudulent.” Merlin, which handles digital licensing for many prominent independent labels and distributors, briefly had fraud levels near 10% (from music on the ad-supported tier of Spotify) in 2020.
Talking publicly about streaming fraud was once viewed as “airing dirty secrets,” one executive told Billboard recently. But this is changing: Leaders at SoundCloud, Pandora and Napster all spoke about their efforts to fight fraud on their respective platforms at a Music Biz conference panel in 2022. Last month, Umeadi Onyekwelu, music licensing lead at the African streaming service Mdundo, wrote that “looking at the music industry in Nigeria, one of the biggest problems is stream farming, which has become more widespread and prominent over the years.”
This is the day-to-day reality of the modern music ecosystem. Last year, Deezer said that the company detects suspicious activity on 45,000 accounts a day, and Spotify sends regular reports to major rights holders about the level of fraud detected on their catalogs. (The fraudulent plays identified in those reports were caught, which means they did not impact payouts.) A Spotify spokesperson noted in a statement that the platform “consistently removes products designed to game the system in order to generate royalties.”
Alex Mitchell, Boomy’s CEO, said in an interview with Billboard this week that “our review team spends a huge amount of time on that issue [protecting the platform from fraud]. We have much stricter policies, frankly, than many other distributors. We have systems that alert us if we think something might be suspicious. And then we have an investigative process that our team will go through to decide if they need to hold the revenue or contact a DSP. We’re also working with industry-leading fraud detection companies to improve our systems as we scale.”
Even so, anyone’s music can be targeted with bots on a streaming platform. Morgan Hayduk, co-founder/co-CEO of Beatdapp, a company that builds software to detect and mitigate fraud, told Billboard earlier this year that one under-discussed aspect of fraud was the “collateral damage” it causes. “Fraudsters often employ user accounts on the streaming platform to stream a mix of the target’s content alongside other popular artists to evade detection,” he explained — using bots to play a melange of music from legitimate stars, for example, alongside the track they are trying to boost. (Beatdapp works with Boomy but directed questions about any “anomalous activity” back to Boomy.)
This means that, even when fraud is identified, it can be difficult to determine its source. “When it comes to who is responsible, it’s hard to pinpoint,” Ludovic Pouilly, senior vp of institutional and music industry relations at Deezer, told Billboard in January. “Distributors might say it’s the labels. The labels might say it’s the management. And artists themselves might tell you it’s the competition who’s trying to negatively impact their reputation.”
At a time when more platforms are openly discussing fraud, the Boomy announcement on Discord surely got so much attention because of the company’s connection to artificial intelligence, a topic that currently appears to have many music executives quaking in their boots. (“People are over-panicking a little bit,” one veteran music tech executive recently told Billboard.) Some of that concern is related to major label market share dilution, which impacts payouts from streaming services. The music industry is also nervous about AI technology’s potential for copyright infringement, and the extent to which it could possibly replace musicians and songwriters.
But as Mitchell points out, streaming fraud “existed for a long time before AI was on the scene and before we were on the scene.” And fighting fraud was already a tech-based arms race between those who want to protect the streaming services’ royalty pools and those who want to extract money from them.
A French study of streaming fraud released in January noted that “the imagination of hackers is rich and evolving, to the point that the countermeasures imagined and implemented by the platforms in the first place — but also the distributors and music rights holders — must, not only constantly evolve and improve, but also anticipate any counter-offensive from fraudsters.”
Perhaps Boomy summed the situation up best when it initially announced that Spotify had detected potential evidence of fraud on some releases. “As the music industry continues to navigate the use of bots and other types of potentially suspicious activity,” the company wrote, “these pauses are likely to happen more regularly and across a wider set of platforms.”