Spotify removes AI songs falsely attributed to deceased country artists

Spotify removed AI-generated songs falsely attributed to the deceased country artists Blaze Foley and Guy Clark after fans and record labels flagged the fraudulent uploads. The tracks, which appeared on the artists’ official pages complete with cover art and copyright credits, slipped through Spotify’s content verification systems via TikTok’s music distributor SoundOn, highlighting a troubling escalation in AI-generated content fraud that could undermine artist legacies and streaming platform integrity.

What happened: Two AI-generated country songs appeared on Spotify under the names of artists who died decades ago, presented as official releases.

  • “Together,” attributed to Blaze Foley, who was shot and killed in 1989, appeared on his official artist page with a professional-looking presentation, including cover art and copyright information.
  • A second fake track was uploaded under the name of Guy Clark, who died in 2016.
  • Both songs carried copyright tags listing “Syntax Error” as the owner, though little is known about this company.
  • The tracks were removed after being flagged by fans and the artists’ record labels.

The big picture: This represents a significant escalation beyond typical AI music uploads, which usually appear under fictional artist names with clear AI attribution.

  • These fraudulent tracks were embedded directly into real artists’ discographies without permission from families or labels.
  • Unlike AI tribute songs or experiments, these were presented as legitimate posthumous releases.
  • The incident exposes gaps in major streaming platforms’ ability to verify content authenticity before publication.

How it happened: Spotify attributed the uploads to SoundOn, TikTok’s music distribution service, revealing weaknesses in the content verification chain.

  • Spotify processes tens of thousands of new tracks daily, relying heavily on automation for efficiency.
  • Current systems may check only technical requirements rather than investigate a track’s origins or authenticity.
  • Spotify stated the content “violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead.”

Why this matters: The incident raises critical questions about artist verification and royalty protection in the AI era.

  • When AI can manufacture fake songs under deceased musicians’ names without immediate detection, it threatens both artistic integrity and economic rights.
  • The problem extends beyond Spotify—Apple Music and YouTube have also struggled with deepfake content filtering.
  • As AI tools like Suno and Udio make song generation increasingly accessible, platforms may face growing pressure to implement stronger verification measures.

What they’re saying: Spotify emphasized its commitment to combating fraud while acknowledging the scale of the challenge.

  • “We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify,” the company stated.
  • The platform noted that such deceptive content violates policies against impersonation and misleading representation.

Looking ahead: The incident highlights the need for better verification systems and clearer AI content labeling as generative music tools become more sophisticated.

  • Potential solutions include enhanced verification processes, AI-generated content watermarks, and improved tagging systems.
  • However, platforms prioritizing streamlined uploads may resist additional verification steps that could slow the publishing process.
  • The music industry’s response to this fraud could influence how AI-generated content is regulated across other creative sectors.