Wired, Business Insider publish AI-generated articles under fake bylines

Renowned tech publications Wired and Business Insider were caught publishing AI-generated articles under the fake byline “Margaux Blanchard,” exposing how easily sophisticated AI content can infiltrate mainstream journalism. The incident highlights a growing crisis in which AI-generated “slop” erodes trust in online media, with human editors at reputable outlets falling victim to increasingly convincing automated content.

What happened: Multiple publications discovered they had been duped by AI-generated articles submitted under a fictitious journalist’s name.

  • Wired published “They Fell in Love Playing Minecraft. Then the Game Became Their Wedding Venue,” which referenced a non-existent 34-year-old ordained officiant in Chicago.
  • Business Insider ran two personal essays, including one about remote work and parenting that contained generic, AI-typical language patterns.
  • Other affected outlets included Index on Censorship, Cone Magazine, and SFGate, which still has a Disney superfandom article live that mentions a fake TikTok creator named “Kayla Reed.”

The telltale signs: Editors and fact-checkers identified several red flags that revealed the AI-generated nature of the content.

  • Jacob Furedi from Dispatch, a news publication, received a pitch about a Colorado town serving as “the world’s most secretive training grounds for death investigation” that couldn’t be independently verified.
  • Articles contained familiar sentence structures typical of AI writing and referenced people who don’t actually exist.
  • One piece included the generically AI-sounding conclusion: “There is no perfect time to become a parent. There is only the time that life gives you and what you choose to do with it.”

Why this matters: The incident represents a significant threat to journalism’s credibility as AI-generated content becomes increasingly sophisticated.

  • University of Kansas research shows that readers’ trust in news sources and perceptions of their credibility drop when AI involvement is known.
  • Separate research by Trusting News, an independent organization focused on media credibility, found that AI disclosures by newsrooms can also hurt trust.
  • Publications like Wired, which regularly covers AI’s negative impact on content quality, found themselves victims of the very phenomenon they report on.

The financial stakes: The scam potentially involved substantial sums, as publications like Wired sometimes pay thousands of dollars for in-depth reporting.

Industry response: Affected publications acted swiftly to remove the fraudulent content and issued editor’s notes explaining their decisions.

  • Business Insider’s note stated the essay “didn’t meet Business Insider’s standards.”
  • Wired explained that its article “does not meet our editorial standards.”
  • Even aggregated content was pulled, with Mashable removing their coverage that had praised Wired’s piece as a “charming feature.”

What editors are saying: Industry professionals report being overwhelmed by AI-generated pitches.

  • “I’m already being inundated by pitches which are clearly written by ChatGPT,” Furedi told Press Gazette, calling it a “terrible” trend that’s “symptomatic of the direction that certain types of journalism are going in.”
