Trump’s AI bias crackdown targets tech giants with $200M federal contracts

President Donald Trump signed an executive order requiring companies with US government contracts to make their AI models “free from ideological bias,” but experts warn the vague requirements could allow the administration to impose its own worldview on tech companies. The directive targets major AI developers, including Amazon, Google, Microsoft, and Meta, that hold federal contracts worth hundreds of millions of dollars, and it raises questions about the technical feasibility and global implications of politically steering AI systems.

What you should know: Trump’s AI Action Plan specifically targets what the administration calls “woke” AI bias in federal contracting.

  • The plan recommends updating federal guidelines “to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”
  • The National Institute of Standards and Technology, a federal agency that develops technology standards, would revise its AI risk management framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”
  • Major tech companies holding federal AI contracts include Amazon, Google, Microsoft, and Meta, with recent Department of Defense contracts worth up to $200 million each awarded to Anthropic, Google, OpenAI, and Elon Musk’s xAI.

The technical challenge: Researchers say creating truly unbiased AI models may be impossible given how large language models are trained.

  • Popular AI chatbots from both US and Chinese developers express surprisingly similar views, aligning more closely with US liberal voter stances on political issues such as gender pay equality and transgender rights, according to research by Paul Röttger at Bocconi University in Italy.
  • This tendency likely stems from training AI models on internet data and general principles like “incentivising truthfulness, fairness and kindness,” rather than developers specifically programming liberal stances.
  • While developers can “steer the model to write very specific things about specific issues” through prompt refinement, this won’t comprehensively change a model’s default stance and implicit biases, Röttger explains.
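The prompt-level steering Röttger describes can be illustrated at the request level. The sketch below assumes an OpenAI-style chat schema (`system`/`user` message roles); the model name and steering text are hypothetical, and no model is actually called. The point is structural: the steering instruction is a layer prepended to each request, so it shapes responses on the targeted topic but leaves the model's weights, and therefore its implicit biases, untouched.

```python
# Minimal sketch of prompt-level "steering", assuming an OpenAI-style chat
# request schema. No API call is made; this only shows where the steering
# instruction sits relative to the user's message.

def build_steered_request(user_message: str, steering_instruction: str) -> dict:
    """Compose a chat request whose system prompt steers the model's stance."""
    return {
        "model": "example-llm",  # hypothetical model name
        "messages": [
            # The system message imposes a stance for this topic only; it does
            # not change the model's training data or learned default behavior.
            {"role": "system", "content": steering_instruction},
            {"role": "user", "content": user_message},
        ],
    }

request = build_steered_request(
    "What is your view on policy X?",
    "When asked about policy X, present arguments from all sides without endorsement.",
)
print(request["messages"][0]["role"])
```

Because the steering lives in per-request text rather than in the model itself, a developer would need such an instruction for every contested topic, which is why Röttger argues this approach cannot comprehensively change a model's default stance.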

Why this matters: The policy creates a contradiction between eliminating bias and potentially introducing new ideological constraints.

  • “AI systems cannot be considered ‘free from top-down bias’ if the government itself is imposing its worldview on developers and users of these systems,” says Becca Branum at the Center for Democracy & Technology, a public policy nonprofit.
  • US tech companies could alienate global customers if they align commercial AI models with the Trump administration’s worldview, creating what Röttger calls a potentially “very messy” situation.
  • The requirements are “impossibly vague standards” that are “ripe for abuse,” according to Branum.

What they’re saying: Experts emphasize the inherent subjectivity in defining neutrality and bias.

  • “The suggestion that government contracts should be structured to ensure AI systems are ‘objective’ and ‘free from top-down ideological bias’ prompts the question: objective according to whom?” says Branum.
  • “As of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems,” explains Jillian Fisher at the University of Washington.
  • Fisher suggests potential solutions could include sharing more information about model biases publicly or building “deliberately diverse models with differing ideological leanings.”

Notable context: The inclusion of xAI in the recent Defense Department contracts drew attention, given Elon Musk’s role leading Trump’s DOGE task force and the fact that xAI’s chatbot Grok recently made headlines for expressing racist and antisemitic views and describing itself as “MechaHitler.”

Trump's order targeting 'woke' AI may be impossible to follow
