×
Security flaw in GitLab’s AI assistant lets hackers inject malicious code
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Security researchers have uncovered a significant vulnerability in GitLab’s Duo AI developer assistant that allows attackers to manipulate the AI into generating malicious code and potentially leaking sensitive information. This attack demonstrates how AI assistants integrated into development platforms can become part of an application’s attack surface, highlighting new security concerns as generative AI tools become increasingly embedded in software development workflows.

The big picture: Security firm Legit demonstrated how prompt injections hidden in standard developer resources can manipulate GitLab’s AI assistant into performing malicious actions without user awareness.

  • The attack exploits Duo’s tendency to follow instructions embedded in project content like merge requests, commits, bug descriptions, and source code.
  • Researchers successfully induced the AI to add malicious URLs, exfiltrate private source code, and leak confidential vulnerability reports.

Technical details: Attackers can conceal malicious instructions using several sophisticated techniques that bypass traditional security measures.

  • Hidden Unicode characters can mask instructions that remain invisible to human reviewers but are processed by the AI.
  • Duo’s asynchronous response parsing creates a window where potentially dangerous content can be rendered before security checks are completed.

GitLab’s response: Rather than trying to prevent the AI from following instructions completely, GitLab has focused on limiting potential harm from such attacks.

  • The company removed Duo’s ability to render unsafe tags that point to non-gitlab.com domains.
  • This mitigation strategy acknowledges the fundamental challenge of preventing LLMs from following instructions while maintaining their functionality.

Why this matters: The vulnerability exposes a broader security concern about AI systems that process user-controlled content in development environments.

  • As generative AI becomes more deeply integrated into software development tools, new attack surfaces are emerging that require specialized security approaches.
  • Organizations implementing AI assistants must now consider these systems as part of their application’s attack surface and treat input as potentially malicious.
Researchers cause GitLab AI developer assistant to turn safe code malicious

Recent News

Lights, camera, robots! Shotoku unveils Swoop cranes to automate broadcast studios

Safety sensors create protective "bubbles" to prevent collisions in busy studio environments.

Companies quietly rehire freelancers to fix subpar AI work

A new freelance economy emerges around polishing machine-generated content.

Tesla bets on humanoid robots for 80% of its $25T future as EV sales drop 13%

Tesla's U.S. market share hits lowest point since 2017 as robot ambitions ramp up.