Iffy ethics as eufy pays users $40 to film fake package thefts for AI training

Anker’s camera brand eufy paid users up to $40 per camera to submit footage of package theft and car break-ins to help train its AI detection systems in late 2024. When users lacked real criminal activity to film, eufy explicitly encouraged them to stage fake thefts, suggesting they position themselves to be captured by multiple cameras simultaneously for maximum efficiency.

Why this matters: The approach highlights the creative—and potentially problematic—methods companies use to gather training data for AI systems, raising questions about whether synthetic data can effectively replace authentic criminal behavior patterns.

How the program worked: Users could earn $2 for each approved video clip showing package theft or attempted car break-ins, with a maximum of 10 videos per criminal activity type per camera.

  • The company solicited these “donations” through its community forums as part of efforts to improve AI recognition of suspicious behavior.
  • When real crime footage wasn’t available, eufy actively encouraged users to fake criminal acts, stating: “Don’t worry, you can even create events by pretending to be a thief and donate those events.”

The technical rationale: Machine learning systems focus on visual patterns rather than intent, making staged criminal behavior theoretically equivalent to authentic footage for training purposes.

  • Eufy suggested users could “complete this quickly” by having “one act captured by your two outdoor cameras simultaneously, making it efficient and easy.”
  • The approach reflects AI’s pattern-matching nature—these systems excel at recognizing visual similarities but don’t truly understand the difference between real and staged criminal behavior.
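A minimal sketch of why staged footage can work as training data: a classifier operates only on visual features, so a staged theft that looks like a real one is indistinguishable to the model. The feature names and nearest-neighbor setup below are illustrative assumptions, not eufy's actual system.

```python
import numpy as np

# Hypothetical feature vectors standing in for embeddings of camera clips,
# e.g. [person_near_porch, picks_up_package, walks_away_quickly].
train_clips = np.array([
    [1.0, 1.0, 1.0],   # real package theft
    [1.0, 0.0, 0.0],   # resident checking the porch
    [0.0, 0.0, 0.0],   # empty porch
])
train_labels = ["theft", "normal", "normal"]

def nearest_neighbor_label(clip, clips, labels):
    """Label a clip by its closest training example (pure pattern matching)."""
    dists = np.linalg.norm(clips - clip, axis=1)
    return labels[int(np.argmin(dists))]

# A staged theft produces the same visual pattern as a real one, so the
# classifier assigns it the same label -- the model has no notion of intent.
staged_theft = np.array([1.0, 1.0, 1.0])
print(nearest_neighbor_label(staged_theft, train_clips, train_labels))  # theft
```

The classifier never sees whether the "thief" owned the package; only the pixels matter, which is exactly what makes staged donations usable as labeled examples.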

Potential concerns: While the crowdsourcing method appears cost-effective, it remains an open question whether systems trained on authentic footage would perform better or produce fewer false positives.

  • The reliance on staged scenarios could potentially impact the AI’s ability to accurately detect real criminal behavior in varied real-world conditions.
  • The effectiveness of this training approach remains unclear; users have yet to fully evaluate the AI improvements eufy has rolled out over the past six months.
