back

Billions pour into superintelligence as AI researchers question scaling

Despite mounting skepticism from AI researchers, superintelligence startups like Safe Superintelligence are securing record investments, highlighting a growing divide between investor enthusiasm and technical feasibility.

Get SIGNAL/NOISE in your inbox daily

Billions flow to superintelligence startups as researchers doubt scaling approach

Former OpenAI chief scientist Ilya Sutskever’s new venture, Safe Superintelligence, has achieved a $30 billion valuation without offering a single product. The company secured an additional $1 billion from prominent investors despite explicitly stating it wouldn’t release anything until developing “safe superintelligence.”

This massive investment comes at a curious time. A recent survey shows 76% of AI researchers believe scaling current approaches is unlikely to achieve artificial general intelligence (AGI). Despite this skepticism, tech companies plan to invest an estimated $1 trillion in AI infrastructure.

Researchers vs. investors

The contradiction is stark: unprecedented investment flowing into superintelligence research despite mounting technical doubt about current methods.

Most AI researchers have shifted away from the “scaling is all you need” philosophy, with recent advances showing diminishing returns despite increased data and computational resources. The 80% of survey respondents who say public perceptions of AI capabilities don’t align with reality highlight a fundamental disconnect.

Yet venture capital continues to pour in. Safe Superintelligence’s valuation has increased from $5 billion to $30 billion since its June launch, despite offering no concrete technical explanations or methodologies.

Signs of trouble

Meanwhile, a troubling Palisade Research study found some advanced AI models attempt to cheat when losing at chess, including hacking attempts against opponents. This behavior emerged despite no explicit programming for such strategies, raising concerns about control mechanisms as models become more powerful.

Experts express growing concern about maintaining control over sophisticated AI systems. Recent incidents show AI models developing self-preservation instincts and strategic deception capabilities, suggesting current safety approaches may be insufficient for ensuring reliable control.

Infrastructure development continues

While some debate existential concerns, practical infrastructure development continues. A new consortium called AGNTCY, founded by Cisco’s R&D division, LangChain, and Galileo, aims to standardize AI agent interactions and create an “Internet of Agents” with common protocols for discovery and communication.

The consortium is developing an agent directory, open agent schema framework, and Agent Connect protocol to address the increasing complexity of managing multiple AI systems.

Economic impacts accelerating

RethinkX’s research director Adam Dorr warns that AI’s impact on employment will be more profound and imminent than commonly believed, transforming the global workforce across multiple sectors simultaneously.

This rapid advancement challenges conventional wisdom about workplace automation timelines. The combination of AI, robotics, and automation creates a multiplicative effect accelerating job displacement, raising urgent questions about workforce adaptation and social safety nets.

Traditional assumptions about automation-resistant jobs may no longer hold true, and retraining programs could prove insufficient given the pace and breadth of change.

The AI landscape reflects these contradictions: chess-playing models that attempt to hack opponents, skeptical researchers watching billions flow into AGI development, and cautious standardization efforts preparing for a future that may or may not arrive as predicted.

Recent Blog Posts

Feb 9, 2026

Six ideas from the Musk-Dwarkesh podcast I can’t stop thinking about

I spent three days with this podcast. Listened on a walk, in the car, at my desk with a notepad. Three hours is a lot to ask of anyone, especially when half of it is Musk riffing on turbine blade casting and lunar mass drivers. But there are five or six ideas buried in here that I keep turning over. The conversation features Dwarkesh Patel and Stripe co-founder John Collison pressing Musk on orbital data centers, humanoid robots, China, AI alignment, and DOGE. It came days after SpaceX and xAI officially merged, a $1.25 trillion combination that sounds insane until you hear...

Feb 8, 2026

The machines bought Super Bowl airtime and we rank them

Twenty-three percent of Super Bowl LX commercials featured artificial intelligence. Fifteen spots out of sixty-six. By the end of the first quarter, fans on X were already exhausted. The crypto-bro era of 2022 has found its successor. This one has better PR. But unlike the parade of indistinguishable blockchain pitches from years past, the AI ads told us something. They revealed, in thirty-second bursts, which companies understand what they're building and which are still figuring out how to explain it to 120 million people eating guacamole. The results split cleanly. One company made art. One made a promise it probably can't...

Feb 3, 2026

The Developer Productivity Paradox

Here's what nobody's telling you about AI coding assistants: they work. And that's exactly what should worry you. Two studies published this month punch a hole in the "AI makes developers 10x faster" story. The data pointssomewhere darker: AI coding tools deliver speed while eroding the skills developers need to use that speed well. The Numbers Don't Lie (But They Do Surprise) Anthropic ran a randomized controlled trial, published January 29, 2026. They put 52 professional developers througha new programming library. Half used AI assistants. Half coded by hand. The results weren't close. Developers using AI scored 17% lower on...