Meta AI did something WILD again… wtf is Next Concept Prediction?

Meta AI's concept models unlock human-like thinking

In the rapidly evolving AI landscape, Meta has quietly introduced a breakthrough approach that could fundamentally change how large language models think. Their recent research on Next Concept Prediction reveals a radical shift from word-based language models to concept-based thinking—potentially bridging the gap between AI's computational processes and human-like reasoning.

The breakthrough in Meta's approach lies in training models to think conceptually rather than just processing language sequentially.

  • From words to concepts: Meta's Large Concept Models (LCMs) overcome language limitations by training models to think in abstract concepts alongside words.
  • Repurposing interpretability tools: They cleverly transform Sparse Autoencoders (SAEs) from analysis tools into active training components that guide model development.
  • Next Concept Prediction: Similar to next token prediction, the model learns to anticipate upcoming concepts, creating a feedback loop where concepts guide token generation.
  • Efficiency gains: The approach saves up to 21.5% of training tokens while delivering similar or better performance across benchmarks.
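The mechanism in the bullets above can be illustrated with a minimal numpy sketch. This is not Meta's implementation; the dimensions, the frozen encoder matrix, and the loss weighting `alpha` are all illustrative assumptions. The idea it shows: a frozen Sparse Autoencoder turns the next position's hidden state into sparse concept targets, and the model is trained on a joint loss of next-token prediction plus next-concept prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes chosen for illustration only (not Meta's actual dimensions).
d_model, n_concepts, vocab = 16, 32, 50

# A frozen SAE encoder: maps a hidden state to sparse concept activations.
W_enc = rng.normal(size=(d_model, n_concepts))

def sae_concepts(h, top_k=4):
    """Encode a hidden state into concepts, keeping only the top-k activations."""
    acts = np.maximum(h @ W_enc, 0.0)        # ReLU concept activations
    acts[np.argsort(acts)[::-1][top_k:]] = 0.0  # sparsify: zero all but top-k
    return acts

# Hypothetical model heads: one for next-token, one for next-concept prediction.
W_tok = rng.normal(size=(d_model, vocab))
W_con = rng.normal(size=(d_model, n_concepts))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_loss(h_t, h_next, next_token, alpha=0.1):
    """Next-token cross-entropy plus an auxiliary next-concept prediction term."""
    token_loss = -np.log(softmax(h_t @ W_tok)[next_token])
    concept_target = sae_concepts(h_next)    # concepts of the *upcoming* position
    concept_pred = h_t @ W_con               # model's guess at those concepts
    concept_loss = np.mean((concept_pred - concept_target) ** 2)
    return token_loss + alpha * concept_loss

h_t, h_next = rng.normal(size=d_model), rng.normal(size=d_model)
loss = joint_loss(h_t, h_next, next_token=7)
```

The feedback loop the paper describes falls out of the second term: because the model is penalized for mispredicting upcoming concepts, its hidden states are pushed to encode them, which in turn conditions the tokens it generates.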

The conceptual revolution

The most insightful aspect of Meta's research is the fundamental paradigm shift in how AI systems process information. Rather than retrofitting "thinking" capabilities onto language-trained models (as seen in previous approaches like Huginn or Coconut), Meta builds concept-awareness directly into the training process. This creates a more coherent foundation for abstract reasoning.

This matters because it addresses one of the most persistent limitations in current AI—the inability to maintain conceptual consistency across long outputs. By embedding conceptual understanding from the ground up, these models can potentially avoid the hallucinations and reasoning breakdowns that plague even the most advanced language models today.

Beyond the research paper

What Meta's paper doesn't fully explore is the potential impact on multimodal AI systems. The conceptual approach seems particularly well-suited for bridging different modes of perception. For example, a medical AI using this architecture could maintain conceptual coherence when analyzing both patient notes and medical imaging simultaneously—maintaining the abstract concept of "inflammation" regardless of whether it's described in text or visible in an MRI.

The business implications are equally significant. Today's enterprises struggle with AI systems that generate plausible-sounding but factually inconsistent outputs.
