Meta’s new AI lab ships its first internal models — now comes the hard part

Meta says its newly formed Meta Superintelligence Labs has delivered its first high-profile AI models internally this month—an early milestone in Mark Zuckerberg’s broader effort to reboot the company’s AI momentum and compete at the very top of the field.

The signal here isn’t “Meta solved AI.” It’s that the company has moved from reorg mode to shipping mode—even if the products aren’t public yet.

What Meta just confirmed (and what it didn’t)

Speaking at the World Economic Forum in Davos, CTO Andrew Bosworth described the new lab’s early outputs as “very good,” noting the team is roughly six months into the work.

He didn’t name the models, but prior reporting has pointed to at least two internal projects:

  • a text-oriented model said to be codenamed “Avocado”
  • an image/video model said to be codenamed “Mango”

Even without confirmed names, the key takeaway is that Meta is building multiple model tracks at once—a practical approach in a world where “one model to rule them all” is increasingly unrealistic.

Training is the milestone — post-training is the battle

Bosworth’s most important point wasn’t about bragging rights. It was about the grind that comes after the model finishes training:

There’s “a tremendous amount of work to do post-training,” he said, to make these systems usable internally and by consumers.

That’s the part casual observers miss. Training creates a capable engine; post-training turns it into a vehicle people can actually drive. That includes:

  • safety tuning and refusal behavior
  • tool use and reliability improvements
  • latency and cost optimization
  • evaluation, guardrails, and monitoring
  • packaging models into real products (not demos)

In consumer AI, the difference between “impressive” and “useful” is usually everything after training.

Why this matters: Meta is trying to claw back narrative control

Meta’s AI efforts have been closely watched ever since Zuckerberg reshuffled leadership, built a new lab structure, and reportedly pursued top talent aggressively—moves that reflect how intense the AI arms race has become.

Meta has also faced scrutiny over model performance relative to rivals, at a time when competitors have been landing faster product wins and stronger mindshare.

So this internal delivery is not just an engineering update—it’s Meta saying: the pipeline is real, and it’s moving.

The consumer endgame: 2026–2027 is the real proving ground

Bosworth described 2025 as a “tremendously chaotic year” of lab-building, infrastructure expansion, and power procurement—and suggested the payoff period is ahead.

He also framed 2026 and 2027 as pivotal years where consumer AI starts to harden into everyday behavior, because models are already good at answering the kinds of questions people ask constantly in real life—family logistics, daily decisions, basic planning.

And Meta clearly wants those behaviors anchored in its own ecosystem—especially via hardware. The company is marketing AI-enabled Ray-Ban Display glasses, and has reportedly paused international expansion to prioritize meeting U.S. demand.

Bottom line

Meta’s announcement is an early checkpoint: its new AI lab is producing internal models, but the difficult work—polishing, aligning, productizing, and scaling—still decides whether this becomes a comeback story or just another internal milestone.

The next question isn’t “Did Meta train good models?”
It’s whether Meta can turn those models into products people choose every day.