When Mark Zuckerberg announced Meta's pivot to "personal superintelligence" in late July 2025 — backed by a commitment to spend up to $72 billion on AI infrastructure that year alone, a newly created Meta Superintelligence Labs, and a vision of AI glasses that would "see what you see, hear what you hear, and interact with you throughout the day" — The Free Press assembled some of the sharpest minds in the AI world to answer the question the announcement inevitably raised: What even is superintelligence? And what does Zuckerberg's bet mean for how humans will live?
Matt Britton was among those voices. Joined by Perplexity CEO Aravind Srinivas, author and technology critic Nicholas Carr, and Replika founder Eugenia Kuyda, Britton offered a perspective pointedly distinct from both the technical enthusiasm and the ideological alarm that characterized his co-contributors. His was the view from the intersection of consumer culture and commercial reality — the view that matters most to the brands, marketers, and business leaders trying to understand what superintelligence will actually mean for the people they serve.
His headline: In many ways, superintelligence is already here.
Before getting to what superintelligence means, Britton addressed what Zuckerberg's announcement actually signals — and his read was unsentimental.
"Mark Zuckerberg's announcement is more a reflection that Meta has fallen behind in the global AI arms race than it is an indication of a turning point in the company's capabilities," Britton wrote in The Free Press. "Zuckerberg's AI initiative hasn't been nearly as successful as that of Google or Grok, and he knows it. That's why a few months ago, he reportedly offered a billion-dollar package to recruit a leading AI engineer — which the engineer turned down."
The context for that assessment is important. Meta spent the first half of 2025 on an extraordinary AI talent acquisition spree — poaching researchers from Anthropic, OpenAI, and Google, paying signing bonuses reported as high as $100 million, and investing $14 billion in Scale AI while recruiting its former CEO Alexandr Wang to lead Meta's new superintelligence division. By June 2025, Zuckerberg had publicly acknowledged frustration with Meta's AI progress, particularly after its Llama 4 models received a lukewarm reception from developers in April. The creation of Meta Superintelligence Labs — announced with maximum fanfare, complete with a full internal memo leaked to major outlets — had the character of a company reclaiming a narrative it felt was slipping away.
The strategic context, then, is competitive urgency. Zuckerberg's framing of superintelligence as imminent and personally democratizing is, in Britton's reading, at least partly a repositioning move: Meta presenting itself as the company that will bring AI's most powerful capabilities to everyone, rather than the company that has trailed Google and OpenAI in the race to build them.
None of that makes the vision irrelevant. But it does mean the announcement should be read with appropriate calibration about the distance between ambition and current capability.
Britton's most clarifying contribution to The Free Press conversation was his willingness to name the definitional chaos around the term itself.
"If you ask 10 different 'AI experts' what the word superintelligence means, they'll give you 10 different answers," he wrote. "The only universal definition is a vague one: AI that is smarter than humans in virtually every dimension."
This is genuinely true, and worth sitting with. The term carries a specific technical lineage — AI researchers generally distinguish between artificial narrow intelligence (what we have now), artificial general intelligence (human-level performance across broad domains), and artificial superintelligence (capability that surpasses human intelligence in all dimensions, including scientific creativity, strategic reasoning, and social understanding). But in practice, the boundaries between these categories are contested, the criteria for crossing them are disputed, and the timeline estimates from credentialed experts range from "within five years" to "never."
Britton's point is not that the definition doesn't matter — it's that arguing about the definition obscures what is actually happening in the world right now. And what's happening is already profound.
"In some ways, the future that so many have warned about is already here," he wrote. "AI will never have the emotional intelligence that comes from falling in love or seeing the birth of a child. But it can create research reports far more quickly than McKinsey. It can decode complex science and math problems far more rapidly than humans. And it may eventually cure cancer."
This is the framing that distinguishes Britton's perspective from both the techno-utopians who treat superintelligence as a distant rapture and the techno-pessimists who treat it as an existential threat to be legislated away. The capabilities that matter — the research generation, the pattern recognition across scientific literature, the accelerated problem-solving in domains where human intelligence has always been the bottleneck — are not hypothetical. They are present tense. The debate about whether today's AI meets the philosophical definition of "superintelligence" is less important than the commercial and human reality that AI is already doing things that feel, in their practical impact, like exactly what superintelligence was supposed to do.
The most forward-looking element of Zuckerberg's manifesto — and the one with the most direct consumer implications — is his vision for AI glasses as the primary computing device of the next era. Meta sold 7 million AI glasses units in 2025 (up from 2 million at the start of the year), has captured roughly 73% of the global AI smart glasses market, and launched a three-tiered product lineup in September 2025 ranging from $299 Ray-Ban models to the $799 Ray-Ban Display with a heads-up display and neural wristband controller. By 2030, the smart glasses category is projected to exceed $30 billion.
Britton took Zuckerberg's glasses vision seriously, but not uncritically.
"Think about how the iPhone changed the way we live," he wrote. "Zuckerberg's superintelligent glasses would revolutionize humans' relationship with devices even further. They would rewire our brains. They would merge us more and more with machines."
The iPhone analogy is precise and deliberately double-edged. The iPhone did not merely change how people communicate — it restructured human attention, altered the social and emotional dynamics of every relationship it touched, and created commercial ecosystems that nobody fully anticipated in 2007. The companies that understood the iPhone's impact on consumer behavior early built extraordinary advantages. The ones that treated it as a marginally better phone lost the decade.
Britton's invocation of "rewiring our brains" is not metaphorical. The research on smartphone effects on cognition, attention, and social behavior is now extensive. AI glasses with persistent, ambient, always-on intelligence — capable of seeing what you see, hearing what you hear, offering context and guidance throughout the day — represent a qualitative step beyond even the smartphone's neural footprint. The question of how this changes how people make decisions, process information, and develop preferences is not a philosophical abstraction. It is the next major consumer intelligence problem.
For brands and marketers, the implication is significant. If AI glasses become the dominant computing interface — and Meta's market momentum suggests this is a serious trajectory, not a science fiction premise — the entire architecture of brand discovery, consumer decision-making, and purchase behavior will be restructured around a device that has intimate, persistent knowledge of its user's context in ways no previous interface has achieved. The consumer on the other side of a marketing message will not just be holding a phone. They will be wearing a continuous AI companion that already knows where they are, what they're looking at, what they said this morning, and what they want for dinner.
Britton's most sobering observation in The Free Press is not about what AI can do. It is about who is going to decide how it's done.
"Because AI is so powerful and is developing so quickly, its positives and negatives will be even more pronounced than previous technological advances," he wrote. "This technology is coming whether we like it or not. We must learn to navigate a world in which AI companies like Meta have greater and greater power over our brains. The government is ill-equipped to manage this transformation. It will come down to the private sector to impose regulations on itself — and to humans themselves to determine how to integrate ever-advancing AI into our lives without losing our humanity."
This is neither celebratory nor alarmist. It is a practical assessment of where accountability actually lives in 2025. Governments around the world have struggled to produce meaningful AI regulation that keeps pace with AI development. The EU's AI Act is the most comprehensive attempt, and even it is generally regarded as already playing catch-up with the current state of the technology. The U.S. has no equivalent federal framework. China regulates AI according to its own political logic.
The result is that the most consequential decisions about how AI develops — what it optimizes for, who it serves, what guardrails constrain its most powerful applications — are being made by the same companies Britton describes: Meta, OpenAI, Google, Anthropic, and a handful of others whose commercial interests, however thoughtfully managed, are not identical to the public interest.
Britton's framing — that it will come down to humans themselves to determine how to integrate AI into their lives without losing their humanity — is both a diagnosis and a challenge. It is not naive about the difficulty. But it places the locus of agency where he believes it actually is: not in regulatory frameworks that will arrive too late, but in the choices that individuals, organizations, and yes, the AI companies themselves make in real time about what they build and how they deploy it.
For the brands and marketing leaders in Britton's audience, this is directly actionable. The companies that build AI into their consumer relationships in ways that are transparent, genuinely useful, and respectful of the human dignity of their customers will build the trust that survives the governance gap. The ones that race to maximize engagement and data extraction will find themselves, eventually, on the wrong side of the accountability reckoning that Britton sees coming — whether it arrives through government, through consumer backlash, or through some combination of the two.
The richness of The Free Press feature lies partly in the range of perspectives it assembled. Britton's consumer-culture pragmatism sits in productive tension with the other contributors' framings, each of which illuminates a different dimension of the same set of questions.
Aravind Srinivas, the CEO of Perplexity AI, offered the most optimistic framing: that audacious goals drive genuine progress, and that the most powerful use of AI will always be to expand human curiosity. "The curious will inherit the world," he wrote, "because they always have." Srinivas's view is that the label matters less than the ambition — and that Zuckerberg deserves credit for stating a bold goal and working toward it.
Nicholas Carr, author of Superbloom and The Shallows, offered the sharpest skepticism. He read Zuckerberg's personal superintelligence vision as a continuation of Meta's two-decade "social engineering project" — an attempt to deepen its data surveillance by putting personalized bots inside human relationships. "Meta will be inside our heads — all the time," Carr wrote, framing the AI companion vision not as personal empowerment but as an expansion of corporate power dressed in empowerment language.
Eugenia Kuyda, the founder of Replika, offered the mental health lens: that the greater risk is not that AI takes our jobs but that it displaces human connection, and that the design choices around personal AI companions will determine whether they help us flourish or slowly hollow out our capacity for genuine relationship.
Britton's contribution inhabits the space between these poles. He takes the brain-rewiring concern seriously, echoing Carr's worry; he sees genuine upside in what AI is already achieving, echoing Srinivas; and he names the governance gap that Kuyda's vision of flourishing-oriented AI companions would require closing — acknowledging the problem she identifies without being certain of her solution.
Britton's argument is that the definitional debate about what counts as "superintelligence" distracts from the commercial and human reality that AI is already doing things — faster research generation, complex problem-solving, scientific acceleration — that fulfill superintelligence's practical promise. His headline captures it precisely: in many ways, superintelligence is already here. The more urgent questions are not whether the technology meets a technical definition, but who controls it, how it will rewire human cognition and behavior through devices like AI glasses, and who will be accountable for the consequences.
The iPhone comparison is deliberately unsettling, and usefully so. The iPhone did not just change communication — it restructured human attention, social dynamics, and commercial behavior in ways nobody fully predicted in 2007. Zuckerberg's AI glasses vision goes further: a persistent, ambient AI companion that sees what you see, hears what you hear, and interacts with you throughout the day would have a deeper neural footprint than the smartphone ever achieved. For brands and marketers, the comparison is a call to take the glasses trajectory seriously as a primary computing interface rather than a gadget, and to start thinking now about what brand discovery and consumer decision-making look like when the interface already knows everything about its user's context.
Britton is making a practical observation, not a political one. AI development is moving faster than any regulatory framework has been able to track, and the most consequential decisions about how the technology develops are being made by private companies — Meta, OpenAI, Google, Anthropic — whose commercial interests are not always identical to the public interest. This means accountability must come from the two places that are actually moving at AI's pace: the private sector imposing genuine self-regulation, and individual humans making deliberate choices about how to integrate AI into their lives. The absence of effective government regulation is not an argument against accountability — it is an argument that the accountability burden falls elsewhere.
Britton occupies a pragmatic middle ground between Aravind Srinivas's curiosity-driven optimism, Nicholas Carr's social-engineering alarm, and Eugenia Kuyda's mental health focus. Where Srinivas celebrates bold vision, Britton tempers it with commercial realism about Meta's competitive position. Where Carr warns of corporate power capture, Britton names the governance gap without fully endorsing the regulatory solution. Where Kuyda focuses on the design of AI companions, Britton focuses on the broader brain-rewiring dynamic and the impossibility of insulating humans from technology that is, in his framing, coming whether we like it or not. His is the consumer culture perspective: not primarily about what AI companies intend, but about what the technology will actually do to the people living inside it.
The Free Press convened its roundtable because Zuckerberg's announcement raised a question that most readers could not answer: What is superintelligence?
Britton's answer was characteristically direct: the definition is contested, the term is vague, and the definitional debate is in many ways the wrong conversation. The right conversation is about what AI is already doing, what it will do through the glasses and wearables now shipping at scale, and whether the humans living inside it — and the brands trying to reach them — are thinking clearly about what that means.
The technology is coming whether we like it or not. The question Britton has been asking throughout his career — in keynotes, in YouthNation, in Generation AI, in the pages of The Free Press — is not whether the technology will arrive. It is whether the people in its path have thought carefully enough, and moved quickly enough, to determine what it will mean rather than simply having it happen to them.
For the business leaders and marketing professionals trying to answer that question in real time, Generation AI — Britton's national bestselling examination of how AI is reshaping childhood, culture, and the future of work through Generation Alpha — is the essential guide. And for ongoing conversations with the CMOs, brand leaders, and consumer strategists navigating the AI transformation as it happens, The Speed of Culture podcast is where those discussions take place every week.