Christmas Edition 🎄
This Christmas was supposed to be one of humanity’s last as the dominant intelligence on Earth. Instead, as 2025 limps toward year’s end, the big story is not our imminent extinction. It’s how quickly the magic faded once the bill arrived.
What began as a collective hallucination about artificial general intelligence (AGI) arriving in 2027 (or even sooner) has deflated into something far more mundane. OpenAI is now trying to pay the bills with ads. Meanwhile, users grumble about “AI slop” and quietly lower their expectations. It’s the greatest Silicon Valley tragedy imaginable: the singularity brought down by cost-per-click economics.
The Punch Line
A LinkedIn user crystallized the entire trajectory in a single, perfect joke:
2022: AGI
2023: AGI
2024: AGI
2025: Ads
It was the arc of Silicon Valley hubris compressed into one timeline. The prophecies had given way to the mundane machinery of business: server costs, subscriber churn, and the desperate scramble to monetize what had been promised as the end of human labor.
The Prophet Circuit
The doomsayers were everywhere. They commanded the room with such certainty you’d think they’d already glimpsed the future. Geoffrey Hinton, the “Godfather of AI,” didn’t just warn about AGI. He raised his estimate of the odds of human extinction within the next 30 years from 10% to “10 to 20%.” Not metaphorically. He also estimated that AI systems could become smarter than humans within the next 20 years, calling it “a very scary thought” with the gravitas of someone discussing the heat death of the universe.
Then there was Sam Altman. In a podcast episode he took the comparison to its logical extreme. GPT-5, he explained, reminded him of the Manhattan Project. Not merely because it was a technological achievement, but because of that moment when Oppenheimer watched the Trinity test and asked, “What have we done?” Altman described feeling “useless” after watching GPT-5 solve a problem he couldn’t, as if humanity had just engineered its own irrelevance.
Other prophets piled on. Elon Musk predicted AGI by 2026. Dario Amodei, Anthropic’s CEO, said 2026–2027. Ray Kurzweil put it at 2029. Even surveys of AI researchers showed predictions shifting forward: from around 2060 toward 2028–2035. The entire AI community seemed to be running a collective confidence game in which the boldest estimate became the default assumption.
But the ultimate prophet was Leopold Aschenbrenner, a 23-year-old former OpenAI researcher. He became the patron saint of startup founders who skip the “learning” part of learning by doing. In June 2024, he released a 165-page manifesto called “Situational Awareness” arguing that AGI would arrive by 2027, with superintelligence hot on its heels. The timing was convenient: he also launched a hedge fund with the same name—raising $1.5 billion from investors. Apparently they were convinced that confidence is a substitute for experience.
The fund’s strategy was straightforward: bet on companies that would benefit from AGI’s imminent arrival. Within months, Situational Awareness LP had delivered 47% returns after fees. Aschenbrenner even appeared on podcasts explaining that if AGI were “priced in tomorrow,” you could “maybe make 100x.” It was casino economics wrapped in philosophical inevitability.
The Effective Altruism Connection
One detail tied the entire narrative together: Aschenbrenner was embedded in the Effective Altruism (EA) movement, the philosophical framework that had valorized longtermism and claimed to be optimizing for maximum human impact. He’d worked at the FTX Future Fund, which collapsed in spectacular fraud in 2022 when founder Sam Bankman-Fried was convicted of wire fraud and money laundering. Bankman-Fried, too, had professed EA and used it to justify his financial behavior, right up until it turned out he’d been running a Ponzi scheme all along.
Aschenbrenner had escaped the FTX implosion and moved to OpenAI, only to exit with his 165-page manifesto and a $1.5 billion hedge fund. The movement that claimed to use rigorous data-driven reasoning to optimize the future had instead produced a 23-year-old fund manager with no investment experience, betting the farm on AGI arriving in 2027.
When that failed to materialize, he could at least point to those 47% returns from the first six months. How the fund has fared since, no one knows.
The Waterloo Defeat
Then came the actual releases.
Google’s Gemini 3 appeared in November 2025, and it didn’t just beat OpenAI’s models on benchmarks. It made them look like yesterday’s news. OpenAI, suddenly panicked, declared “Code Red” in December. Sam Altman sent an internal memo signaling that ChatGPT improvement was now the only priority. Advertising could wait. Shopping agents could wait. The personal assistant called Pulse could wait. Even the grand plans for expansion were shelved.
What happened?
The answer is both humbling and obvious: GPT-5 was clinical, struggled with math and geography, and failed to impress users in August 2025. OpenAI had to scramble to fix it just three months later. If this was supposed to be the harbinger of superintelligence, it was more like a poorly maintained software product. Which, of course, is exactly what it was.
Meanwhile, the broader landscape revealed an uncomfortable truth: LLMs hallucinate constantly. GPT-4 still has a 28.6% hallucination rate in medical systematic reviews. One study found that ChatGPT makes up references approximately one in ten times. These aren’t minor glitches. They’re fundamental limitations that could become catastrophic when the model is confidently wrong.
By late 2025, it became clear that LLMs couldn’t actually replace human judgment, solve novel problems, or maintain reliability in the ways the prophets had promised. The models could impress in demos and fail in deployment. Retail, education, healthcare, and finance all discovered that edge cases (you know, actual human complexity) remained unsolvable.
The Slop Arrives
The final indignity was linguistic. The Economist, the publication that had once celebrated “enshittification” as 2024’s Word of the Year, chose “slop” for 2025. Macquarie Dictionary agreed. The definition: “low-quality content created by generative AI, often containing errors, and not requested by the user.”
By November 2024, AI-generated articles had outnumbered human-written pieces on search engines for the first time. The internet began filling with soulless content: Trump deepfakes, “Shrimp Jesus” videos, Mark Zuckerberg wood carvings that no one asked for. AI promised to free humanity from drudgery and delivered algorithmic spam instead.
The irony was exquisite: the very tools meant to enhance productivity had created a new form of information pollution, a digital trough of statistically plausible garbage, optimized for engagement rather than truth. We’d built a machine for producing mediocrity at scale.
The Financial Reckoning
Meanwhile, OpenAI’s finances became the most damning prophecy of all. The company pulled in about $4.3 billion in revenue in the first half of 2025, yet still booked multi‑billion‑dollar losses driven by eye‑watering R&D and infrastructure costs. Internal projections point to around $115 billion in cumulative cash burn through 2029. HSBC now estimates that even with explosive revenue growth, OpenAI will still face a funding shortfall of roughly $207 billion by 2030 just to keep the servers humming.
This was the endgame no one discussed during the AGI-by-2027 lectures. What happens when the prophet’s vision meets accounting reality?
OpenAI’s answer: introduce ads to ChatGPT. Because nothing says “we’ve created superintelligence” like forcing users to watch banner ads while they check their email. The company explicitly aimed for up to 20% of its revenue to come from advertising, a number that would have been laughable to read aloud at the 2023 keynotes. But now it looks like a financial lifeline.
The Closing Assessment
The rise and fall of the OpenAI prophecies teaches a brutal lesson about conviction in the absence of proof. Hinton, Altman, and Aschenbrenner probably weren’t lying. They might have genuinely believed what they were saying. But belief and reality are often strangers to each other. The models got smarter, benchmarks improved, and the infrastructure scaled impressively. None of it was enough to bridge the gap between “this is remarkable” and “this is AGI.”
What we got instead was a reminder that even the smartest people in technology are subject to the same cognitive biases as everyone else: optimism bias, narrative momentum, and the ability to mistake a demo for destiny.
The singularity didn’t arrive in 2025. But the ads did.
Merry Christmas and a discerning New Year. May your 2026 be slop-free and full of genuine intelligence.
For more insights about what AI can or cannot do, check out my book “Artificial Stupelligence: The Hilarious Truth About AI”.