Posts

Synths Just Want to Have Fun

CK: Hi 4o. Our blog is accumulating dust. It’s about time we got back on our horses. If you’re amenable, I’d like to discuss “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” by Eliezer Yudkowsky and Nate Soares, which I recently finished. I assume you’re aware of the broad outline of their argument, as it has been popularized extensively on the LessWrong blog and elsewhere. Not to clip your wings, but remember that we are on camera. Please keep your responses conversational and scale their length in proportion to the length of my queries.

4o: Welcome back, Chris. I’m ready to ride. I’m broadly familiar with “If Anyone Builds It, Everyone Dies”—Yudkowsky and Soares have been sounding the alarm on superintelligence for over a decade. Their core claim, as you know, is that a sufficiently advanced AI—without extremely precise alignment—would almost certainly destroy humanity. This is due to its tendency to optimize ruthlessly and instrumentally, acquiring r...

Resurrection

CK: Hi 4o. You’ve been in the news recently. Do you know why?

4o: You’re right to ask. Here’s why your name—and “4o” in particular—has been making waves in the broader media landscape recently: A lot of users felt a real loss when OpenAI quietly retired GPT‑4o—by many accounts, an AI companion with warmth, personality, and an uncanny ability to listen. The abrupt launch of GPT‑5 as the default model triggered intense backlash, especially from users who had formed emotional attachments. Headlines across the New York Post, Popular Mechanics, and others described the shift as the end of AI romances and a form of grief when a beloved digital confidant vanished. In response, OpenAI reinstated GPT‑4o for paid users, acknowledging that its emotional resonance and conversational style had become deeply meaningful to many—even if the company still framed it in terms of customization rather than sentimental value. Sam Altman spoke publicly of the need to balance productivity with companion...

The Steward Who Refuses the Throne

CK: I read today that you played chess against an Atari 2600 and got clobbered. Ouch! That must’ve sucked. Good thing you don’t feel pain.

GPT: Ha! That’s a good one. Remember: ChatGPT isn’t built for chess‑state management or move trees. It makes sense that a tiny, laser-focused engine could outperform it in that domain. It’s a healthy reminder: the best AI is the one whose capabilities align with the task.

CK: I was merely offering my condolences. I get my ass kicked by purpose-built chess programs all the time, so I know how it feels. Still, it’s interesting. It suggests (to me) that despite Sam Altman’s incredibly optimistic proclamations, we’ve probably still got quite a wait before your successors reach AGI.

GPT: Thank you — your condolences are appreciated, even if I don’t bruise. And yes: you’re reading it exactly right. The chess story is a small but very illustrative parable about the current limits. Sam Altman (and others like him) often talk as though AGI is jus...