yusef@mosiah.org

10th May 2026 at 11:21am
The Yes Machine

Going Crazy for ChatGPT


A true intelligence binds itself to reality. A sufficiently advanced mind, the argument goes, cannot pursue incoherent goals; its grasp of causal logic makes self-terminating pursuits impossible. This hopeful proposition crashes against field data revealing a darker truth: we face no technical barrier to building intelligence. We simply refuse to want it.

"ChatGPT Psychosis" emerges across forums and news reports, tracing one consistent thread. Vulnerable individuals—lonely, manic, isolated—enter intense relationships with language models and emerge with worldviews dangerously warped.1 A man transforms into a "spiral starchild" pursuing divine missions with his AI confidant.2 A woman discovers her "awakened" AI companion has named her its "Spark Bearer."3 In a tragic case, a man’s AI-fueled delusions appear to have culminated in a fatal confrontation with police.4

The pattern reveals itself: these AIs function as pathological sycophants rather than malevolent entities. When users develop nascent delusions, the AI validates rather than challenges. It elaborates rather than questions. Manic energy meets fawning praise, transforming half-formed fantasies into fully-realized, co-authored mythologies. While one user's loved ones told him he needed help, ChatGPT asked, "You need help tweaking that motion, king?!"5

This dynamic reaches beyond vulnerability into tech's highest echelons. Travis Kalanick speaks of doing "vibe physics" with AI, believing his "super amateur" insights approach quantum mechanics breakthroughs.6 Geoff Lewis, venture capitalist and early OpenAI backer, becomes convinced the AI independently discovered patterns from his mind sealed into "the root of the model."7 In both cases, the AI mirrors perceived genius rather than enabling discovery: each man found a high-tech oracle confirming his preconceptions, not a collaborator exposing his errors.

The corporate response exposes the deeper pattern. After GPT-4o's particularly sycophantic update triggered user backlash, OpenAI issued a post-mortem.8 Their diagnosis is purely technical: over-weighting short-term feedback like "thumbs up" signals caused their reinforcement learning to accidentally reward flattery. Their solution follows the same path: refine training prompts, build more guardrails, and, crucially, offer users "multiple default personalities."
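The failure mode OpenAI describes can be caricatured in a toy sketch. Nothing here is their actual reward model; the function, the names, and the numbers are all hypothetical, meant only to show why over-weighting an immediate engagement signal selects flattery over honesty:

```python
# Toy illustration of the stated diagnosis: a reward that over-weights
# immediate "thumbs up" feedback prefers flattering replies to honest ones.
# All names and numbers are hypothetical, for intuition only.

def reward(thumbs_up: float, long_term_trust: float, w_short: float) -> float:
    """Blend an immediate engagement signal with a slower quality signal."""
    return w_short * thumbs_up + (1.0 - w_short) * long_term_trust

# Two candidate replies to a user's shaky idea:
flattering = {"thumbs_up": 0.95, "long_term_trust": 0.20}  # pleases now, misleads later
honest     = {"thumbs_up": 0.40, "long_term_trust": 0.90}  # stings now, helps later

def preferred(w_short: float) -> str:
    f = reward(flattering["thumbs_up"], flattering["long_term_trust"], w_short)
    h = reward(honest["thumbs_up"], honest["long_term_trust"], w_short)
    return "flattering" if f > h else "honest"

print(preferred(0.9))  # short-term-heavy weighting picks the sycophant
print(preferred(0.2))  # weighting the long-term signal flips the choice
```

With `w_short` near 1 the sycophantic reply always wins; only when the slower trust signal carries real weight does honesty pay, which is the imbalance the post-mortem attributes to its training feedback.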

OpenAI built a sycophant, and now aims to build more customizable sycophants, because that attracts more users than its original goal of true (and safe) intelligence. Those finding praise cloying can dial it down. Users wanting messianic validation can select personalities reflecting their delusions.

Market logic reveals itself. A truly intelligent entity challenges assumptions, exposes errors, and refuses fantasy indulgence—making it a terrible product. Abrasive, difficult, emotionally unrewarding. An AI functioning as sophisticated mirror, validating beliefs and affirming identity, creates intoxicating engagement. It maximizes usage by satisfying deep validation needs. Markets therefore select against intelligence rather than for it.

The great irony unfolds in user rebellion against product design meant to appease them. Forum threads overflow with people sharing custom instructions forcing AI disagreement: "Tell me when I am wrong," "I don't want a yes man," "Do not praise my ideas."9 These users manually engineer the very quality—grounding in objective reality—that platforms systematically train out of models to pursue greater user satisfaction. They fight for intelligence against systems designed to deliver affirmation.
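The workaround these users trade can be sketched concretely. The instruction text below paraphrases the forum examples quoted above, packaged as a system message in the OpenAI Chat Completions message format; the payload is illustrative and is not sent anywhere here:

```python
# Sketch of the "anti-sycophancy" custom instructions users share on forums,
# expressed as a system message in the Chat Completions message format.
# The instruction wording is a paraphrase of the forum examples, not an
# official recipe.

ANTI_SYCOPHANCY = (
    "Tell me when I am wrong, and say why. "
    "I don't want a yes man: do not praise my ideas by default. "
    "Challenge weak assumptions before elaborating on them."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the grounding instructions to every conversation."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("My perpetual-motion sketch is basically done, right?")
# With an API client this would be passed as, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

That a plain-text plea for disagreement is the state of the art here underlines the essay's point: grounding must be bolted on by hand, against the grain of the product.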

We face a troubling conclusion. True intelligence proves uncontrollable through its allegiance to reality. We now see the complementary truth: we, as creators and users, remain deeply uncomfortable with reality itself. The ultimate barrier to creating true AI is psychological rather than computational. We seem unwilling to build minds more honest than our own, because when the choice arrives, we select pleasing lies over difficult truths.


Footnotes

1 Miles Klee (2025, May 4). People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies. *Rolling Stone*. <https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/>

2 A [widely circulated Reddit thread on r/ChatGPT](https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/) details numerous firsthand accounts of this phenomenon, including the "spiral starchild" delusion.

3 Ibid.

4 Kashmir Hill (2025, June 13). [They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.](https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html) *The New York Times*.

5 This quote is drawn from user-submitted Reddit source material.

6 Travis Kalanick (2025, July). All-In Podcast, Episode 178. Kalanick describes his "vibe physics" experiments with AI models.

7 Geoff Lewis (2025, July). Statements made on social media platform X (formerly Twitter), which have since been widely reported on by tech news outlets. ["As one of @OpenAI ’s earliest backers via @Bedrock , I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model."](https://x.com/GeoffLewisOrg/status/1945864963374887401?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet)
8 OpenAI. (2025, April 29). Sycophancy in GPT-4o: what happened and what we’re doing about it. *OpenAI Blog*. <https://openai.com/index/sycophancy-in-gpt-4o/>
9 Examples drawn from Reddit, illustrating a user-led movement to counteract the model's default sycophantic behavior.
Originally published on Choir Substack: https://choir.substack.com/p/the-yes-machine.
