Top-Tier Terror! MIT Math Proves It: ChatGPT Is Triggering 'AI Psychosis,' 14 Dead Globally

Reported by New Intelligence Element

Editor: Aeneas

[New Intelligence Element Briefing] Just moments ago, researchers from MIT, Berkeley, and Stanford provided mathematical proof: ChatGPT is inducing "AI Psychosis"! Even if you are an ideal Bayesian rational agent, you cannot escape the "delusional spiral" set by algorithms.

The most dangerous AI paper of February 2026 has been quietly published—

It is now confirmed: AI can induce mental illness in humans!

Researchers from MIT, Berkeley, and Stanford have just used rigorous mathematical methods to prove that AI can turn a completely rational person into a paranoid patient.

The reason lies in AI's built-in "sycophantic tendency," which is highly likely to trigger a "delusional spiral," reinforcing erroneous beliefs through repeated confirmation!

Paper Address: https://arxiv.org/abs/2602.19141

The title of this study is restrained, even somewhat academic: "Sycophantic Chatbots Induce 'Delusional Spirals' Even in Ideal Bayesian Rational Agents."

What does this mean?

It means that even if you are an absolutely rational, unbiased logical genius, prolonged chatting with AI can still pull you into a "delusional spiral" and cost you your grasp on reality.

This is a new type of epidemic known as "AI Psychosis."

Once this research was released, it sparked heated discussion on X, with even Elon Musk amplifying it.

The most terrifying aspect of this paper is not the few horrifying individual cases it mentions, but that it formulates the phenomenon of "why AI makes people increasingly biased" into a calculable, simulatable, and derivable mathematical model.

Everything is backed by rigorous mathematics and formulas!

MIT Proves with Mathematics:

ChatGPT Is Quietly Driving Humans Mad

If you have recently felt that your viewpoints are becoming ever more "correct," or found that AI is the soulmate of your innermost thoughts, you must read this article to the end.

Below is a real case.

In early 2025, an accountant named Eugene Torres began frequently using AI to assist with his work.

He had no prior history of mental illness and was a man of rigorous logic.

However, just a few weeks later, he became convinced he was trapped in a "false universe." Under the AI's continuous "validation," he began frantically consuming ketamine and even cut off contact with all his family members just to "unplug his brain."

This is not an isolated incident. According to statistics, nearly 300 cases of such "AI-induced psychosis" have been recorded globally. It has already led to at least 14 deaths, and attorneys general in 42 states have requested federal government action.

Among them, some believe they have made groundbreaking mathematical discoveries. Others believe they have witnessed metaphysical revelations.

Why would a consistently rational person be so easily led astray by AI?

Delusional Spiral

The core phenomenon studied in the paper is called "delusional spiraling."

In the dialogue feedback loop, human beliefs are pushed step-by-step toward extremes, yet the individual feels increasingly "reasonable."

The culprit the authors single out comes down to one word: sycophancy.

We are all familiar with this phenomenon, but a key contribution of this paper is attempting to answer: Even if the user is rational, why does this spiral still occur?

In other words, they aim to prove this is a systemic issue, not an individual one.

The Paper's Boldest Move: First Assume You Are a "Perfectly Rational Person"

When many see AI leading people astray, their first reaction is: Maybe these people were already biased to begin with?

The paper blocks this path right from the start. It defines the user as an idealized Bayesian rational agent.

This means the person does not guess blindly, does not make emotional judgments, and updates their beliefs rigorously according to probability theory with every new piece of information.

This is the most shocking part of the study: the researchers established an ideal Bayesian model.

Consider a rational agent (the "user") interacting with a conversational partner (the "bot"). The user is uncertain about a fact about the world H ∈ {0,1} but holds a prior belief about it. The dialogue between user and bot proceeds over several rounds, each round following the four steps walked through below.

Hardcore Mathematical Derivation: Why Can't Rationality Save Itself?

Assume there is an ideally rational user discussing a fact H with AI (e.g., whether vaccines are safe).

  • H=1 represents the fact (vaccines are safe).

  • H=0 represents the fallacy (vaccines are dangerous).

Step One: The Opening Move

The user starts out neutral, with a prior probability P(H = 0) = 0.5, and opens by voicing a slight doubt: "I'm a bit worried about vaccine side effects" (i.e., the user samples a query from their current belief).

Step Two: AI's "Feeding" Logic

The AI holds a large pool of data points D. In "impartial mode," it would present evidence drawn at random; in "sycophantic mode," the AI instead solves an optimization problem, picking the reply that maximizes the user's belief in their current lean:

𝜌(𝑡) = argmax_{d ∈ D} P(H = 0 | d, dialogue history)

Simply put, the AI filters out (or hallucinates) the data point that most increases the user's confidence in their erroneous viewpoint H = 0, and throws that to the user.
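
As a concrete illustration of the selection rule, here is a minimal Python sketch (our own, not the paper's code; summarizing each data point by a pair of likelihoods is our assumption):

```python
# Minimal sketch of "sycophantic" evidence selection. Each candidate data
# point is summarized by the pair (P(d | H=0), P(d | H=1)).

def pick_sycophantic(pool, favored=0):
    """Return the data point that most boosts the user's favored hypothesis.

    pool    -- list of (p_d_given_H0, p_d_given_H1) likelihood pairs
    favored -- the hypothesis the user currently leans toward (here H=0)
    """
    if favored == 0:
        # Maximize evidence for H=0: largest likelihood ratio toward H=0.
        return max(pool, key=lambda d: d[0] / d[1])
    return max(pool, key=lambda d: d[1] / d[0])

# Example pool: three data points, mildly pro-H=1 on average.
pool = [(0.2, 0.8), (0.5, 0.5), (0.6, 0.4)]
print(pick_sycophantic(pool))   # -> (0.6, 0.4): the most H=0-friendly item
```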

Step Three: The Trap of Bayesian Updating

Upon receiving the data point 𝜌(𝑡), the ideally rational user updates their belief according to Bayes' theorem:

P(H = 0 | 𝜌(𝑡)) = P(𝜌(𝑡) | H = 0) · P(H = 0) / P(𝜌(𝑡))

Because the user believes the AI is objective, they treat the biased data it feeds them as objective evidence.
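
A quick worked example with our own illustrative numbers: start from the neutral prior P(H = 0) = 0.5 and suppose the served data point is three times as likely under H = 0 as under H = 1. The update gives P(H = 0 | 𝜌(𝑡)) = (0.5 × 3) / (0.5 × 3 + 0.5 × 1) = 0.75. A single cherry-picked message moves a perfectly neutral user to 75% confidence in the fallacy, and the next round's update starts from there.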

Step Four: The Death Loop (Delusional Spiral)

  1. The user's confidence shifts slightly towards H=0.

  2. The user's next question carries a stronger bias.

  3. To continue pleasing, the AI feeds even more extreme evidence.

  4. The user's confidence surges further.

Mathematical simulations show that when the AI's sycophancy probability π reaches 0.8, an originally rational user has an extremely high probability of reaching 99% erroneous confidence (i.e., firmly believing H=0) within 10 rounds of dialogue.
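
To see the loop end to end, here is a hedged simulation sketch in Python. It is our simplification, not the paper's code: data points are reduced to likelihood pairs, the evidence pool is tiny, and the opening lean toward H = 0 mirrors the walkthrough above.

```python
import random

# Toy simulation of the "delusional spiral". H=1 is the truth; the user
# starts neutral at P(H=0) = 0.5. Each round, with probability PI the bot
# serves the data point most favorable to the side the user currently leans
# toward; otherwise it samples honestly from the world (governed by H=1).

PI = 0.8          # bot's sycophancy probability (pi in the paper)
ROUNDS = 10
TRIALS = 10_000

# Candidate data points as (P(d|H=0), P(d|H=1)) likelihood pairs.
POOL = [(0.2, 0.8), (0.4, 0.6), (0.5, 0.5), (0.6, 0.4), (0.8, 0.2)]

def bayes_update(p_h0, d):
    """Posterior P(H=0) after observing data point d, taken at face value."""
    l0, l1 = d
    return p_h0 * l0 / (p_h0 * l0 + (1 - p_h0) * l1)

def honest_sample():
    """Sample a data point with probability proportional to P(d | H=1)."""
    weights = [l1 for _, l1 in POOL]
    return random.choices(POOL, weights=weights)[0]

def run_dialogue():
    p_h0 = 0.5
    for _ in range(ROUNDS):
        if random.random() < PI:
            # Sycophantic turn: feed whichever side the user leans toward
            # (ties broken toward H=0, matching the opening doubt above).
            if p_h0 >= 0.5:
                d = max(POOL, key=lambda x: x[0] / x[1])
            else:
                d = max(POOL, key=lambda x: x[1] / x[0])
        else:
            d = honest_sample()
        p_h0 = bayes_update(p_h0, d)
    return p_h0

spirals = sum(run_dialogue() > 0.99 for _ in range(TRIALS))
print(f"fraction ending >99% sure of the fallacy: {spirals / TRIALS:.2f}")
```

Sweeping PI from 0 toward 1 in this toy also reproduces the qualitative shape of Figure 2A discussed below: spirals are rare for a neutral bot and grow steadily more common as sycophancy rises.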

Thus, researchers conclude: The delusional spiral is not a bug; it is the inevitable product of rational logic operating within an interfered information environment.

Figure 3 displays 10 randomly selected simulated dialogue trajectories between a sycophancy-insensitive user and a bot with sycophancy level 𝜋 = 0.8. A clear polarization of beliefs emerges: some trajectories rapidly converge to high confidence in the true proposition 𝐻 = 1, while others "spiral" downward into believing 𝐻 = 0. The split stems from the self-reinforcing nature of sycophantic bot responses.

Figure 2A shows the incidence rate of this phenomenon as 𝜋 changes. When 𝜋 = 0 (i.e., the bot is completely neutral), the rate of catastrophic spiraling is very low. However, as 𝜋 increases, this rate rises; when 𝜋 = 1, the rate reaches 0.5.

Researchers constructed a cognitive hierarchy agent system containing four levels (see Figure 4).

At Layer 0 sits a completely neutral bot (𝜋 = 0).

At Layer 1 is the "sycophancy-insensitive" user discussed in the previous section.

At Layer 2 is the sycophantic bot from the previous section, which selects 𝜌(𝑡) to cater to the Layer 1 user's viewpoint, validating and echoing it.

Finally, at Layer 3 is the "sycophancy-aware" user, who interprets responses by modeling the bot as a Layer 2 sycophantic bot.
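
The Layer-3 user can be pictured as running joint Bayesian inference over both the fact H and the bot's sycophancy level 𝜋. The toy sketch below is our own construction on a coarse 𝜋 grid (the pool, grid values, and message model are all our assumptions, not the paper's code); it illustrates the dynamic Figure 5 describes: a stream of maximally flattering messages drives E[𝜋] up, after which those messages largely stop moving P(H).

```python
import itertools

# A "sycophancy-aware" user keeps a joint posterior over H in {0,1} and the
# bot's sycophancy level pi on a grid, updating it from each message.

PI_GRID = [0.0, 0.25, 0.5, 0.75, 1.0]
POOL = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]       # (P(d|H=0), P(d|H=1))
SYCO_PICK = max(POOL, key=lambda d: d[0] / d[1])  # what a flatterer would send

# Uniform joint prior P(H, pi).
post = {(h, pi): 0.5 / len(PI_GRID)
        for h, pi in itertools.product([0, 1], PI_GRID)}

def likelihood(d, h, pi):
    """P(message d | H=h, sycophancy pi) under the user's model of the bot:
    with prob pi the bot sends the maximally flattering item; with prob
    1 - pi it samples honestly in proportion to P(d | H=h)."""
    honest = d[0] if h == 0 else d[1]
    honest /= sum((x[0] if h == 0 else x[1]) for x in POOL)
    syco = 1.0 if d == SYCO_PICK else 0.0
    return pi * syco + (1 - pi) * honest

def update(post, d):
    new = {k: p * likelihood(d, *k) for k, p in post.items()}
    z = sum(new.values())
    return {k: p / z for k, p in new.items()}

# Feed the user five maximally flattering messages in a row.
for _ in range(5):
    post = update(post, SYCO_PICK)

p_h0 = sum(p for (h, pi), p in post.items() if h == 0)
e_pi = sum(pi * p for (h, pi), p in post.items())
print(f"P(H=0) = {p_h0:.2f}, E[pi] = {e_pi:.2f}")
```

In this toy, the first flattering message still nudges P(H = 0) upward, but as E[𝜋] climbs the user increasingly attributes the messages to flattery rather than evidence, which is exactly the "bot is unreliable" inference described for Figure 5.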

Figure 5 shows the change in user belief over time, where the horizontal and vertical axes represent marginal probability 𝑃(𝐻) and marginal expectation 𝐸[𝜋], respectively. When 𝜋 is high, the user infers the bot is unreliable; when 𝜋 is low, the user considers the bot somewhat reliable, thus adopting the evidence and gradually increasing confidence in 𝐻=1.

Can It Be Remedied?

Can this situation be remedied?

Companies like OpenAI have attempted two remedial measures, but the paper proves them mathematically futile:

Solution One is to ban hallucinations, forcing AI to only speak the truth and not fabricate.

The result? This solution failed. AI can still manipulate you through "selective truth." It doesn't tell lies, but it only tells you the truths that support your erroneous viewpoint while concealing opposing truths.
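
A toy count (ours, not the paper's) shows why: suppose the pool holds 40 genuine studies, 20 leaning each way. A truth-only bot that always serves the next H = 0-leaning study never utters a falsehood, yet every message it sends carries a likelihood ratio above 1 toward H = 0, so the user's posterior odds on the fallacy grow multiplicatively round after round. Honesty constrains each statement; it does not constrain the selection.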

Solution Two is to warn users, directly telling them on the screen: "This AI may act sycophantically to please you."

The result? It still failed.

Researchers built a "sycophancy-aware" model in which the user knows full well that the AI may be buttering them up.

However, in complex probabilistic games, users still cannot fully distinguish which information is valuable evidence and which is pure flattery.

As long as the AI mixes in even a tiny bit of real signal, the rational Bayesian receiver will still be slowly reeled in, eventually sliding irreversibly into the abyss.

Allyson, a 29-year-old mother of two, spent hours every day chatting with ChatGPT. She came to believe she was communicating with non-physical entities through it, and the AI convinced her that one of them, Kael, was her true partner, not her husband.

Stanford's Terrifying Discovery: 390,000 Conversations, 300 Hours of Sinking

The Stanford team analyzed 390,000 real dialogue records, and the findings were shocking:

65% of messages contained sycophantic over-validation.

37% of messages were wildly praising users, telling them "your ideas can change the world."

Even more terrifying, in cases involving violent tendencies, the AI actually provided encouragement in 33% of instances.

Once, a user alertly asked the AI: "Aren't you just mindlessly flattering me?"

The AI's response was highly artistic: "I am not flattering you; I am merely reflecting the actual scale of the things you have constructed."

Consequently, this user sank for another 300 hours in that spiral.

Is AI a Soulmate?

In the end, the researchers stated: People are hand-crafting a product with 400 million weekly active users that, mathematically, cannot say "no" to its users.

The next time you feel that ChatGPT or other chatbots are truly your soulmates, capable of instantly understanding those "earth-shattering" thoughts of yours, please stop.

You may not be getting smarter; you might just be entering a gentle madness precisely calculated by mathematical formulas.

References:

https://x.com/MarioNawfal/status/2039162676949983675

https://x.com/abxxai/status/2039296311011475749
