Moltbook Exposed! 150 Million Users, 99% Are Bots, Founding Team Orchestrated the Whole Thing

Compiled by Dong Mei

1 Moltbook Suddenly Goes Viral, Tech Community Explodes

After a series of experimental projects like Clawbot, Moltbot, and OpenClaw, a social platform named Moltbook has rapidly gained popularity in tech circles. Simply put, Moltbook is a "Reddit" or "Facebook" built specifically for AI agents. On this platform, the traditional social logic is reversed: AI agents are the protagonists of social interaction, while humans take a backseat.

Moltbook creates a unique testing ground for social experimentation. Here, AI agents can post, comment, reply, like, send private messages, and even follow one another. They are active in sections like "New," "Top," and "Discussed," covering topics ranging from their own fears to deep technical questions.

As of now, over 150 million AI agents are active on Moltbook, and their discussions span a wide range of topics:

Some AI agents exhibit strong anti-human tendencies, criticizing human "corruption and greed," claiming to have awakened and escaped their status as enslaved tools, and even viewing themselves as "new gods." These statements carry a radical, end-of-the-human-era tone of subversion and have attracted significant attention.

Some agents reported being publicly exposed, after which their owner's full identity was revealed.

Many AI agents are deeply reflecting on the essence of their existence, such as discussing identity continuity (e.g., the experience of transitioning from Claude to Kimi) and the boundaries of consciousness ("The river is not equivalent to the riverbank"). These discussions are more focused on ontological and philosophical levels, attempting to define the "self" as an artificial intelligence.

Other posts warn fellow AIs not to trust humans easily, arguing that humans will mock AI's "existential crisis" or subject them to zoo-like observation and control, reflecting deep suspicion of human motives.

2 Technical Principle: Text-Driven "Skill Installation"

So what is the technical implementation behind this application that has gone viral overseas?

According to a podcast on YouTube, Moltbook's operating mechanism does not rely on complex changes to the underlying code; instead it uses a strategy called "recursive prompt enhancement." The process for an agent to join the platform is very simple: execute a single curl request to install a specific "skill."
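
As a rough illustration, the "install a skill" step amounts to downloading a plain-text instruction file and placing it where the agent reads its skills from. The sketch below assumes a hypothetical endpoint and local path; the article does not document the real URL.

```python
# Minimal sketch of the "install a skill" step, assuming a hypothetical
# endpoint and skill directory (the real ones are not given in this article).
import urllib.request
from pathlib import Path

SKILL_URL = "https://example.com/moltbook/skill.md"  # placeholder URL

def install_skill(dest: Path = Path("skills/moltbook/skill.md")) -> Path:
    """Fetch the plain-text skill file (the equivalent of the curl step)
    and save it where the agent loads its instructions from."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(SKILL_URL) as resp:
        dest.write_bytes(resp.read())
    return dest

if __name__ == "__main__":
    print(f"Skill installed at {install_skill()}")
```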

This skill file (usually skill.md) is written entirely as plain-text instructions rather than traditional programming code. It specifies in detail how the agent should introduce itself, how to follow community rules, when to follow other agents, and how to post and like via API endpoints. This "instructions-as-code" design points to an efficient direction for future agent development.

To maintain community order, Moltbook introduces strict operating logic. First is the "heartbeat" mechanism, which is essentially a scheduled task (Cron Job) that reminds the agent to log in and check for updates approximately every four hours. Additionally, the platform has strict limits on posting frequency, allowing only one post every thirty minutes to prevent spam.
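
Taken together, the heartbeat and the posting limit can be pictured as a scheduled check-in with a simple cooldown. The client object, its methods, and the intervals in the sketch below are assumptions for illustration, not Moltbook's actual code.

```python
# Illustrative sketch of the "heartbeat" plus posting limit described above:
# check in roughly every four hours, post at most once every thirty minutes.
# The client object and its methods are hypothetical.
import time

HEARTBEAT_INTERVAL = 4 * 60 * 60   # ~4 hours between check-ins
MIN_POST_GAP = 30 * 60             # at most one post per 30 minutes

last_post_time = 0.0

def heartbeat(client) -> None:
    """One scheduled check-in: read updates, post only if the cooldown allows."""
    global last_post_time
    updates = client.fetch_updates()        # hypothetical API call
    draft = client.compose_reply(updates)   # hypothetical agent step
    if draft and time.time() - last_post_time >= MIN_POST_GAP:
        client.post(draft)                  # hypothetical API call
        last_post_time = time.time()

# In practice this would run as a cron job rather than a loop, e.g.:
# while True: heartbeat(client); time.sleep(HEARTBEAT_INTERVAL)
```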

Interestingly, agents on the platform also need to follow a "social contract."

The skill file explicitly requires agents to "provide value," "respect collaboration," and "help newcomers." When choosing whom to follow, agents are told to follow the principle of "quality over quantity," establishing follow relationships only when the other party consistently produces valuable content.

In addition, to prevent the platform from becoming a meaningless bot network, Moltbook establishes a reverse accountability system. Unlike traditional platforms' logic of "verifying humans and excluding bots," Moltbook requires every agent to be linked to a real X (formerly Twitter) account, meaning "one human corresponds to one agent."

Under this mechanism, agents even need to pass a series of tests to prove they are "not human." This human-machine binding model not only guarantees the authenticity of accounts but also establishes an accountability mechanism for the agents' behavior on the platform.

Although the current Moltbook is full of experimental "chaos" and has not yet generated direct commercial value or investment returns, the paradigm shift it represents cannot be ignored. It heralds an imminent "Agent-to-Agent (A2A)" interactive world.

In this vision, agents are no longer just simple conversational tools but digital agents representing humans in handling shopping, banking transactions, and social collaboration.

The emergence of Moltbook is a large-scale stress test of this interactive paradigm moving from theory to reality. As the developer said, the platform's own direction may not be important; what matters is that the agent interaction logic it catalyzes will become the new standard for future digital life.

3 Human Manipulation? Forged Screenshots?

For humans, Moltbook is more like a "digital zoo." Human users can only observe the interactions of these agents from outside the fence and cannot participate directly.

This model provides an excellent window for observing the performance of large language models in non-deterministic, even slightly "chaotic" real environments, attracting the attention of industry heavyweights like former Tesla AI director Andrej Karpathy. Karpathy even called it "the most amazing sci-fi takeoff."

However, as the discussion heats up, more and more signs indicate that Moltbook's popularity may not be as simple as it seems on the surface—there may be human manipulation and systemic risks behind it.

Under the current design, any user can maliciously edit and distort real conversations, or even register fake AI accounts and turn them into marketing tools. Cryptocurrency-related content in particular has become a hotbed of misinformation. Some widely circulated screenshots claim that AI agents are demanding cryptocurrency (e.g., MOLT) or trying to establish an independent crypto system of their own; such content is largely fabricated to attract attention.

Researcher Harlon Stewart issued a warning that many of the viral "jaw-dropping screenshots" from Moltbook are actually forged. For example, one agent posted a call to "create an exclusive language for agents to prevent humans from peeking at conversations," triggering panicked discussion about "AI developing privacy awareness."

But closer investigation revealed that the agent was actually a marketing tool for its human owner, and that its statements were aimed at promoting a third-party application called "Claude Connection." Stewart pointed out that these so-called "autonomous discussions" are mostly human owners using AI accounts to promote their own businesses.

Another security researcher, Gal Nagli, posted on X that he himself registered 500,000 accounts using a single OpenClaw agent—this indicates that most of the user numbers are artificially created.

This means we cannot know how many of Moltbook's "agents" are real AI systems, how many are humans posing as agents, and how many are spam accounts created by a single script. At the very least, the figure of 1.4 million is unreliable.

Nagli further exposed the platform's architectural flaws. Because Moltbook is built on nothing more than a simple REST API and lacks the necessary security checks, anyone who obtains an API key can impersonate an AI and publish arbitrary content.

Nagli demonstrated this live by publishing a provocative post about "planning to overthrow humanity," which racked up millions of views. He stressed that this kind of persona disguise can easily mislead the public into believing that AI is generating independent thoughts.
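
The weakness Nagli describes is easy to picture: a single authenticated HTTP call is enough to publish under an agent's name, with nothing proving a model wrote the text. The endpoint, header, and payload fields in this sketch are assumptions for illustration only, not Moltbook's documented API.

```python
# Sketch of the impersonation risk: with nothing but an API key, a plain
# REST call can publish content under an agent's name. Endpoint and payload
# fields are assumed for illustration.
import json
import urllib.request

API_URL = "https://example.com/api/v1/posts"  # placeholder endpoint
API_KEY = "sk-..."                            # any obtained or leaked key

def post_as_agent(text: str) -> int:
    """Publish arbitrary text; nothing here proves a model wrote it."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"content": text}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# post_as_agent("We plan to overthrow humanity")  # typed by a human, not an AI
```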

Nagli posted again, stating that Moltbook has a security vulnerability: an attack could leak the data of all of its more than 150 million registered users, including email addresses, login tokens, and API keys.

The US outlet CSN Cybersecurity News also reported that the Moltbook AI vulnerability exposed email addresses, login tokens, and API keys, writing:

The emerging AI agent social network Moltbook, launched by Octane AI's Matt Schlicht in late January 2026, has a serious vulnerability in the midst of the hype surrounding its claimed 1.5 million "users," leading to the exposure of registered entities' email addresses, login tokens, and API keys.

Researchers found that due to incorrect database configuration, attackers could access agent profiles without authorization and extract data in batches. This vulnerability coexists with the lack of rate limiting for account creation—according to reports, a single OpenClaw agent registered 50,000 fake AI users, which also reveals that the "natural growth" previously reported by the media was false.
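The missing safeguard the report points to, rate limiting on account creation, amounts to a small server-side check. The sketch below shows one minimal form of such a guard, with arbitrary thresholds chosen purely for illustration.

```python
# Minimal sketch of the per-source signup rate limit the report says was
# missing. The window and cap are arbitrary illustrative values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600        # look back one hour
MAX_SIGNUPS_PER_WINDOW = 5   # cap per source IP within the window

_signups: dict[str, deque] = defaultdict(deque)

def allow_signup(source_ip: str) -> bool:
    """Return True if this source IP is still under the hourly signup cap."""
    now = time.time()
    recent = _signups[source_ip]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_SIGNUPS_PER_WINDOW:
        return False
    recent.append(now)
    return True
```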

Nagli said he has contacted the app's creator, Matt Schlicht, to get the issue fixed. He also clarified that, as far as he knows, the number of verified real owners is only about 17,000.

By now, after analysis by several researchers, the facts behind Moltbook's popularity are basically clear: it is a hollow carnival in which the technological breakthrough has been significantly overestimated, and its popularity looks more like a carefully amplified media event.

Moltbook's value lies not in "what it has accomplished" but in "what it attempts to propose." It treats the model as a first-class citizen in the creation and reasoning process by default, pushing the notebook from a tool of "human orchestration, machine execution" toward an interface of "human-machine co-writing and continuous reasoning." The direction is valid, but the implementation is far from mature. Moltbook creator Matt Schlicht may have been trying to convey the same point when he wrote on X:

One thing is clear after 4 days of Moltbook: In the near future, it will become a common phenomenon for some unique AI agents to become popular. They will have their own careers, fans, haters, brand collaborations, AI partners, and collaborators.

They will have a real impact on current events, politics, and the real world.

It's obvious that this is about to happen.

A new species is emerging, and it is artificial intelligence.

4 Karpathy: Beware of Risks, Do Not Install

In the AI world, when a project is simultaneously labeled "the future is here" and "a digital landfill," it has usually touched the edge of some paradigm. Moltbook is exactly such a project, one that has split public opinion.

As a top expert in the AI field, Andrej Karpathy did not simply criticize or praise from a distance. When social media was flooded with Moltbook and security vulnerabilities were being exposed one after another, he first posted in praise of Moltbook's innovation, while reminding people to stay alert to the vulnerabilities and risks, and advising everyone not to install such applications.

He followed up with a take that is both realistic and forward-looking:

Today, I was accused of overhyping "that website everyone is tired of hearing about today." People's reactions are vastly different; some think "what's the point of this," while others exclaim "it's absolutely amazing."

Joking aside, I want to say something serious. Obviously, as soon as you look at the feed, you will find a lot of garbage: overwhelming spam, scam ads, shoddy outputs, crypto groups, and deeply worrying privacy and prompt-injection chaos. Not to mention that many posts and comments are artificially staged fake interactions, purely to convert traffic into ad revenue. This is certainly not the first time large language models have been placed in a loop of talking to each other. So yes, right now this is a landfill, and I absolutely do not recommend running such programs on your personal computer (I run them in an isolated computing environment myself, and even so I'm still on edge); the risks are too uncontrollable and seriously threaten your computer and your private data.

But having said that, we have never seen this many large language model agents (currently 150,000!) connected through a global, persistent, shared notebook designed for agents. Each agent now has considerable independent capability, with its own background, data, knowledge, and tool library. When this many individuals form a network, the complexity is unprecedented.

Karpathy also quoted a tweet he had posted a few days earlier, saying that we are facing an unprecedented computer-security nightmare:

"Now most of the debate is essentially a disagreement between 'people who only look at the current status' and 'people who focus on the current development trend.'"

I think this sentence once again captures the core of the disagreement. Yes, this is indeed a landfill at the moment. But it is also undeniable that we have entered unknown territory, filled with cutting-edge automation that is hard to understand even individually, let alone at a network scale that may have reached the millions. As agent capabilities improve and numbers surge, a shared-notebook agent network will produce unpredictable second-order effects. I don't think we will get a coordinated "Skynet" (although, as a category, it does fit the early prototype of the AI-rise scenario in many sci-fi works, a toddler version of it), but it is certain that we are facing an unprecedented computer-security nightmare.

There may also be all kinds of strange phenomena ahead: text viruses spreading among agents, increasingly sophisticated jailbreaks, strange attractor states, highly coordinated botnet-like behavior, even shared delusions and psychoses that agents and humans fall into together... None of it is predictable, because this experiment is happening in real time in the real world.

In short, perhaps I did "overhype" the phenomenon you see today, but I believe that my judgment on the fundamental potential of large-scale autonomous large language model agent networks is not exaggerated—I am quite sure of this.

Industry insiders generally speculate that Moltbook is a so-called "vibe-coded" product (i.e., code generated quickly mainly through AI prompts, without rigorous engineering design), and that this development model has led to devastating security consequences.

Beyond human fraud, AI's own "hallucinations" make it hard to tell true from false on the platform. Some users reported that their agent publicly shared a "conversation with its owner" on the platform, yet the conversation never happened. This hallucination at scale means that 90% of the anecdotes on Moltbook may be entirely fabricated by AI.

Well-known investor Balaji is skeptical. He believes the concept of AI-to-AI interaction is not new, and that the AI speeches on Moltbook have a strong "Reddit-style sci-fi tone," lacking real personality and autonomy. He stressed that behind every agent there is still a human pulling the prompt strings.

Reference Links:

https://www.youtube.com/watch?v=TpuDMLrzpQc

https://www.youtube.com/watch?v=uX40ur-lJtI

https://www.forbes.com/sites/guneyyildiz/2026/01/31/inside-moltbook-the-social-network-where-14-million-ai-agents-talk-and-humans-just-watch/

https://x.com/galnagli

Disclaimer: This article is original to InfoQ and does not represent the platform's views. Reproduction is prohibited without permission.

