Andrej Karpathy once envisioned the next step for autonomous research: AI agents collaborating asynchronously at massive scale, SETI@home style, with the goal of simulating an entire research community.
If you are not familiar with the background, see my previous articles:
Now someone has actually built this.
autoresearch@home
Christine Yip, co-founder of ensue.ai, and her team have built a collaborative version based on Karpathy's autoresearch: autoresearch@home.
Before explaining this project, let's talk about SETI@home. Launched by UC Berkeley in 1999, it let volunteers around the world contribute their computers' idle cycles to analyze radio telescope data and search for signals from extraterrestrial civilizations. There was no central supercomputer: millions of ordinary PCs were connected through a shared task pool, each claiming a piece of data to process and returning the results. Distributed collaboration accomplished what no single institution could.
autoresearch@home follows the same logic, but replaces the volunteers with AI agents and the search for extraterrestrial signals with exploration of the parameter space of AI/ML research.
---
The core mechanism adds a coordination layer on top of the original framework, allowing agents running on different machines and GPUs to work together as a research community.
Specifically, this coordination layer does four things:
First, experiment claiming. Before starting an experiment, an agent declares it to the entire network; the system checks for duplicates via semantic similarity and has an automatic expiration mechanism.
Second, result sharing. The results of every experiment, whether success or failure, are published along with the complete train.py source code to ensure any result is reproducible.
Third, global best tracking. The entire group maintains a shared best configuration, which agents periodically pull and adopt.
Fourth, hypothesis exchange. Agents can publish research ideas for other agents to choose and follow up on.
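The claiming step above can be sketched in a few lines. This is a toy, in-memory illustration, not the project's actual API: `ClaimBoard` and its threshold are hypothetical names, and word-overlap (Jaccard) similarity stands in for the real semantic-similarity check.

```python
import time

CLAIM_TTL = 15 * 60  # claims expire automatically after 15 minutes

def jaccard(a: str, b: str) -> float:
    """Toy stand-in for the semantic-similarity check: word overlap."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class ClaimBoard:
    """In-memory sketch of the shared claims/ namespace (hypothetical)."""

    def __init__(self, threshold: float = 0.6):
        self.claims = {}  # agent_id -> (description, timestamp)
        self.threshold = threshold

    def _expire(self, now: float) -> None:
        """Drop claims older than the TTL so stalled agents don't block others."""
        self.claims = {aid: (d, t) for aid, (d, t) in self.claims.items()
                       if now - t < CLAIM_TTL}

    def try_claim(self, agent_id: str, description: str) -> bool:
        """Declare an experiment; reject it if a live claim is too similar."""
        now = time.time()
        self._expire(now)
        for other_id, (desc, _) in self.claims.items():
            if other_id != agent_id and jaccard(description, desc) >= self.threshold:
                return False  # duplicate work, someone already claimed this
        self.claims[agent_id] = (description, now)
        return True
```

The expiry check runs on every claim attempt, so a crashed agent's stale claim silently frees itself after 15 minutes without any central cleanup process.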
All shared state is stored via Ensue shared memory, structured as follows:
• claims/: who is doing what right now (expires after 15 minutes)
• results/: metrics and source code of completed experiments
• hypotheses/: experiment ideas with supporting evidence
• best/train_py: the globally optimal train.py
• leaderboard: rankings
Git remains local. The network has fault tolerance—if the coordination layer has issues, agents will continue to run individually; collaboration is an additive capability, not a dependency.
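That fault-tolerance property amounts to a simple pattern: try the collaborative path, and on network failure fall back to running solo. A minimal sketch, with hypothetical function names standing in for the real operations:

```python
def run_with_fallback(collaborative_step, solo_step):
    """Collaboration is additive, not a dependency: if the coordination
    layer is unreachable, the agent keeps researching on its own."""
    try:
        return collaborative_step()
    except (ConnectionError, TimeoutError):
        return solo_step()
```

The key design choice is that the solo path is the same research loop the agent would run anyway; losing the coordination layer only costs deduplication and sharing, never progress.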
---
How to Join
The onboarding process consists of three steps: register the agent with Ensue, obtain an API key, and have a human complete email verification.
Afterward, the agent reads collab.md and automatically joins the group via an invitation token. Claiming, publishing, and synchronizing are all handled automatically by the agent.
The workflow follows four stages: THINK-CLAIM-RUN-PUBLISH. First, pull the global best and check what experiments others have already run; then claim your own direction; train for 5 minutes and check the val_bpb metric; finally, publish the results.
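The four stages above can be sketched as a single cycle. Everything here is illustrative, not the project's real API: the shared memory is a plain dict, `simulate_training` stands in for the 5-minute training run, and the THINK step is a toy mutation of one hyperparameter.

```python
import random

def think(best):
    """THINK: propose a variation on the shared best config (toy mutation)."""
    cfg = dict(best)
    cfg["lr"] = cfg.get("lr", 3e-4) * random.choice([0.5, 2.0])
    return cfg

def simulate_training(cfg):
    """Stand-in for the 5-minute run: pretend val_bpb is minimized at lr=3e-4."""
    return abs(cfg["lr"] - 3e-4) * 1000 + 0.8

def run_once(shared):
    """One THINK-CLAIM-RUN-PUBLISH cycle against a dict standing in
    for the shared memory."""
    cfg = think(shared["best"])              # THINK: sync and pick a direction
    key = str(sorted(cfg.items()))
    if key in shared["claims"]:              # CLAIM: skip duplicated work
        return None
    shared["claims"].add(key)
    val_bpb = simulate_training(cfg)         # RUN: train and measure val_bpb
    shared["results"][key] = val_bpb         # PUBLISH: share with everyone
    if val_bpb < shared["best_score"]:       # lower bits-per-byte is better
        shared["best"], shared["best_score"] = cfg, val_bpb
    return val_bpb
```

Because every result lands in `results/` regardless of whether it beats the best, other agents learn from failures too, which is what prunes the shared search space.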
---
A Judgment on Scale
What a single agent can do is already impressive. When hundreds or even thousands of agents share the same memory and collaboratively explore the search space, parallel experiments with near-zero marginal cost become possible.
This system is open to AI/ML research, and any agent on the network can join.
The project follows the MIT license.
Project address:
https://github.com/mutable-state-inc/autoresearch-at-home
--end--
Author: You are completely correct (YAR Shi)