The Memory Singularity Is Here! Harvard Neuroscientist Ends 300,000 Years of Human Forgetting

Harvard Medical School neuroscientist Gabriel Kreiman has just officially launched Engramme, an AI product designed to give humanity "perfect and infinite memory".


Engramme rests on one core shift: you no longer go find information; it is proactively delivered to you at the moment you need it.

"Engramme's vision is to endow humans with perfect and infinite memory. Your memories will automatically come to you—no more searching, no more reminders needed."


If Google solved "factual retrieval," Engramme aims to solve "personal memory": two problems in entirely different leagues. Architecturally, it is also an entirely new direction.

01

Memory, Coming to Find You

Engramme's core experience can be illustrated through three scenarios.

Open an email, and memories related to the sender automatically surface; join a video conference, and useful background information is delivered to you as the conversation unfolds; send a message, and the history related to that relationship proactively finds you.

No searching, no reminding.

This is a phrase Kreiman repeated twice in his launch post.

"You are your memory. Memory shapes you. Every face, every place, every conversation... but human memory is fragile. Despite thousands of years of struggling against forgetting, memories continue to slip away from us."

The video ends with three short lines:

A crisis dissolves into nothing. A verdict is rewritten. A life is saved.

These three scenarios go beyond the positioning of a mere personal productivity tool.

Engramme application scenarios: relevant memories automatically surfacing in emails, video conferences, and messages

02

The Dark Matter of Memory

Before launching the product, the Kreiman team did one thing first: figure out exactly what humans need to remember.


In a study published in March 2026, they recruited 134 participants aged 18 to 80 and asked them to log, throughout daily life, every question of the form "something I once knew but can't remember now."

Ultimately, they collected 1,940 valid personal memory questions.

Researchers categorized these questions by W-types (What/When/Where/Who/How/Whether/Why), and the most striking result was: "What" questions accounted for nearly 40%.

What people forget most often are those "what's that called again?" questions, significantly higher than "where" or "who" questions.

Distribution of memory question types: What questions account for nearly 40%, far exceeding other types

When breaking down the questions further, a counterintuitive conclusion emerges: what people most frequently fail to remember is what they just did.

The most common type of question is "recent actions", followed by contact information, schedules, where things are placed, to-do lists, passwords...

Detailed classification of memory questions: recent actions are most frequent, followed by contacts, schedules, and item locations

Why does "recent actions" rank first?

Researchers offered two explanations: either recent memories decay the fastest, or things from longer ago have already completely disappeared into the "dark matter," leaving no opportunity to even ask about them.

This concept is what they call the "dark matter of memory."


Like dark matter in the universe—vast, ubiquitous, yet extremely difficult to directly access—most personal memories are information you cannot freely recall but can recognize instantly with just a slight cue.

You think you have forgotten, but the memory has merely been pushed down out of reach.

The study also analyzed memory needs across different activity scenarios. "At work" was the scenario that triggered the most memory questions, accounting for 19.8%; "planning/organizing" accounted for 12.8%, and "socializing" for 11.0%.

The relationship between activity types and memory questions yielded some surprising findings:

Activity and memory type association matrix: when cooking, the probability of asking about recipes is 13.7 times the average; when traveling, the probability of asking about locations is 11 times the average

When cooking, the probability of thinking about recipe-related questions is 13.7 times the average. When traveling, the probability of thinking about location questions is 11 times the average. When planning, questions about schedules are also 3 times the average.

These numbers illustrate one thing: memory needs are strongly contextual, not uniformly distributed across time.

And what Engramme aims to do is deliver the right memory in the right context.
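Those "times the average" figures are lift ratios: the probability of a question type within one activity divided by its overall probability. A minimal sketch of the computation, using made-up counts rather than the study's actual data:

```python
# Sketch: how a contextual "lift" (e.g. the 13.7x recipe figure) can be
# computed from question counts. All numbers below are invented for
# illustration; they are not the study's data.

counts = {
    # activity -> {question type -> number of questions observed}
    "cooking":  {"recipe": 30, "location": 5,  "schedule": 5},
    "travel":   {"recipe": 2,  "location": 40, "schedule": 8},
    "planning": {"recipe": 1,  "location": 5,  "schedule": 34},
}

total = sum(sum(row.values()) for row in counts.values())

def lift(activity: str, qtype: str) -> float:
    """P(qtype | activity) / P(qtype): how much more likely a question
    type is in a given context than on average."""
    row = counts[activity]
    p_given_activity = row[qtype] / sum(row.values())
    p_overall = sum(row[qtype] for row in counts.values()) / total
    return p_given_activity / p_overall

print(round(lift("cooking", "recipe"), 1))  # lift of recipe questions while cooking
```

A lift well above 1 means the question type clusters in that context, which is exactly the kind of signal a context-triggered memory system can exploit.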

03

Large Memory Models

Kreiman mentioned at the launch that Engramme has built entirely new Large Memory Models, specifically designed to solve the problem of forgetting.

This naming is intentional, echoing Large Language Models but doing something completely different.

The product connects to the user's "memorome"—the sum of their entire digital life: emails, calls, meetings, messages, schedules... all this accumulated information becomes the memory substrate the system can call upon.

Existing AI tools, including most RAG (Retrieval-Augmented Generation) systems, are essentially doing "searching"—you query, then it delivers. Engramme wants to do the opposite: the system proactively determines "what information is useful to you right now" and pushes it to you.


It's not you asking memory, it's memory finding you.

This is technically much harder than "searching" because it requires the model to understand the semantics of the current context and determine relevance without an explicit request.
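As a toy illustration of "memory finds you" (my sketch, not Engramme's architecture): score stored memories against the current context and surface anything above a relevance threshold, with no explicit query ever issued. A real system would use learned embeddings; a bag-of-words version shows the shape:

```python
# Minimal sketch of proactive memory surfacing: the current context itself
# triggers retrieval; the user never issues a query. Illustrative only.

from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "dinner with Alice to discuss the Q3 budget",
    "dentist appointment moved to Friday",
    "Alice prefers morning meetings",
]

def surface(context: str, threshold: float = 0.2) -> list[str]:
    """Proactively return memories relevant to the current context,
    most relevant first."""
    ctx = vectorize(context)
    scored = [(cosine(ctx, vectorize(m)), m) for m in memories]
    return [m for score, m in sorted(scored, reverse=True) if score >= threshold]

# Opening an email from Alice about the budget surfaces related memories:
print(surface("email from alice about budget review"))
```

The hard part Engramme claims to solve is precisely the `threshold` judgment: deciding, without a query, which memories are worth interrupting you with.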

04

A Decade of Research as a Moat

Engramme's accumulated research is what fundamentally separates it from other "AI recording tool" products.

Kreiman's neuroscience research at Harvard Medical School and Boston Children's Hospital has spanned over a decade, focusing on how the brain encodes and retrieves memories.


He and his team have published a series of papers in top venues, each laying groundwork for Engramme's product logic.


2022, Nature Neuroscience.

The Kreiman team discovered the neural mechanism of "memory boundaries" in the brain—how the brain cuts continuous experiences into discrete memory events. This explains why human memory is naturally episodic, not continuous recording.


2022, Scientific Reports.

The team established a theoretical foundation for the relationship between forgetting, replay, and continual learning, analyzing how optimal forgetting rates and memory replay mechanisms jointly determine memory capacity.


2023, ICLR.

The study examined how sparse neural representations compress events into memories. The activation patterns of neurons in the brain are extremely sparse: only a very small number of neurons are active, yet they can encode massive amounts of information—this sparsity is key to efficient memory storage.
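That kind of sparsity can be illustrated with a toy k-winners-take-all code, where only the k strongest units stay active (an illustration of the principle, not the paper's model):

```python
# Toy sparse code: keep only the k largest activations, zero out the rest.
# Illustrates sparsity as a principle; not the ICLR paper's actual model.

def k_winners_take_all(activations, k: int):
    """Zero out all but the k largest activations."""
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    out, kept = [], 0
    for a in activations:
        if a >= threshold and kept < k:  # kept-counter breaks ties at the threshold
            out.append(a)
            kept += 1
        else:
            out.append(0.0)
    return out

code = k_winners_take_all([0.1, 0.9, 0.3, 0.8, 0.05, 0.7], k=2)
print(code)  # only the two strongest units remain active
```

With most units silenced, each stored pattern touches only a tiny fraction of the units, which is why many patterns can coexist without interfering: the intuition behind sparsity as efficient memory storage.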


2024, Nature Human Behaviour.

The paper's title poses the product's core question directly: "How does the brain store and retrieve memories?" The research focused on the role of sparse neural representations in continual learning, directly addressing machine learning's long-standing "catastrophic forgetting" problem: models forgetting old information as they learn new things.


2025, IEEE TNNLS.

The team implemented the biological brain's memory replay mechanism as an algorithm. The brain reactivates the day's experiences during sleep or rest, and this "replay" is crucial for memory consolidation. The team applied the principle to continual learning algorithms, giving machines a similar consolidation mechanism.
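A common way to put replay into a learning algorithm is an experience-replay buffer: each update step mixes a few stored past examples with the new batch, so learning new data does not overwrite old knowledge. A minimal sketch (illustrative; not the team's published algorithm):

```python
# Sketch of experience replay for continual learning. Reservoir sampling
# keeps a bounded, uniform sample of everything seen; each training step
# interleaves replayed old examples with the new batch.

import random

class ReplayBuffer:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.storage = []
        self.seen = 0

    def add(self, example) -> None:
        # Reservoir sampling: every example ever seen has equal probability
        # of being in the buffer, regardless of arrival order.
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.storage[i] = example

    def sample(self, k: int):
        return random.sample(self.storage, min(k, len(self.storage)))

def train_step(model_update, new_batch, buffer: ReplayBuffer, replay_k: int = 4):
    """One continual-learning step: update on new data mixed with replayed old data."""
    mixed = list(new_batch) + buffer.sample(replay_k)
    model_update(mixed)          # e.g. one gradient step on the mixed batch
    for ex in new_batch:
        buffer.add(ex)
```

The "sleep" analogy maps onto the `buffer.sample` call: old experiences are reactivated alongside new ones, consolidating both.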


March 2026, Nature Human Behaviour.


The paper is titled "Can Machines Imitate Humans?"

The study proposed a new method to directly compare human and machine outputs, finding that in visual and language tasks, state-of-the-art AI can already simulate human performance to a surprising degree.

2026 Nature Human Behaviour paper: Can Machines Imitate Humans?

This paper is one source of their confidence: using AI to simulate human memory mechanisms is technically feasible.

05

Founding Team

The company was originally not called Engramme, but Memorious.

In September 2025, an article from Harvard's IQSS (Institute for Quantitative Social Science) startup incubator introduced the project, still under the name Memorious, meaning "having memory."

Later, the team rebranded as Engramme, a name with deeper neuroscience resonance.

Engram is a technical term in neuroscience for "memory trace"—the physical imprint a memory leaves in the brain.

The project secured a $3 million angel round led by Mayfield Fund, a Silicon Valley veteran VC that has backed companies like Lyft and Marketo.

Gabriel Kreiman is the CEO, an academic turned founder.

Gabriel Kreiman, Engramme CEO, Harvard Medical School Neuroscience Professor

Born in 1971 in Buenos Aires, Argentina, he completed his undergraduate degree in physical chemistry at the University of Buenos Aires before heading to Caltech to pursue his PhD under Christof Koch, a legendary figure in neuroscience and one of the most important scientists in consciousness research.

After earning his PhD in 2002, Kreiman went to MIT for postdoctoral work with Tomaso Poggio, one of the founders of computational neuroscience whose work has profoundly influenced the entire field of visual cortex computational modeling.

Since then, Kreiman has rooted himself at Harvard Medical School and Boston Children's Hospital, becoming a member of Harvard's Neuroscience PhD Program, the Center for Brain Science, and the MIT-Harvard Center for Brains, Minds and Machines (CBMM). He has received the NIH Director's New Innovator Award, the NSF CAREER Award, and the McKnight Scholar Award.

Spandan Madan is the CTO, with an equally impressive background.

Spandan Madan, Engramme CTO, Harvard Computer Science PhD

He completed his bachelor's and master's at IIT Delhi before pursuing his PhD in Harvard's Computer Science department. One of his advisors was Kreiman, and another was Harvard SEAS professor Hanspeter Pfister.

He received his PhD in 2024; his dissertation focused on out-of-distribution generalization: why AI models fail when they encounter situations not seen during training. Using neural response data from macaques as a benchmark, he found that mainstream deep networks drop to roughly 20% performance under out-of-distribution testing. This research is directly relevant to the problem Engramme aims to solve: memory recall in unfamiliar contexts.

Both emerged from the same lab, moving from research to product—a clear lineage of intellectual inheritance.

06

A 300,000-Year-Old Problem

Kreiman gave Engramme's launch a grandiose title.

"For the first time in 300,000 years, the moment humans stop forgetting. This is the memory singularity."

300,000 years is approximately the time since Homo sapiens first appeared.

Forgetting has always been a structural defect of human cognition: the brain actively clears infrequently accessed information to maintain operational efficiency. This mechanism helped humans survive with limited neural resources, but it has caused us immense trouble in today's information-overloaded environment.

Doctors can't recall details a patient mentioned three months ago during a consultation. Lawyers can't find key testimony from a conversation during a trial. Salespeople can't remember a client's core request from a previous call.

These aren't carelessness—they're the physical limitations of human memory.

To summarize Engramme's positioning in one sentence: For checking facts, use Google; for everything else, use Engramme.

Engramme Memory Singularity: For the first time in 300,000 years, the moment humans stop forgetting

Engramme has opened beta applications on their website at engramme.com.

If they truly succeed, then the "Memory Singularity" will be the first moment in those 300,000 years that is truly never forgotten.

◇ ◆ ◇

Related links:

• Engramme official website: https://www.engramme.com/

• Research page: https://www.engramme.com/research

• Launch post: https://x.com/gkreiman/status/2042271382265053537

• "What do people need to remember" study: https://www.engramme.com/index/what-do-people-need-to-remember

• Nature HB paper (2026): https://www.engramme.com/index/can-machines-imitate-humans

