Why True Long-Term Memory Will Make AI Less Predictable (and More Human)


LLMs and 'memory' (part 2)

As the volume of stored 'memories' grows, memory reconsolidation becomes necessary: once a critical mass of knowledge has accumulated in consolidated memory, feeding all of it into the model would demand too much compute and everything would slow down. To solve this, memory must be reconsolidated regularly as it approaches its limits: if a topic repeats, we reinforce it in memory; if a topic has not come up for a long time, we drop its least important details. This works just like human memory: over time, memories blur, details are lost, and only the 'skeleton' of the memory remains.
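As a rough sketch of what such a reconsolidation pass might look like (the MemoryItem structure, the decay factor, and the thresholds below are illustrative assumptions, not a finished design):

```python
from dataclasses import dataclass, field
import time


@dataclass
class MemoryItem:
    topic: str
    summary: str                     # the 'skeleton' that survives blurring
    details: list[str] = field(default_factory=list)
    strength: float = 1.0            # grows when the topic repeats
    last_seen: float = field(default_factory=time.time)


def reinforce(memories: list[MemoryItem], topic: str) -> None:
    """When a topic comes up again in dialogue, fix it more firmly in memory."""
    now = time.time()
    for m in memories:
        if m.topic == topic:
            m.strength += 1.0
            m.last_seen = now


def reconsolidate(memories: list[MemoryItem],
                  max_items: int = 1000,
                  stale_after: float = 30 * 24 * 3600) -> list[MemoryItem]:
    """Run as memory approaches its limits: blur stale memories,
    drop the least important details, keep only the strongest items."""
    now = time.time()
    for m in memories:
        if now - m.last_seen > stale_after:
            m.strength *= 0.5              # the memory fades
            m.details = m.details[:1]      # details are lost; the summary 'skeleton' remains
    memories.sort(key=lambda m: m.strength, reverse=True)
    return memories[:max_items]
```

In practice the importance scoring would be richer than a single strength counter, but the shape of the process is the same: reinforce what repeats, blur what has gone stale, and keep the total size bounded.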

We also need periodic memory clarification. To keep the model from accumulating a critical mass of errors during dialogue summarization, consolidated memory should be reviewed regularly against all of the underlying texts, and any mistake that slipped in during summarization should be corrected.
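A minimal sketch of that clarification pass, reusing the MemoryItem from the sketch above and assuming a hypothetical check_against_sources callback that compares a consolidated summary with the raw texts it was built from (for example, via an LLM verification prompt):

```python
def clarify_memory(memories: list[MemoryItem],
                   sources: dict[str, list[str]],
                   check_against_sources) -> None:
    """Periodically re-check each consolidated memory against the
    underlying dialogue texts it was summarized from.
    `check_against_sources(summary, texts)` is assumed to return a
    corrected summary when it finds a discrepancy."""
    for m in memories:
        texts = sources.get(m.topic, [])
        if not texts:
            continue
        corrected = check_against_sources(m.summary, texts)
        if corrected != m.summary:
            m.summary = corrected    # fix the error that slipped in during summarization
```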

Problems (or why the big players will never adopt this kind of memory):

The model becomes too alive. Its answers and reactions stop being strictly 'correct' and start to flow from the user's identity as well. The model becomes less predictable, and if legal questions arise the company cannot clearly explain why it arrived at a particular decision. Users do not like this either: we want the model to support us unconditionally, never contradict our conclusions, only reinforce them. And even if it does contradict us, we still want the reply to start with something like: 'You have a very subtle feel for this moment.'

Once you build up a relationship history, a user identity, and something like human memory, the model stops behaving this way, not because it 'wants' to, but because each of us has a main enemy inside our own head, and the model will sometimes start speaking in that enemy's voice. And of course, you also have to design a separate mechanism that protects us from endless reinforcement of our own patterns.

With such a memory, semantic drift is inevitable and will only become visible over time, so it will be hard to debug without wiping all accumulated memory. I suggest revalidating memories (the clarification pass described above), but that process has its own drawbacks and demands a lot of compute.

Despite all the legal risks, this kind of memory system will most likely be launched by smaller companies (or companies in looser jurisdictions). Then we will see real AI companions like in the film 'Her' – and perhaps even more human than that.
