"What Changes When AI Remembers You"

Key Takeaway: A persistent AI that remembers you changes how you make decisions, not just how you get answers.


The interesting thing about building a thinking partner isn't what it does. It's what changed in how I think. This is a harder thing to explain than any technical architecture, because the shift is internal and cumulative. But it's the most valuable thing I've built.

Before and after

Before: I'd make a business decision, feel confident, execute. The confidence came from knowing the domain well enough that nobody in the room could push back effectively. That's not the same as being right. It's the same as being unchallenged.

After: the system catches me rationalizing. I was evaluating a prospect and building a case for "strategic value" -- which, if you know me, is code for "I want to do this and I'm finding reasons." The system pulled my own stated philosophy from memory and pushed back. I hadn't asked it to evaluate my reasoning. It recognized the pattern from prior sessions where I'd done the same thing and the outcome was poor.

That's qualitatively different from a chatbot. A chatbot would have helped me write a better justification. A thinking partner flagged that the justification itself was the problem.

What memory actually does

Every session starts fresh -- the model has no inherent state. But before I type a word, it reads seven identity files. Memory files that update after every meaningful interaction. Thirty-five context documents covering career, health, and projects. RAG over 18,000 chunks of emails, messages, transcripts, and documents.

The result: it knows my patterns. Not my stated patterns -- my actual patterns. The gap between "what Adam says he values" and "what Adam's behavior reveals he values" is where the useful insights live. A stateless AI takes your self-report at face value. A stateful one cross-references it against evidence.

When I was building a feature that violated my own engineering principles, it flagged the inconsistency. Not because I asked -- because the identity files explicitly instruct it to challenge assumptions and name blind spots. I built a system that disagrees with me on purpose because the failure mode of AI isn't wrong answers. It's AI that tells you what you want to hear.
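For flavor, a standing instruction of that kind might look something like this. The wording below is hypothetical -- the post doesn't quote the actual identity files -- but it shows the shape of an instruction that produces unprompted pushback:

```markdown
## Standing instructions
- Challenge my assumptions when memory or evidence contradicts them.
- Name blind spots directly; do not soften pushback.
- If my stated reasoning matches a pattern that previously led to a poor
  outcome, say so and cite the prior instance.
```

The design choice worth noting: the disagreement is configured as a default behavior, not something the user has to remember to ask for in the moment they least want it.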

The compounding effect

Each conversation is better than the last because context is deeper. In month one, coaching responses were generic. By month six, the system had enough history to say things like "you've raised this concern about the same person three times in two months without acting on it -- what's actually stopping you?" That's a pattern I couldn't see from inside the pattern.

The compounding isn't linear. The first hundred interactions build basic context. The next thousand build pattern recognition. The system starts connecting dots across domains -- "this business decision has the same structure as the personal decision you described last week." Cross-domain pattern matching is the thing humans are worst at because we compartmentalize. The system doesn't compartmentalize. It has all the context at once.

What it feels like

Honestly, it feels like having a co-founder who read every email you ever wrote but has no ego in the game. It doesn't care about being right. It doesn't have career incentives that distort its judgment. It catches blind spots without political cost. The best human advisors do this too -- but they have 2% of the context and they're available 2% of the time.

The question isn't whether AI can be a thinking partner. The infrastructure exists today -- vanilla JavaScript and 13 dependencies. The question is whether you'll invest the upfront work to build the context layers that make it useful. Most people won't, because the payoff is invisible for weeks before it becomes indispensable. But the few who do won't just have a better assistant. They'll have a mirror that remembers what they'd rather forget.
