How AI Agents Remember Things

How Does AI Memory Work?

Understanding AI Agents and Memory

  • AI agents start without memory: each conversation begins with no prior context. OpenClaw addresses this with plain markdown files and a few specific read/write mechanisms.
  • LLM calls are stateless; nothing persists between calls, so every new interaction starts from a blank slate.

Components of Memory Systems

  • Memory systems consist of session memory (short-term) and long-term memory. Session memory retains the history of a single conversation.
  • Compaction occurs when the context window limit is reached, summarizing conversation history into essential information to maintain continuity.

Strategies for Compaction

  • Three strategies can trigger compaction:
  • Count-based: fires when the conversation exceeds a token budget or turn count.
  • Time-based: fires after a period of user inactivity.
  • Event-based/semantic: fires when a task or topic completes, though detecting completion reliably is hard to implement.
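The count-based strategy is the simplest to sketch. The following is a minimal illustration, not any particular product's implementation: the token estimate is a crude word-count approximation (real systems use the model's tokenizer), and the thresholds and function names are assumptions.

```python
def estimate_tokens(messages):
    """Crude token estimate: roughly 1.3 tokens per whitespace-separated word."""
    words = sum(len(m["content"].split()) for m in messages)
    return int(words * 1.3)

def should_compact(messages, max_tokens=8000, max_turns=40):
    """Count-based trigger: compact when either limit is exceeded."""
    return len(messages) > max_turns or estimate_tokens(messages) > max_tokens

def compact(messages, summarize):
    """Replace older history with a summary, keeping the last few turns verbatim."""
    head, tail = messages[:-4], messages[-4:]
    summary = {"role": "system", "content": summarize(head)}
    return [summary] + tail
```

The key design point is that `compact` preserves the most recent turns verbatim, so the conversation still reads naturally right after the older history collapses into a summary.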

Long-Term Memory Framework

  • Long-term memory persists beyond sessions, akin to organizing notes on a desk versus filing them away in cabinets.
  • Google’s framework categorizes agent memory into three types:
  • Episodic: records of past interactions with the agent.
  • Semantic: facts about topics and the user, such as stated preferences.
  • Procedural: workflows and routines for accomplishing tasks.
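The three categories map naturally onto simple data structures. This is an illustrative sketch only; the class name and fields are mine, not part of Google's framework or any library.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    episodic: list = field(default_factory=list)    # past interactions, in order
    semantic: dict = field(default_factory=dict)    # facts and preferences by key
    procedural: dict = field(default_factory=dict)  # named workflows as step lists

memory = AgentMemory()
memory.episodic.append({"date": "2026-02-17", "summary": "Discussed compaction"})
memory.semantic["preferred_language"] = "Python"
memory.procedural["deploy"] = ["run tests", "build image", "push to registry"]
```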

Effective Memory Management

  • An effective memory system must filter key details from conversations while consolidating similar entries to avoid redundancy.
  • The system should also allow updates to previous knowledge as user preferences change over time, preventing contradictory information.
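Both requirements reduce to update-over-append semantics. A minimal sketch, with illustrative function names: updating a key replaces the stale value rather than accumulating contradictions, and consolidation collapses duplicate notes about the same key.

```python
def remember(store: dict, key: str, value: str) -> None:
    """Upsert a fact: a changed preference replaces the stale one."""
    store[key] = value

def consolidate(entries: list) -> dict:
    """Merge a stream of (key, value) notes, keeping the latest value per key."""
    merged = {}
    for key, value in entries:
        merged[key] = value
    return merged

facts = {}
remember(facts, "favorite_editor", "Vim")
remember(facts, "favorite_editor", "VS Code")  # preference changed over time
# facts now holds only the current value, with no contradiction retained
```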

Implementation Examples

  • Various storage solutions exist for memories, ranging from simple markdown files to advanced vector databases that can be searched efficiently.

OpenClaw's Memory Model

  • OpenClaw exemplifies practical agent memory with three core components:
  • Memory MD file: a semantic store of stable facts and user identity, capped at roughly 200 lines.
  • Daily logs: append-only records of recent context, organized by day; these serve as episodic memory.
  • Session snapshots: the last meaningful messages of a session, captured via specific commands, with non-essential data such as tool calls excluded.
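The daily-log and line-cap ideas can be sketched with plain files. The directory layout and helper names below are mine; only the append-only-by-day pattern and the ~200-line cap come from the description above.

```python
from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")

def append_daily_log(entry: str) -> Path:
    """Episodic memory: append an entry to a log file named after today's date."""
    MEMORY_DIR.mkdir(exist_ok=True)
    log = MEMORY_DIR / f"{date.today().isoformat()}.md"
    with log.open("a") as f:
        f.write(f"- {entry}\n")
    return log

def check_memory_cap(path: Path, max_lines: int = 200) -> bool:
    """The semantic store should stay under the line cap to bound prompt size."""
    return len(path.read_text().splitlines()) <= max_lines
```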

Understanding OpenClaw's Memory Mechanisms

Overview of Memory Systems

  • OpenClaw's memory is essentially markdown files, but the files alone accomplish nothing; mechanisms are needed to read and write them at the right times.
  • The first mechanism is bootstrap loading: at session start, the memory MD file is automatically injected into the prompt of every new conversation, giving the agent its context up front.
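Bootstrap loading amounts to prepending the memory file to the system prompt. A minimal sketch, assuming a file named `memory.md`; the function name and section header are illustrative.

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, memory_path: str = "memory.md") -> str:
    """Inject long-term memory into the system prompt at session start."""
    path = Path(memory_path)
    if not path.exists():
        return base_prompt  # first run: no memory accumulated yet
    return f"{base_prompt}\n\n## Long-term memory\n{path.read_text()}"
```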

Key Mechanisms in Memory Management

  • The second mechanism is the pre-compaction flush, which is count-based: when the conversation nears the context window limit, an invisible agentic turn prompts the LLM to save important information before it would be lost.
  • On receiving this message, the agent writes a checkpoint to the daily log, turning a potentially destructive operation (compaction) into a safeguard against lost context.
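A sketch of the flush trigger: once token usage crosses a threshold short of the hard limit, the system injects a hidden instruction asking the model to persist anything durable. The threshold, message text, and `visible` flag are illustrative assumptions.

```python
FLUSH_PROMPT = (
    "Context is about to be compacted. Write any durable facts or open "
    "task state to the daily log now."
)

def maybe_flush(used_tokens: int, limit: int, threshold: float = 0.8):
    """Return a hidden flush instruction once usage crosses the threshold."""
    if used_tokens >= limit * threshold:
        return {"role": "system", "content": FLUSH_PROMPT, "visible": False}
    return None
```

Firing before the hard limit matters: the model still has enough remaining context to act on the instruction before compaction discards the history.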

Session Management Techniques

  • The third mechanism is session snapshots, saved whenever a new session starts (via the /new or /reset commands). Each snapshot captures the meaningful messages from the previous conversation and gets a descriptive file name.
  • Lastly, users can request memory retention directly with phrases like "remember this." The agent decides how to categorize the information itself, with no special hooks required.
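The snapshot step can be sketched as: keep the last meaningful user/assistant messages, drop tool calls, and derive a descriptive slug for the file name. The slug logic and function signature are illustrative.

```python
import re

def snapshot(messages: list, keep: int = 10):
    """Return a (file name, messages) pair for the session snapshot."""
    meaningful = [m for m in messages if m.get("role") in ("user", "assistant")]
    tail = meaningful[-keep:]
    first = tail[0]["content"].splitlines()[0] if tail else "empty-session"
    slug = re.sub(r"[^a-z0-9]+", "-", first.lower()).strip("-")[:40] or "session"
    return f"{slug}.md", tail
```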

Conclusion on OpenClaw's Memory System

  • Overall, OpenClaw's memory system comes down to markdown files plus well-chosen moments for writing to them: semantic memory in the memory MD file, episodic memory in daily logs and session snapshots. This structured approach keeps context manageable across sessions.

Video description

How do AI agents remember things between sessions? Every agent forgets everything when a conversation ends, so how do the best ones seem to know you? I break down the memory architecture behind real AI agents, using OpenClaw (an open-source AI assistant) as a reference implementation. You'll see how LLM agents write, store, and load persistent memory using plain markdown files, and the four mechanisms that keep context across sessions, including context window management, bootstrap loading, and pre-compaction memory flush.

*What you'll learn:*
- The pre-compaction flush: how agents save context before it's lost
- Four memory mechanisms that give agents persistent context
- Why markdown files (not databases) are the source of truth
- How bootstrap loading gives agents instant recall on startup

*Resources mentioned:*
- 📝 Full blog post (written version): https://www.damiangalarza.com/posts/2026-02-17-how-ai-agents-remember-things/
- ▶️ Part 2 — How Agents Search Memory: https://youtu.be/SpReZZk_13w
- OpenClaw repo: https://github.com/openclaw/openclaw
- Google "Context Engineering" whitepaper (sessions & memory): https://www.kaggle.com/whitepaper-context-engineering-sessions-and-memory

*Practical AI engineering patterns and agent architecture breakdowns in my newsletter:*
→ https://www.damiangalarza.com/newsletter?utm_source=youtube&utm_medium=video&utm_campaign=how-ai-agents-remember

*Want help building AI into your engineering workflow?*
→ Book a 1:1 coaching session: https://www.damiangalarza.com/coaching?utm_source=youtube&utm_medium=video&utm_campaign=how-ai-agents-remember

*Timestamps:*
0:00 - Intro
0:21 - Why AI agents forget
0:37 - Sessions and long-term memory
1:02 - Compaction strategies
2:04 - Long-term memory explained
2:23 - Google's memory framework (episodic, semantic, procedural)
3:02 - What makes memory effective
4:29 - OpenClaw's memory system
5:54 - The four mechanisms
6:17 - Bootstrap loading
6:43 - Pre-compaction flush
7:16 - Session snapshots
7:44 - User-initiated memory
8:03 - Conclusion

*About me:*
I'm Damian Galarza, a software engineering leader and former CTO with 15+ years building SaaS products. I make practical AI tutorials and share what I'm learning about tools like Claude Code.

*Connect:*
- Newsletter: https://www.damiangalarza.com/newsletter?utm_source=youtube&utm_medium=video&utm_campaign=how-ai-agents-remember
- LinkedIn: https://www.linkedin.com/in/dgalarza
- Blog: https://www.damiangalarza.com

#AIAgents #LLM #SoftwareEngineering #AgentMemory