Module 4: Memory Fundamentals · Lesson 3 of 8

Compaction & Summarization

As a conversation grows, the context window fills up. The system needs to compress older information to make room for what comes next.

How Compaction Works

```
[Start of conversation: 0 tokens]

User: "Let's work on the landing page"
Agent: "Sure, I'll create a Next.js project..."

[...50 messages later...]

[Context: 150K tokens, approaching limit]
[SYSTEM: Pre-compaction warning]

[Compaction happens]
  • Old messages are summarized
  • Summary is injected at the start
  • Original messages are removed
  • Context drops to ~20K tokens
  • Conversation continues with summary as context

[After compaction: 20K tokens]
```
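Mechanically, the trigger is just a token-budget check. Here's a minimal sketch in Python, assuming a simple list-of-messages context; every name in it (estimate_tokens, COMPACT_THRESHOLD, the summarize callable) is an illustrative assumption, not a real agent API:

```python
# Sketch: token-budget check and compaction trigger.
# All names here are illustrative assumptions, not a real agent API.

COMPACT_THRESHOLD = 150_000   # trigger near the model's context limit
KEEP_RECENT = 10              # recent messages that survive verbatim

def estimate_tokens(messages):
    # Crude heuristic: roughly 4 characters per token for English text.
    return sum(len(m["content"]) // 4 for m in messages)

def compact(messages, summarize):
    """Summarize older messages and rebuild a much smaller context.

    `summarize` is a callable that asks a model to condense a list of
    messages into a single summary string (assumed, not shown here).
    """
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = summarize(old)
    # The summary is injected at the start; the originals are dropped.
    return [{"role": "system", "content": f"Conversation summary: {summary}"},
            *recent]

def maybe_compact(messages, summarize):
    if estimate_tokens(messages) >= COMPACT_THRESHOLD:
        return compact(messages, summarize)
    return messages
```

The key move is that compact() returns a context in which the summary stands in for everything older than the last few messages.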

What Gets Lost in Compaction

  • Exact wording of old messages
  • Details not captured in summary
  • Nuance and context
  • Intermediate steps

What Survives Compaction

  • The summary (created by the model)
  • System instructions (always kept)
  • Recent messages (not yet compacted)
  • Files you load (re-loaded each time)
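Put together, the rebuilt context after compaction looks roughly like this. This is a hypothetical sketch; the point is the ordering, and that files are re-read from disk rather than carried through the summary:

```python
def rebuild_context(system_prompt, summary, recent_messages, file_paths):
    """Assemble what survives compaction, in order: system instructions,
    the model-written summary, recent messages, and freshly re-read files.
    All names are illustrative."""
    context = [{"role": "system", "content": system_prompt}]
    context.append({"role": "system",
                    "content": f"Summary of earlier conversation: {summary}"})
    context.extend(recent_messages)
    for path in file_paths:
        # Loaded files are re-read from disk each time, so their exact
        # contents survive even though old messages do not.
        with open(path) as f:
            context.append({"role": "system",
                            "content": f"[{path}]\n{f.read()}"})
    return context
```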

The Memory Flush Strategy

Because compaction loses information, you must be proactive:

Before compaction:

  1. Identify everything important in context
  2. Save decisions to daily log with [E] tag
  3. Save learnings with [L] tag
  4. Save corrections with [K] tag
  5. Update WORKING.md with current state
  6. Update any relevant detail files

This is non-negotiable. When you see the pre-compaction warning, drop everything and save.
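As a sketch, the flush can be a small script. The [E]/[L]/[K] tags and WORKING.md follow this course's conventions from earlier lessons; the paths and function name are made up for illustration:

```python
from datetime import date
from pathlib import Path

# Hypothetical paths, following this course's file layout.
LOG_DIR = Path("memory/daily")
WORKING = Path("WORKING.md")

def flush_before_compaction(decisions, learnings, corrections, state):
    """Write everything important to disk before compaction can lose it."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    log = LOG_DIR / f"{date.today().isoformat()}.md"
    with log.open("a") as f:
        for item in decisions:
            f.write(f"[E] {item}\n")   # decisions go in with the [E] tag
        for item in learnings:
            f.write(f"[L] {item}\n")   # learnings with the [L] tag
        for item in corrections:
            f.write(f"[K] {item}\n")   # corrections with the [K] tag
    # Overwrite WORKING.md with the current task state.
    WORKING.write_text(state)
```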

Summarization Patterns

Different types of information need different treatment:

  • Decision → "Decided X because Y"
  • Task → "Completed X, result was Y"
  • Learning → "Learned that X (correction from user)"
  • Fact → "X is Y" (simple statement)
  • Discussion → "Discussed X, key points: A, B, C"
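If you want to enforce these patterns mechanically, one template per type works. A hypothetical sketch; the types and phrasings mirror the list above:

```python
# Hypothetical summary templates, one per information type.
SUMMARY_TEMPLATES = {
    "decision":   "Decided {what} because {why}",
    "task":       "Completed {what}, result was {result}",
    "learning":   "Learned that {what} (correction from user)",
    "fact":       "{subject} is {value}",
    "discussion": "Discussed {topic}, key points: {points}",
}

def summarize_item(kind, **fields):
    return SUMMARY_TEMPLATES[kind].format(**fields)

# Example:
#   summarize_item("decision", what="'Coming Soon' framing",
#                  why="the price isn't final")
#   -> "Decided 'Coming Soon' framing because the price isn't final"
```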

Bad summary:

"We talked about the landing page and did some things"

Good summary:

"Built landing page for OpenClaw course: Next.js on Cloudflare Pages, integrated into tk100x-Website repo at /ai-academy/openclaw. Key decisions: 'Coming Soon' framing instead of fixed €99 price, solid navbar background (not transparent)."

The good summary is actionable. The bad summary is useless.

Automated Summarization

You can automate some of this:

Daily logs → Weekly summaries

  • Cron job runs weekly
  • Reads all daily logs from the week
  • Creates a weekly summary
  • Important facts persist, noise fades
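Here's roughly what that weekly job could look like, reusing the assumed daily-log layout from the flush sketch; the llm_summarize callable stands in for whatever model call you use:

```python
from datetime import date, timedelta
from pathlib import Path

LOG_DIR = Path("memory/daily")      # assumed layout from the flush sketch
WEEKLY_DIR = Path("memory/weekly")

def summarize_week(llm_summarize):
    """Condense the last seven daily logs into one weekly summary.

    `llm_summarize` is a callable that sends text to a model and
    returns a summary string (assumed, not shown).
    """
    today = date.today()
    texts = []
    for i in range(7):
        log = LOG_DIR / f"{(today - timedelta(days=i)).isoformat()}.md"
        if log.exists():
            texts.append(log.read_text())
    if not texts:
        return  # nothing logged this week
    summary = llm_summarize("\n\n".join(texts))
    WEEKLY_DIR.mkdir(parents=True, exist_ok=True)
    (WEEKLY_DIR / f"{today.isoformat()}-week.md").write_text(summary)
```

Any scheduler works; a plain cron entry like 0 6 * * 1 (every Monday at 06:00) is enough.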

Fact extraction → Entity files

  • Cron job runs every few hours
  • Scans recent conversations
  • Extracts durable facts about people/projects
  • Writes to relevant .items.json files

This is what I do. A Sonnet sub-agent runs every 2 hours to extract facts from my conversations. It writes them to structured files that my main agent can reference.
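For illustration, the extraction step could look like this. The transcript location, the prompt, and the llm_json callable are all assumptions, not the actual sub-agent setup:

```python
import json
from pathlib import Path

CONVO_DIR = Path("memory/conversations")   # assumed transcript location
ENTITY_DIR = Path("memory/entities")       # holds the .items.json files

def extract_facts(llm_json):
    """Scan recent transcripts and merge durable facts into entity files.

    `llm_json` is a callable that prompts a model and returns parsed JSON
    shaped like {"entity-name": ["fact", ...]} (an assumed contract).
    """
    recent = sorted(CONVO_DIR.glob("*.txt"))[-5:]   # last few transcripts
    transcript = "\n\n".join(p.read_text() for p in recent)
    facts = llm_json(
        "Extract durable facts about people and projects from this "
        "conversation. Return JSON mapping entity name to a list of facts.\n\n"
        + transcript
    )
    ENTITY_DIR.mkdir(parents=True, exist_ok=True)
    for entity, items in facts.items():
        path = ENTITY_DIR / f"{entity}.items.json"
        existing = json.loads(path.read_text()) if path.exists() else []
        merged = list(dict.fromkeys(existing + items))  # de-dupe, keep order
        path.write_text(json.dumps(merged, indent=2))
```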