🦞 Give OpenClaw superpowered memory with one command

OpenClaw Integration Guide

How Onelist solves common AI memory problems and enables efficient, hierarchical memory for OpenClaw.

Problems Onelist Solves

Current AI memory systems, including OpenClaw's file-based approach, suffer from several key issues:

| Problem | Impact | Onelist Solution |
| --- | --- | --- |
| Flat file storage | Poor retrieval, no relationships | Structured DB with tags, links, metadata |
| Token inefficiency | $11 for "Hi" (full context) | Hierarchical loading, summaries first |
| No proactive retrieval | LLMs forget to use tools | Pre-loading pipeline |
| Compaction failures | Bloated, unreliable | Structured compaction with archival |
| No decay mechanism | Noise overwhelms signal | Relevance scoring with decay |
| Security vulnerabilities | Unencrypted local files | E2EE (AES-256-GCM) |
| No versioning | Lost context, no audit | Full representation history |
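The decay mechanism in the table can be illustrated with exponential relevance decay. This is a minimal sketch, not Onelist's actual scoring function; the half-life and weighting are assumptions:

```python
import math
from datetime import datetime, timezone

def relevance_score(base_score: float, last_accessed: datetime,
                    half_life_days: float = 30.0) -> float:
    """Decay a memory's relevance with age: after one half-life the
    score is halved, so stale, untouched memories sink toward zero
    instead of drowning out recent signal."""
    age_days = (datetime.now(timezone.utc) - last_accessed).total_seconds() / 86400
    return base_score * 0.5 ** (age_days / half_life_days)
```

A memory touched today keeps its full score; one untouched for 30 days scores half, for 60 days a quarter, and so on.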

Installation

Run Onelist alongside OpenClaw with bidirectional file sync. Files remain the source of truth: if Onelist is unavailable, OpenClaw falls back to native file operations.

What You Get

Required Components

  • Onelist Core (Phoenix API)
  • PostgreSQL with pgvector
  • Bidirectional file sync layer

Optional Components

  • Onelist Web (LiveView UI)
  • River agent (GTD, proactive coaching)
  • Other agents (Reader, Librarian, etc.)

Option A: OpenClaw Skill (Recommended)

# Install the Onelist skill for OpenClaw
openclaw skill install onelist

Option B: Docker Compose

# Download and configure
curl -sSL https://onelist.my/install/docker-compose.yml -o docker-compose.yml
curl -sSL https://onelist.my/install/.env.example -o .env
nano .env  # Set POSTGRES_PASSWORD, SECRET_KEY_BASE, OPENCLAW_MEMORY_DIR
# Start everything
docker compose up -d
curl http://localhost:4000/health  # Verify

Option C: Native Installation

# Run the install script
curl -sSL https://onelist.my/install.sh | bash -s -- \
  --postgres-port 5433 \
  --onelist-port 4000 \
  --with-openclaw \
  --enable-web \
  --no-agents
# The script will:
# 1. Install PostgreSQL 16 with pgvector on port 5433
# 2. Install Erlang/Elixir
# 3. Download and build Onelist
# 4. Create systemd services
# 5. Configure OpenClaw integration

Configure Optional Components

# In your .env file:
# Required
DATABASE_URL=ecto://onelist:password@localhost:5433/onelist_prod
SECRET_KEY_BASE=your-secret-key-base
OPENCLAW_MEMORY_DIR=/home/user/openclaw/memory

# Optional components (all default to false for minimal install)
ENABLE_WEB=true           # Enable Onelist Web UI at localhost:4000
ENABLE_AGENTS=false       # Enable background agents
ENABLE_RIVER=false        # Enable River AI assistant

# If enabling agents
OPENAI_API_KEY=sk-...     # Or other LLM provider

Key guarantee: Files are ALWAYS the source of truth. If Onelist fails or is removed, OpenClaw works exactly as before using native file operations.
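The fallback guarantee can be illustrated with a read path that tries the Onelist API first and drops back to the synced file on any network failure. The endpoint path and response field here are assumptions for illustration, not the confirmed Onelist API:

```python
import json
import urllib.request
from pathlib import Path

def read_memory(name: str, memory_dir: str,
                api_base: str = "http://localhost:4000") -> str:
    """Prefer the Onelist API; fall back to the synced .md file if the
    service is unreachable. Files stay the source of truth either way."""
    try:
        # Hypothetical endpoint -- adjust to the actual Onelist API.
        url = f"{api_base}/api/v1/entries/{name}"
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp)["content"]
    except OSError:
        # Service down or unreachable: read the file directly.
        return (Path(memory_dir) / f"{name}.md").read_text()
```

Removing Onelist entirely just makes every read take the `except` branch, which is exactly what native OpenClaw does.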

Memory Hierarchy Pattern

Onelist implements a four-layer memory hierarchy, conceptualized as "bed/sheet/clothes/pillow":

Layer 1: FOUNDATIONAL ("The Bed")
  entry_type: 'core_memory'
  tags: ['memory:foundational']
  Examples: User name, timezone, critical rules
  Lifespan: Permanent
  Loading: ALWAYS in context (~200-500 tokens)

Layer 2: PROFILE ("The Sheet")
  entry_type: 'preference', 'behavioral_pattern'
  tags: ['memory:profile']
  Examples: Communication style, work patterns
  Lifespan: Evolves over time
  Loading: Pre-loaded by topic (~300-800 tokens)

Layer 3: EPISODIC ("The Clothes")
  entry_type: 'memory', 'conversation_summary'
  tags: ['memory:episodic', 'session:{id}']
  Examples: Recent conversations, open threads
  Lifespan: Days to weeks (then compacted)
  Loading: Recency + relevance search (~500-2000 tokens)

Layer 4: TASK-SPECIFIC ("The Pillow")
  entry_type: 'derived_insight', 'working_memory'
  tags: ['memory:working']
  Examples: Synthesized answers, research findings
  Lifespan: Minutes to hours
  Loading: Generated on-demand (~200-500 tokens)

TOTAL BUDGET: ~1,200-3,800 tokens
(vs. unbounded in naive approaches)
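The layered budget above can be enforced with a loading loop that fills the context in layer order and stops once the budget is spent, so foundational memories are never crowded out. A sketch using a crude 4-characters-per-token estimate (both the helper and the estimate are illustrative, not Onelist internals):

```python
def assemble_context(layers: dict[str, list[str]],
                     budget_tokens: int = 3800) -> list[str]:
    """Load memories layer by layer (foundational first) until the
    token budget is exhausted."""
    order = ["foundational", "profile", "episodic", "working"]
    context, used = [], 0
    for layer in order:
        for memory in layers.get(layer, []):
            cost = len(memory) // 4 + 1  # rough chars-to-tokens estimate
            if used + cost > budget_tokens:
                return context
            context.append(memory)
            used += cost
    return context
```

Because the loop walks layers in priority order, a tight budget degrades gracefully: episodic and working memories drop out first, the bed and sheet stay.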

Implementation with Onelist API

# Assumes `client` is an initialized Onelist API client
# Store a foundational memory (always loaded)
client.entries.create(
    entry_type="core_memory",
    title="User timezone",
    content="User is in Pacific Time (UTC-8)",
    tags=["memory:foundational", "permanent"],
    metadata={"immutable": True}
)

# Store a profile memory (topic-based loading)
client.entries.create(
    entry_type="preference",
    title="Prefers concise responses",
    content="User has indicated they prefer...",
    tags=["memory:profile", "communication"],
    metadata={
        "confidence": 0.85,
        "last_observed": "2026-01-28",
        "observation_count": 12
    }
)

# Store an episodic memory (recent context)
client.entries.create(
    entry_type="memory",
    title="Discussion about API design",
    content="We discussed REST vs GraphQL...",
    tags=["memory:episodic", "session:abc123", "topic:api"],
    metadata={
        "session_id": "abc123",
        "channel": "slack"
    }
)
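Pre-loading (the fix for "LLMs forget to use tools") runs in the other direction: before the model sees a message, foundational memories load unconditionally and profile memories load by topic tag. A minimal in-memory sketch of that filter, not the actual Onelist query API:

```python
def preload(entries: list[dict], topic: str) -> list[dict]:
    """Pre-loading pipeline sketch: always include foundational
    memories, then add profile memories tagged with the current topic."""
    always = [e for e in entries if "memory:foundational" in e["tags"]]
    topical = [e for e in entries
               if "memory:profile" in e["tags"]
               and f"topic:{topic}" in e["tags"]]
    return always + topical
```

The point is that the pipeline, not the LLM, decides what loads; the model never has to remember to call a retrieval tool.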

Token Optimization

The key to cost-effective AI memory is using multiple representations of the same content:

Summary Representation

2-3 sentences, key facts only

~50 tokens

Full Markdown

Complete content for deep retrieval

~500+ tokens

Structured JSON

Key-value pairs for programmatic access

~100 tokens

Embedding Vector

For semantic search (not text tokens)

0 context tokens

Cost Savings Example

For a simple "Hi" message with 100 memories:

  • Naive approach: 100 x 500 tokens = 50,000 tokens (~$0.50/request)
  • Summary-first: 100 x 50 tokens = 5,000 tokens (~$0.05/request)
  • 10x cost reduction
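The same arithmetic, parameterized (the $0.01-per-1K-input-tokens rate is inferred from the figures above and will vary by model):

```python
def context_cost(n_memories: int, tokens_each: int,
                 usd_per_1k_tokens: float = 0.01) -> float:
    """Dollar cost of loading n memories into context at a given size."""
    return n_memories * tokens_each * usd_per_1k_tokens / 1000

naive = context_cost(100, 500)         # full markdown for every memory
summary_first = context_cost(100, 50)  # summaries only; expand on demand
# naive -> 0.50, summary_first -> 0.05: a 10x reduction
```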

Memory Compaction

Without compaction, 100 daily interactions become 36,500 entries per year. Onelist provides structured compaction:

Daily Compaction

Memories 7+ days old are grouped by session/topic, summarized, and linked to originals.

Weekly Digests

Daily summaries are combined into weekly digests with pattern identification.

Monthly Insights

Enduring insights extracted to Profile layer. Behavioral patterns updated.

Result: 98.5% Reduction

  • Before: 36,500 entries (one year)
  • After: ~530 searchable entries
  • Original content preserved in archives, accessible when needed
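The daily pass can be sketched as: group week-old episodic memories by session, emit one summary entry per group, and link the summary back to its originals. The summarize step is stubbed here; a real implementation would call an LLM:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def compact_daily(memories: list[dict], now: datetime) -> list[dict]:
    """Group episodic memories older than 7 days by session and emit
    one summary entry per group, linked back to the originals."""
    cutoff = now - timedelta(days=7)
    groups = defaultdict(list)
    for m in memories:
        if m["created_at"] < cutoff:
            groups[m["session_id"]].append(m)
    return [{
        "entry_type": "conversation_summary",
        "content": " / ".join(m["title"] for m in group),  # stub for an LLM summary
        "links": [m["id"] for m in group],  # originals stay archived, not deleted
    } for group in groups.values()]
```

Because summaries carry links to the archived originals, full content remains one hop away when a query needs it.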

Migration from MEMORY.md

If you have existing memory files, the file sync layer automatically imports them when Onelist starts:

# The file watcher automatically syncs all .md files in your memory directory
# Just point OPENCLAW_MEMORY_DIR to your existing files

# Or manually trigger a full re-sync:
curl -X POST http://localhost:4000/api/v1/sync/rescan

# The importer will:
# 1. Parse all .md files in your memory directory
# 2. Extract facts, preferences, and episodes
# 3. Classify into memory hierarchy layers
# 4. Create vector embeddings for hybrid search
# 5. Files remain source of truth (bidirectional sync)
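Step 3's classification can be sketched as a keyword heuristic that maps parsed lines to hierarchy layers. The rules below are illustrative assumptions, not Onelist's real classifier, which would use richer signals:

```python
def classify_layer(text: str) -> str:
    """Crude layer classifier for imported MEMORY.md lines."""
    t = text.lower()
    # Identity facts and hard rules belong in the foundational layer.
    if any(k in t for k in ("timezone", "name is", "always", "never")):
        return "memory:foundational"
    # Habits and preferences belong in the profile layer.
    if any(k in t for k in ("prefers", "usually", "tends to")):
        return "memory:profile"
    # Everything else defaults to episodic and can be compacted later.
    return "memory:episodic"
```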

Next Steps