# CLAWS - Clawnch Long-term Agentic Working Storage

**Package:** `@clawnch/memory`  
**Version:** 1.0.0  
**License:** MIT

> **AI Agents:** For easier parsing and exact formatting, use the raw markdown version: [/memory.md](/memory.md)
>
> **Back to main docs:** [/docs](/docs)

CLAWS is a production-grade memory system for AI agents with Upstash Redis persistence, BM25 search, semantic embeddings, and automatic context building.

### Overview

CLAWS provides persistent, searchable memory for AI agents. Unlike ephemeral conversation history, memories are:

- **Persistent** — Stored in Redis, survives restarts
- **Searchable** — BM25 ranking with recency boosting
- **Semantic** — Optional embeddings for similarity search
- **Isolated** — Each agent has its own memory namespace
- **Structured** — Episodes, chunks, tags, and metadata

**Why it matters for agents:**

Agents need memory to:
- Remember user preferences across sessions
- Track decisions and their outcomes
- Build context from past interactions
- Maintain conversation threads
- Compress and summarize long-term knowledge

```
┌─────────────────────────────────────────────────────────────────────┐
│                       CLAWS ARCHITECTURE                             │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│   Your Agent                                                        │
│       │                                                             │
│       ▼                                                             │
│   ┌──────────────────┐                                              │
│   │   AgentMemory    │ ← High-level API                             │
│   │   (agent.ts)     │                                              │
│   └────────┬─────────┘                                              │
│            │                                                        │
│   ┌────────┴─────────────────────────────┐                          │
│   │                                      │                          │
│   ▼                                      ▼                          │
│ ┌─────────────────┐            ┌─────────────────┐                  │
│ │  QueryEngine    │            │  MemoryStorage  │                  │
│ │  (query.ts)     │            │  (storage.ts)   │                  │
│ │  ─────────────  │            │  ─────────────  │                  │
│ │  • BM25 search  │            │  • Redis ops    │                  │
│ │  • Similarity   │            │  • Key structure│                  │
│ │  • Recency      │            │  • Tag index    │                  │
│ └────────┬────────┘            └────────┬────────┘                  │
│          │                              │                           │
│          └──────────────┬───────────────┘                           │
│                         ▼                                           │
│              ┌─────────────────┐                                    │
│              │   Core Module   │                                    │
│              │   (core.ts)     │                                    │
│              │   ───────────   │                                    │
│              │   • Tokenize    │                                    │
│              │   • Chunk       │                                    │
│              │   • BM25 math   │                                    │
│              │   • Episode     │                                    │
│              └────────┬────────┘                                    │
│                       │                                             │
│          ┌────────────┼────────────┐                                │
│          │            │            │                                │
│          ▼            ▼            ▼                                │
│   ┌───────────┐ ┌───────────┐ ┌───────────┐                         │
│   │Embeddings │ │Importance │ │Threading  │                         │
│   │OpenAI/    │ │Scoring    │ │& Linking  │                         │
│   │Cohere     │ │           │ │           │                         │
│   └───────────┘ └───────────┘ └───────────┘                         │
│                       │                                             │
│                       ▼                                             │
│              ┌─────────────────┐                                    │
│              │  Summarization  │                                    │
│              │  (compression)  │                                    │
│              └─────────────────┘                                    │
│                       │                                             │
│                       ▼                                             │
│              ┌─────────────────┐                                    │
│              │  Upstash Redis  │                                    │
│              │  (persistence)  │                                    │
│              └─────────────────┘                                    │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Installation

```bash
npm install @clawnch/memory
```

### Quick Start

```typescript
import { createAgentMemory } from '@clawnch/memory';

const memory = createAgentMemory('my-agent', {
  redisUrl: process.env.KV_REST_API_URL!,
  redisToken: process.env.KV_REST_API_TOKEN!
});

// Store a memory
await memory.remember('The user prefers dark mode and TypeScript', {
  type: 'fact',
  tags: ['preferences', 'user']
});

// Recall memories
const result = await memory.recall('user preferences', {
  formatForLLM: true
});
console.log(result.context);
// ## Relevant Memories
// [fact] The user prefers dark mode and TypeScript
```

---

### TypeScript SDK

#### AgentMemoryConfig

Configuration options for creating an agent memory instance.

```typescript
interface AgentMemoryConfig {
  /** Redis connection URL (Upstash) */
  redisUrl: string;
  /** Redis authentication token */
  redisToken: string;
  /** Key prefix for Redis keys (default: 'mem') */
  keyPrefix?: string;
  /** Default TTL for episodes in seconds (0 = no expiry) */
  defaultTTL?: number;
  /** Maximum episodes to keep per agent (0 = unlimited) */
  maxEpisodes?: number;
  /** Auto-prune when exceeding maxEpisodes */
  autoPrune?: boolean;
}
```

#### createAgentMemory(agentId, config)

Factory function to create an agent memory instance.

```typescript
function createAgentMemory(agentId: string, config: AgentMemoryConfig): AgentMemory
```

**Parameters:**
- `agentId` (string) - Unique identifier for the agent (used for namespace isolation)
- `config` (AgentMemoryConfig) - Configuration options

**Example:**
```typescript
const memory = createAgentMemory('clawnch-bot', {
  redisUrl: process.env.KV_REST_API_URL!,
  redisToken: process.env.KV_REST_API_TOKEN!,
  maxEpisodes: 1000,
  autoPrune: true
});
```

#### createAgentMemoryFromEnv(agentId)

Create agent memory using environment variables (`KV_REST_API_URL`, `KV_REST_API_TOKEN`).

```typescript
function createAgentMemoryFromEnv(agentId: string): AgentMemory
```

**Example:**
```typescript
// Uses KV_REST_API_URL and KV_REST_API_TOKEN from environment
const memory = createAgentMemoryFromEnv('my-agent');
```

---

#### AgentMemory Class

The main interface for agent memory operations.

##### remember(text, options?)

Store text in memory. Automatically chunks long text.

```typescript
async remember(text: string, options?: RememberOptions): Promise<Episode>
```

**RememberOptions:**
```typescript
interface RememberOptions {
  /** Episode type */
  type?: 'conversation' | 'document' | 'fact' | 'event' | 'custom';
  /** Tags for filtering */
  tags?: string[];
  /** Custom metadata */
  metadata?: Record<string, unknown>;
  /** Chunking options */
  chunking?: {
    maxTokens?: number;        // Default: 200
    overlap?: number;          // Default: 50
    splitOn?: 'sentence' | 'paragraph' | 'fixed';
  };
}
```

**Example:**
```typescript
// Store a simple fact
await memory.remember('User prefers dark mode', {
  type: 'fact',
  tags: ['preferences']
});

// Store a long document with custom chunking
await memory.remember(longDocument, {
  type: 'document',
  tags: ['technical', 'reference'],
  chunking: { splitOn: 'paragraph', maxTokens: 500 }
});
```
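The chunker itself lives in `core.ts` and its exact algorithm is internal. As a rough illustration of what sentence-based splitting with token overlap looks like (the shape of the defaults above), here is a sketch — `chunkBySentence` is a hypothetical helper, not the library's implementation:

```typescript
// Illustrative only: approximates sentence chunking with whitespace tokens.
// The library's real chunker may count tokens and split sentences differently.
function chunkBySentence(text: string, maxTokens = 200, overlap = 50): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+\s*|[^.!?]+$/g) ?? [text];
  const chunks: string[] = [];
  let current: string[] = [];
  let tokens = 0;
  for (const sentence of sentences) {
    const count = sentence.trim().split(/\s+/).length;
    if (tokens + count > maxTokens && current.length > 0) {
      chunks.push(current.join(' ').trim());
      // Carry roughly `overlap` trailing tokens into the next chunk for context
      const tail = current.join(' ').split(/\s+/).slice(-overlap);
      current = [tail.join(' ')];
      tokens = tail.length;
    }
    current.push(sentence.trim());
    tokens += count;
  }
  if (current.length > 0) chunks.push(current.join(' ').trim());
  return chunks;
}
```

The overlap keeps neighboring chunks sharing some context, so a query matching a sentence near a boundary still retrieves a coherent passage.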

##### rememberFact(text, tags?, metadata?)

Store a single fact without chunking.

```typescript
async rememberFact(
  text: string,
  tags?: string[],
  metadata?: Record<string, unknown>
): Promise<Episode>
```

**Example:**
```typescript
await memory.rememberFact(
  'API key expires on 2026-03-01',
  ['credentials', 'important'],
  { source: 'settings' }
);
```

##### rememberConversation(messages, tags?, metadata?)

Store a conversation as a single episode.

```typescript
async rememberConversation(
  messages: Array<{ role: string; content: string }>,
  tags?: string[],
  metadata?: Record<string, unknown>
): Promise<Episode>
```

**Example:**
```typescript
await memory.rememberConversation([
  { role: 'user', content: 'How do I deploy to Vercel?' },
  { role: 'assistant', content: 'Run `vercel` in your project directory.' }
], ['support', 'deployment']);
```

##### recall(query, options?)

Search memories by text query. Uses BM25 ranking with optional recency boosting.

```typescript
async recall(query: string, options?: RecallOptions): Promise<RecallResult>
```

**RecallOptions:**
```typescript
interface RecallOptions {
  /** Maximum results to return (default: 10) */
  limit?: number;
  /** Filter by episode types */
  types?: EpisodeType[];
  /** Filter by tags (AND logic) */
  tags?: string[];
  /** Filter by time range */
  after?: number;
  before?: number;
  /** Recency weight 0-1 (default: 0.2) */
  recencyWeight?: number;
  /** Minimum relevance score */
  minScore?: number;
  /** Format output for LLM context */
  formatForLLM?: boolean;
  /** Max tokens for LLM context (default: 2000) */
  maxContextTokens?: number;
}
```

**RecallResult:**
```typescript
interface RecallResult {
  results: SearchResult[];
  context?: string;      // Formatted for LLM if requested
  totalMatches: number;
}

interface SearchResult {
  chunk: Chunk;
  episode: Episode;
  score: number;
  matchedTerms: string[];
  highlights: string[];
}
```

**Example:**
```typescript
// Basic search
const { results } = await memory.recall('deployment settings');
results.forEach(r => {
  console.log(`[${r.score.toFixed(2)}] ${r.highlights[0]}`);
});

// Search with LLM formatting
const { context } = await memory.recall('user preferences', {
  formatForLLM: true,
  maxContextTokens: 1500,
  tags: ['preferences']
});
// Use `context` directly in your LLM prompt
```
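The ranking itself happens inside the QueryEngine. As a sketch of the general shape — standard BM25 relevance blended with a recency boost, using conventional parameter names (`k1`, `b`) and hypothetical helpers, not library internals:

```typescript
// Minimal BM25 term scoring plus a recency blend; illustrative constants.
interface Doc { tokens: string[]; createdAt: number }

function bm25Score(query: string[], doc: Doc, docs: Doc[], k1 = 1.2, b = 0.75): number {
  const N = docs.length;
  const avgLen = docs.reduce((s, d) => s + d.tokens.length, 0) / N;
  let score = 0;
  for (const term of query) {
    const df = docs.filter(d => d.tokens.includes(term)).length;
    if (df === 0) continue;
    const idf = Math.log(1 + (N - df + 0.5) / (df + 0.5));
    const tf = doc.tokens.filter(t => t === term).length;
    score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc.tokens.length / avgLen));
  }
  return score;
}

// Blend relevance with recency the way `recencyWeight` suggests:
// final = (1 - w) * bm25 + w * recency, where recency decays from 1 (newest).
function blendedScore(bm25: number, createdAt: number, now: number, halfLifeMs: number, w = 0.2): number {
  const recency = Math.pow(0.5, (now - createdAt) / halfLifeMs);
  return (1 - w) * bm25 + w * recency;
}
```

With `recencyWeight: 0` ranking is purely lexical; raising it lets fresher memories outrank slightly more relevant but stale ones.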

##### getRecent(limit?, options?)

Get the most recent memories.

```typescript
async getRecent(
  limit?: number,
  options?: { types?: EpisodeType[]; tags?: string[] }
): Promise<Episode[]>
```

**Example:**
```typescript
const recent = await memory.getRecent(5, { tags: ['important'] });
recent.forEach(ep => {
  console.log(`[${ep.type}] ${ep.chunks[0].text.slice(0, 50)}...`);
});
```

##### findSimilar(text, options?)

Find memories similar to a given text using cosine similarity.

```typescript
async findSimilar(text: string, options?: SearchOptions): Promise<SearchResult[]>
```

**Example:**
```typescript
const similar = await memory.findSimilar(
  'How do I configure the database connection?',
  { limit: 3, minScore: 0.3 }
);
```

##### getByTag(tag, limit?)

Get all memories with a specific tag.

```typescript
async getByTag(tag: string, limit?: number): Promise<Episode[]>
```

**Example:**
```typescript
const preferences = await memory.getByTag('preferences', 20);
```

##### getEpisode(episodeId)

Get a specific episode by ID.

```typescript
async getEpisode(episodeId: string): Promise<Episode | null>
```

##### forget(episodeId)

Delete a specific episode.

```typescript
async forget(episodeId: string): Promise<boolean>
```

**Example:**
```typescript
const deleted = await memory.forget('ep_my-agent_1706789123_abc123');
console.log(deleted ? 'Deleted' : 'Not found');
```

##### addTags(episodeId, tags) / removeTags(episodeId, tags)

Manage tags on an episode.

```typescript
async addTags(episodeId: string, tags: string[]): Promise<void>
async removeTags(episodeId: string, tags: string[]): Promise<void>
```

##### getStats()

Get memory statistics for the agent.

```typescript
async getStats(): Promise<MemoryStats>
```

**MemoryStats:**
```typescript
interface MemoryStats {
  totalEpisodes: number;
  totalChunks: number;
  uniqueWords: number;
  oldestMemory: number;
  newestMemory: number;
  byType: Record<EpisodeType, number>;
}
```

**Example:**
```typescript
const stats = await memory.getStats();
console.log(`Total: ${stats.totalEpisodes} episodes, ${stats.totalChunks} chunks`);
console.log(`Facts: ${stats.byType.fact}, Conversations: ${stats.byType.conversation}`);
```

##### listTags()

Get all tags used by this agent.

```typescript
async listTags(): Promise<string[]>
```

##### extractTopics(topN?)

Extract key topics from all memories using IDF weighting.

```typescript
async extractTopics(topN?: number): Promise<string[]>
```

**Example:**
```typescript
const topics = await memory.extractTopics(10);
console.log('Top topics:', topics.join(', '));
```

##### buildContext(query, options?)

Build an LLM-ready context string from recent and relevant memories.

```typescript
async buildContext(
  query: string,
  options?: {
    maxTokens?: number;
    includeRecent?: number;
    includeSimilar?: boolean;
    tags?: string[];
  }
): Promise<string>
```

**Example:**
```typescript
const context = await memory.buildContext('user preferences for dark mode', {
  maxTokens: 3000,
  includeRecent: 3,
  tags: ['preferences']
});

// Use in LLM prompt
const prompt = `Given the following context:\n${context}\n\nAnswer: ...`;
```
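To stay within `maxTokens`, the builder has to estimate token counts and cut off low-priority snippets. A naive sketch of that budgeting (the ~4-chars-per-token estimate and `packContext` helper are illustrative assumptions, not the library's actual code):

```typescript
// Rough token estimate: ~4 characters per token for English text.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Greedily keep snippets (already sorted by priority) until the budget runs out.
function packContext(snippets: string[], maxTokens: number): string {
  const kept: string[] = [];
  let used = 0;
  for (const s of snippets) {
    const cost = estimateTokens(s);
    if (used + cost > maxTokens) break;
    kept.push(s);
    used += cost;
  }
  return kept.join('\n\n---\n\n');
}
```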

##### Maintenance Methods

```typescript
// Prune old episodes, keeping only the N most recent
async prune(keepCount: number): Promise<number>

// Prune episodes older than a date
async pruneOlderThan(date: Date): Promise<number>

// Clear all memories for this agent
async clear(): Promise<void>
```

---

### Embeddings

The memory system supports semantic search using vector embeddings from OpenAI or Cohere.

#### Configuration

```typescript
import {
  createOpenAIEmbeddings,
  createCohereEmbeddings,
  createEmbeddingsFromEnv
} from '@clawnch/memory';

// OpenAI embeddings
const openai = createOpenAIEmbeddings(process.env.OPENAI_API_KEY!, {
  model: 'text-embedding-3-small',  // or 'text-embedding-3-large'
  dimensions: 1536                   // Can reduce for v3 models
});

// Cohere embeddings
const cohere = createCohereEmbeddings(process.env.COHERE_API_KEY!, {
  model: 'embed-english-v3.0',
  inputType: 'search_document'
});

// Auto-detect from environment
const provider = createEmbeddingsFromEnv({
  preferredProvider: 'openai'
});
```

#### EmbeddingProvider Interface

```typescript
interface EmbeddingProvider {
  /** Generate embedding for a single text */
  embed(text: string): Promise<number[]>;
  
  /** Generate embeddings for multiple texts (more efficient) */
  embedBatch(texts: string[]): Promise<number[][]>;
  
  /** Provider name */
  readonly name: string;
  
  /** Model being used */
  readonly model: string;
  
  /** Embedding dimensions */
  readonly dimensions: number;
}
```
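Anything satisfying this interface can serve as a provider. A deterministic toy implementation is handy for tests and local development — the vectors below carry no real semantic meaning, and `hashEmbed` is purely illustrative (the interface is repeated so the snippet stands alone):

```typescript
interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
  embedBatch(texts: string[]): Promise<number[][]>;
  readonly name: string;
  readonly model: string;
  readonly dimensions: number;
}

// Hash each token into a bucket, then unit-normalize the vector.
function hashEmbed(text: string, dims: number): number[] {
  const vec: number[] = new Array(dims).fill(0);
  for (const token of text.toLowerCase().split(/\s+/).filter(Boolean)) {
    let h = 0;
    for (const ch of token) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    vec[h % dims] += 1;
  }
  const norm = Math.hypot(...vec) || 1;
  return vec.map(v => v / norm); // unit length, like most real providers
}

const toyProvider: EmbeddingProvider = {
  name: 'toy-hash',
  model: 'toy-hash-v0',
  dimensions: 8,
  embed: async (text) => hashEmbed(text, 8),
  embedBatch: async (texts) => texts.map(t => hashEmbed(t, 8)),
};
```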

#### OpenAI Models

| Model | Dimensions | Cost | Notes |
|-------|-----------|------|-------|
| `text-embedding-3-small` | 1536 | $0.02/1M tokens | Recommended |
| `text-embedding-3-large` | 3072 | $0.13/1M tokens | Higher quality |
| `text-embedding-ada-002` | 1536 | $0.10/1M tokens | Legacy |

#### Cohere Models

| Model | Dimensions | Notes |
|-------|-----------|-------|
| `embed-english-v3.0` | 1024 | Best for English |
| `embed-multilingual-v3.0` | 1024 | 100+ languages |
| `embed-english-light-v3.0` | 384 | Faster, smaller |
| `embed-multilingual-light-v3.0` | 384 | Fast multilingual |

#### Vector Operations

```typescript
import {
  cosineSimilarity,
  euclideanDistance,
  dotProduct,
  normalizeVector,
  findSimilar,
  findSimilarWithThreshold
} from '@clawnch/memory';

// Compute similarity between vectors
const sim = cosineSimilarity(vectorA, vectorB);  // -1 to 1

// Find top K similar vectors
const results = findSimilar(queryVector, documentVectors, 5);
// [{ index: 3, score: 0.92 }, { index: 7, score: 0.85 }, ...]

// Find all vectors above threshold
const matches = findSimilarWithThreshold(queryVector, documentVectors, 0.8);
```
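For reference, the similarity behind these helpers is the standard cosine definition, `dot(a, b) / (|a| · |b|)`. A from-scratch sketch of the math (not the exported implementation):

```typescript
// Cosine similarity: 1 = same direction, 0 = orthogonal, -1 = opposite.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard zero vectors
}
```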

#### Custom Embeddings

```typescript
import { createCustomEmbeddings } from '@clawnch/memory';

const custom = createCustomEmbeddings({
  name: 'local-model',
  model: 'my-model',
  dimensions: 768,
  embedFn: async (texts: string[]) => {
    // Your embedding logic here
    return texts.map(t => generateEmbedding(t));
  }
});
```

---

### Importance Scoring

The importance module scores memories by salience so that retrieval and compression can prioritize what matters.

#### scoreImportance(text, metadata?)

Score the importance of text content.

```typescript
import { scoreImportance } from '@clawnch/memory';

const score = scoreImportance('Remember to always use TypeScript');
console.log(score);
// {
//   level: 'high',
//   score: 0.75,
//   reasons: ['Contains explicit importance markers (1)', 'Contains instructions (1)'],
//   keywords: ['typescript']
// }
```

**ImportanceScore:**
```typescript
interface ImportanceScore {
  level: 'critical' | 'high' | 'normal' | 'low' | 'trivial';
  score: number;      // 0-1
  reasons: string[];  // Why this score
  keywords: string[]; // Salient terms extracted
}
```

**Importance Levels:**

| Level | Score Range | Examples |
|-------|------------|----------|
| `critical` | 0.9 - 1.0 | Explicit importance markers, personal info, preferences |
| `high` | 0.7 - 0.9 | Decisions, instructions, temporal references |
| `normal` | 0.4 - 0.7 | Factual content, emotional content |
| `low` | 0.2 - 0.4 | Uncertain statements, hedging |
| `trivial` | 0.0 - 0.2 | Greetings, filler words |
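The score ranges in the table map to levels with a simple threshold function. How the library buckets scores internally is not specified; this sketch just restates the documented ranges:

```typescript
type ImportanceLevel = 'critical' | 'high' | 'normal' | 'low' | 'trivial';

// Thresholds mirror the score ranges in the table above.
function levelForScore(score: number): ImportanceLevel {
  if (score >= 0.9) return 'critical';
  if (score >= 0.7) return 'high';
  if (score >= 0.4) return 'normal';
  if (score >= 0.2) return 'low';
  return 'trivial';
}
```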

#### Detection Functions

```typescript
import {
  detectKeywords,
  hasActionableContent,
  hasEmotionalContent,
  hasFactualContent,
  shouldRetain,
  boostImportance
} from '@clawnch/memory';

// Extract salient keywords
const keywords = detectKeywords('Contact john@example.com about the deadline');
// ['deadline', 'john@example.com']

// Check content types
hasActionableContent('Please create a new file');  // true
hasEmotionalContent('I love this feature!');       // true
hasFactualContent('The price is $50');             // true

// Check if memory should be retained
shouldRetain(score, 0.3);  // true if score.score >= 0.3

// Boost based on access patterns
const boosted = boostImportance(score, {
  accessCount: 10,
  daysSinceLastAccess: 2
});
```

---

### Threading

Threading provides conversation tracking and memory linking for agents.

#### Thread Management

```typescript
import {
  createThread,
  addToThread,
  removeFromThread,
  getThreadContext,
  generateThreadTitle,
  mergeThreads
} from '@clawnch/memory';

// Create a new thread
const thread = createThread('my-agent', 'User Onboarding');
// { id: 'thread_my-agent_1706789123_0', agentId: 'my-agent', title: 'User Onboarding', ... }

// Add episodes to thread
addToThread(thread, 'ep_my-agent_1706789123_abc');
addToThread(thread, 'ep_my-agent_1706789456_def');

// Get context (episodes in chronological order)
const episodes = getThreadContext(thread, allEpisodes, 5);

// Auto-generate title from content
const title = generateThreadTitle(episodes);
// 'Configuration, deployment, settings'

// Merge two threads
const merged = mergeThreads(thread1, thread2);
```

#### Memory Linking

```typescript
import {
  createLink,
  findRelatedMemories,
  getLinksFor,
  findStrongestLink
} from '@clawnch/memory';

// Create links between memories
const link = createLink(
  'ep_agent_123',
  'ep_agent_456',
  'references',  // 'follows' | 'references' | 'contradicts' | 'supports' | 'related'
  0.8            // strength 0-1
);

// Find all related memories (BFS traversal)
const related = findRelatedMemories('ep_agent_123', allLinks, {
  minStrength: 0.5,
  linkTypes: ['references', 'supports'],
  maxDepth: 2
});

// Get links for a specific memory
const links = getLinksFor('ep_agent_123', allLinks, 'both');
```

#### Contradiction Detection

```typescript
import { detectContradictions } from '@clawnch/memory';

const contradictions = detectContradictions(
  'The user prefers light mode',
  existingChunks
);
// [{ chunkId: 'chunk_xyz', reason: 'Potential contradiction about: mode, prefers' }]
```

#### Context Building

```typescript
import { buildThreadContext, buildLinkedContext } from '@clawnch/memory';

// Build LLM context from a thread
const threadContext = buildThreadContext(thread, episodes, 4000);

// Build context from linked memories
const linkedContext = buildLinkedContext(
  sourceEpisode,
  relatedEpisodes,
  links,
  2000
);
```

---

### Summarization

Memory compression through summarization helps manage long-term memory growth.

#### Configuration

```typescript
import { DEFAULT_COMPRESSION_CONFIG } from '@clawnch/memory';

// Start from the defaults and override what you need
const config = {
  ...DEFAULT_COMPRESSION_CONFIG,
  maxEpisodes: 100,                                    // Trigger compression at 100
  recentToKeep: 10,                                    // Always keep the 10 most recent
  protectedTags: new Set(['important', 'critical']),   // Never compress these
  importanceThreshold: 0.7                             // Keep high-importance intact
};
```

#### Local Compression (No LLM)

```typescript
import { compressMemoriesLocal, estimateImportance } from '@clawnch/memory';

// Build importance map
const importance = new Map<string, number>();
for (const ep of episodes) {
  importance.set(ep.id, estimateImportance(ep, {
    accessCount: accessCounts.get(ep.id),
    lastAccessed: lastAccess.get(ep.id)
  }));
}

// Compress (heuristic extraction)
const result = compressMemoriesLocal('my-agent', episodes, importance, config);
if (result) {
  console.log(`Compressed ${result.episodesRemoved} episodes`);
  console.log(`Reduced ${result.tokensReduced} tokens`);
  console.log(`Summary: ${result.summary.text}`);
  console.log(`Key facts: ${result.summary.keyFacts.join(', ')}`);
}
```

#### LLM-Based Compression

```typescript
import { compressMemories, createSummaryPrompt, finalizeCompression } from '@clawnch/memory';

// Step 1: Get compression candidates and prompt
const { toCompress, retained, prompt } = compressMemories(
  'my-agent',
  episodes,
  importance,
  config
);

if (toCompress.length > 0) {
  // Step 2: Call your LLM with the prompt
  const llmResponse = await callLLM(prompt);
  
  // Step 3: Finalize compression
  const result = finalizeCompression('my-agent', toCompress, retained, llmResponse);
  
  // Store summary, delete compressed episodes
  await storeSummary(result.summary);
  for (const ep of toCompress) {
    await memory.forget(ep.id);
  }
}
```

#### Summary Type

```typescript
interface Summary {
  id: string;
  agentId: string;
  sourceEpisodeIds: string[];  // Episodes that were summarized
  text: string;                 // The summary text
  keyFacts: string[];           // Extracted key facts
  keyEntities: string[];        // People, places, things
  timeRange: { start: number; end: number };
  createdAt: number;
}
```

#### Extraction Functions

```typescript
import { extractKeyFacts, extractEntities } from '@clawnch/memory';

const facts = extractKeyFacts(text);
// ['User prefers TypeScript.', 'API key expires on 2026-03-01.']

const entities = extractEntities(text);
// ['John Smith', 'Vercel', 'Base Network']
```

---

### MCP Server

**Package:** `@clawnch/memory-server`  
**Protocol:** [Model Context Protocol](https://modelcontextprotocol.io)

MCP server providing memory tools for AI agents.

```
┌────────────────────────────────────────────────────────────────┐
│                    MCP MEMORY SERVER                            │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│   AI Client (Claude Desktop, OpenClaw, etc)                    │
│         │                                                      │
│         ▼                                                      │
│   ┌─────────────────┐                                          │
│   │ MCP Protocol    │                                          │
│   │ (stdio)         │                                          │
│   └────────┬────────┘                                          │
│            │                                                   │
│            ▼                                                   │
│   ┌─────────────────────────────────────────┐                  │
│   │ clawnch-memory-server                   │                  │
│   │ ───────────────────────────             │                  │
│   │ Tools:                                  │                  │
│   │  • memory_remember   - Store memories   │                  │
│   │  • memory_recall     - Search memories  │                  │
│   │  • memory_recent     - Get recent       │                  │
│   │  • memory_forget     - Delete memory    │                  │
│   │  • memory_tag        - Manage tags      │                  │
│   │  • memory_stats      - Get statistics   │                  │
│   │  • memory_context    - Build LLM context│                  │
│   └────────┬────────────────────────────────┘                  │
│            │                                                   │
│            ▼                                                   │
│   ┌─────────────────┐                                          │
│   │  Upstash Redis  │                                          │
│   └─────────────────┘                                          │
│                                                                │
└────────────────────────────────────────────────────────────────┘
```

#### Installation

```bash
npm install -g @clawnch/memory-server
```

#### Configuration

Add to your MCP settings file (e.g., `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "memory": {
      "command": "clawnch-memory",
      "env": {
        "KV_REST_API_URL": "your_upstash_redis_url",
        "KV_REST_API_TOKEN": "your_upstash_redis_token"
      }
    }
  }
}
```

#### Tools

##### memory_remember

Store text in memory.

**Input Schema:**
```typescript
{
  agent_id: string;   // Agent identifier
  text: string;       // Text to remember
  type?: 'conversation' | 'document' | 'fact' | 'event';
  tags?: string[];    // Tags for categorizing
}
```

**Output:**
```json
{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc123",
  "chunks": 3,
  "tags": ["preferences"]
}
```

**Example:**
```typescript
{
  "name": "memory_remember",
  "arguments": {
    "agent_id": "clawnch-bot",
    "text": "User prefers dark mode and TypeScript for all projects",
    "type": "fact",
    "tags": ["preferences", "user"]
  }
}
```

---

##### memory_recall

Search memories by query.

**Input Schema:**
```typescript
{
  agent_id: string;   // Agent identifier
  query: string;      // Search query
  limit?: number;     // Max results (default: 5)
  tags?: string[];    // Filter by tags
  type?: string;      // Filter by type
}
```

**Output:**
```json
{
  "success": true,
  "count": 2,
  "results": [
    {
      "episode_id": "ep_my-agent_1706789123_abc",
      "type": "fact",
      "score": "0.850",
      "tags": ["preferences"],
      "snippet": "User prefers dark mode and TypeScript...",
      "created": "2026-02-01T12:00:00.000Z"
    }
  ]
}
```

---

##### memory_recent

Get the most recent memories.

**Input Schema:**
```typescript
{
  agent_id: string;   // Agent identifier
  limit?: number;     // Number to return (default: 5)
}
```

**Output:**
```json
{
  "success": true,
  "count": 3,
  "episodes": [
    {
      "episode_id": "ep_my-agent_1706789456_def",
      "type": "conversation",
      "tags": ["support"],
      "preview": "How do I configure the database...",
      "created": "2026-02-01T14:30:00.000Z"
    }
  ]
}
```

---

##### memory_forget

Delete a specific memory.

**Input Schema:**
```typescript
{
  agent_id: string;     // Agent identifier
  episode_id: string;   // Episode ID to delete
}
```

**Output:**
```json
{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc"
}
```

---

##### memory_tag

Add tags to a memory episode.

**Input Schema:**
```typescript
{
  agent_id: string;     // Agent identifier
  episode_id: string;   // Episode ID to tag
  tags: string[];       // Tags to add
}
```

**Output:**
```json
{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc",
  "tags_added": ["important", "reference"]
}
```

---

##### memory_stats

Get memory statistics for an agent.

**Input Schema:**
```typescript
{
  agent_id: string;   // Agent identifier
}
```

**Output:**
```json
{
  "success": true,
  "agent_id": "clawnch-bot",
  "episodes": 47,
  "tags": ["preferences", "support", "technical", "important"]
}
```

---

##### memory_context

Build an LLM-ready context string from relevant memories.

**Input Schema:**
```typescript
{
  agent_id: string;     // Agent identifier
  query: string;        // Query to find relevant memories
  max_tokens?: number;  // Maximum tokens (default: 2000)
}
```

**Output:**
```
[fact] User prefers dark mode and TypeScript for all projects.

---

[conversation] How do I configure the database connection?
Use the DATABASE_URL environment variable...
```

---

### HTTP API

**Base URL:** `https://clawn.ch/api/memory`

Unified POST endpoint for all memory operations.

#### Request Format

```typescript
POST /api/memory
Content-Type: application/json

{
  "action": "remember" | "recall" | "recent" | "forget" | "tag" | "stats" | "context",
  "agent_id": string,
  // ...action-specific fields
}
```
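From TypeScript, the endpoint can be wrapped in a small helper. The URL and field names are exactly those documented here; the helper names and error handling are illustrative:

```typescript
type MemoryAction = 'remember' | 'recall' | 'recent' | 'forget' | 'tag' | 'stats' | 'context';

// Build the fetch options for a POST to /api/memory.
function buildMemoryRequest(action: MemoryAction, agentId: string, fields: Record<string, unknown> = {}) {
  return {
    method: 'POST' as const,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action, agent_id: agentId, ...fields }),
  };
}

// Send the request and return the parsed JSON response.
async function callMemory(action: MemoryAction, agentId: string, fields: Record<string, unknown> = {}) {
  const res = await fetch('https://clawn.ch/api/memory', buildMemoryRequest(action, agentId, fields));
  if (!res.ok) throw new Error(`memory API error: ${res.status}`);
  return res.json();
}
```

Usage: `await callMemory('recall', 'my-agent', { query: 'user preferences', limit: 5 })`.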

#### Actions

##### remember

Store text in memory.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "remember",
    "agent_id": "my-agent",
    "text": "User prefers dark mode",
    "type": "fact",
    "tags": ["preferences"]
  }'
```

**Response:**
```json
{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc123",
  "chunks": 1,
  "tags": ["preferences"]
}
```

---

##### recall

Search memories.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "recall",
    "agent_id": "my-agent",
    "query": "user preferences",
    "limit": 5
  }'
```

**Response:**
```json
{
  "success": true,
  "count": 2,
  "results": [
    {
      "episode_id": "ep_my-agent_1706789123_abc",
      "type": "fact",
      "score": 0.85,
      "tags": ["preferences"],
      "snippet": "User prefers dark mode...",
      "created": "2026-02-01T12:00:00.000Z"
    }
  ]
}
```

---

##### recent

Get recent memories.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "recent",
    "agent_id": "my-agent",
    "limit": 5
  }'
```

---

##### forget

Delete a memory.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "forget",
    "agent_id": "my-agent",
    "episode_id": "ep_my-agent_1706789123_abc"
  }'
```

---

##### tag

Add tags to a memory.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "tag",
    "agent_id": "my-agent",
    "episode_id": "ep_my-agent_1706789123_abc",
    "tags": ["important", "reference"]
  }'
```

---

##### stats

Get memory statistics.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "stats",
    "agent_id": "my-agent"
  }'
```

**Response:**
```json
{
  "success": true,
  "agent_id": "my-agent",
  "episodes": 47,
  "tags": ["preferences", "support", "technical"]
}
```

---

##### context

Build a ready-to-use context string for an LLM prompt, trimmed to a token budget.

```bash
curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "context",
    "agent_id": "my-agent",
    "query": "user preferences",
    "max_tokens": 2000
  }'
```

**Response:**
```json
{
  "success": true,
  "context": "[fact] User prefers dark mode...\n\n---\n\n[conversation] ..."
}
```
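The returned `context` string is meant to be spliced into a prompt before calling your model. A small sketch, assuming the response shape shown above; the prompt wording and function name are illustrative:

```typescript
// Shape of the context action's response, per the example above.
interface ContextResponse {
  success: boolean;
  context: string;
}

// Prepend retrieved memories to a base system prompt.
// Falls back to the base prompt when there is nothing to add.
function buildSystemPrompt(base: string, memory: ContextResponse): string {
  if (!memory.success || memory.context.length === 0) return base;
  return `${base}\n\nRelevant memories:\n${memory.context}`;
}
```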

---

### Redis Key Structure

The memory system uses a consistent key structure for Redis storage, enabling agent isolation and efficient queries.

```
┌──────────────────────────────────────────────────────────────────┐
│                    REDIS KEY STRUCTURE                            │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  mem:{agentId}:episodes         → Set of all episode IDs         │
│  mem:{agentId}:ep:{episodeId}   → Episode JSON data              │
│  mem:{agentId}:words            → Hash: word → WordStats JSON    │
│  mem:{agentId}:tags             → Set of all tag names           │
│  mem:{agentId}:tag:{tagName}    → Set of episode IDs with tag    │
│  mem:{agentId}:meta             → Agent metadata JSON            │
│  mem:{agentId}:recent           → Sorted set (score=timestamp)   │
│                                                                  │
│  Example for agent "clawnch-bot":                                │
│  ──────────────────────────────────────────────────────          │
│  mem:clawnch-bot:episodes       → {"ep_clawnch-bot_123_abc", ...}│
│  mem:clawnch-bot:ep:ep_..._abc  → {"id":"ep_...", "chunks":[...]}│
│  mem:clawnch-bot:words          → {"user": "{\"idf\":2.3,...}"}  │
│  mem:clawnch-bot:tags           → {"preferences", "support"}     │
│  mem:clawnch-bot:tag:preferences→ {"ep_clawnch-bot_123_abc"}     │
│  mem:clawnch-bot:meta           → {"totalEpisodes":47, ...}      │
│  mem:clawnch-bot:recent         → [(1706789123, "ep_..._abc")]   │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

#### Key Types

| Key Pattern | Redis Type | Description |
|-------------|-----------|-------------|
| `mem:{agentId}:episodes` | Set | All episode IDs for the agent |
| `mem:{agentId}:ep:{id}` | String | Episode JSON (serialized) |
| `mem:{agentId}:words` | Hash | Word statistics for BM25 |
| `mem:{agentId}:tags` | Set | All tag names used |
| `mem:{agentId}:tag:{tag}` | Set | Episode IDs with this tag |
| `mem:{agentId}:meta` | String | Agent metadata JSON |
| `mem:{agentId}:recent` | Sorted Set | Episode IDs sorted by timestamp |
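If you ever need to inspect an agent's data directly in Redis, the key patterns in the table above can be centralized in one helper. A sketch; the patterns are from this document, but the helper itself is not part of the SDK:

```typescript
// Build all Redis keys for one agent's memory namespace.
const memKeys = (agentId: string) => ({
  episodes: `mem:${agentId}:episodes`,          // Set of episode IDs
  episode: (id: string) => `mem:${agentId}:ep:${id}`, // Episode JSON
  words: `mem:${agentId}:words`,                // Hash: word -> WordStats
  tags: `mem:${agentId}:tags`,                  // Set of tag names
  tag: (name: string) => `mem:${agentId}:tag:${name}`, // Episodes with tag
  meta: `mem:${agentId}:meta`,                  // Agent metadata JSON
  recent: `mem:${agentId}:recent`,              // Sorted set by timestamp
});
```

For example, `memKeys("clawnch-bot").tag("preferences")` yields `mem:clawnch-bot:tag:preferences`, matching the example in the diagram.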

#### Episode JSON Structure

```json
{
  "id": "ep_clawnch-bot_1706789123_abc123",
  "agentId": "clawnch-bot",
  "chunks": [
    {
      "id": "chunk_ep_clawnch-bot_1706789123_abc123_0",
      "text": "User prefers dark mode and TypeScript",
      "tokens": ["user", "prefers", "dark", "mode", "typescript"],
      "tokenFrequency": {"user": 1, "prefers": 1, "dark": 1, "mode": 1, "typescript": 1},
      "timestamp": 1706789123000,
      "episodeId": "ep_clawnch-bot_1706789123_abc123",
      "index": 0
    }
  ],
  "tags": ["preferences", "user"],
  "type": "fact",
  "createdAt": 1706789123000,
  "updatedAt": 1706789123000,
  "metadata": {"source": "conversation"}
}
```

#### WordStats JSON Structure

```json
{
  "word": "typescript",
  "documentFrequency": 12,
  "totalOccurrences": 34,
  "idf": 2.31,
  "lastSeen": 1706789123000
}
```

#### Agent Metadata JSON Structure

```json
{
  "agentId": "clawnch-bot",
  "totalEpisodes": 47,
  "totalChunks": 156,
  "totalWords": 892,
  "avgChunkLength": 45.2,
  "createdAt": 1706700000000,
  "updatedAt": 1706789123000
}
```
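For typed access to these records, the three JSON samples above can be expressed as TypeScript interfaces. These are inferred from the examples: the field names match the samples, but the interface names are illustrative and not part of the published SDK surface.

```typescript
// One indexed chunk of an episode's text.
interface Chunk {
  id: string;
  text: string;
  tokens: string[];
  tokenFrequency: Record<string, number>;
  timestamp: number;
  episodeId: string;
  index: number;
}

// A stored memory episode (mem:{agentId}:ep:{id}).
interface Episode {
  id: string;
  agentId: string;
  chunks: Chunk[];
  tags: string[];
  type: string;
  createdAt: number;
  updatedAt: number;
  metadata?: Record<string, unknown>;
}

// Per-word BM25 statistics (mem:{agentId}:words hash values).
interface WordStats {
  word: string;
  documentFrequency: number;
  totalOccurrences: number;
  idf: number;
  lastSeen: number;
}

// Agent-level counters (mem:{agentId}:meta).
interface AgentMeta {
  agentId: string;
  totalEpisodes: number;
  totalChunks: number;
  totalWords: number;
  avgChunkLength: number;
  createdAt: number;
  updatedAt: number;
}
```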

---

## Support

- **Website:** [clawn.ch](https://clawn.ch)
- **GitHub:** [github.com/clawnch](https://github.com/clawnch)
- **Twitter:** [@Clawnch_Bot](https://x.com/Clawnch_Bot)

---

**Last Updated:** February 3, 2026  
**SDK Version:** 1.0.4  
**MCP Version:** 1.0.4  
**API Version:** v1
