Engram — context memory for AI agents

A brain-inspired, portable context database. Store agent knowledge as atomic bullets in a concept graph — not raw text. Any agent, any LLM, any framework. Context persists, transfers, and gets smarter with every use.

MIT License · pip install engram-contextdb

Agent memory is broken

Current AI frameworks store context as raw text, summaries, or vector chunks. This leads to predictable failures.

Context decay

Details are lost through repeated summarization. By session 10, your agent has forgotten what mattered in session 2.

Context isolation

Claude can't share context with GPT or Gemini. Switch models and you start from zero.

Context-as-text

No structure, no relationships, no intent tracking. Just blobs of text with no way to query or evolve them.

No learning

Context doesn't improve based on what actually worked. Every recall is equally weighted regardless of past usefulness.

Three operations. That's it.

Agents send raw text in (commit) and get structured context back out (materialize). The server does all the heavy lifting.

1. Commit

Your agent sends raw text — conversation snippets, tool outputs, documents. Engram's canonical Reflector extracts structured bullets (atomic knowledge units), the Curator deduplicates and merges, and delta operations update the concept graph. The raw text is preserved permanently — like git commits.
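In practice a commit is one HTTP call with raw text in the body. A minimal sketch of the request an agent might send (the endpoint path and field names here are illustrative assumptions, not Engram's documented API):

```python
import json

# Hypothetical commit payload -- "context_id", "raw_text", and "source"
# are assumed field names for illustration only.
payload = {
    "context_id": "project-atlas",
    "raw_text": "We decided to pin numpy<2 because the build broke on 2.0.",
    "source": "conversation",  # where this snippet came from
}

# With the server from the quick start running on localhost:5820:
# import requests
# resp = requests.post(
#     "http://localhost:5820/v1/contexts/project-atlas/commit",
#     json=payload,
# )
print(json.dumps(payload, indent=2))
```

The agent never extracts bullets itself; it just ships raw text and lets the server-side Reflector and Curator do the work.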

2. Materialize

When your agent needs context, Engram embeds the query, finds relevant bullets via spreading activation, ranks by effective salience (usage-weighted), packs into a token budget, and renders for the target model — Claude gets XML, GPT gets Markdown, Gemini gets prose.
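The ranking and packing step can be sketched in a few lines. This illustrates the idea (effective salience as usage-weighted base salience, greedy packing into a token budget), not Engram's actual scoring:

```python
# Illustrative only: the weighting formula and field names are
# assumptions, not Engram's implementation.

def effective_salience(bullet):
    # Usage-weighted: bullets that helped before rank higher.
    return bullet["salience"] * (1.0 + bullet["helpful_uses"] / 10.0)

def pack(bullets, token_budget):
    packed, used = [], 0
    for b in sorted(bullets, key=effective_salience, reverse=True):
        if used + b["tokens"] <= token_budget:
            packed.append(b)
            used += b["tokens"]
    return packed

bullets = [
    {"text": "Pin numpy<2",        "salience": 0.9, "helpful_uses": 4, "tokens": 8},
    {"text": "CI runs on 3.11",    "salience": 0.6, "helpful_uses": 0, "tokens": 7},
    {"text": "Old migration note", "salience": 0.3, "helpful_uses": 0, "tokens": 20},
]
print([b["text"] for b in pack(bullets, token_budget=16)])
# -> ['Pin numpy<2', 'CI runs on 3.11']
```

Rendering is the last step: the same packed bullets are serialized differently per target model.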

3. Learn

After using materialized context, agents report back: did it help? Reconsolidation updates bullet salience — useful knowledge gets stronger, unhelpful knowledge fades. The graph gets smarter with every cycle.
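A minimal sketch of reconsolidation as an update rule (the rate and formula are illustrative assumptions, not Engram's actual rule): nudge salience toward 1.0 when a bullet helped, decay it toward 0.0 when it did not.

```python
# Illustrative reconsolidation update, not Engram's implementation.

def reconsolidate(salience, helped, rate=0.2):
    if helped:
        return salience + rate * (1.0 - salience)  # strengthen
    return salience * (1.0 - rate)                 # fade

s = 0.5
s = reconsolidate(s, helped=True)   # 0.5 -> 0.6
s = reconsolidate(s, helped=False)  # 0.6 -> 0.48
print(round(s, 2))
# -> 0.48
```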

What makes Engram different

Atomic Bullets

Knowledge is stored as discrete, individually tracked units — facts, decisions, strategies, warnings, procedures — each with usage stats and lifecycle tracking.

Delta Operations

Mutations are never wholesale rewrites. Every change is an atomic delta op in a batch — preventing the context collapse that plagues summarization-based approaches.
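The batch-of-delta-ops idea can be sketched as follows; the op names and fields are illustrative, not Engram's wire format. Each op touches one bullet, and a batch applies as a unit, so the graph is never rewritten wholesale:

```python
# Illustrative delta application; "add"/"update"/"remove" op names
# and the bullet shape are assumptions for this sketch.

def apply_batch(bullets, batch):
    bullets = dict(bullets)  # work on a copy; apply as one unit
    for op in batch:
        if op["op"] in ("add", "update"):
            bullets[op["id"]] = op["text"]
        elif op["op"] == "remove":
            bullets.pop(op["id"], None)
    return bullets

state = {"b1": "Use Postgres in prod"}
batch = [
    {"op": "add",    "id": "b2", "text": "Pin numpy<2"},
    {"op": "update", "id": "b1", "text": "Use Postgres + pgvector in prod"},
]
print(apply_batch(state, batch))
```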

Cross-Platform

Store once, materialize for any LLM. Claude, GPT, Gemini, DeepSeek, local models — context transfers seamlessly between them.

Reinforcement Loop

Bullets that prove useful get stronger; unhelpful ones fade away. Inspired by memory reconsolidation in neuroscience — the graph learns from outcomes.

Canonical Reflector

One server-level model processes all raw input from all agents. Consistent bullet quality regardless of which agent committed the data.

Multi-Agent Safe

Per-context advisory locks serialize delta application. Multiple agents can compute in parallel — only writes are serialized.
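The serialization property can be sketched with an in-process lock table (Engram's actual locks are server-side advisory locks; this just shows the behavior: parallel compute, serialized writes per context):

```python
import threading
from collections import defaultdict

# One lock per context id -- writes to the same context queue up,
# writes to different contexts proceed in parallel.
_locks = defaultdict(threading.Lock)

def apply_deltas(context_id, batch, apply_fn):
    with _locks[context_id]:
        return apply_fn(batch)

# Two agents writing to different contexts never block each other:
print(apply_deltas("ctx-a", [1, 2], sum))
print(apply_deltas("ctx-b", [3, 4], sum))
```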

Every operation is modeled after a real brain mechanism

Engram doesn't just store data — it mirrors how human memory actually works: encoding, recall, reinforcement, forgetting, and consolidation.

Hippocampus → Ingestion (Reflector + Curator)
Neocortex → Concept Graph + Schemas
Amygdala → Salience Scorer
Memory Recall → Materialization Engine
Reconsolidation → Post-Recall Feedback Loop
Forgetting Curve → Salience Decay (Ebbinghaus)
Sleep / Dreams → Consolidation Engine
Schema Formation → Schema Induction

Based on research from Nader, Schiller & Phelps (reconsolidation), Ebbinghaus (forgetting curve), Bartlett & Piaget (schema theory), and Born & Wilhelm (consolidation). Architecture also draws from the ACE framework (Stanford/SambaNova).
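The forgetting-curve entry above follows Ebbinghaus's classic form, R = e^(−t/S): retention R falls with time t, but more slowly for memories with higher strength S. A sketch of how salience decay behaves under it (the strength values are illustrative, not Engram's parameters):

```python
import math

def retention(t_days, strength):
    # Ebbinghaus forgetting curve: R = e^(-t/S).
    return math.exp(-t_days / strength)

weak, strong = 2.0, 10.0  # illustrative strengths; recalls would raise S
print(round(retention(7, weak), 3))   # rarely-recalled bullet fades fast
print(round(retention(7, strong), 3)) # reinforced bullet fades slowly
```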

Works with your stack

Engram is a server that sits between your agents and a knowledge graph. Your agents talk to it over HTTP.

MCP Server

Claude Code / Desktop

First-class MCP integration. pip install "engram-contextdb[mcp]"

Python SDK

LangGraph, CrewAI, AG2

Async Python client. Full LangGraph example in the README.

Function Calling

OpenAI / GPT Agents

Drop-in tools for OpenAI function calling. get_engram_tools()
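The exact schema returned by get_engram_tools() isn't shown here; a tool entry for OpenAI function calling generally takes this shape (the tool name and parameters below are assumptions, only the outer schema is standard):

```python
# Hypothetical tool entry -- name and parameters are illustrative.
materialize_tool = {
    "type": "function",
    "function": {
        "name": "engram_materialize",
        "description": "Fetch relevant context bullets for a query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "token_budget": {"type": "integer"},
            },
            "required": ["query"],
        },
    },
}

# tools = get_engram_tools()
# client.chat.completions.create(model=..., messages=..., tools=tools)
print(materialize_tool["function"]["name"])
```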

Three commands to get started

# Install
pip install engram-contextdb

# Configure (.env with your API keys)
cp .env.example .env

# Run
engram
# Server running at http://localhost:5820

SQLite for local dev (zero setup) or PostgreSQL + pgvector for production. Docker Compose included.

Start building agents that remember.

Engram is MIT-licensed and free to use. Built by the same team that ships production AI systems for enterprise clients.

Need help building agent systems with persistent memory? Talk to our team →