Your AI forgets everything between conversations. Ask it something you told it last week and you get a blank stare. Supermemory fixes this with a unified memory and context API: it extracts facts from conversations, tracks changes over time, auto-forgets expired info, and gives any AI app a persistent, personalized memory layer. Ranked #1 on all three major memory benchmarks (LongMemEval, LoCoMo, ConvoMem).
*Source: GitHub, supermemoryai/supermemory (21K+ stars) · Douyin · supermemory.ai*
## Why Not Just RAG?
Traditional RAG retrieves the same documents for every user; it doesn't know who's asking. Supermemory is different:
Traditional RAG:

User asks → Search docs → Same results for everyone → Answer

Supermemory:

User asks → Search docs + recall user-specific facts → Personalized answer
                              │
                              ├── "User prefers Python over Java"
                              ├── "User's project uses PostgreSQL"
                              └── "User said yesterday: budget is $5K"
                                  (supersedes last month's "$10K")
Key difference: Supermemory tracks facts per user over time, handles contradictions (newer info supersedes older), and auto-forgets temporary context.
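The supersede-and-forget behavior can be sketched in a few lines. This is an illustrative in-memory model, not Supermemory's implementation; `UserMemory` and its methods are hypothetical names.

```typescript
// Illustrative sketch only: a per-user fact store where newer facts about a
// topic supersede older ones, and facts with a TTL are forgotten on expiry.
type Fact = {
  value: string;
  storedAt: number;   // ms since epoch
  expiresAt?: number; // set only for temporary context ("meeting at 3pm")
};

class UserMemory {
  private facts = new Map<string, Fact>();

  add(topic: string, value: string, ttlMs?: number): void {
    const now = Date.now();
    // Newer information about the same topic supersedes the old fact.
    this.facts.set(topic, {
      value,
      storedAt: now,
      expiresAt: ttlMs !== undefined ? now + ttlMs : undefined,
    });
  }

  recall(topic: string): string | undefined {
    const fact = this.facts.get(topic);
    // Auto-forget: expired facts are purged when touched.
    if (fact?.expiresAt !== undefined && fact.expiresAt <= Date.now()) {
      this.facts.delete(topic);
      return undefined;
    }
    return fact?.value;
  }
}

const mem = new UserMemory();
mem.add("budget", "$10K");         // last month
mem.add("budget", "$5K");          // yesterday: supersedes $10K
console.log(mem.recall("budget")); // "$5K"
```

A production system additionally has to extract the topic/value pairs from free-form conversation, which is the part Supermemory automates.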
## Core Capabilities
| Feature | What It Does |
|---|---|
| Fact extraction | Automatically pulls facts from conversations; no manual tagging |
| Temporal awareness | Knows that "I moved to NYC" supersedes "I live in SF" from last month |
| Auto-forgetting | Expired info (e.g., "meeting at 3pm today") is purged automatically |
| User profiles | Auto-maintained context combining stable facts + recent activity (~50ms retrieval) |
| Hybrid search | RAG + memory queries combined: knowledge base docs + personalized context in one call |
| Multi-modal | PDFs, images (OCR), videos (transcription), code (AST-aware chunking) |
| Data connectors | Real-time sync with Google Drive, Gmail, Notion, OneDrive, GitHub |
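To make the hybrid-search row concrete, here is a hedged sketch of the idea: one call fans out to a shared knowledge base and a per-user fact store, then merges the results. All names (`searchDocs`, `recallFacts`, `hybridSearch`) are illustrative, not the Supermemory API, and real systems use embedding similarity rather than the substring matching used here for brevity.

```typescript
// Shared knowledge base (same for every user).
const docs = [
  "PostgreSQL connection pooling guide",
  "Java deployment checklist",
  "Python packaging guide",
];

// Per-user facts (the personalized half of a hybrid query).
const userFacts: Record<string, string[]> = {
  user123: ["prefers Python over Java", "project uses PostgreSQL"],
};

function searchDocs(query: string): string[] {
  const q = query.toLowerCase();
  return docs.filter(d => d.toLowerCase().includes(q));
}

function recallFacts(userId: string, query: string): string[] {
  const words = query.toLowerCase().split(/\s+/);
  return (userFacts[userId] ?? []).filter(f =>
    words.some(w => f.toLowerCase().includes(w)),
  );
}

// One call returns both shared documents and user-specific context.
function hybridSearch(userId: string, query: string) {
  return { docs: searchDocs(query), facts: recallFacts(userId, query) };
}

console.log(hybridSearch("user123", "postgresql"));
```

The design point is that the caller makes a single request and the memory layer decides how to blend shared and personal context.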
## Quick Start
### As MCP Server (Claude Code / Cursor / VS Code)
npx -y install-mcp@latest https://mcp.supermemory.ai/mcp --client claude --oauth=yes
One command, and Claude Code gains persistent memory across sessions.
### As SDK
npm install supermemory # or: pip install supermemory
import { SuperMemory } from 'supermemory';
const client = new SuperMemory();
// Store a memory
await client.add("User prefers dark mode and uses vim keybindings");
// Retrieve user profile + search
const result = await client.profile({ user_id: "user123", search: "editor preferences" });
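A common next step after a snippet like the one above is to fold the retrieved profile into the system prompt before each model call. The profile shape below is an assumption for illustration, not the documented response schema:

```typescript
// Hypothetical profile shape: stable facts plus recent activity, roughly
// matching the "User profiles" capability described above.
type Profile = { staticFacts: string[]; recentActivity: string[] };

// Compose a system prompt that carries the user's remembered context.
function buildSystemPrompt(base: string, profile: Profile): string {
  return [
    base,
    "Known user facts:",
    ...profile.staticFacts.map(f => `- ${f}`),
    "Recent activity:",
    ...profile.recentActivity.map(a => `- ${a}`),
  ].join("\n");
}

const prompt = buildSystemPrompt("You are a helpful assistant.", {
  staticFacts: ["prefers dark mode", "uses vim keybindings"],
  recentActivity: ["asked about editor preferences"],
});
console.log(prompt);
```

Because profile retrieval is fast (~50ms per the table above), this injection can run on every turn without noticeable latency.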
## Framework Integrations
Drop-in wrappers for: Vercel AI SDK, LangChain, LangGraph, OpenAI Agents SDK, Mastra, Agno, n8n.
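These wrappers share one underlying pattern: intercept the messages bound for the model and prepend recalled context. A framework-agnostic sketch of that pattern (`withMemory` is a hypothetical helper, not an actual export of any of these integrations):

```typescript
// A chat function takes messages and returns the model's reply.
type ChatFn = (messages: string[]) => Promise<string>;
// A recall function fetches memories relevant to the pending messages.
type RecallFn = (userId: string, messages: string[]) => Promise<string[]>;

// Wrap any chat function so recalled memories are injected as context.
function withMemory(chat: ChatFn, recall: RecallFn, userId: string): ChatFn {
  return async (messages) => {
    const memories = await recall(userId, messages);
    return chat([...memories.map(m => `[memory] ${m}`), ...messages]);
  };
}

// Toy usage with stubbed functions (no network calls):
const echoChat: ChatFn = async msgs => msgs.join(" | ");
const stubRecall: RecallFn = async () => ["user prefers Python"];

withMemory(echoChat, stubRecall, "user123")(["hello"]).then(console.log);
// → "[memory] user prefers Python | hello"
```

A real wrapper would also write the new exchange back to memory after the model responds, closing the loop.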
## Benchmark Results
| Benchmark | What It Tests | Supermemory Score | Rank |
|---|---|---|---|
| LongMemEval | Long-term memory with knowledge updates | 81.6% accuracy | #1 |
| LoCoMo | Fact recall across extended conversations | Top | #1 |
| ConvoMem | Personalization and preference learning | Top | #1 |
The team also open-sourced MemoryBench β a benchmarking framework for comparing memory providers head-to-head.
## How It Compares
| | Supermemory | Mem0 | Zep | Letta |
|---|---|---|---|---|
| Approach | Unified memory ontology | Graph-enhanced memory | Temporal knowledge graph | Self-editing memory (OS metaphor) |
| Temporal handling | Auto-supersede + auto-forget | Manual updates | Tracks fact changes over time | Archival store |
| User profiles | Auto-generated, ~50ms | Manual configuration | Built-in | Per-agent state |
| MCP support | Yes (one-command install) | Via community servers | No | No |
| Multi-modal | PDF, images, video, code | Text-focused | Text-focused | Text-focused |
| GitHub stars | 21K+ | 48K+ | 3K+ | 15K+ |
| Best for | Full-stack AI apps with personalization | Chatbots, personal assistants | Enterprise with compliance needs | Agent runtimes with autonomy |
## Architecture: One Ontology, Not Five Systems
Most memory solutions require you to configure separate systems: vector DB for search, graph DB for relationships, key-value store for facts, profile builder for users. Supermemory consolidates everything into a single unified memory ontology:
┌──────────────────────────────────────────┐
│         Supermemory Unified Layer        │
│                                          │
│  ┌──────────┐  ┌──────────┐  ┌─────────┐ │
│  │   Fact   │  │  Search  │  │ Profile │ │
│  │Extraction│  │ (hybrid) │  │ Builder │ │
│  └────┬─────┘  └────┬─────┘  └────┬────┘ │
│       │             │             │      │
│       └─────────────┼─────────────┘      │
│                     ▼                    │
│           ┌──────────────────┐           │
│           │  Unified Memory  │           │
│           │     Ontology     │           │
│           └──────────────────┘           │
│                     │                    │
│       ┌─────────────┼─────────────┐      │
│       ▼             ▼             ▼      │
│  ┌──────────┐  ┌──────────┐  ┌─────────┐ │
│  │Connectors│  │  Multi-  │  │  Auto-  │ │
│  │ (Drive,  │  │  modal   │  │ forget  │ │
│  │  Notion) │  │(PDF, img)│  │         │ │
│  └──────────┘  └──────────┘  └─────────┘ │
└──────────────────────────────────────────┘
No separate vector DB to configure. No graph DB to maintain. One system.
## How LearnAI Team Could Use This
- Persistent course assistants: give AI tutors memory across sessions so students don't re-explain goals, misconceptions, or project context.
- Research project continuity: store evolving facts about datasets, hypotheses, and results so research agents resume accurately.
- Personalized AI literacy coaching: track each learner's preferred tools, skill level, and recurring blockers.
- Faculty support workflows: maintain context across syllabus design, assignment drafting, rubric iteration, and feedback cycles.
## Real-World Use Cases
- Personal AI assistants: remember user preferences, projects, deadlines, and recent decisions across conversations.
- Customer support agents: retrieve account-specific history without forcing users to repeat context.
- AI coding agents: preserve repository conventions, architectural decisions, and developer preferences across sessions.
- Enterprise knowledge apps: combine shared documents with user-specific context for more relevant search.
- Sales workflows: track changing customer needs, budgets, stakeholders, and follow-up commitments over time.
## Links
- GitHub: supermemoryai/supermemory (21K+ stars)
- Docs: supermemory.ai/docs
- Console: console.supermemory.ai
- MemoryBench: Open-source benchmarking for memory providers
- MCP install:
npx -y install-mcp@latest https://mcp.supermemory.ai/mcp --client claude --oauth=yes