
Using Memori with AI Coding Assistants

Give Claude Code, Cursor, and Cline persistent memory across sessions. Remember coding preferences, project context, and architectural decisions.

💡 Community Request: This guide was created in response to Issue #66 asking how to use Memori with Claude Code. No, you don't need vLLM - just an MCP server!

Supported Tools

Claude Code

Anthropic's official CLI for Claude. Full MCP support for memory tools.

Fully Supported

Cursor

AI-first code editor. Configure MCP servers in settings for memory integration.

Fully Supported

Cline

VS Code extension for AI coding. Supports MCP for external tools like Memori.

Fully Supported

How It Works

AI coding assistants own the LLM call internally, so you can't wrap their client directly with Memori's SDK. Instead, Memori runs as an MCP (Model Context Protocol) server that exposes memory tools the assistant can call.

┌─────────────────────┐         MCP Protocol          ┌─────────────────────┐
│                     │                               │                     │
│   Claude Code       │  ◄─── remember / recall ───►  │   Memori MCP        │
│   Cursor            │                               │   Server            │
│   Cline             │                               │                     │
│                     │                               └──────────┬──────────┘
└─────────────────────┘                                          │
                                                                 ▼
                                                      ┌─────────────────────┐
                                                      │   Local Database    │
                                                      │   (SQLite/Postgres) │
                                                      └─────────────────────┘

Quick Start: Claude Code

  1. Install the Memori MCP Server
    pip install memori-mcp

    See Memori MCP Server for details.

  2. Configure Claude Code

    Add to your MCP configuration (~/.claude/claude_desktop_config.json):

    {
      "mcpServers": {
        "memori": {
          "command": "memori-mcp-server",
          "args": ["--db", "sqlite:///~/.memori/claude-code.db"],
          "env": { "MEMORI_ATTRIBUTION": "claude-code:${USER}" }
        }
      }
    }
  3. Restart Claude Code

    Restart to load the MCP server configuration.

  4. Test It

    Try these prompts:

    # Store a memory
    Remember that I prefer TypeScript over JavaScript for new projects.

    # Recall later (even in a new session)
    What are my coding preferences?

Quick Start: Cursor

  1. Install the MCP Server
    pip install memori-mcp
  2. Configure Cursor

    Edit ~/.cursor/mcp.json:

    {
      "mcpServers": {
        "memori": {
          "command": "memori-mcp-server",
          "args": ["--db", "sqlite:///~/.memori/cursor.db"]
        }
      }
    }
  3. Enable MCP in Settings

    Open Settings (Cmd/Ctrl + ,) → Search "MCP" → Enable "Model Context Protocol"

  4. Restart Cursor

Quick Start: Cline

  1. Install the MCP Server
    pip install memori-mcp
  2. Configure Cline

    In VS Code, open Cline settings and add:

    {
      "cline.mcpServers": {
        "memori": {
          "command": "memori-mcp-server",
          "args": ["--db", "sqlite:///~/.memori/cline.db"]
        }
      }
    }
  3. Reload VS Code Window

Practical Use Cases

🏗️ Codebase Onboarding

Remember key facts about a new project's architecture, conventions, and patterns.

"Remember: Auth uses JWT in HTTP-only cookies, API prefix is /api/v1"

✨ Coding Style

Store personal preferences so generated code matches your style.

"Remember: I prefer async/await over .then() chains"

🐛 Bug Investigation

Record debugging context and solutions for future reference.

"Remember: Payment bug was race condition, fixed with idempotency keys"

📐 Architecture Decisions

Document why certain technical choices were made.

"Remember: Chose PostgreSQL over MongoDB for ACID compliance"

Available Memory Tools

Once configured, the AI assistant has access to these tools:

memori_remember

Store a memory for later recall.

// Parameters
{
  "content": "User prefers functional programming",  // required
  "type": "preference",                              // preference | fact | summary | rule
  "ttl": null                                        // optional: auto-expire in seconds
}

memori_recall

Retrieve relevant memories based on semantic similarity.

// Parameters
{
  "query": "What are the user's coding preferences?",
  "limit": 5,        // max memories to return
  "min_score": 0.7   // similarity threshold
}
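The min_score threshold drops weak matches so the assistant only sees relevant memories. A hedged sketch of how such a threshold behaves with cosine similarity (the actual embedding model and scoring are internal to Memori; the vectors below are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy embeddings -- real ones come from an embedding model.
memories = {
    "prefers async/await":  [0.9, 0.1, 0.2],
    "payment bug fixed":    [0.1, 0.8, 0.3],
}
query_vec = [0.85, 0.15, 0.25]   # roughly: "coding style preferences?"

results = {
    text: round(cosine(query_vec, vec), 3)
    for text, vec in memories.items()
}
# Only memories scoring at or above min_score are returned.
passing = [text for text, score in results.items() if score >= 0.7]
```

With min_score at 0.7, the style preference passes while the unrelated bug note is filtered out; lowering the threshold trades precision for recall.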

memori_forget

Remove memories by ID or by query.

// By ID
{ "memory_id": "mem_abc123" }

// By query
{ "query": "outdated config", "older_than": "30d" }
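The "30d" value is a duration string. One plausible way such strings map to seconds, sketched below (the unit grammar here is an assumption; check the Memori MCP server docs for the exact syntax it accepts):

```python
import re

# Assumed unit suffixes: seconds, minutes, hours, days.
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_age(spec: str) -> int:
    """Parse a duration like '30d' or '12h' into seconds."""
    m = re.fullmatch(r"(\d+)([smhd])", spec)
    if not m:
        raise ValueError(f"bad duration: {spec!r}")
    return int(m.group(1)) * _UNITS[m.group(2)]
```

So "30d" selects memories older than 30 * 86400 = 2,592,000 seconds.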

memori_list

List all memories for inspection and debugging.

{ "type": "preference", "limit": 20 }

Frequently Asked Questions

Do I need vLLM to use Memori with Claude Code?

No. vLLM is for self-hosting open-source LLMs. When using Claude Code, Cursor, or Cline, the LLM is already provided by the tool. You only need Memori's MCP server for memory storage.

Can I use the Anthropic API directly with Memori?

Yes! If you're writing your own code that calls the Anthropic API, you can integrate Memori directly by registering the client:

from memori import Memori
from anthropic import Anthropic

mem = Memori(database_connect="sqlite:///memory.db")
client = Anthropic()

mem.llm.register(client)
mem.enable()
Are my memories sent to the cloud?

Not by default. The MCP server runs locally with a local SQLite database, so your memories stay on your machine. For team features, you can optionally connect to Memori's cloud service.
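Because the store is plain SQLite, you can inspect it yourself with Python's standard library. A sketch (the table name and columns below are illustrative; the real schema may differ):

```python
import sqlite3

# In practice, open the file from your config, e.g. ~/.memori/claude-code.db.
# An in-memory database is used here so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id TEXT PRIMARY KEY, type TEXT, content TEXT)"
)
conn.execute(
    "INSERT INTO memories VALUES (?, ?, ?)",
    ("mem_abc123", "preference", "Prefers TypeScript"),
)
rows = conn.execute("SELECT type, content FROM memories").fetchall()
conn.commit()
```

Nothing leaves the machine unless you explicitly configure a remote backend.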

How do I share memories between Claude Code and Cursor?

Point both tools to the same database file:

"args": ["--db", "sqlite:///~/.memori/shared.db"]
Can I scope memories to a specific project?

Yes! Use a project-local database path:

"args": ["--db", "sqlite:///.memori/project.db"]

This keeps memories isolated per project.
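One convenient convention is to prefer a project-local database when one exists and fall back to a shared one otherwise. A sketch of that convention (Memori itself just uses whatever --db path you pass; the helper below is hypothetical):

```python
from pathlib import Path

def memory_db_url(project_root: Path) -> str:
    """Prefer a project-local database, falling back to a shared one."""
    local = project_root / ".memori" / "project.db"
    if local.exists():
        return f"sqlite:///{local}"
    return f"sqlite:///{Path.home() / '.memori' / 'shared.db'}"
```

You could wire this into a small wrapper script that launches memori-mcp-server with the resolved path.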

Resources

Questions or feedback? Open an issue on GitHub or join the community Discord.
