
Install

Choose your preferred installation method. The result is the same: a single binary called m1nd-mcp.


Download the pre-built binary for your platform. Fastest option -- no build tools needed.

# macOS (Apple Silicon)
$ curl -fsSL https://github.com/maxkle1nz/m1nd/releases/latest/download/m1nd-mcp-aarch64-apple-darwin.tar.gz \
    | tar xz -C /usr/local/bin/

# macOS (Intel)
$ curl -fsSL https://github.com/maxkle1nz/m1nd/releases/latest/download/m1nd-mcp-x86_64-apple-darwin.tar.gz \
    | tar xz -C /usr/local/bin/

# Linux
$ curl -fsSL https://github.com/maxkle1nz/m1nd/releases/latest/download/m1nd-mcp-x86_64-unknown-linux-gnu.tar.gz \
    | tar xz -C /usr/local/bin/

# Verify
$ m1nd-mcp --version
m1nd-mcp 0.1.0

Install via Cargo. Requires a Rust toolchain (1.75 or newer).

$ cargo install m1nd-mcp

# Or directly from the repository
$ cargo install --git https://github.com/maxkle1nz/m1nd.git m1nd-mcp

Build from source for maximum control and customization.

$ git clone https://github.com/maxkle1nz/m1nd.git
$ cd m1nd
$ cargo build --release

# Binary at: target/release/m1nd-mcp
$ cp target/release/m1nd-mcp /usr/local/bin/

Configure Your MCP Client

Add m1nd to your AI coding agent's MCP configuration. Works with any MCP-compatible client.

Claude Code
~/.claude/claude_desktop_config.json
{
  "mcpServers": {
    "m1nd": {
      "command": "m1nd-mcp",
      "args": [],
      "env": {
        "M1ND_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
OpenCode / Codex
.opencode/mcp.json or opencode.json
{
  "mcpServers": {
    "m1nd": {
      "command": "m1nd-mcp",
      "env": {
        "M1ND_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
M1ND_PROJECT_ROOT

Set this to the root of the codebase you want m1nd to analyze. The graph will be built from files under this directory. If not set, m1nd uses the current working directory.
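The lookup order above (environment variable first, current working directory as the fallback) can be sketched in a few lines of Python. This is a minimal illustration of the documented behavior, not m1nd's actual implementation:

```python
import os

def resolve_project_root() -> str:
    """Use M1ND_PROJECT_ROOT when set; otherwise fall back to the
    current working directory, mirroring the documented behavior."""
    return os.environ.get("M1ND_PROJECT_ROOT") or os.getcwd()

# With the variable set, that path wins:
os.environ["M1ND_PROJECT_ROOT"] = "/path/to/your/project"
print(resolve_project_root())  # /path/to/your/project

# Unset, the current directory is used:
del os.environ["M1ND_PROJECT_ROOT"]
print(resolve_project_root() == os.getcwd())  # True
```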

First Ingest

Build the connectome graph from your codebase. This happens automatically on first query, but you can also trigger it explicitly.

1. Ingest the codebase

Tell your agent to ingest the project. m1nd will parse your source files, extract modules, functions, and types, and build the graph.

// In your AI agent conversation
> Use m1nd to ingest my codebase at ./my-project

// The agent calls:
m1nd.ingest({ "source": "./my-project", "mode": "full" })

Ingested 847 files, created 2341 nodes, 4892 edges. Graph ready.
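Conceptually, ingestion turns files into nodes and references between them into edges. The toy Python sketch below shows that shape for a directory of Python files, using intra-project imports as edges. It is an illustrative assumption about the node/edge model, not m1nd's real parser, which also extracts functions and types:

```python
import os
import re

def toy_ingest(root: str):
    """Toy ingestion sketch: one node per .py file, one edge per
    import of another file in the same project."""
    nodes = set()
    files = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith(".py"):
                mod = name[:-3]          # module name from filename
                nodes.add(mod)
                files[mod] = os.path.join(dirpath, name)

    edges = set()
    for mod, path in files.items():
        with open(path) as f:
            for line in f:
                # Match "import foo" or "from foo import ..."
                m = re.match(r"\s*(?:from|import)\s+(\w+)", line)
                if m and m.group(1) in nodes and m.group(1) != mod:
                    edges.add((mod, m.group(1)))
    return nodes, edges
```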
2. Run your first query

Activate the graph to find related modules. This is the moment it clicks.

> What modules are related to authentication?

// The agent calls:
m1nd.activate({ "query": "authentication", "depth": 3 })

Found 12 related nodes:
  auth_middleware.py (0.94) — JWT validation, session management
  user_model.py (0.87) — User schema, password hashing
  oauth_handler.py (0.82) — OAuth2 flow, token exchange
  permission_guard.py (0.76) — RBAC permission checks
  ...
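One way to picture what a depth-limited activation query does is spreading activation: seed nodes start at full score, and each hop outward decays the score, which is why nearer modules rank higher. The sketch below is an illustrative assumption about how such scores could arise (the `decay` parameter and update rule are invented for the example), not m1nd's internals:

```python
def activate(graph, seeds, depth=3, decay=0.8):
    """Toy spreading activation over an adjacency dict.
    Seeds start at 1.0; each hop multiplies by `decay`, and a
    node keeps the best score it receives."""
    scores = {s: 1.0 for s in seeds}
    frontier = dict(scores)
    for _ in range(depth):
        nxt = {}
        for node, score in frontier.items():
            for nb in graph.get(node, ()):
                s = score * decay
                if s > scores.get(nb, 0.0):
                    scores[nb] = s
                    nxt[nb] = s
        frontier = nxt
    # Highest-scoring (closest) nodes first
    return sorted(scores.items(), key=lambda kv: -kv[1])

graph = {
    "auth_middleware": ["user_model", "oauth_handler"],
    "oauth_handler": ["permission_guard"],
}
print(activate(graph, ["auth_middleware"], depth=3))
```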
3. Teach the graph

Provide feedback to improve future results. The graph learns from your usage.

// Those results were exactly what I needed
m1nd.learn({ "query": "authentication", "feedback": "correct" })

Hebbian update: strengthened 12 edges in auth subgraph.
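A Hebbian update of the kind reported above typically nudges the weights of the edges involved in a confirmed result toward a maximum, and weakens them on negative feedback. The sketch below illustrates that idea; the update rule, learning rate, and default weight are assumptions for the example, not m1nd's actual rule:

```python
def hebbian_update(weights, path, feedback, rate=0.1, w_max=1.0):
    """Toy Hebbian feedback: strengthen edges on a confirmed path
    toward w_max; weaken them toward 0 on negative feedback."""
    for edge in path:
        w = weights.get(edge, 0.5)   # assumed default edge weight
        if feedback == "correct":
            w += rate * (w_max - w)  # move up, saturating at w_max
        else:
            w -= rate * w            # move down, saturating at 0
        weights[edge] = w
    return weights
```

Repeated "correct" feedback on the same edges pushes their weights asymptotically toward `w_max`, so frequently confirmed paths surface earlier in later queries.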

Verify It Works

Run a health check to confirm everything is connected.

m1nd.health({})

{
  "status": "healthy",
  "nodes": 2341,
  "edges": 4892,
  "memory_mb": 47,
  "uptime_seconds": 312,
  "queries_served": 3,
  "persistence": "auto (every 50 queries)"
}
Empty graph?

If you see 0 nodes and 0 edges, run m1nd.ingest first. The graph starts empty and must be populated from your codebase. Check that M1ND_PROJECT_ROOT is set correctly.

Keep Going

You have a working connectome. Here is where to go next.