Introduction
m1nd is a local MCP runtime for coding agents. It ingests a repository into a graph-backed operational model so an agent can ask for structure, impact, connected context, continuity, and likely risk instead of reconstructing the repo from raw files every time.
The current public shape of the product is not just “graph search.” It is a guided runtime with:
- graph-grounded retrieval and impact analysis
- `proof_state` on the main structural flows
- `next_suggested_tool`, `next_suggested_target`, and `next_step_hint`
- actionable continuity through `trail_resume`
- observable multi-file writes through `apply_batch`
- recovery loops that teach the next valid move when a tool is used badly
m1nd ships as an MCP server, runs locally, and works with any MCP-compatible client over stdio. The exported schema exposes the live MCP tool surface for your current build; call `tools/list` for the exact count.
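Since MCP frames requests as JSON-RPC 2.0 over stdio, the `tools/list` discovery call mentioned above can be sketched without any SDK. This is a minimal illustration of the message shape; the fake response below is invented purely to show what gets parsed, not actual m1nd output.

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC 2.0 message an MCP client sends to enumerate tools.

    "tools/list" is the standard MCP discovery method; the server replies
    with the live tool surface for the current build."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    }
    return json.dumps(msg)

def count_tools(response_line: str) -> int:
    """Parse a tools/list response line and return the number of exposed tools."""
    resp = json.loads(response_line)
    return len(resp["result"]["tools"])

# Illustrative fake response, just to show the shape being parsed.
fake_response = json.dumps(
    {"jsonrpc": "2.0", "id": 1,
     "result": {"tools": [{"name": "trace"}, {"name": "impact"}]}}
)
print(count_tools(fake_response))  # 2
```

In practice a client writes the request line to the server's stdin and reads the response line from its stdout; the framing shown here is what travels over that pipe.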
The Problem
Most agent loops still waste time in the same place: navigation.
An LLM can reason about a file once it has the file. The expensive part is getting the right file, the right neighbors, and enough proof to act without reopening half the repo.
Without a structural layer, the loop usually looks like this:
- grep for a symbol or phrase
- open a file
- grep for callers, callees, or related paths
- open more files
- repeat until the subsystem shape becomes clear
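The loop above can be modeled as a toy cost counter: every candidate file gets fully read just to decide relevance. This is a sketch of the naive pattern, not m1nd code; the repo contents are throwaway fixtures.

```python
import re
import tempfile
from pathlib import Path

def naive_navigate(repo: Path, symbol: str) -> tuple[set, int]:
    """Toy model of the grep-then-open loop: every file that might mention
    the symbol is fully read, and each read counts as navigation cost."""
    opened = set()
    reads = 0
    pattern = re.compile(re.escape(symbol))
    for path in repo.rglob("*.py"):
        text = path.read_text()  # one full file read per candidate
        reads += 1
        if pattern.search(text):
            opened.add(path)
    return opened, reads

# Build a tiny throwaway repo to show the cost accruing.
with tempfile.TemporaryDirectory() as d:
    repo = Path(d)
    (repo / "a.py").write_text("def handler(): pass\n")
    (repo / "b.py").write_text("from a import handler\nhandler()\n")
    (repo / "c.py").write_text("print('unrelated')\n")
    hits, reads = naive_navigate(repo, "handler")
    print(len(hits), reads)  # 2 matching files, 3 files read
```

Even in this three-file toy, a third of the reads are wasted; the ratio only gets worse as the repo grows, which is the cost the next list itemizes.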
That cost shows up as:
- more file reads than necessary
- more token burn on repo reconstruction
- weaker stopping rules during triage
- more false starts before editing
- more friction resuming prior investigations
What m1nd Changes
m1nd keeps the graph local and lets an agent ask for structure directly:
- `trace` maps stacktraces to likely suspects
- `impact` inspects blast radius before edits
- `seek` and `activate` find intent and connected structure
- `document_resolve`, `document_bindings`, and `document_drift` connect docs/specs to likely code targets and surface stale links
- `document_provider_health` and `auto_ingest_*` expose the local-first document runtime
- `validate_plan` and `surgical_context_v2` prepare safer multi-file changes
- `trail_resume` restores investigations with next-focus and next-tool hints
- `apply_batch` exposes progress, phases, and final handoff signals
The result is less context churn and better decision quality per step.
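Invoking any of these tools uses the standard MCP `tools/call` method. Below is a hedged sketch of the request framing; the `tools/call` envelope is real MCP, but the argument key (`target`) passed to `impact` is an assumption for illustration, not a documented m1nd parameter.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 line for stdio.

    "tools/call" is the standard MCP invocation method; the argument names
    handed to a specific m1nd tool are illustrative assumptions here."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

# Hypothetical blast-radius check before an edit; "target" is an assumed key.
line = make_tool_call(7, "impact", {"target": "src/auth/session.rs"})
print(line)
```

Check the exported schema or a `tools/list` response for the real argument names your build expects.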
Documents And Knowledge Artifacts
The public shape of m1nd is no longer just code plus optional markdown memory.
The merged universal lane can ingest and operationalize:
- markdown notes
- HTML/wiki pages
- office documents
- scholarly PDFs
- structured standards and citation corpora
When a document enters through the universal lane, m1nd can preserve canonical local artifacts, bind that document to likely code, and surface document/code drift when the implementation moves faster than the docs.
Current benchmark truth from the recorded warm-graph corpus:
- 10518 -> 5182 aggregate token proxy
- 50.73% aggregate reduction
- 14 -> 0 false starts
- 39 guided follow-throughs
- 12 successful recovery loops
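The headline reduction follows directly from the before/after token-proxy pair:

```python
def pct_reduction(before: int, after: int) -> float:
    """Percent reduction implied by a before/after measurement pair."""
    return (before - after) / before * 100

# The aggregate token-proxy figures from the recorded corpus.
reduction = pct_reduction(10518, 5182)
print(f"{reduction:.2f}%")  # 50.73%
```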
Not every scenario is a token win. Some wins are continuity, recovery, or execution clarity. That is part of the product truth too.
Core Runtime Ideas
Graph-grounded retrieval
The graph is still the foundation. Activation, semantic retrieval, path search, temporal history, and blast-radius analysis all sit on top of a shared structural model rather than a stateless grep loop.
Guided handoff
Several high-value tools now return more than raw results. They can expose:
- `proof_state`
- `next_suggested_tool`
- `next_suggested_target`
- `next_step_hint`
That turns the server from a catalog of answers into a layer that helps the agent decide what to do next.
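On the client side, consuming a guided handoff can be as simple as reading those fields off a tool result. The field names below are the documented ones; the surrounding payload shape and values are assumptions for illustration.

```python
def next_action(result: dict):
    """Extract the guided-handoff suggestion from a tool result, if present.

    Returns (tool, target) when the server proposes a next move, else None."""
    tool = result.get("next_suggested_tool")
    target = result.get("next_suggested_target")
    if tool and target:
        return tool, target
    return None

# Hypothetical guided result: proof established, next step suggested.
result = {
    "proof_state": "structural_match",
    "next_suggested_tool": "impact",
    "next_suggested_target": "core/session.rs",
    "next_step_hint": "Check blast radius before editing the session module.",
}
print(next_action(result))  # ('impact', 'core/session.rs')
```

An agent loop that honors these hints skips a planning round-trip: instead of re-deriving "what next" from raw results, it follows the server's suggestion directly.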
Continuity
`trail_resume` is no longer just a bookmark restore. It can return compact resume hints, reactivated nodes, the next focus node, the next open question, and the likely next tool. This is one of the main reasons the benchmark corpus now records fewer false starts.
Observable execution
`apply_batch` is now an observable write surface:
- `status_message`
- `proof_state`
- lifecycle phases such as `validate`, `write`, `reingest`, `verify`, and `done`
- coarse progress fields like `progress_pct`
- structured `progress_events`
- live SSE progress in serve mode
Recovery loops
Common failures no longer have to be dead ends. Many invalid calls now return hints, examples, and a suggested next step so the agent can repair the call instead of rediscovering the workflow from scratch.
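A repair loop over such an error response might look like the sketch below. The specific field names (`hint`, `next_suggested_tool`, `example_arguments`) are assumptions chosen to match the guided-handoff vocabulary; check your build's actual error payloads.

```python
def repair_call(error: dict, original_args: dict):
    """Turn a recovery-style error payload into a corrected next call.

    Returns (tool, arguments) when the server supplies repair guidance,
    else None, meaning the agent falls back to rediscovering the workflow."""
    suggested = error.get("next_suggested_tool")
    example_args = error.get("example_arguments")
    if suggested:
        return suggested, example_args or original_args
    return None

# Hypothetical invalid-call response carrying repair guidance.
error = {
    "error": "unknown node id",
    "hint": "Resolve the symbol with seek before requesting impact.",
    "next_suggested_tool": "seek",
    "example_arguments": {"query": "session handler"},
}
print(repair_call(error, {"target": "???"}))
```

The point of the pattern: one failed call costs one round-trip instead of a fresh exploration of the tool surface.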
Who This Is For
- agent builders who want a local structural layer for navigation and edit prep
- MCP client users who want better triage, continuity, and connected context
- multi-agent systems that need shared graph truth without shipping code to an API
- teams that want safer workflow around stacktrace triage, blast radius, and multi-file changes
m1nd is not a compiler, debugger, or test runner replacement. It is best when the real bottleneck is structural understanding and repo navigation.
How To Read This Wiki
Architecture explains how the core crates and auxiliary bridge surfaces fit together and how the MCP server turns graph truth into agent-facing runtime behavior.
Concepts covers the underlying graph ideas such as activation, plasticity, and structural holes.
API Reference documents the current MCP surface, including underscore-based canonical tool names, guided outputs, and transport behavior.
Tutorials walks through the main workflows from first ingest to connected edit prep.
The Benchmarks page is the current product-truth layer for token proxy, false starts, guided follow-through, and recovery loops. The Changelog tracks the release history from v0.6.x through v0.8.0 and onward.