How the connectome engine works internally: from MCP request to graph response, through six computation engines.
Every request flows through five layers, from the MCP protocol surface down to persistent storage.
The graph stores four types of nodes and four types of weighted edges.
Every edge carries a weight between 0.0 and 1.0. Weights are initialized from static analysis (import depth, call frequency) and adapted at runtime through Hebbian learning: when an agent confirms a connection is useful (via learn), the weight increases; when the agent marks it wrong, the weight decays.
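The increase/decay mechanics can be sketched as a pair of weight updates. The learning and decay rates below are illustrative assumptions, not m1nd's real constants:

```rust
// Sketch of a Hebbian edge-weight update. LEARN_RATE and DECAY_RATE
// are hypothetical values chosen for illustration.
const LEARN_RATE: f64 = 0.1;
const DECAY_RATE: f64 = 0.2;

/// Move a weight toward 1.0 on positive feedback, toward 0.0 on
/// negative feedback, always staying within [0.0, 1.0].
fn update_weight(weight: f64, confirmed_useful: bool) -> f64 {
    let updated = if confirmed_useful {
        weight + LEARN_RATE * (1.0 - weight) // asymptotic increase
    } else {
        weight * (1.0 - DECAY_RATE) // multiplicative decay
    };
    updated.clamp(0.0, 1.0)
}

fn main() {
    let w = 0.5;
    println!("confirmed: {:.2}", update_weight(w, true)); // 0.55
    println!("decayed:   {:.2}", update_weight(w, false)); // 0.40
}
```

The asymptotic form keeps repeated confirmations from ever pushing a weight past 1.0, while repeated negative feedback drives it toward zero without going below it.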
Each engine handles a different class of computation against the shared graph. They can be composed -- a single tool call may invoke multiple engines.
The primary query engine. Combines spreading activation (signal propagation along weighted edges) with semantic matching and XLR noise cancellation to produce ranked results.
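Spreading activation can be pictured as energy flowing out from a seed node, attenuated at each hop by the edge weight and a decay factor, and cut off below a threshold. A minimal sketch, with illustrative node names and parameters that are not the engine's real ones:

```rust
use std::collections::HashMap;

// Minimal spreading-activation sketch over a weighted adjacency list.
// Decay and threshold values here are assumptions for illustration.
fn spread(
    edges: &HashMap<&str, Vec<(&str, f64)>>,
    seed: &str,
    decay: f64,
    threshold: f64,
) -> HashMap<String, f64> {
    let mut activation: HashMap<String, f64> = HashMap::new();
    let mut frontier = vec![(seed.to_string(), 1.0_f64)];
    while let Some((node, energy)) = frontier.pop() {
        let entry = activation.entry(node.clone()).or_insert(0.0);
        if energy <= *entry {
            continue; // already reached with at least this much energy
        }
        *entry = energy;
        for (next, weight) in edges.get(node.as_str()).into_iter().flatten() {
            let passed = energy * weight * decay;
            if passed > threshold {
                frontier.push((next.to_string(), passed));
            }
        }
    }
    activation
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("auth.rs", vec![("session.rs", 0.9), ("log.rs", 0.2)]);
    edges.insert("session.rs", vec![("db.rs", 0.8)]);
    let result = spread(&edges, "auth.rs", 0.8, 0.1);
    println!("{result:?}");
}
```

Strongly weighted paths (auth.rs → session.rs → db.rs) stay above the threshold for several hops, while weak edges die out quickly, which is what makes the final ranking favor tightly coupled code.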
Tracks how the graph evolves over time. Detects co-change patterns (files that change together), builds causal chains, and applies temporal decay to reduce noise from stale connections.
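Temporal decay is commonly modeled as exponential down-weighting by age. A sketch under that assumption; the half-life is hypothetical, not the engine's actual setting:

```rust
// Illustrative temporal decay: exponentially down-weight an edge by
// the time since it was last confirmed. The half-life is an assumed
// parameter, not m1nd's real configuration.
fn temporal_decay(weight: f64, age_days: f64, half_life_days: f64) -> f64 {
    weight * 0.5_f64.powf(age_days / half_life_days)
}

fn main() {
    // An edge last confirmed 30 days ago, with a 30-day half-life,
    // contributes half its stored weight.
    println!("{:.2}", temporal_decay(0.8, 30.0, 30.0)); // 0.40
}
```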
Simulates removing nodes from the graph without modifying it. Creates a shadow copy, performs the removal, and reports disconnections, orphans, and broken dependency paths.
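The shadow-copy approach can be sketched as: clone the adjacency map, delete the node from the clone, and diff reachability before and after. Graph shape and node names below are illustrative:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Nodes reachable from `root` by breadth-first search.
fn reachable(edges: &HashMap<String, Vec<String>>, root: &str) -> HashSet<String> {
    let mut seen = HashSet::new();
    let mut queue = VecDeque::from([root.to_string()]);
    while let Some(node) = queue.pop_front() {
        if !seen.insert(node.clone()) {
            continue;
        }
        for next in edges.get(&node).into_iter().flatten() {
            queue.push_back(next.clone());
        }
    }
    seen
}

// Sketch of impact simulation: the real graph is never modified.
fn simulate_removal(
    edges: &HashMap<String, Vec<String>>,
    root: &str,
    victim: &str,
) -> Vec<String> {
    // Shadow copy: remove the victim node and all edges pointing at it.
    let mut shadow = edges.clone();
    shadow.remove(victim);
    for targets in shadow.values_mut() {
        targets.retain(|t| t != victim);
    }
    // Nodes reachable before but not after are the orphans.
    let before = reachable(edges, root);
    let after = reachable(&shadow, root);
    let mut orphans: Vec<String> = before
        .difference(&after)
        .filter(|n| n.as_str() != victim)
        .cloned()
        .collect();
    orphans.sort();
    orphans
}

fn main() {
    let edges: HashMap<String, Vec<String>> = [
        ("main".into(), vec!["auth".into(), "ui".into()]),
        ("auth".into(), vec!["session".into()]),
    ]
    .into();
    // Removing "auth" orphans "session".
    println!("{:?}", simulate_removal(&edges, "main", "auth"));
}
```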
Analyzes the structural properties of the graph topology. Finds structural holes (gaps where connections should exist), computes betweenness centrality, and identifies community boundaries.
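A structural hole, at its simplest, is a node whose neighbors are not connected to each other, so the node brokers all contact between them. A toy undirected check under that definition (names illustrative):

```rust
use std::collections::{HashMap, HashSet};

// Toy structural-hole check: `node` spans a hole when some pair of its
// neighbors has no direct edge between them. Undirected, illustrative.
fn spans_hole(adj: &HashMap<&str, HashSet<&str>>, node: &str) -> bool {
    let neighbors: Vec<&&str> = adj[node].iter().collect();
    for i in 0..neighbors.len() {
        for j in (i + 1)..neighbors.len() {
            let (a, b) = (neighbors[i], neighbors[j]);
            if !adj[*a].contains(*b) {
                return true; // a gap where a connection could exist
            }
        }
    }
    false
}

fn main() {
    let mut adj: HashMap<&str, HashSet<&str>> = HashMap::new();
    adj.insert("broker", ["a", "b"].into());
    adj.insert("a", ["broker"].into());
    adj.insert("b", ["broker"].into());
    // "a" and "b" meet only through "broker": a structural hole.
    println!("{}", spans_hole(&adj, "broker")); // true
}
```

Nodes that span many such holes are also the ones with high betweenness centrality, since shortest paths between the disconnected neighbors must pass through them.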
Sends simultaneous signals from multiple seed nodes and detects standing wave patterns -- places where signals from different sources reinforce each other. Reveals deep cross-domain relationships.
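The core idea can be reduced to a toy: activate from several seeds and flag nodes reached from more than one source, where the signals reinforce. This one-hop sketch uses invented names and omits the wave mechanics:

```rust
use std::collections::HashMap;

// Toy interference detection: one-hop activation from multiple seeds;
// nodes reached by two or more seeds are reinforcement sites. Edge
// weights and names are illustrative.
fn interference(edges: &HashMap<&str, Vec<(&str, f64)>>, seeds: &[&str]) -> Vec<String> {
    let mut sources: HashMap<&str, usize> = HashMap::new();
    for seed in seeds {
        for &(target, _weight) in edges.get(seed).into_iter().flatten() {
            *sources.entry(target).or_insert(0) += 1;
        }
    }
    let mut hits: Vec<String> = sources
        .iter()
        .filter(|(_, n)| **n >= 2)
        .map(|(node, _)| node.to_string())
        .collect();
    hits.sort();
    hits
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("billing", vec![("invoice", 0.9)]);
    edges.insert("email", vec![("invoice", 0.7), ("smtp", 0.8)]);
    // "invoice" is reached from both seeds: the signals reinforce there.
    println!("{:?}", interference(&edges, &["billing", "email"]));
}
```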
Implements the learning loop. When agents provide feedback via learn, the PlasticityEngine adjusts edge weights using Hebbian rules -- "neurons that fire together, wire together."
The Model Context Protocol (MCP) layer handles communication between AI coding agents and the m1nd binary. JSON-RPC over stdio, no network required.
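The transport shape is line-delimited JSON-RPC 2.0 exchanged over stdin/stdout. The sketch below shows only the message framing; the method name and response body are illustrative, not m1nd's actual tool schema:

```rust
// Sketch of the MCP transport: JSON-RPC 2.0 over stdio. The request
// and response bodies are illustrative placeholders.
fn respond(id: u64, result_json: &str) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{id},"result":{result_json}}}"#)
}

fn main() {
    // An agent writes a request line to the server's stdin ...
    let request = r#"{"jsonrpc":"2.0","id":7,"method":"tools/list"}"#;
    println!("-> {request}");
    // ... and reads the response line from its stdout. No sockets involved.
    println!("<- {}", respond(7, r#"{"tools":[]}"#));
}
```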
One m1nd instance serves all agents. The SharedGraph uses Rust's Arc<RwLock<Graph>> for safe concurrent access. Writes are immediately visible to all agents.
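The SharedGraph pattern is standard Rust: any number of concurrent readers, or one exclusive writer. A minimal sketch with a stand-in Graph struct (not m1nd's real type):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Stand-in for the real graph type, for illustration only.
#[derive(Default)]
struct Graph {
    edge_count: usize,
}

fn main() {
    let shared: Arc<RwLock<Graph>> = Arc::new(RwLock::new(Graph::default()));

    // A write (e.g. a learn call) takes the exclusive lock; the update
    // is visible to every other agent as soon as the lock is released.
    let writer = Arc::clone(&shared);
    let handle = thread::spawn(move || {
        writer.write().unwrap().edge_count += 1;
    });
    handle.join().unwrap();

    // Queries take the read lock, so they can run concurrently.
    println!("edges: {}", shared.read().unwrap().edge_count); // edges: 1
}
```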
The graph auto-persists to disk every 50 queries and on server shutdown. Two files are written: graph_snapshot.json (full graph state) and plasticity_state.json (learned weights). On startup, both are loaded to restore the full learned state.
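The every-50-queries trigger reduces to a counter that flushes and resets on the 50th query and again at shutdown. A sketch with the file writes stubbed out:

```rust
// Sketch of the snapshot trigger. The interval matches the docs; the
// Server struct and save logic are illustrative stand-ins.
const SAVE_INTERVAL: u32 = 50;

struct Server {
    queries_since_save: u32,
}

impl Server {
    fn save(&mut self) {
        // Real code writes graph_snapshot.json and plasticity_state.json here.
        self.queries_since_save = 0;
    }

    fn handle_query(&mut self) {
        self.queries_since_save += 1;
        if self.queries_since_save >= SAVE_INTERVAL {
            self.save();
        }
    }
}

fn main() {
    let mut server = Server { queries_since_save: 0 };
    for _ in 0..120 {
        server.handle_query();
    }
    // 120 queries: saves fire at 50 and 100, leaving 20 unsaved.
    println!("pending: {}", server.queries_since_save); // pending: 20
    server.save(); // shutdown flush
}
```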
m1nd is a Rust workspace with three crates. The binary is ~4 MB and has no runtime dependencies beyond the standard library.