60% Token Reduction (1.2M → 480K tokens/day)
62% Fewer Greps (40 → 15 calls/hour)
~50 MB Memory (5,000-node graph)
<100ms Query Latency (activate, median)

Context Token Savings

m1nd reduces context token consumption by focusing agent attention on relevant code before it reads files. Instead of grepping everything and dumping results into context, agents query the graph first.

Daily Token Consumption
Tokens consumed by agent context per working day

                       Without    With m1nd
Context Tokens / Day     1.2M       480K
Grep Calls / Hour          40         15
"m1nd does not replace search. It focuses search."
Agents still use grep and glob. But they ask the graph where to search first, reducing wasted reads by 60%.
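The graph-first flow can be sketched as follows. m1nd's real client interface is not specified in this document, so `activate` here is a hypothetical stand-in that returns graph-ranked file paths.

```python
def activate(query: str) -> list[str]:
    """Hypothetical stand-in for m1nd.activate: paths ranked by graph relevance."""
    # A real implementation would query the in-memory graph; this stub
    # returns a fixed ranking for illustration.
    ranked = {
        "authentication": ["src/auth/session.rs", "src/auth/token.rs"],
    }
    return ranked.get(query, [])

def files_to_grep(query: str, all_files: list[str]) -> list[str]:
    """Grep only where the graph says the concept lives, not everywhere."""
    relevant = set(activate(query))
    focused = [f for f in all_files if f in relevant]
    return focused or all_files  # no graph hits: fall back to full search
```

The point is the ordering: the graph narrows the search space first, and grep still does the actual matching.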

Query Latency

Interactive queries complete in 50-150ms at p95, with the most common ones under 100ms; only a full re-ingest takes longer. The graph lives in memory, so there is no disk I/O on the hot path.

Tool        p95 Latency
activate    <100 ms
impact      <50 ms
why         <80 ms
missing     <120 ms
resonate    <150 ms
ingest      <2000 ms (full rebuild)
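One way to check latency numbers like these yourself is to time repeated calls and take the 95th percentile. In this sketch, `run_query` is a hypothetical placeholder for whatever issues the m1nd call; it is not part of any documented API.

```python
import statistics
import time

def p95_ms(run_query, n: int = 200) -> float:
    """Time `run_query` n times and return the p95 latency in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - start) * 1000)
    # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile
    return statistics.quantiles(samples, n=20)[-1]
```

Running a few hundred iterations smooths out scheduler noise; a single timed call tells you very little about tail latency.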
[Chart: Query Latency Distribution, milliseconds by tool category (p95); same figures as the table above]

Memory Footprint

The entire graph lives in memory for instant access. Rust's zero-cost abstractions keep the footprint minimal even for large codebases.

RAM usage, 5,000-node graph: ~50 MB
Memory by Graph Size
Approximate RAM usage for different codebase sizes
Graph Size          Nodes     Edges      RAM        Load Time
Small project         500     1,200      ~8 MB      <200ms
Medium project      2,000     5,000      ~25 MB     <800ms
Large project       5,000     12,000     ~50 MB     <2s
Monorepo           15,000     40,000     ~150 MB    <5s
Enterprise         50,000     120,000    ~500 MB    <15s
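The table suggests roughly linear scaling. As a back-of-envelope rule fitted to those rows (my fit, not official constants): about 8 KB per node, 1 KB per edge, plus a few MB of baseline overhead.

```python
def estimate_ram_mb(nodes: int, edges: int) -> float:
    """Rough RAM estimate fitted to the sizing table; not measured constants."""
    BASELINE_MB = 3.0   # allocator + runtime overhead (assumed)
    KB_PER_NODE = 8.0   # fitted to the table above
    KB_PER_EDGE = 1.0
    return BASELINE_MB + (nodes * KB_PER_NODE + edges * KB_PER_EDGE) / 1024

# estimate_ram_mb(5_000, 12_000) lands near the table's ~50 MB
```

Treat this as a planning heuristic only; real usage depends on identifier lengths, metadata per node, and allocator behavior.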

How m1nd Saves Tokens

The savings come from precision, not replacement. Here is the mechanism.

Without m1nd
Agent exploring "authentication" in a 200-file backend
1. grep "auth" across all files → 47 matches → ~12K tokens read into context
2. Read 8 files to understand scope → ~24K tokens
3. Realize 3 files were irrelevant → ~9K tokens wasted
4. grep again with refined terms → ~6K tokens
Total: ~51K tokens, 4+ tool calls
With m1nd
Same task, same codebase
1. m1nd.activate("authentication") → 12 ranked nodes → ~800 tokens
2. Read 5 files (the right ones) → ~15K tokens
3. m1nd.learn("correct") → graph improves → ~100 tokens
Total: ~16K tokens, 3 tool calls
69% fewer tokens.
Same outcome. Better next time.
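The totals above work out as follows, treating the ~9K wasted tokens in step 3 as additional to the reads in step 2 (which matches the stated ~51K):

```python
# Token counts are the document's own estimates, spelled out.
without   = 12_000 + 24_000 + 9_000 + 6_000  # grep + reads + wasted reads + re-grep
with_m1nd = 800 + 15_000 + 100               # activate + reads + learn
savings   = 1 - with_m1nd / without          # ≈ 0.69, i.e. ~69% fewer tokens
```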
The graph learns from feedback. Each session makes the next session more efficient. This is compounding intelligence.