A coding agent where the graph is the product

Every decision, tool call, and file edit is a node you can navigate, replay, and steer. Not a chat log — a knowledge graph that grows with your project.

# Start the Graphirm server
$ graphirm serve
INFO graphirm_server: Starting on 127.0.0.1:5555

# Every message, tool call, and file edit becomes a graph node
$ curl -s -X POST localhost:5555/api/sessions \
    -d '{"name":"fix-auth-bug"}'
{"id":"3fa85f64","name":"fix-auth-bug","status":"idle"}

# Open the web UI at http://localhost:5555
# Chat pane on the left, live graph explorer on the right

Why graph-native?

Cross-session memory

Knowledge nodes persist across sessions. PageRank surfaces what matters — not just what was said last.
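
As a rough illustration of the ranking idea, here is a minimal hand-rolled PageRank over a petgraph graph; the node labels and the in-memory setup are invented for the sketch and are not graphirm's actual store.

use petgraph::graph::DiGraph;
use petgraph::Direction;

// Minimal damped PageRank with a fixed iteration count. Dangling-node mass
// is simply dropped, which is fine for a sketch but not for production.
fn page_rank(g: &DiGraph<&str, ()>, damping: f64, iters: usize) -> Vec<f64> {
    let n = g.node_count();
    if n == 0 {
        return Vec::new();
    }
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iters {
        let mut next = vec![(1.0 - damping) / n as f64; n];
        for node in g.node_indices() {
            let outs: Vec<_> = g.neighbors_directed(node, Direction::Outgoing).collect();
            if outs.is_empty() {
                continue;
            }
            let share = damping * rank[node.index()] / outs.len() as f64;
            for succ in outs {
                next[succ.index()] += share;
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    let mut g = DiGraph::new();
    let fact = g.add_node("knowledge: auth uses short-lived JWTs");
    let edit = g.add_node("edit: src/auth.rs");
    let chat = g.add_node("message: fix-auth-bug session");
    g.add_edge(edit, fact, ()); // the edit cites the fact
    g.add_edge(chat, fact, ()); // so does the conversation
    g.add_edge(chat, edit, ());
    // The knowledge node collects rank from both references and surfaces first.
    for (node, score) in g.node_indices().zip(page_rank(&g, 0.85, 50)) {
        println!("{score:.3}  {}", g[node]);
    }
}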

Context by relevance

Graph traversal over weighted edges builds the context window. A relevant node from 10 sessions ago beats whatever happened to be said most recently.
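
A sketch of the idea, assuming edge weights encode relevance and using a crude chars/4 token estimate; graphirm's real context engine is more involved.

use petgraph::graph::{DiGraph, NodeIndex};
use petgraph::visit::EdgeRef;
use std::cmp::Ordering;
use std::collections::BinaryHeap;

#[derive(PartialEq)]
struct Hop {
    score: f64,
    node: NodeIndex,
}
impl Eq for Hop {}
impl Ord for Hop {
    fn cmp(&self, other: &Self) -> Ordering {
        self.score.partial_cmp(&other.score).unwrap_or(Ordering::Equal)
    }
}
impl PartialOrd for Hop {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

// Best-first walk from the current message: follow the strongest edges first
// and stop adding a node once the token budget would be exceeded.
fn build_context(g: &DiGraph<String, f64>, start: NodeIndex, budget: usize) -> Vec<String> {
    let mut heap = BinaryHeap::new();
    let mut seen = vec![false; g.node_count()];
    let (mut used, mut out) = (0, Vec::new());
    heap.push(Hop { score: 1.0, node: start });
    while let Some(Hop { score, node }) = heap.pop() {
        if seen[node.index()] {
            continue;
        }
        seen[node.index()] = true;
        let cost = g[node].len() / 4; // tokens ~ chars / 4, a stand-in estimate
        if used + cost > budget {
            continue;
        }
        used += cost;
        out.push(g[node].clone());
        for e in g.edges(node) {
            // relevance decays multiplicatively along the path
            heap.push(Hop { score: score * e.weight(), node: e.target() });
        }
    }
    out
}

fn main() {
    let mut g = DiGraph::new();
    let now = g.add_node("user: why does login 500?".to_string());
    let old = g.add_node("knowledge (session 3): auth middleware panics on empty JWT".to_string());
    let noise = g.add_node("message: unrelated chit-chat".to_string());
    g.add_edge(now, old, 0.9); // strong relevance edge
    g.add_edge(now, noise, 0.1); // weak edge: dropped once the budget is spent
    for line in build_context(&g, now, 20) {
        println!("{line}");
    }
}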

Multi-agent coordination

Subagents write nodes into a shared graph. Any agent can traverse them. No message-passing, no shared-state bugs.
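
A toy version of that coordination model; the in-memory RwLock stands in for graphirm's SQLite-backed GraphStore, and the agent names are made up.

use petgraph::graph::DiGraph;
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let graph = Arc::new(RwLock::new(DiGraph::<String, ()>::new()));

    let handles: Vec<_> = ["planner", "coder", "reviewer"]
        .into_iter()
        .map(|agent| {
            let graph = Arc::clone(&graph);
            thread::spawn(move || {
                // Each subagent appends its result as a node; no channels,
                // and no duplicated state to keep in sync.
                graph.write().unwrap().add_node(format!("{agent}: done"));
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // Any agent can traverse everything the others wrote.
    let g = graph.read().unwrap();
    for n in g.node_indices() {
        println!("{}", g[n]);
    }
}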

Navigate your reasoning

Interactive whiteboard with React Flow. Click any node to expand it, steer from any point, and annotate the canvas.

Task DAG

Tasks form a directed acyclic graph with depends_on edges — visible, trackable, resumable across restarts.
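
In petgraph terms the structure looks roughly like this; the task names are invented, and graphirm additionally persists the DAG, which this sketch does not.

use petgraph::algo::toposort;
use petgraph::graph::DiGraph;

fn main() {
    let mut g = DiGraph::<&str, &str>::new();
    let parse = g.add_node("parse failing test output");
    let locate = g.add_node("locate bug in src/auth.rs");
    let fix = g.add_node("apply fix");
    let verify = g.add_node("re-run test suite");

    // depends_on edges point from a task to its prerequisite, so reversing
    // the topological order yields a runnable schedule (prerequisites first).
    g.add_edge(locate, parse, "depends_on");
    g.add_edge(fix, locate, "depends_on");
    g.add_edge(verify, fix, "depends_on");

    match toposort(&g, None) {
        Ok(order) => {
            for n in order.into_iter().rev() {
                println!("run: {}", g[n]);
            }
        }
        Err(cycle) => eprintln!("not a DAG: cycle at {:?}", g[cycle.node_id()]),
    }
}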

Single Rust binary

No Docker, no Python, no runtime deps. One static binary + SQLite. Runs on any server or laptop.

Architecture

graphirm/
├── crates/
│   ├── graph/           ← GraphStore (rusqlite + petgraph) — every node + edge
│   ├── llm/             ← providers: Anthropic, OpenAI, DeepSeek, Ollama, OpenRouter
│   ├── tools/           ← bash, read, write, edit, grep, find, ls, graph_query
│   ├── agent/           ← async loop, context engine, multi-agent, knowledge extraction
│   ├── server/          ← axum REST + SSE streaming API
│   └── tui/             ← ratatui terminal UI
├── web-app/             ← React + React Flow interactive whiteboard (Vite, TypeScript)
└── graphirm-vscode/     ← VS Code/Cursor extension
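
To make the crates/graph layer concrete, here is a hypothetical sketch of a rusqlite-backed store; the table names and columns are assumptions, not graphirm's actual schema.

use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    // One SQLite file is the whole persistence story: nodes and edges.
    let conn = Connection::open("graphirm.db")?;
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS nodes (
             id   INTEGER PRIMARY KEY,
             kind TEXT NOT NULL,    -- message | tool_call | edit | knowledge
             body TEXT NOT NULL
         );
         CREATE TABLE IF NOT EXISTS edges (
             src    INTEGER NOT NULL REFERENCES nodes(id),
             dst    INTEGER NOT NULL REFERENCES nodes(id),
             kind   TEXT NOT NULL,  -- e.g. depends_on, mentions
             weight REAL NOT NULL DEFAULT 1.0
         );",
    )?;
    conn.execute(
        "INSERT INTO nodes (kind, body) VALUES (?1, ?2)",
        params!["edit", "src/auth.rs: guard against empty JWT"],
    )?;
    let count: i64 = conn.query_row("SELECT COUNT(*) FROM nodes", [], |r| r.get(0))?;
    println!("{count} node(s) persisted");
    Ok(())
}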

Self-host in minutes

1. Clone and build (Rust 1.88+ required)

git clone https://github.com/graphirm/graphirm && cd graphirm && cargo build --release
2. Set your LLM API key and start the server

export GRAPHIRM_MODEL=openrouter/qwen/qwen3-coder-next && export OPENROUTER_API_KEY=sk-or-... && ./target/release/graphirm serve

Supports Anthropic, OpenAI, DeepSeek, Ollama, and OpenRouter via GRAPHIRM_MODEL.
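
For intuition, a spec like openrouter/qwen/qwen3-coder-next splits into a provider prefix and a model id. A hedged sketch of that dispatch follows; the actual parsing inside graphirm may differ.

// Split "provider/model" on the first slash; the provider list mirrors the
// ones named above, but this helper is illustrative, not graphirm's code.
fn parse_model(spec: &str) -> Option<(&str, &str)> {
    let (provider, model) = spec.split_once('/')?;
    match provider {
        "anthropic" | "openai" | "deepseek" | "ollama" | "openrouter" => Some((provider, model)),
        _ => None,
    }
}

fn main() {
    let spec = std::env::var("GRAPHIRM_MODEL")
        .unwrap_or_else(|_| "openrouter/qwen/qwen3-coder-next".into());
    match parse_model(&spec) {
        Some((provider, model)) => println!("provider={provider} model={model}"),
        None => eprintln!("unrecognized model spec: {spec}"),
    }
}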

3. Open the web UI

open http://localhost:5555

Interactive whiteboard with chat pane, graph explorer, node expansion, and canvas annotations.