Introducing Canopy: Self-Hosted Code Intelligence for AI Coding Agents
Canopy is a single static binary that makes Claude Code, Cursor, and Codex CLI dramatically smarter. No cloud, no credits, no vendor lock-in.
Note: Canopy is in pre-release development. Purchase and download links aren’t live yet. Join the waitlist to be notified at launch.
This is our first public look at what we’re building. Canopy is not yet available for purchase or download — we’re targeting a public launch later this year. But the architecture is real, the code runs internally, and we want developers who work with AI coding agents every day to know it’s coming.
AI coding agents have a structural problem: they are brilliant at reasoning but nearly blind to your codebase. They guess at import paths, miss dependencies, and leave stale references behind. This is not an LLM failure — it is a context failure. Canopy fixes it.
The Problem Agents Have
Ask any AI coding agent to “refactor the authentication flow” in a medium-sized codebase and watch what happens. It reads a few files it thinks are relevant, makes changes, and hands them back. What it misses: the 14 other files that import the function you just renamed. The test suite that will break. The integration in the dashboard that nobody updated the docs for.
The agent is not dumb. It just does not have a dependency graph. It does not know what depends on what. It is pattern-matching on filenames and grep results, which is better than nothing but consistently wrong in specific, painful ways:
- Broken imports — a renamed symbol updated in 3 of 5 call sites, silent at write time, loud at runtime
- Dead exports — a removed function that three other modules still import
- Stale types — TypeScript interface changed in the source, not updated in the callers
- Circular dependencies — introduced silently during a refactor, surfacing as a cryptic error at build time
These are not edge cases. If you use an AI agent daily on a real codebase, you have hit all of them.
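The "pattern-matching on grep results" failure is easy to reproduce. Here is a toy sketch (hypothetical file names, nothing Canopy-specific) of the renamed-symbol case: the agent rewrites only the files it happened to read, and stale imports survive.

```python
# A toy five-file codebase: auth.py defines login(); five modules import it.
codebase = {"auth.py": "def login(): ..."}
for i in range(5):
    codebase[f"caller{i}.py"] = "from auth import login"

# An agent renames login -> sign_in, but only rewrites the files it
# happened to read; the rest of the repo keeps the old name.
files_the_agent_read = ["auth.py", "caller0.py", "caller1.py"]
for name in files_the_agent_read:
    codebase[name] = codebase[name].replace("login", "sign_in")

# Three stale imports survive: silent at write time, loud at runtime.
stale = sorted(n for n, src in codebase.items() if "login" in src)
print(stale)  # ['caller2.py', 'caller3.py', 'caller4.py']
```

A dependency graph turns this from a guessing game into a lookup: every importer of `login` is known before the rename starts.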
What Canopy Is
Canopy is a single Rust binary that indexes your codebase and exposes 21+ MCP tools to your AI coding agent. It understands your code at the AST level: what each file exports, what each file imports, which symbols are used where, and which dependencies form cycles. It integrates with SCIP for compiler-precise cross-language analysis, ingests test coverage data, and tracks git history to surface high-churn files.
When your AI agent connects to Canopy via MCP, it gains structured access to all of that context. Instead of reading 40 files and guessing, it calls canopy_trace_dependents and gets a precise list. Instead of hoping the refactor is clean, it calls canopy_validate and gets a GO/CAUTION/STOP assessment with specifics.
Key facts about Canopy:
- Single static binary. One file. No runtime dependencies. Ships for Linux x86_64/ARM64, macOS x86_64/ARM64, and Windows x86_64.
- 21+ MCP tools in the upcoming release. Covering workflow, search, AST, dependency graph, health checks, git history, and index management.
- Runs locally. Your code never leaves your machine. The index lives in `~/.canopy/`.
- License validation with weekly health check. Ed25519 cryptographic signature verification — no network call to activate. A weekly hash-only heartbeat validates your license server-side and enables revocation on refund/dispute. See our privacy policy for exactly what is transmitted (spoiler: just your license hash, client version, and platform — never code or queries).
- Pluggable. YAML plugin system for custom health checks and pattern rules.
- Community mode. Try Canopy’s core search on your codebase without a license. To integrate with your AI agent and unlock all 21+ tools, you’ll need a paid tier — available at launch.
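The offline activation model above is standard public-key practice. A minimal sketch using the Python `cryptography` package (our illustration only; the license payload and field names are assumptions, not Canopy's actual format): the vendor signs the license once at issue time, and the binary verifies it against an embedded public key with no network call.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the license payload once, at issue time.
vendor_key = Ed25519PrivateKey.generate()
license_blob = b'{"tier": "solo", "expires": "2027-01-01"}'  # hypothetical payload
signature = vendor_key.sign(license_blob)

# Client side: the binary ships only the public key and verifies locally.
# A valid license passes; a tampered one raises InvalidSignature.
public_key = vendor_key.public_key()
public_key.verify(signature, license_blob)  # no exception: license accepted
try:
    public_key.verify(signature, license_blob + b"x")
except InvalidSignature:
    print("tampered license rejected")
```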
Who It Is For
Canopy is built for solo developers and small teams (3-10 people) who use AI coding agents every day and care about keeping their source code off cloud infrastructure.
If you are running Claude Code, Cursor, Codex CLI, Windsurf, Zed, or anything that speaks MCP — Canopy will plug directly into your workflow. You install it once, index your repo once (canopy index . — seconds to minutes depending on repo size, see /benchmarks), and from that point your agent knows what it is working with.
You are a good fit if you:
- Find yourself cleaning up after AI agents that miss call sites
- Work on a codebase large enough that agents cannot read it all at once
- Work in a regulated environment where code cannot leave your network
- Are tired of credit-based pricing models where a large context window empties your account
You are probably not the right fit yet if you need enterprise features (SSO, audit logs, SLAs) or want a bundled AI model alongside the context engine — Canopy is a context engine, not an agent.
What Canopy Does Not Do
To be direct:
- Canopy does not ship an LLM. It gives your existing agent better context.
- Canopy does not run as a hosted service. It runs on your machine.
- Canopy does not collect telemetry or analyze your code. The only data that leaves your machine is a weekly license health check (hash only, never code — see privacy policy).
Pricing (at launch)
| Tier | Annual | Monthly | Who It Is For |
|---|---|---|---|
| Community | Free | Free | Core search, single repo, CLI only |
| Solo | $199/yr | $24/mo | 1 developer, full 21+ tool MCP integration |
| Pro | $349/yr | $39/mo | Multi-repo, CI mode, semantic vector search |
| Team | $349/user/yr | $39/user/mo | 3+ devs, shared configs, admin portal, CI cache, invite flow |
All paid tiers will start with a 14-day free trial — full features of your chosen tier, card required upfront, no charge until day 15. Cancel anytime via the customer portal.
Annual pricing saves roughly 30% over monthly; on the Solo tier, for example, $199/yr against $288 ($24 × 12) billed monthly. Monthly billing overtakes the annual price during month nine, so if you plan to use Canopy for more than eight months, annual is the better math.
What to Expect at Launch
When Canopy launches later this year, here is what the setup will look like:
- Download the binary for your platform. No account required.
- Run `canopy setup` to initialize `~/.canopy/config.toml`.
- Run `canopy index /path/to/your/repo`.
- Add Canopy to your MCP client. For Claude Code, add it to `.mcp.json`:

```json
{
  "mcpServers": {
    "canopy": {
      "command": "canopy",
      "args": ["serve", "--mcp"]
    }
  }
}
```

- Start a session and ask your agent about a refactor. Watch it call `canopy_prepare` before touching anything.
What Is Next
Canopy is nearing production readiness. The foundation is stable: the MCP tools, the indexer, the dependency graph, and the health check system are in solid shape internally.
Actively in development for launch and beyond:
- Dashboard polish — the local web UI for the dependency graph is functional; UX is getting sharper
- CI mode expansion — GitLab and Azure Pipelines are implemented; PR comment decorations for GitHub Actions are next
- Plugin marketplace — curated YAML packs for common frameworks (Next.js, Rails, Django, Cargo workspaces)
Stay in the Loop
If you are building with AI agents every day, we would like to know what problems you are hitting. The best way to follow development is GitHub Discussions or email support@canopy.ironpinelabs.com.
Join the waitlist at canopy.ironpinelabs.com — we’ll email you when Canopy is available.