Precision Context Engine

GPS for Your Codebase.
Not Pins on a Map.

RAG gives your AI 10 random files and says "good luck." OCXP gives it a map with turn-by-turn directions. Less chat. More action.

~60%
RAG failure rate
0-3
Hops to any answer
30 sec
To full context
ocxp-engine
Infrastructure-aware impact analysis
// INFRASTRUCTURE AWARENESS

"What happens if I change this function?"

RAG finds the file. OCXP finds the file, the database it writes to, the queue it triggers, and the downstream service it would break.

RAG: File Search

Searching for "createUser"...
services/user.ts
async function createUser(data) {
  // ... function body
}

No infrastructure context.

No downstream visibility.

1 file found. 0 infrastructure context.

OCXP: Full-Stack Awareness

createUser() → DynamoDB:users-table → SQS:user-created-queue → billing-service
4 downstream impacts. 1 breaking change.
"RAG helps you write the function. OCXP prevents you from taking down production."

2,261 Resource Types

Parses Terraform, CloudFormation, and CDK. Sees every cloud resource.

Service Mesh Tracing

Follows API calls, queues, and event buses across microservices.

Blast Radius Detection

Identifies downstream consumers before you ship.

Built for platform teams running microservices at scale.
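The blast-radius idea above can be sketched as a breadth-first walk over a dependency graph. Everything here is illustrative: the edge list is hand-written to mirror the createUser() example, while the real engine derives edges from parsed Terraform/CloudFormation/CDK and traced service calls.

```typescript
// Hypothetical dependency edges: each entity maps to the infrastructure
// and services it touches downstream. Hand-written for illustration.
const edges: Record<string, string[]> = {
  "createUser()": ["dynamodb:users-table", "sqs:user-created-queue"],
  "sqs:user-created-queue": ["billing-service"],
  "billing-service": ["dynamodb:invoices-table"],
};

// Breadth-first walk collecting every downstream consumer of a change.
function blastRadius(entity: string): string[] {
  const seen = new Set<string>();
  const queue = [entity];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const dep of edges[node] ?? []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...seen];
}
```

On this toy graph, `blastRadius("createUser()")` surfaces all four downstream impacts, including the billing-service two hops away that a file search would never see.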

// PRECISION

Sniper rifle, not shotgun.

RAG retrieves 12 random chunks hoping one is right. OCXP navigates directly to the exact code in 2-3 hops.

RAG: Spray and Pray

utils · config · tests · auth · api · db · logger · cache
~12 hops
8,563 tokens · ~40% accuracy

OCXP: Precision Navigation

utils · config · tests · auth · api · db · logger · cache
2-3 hops
4,062 tokens · ~90% accuracy
Hops: ~12 → 2-3
Tokens Used: 8,563 → 4,062
Accuracy: ~40% → ~90%
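The hop comparison can be made concrete with a toy module tree: precision navigation follows known edges from a module index down to the target symbol, so the hop count is just the path length. The repo layout and `hopsTo` helper here are invented for illustration, not the engine's actual API.

```typescript
// Toy model: a codebase as a tree of modules, files, and symbols.
type TreeNode = { name: string; children?: TreeNode[] };

const repo: TreeNode = {
  name: "repo",
  children: [
    {
      name: "auth",
      children: [{ name: "session.ts", children: [{ name: "createSession" }] }],
    },
    { name: "api" },
    { name: "db" },
  ],
};

// Follow a known path from the root; each edge traversed is one hop.
function hopsTo(root: TreeNode, path: string[]): number {
  let node = root;
  let hops = 0;
  for (const step of path) {
    node = (node.children ?? []).find((c) => c.name === step)!;
    hops++;
  }
  return hops;
}
```

Navigating repo → auth → session.ts → createSession is 3 hops, regardless of how many unrelated modules (api, db, ...) exist. Chunk retrieval, by contrast, scales with the number of candidate chunks it scans.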
// CONTEXT PRECISION

Ask for a city name. Get a city name. Not the entire atlas.

Progressive zoom delivers exactly the right detail level. 10 tokens at L0. Not 8,563 scanning every file.

Zoom Level: L0 (Module Names)
Tokens: 10
Context Budget: 1%
ocxp-zoom-l0.json
{
  "modules": ["auth", "users", "payments", "api"],
  "token_count": 10
}
120x reduction from L3 to L0
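Progressive zoom can be sketched as a simple lookup table of detail levels. The L0 token count (10) and the 120x L3-to-L0 ratio follow the numbers above; the L1 and L2 figures are illustrative assumptions.

```typescript
// Hypothetical zoom-level table. L0 and the 120x L3/L0 ratio follow the
// page's quoted numbers; L1 and L2 budgets are illustrative.
const zoomLevels = {
  L0: { detail: "module names", tokens: 10 },
  L1: { detail: "signatures", tokens: 120 },
  L2: { detail: "relationships", tokens: 340 },
  L3: { detail: "full source", tokens: 1200 },
} as const;

type Zoom = keyof typeof zoomLevels;

// How many times cheaper is the coarser level?
function reductionFactor(from: Zoom, to: Zoom): number {
  return zoomLevels[from].tokens / zoomLevels[to].tokens;
}
```

An agent that only needs module names pays 10 tokens at L0 instead of the full-source budget at L3: `reductionFactor("L3", "L0")` gives the 120x figure.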
// SHARED INTELLIGENCE

Your AI gets smarter the more your team uses it.

RAG is amnesiac -- same wrong chunks tomorrow. OCXP remembers what works. Developer A finds a tricky path; Developer B's agent knows it five minutes later.

Personal Learning

Every query teaches the graph. Repeated questions get instant, precise answers.

Team Memory

Shared workspace = shared knowledge. When your teammate maps the payment flow, your agent knows it too. No Slack threads. No tribal knowledge.

Protocol, Not Tool

OCXP is a protocol for code understanding. Any MCP-compatible agent speaks it. The graph becomes the shared language for your entire engineering org.
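The shared-memory behavior can be sketched as a query cache backed by a shared workspace: the first agent pays the discovery cost, every later agent reuses the path. The `Map` store and `resolve` helper are hypothetical stand-ins for the real shared graph.

```typescript
// Hypothetical shared workspace: resolved queries are cached so a
// teammate's agent reuses the path instead of re-searching.
const workspace = new Map<string, string[]>();

function resolve(query: string, search: () => string[]): string[] {
  const cached = workspace.get(query);
  if (cached) return cached; // a teammate already mapped this path
  const path = search();     // expensive first-time discovery
  workspace.set(query, path); // everyone benefits from now on
  return path;
}
```

Developer A's agent runs the search once; Developer B's agent asking the same question five minutes later hits the cache and never searches at all.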

The Cost

Cut your AI agent's bill in half.

Precision is not just better results -- it is cheaper results. 53% fewer tokens for the same understanding.

RAG Approach

Traditional retrieval-augmented generation

8,563

tokens consumed

Blind file searching, redundant context, token bloat. Agents waste cycles re-reading irrelevant code.

OCXP Engine

Structured context delivery

4,062

tokens consumed

L0: Module Map
L1: Signatures
L2: Relationships
L3: Full Source

Structured context, precise delivery, zero waste. Agents get exactly the context they need at the right zoom level.

Token Usage Comparison

RAG Approach: 8,563 tokens
OCXP Engine: 4,062 tokens
OCXP uses 52.6% fewer tokens for the same query
53%
Fewer tokens

Average reduction in token consumption across benchmark tasks

~2x
Per-entity efficiency

More efficient context delivery per code entity compared to RAG

30 sec
Time to full context

From install to full codebase understanding in seconds, not minutes
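The headline savings figure is simple arithmetic on the two benchmark totals quoted above:

```typescript
// Benchmark totals from the comparison above.
const ragTokens = 8563;
const ocxpTokens = 4062;

// Fraction of tokens avoided by navigating instead of bulk-retrieving.
const savings = 1 - ocxpTokens / ragTokens;

// Rounded to one decimal place: 52.6%.
const percent = Math.round(savings * 1000) / 10;
```

That 52.6% reduction on this query is where the rounded "53% fewer tokens" claim comes from.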

// HOW IT WORKS

30 seconds to context. Zero Docker containers.

Competitors need 5+ services to get started. OCXP is a single binary. Install, index, ship.

1

Install in One Line

$ curl -fsSL https://get.ocxp.dev | sh

A single binary. No Docker, no databases, no configuration files. Point it at your codebase and go.

45MB. Zero Dependencies.
2

Connect Your Agent

$ ocxp-engine serve --mcp

OCXP Engine exposes an MCP-compatible server. Claude Code, Cursor, Windsurf -- any MCP client connects instantly.

MCP Compatible
3

Cut Your API Bill

agent.query("add auth to /api/users")

53% fewer tokens means your AI budget goes twice as far. Precision context, not bulk retrieval.

Free for individuals

Working with a team? Deploy the shared workspace on AWS for team-wide intelligence.

// DEVELOPER EXPERIENCE

Built by developers, for developers.

Clean APIs, zero boilerplate, and everything you need to integrate in minutes.

// Query at different zoom levels
const overview = await ocxp.query("auth module", { zoom: "L0" });
// Returns: { modules: ["auth", "oauth", "session"], tokens: 10 }

const details = await ocxp.query("auth module", { zoom: "L2" });
// Returns: function signatures, call graphs, complexity metrics
// tokens: 340 (vs 8,563 with RAG)

Get exactly the detail level your agent needs. No more, no less.

TypeScript SDK
MCP Server
CLI Tool
REST API
Infrastructure Parsing

Stop guessing.
Start knowing.

GPS for your codebase. Precision context for AI agents. Free for individuals. Shared workspaces for teams.

Open source. Apache 2.0 licensed. Built with Rust.