// PRECISION CONTEXT ENGINE

GPS for Your Codebase. Not Pins on a Map.

OCXP Engine builds an infrastructure-aware graph of your entire codebase, then gives your AI agent turn-by-turn directions instead of random file fragments.

// THE PROBLEM

RAG gives your AI a phone book. You need a GPS.

Your AI agent asks "how does authentication work?" RAG returns 10 file chunks based on keyword similarity. Maybe some are relevant. Maybe they're from a deprecated module. Maybe they miss the database connection that's the actual bottleneck.

~40% RAG failure rate
~12 hops per query
8,563 tokens wasted
// THE SOLUTION

OCXP maps your infrastructure. Then navigates it.

OCXP Engine indexes your project and builds a graph that understands not just your code, but your databases, queues, API endpoints, and service connections. When your AI asks a question, it gets a direct route to the answer — including every downstream system that could be affected.
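The traversal idea can be sketched with a toy graph and a breadth-first walk. The node names below mirror the examples on this page, but the data structure is an illustration, not OCXP's internal model.

```typescript
// Illustrative only: a toy infrastructure graph and a breadth-first walk
// from one function to every downstream system it can affect.
type Graph = Record<string, string[]>;

const graph: Graph = {
  "userService.createUser": ["DynamoDB:users-table", "SQS:user-created-queue"],
  "SQS:user-created-queue": ["billing-service", "notification-service"],
  "DynamoDB:users-table": [],
  "billing-service": [],
  "notification-service": [],
};

function downstream(g: Graph, start: string): string[] {
  const seen = new Set<string>();
  const queue = [...(g[start] ?? [])];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (seen.has(node)) continue;   // visit each node once
    seen.add(node);
    queue.push(...(g[node] ?? [])); // keep following edges downstream
  }
  return [...seen];
}

console.log(downstream(graph, "userService.createUser"));
// → all four systems a createUser change can touch
```

Because the walk follows real edges (writes, triggers, consumes) rather than keyword similarity, everything it returns is on the actual route from the function to its downstream systems.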

Your Codebase (source files, IaC configs, API routes, DB schemas, queue configs)
→ OCXP Engine (2,261 types: functions, databases, queues, services, endpoints)
→ Your AI Agent

"How does auth work?"

2-3 hops
340 tokens
~90% accuracy
// CAPABILITIES

What makes OCXP different.

Not a better search engine. A fundamentally different approach to AI context.

Infrastructure Awareness

Traces function calls through databases, message queues, and downstream services. When you change userService.createUser, OCXP knows it writes to DynamoDB, triggers an SQS queue, and affects billing-service.

Progressive Zoom

Five detail levels from L0 (module names, 10 tokens) to L4 (full source code). Your AI agent requests exactly the depth it needs for the current task. No more dumping entire files.

Team Intelligence

The graph learns from every query. Common patterns surface faster. Team knowledge persists across sessions. Your AI gets smarter the more your team uses it.

Task-Aware Routing

"Add rate limiting to POST /api/users" returns only the route handler, existing rate limiter, and config file. Sniper rifle precision instead of shotgun scatter.
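As a rough illustration of the routing idea (the file names here are invented, and OCXP's real ranking is graph traversal rather than substring matching):

```typescript
// Toy task-aware routing: keep only the artifacts relevant to the task.
// File names are hypothetical examples, not a real project layout.
const artifacts = [
  "routes/users.post.ts",
  "middleware/rateLimiter.ts",
  "config/rate-limits.json",
  "services/billing.ts",
  "models/invoice.ts",
];

function route(task: string, files: string[]): string[] {
  const words = task.toLowerCase().match(/[a-z]+/g) ?? [];
  // Keep a file only if it relates to some word in the task.
  return files.filter((f) => words.some((w) => f.toLowerCase().includes(w)));
}

console.log(route("Add rate limiting to POST /api/users", artifacts));
// → the route handler, the rate limiter, and its config; billing stays out
```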

Blast Radius Detection

Before you ship, know exactly what a change will affect. Function-level impact analysis that traces through your entire infrastructure stack.

53% Fewer Tokens

Structured context blocks instead of scattered file chunks. 4,062 tokens instead of 8,563. Cut your AI agent token costs roughly in half while raising accuracy from about 60% to about 90%.
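The savings figure follows directly from the two token counts quoted above:

```typescript
// Arithmetic behind the headline: 4,062 tokens per query instead of 8,563.
const ragTokens = 8563;
const ocxpTokens = 4062;

const saved = 1 - ocxpTokens / ragTokens; // ≈ 0.526
console.log(`${Math.round(saved * 100)}% fewer tokens`); // prints "53% fewer tokens"
```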

// IN ACTION

See the difference.

Impact Analysis

blast-radius.json
$ ocxp-engine impact "userService.createUser"

{
  "function": "userService.createUser",
  "writes_to": "DynamoDB:users-table",
  "triggers": "SQS:user-created-queue",
  "consumers": ["billing-service", "notification-service"],
  "warning": "Schema change breaks billing-service"
}

One command. Full downstream visibility. No guessing.
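One way to put that report to work, sketched under the assumption that the JSON above is parsed as-is (the CI gating policy itself is an invented example, not an OCXP feature):

```typescript
// Gate a CI step on the impact report: fail fast when the blast radius
// carries a warning. The report shape mirrors the example output above.
interface ImpactReport {
  function: string;
  writes_to: string;
  triggers: string;
  consumers: string[];
  warning?: string;
}

function gate(report: ImpactReport): { ok: boolean; reason: string } {
  if (report.warning) {
    return { ok: false, reason: `${report.function}: ${report.warning}` };
  }
  return { ok: true, reason: `${report.consumers.length} consumers unaffected` };
}

const report: ImpactReport = {
  function: "userService.createUser",
  writes_to: "DynamoDB:users-table",
  triggers: "SQS:user-created-queue",
  consumers: ["billing-service", "notification-service"],
  warning: "Schema change breaks billing-service",
};

console.log(gate(report)); // ok: false, because billing-service would break
```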

Progressive Zoom

zoom-levels.ts
// L0: Module overview (10 tokens)
{ "modules": ["auth", "oauth", "session"] }

// L1: Entry points and dependencies (85 tokens)
{ "endpoints": ["POST /login", "POST /refresh"],
  "deps": ["bcrypt", "jsonwebtoken", "redis"] }

// L2: Function signatures and call graphs (340 tokens)
{ "functions": [...], "call_graph": [...],
  "complexity": { "cyclomatic": 12 } }

// L4: Full source with line-by-line context
// Only when your agent actually needs it

From 10 tokens to full source. Your agent picks the right level.

// HEAD TO HEAD

RAG vs OCXP Engine

Same question. Fundamentally different approach.

                         | RAG                                | OCXP Engine
How it finds context     | Keyword search across file chunks  | Graph traversal across infrastructure
What it returns          | ~10 random file fragments          | Precise module with dependencies
Infrastructure awareness | None, files only                   | DB, queues, services, endpoints
Accuracy                 | ~60% (40% failure rate)            | ~90% precision
Token usage              | ~8,563 tokens per query            | ~4,062 tokens per query
Hops to answer           | ~12 hops (scattered)               | 2-3 hops (direct)
Learning                 | Stateless, starts fresh every time | Learns from team usage patterns
Detail control           | All or nothing                     | Progressive zoom (L0-L4)
// HOW IT WORKS

Three steps. Zero configuration.

01

Index

OCXP parses your source code, IaC configs, API routes, database schemas, and queue configurations. Builds a complete infrastructure graph in under 30 seconds.

02

Query

Your AI agent asks a question via MCP. OCXP navigates the graph, finds the precise context, and returns it at the right zoom level. 2-3 hops, not 12.
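Under the hood, an MCP tool call is a JSON-RPC 2.0 request. The `tools/call` method and `params` shape come from the MCP spec; the tool name `ocxp_query` and its arguments are hypothetical placeholders, so check the server's tool list for the real names.

```typescript
// Shape of a single query hop over MCP (JSON-RPC 2.0).
// "ocxp_query" and its arguments are illustrative placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "ocxp_query",
    arguments: { question: "How does auth work?", zoom: "L2" },
  },
};

console.log(JSON.stringify(request, null, 2));
```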

03

Learn

Every query refines the graph. Common patterns surface faster. Team knowledge compounds. The more you use it, the smarter it gets.
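A toy version of the idea, counting path usage so frequent routes can be ranked first (this illustrates the principle, not OCXP's actual learning mechanism):

```typescript
// Count how often each path through the graph answers a query,
// so frequently used routes surface first on later queries.
const pathHits = new Map<string, number>();

function recordQuery(path: string[]): void {
  const key = path.join(" -> ");
  pathHits.set(key, (pathHits.get(key) ?? 0) + 1);
}

recordQuery(["auth", "session", "redis"]);
recordQuery(["auth", "session", "redis"]);
recordQuery(["billing", "invoice"]);

// Rank paths by how often the team has used them.
const ranked = [...pathHits.entries()].sort((a, b) => b[1] - a[1]);
console.log(ranked[0]); // the auth path, with 2 hits
```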

// MCP NATIVE

MCP is your AI's hands. OCXP is its memory.

MCP (Model Context Protocol)

  • Runtime tool execution
  • Gives AI the ability to take actions
  • File operations, API calls, shell commands
  • Stateless per session

OCXP Engine (via MCP)

  • Precision context delivery
  • Gives AI the knowledge to decide what to do
  • Infrastructure graph, impact analysis, zoom levels
  • Learns and improves over time

OCXP runs as an MCP server. Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client. No plugins, no extensions — just add the config and go.
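For clients that read a JSON MCP configuration (Claude Code and Cursor both do), registration might look like the sketch below; the `serve` subcommand is a placeholder, not a documented flag.

```json
{
  "mcpServers": {
    "ocxp": {
      "command": "ocxp-engine",
      "args": ["serve"]
    }
  }
}
```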

// OPEN SOURCE

No vendor lock-in. No black boxes.

Apache 2.0 license
Written in Rust
Zero dependencies
Single-file binary

View Source (soon)

Every line of code is on GitHub. Read it, fork it, contribute.

Report Issues (soon)

Found a bug? Open an issue. We track everything publicly.

Contribute (soon)

PRs welcome. Check the contributing guide to get started.

Stop guessing. Start knowing.

Install the CLI, index your project, and give your AI agent GPS-grade context in 30 seconds.

Free forever for individual developers. Open source. Apache 2.0 licensed.