Document-Grounded AI Code Review

The Code Review
Revolution Is Here.

AI code review that reads your documentation and understands your codebase. Code-backed findings from static analysis. Doc-backed findings from your policies. One-click fixes verified against your own standards. Every finding cited or killed.

// See it in action

MatrixReview Setup

// What is MatrixReview

Your docs become
your reviewer.

Every engineering team writes documentation. Security policies, architecture standards, style guides, contribution rules. MatrixReview reads all of it, then enforces it on every PR. Automatically.

Generic AI code review tools give opinions based on general best practices. MatrixReview gives cited findings. Every flag references your company's own documentation or is proven directly from your code structure.

On setup, MatrixReview clones your repo, builds a full dependency map of your source files, and discovers all your documentation in one pass. When a PR arrives, it identifies every file directly affected by your changes and pulls only the relevant docs for review. No context flooding. Focused analysis.
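That focused-context step can be pictured as a small graph lookup. This is a rough sketch, not MatrixReview's real API; `dependency_map`, `doc_index`, and the prefix-matching rule are all illustrative assumptions.

```python
# Hypothetical sketch: given the files changed in a PR and a prebuilt
# dependency map, collect every directly affected file, then pull only the
# docs indexed as relevant to those files.

def affected_files(changed, dependency_map):
    """Changed files plus every file that depends on one of them."""
    affected = set(changed)
    for src, deps in dependency_map.items():
        if any(dep in changed for dep in deps):
            affected.add(src)
    return affected

def relevant_docs(affected, doc_index):
    """Only the docs whose path patterns match an affected file."""
    return sorted({doc for f in affected
                       for pattern, doc in doc_index
                       if f.startswith(pattern)})

dependency_map = {
    "api/users.py": ["core/auth.py"],   # users endpoint imports auth
    "api/orders.py": ["core/db.py"],
}
doc_index = [("api/", "SECURITY.md"), ("core/", "ARCHITECTURE.md")]

changed = {"core/auth.py"}
files = affected_files(changed, dependency_map)   # auth.py plus api/users.py
docs = relevant_docs(files, doc_index)            # only the two matching docs
```

Only `SECURITY.md` and `ARCHITECTURE.md` enter the review context here; unrelated docs never reach the model.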

Findings come in three tiers: ⚙️ CODE-BACKED (proven from your code structure), 🔎 DOC-BACKED (cited from your documentation), and 💭 AI SUGGESTION (a model observation, always labeled as opinion). Any finding that claims code or doc backing but can't prove it gets killed before reaching your PR.

// Review Pipeline

WEBHOOK GitHub PR opened or updated. Triggers the review.
SCAN Maps your codebase, discovers docs, classifies them into five gates.
CODE Static analysis finds structural issues deterministically.
DOCS Reviews the PR against your documentation with focused context.
VERIFY Verification pipeline kills unproven findings.
CITE Matches every surviving finding to exact doc lines.
OUTPUT Traffic-light summary with cited, deduped, expandable findings.

// Review Gates

Five gates. Zero guesswork.

Every PR is reviewed across five specialized gates. Each gate pulls only the documents relevant to its category and reviews the diff against that subset.

GATE:SECURITY

Security

API keys, auth patterns, secrets in code, data exposure, injection risks. Reviewed against your security policies and incident response docs.

GATE:ARCHITECTURE

Architecture

Design patterns, module boundaries, dependency rules, API contracts. Catches architectural drift before it becomes tech debt.

GATE:LEGAL

Legal & Compliance

Licensing, CLA requirements, copyright headers, redistribution compliance. Every commit checked against your legal requirements.

GATE:STYLE

Style & Standards

Naming conventions, formatting, linting, import order, code standards. Your style guide enforced automatically. Not someone else's opinion.

GATE:ONBOARDING

Onboarding & Process

PR process, testing requirements, commit format, contribution workflow. New contributors stay in compliance from their first PR.

SYSTEM:VERIFICATION

Multi-Pass Verification Pipeline

Every finding goes through a verification pipeline before it reaches your PR. Findings are generated, then independently verified against your source documentation. Anything that contradicts or can't be proven from your docs gets killed. A dedicated citation step then matches every surviving finding to the exact filename and line range in your documentation.

If a finding can't survive verification, it never reaches your PR comment. The result: zero hallucinations shipped. Code-backed findings from static analysis skip verification entirely because they are deterministic.
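A compact sketch of the verify-then-cite flow: a finding survives only if a supporting passage can actually be located in the source docs, and the citation step records the exact file and line range. The naive keyword match here stands in for the real verifier, which is presumably an independent model pass.

```python
def verify_and_cite(finding, docs):
    """Return (finding, 'file:start-end') if a doc passage supports it, else None."""
    for path, lines in docs.items():
        hits = [i for i, line in enumerate(lines, start=1)
                if finding["rule"].lower() in line.lower()]
        if hits:
            return finding, f"{path}:{hits[0]}-{hits[-1]}"
    return None  # can't be proven from the docs: killed

docs = {"SECURITY.md": [
    "# Security policy",
    "All public-facing API routes MUST include the authMiddleware wrapper.",
]}
ok = verify_and_cite({"rule": "authMiddleware"}, docs)    # survives, cited
dead = verify_and_cite({"rule": "no tabs"}, docs)         # no support: killed
```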

SYSTEM:FRESHNESS

Auto-Updating Knowledge Base

Docs stay fresh automatically. Someone updates a security policy, merges a new architecture decision, adds a testing requirement. MatrixReview detects the change via SHA comparison before the next review and re-ingests the updated content.

No stale rules. No re-setup. The knowledge base tracks the repo. If a document can't be fetched during the freshness check, the review runs with cached docs. Stale docs are better than no review.
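A minimal sketch of that freshness check, assuming a content-hash cache: compare the stored hash of each doc against the hash of what's currently in the repo, re-ingest on mismatch, and keep the cached copy if the fetch fails. Function names are invented for illustration.

```python
import hashlib

def sha(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def refresh(cache, fetch):
    """cache: {path: (sha, content)}. fetch(path) returns fresh content or raises."""
    for path, (old_sha, old_content) in list(cache.items()):
        try:
            fresh = fetch(path)
        except OSError:
            continue                              # fetch failed: keep cached docs
        if sha(fresh) != old_sha:
            cache[path] = (sha(fresh), fresh)     # changed: re-ingest content
    return cache

cache = {"SECURITY.md": (sha("old policy"), "old policy"),
         "STYLE.md": (sha("style rules"), "style rules")}

def fetch(path):
    if path == "STYLE.md":
        raise OSError("unreachable")              # simulate a failed fetch
    return "new policy"

cache = refresh(cache, fetch)
```

The updated `SECURITY.md` is re-ingested; the unreachable `STYLE.md` falls back to its cached copy, so the review still runs.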

// Why MatrixReview

What makes us different.

Document-Grounded, Not Opinion-Based

Every finding cites your team's actual documentation. The specific document, section title, and line range. Not generic advice from a training set.

vs. competitors who give general "best practice" suggestions

Fail-Closed Architecture

If the system can't prove a finding from your documentation, the finding doesn't ship. If GitHub is unreachable, the review degrades gracefully. Never fails silently.

vs. systems that pass everything through when uncertain

Zero Configuration

Install the app. Open a PR. That's it. MatrixReview auto-discovers your docs, classifies them, and builds your review knowledge base. No YAML files. No rule authoring.

vs. tools that require hours of setup and config files

Three-Tier Confidence

Every finding is tagged ⚙️ CODE-BACKED (proven from your code structure), 🔎 DOC-BACKED (cited from your documentation), or 💭 AI SUGGESTION. Your team always knows what's deterministic, what's policy, and what's the model's opinion.

vs. tools that blend opinions with rules into a single output

// Deep Dive

How it actually works.

Under the hood, MatrixReview is a fail-closed execution system. Here's what happens from the moment you install to the moment findings land on your PR.

01 INSTALLATION &
DISCOVERY

Install once. Setup runs itself.

When you install MatrixReview on a repository, the system scans your entire codebase and builds a dependency map of every source file. It identifies entry points, security-sensitive modules, and the relationships between files. This is how code-backed findings work: structural issues are detected directly from your code, not guessed by an AI.

During the same scan, MatrixReview discovers all your documentation files and classifies them into review categories automatically. Multi-topic docs are decomposed into category-specific sections. No configuration files. No YAML. No rule authoring.

Install the GitHub App, confirm the scan results, and you're live. Under 2 minutes for a 25,000-file repo.

02 DOCUMENT
DECOMPOSITION

One doc, multiple categories. Handled automatically.

Real-world documentation doesn't follow neat categories. A CONTRIBUTING.md might contain security policies, style rules, and PR process requirements all in one file. MatrixReview handles this automatically.

Multi-topic documents are decomposed into category-specific sections with exact line ranges. The extracted content is exactly what was written, never paraphrased or regenerated. Each section is assigned to the right review gate and used as the source of truth for that category.
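A toy decomposition of a multi-topic CONTRIBUTING.md: split on headings, keep the verbatim text and exact line ranges, and route each section to a gate. The heading-to-gate keyword table is purely illustrative.

```python
GATE_BY_HEADING = {"security": "SECURITY", "style": "STYLE", "pull": "ONBOARDING"}

def decompose(lines):
    """Return (heading, start_line, end_line, gate) for each '## ' section."""
    sections, start, heading = [], None, None
    for i, line in enumerate(lines, start=1):
        if line.startswith("## "):
            if heading is not None:
                sections.append((heading, start, i - 1))
            heading, start = line[3:], i
    if heading is not None:
        sections.append((heading, start, len(lines)))
    return [(h, s, e, next((g for k, g in GATE_BY_HEADING.items()
                            if k in h.lower()), "ONBOARDING"))
            for h, s, e in sections]

doc = ["# Contributing",
       "## Security rules",
       "Never commit API keys.",
       "## Style",
       "Use snake_case.",
       "## Pull request process",
       "One reviewer minimum."]
sections = decompose(doc)
```

One file, three sections, three gates, each with the exact line range of the original text.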

03 THE REVIEW
PIPELINE

Two engines, one review. Zero hallucinations shipped.

When a PR arrives, MatrixReview runs two independent analysis engines. The code engine performs deterministic static analysis: hardcoded secrets, broken imports, security boundary crossings, circular dependencies, and entry point modifications. These findings are proven directly from your code structure with zero AI involvement.
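A minimal deterministic check in the spirit of the code engine: flag lines that assign a long string literal to a secret-looking name. A real scanner would use entropy scoring and provider-specific patterns; this single regex is only a sketch.

```python
import re

# Secret-looking name, '=', then a quoted literal of 12+ characters.
SECRET = re.compile(r'(?i)\b(api_?key|secret|token|password)\s*=\s*["\'][^"\']{12,}["\']')

def find_secrets(source: str):
    """Return (line_number, line) for every line matching the secret pattern."""
    return [(n, line.strip()) for n, line in enumerate(source.splitlines(), 1)
            if SECRET.search(line)]

code = '''
API_KEY = "sk-live-0123456789abcdef"
timeout = 30
password = os.environ["DB_PASSWORD"]   # not a literal: not flagged
'''
hits = find_secrets(code)
```

The hardcoded key is flagged with its line number; reading the same value from the environment is not, because no literal is present.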

The doc engine reviews the PR against your team's documentation across five specialized gates, each scoped to its own domain. A verification pipeline then kills any finding that can't be proven from your docs, and a citation step matches survivors to exact filenames and line ranges.

The two engines merge into a single review with three confidence tiers: code-backed, doc-backed, and AI suggestion. Cross-gate dedup consolidates duplicates. The output is a compact, expandable PR comment with a traffic light summary.
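Cross-gate dedup can be pictured as keying findings by file, line, and rule, and keeping one copy that records every gate that raised it. The key choice and finding shape here are assumptions.

```python
def dedup(findings):
    """Merge findings that share (file, line, rule), accumulating their gates."""
    merged = {}
    for f in findings:
        key = (f["file"], f["line"], f["rule"])
        if key in merged:
            merged[key]["gates"].append(f["gate"])   # duplicate: record the gate
        else:
            merged[key] = {**f, "gates": [f["gate"]]}
    return list(merged.values())

findings = [
    {"gate": "SECURITY",   "file": "api/export.py", "line": 12, "rule": "missing-auth"},
    {"gate": "ONBOARDING", "file": "api/export.py", "line": 12, "rule": "missing-auth"},
    {"gate": "STYLE",      "file": "api/export.py", "line": 3,  "rule": "import-order"},
]
unique = dedup(findings)
```

Two gates flagged the same missing-auth issue; the PR comment shows it once, attributed to both.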

04 FAIL-CLOSED
EVERYWHERE

Every decision path has a fallback.

MatrixReview is built on a fail-closed architecture. If the system can't prove a finding from your documentation or code, the finding doesn't ship. If GitHub is unreachable, the review degrades gracefully and posts an error comment rather than failing silently. If a document can't be fetched during the freshness check, the review runs with cached docs. Stale docs are better than no review.

If the code analysis engine fails, the doc engine still runs and produces a complete review. If the doc engine fails, the code engine findings still post. Multiple fallbacks at every layer. The review always fires.
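The engine-level fallback described above reduces to a simple pattern: run both engines, and if one throws, post whatever the other produced instead of failing the whole review. A sketch, with invented names:

```python
def run_review(code_engine, doc_engine):
    """Run both engines; one failing never blocks the other's findings."""
    findings, errors = [], []
    for name, engine in [("code", code_engine), ("doc", doc_engine)]:
        try:
            findings += engine()
        except Exception as exc:                  # one engine down
            errors.append(f"{name} engine unavailable: {exc}")
    return findings, errors                       # errors surface in the comment

def broken_engine():
    raise RuntimeError("timeout")                 # simulate a doc-engine failure

findings, errors = run_review(lambda: ["hardcoded secret in config.py"],
                              broken_engine)
```

The code engine's finding still posts, and the failure is reported rather than swallowed.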

05 FIX GENERATION
& DASHBOARD

Generate fixes. Verify them. Ship with confidence.

When MatrixReview flags a finding, engineers click "Generate Fix" from the PR comment. The system runs a pre-flight intent check first. If the finding is fixable, a fix is generated with full context: the diff, your documentation, and the import graph. The fix then runs back through the entire review pipeline. If it would trigger new findings or violate your policies, it gets rejected before posting. Self-proving verification.

If the finding needs a design decision instead of a code change, the system explains why and suggests next steps. If the PR intent itself violates policy, fix generation is blocked entirely. The system will not generate code that circumvents your security rules.
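The fix loop above can be sketched as: pre-flight intent check, propose a patch, then re-run the review over the patched code and reject the fix if it introduces new findings or fails to resolve the original one. All three callables are stand-ins for the real stages.

```python
def generate_fix(finding, code, *, intent_ok, propose, review):
    """Return (patched_code, status); patched_code is None when blocked/rejected."""
    if not intent_ok(finding):
        return None, "blocked: PR intent violates policy"
    patched = propose(code, finding)
    before, after = review(code), review(patched)
    if set(after) - set(before):
        return None, "rejected: fix would trigger new findings"
    if finding in after:
        return None, "rejected: fix does not resolve the finding"
    return patched, "verified"

# Toy review: the only rule is that routes must use authMiddleware.
review = lambda src: [] if "authMiddleware" in src else ["missing-auth"]

patched, status = generate_fix(
    "missing-auth",
    'app.get("/api/users/export", handler)',
    intent_ok=lambda f: True,
    propose=lambda src, f: src.replace("handler", "authMiddleware(handler)"),
    review=review,
)
```

The proposed wrapper clears the finding without creating new ones, so the fix ships as verified.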

Coming next: a full codebase intelligence dashboard. Explore your dependency map visually. Track PR history by engineer. View codebase health scores and gate compliance trends. The PR review tells you what's wrong with this PR. The dashboard tells you what's happening to your codebase.

// Fix Generation

One-click fixes.
Verified before they ship.

When MatrixReview flags a finding, engineers click "Generate Fix" directly from the PR comment. Every fix runs back through the full review pipeline before posting. If the fix would violate your policies, it gets rejected.

🔎 DOC-BACKED HIGH

Your SECURITY.md (lines 14-22) requires all API endpoints to validate authentication tokens. This PR adds a new endpoint /api/users/export without auth middleware.

SECURITY.md lines 14-22: "All public-facing API routes MUST include the authMiddleware wrapper."

// Stop Shipping Blind

Your docs already have the answers.
Start enforcing them.

Install in 30 seconds. Free during beta. First PR review lands before your coffee gets cold.