Patent-Protected AI Code Review
AI code review that actually reads your documentation. Not generic best practices. Your rules. Your standards. Enforced on every pull request. With citations.
// What is MatrixReview
Every engineering team writes documentation. Security policies, architecture standards, style guides, contribution rules. MatrixReview reads all of it, then enforces it on every PR. Automatically.
Generic AI code review tools give opinions based on general best practices. MatrixReview gives document-backed findings. Every flag references your company's own documentation, not someone else's standards.
Findings are tagged 🔍 DOC-BACKED or 💭 AI SUGGESTION so your team knows exactly what's policy and what's observation. Doc-backed findings cite the specific document, section title, and line range.
When you install, MatrixReview auto-discovers your repository's documentation, classifies it into review categories, and builds a knowledge base unique to your codebase. No configuration files. No rule authoring. Just install and open a PR.
// Review Gates
Every PR is reviewed across five specialized gates. Each gate pulls only the documents relevant to its category and reviews the diff against that subset.
Security. API keys, auth patterns, secrets in code, data exposure, injection risks. Reviewed against your security policies and incident response docs.
Architecture. Design patterns, module boundaries, dependency rules, API contracts. Catches architectural drift before it becomes tech debt.
Legal. Licensing, CLA requirements, copyright headers, redistribution compliance. Every commit checked against your legal requirements.
Style. Naming conventions, formatting, linting, import order, code standards. Your style guide enforced automatically. Not someone else's opinion.
Onboarding. PR process, testing requirements, commit format, contribution workflow. New contributors stay in compliance from their first PR.
Every finding goes through a two-pass verification pipeline before it reaches the PR. Pass 1 generates findings by reviewing the diff against your documentation. Pass 2 is a separate verification model that re-reads each finding against the source document and asks: can this be proven from what's written?
If the answer is no, the finding gets killed. Findings that can't survive verification never reach your PR comment. This isn't a confidence threshold. It's a second model independently checking the first model's work.
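The two-pass shape can be sketched in a few lines. All names here are hypothetical, and the Pass 2 stand-in only checks that a finding's citation resolves to real text, where the actual system consults a separate verification model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    doc: str       # cited document path
    lines: tuple   # (start, end) line range in that document

def verify(finding, documents):
    """Pass 2 stand-in: re-read the cited span and ask whether the
    finding can be proven from what's written.  The real verifier is a
    second model; here we only check the citation resolves to text."""
    doc_lines = documents.get(finding.doc, "").splitlines()
    start, end = finding.lines
    return bool(doc_lines[start - 1:end])

def filter_findings(findings, documents):
    # Fail closed: a finding that can't survive verification is killed.
    return [f for f in findings if verify(f, documents)]
```

The point of the shape is that filtering happens before anything is posted: unverifiable findings never exist from the PR's point of view.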
Docs stay fresh automatically. Someone updates a security policy, merges a new architecture decision, adds a testing requirement. MatrixReview detects the change via SHA comparison before the next review and re-ingests the updated content.
No stale rules. No re-setup. The knowledge base tracks the repo. If a document can't be fetched during the freshness check, the review runs with cached docs. Stale docs are better than no review.
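Reduced to a sketch, the freshness check is a SHA diff with a cached fallback. The function names and cache shape below are illustrative, not MatrixReview's actual API:

```python
def refresh_knowledge_base(cache, fetch_shas, ingest):
    """cache: {path: {"sha": ..., "content": ...}} built at install time.
    fetch_shas returns the current {path: sha} map for the repo's docs,
    or raises on network failure; ingest(path) re-reads one document."""
    try:
        current = fetch_shas()
    except ConnectionError:
        return cache  # stale docs are better than no review
    for path, sha in current.items():
        entry = cache.get(path)
        if entry is None or entry["sha"] != sha:
            # SHA changed (or doc is new): re-ingest before the review
            cache[path] = {"sha": sha, "content": ingest(path)}
    return cache
```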
// Why MatrixReview
Every finding cites your team's actual documentation. The specific document, section title, and line range. Not generic advice from a training set.
vs. competitors who give general "best practice" suggestions
If the system can't prove a finding from your documentation, the finding doesn't ship. If GitHub is unreachable, the review degrades gracefully. Never fails silently.
vs. systems that pass everything through when uncertain
Install the app. Open a PR. That's it. MatrixReview auto-discovers your docs, classifies them, and builds your review knowledge base. No YAML files. No rule authoring.
vs. tools that require hours of setup and config files
Every finding is clearly tagged DOC-BACKED or AI SUGGESTION. Your team always knows what's policy enforcement and what's the model's opinion.
vs. tools that blend opinions with rules into a single output
// Intellectual Property
MatrixReview isn't just another AI wrapper. It's built on a portfolio of provisional patent filings covering the core architectures that make reliable AI code review possible. This isn't technology anyone can copy.
The traffic light system. Decomposes code quality into independent dimensions (security, architecture, style, legal, onboarding) assessed separately with configurable thresholds and worst-case aggregation rather than collapsed into a single score.
Multi-tier fallback classification that discovers repository documentation and classifies it into review gates using cascading deterministic-to-probabilistic methods with human-in-the-loop confirmation. The core of the two-minute setup.
The document decomposer. The LLM identifies WHERE sections are, returning boundaries as line ranges; code then extracts the content deterministically by line number. Extracted text is exactly what was written, never a paraphrase.
Independent verification model that re-reads each finding against source documentation to kill unproven claims, with statistical tracking of removal rates per gate to detect and surface prompt drift.
Every finding typed as DOCUMENT_BACKED or LLM_OPINION with code-enforced type restrictions. AI opinions structurally cannot be classified as policy violations. Provenance includes specific document, section, and line range.
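That restriction can be sketched as a constructor check. This is a hypothetical rendering; beyond DOCUMENT_BACKED and LLM_OPINION, the type and field names are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class FindingType(Enum):
    DOCUMENT_BACKED = "document_backed"
    LLM_OPINION = "llm_opinion"

class Severity(Enum):
    POLICY_VIOLATION = "policy_violation"
    OBSERVATION = "observation"

@dataclass
class TypedFinding:
    type: FindingType
    severity: Severity
    message: str
    citation: str = ""  # document, section title, line range

    def __post_init__(self):
        # Enforced in code, not in a prompt: an opinion can never be a
        # policy violation, and doc-backed findings must cite a source.
        if self.type is FindingType.LLM_OPINION and self.severity is Severity.POLICY_VIOLATION:
            raise ValueError("AI opinions cannot be policy violations")
        if self.type is FindingType.DOCUMENT_BACKED and not self.citation:
            raise ValueError("doc-backed findings require a citation")
```

Because the check lives in the constructor, an out-of-policy finding cannot be represented at all, regardless of what a model emits.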
Multi-mode detection with domain-adaptive rigor for systems operating on authoritative ground truth. The foundational verification architecture that powers the entire review pipeline.
System architectures that prevent execution from completing unless required structural conditions are satisfied. Results are trustworthy by construction, not validated after the fact.
Externalized authority for AI-assisted software modification. Stateless context rehydration that ensures reproducible, auditable modifications across sessions independent of AI memory.
Cascading gate architecture ensuring AI outputs pass through multiple verification stages before reaching users, with fail-closed behavior at each stage.
Provisional patents filed across AI safety, deterministic verification, and autonomous systems.
// Deep Dive
Under the hood, MatrixReview is a fail-closed execution system. Here's what happens from the moment you install to the moment findings land on your PR.
When you install MatrixReview on a repository, the discovery module scans your entire repo tree in a single API call. It identifies documentation files (markdown, rst, txt) and classifies each one into review gates using a three-step pipeline.
Step 1: Filename heuristics match known patterns (~80% accuracy). Step 2: LLM content analysis reads the actual document (~95% accuracy). Step 3: You confirm the classifications in a visual UI (100% accuracy). The system is honest about what it knows and what it's guessing.
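The cascade might be sketched like this, with the LLM and human steps stubbed as callables. The heuristic patterns are illustrative, not the product's actual rule set:

```python
import re

HEURISTICS = [  # Step 1: deterministic filename patterns
    (r"security", "security"),
    (r"architecture|adr", "architecture"),
    (r"license|cla|copyright", "legal"),
    (r"style|lint", "style"),
    (r"contributing|pull_request", "onboarding"),
]

def classify(path, content, llm_classify, confirm):
    for pattern, gate in HEURISTICS:
        if re.search(pattern, path, re.IGNORECASE):
            break  # filename heuristic matched
    else:
        # Step 2: LLM reads the document content (probabilistic)
        gate = llm_classify(content)
    # Step 3: human confirmation has the final word
    return confirm(path, gate)
```

Each tier only runs when the cheaper one above it can't decide, and the human always gets the last word.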
The discovery module has its own fallback chain: full tree scan via the git/trees API first (single request, fastest), targeted path checks via the contents API if the tree is truncated, and an empty-with-flag return if the API is down entirely.
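As a sketch, the fallback chain looks like the following. The helper names and the list of well-known paths are assumptions for illustration:

```python
def discover_docs(fetch_tree, path_exists,
                  known_paths=("README.md", "CONTRIBUTING.md", "SECURITY.md")):
    """Tier 1: one git/trees call for the whole repo.  Tier 2: targeted
    contents-API checks if the tree came back truncated.  Tier 3: empty
    result with a degraded flag if the API is down entirely."""
    try:
        paths, truncated = fetch_tree()
    except ConnectionError:
        return [], True  # degraded: caller knows discovery found nothing
    if not truncated:
        return [p for p in paths if p.endswith((".md", ".rst", ".txt"))], False
    # Truncated tree: probe well-known paths individually
    return [p for p in known_paths if path_exists(p)], False
```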
Real-world documentation doesn't follow neat categories. A CONTRIBUTING.md might contain security policies, style rules, and PR process requirements all in one file. The decomposer handles this.
Step 1: LLM identifies distinct topical sections with line ranges. Step 2: Content is extracted using those line ranges. Deterministic extraction, not LLM regeneration. The AI identifies where sections are; code pulls the content by line number. This means the extracted text is exactly what was written, never a paraphrase or hallucination.
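The deterministic half of that split is just list slicing. A minimal sketch, assuming the LLM has already returned `(title, start, end)` tuples:

```python
def extract_sections(document, sections):
    """sections: [(title, start, end)] 1-based line ranges as identified
    by the LLM.  Extraction is plain slicing, so every section is
    verbatim source text, never a model's regeneration of it."""
    lines = document.splitlines()
    return {title: "\n".join(lines[start - 1:end])
            for title, start, end in sections}
```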
When a PR is opened, five gate reviews run in parallel. Each gate loads only the documents relevant to its category: Security doesn't see your style guide; Architecture doesn't see your CLA requirements. This focused context produces more accurate findings.
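A sketch of the fan-out, with the per-gate review stubbed as a callable (the gate runner's real signature is an assumption):

```python
from concurrent.futures import ThreadPoolExecutor

GATES = ("security", "architecture", "legal", "style", "onboarding")

def review_pr(diff, knowledge_base, run_gate):
    """knowledge_base: {gate: [docs]}.  Each gate review receives only
    the documents classified into its own category."""
    with ThreadPoolExecutor(max_workers=len(GATES)) as pool:
        futures = {gate: pool.submit(run_gate, gate, diff,
                                     knowledge_base.get(gate, []))
                   for gate in GATES}
        return {gate: f.result() for gate, f in futures.items()}
```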
Pass 1 generates structured findings with citations. Pass 2 is a completely separate verification model that re-reads every finding against the source document. If a finding can't be proven from what's written, it gets killed. Pass 2 also tracks removal rates per gate. If a gate's findings are getting killed at a high rate, it flags the prompt for tuning. The system monitors its own accuracy.
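The self-monitoring piece is a per-gate tally. A minimal sketch, with the 50% threshold chosen arbitrarily for illustration:

```python
class RemovalStats:
    """Per-gate count of findings generated by Pass 1 vs killed by
    Pass 2.  A sustained high kill rate suggests the gate's prompt is
    drifting and should be flagged for tuning."""
    def __init__(self, drift_threshold=0.5):
        self.threshold = drift_threshold
        self.generated = {}
        self.removed = {}

    def record(self, gate, generated, removed):
        self.generated[gate] = self.generated.get(gate, 0) + generated
        self.removed[gate] = self.removed.get(gate, 0) + removed

    def drifting_gates(self):
        return sorted(
            gate for gate, total in self.generated.items()
            if total and self.removed.get(gate, 0) / total > self.threshold
        )
```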
The output is a traffic light: RED (blocking issues found), YELLOW (fixable issues), or GREEN (ready to merge). Every finding includes the source document, section, and whether it's doc-backed or an AI observation.
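Worst-case aggregation, as opposed to a blended average, is one line once the lights are ordered. A sketch:

```python
RANK = {"GREEN": 0, "YELLOW": 1, "RED": 2}

def overall_light(gate_lights):
    """The PR's light is the worst of any gate's light, never an
    average that lets one clean gate mask another's blocker."""
    if not gate_lights:
        return "GREEN"
    return max(gate_lights.values(), key=RANK.__getitem__)
```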
MatrixReview is built on a fail-closed architecture. If the system can't prove a finding from your documentation, the finding doesn't ship. If GitHub is unreachable, the review degrades gracefully and posts an error comment rather than failing silently. If a document can't be fetched during the freshness check, the review runs with cached docs. Stale docs are better than no review.
Structured logging with JSON output and request IDs runs through the entire pipeline. Every scan, classification, decomposition, review, and verification step is traceable. When something goes wrong, you can follow the exact path of execution. When something goes right, you can prove it.
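The logging shape can be sketched as one request ID threaded through every stage, each emitting a single JSON line (field names here are assumptions):

```python
import json
import time
import uuid

def make_pipeline_logger(emit=print):
    """One request ID per review; every stage logs a JSON line carrying
    that ID, so the full execution path can be reconstructed later."""
    request_id = str(uuid.uuid4())
    def log(stage, **fields):
        emit(json.dumps({"ts": time.time(),
                         "request_id": request_id,
                         "stage": stage, **fields}))
    return log
```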
Tier 1 is everything above. Document-grounded PR review with five gates and two-pass verification. Tier 2 adds full codebase analysis on setup. When you install, MatrixReview scores your entire repository across all five gates with a baseline strength rating for each.
A historical dashboard tracks how each gate trends over time. Are your security practices improving since last month? Is architectural drift creeping in? Are new contributors following the PR process? Tier 1 tells you what's wrong with this PR. Tier 2 tells you what's happening to your codebase.
// Stop Shipping Blind
Install in 30 seconds. Free during beta. First PR review lands before your coffee gets cold.