TL;DR: We built one Rust scoring engine that works identically across Node, Python, and Rust tools. 33 slots define your entire project DNA. Claude, Gemini, OpenAI, and Grok now score the same file the same way. 596/596 tests. Zero parity drift.
The Problem: Context-Impairment
Most developers think their AI is "fine" because it knows the code. But partial context is a silent ROI killer.
When your AI misses your conventions, unstated goals, or architectural blind spots, it doesn't stop — it hallucinates with high confidence.
$5,460/year per developer
The cost of re-explaining your project every session.
We call this Context-Impairment. And today, the FAF-1 team is shipping the cure.
The Frontier 4 — Unified
For the first time, four major AI platforms share one scoring standard:
One project.faf file. One score. Every AI. The IANA-registered standard that ends context fragmentation.
Under the Hood
Unified Rust Brain
One scoring engine compiled to native, WASM, or embedded. 100% parity across every runtime. No more "works in Node but scores differently in Python."
33-Slot Enterprise DNA
Every project slot is explicitly defined:
Base (21 slots)
├── Project Meta (3) name, goal, main_language
├── Human Context (6) who, what, why, where, when, how
├── Frontend Stack (4) framework, css, ui_library, state
├── Backend Stack (5) backend, api, runtime, db, connection
└── Universal Stack (3) hosting, build, cicd
Enterprise (+12 slots = 33 total)
├── Infra (5) monorepo_tool, pkg_manager, workspaces...
├── App (4) admin, cache, search, storage
└── Ops (3) versioning, shared_configs, remote_cache
Three States. No Ambiguity.
Every slot is one of:
- Populated — Valid, project-specific data
- Empty — Missing or placeholder (honest zero)
- Slotignored — Not applicable to this project type
A CLI app doesn't have a frontend. That's 12 slotignored. Score: 9/9 = 100%, not 9/21 = 43%. The math is honest.
Placeholder Rejection
The engine rejects filler. "null", "unknown", "n/a", "describe your project goal" — all score as Empty. No gaming the system.
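The three-state model and placeholder rejection fit together naturally. Here is a minimal illustrative sketch, not the actual Mk4 source; the names `SlotState`, `classify`, and the exact placeholder list are assumptions:

```rust
// Illustrative sketch of the three-state slot model.
// Names are assumptions, not the real Mk4 engine API.
#[derive(Debug, PartialEq)]
enum SlotState {
    Populated(String), // valid, project-specific data
    Empty,             // missing or placeholder (honest zero)
    SlotIgnored,       // not applicable to this project type
}

// Filler values score as Empty, so they can't game the score.
const PLACEHOLDERS: &[&str] = &["", "null", "unknown", "n/a", "describe your project goal"];

fn classify(raw: Option<&str>, applicable: bool) -> SlotState {
    if !applicable {
        return SlotState::SlotIgnored;
    }
    match raw {
        Some(v) if !PLACEHOLDERS.contains(&v.trim().to_lowercase().as_str()) => {
            SlotState::Populated(v.to_string())
        }
        _ => SlotState::Empty, // missing or filler both land here
    }
}

fn main() {
    assert_eq!(classify(Some("faf-cli"), true), SlotState::Populated("faf-cli".into()));
    assert_eq!(classify(Some("N/A"), true), SlotState::Empty);
    assert_eq!(classify(Some("React"), false), SlotState::SlotIgnored);
    println!("placeholder rejection works");
}
```

The point of the sketch: "applicable" is decided before the value is even read, so a filler string can never upgrade a slot past Empty.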
The Score Formula
score = populated / (total - slotignored) × 100
That's it. Transparent. Deterministic. The same YAML scores identically whether Claude, Gemini, or Grok reads it.
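The formula translates directly to code. A minimal sketch, with the function name assumed:

```rust
// Sketch of the published formula (function name is an assumption):
// score = populated / (total - slotignored) × 100
fn score(populated: u32, total: u32, slotignored: u32) -> f64 {
    let applicable = total.saturating_sub(slotignored);
    if applicable == 0 {
        return 100.0; // everything slotignored: nothing to fill, nothing missing
    }
    f64::from(populated) / f64::from(applicable) * 100.0
}

fn main() {
    // The CLI-app example: 21 base slots, 12 slotignored, 9 populated.
    assert_eq!(score(9, 21, 12), 100.0);
    // Without slotignored, the same project would score dishonestly low.
    assert!((score(9, 21, 0) - 42.86).abs() < 0.01);
    println!("score(9, 21, 12) = {}", score(9, 21, 12));
}
```

This is why slotignored matters: the denominator shrinks to what actually applies, so a complete CLI project scores 100%, not 43%.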
Try It
npm install -g faf-cli@latest
faf auto
One command. Zero to AI-optimized. The Mk4 engine scores your project and syncs context to every AI you use.
The Numbers
- 596/596 — Tests passing (27 Mk4 engine + 569 existing)
- 33 slots — Enterprise DNA map
- 21 slots — Base tier (free forever)
- 4 platforms — Claude, Gemini, OpenAI, Grok
- 1 engine — Rust, compiled once, runs everywhere
- 91% — Token reclaim out of the gate. Measured, not claimed. Relentless pursuit of 100%
faf auto.
Why Rust?
Because scoring engines don't get to be slow, wrong, or platform-dependent.
- WASM — One compile target for browser, edge, and CLI. Same binary, every runtime.
- xAI momentum — Grok's infrastructure runs Rust. So does Cloudflare. So does the industry.
- Zero ambiguity — No garbage collector. No runtime surprises. The score is the score.
- In Rust We Trust — When the engine defines parity across four AI platforms, it needs to be correct by construction. Not by hope.
TypeScript built the ecosystem. Rust is the engine.
FAF — Format for AI Context
FAF is the specification layer for AI-readable repositories. It doesn't replace your docs. It defines what your docs can't.
src/ → implementation
package.json → runtime metadata
README.md → human documentation
project.faf → AI context
MD prose explains. FAF defines. AI consumes.
CLAUDE.md, GEMINI.md, AGENTS.md, GROK.md — they're all sync targets. FAF is the source. One file writes them all.
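The "one file writes them all" flow can be sketched as: render project.faf once, then fan the result out to every platform file. This is an illustration of the idea, not the actual faf-cli implementation; `sync` and `SYNC_TARGETS` are assumed names:

```rust
use std::fs;

// Illustrative fan-out sync: every platform file is a projection
// of the same source of truth. Not the real faf-cli code.
const SYNC_TARGETS: &[&str] = &["CLAUDE.md", "GEMINI.md", "AGENTS.md", "GROK.md"];

fn sync(rendered_context: &str) -> std::io::Result<()> {
    for target in SYNC_TARGETS {
        // Same rendered context, written to each sync target.
        fs::write(target, rendered_context)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let context = "# Project context\nname: example\ngoal: demo\n";
    sync(context)?;
    println!("synced {} targets", SYNC_TARGETS.len());
    Ok(())
}
```

Because each target is a pure projection of project.faf, the platform files can never drift from each other.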
What Changes
Before Mk4, each runtime had its own scoring logic. Small divergences crept in. A project could score 95% in one tool and 92% in another.
After Mk4: one engine, one truth. The Rust implementation compiles to WASM for browser/edge, links natively for CLI tools, and runs identically everywhere. Parity isn't aspirational — it's tested.
Context-Impairment: Cured
Define once in .faf. Score everywhere. Sync to every AI.