Technical Overview

How ModelPassport Works

From raw CSV to certified AI — a transparent, four-layer audit pipeline designed for accountability.

AI systems are making life-changing decisions — with zero accountability

Loan approvals, job screenings, healthcare diagnoses, criminal risk scores. Algorithms trained on biased historical data are deployed at scale, affecting millions of people — often those least able to challenge the decision.

The gap? There is no mandatory pre-deployment bias check. Models ship without certification. Organizations have no standardized way to prove their AI is fair.

🏦
Financial Exclusion
Loan algorithms trained on historical data systematically deny credit to protected groups — perpetuating poverty cycles.
⚖️
Criminal Justice
Risk assessment tools like COMPAS assign higher recidivism scores to Black defendants with no transparent audit trail.
💼
Hiring Discrimination
Automated resume screeners replicate the biases of their training data, filtering out qualified candidates from minority groups.

Four Audit Layers. One Certificate.

01
Data Forensics
Layer 1
What it checks

Before a model is ever trained, the data tells a story. Layer 1 audits your dataset for representation gaps (are protected groups proportionally represented?), proxy columns (is ZIP code secretly standing in for race?), and class imbalance (is the outcome skewed toward one group?). It produces a Data Health Score from 0–100.

Why it matters

A model can only be as fair as the data it learned from. Biased data tends to produce biased models, and catching the problem before training is far cheaper than remediating a system already in production.

02
Synthetic Stress Test
Layer 2
What it checks

We create synthetic twin profiles — pairs of applicants identical in every way except protected attributes (race, gender, age). We send both twins through the model and measure how often the decision flips based solely on the protected attribute. This is the counterfactual flipping rate.

We also apply the EEOC's 80% (four-fifths) rule: if any protected group receives a positive outcome at less than 80% of the highest group's rate, that is presumptive evidence of disparate impact under US employment guidelines.

Why it matters

This test is the gold standard for detecting hidden discrimination. The model doesn't need to explicitly use race — it just needs to rely on a correlated feature. Twin testing catches it.

03
Fairness Metrics
Layer 3
What it checks

Demographic Parity: Does the model approve loans / hire / recommend treatment at equal rates across groups? A perfect model would have zero parity difference.

Equalized Odds: When someone truly deserves a positive outcome, does the model grant it at the same rate regardless of group — and are mistaken approvals balanced the same way? Strictly, equalized odds requires both true positive and false positive rates to match across groups; the false negative gap is the cost of being wrongly denied.

Disparate Impact Ratio: Under the US EEOC's four-fifths rule, a selection rate for any protected group below 80% of the highest group's rate is treated as evidence of adverse impact, and the EU AI Act requires bias monitoring for high-risk systems. We compute this ratio directly.

Why it matters

These are the metrics cited in regulatory frameworks globally — EU AI Act, US EEOC guidelines, India's DPDP Act. A ModelPassport certificate proves compliance.

04
Gemini Governance
Layer 4
What it checks

The first three layers produce numbers. Layer 4 makes them human-readable. Google Gemini AI synthesizes all findings into a plain-language narrative that a policymaker, procurement officer, or journalist can understand — without a data science degree.

It also generates a prioritized remediation checklist: specific, actionable steps ranked by severity. And it produces a severity summary — LOW / MEDIUM / HIGH / CRITICAL — for executive dashboards.

Why it matters

Audit reports that only engineers can read don't drive accountability. Gemini Governance bridges the gap between technical findings and policy action.

What is MP-2026-000001?


Every completed audit produces a uniquely identified, tamper-evident certificate. The ID encodes the platform name, year, and a sequential counter (thread-safe, no duplicates). The certificate is hashed with SHA-256 and stored in a verifiable registry.

Anyone — regulator, competitor, citizen — can verify the certificate is authentic by entering the ID at modelpassport.ai/verify. No login required. The verification returns the full audit results, scores, and hash.

Certificate ID: MP-2026-000001
Format: MP-{YEAR}-{6-digit sequential}
Hash algorithm: SHA-256
Public verification: GET /verify/{id}
Storage: JSON → Firestore (prod)

Every organization deploying AI that affects people

From government ministries to early-stage startups, if your model makes decisions about people, you need a ModelPassport.

🏛️
Government
Benefits eligibility, welfare allocation, public sector hiring — prove compliance before deployment, not after a scandal.
🏦
Banks & Finance
Credit scoring, insurance underwriting, fraud detection — demonstrate fair lending to regulators with a verifiable certificate.
🏥
Healthcare
Diagnostic prioritization, resource allocation, drug trial matching — ensure equitable care delivery across demographic groups.
💼
HR Technology
Resume screening, candidate ranking, performance evaluation — protect against discriminatory hiring and meet EEOC requirements.

Ready to certify your model?

Upload your dataset and get a full four-layer audit report in minutes. Free to start.

Run Your First Audit