Our Story

About ModelPassport

Built for the Google Solution Challenge 2026. Designed to make AI accountability real, measurable, and verifiable.

AI is making decisions about your loan, your job, your healthcare. ModelPassport ensures those decisions are fair — and that organizations can prove it.

We live in a world where an algorithm trained on historical data decides whether you get a mortgage, whether you pass a resume screen, whether you receive priority medical care. These models operate invisibly, at scale, affecting millions of lives — often those least able to challenge the outcome.

The governance gap is real: organizations want to deploy AI responsibly but lack tooling to audit it objectively. Regulators want to enforce fairness but lack the technical standards to measure it. Citizens want accountability but have no way to verify claims. ModelPassport addresses all three.

🏆   Built for Google Solution Challenge 2026 — Unbiased AI Decision Theme

Meet the Founder

Shivam Kumar S
Founder & Developer

Final-year Computer Science student with a focus on Python, AI/ML, and backend systems. Built ModelPassport end-to-end, from the FastAPI backend to the four-layer audit pipeline to the frontend interface you're reading right now.

Passionate about the intersection of technology and social equity. Believes that the most powerful thing you can build is a system that holds other systems accountable.

Tech Stack

🐍 Python: Core language
FastAPI: Backend API
Google Gemini AI: Governance layer
🔬 scikit-learn: Model training & eval
⚖️ fairlearn: Fairness metrics
☁️ Google Cloud Run: Deployment

Real Cases. Real Harm.

These aren't hypotheticals. These are documented harms from unaudited AI systems — each of which a pre-deployment bias audit could have caught.

🤖
Amazon Hiring Algorithm (2018)
Amazon's AI resume screener, trained on historical hiring data, systematically penalized résumés that included the word "women's" and downgraded graduates of all-women's colleges — the project was scrapped after the bias was discovered. A fairness audit before deployment would have flagged gender as a significant proxy variable.
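
One common way to run that kind of check is to try to predict the protected attribute from the model's own input features: if gender is recoverable from the inputs, the features act as a proxy even after the gender column is dropped. Below is a minimal sketch using scikit-learn on synthetic data; the feature layout, the leakage in feature 2, and the 0.7 threshold are illustrative assumptions, not ModelPassport's actual pipeline.

```python
# Proxy-variable probe (illustrative sketch, synthetic data).
# Idea: if a classifier can recover gender from the resume features,
# those features leak gender and the downstream model can discriminate
# even without a gender column.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

X = rng.normal(size=(1000, 5))  # stand-in resume features
# Synthetic leakage: feature 2 correlates strongly with gender.
gender = (X[:, 2] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# Cross-validated AUC of predicting the protected attribute itself.
auc = cross_val_score(LogisticRegression(), X, gender,
                      cv=5, scoring="roc_auc").mean()
print(f"gender-from-features AUC: {auc:.2f}")

if auc > 0.7:  # illustrative threshold, not a regulatory standard
    print("WARNING: inputs act as a gender proxy; audit feature importances.")
```
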
🇳🇱
Dutch Childcare Benefit Scandal (2021)
The Dutch tax authority's fraud detection algorithm flagged families with dual nationalities at disproportionate rates, leading to 26,000 families being wrongly accused of fraud and facing financial ruin. A demographic parity check would have surfaced the nationality-based disparate impact immediately.
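
A minimal sketch of that check with fairlearn (already in the stack above): compare flag rates across groups and compute the demographic parity difference. All numbers here are synthetic stand-ins, and the 0.1 alert threshold is an illustrative assumption.

```python
# Demographic parity check (illustrative sketch, synthetic data).
import numpy as np
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)

rng = np.random.default_rng(1)
n = 10_000

# Synthetic stand-ins: a nationality flag and a biased fraud model
# that flags dual nationals far more often than everyone else.
dual_nationality = rng.random(n) < 0.15
flagged = (rng.random(n) < np.where(dual_nationality, 0.30, 0.05)).astype(int)
y_true = (rng.random(n) < 0.05).astype(int)  # true fraud, independent of group

# Flag rate per group.
mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=flagged,
                 sensitive_features=dual_nationality)
print(mf.by_group)

dpd = demographic_parity_difference(y_true, flagged,
                                    sensitive_features=dual_nationality)
print(f"demographic parity difference: {dpd:.2f}")  # ~0.25; >0.1 is a red flag
```
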
⚖️
COMPAS Recidivism Algorithm (2016)
ProPublica's investigation found that COMPAS assigned higher recidivism risk scores to Black defendants compared to white defendants with the same criminal history — false positives for Black defendants were nearly twice the rate for white defendants. An equalized odds audit would have detected this disparity before deployment.
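
fairlearn expresses that audit directly as an equalized odds difference, which takes the larger of the gaps in true positive rate and false positive rate between groups. The sketch below fabricates predictions shaped like ProPublica's reported false positive rates (roughly 45% vs 23%); it is illustrative only, not the COMPAS data.

```python
# Equalized odds audit (illustrative sketch, synthetic data).
import numpy as np
from fairlearn.metrics import (MetricFrame, false_positive_rate,
                               equalized_odds_difference)

rng = np.random.default_rng(2)
n = 10_000
group = rng.choice(["white", "Black"], size=n)
y_true = (rng.random(n) < 0.35).astype(int)  # same base rate in both groups

# Biased predictor: equal true positive rate, but non-recidivists in one
# group are falsely flagged at roughly twice the rate of the other.
fpr = np.where(group == "Black", 0.45, 0.23)
y_pred = np.where(y_true == 1,
                  rng.random(n) < 0.65,  # hits among actual recidivists
                  rng.random(n) < fpr).astype(int)

mf = MetricFrame(metrics=false_positive_rate, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)  # per-group false positive rate

eod = equalized_odds_difference(y_true, y_pred, sensitive_features=group)
print(f"equalized odds difference: {eod:.2f}")  # ~0.22; unbiased models sit near 0
```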