AI is making decisions about your loan, your job, your healthcare. ModelPassport audits those decisions for fairness — and gives organizations the evidence to prove it.
We live in a world where an algorithm trained on historical data decides whether you get a mortgage, whether you pass a resume screen, whether you receive priority medical care. These models operate invisibly, at scale, affecting millions of lives — often those least able to challenge the outcome.
The governance gap is real: organizations want to deploy AI responsibly but lack tooling to audit it objectively. Regulators want to enforce fairness but lack the technical standards to measure it. Citizens want accountability but have no way to verify claims. ModelPassport addresses all three.
Meet the Founder
Final-year Computer Science student with a focus on Python, AI/ML, and backend systems. Built ModelPassport end-to-end — from FastAPI backend to four-layer audit pipeline to the frontend interface you're reading right now.
Passionate about the intersection of technology and social equity. Believes that the most powerful thing you can build is a system that holds other systems accountable.
Tech Stack
Real Cases. Real Harm.
These aren't hypotheticals. These are documented harms from unaudited AI systems — each of which a pre-deployment bias audit could have caught.
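Checks like these can be simple to state. Below is a minimal sketch of one common pre-deployment test, the disparate impact ratio; the function names, toy data, and the 0.8 threshold (the "four-fifths rule") are illustrative assumptions, not ModelPassport's actual pipeline.

```python
# Illustrative sketch only: a minimal disparate impact check of the kind
# a pre-deployment bias audit might run. Names, toy data, and the 0.8
# threshold are assumptions, not ModelPassport's real API.

def selection_rate(decisions):
    """Fraction of positive (approve/hire) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one.
    Values below ~0.8 are a common red flag (the four-fifths rule)."""
    low, high = sorted((selection_rate(group_a), selection_rate(group_b)))
    return low / high if high else 1.0

# Toy loan-approval outcomes (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.375 / 0.75 = 0.50
print("Audit flag:", "FAIL" if ratio < 0.8 else "PASS")
```

A real audit would compute this per protected attribute on held-out data before deployment, which is exactly the window in which the harms above were still preventable.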