Featured Project
GPU-Accelerated OCR & RAG Architecture
Designed and deployed a GPU-accelerated OCR and RAG microservice architecture for a fraud and risk intelligence platform. The system includes PaddleOCR deployment, containerized inference endpoints, vector database integration, and structured document-to-decision traceability, enabling word-level explainability in regulated environments.
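A minimal sketch of the word-level extraction step behind that traceability, assuming the PaddleOCR 2.x Python API and FastAPI; the route name, response shape, and GPU setup are illustrative, not the production code.

```python
# Illustrative OCR inference endpoint, not the production service.
# Assumes the PaddleOCR 2.x API and FastAPI; route and response shape are hypothetical.
import numpy as np
import cv2
from fastapi import FastAPI, File, UploadFile
from paddleocr import PaddleOCR

app = FastAPI()
# Load the model once at startup; GPU use depends on the installed Paddle build
# and container image (assumption: a CUDA-enabled paddlepaddle-gpu install).
ocr = PaddleOCR(use_angle_cls=True, lang="en")

@app.post("/ocr")
async def extract_words(file: UploadFile = File(...)):
    # Decode the upload into an array PaddleOCR can consume.
    data = np.frombuffer(await file.read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)

    # PaddleOCR 2.x returns, per image, a list of [bounding_box, (text, confidence)].
    result = ocr.ocr(image, cls=True)

    # Keep the box with each recognized word so a downstream decision can be
    # traced back to the exact text on the page.
    words = [
        {"text": text, "confidence": float(conf), "box": box}
        for box, (text, conf) in (result[0] or [])
    ]
    return {"words": words}
```

Returning the bounding box alongside each recognized word is what lets a downstream decision point back to the exact text on the page.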
Selected Work
First Production AI Deployment
Built the initial production-grade AI platform for an early-stage fraud and risk intelligence startup — including modular backend architecture, AI inference pipelines, containerized GPU services, and API endpoints for document processing and scoring.
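As a rough illustration of the modular pipeline idea, the sketch below composes independent stages behind one entry point; the stage names and interfaces are hypothetical and stand in for the real OCR, embedding, and scoring services.

```python
# Illustrative sketch of a modular inference pipeline; stage names and
# interfaces are assumptions, not the platform's actual code.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Pipeline:
    """Composes independent stages (OCR, embedding, scoring, ...) so each can be
    swapped, tested, and deployed as its own containerized service."""
    stages: List[Callable[[Dict[str, Any]], Dict[str, Any]]] = field(default_factory=list)

    def add(self, stage: Callable[[Dict[str, Any]], Dict[str, Any]]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, document: Dict[str, Any]) -> Dict[str, Any]:
        # Each stage receives the accumulated context and returns new fields,
        # keeping intermediate artifacts available for auditing.
        context = dict(document)
        for stage in self.stages:
            context.update(stage(context))
        return context


# Hypothetical stages standing in for the real document-processing services.
def ocr_stage(ctx):
    return {"words": ["example", "invoice", "total"]}

def scoring_stage(ctx):
    return {"risk_score": 0.12, "evidence": ctx.get("words", [])}

pipeline = Pipeline().add(ocr_stage).add(scoring_stage)
print(pipeline.run({"document_id": "doc-001"}))
```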
Risk Monitoring & Scoring System
Designed an interpretable risk scoring framework combining public and private financial data sources, with calibrated thresholds, data ingestion and monitoring workflows, and screening tools, prioritizing false negative minimization and regulatory alignment.
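A small sketch of how a threshold can be calibrated against a false-negative budget on validation data; the recall target, helper name, and sample arrays are assumptions for illustration.

```python
# Illustrative threshold calibration: choose the highest score threshold whose
# recall on validation data still meets a target, so false negatives stay within
# budget. The 0.99 target and the sample arrays are assumptions.
import numpy as np
from sklearn.metrics import precision_recall_curve

def pick_threshold(y_true: np.ndarray, y_score: np.ndarray, min_recall: float = 0.99) -> float:
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision_recall_curve returns one more point than thresholds; drop the
    # final (recall == 0) point so the arrays align with thresholds.
    recall = recall[:-1]
    # Among thresholds that satisfy the recall floor, take the strictest one,
    # which reduces false positives without exceeding the false negative budget.
    valid = thresholds[recall >= min_recall]
    return float(valid.max()) if valid.size else float(thresholds.min())

# Hypothetical validation labels and model scores.
y_val = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.3, 0.35, 0.8, 0.2, 0.9, 0.4, 0.7])
threshold = pick_threshold(y_val, scores, min_recall=0.99)
print(f"flag cases with score >= {threshold:.2f}")
```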
How I Think About AI Systems
Design for consequences
I design AI systems for environments where mistakes have consequences. In high-stakes contexts, not all errors are equal.
Prioritize false negatives
When building scoring or screening systems, I pay special attention to false negatives — because missing a critical signal can be more costly than flagging a benign case.
Systems must be accountable
Performance metrics alone are not enough. A system must be interpretable, calibrated, measurable, auditable, and adaptable.
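As one hedged example of what "calibrated and measurable" can mean in practice, the sketch below computes a simple expected calibration error on held-out predictions; the bin count and the sample data are illustrative assumptions.

```python
# Illustrative calibration check: compare predicted risk probabilities to
# observed outcome rates in score bins (a basic expected calibration error).
# The bin count and sample data are assumptions.
import numpy as np

def expected_calibration_error(y_true: np.ndarray, y_prob: np.ndarray, n_bins: int = 10) -> float:
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (y_prob >= lo) & (y_prob <= hi) if hi == 1.0 else (y_prob >= lo) & (y_prob < hi)
        if not mask.any():
            continue
        # Gap between mean predicted probability and observed event rate in the
        # bin, weighted by how many predictions fall in it.
        gap = abs(y_prob[mask].mean() - y_true[mask].mean())
        ece += (mask.sum() / len(y_prob)) * gap
    return float(ece)

# Hypothetical held-out labels and predicted probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.05, 0.2, 0.7, 0.9, 0.4, 0.6])
print(f"ECE: {expected_calibration_error(y_true, y_prob):.3f}")
```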
Robustness over perfection
I prioritize robustness and operational clarity over theoretical perfection. AI should support responsible decisions — not obscure them. Explainability and accountability are not features — they are architectural principles.