LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries
ACL 2025 · ICML 2025
Making AI systems fairer, more transparent, and actually useful. I design evaluation frameworks, bias mitigation tools, and open-source libraries for responsible AI — bridging the gap between what AI can do and what it should do.
Ph.D. in Electrical Engineering · 15+ publications · NeurIPS, ACL, NAACL · Peru, Brazil, UK
NAACL 2025 · NeurIPS 2024
NeurIPS 2024 Workshop
arXiv:2302.12094
arXiv:2302.11562