Carter Buckner

Research Interests

Model Transparency and Privacy

Explainable AI (XAI) techniques can increase trust in AI systems. XAI emerged in part from researchers’ efforts to build ethical AI systems, often called “Trustworthy AI”. Trustworthy AI as a whole encompasses work in privacy and security, explainability and interpretability, fairness, causality, and robustness.

I am interested in problems that reduce the conflict between privacy and other trustworthy values (e.g., privacy and fairness). I hope this work enables us to build more transparent, secure AI models.

This interest is grounded in the belief that privacy solutions for AI should differ by sector and user demographic. Dominant privacy solutions, however, still operate under compliance-based, static, “one size fits all” frameworks.

Privacy Mechanisms and Governance

I am interested in how local and national policy could be leveraged to build safer, more equitable technology. Clear themes emerged by the time I finished my undergraduate degree: technology affects some demographic groups negatively; developers do not always prioritize safety and privacy; and technical jargon can obscure an individual’s ability to make informed decisions. I am interested in how well privacy mechanisms align with data privacy regulations and rights-based frameworks. Core to this area is expanding popular privacy mechanisms to respond to differing individual privacy values.

Trustworthy ML and Security

Security vulnerability assessments can benefit from trustworthy ML approaches. I am interested in using model explanations and causal methods to support security operators. Key goals in this area are leveraging model transparency to narrow the vulnerability search space and using causal approaches to strengthen ML-based vulnerability detection techniques.