While I am fascinated by a variety of security and privacy problems, I focus on the intersection of security and machine learning. In particular, I seek to understand how and whether adversarial examples pose a real threat to deployed systems. My early research showed that popular computer vision deep learning models are vulnerable to physical attacks with stickers, which do not require digital access to the system. Some of that work has been covered by Ars Technica, IEEE Spectrum, and others. I have also studied how deployed systems incorporating AI can fail and argued that we should take a systems-wide approach to securing them. In addition, I have investigated the vulnerability of multimodal (text + image) classification models applied to content integrity. This work won a Distinguished Paper award at the Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges.
Furthermore, I take an active interest in the broader implications of this new adversarial capability. As a member of the Tech Policy Lab, I have co-authored a paper on the legal and policy consequences of tricking an automated system without compromising any traditional security mechanisms.
My more recent work has focused on applying adversarial machine learning "for good." In FoggySight, we studied privacy protections against facial searches. In Adversarial Shortcuts, we proposed a mechanism to prevent unauthorized model training by modifying the training dataset.
Before becoming a graduate student at UW, I completed a Bachelor of Science in Computer Science with a minor in Mathematics at Lafayette College in the beautiful city of Easton, Pennsylvania.