Ivan Evtimov
Ph.D. Student in Computer Science


About me

I am a Ph.D. student in the Computer Security and Privacy Lab at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where I am advised by Yoshi Kohno and work closely with Earlence Fernandes.

While I am fascinated by a variety of security and privacy problems, I focus on the intersection of security and machine learning. In particular, I seek to understand whether and how adversarial examples pose a real threat to deployed systems. My research so far has shown that popular deep learning models for computer vision are vulnerable to physical attacks with stickers that require no digital access to the system. Some of that work has been covered by Wired, Ars Technica, IEEE Spectrum, and others.
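
To make the notion of an adversarial example concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard digital baseline attack. It is not the physical sticker attack from my research (which requires far more machinery, such as printability and robustness to viewpoint), and the pretrained model and random input below are illustrative placeholders only.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Illustrative placeholder: any differentiable image classifier works.
    model = models.resnet50(pretrained=True).eval()

    def fgsm(image, label, eps=0.03):
        """Perturb `image` by one signed-gradient step that raises the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that most increases the classifier's loss.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0, 1).detach()

    # Illustrative usage: a random tensor standing in for a real photo.
    x = torch.rand(1, 3, 224, 224)
    y = model(x).argmax(dim=1)
    x_adv = fgsm(x, y)
    print("prediction changed:", (model(x_adv).argmax(dim=1) != y).item())

The perturbation is bounded by eps per pixel, so the adversarial image looks essentially unchanged to a human while the model's prediction flips.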

I also take an active interest in the broader implications of this new adversarial capability. As a member of the Tech Policy Lab, I have co-authored a paper on the legal and policy consequences of tricking an automated system without compromising any traditional security mechanisms. I also enjoy taking part in law and policy discussions around technology beyond their connection to my own work, which is why I attend the lab's weekly meetings, where scholars from different academic departments discuss the latest developments in tech policy.

Before becoming a graduate student at UW, I completed a Bachelor of Science degree in Computer Science with a minor in Mathematics at Lafayette College in the beautiful city of Easton, Pennsylvania.

Publications

  • FoggySight: A Scheme for Facial Lookup Privacy
    Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno
    To appear in Proceedings on Privacy Enhancing Technologies (PoPETs), 2021(3).
    Previously: arXiv preprint arXiv:2012.08588, December 2020
    Facial recognition technology is ubiquitous, and many individuals have had their photos collected into large databases. This enables unchecked facial search with potentially detrimental consequences for privacy and civil liberties. Adversarial examples are an enticing solution that can be used to frustrate such facial searches. However, simply modifying future uploads of facial photos does not increase privacy, as facial lookup can be performed with clean photos that already exist in the facial database. With FoggySight, social media users can coordinate their adversarial modifications in order to poison facial search databases by providing decoys that "crowd out" existing, clean photos that individuals have not had a chance to modify. In this paper, we explore the conditions under which such coordination can be successful. (A toy sketch of the crowding-out idea appears after the publication list.)

  • Security and Machine Learning in the Real World
    Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li
    arXiv preprint arXiv:2007.07205, July 2020
    AI/ML components make systems vulnerable to novel attacks, including adversarial examples as well as less sophisticated ones. This paper shares the lessons we learned from studying the security of real AI systems at Microsoft Research.

  • Robust Physical-World Attacks on Deep Learning Visual Classification
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
    Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT, June 2018 (supersedes arXiv:1707.08945)
    Press: IEEE Spectrum, Yahoo News, Wired, Engadget, Telegraph, Car and Driver, CNET, Digital Trends, SC Magazine, Schneier on Security, Ars Technica, Fortune

    For answers to frequently asked questions (FAQs) about this work, please refer to this webpage. Please direct all inquiries about this work to our team e-mail address: roadsigns@umich.edu.


  • Physical Adversarial Examples for Object Detectors
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
    12th USENIX Workshop on Offensive Technologies (WOOT 2018), Baltimore, MD (arXiv:1712.08062), August 2018
    Berkeley Artificial Intelligence Research (BAIR) blog

  • Is Tricking A Robot Hacking?
    Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno, David O'Hair
    Berkeley Technology Law Journal 34, 891 (2019). Previously: Proceedings of WeRobot 2018, Stanford, CA, April 2018
    Press: Quartz, BoingBoing
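
As promised above, here is a toy sketch of FoggySight's "crowding out" idea, using synthetic embeddings. This is not the paper's algorithm (FoggySight crafts adversarial photos, not raw vectors); it only illustrates why many decoy embeddings placed near a target's embedding displace the target's clean photos from a nearest-neighbor facial lookup. All names and constants below are made up for the illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, k = 128, 10

    # The target's "true" face embedding (unit vector for convenience).
    target = rng.normal(size=dim)
    target /= np.linalg.norm(target)

    # Clean photos of the target already in the database: near `target`.
    clean = target + 0.1 * rng.normal(size=(5, dim))
    # Decoys contributed by other users, simulated here as embeddings that
    # land even closer to the target than the clean photos do.
    decoys = target + 0.02 * rng.normal(size=(50, dim))
    # Unrelated faces filling out the rest of the database.
    others = rng.normal(size=(10_000, dim))

    db = np.vstack([clean, decoys, others])
    labels = ["clean"] * len(clean) + ["decoy"] * len(decoys) + ["other"] * len(others)

    # Facial lookup = nearest-neighbor search on a new probe photo of the target.
    probe = target + 0.1 * rng.normal(size=dim)
    dists = np.linalg.norm(db - probe, axis=1)
    topk = np.argsort(dists)[:k]
    print([labels[i] for i in topk])  # dominated by "decoy"

Because the 50 decoys sit closer to any probe of the target than the 5 clean photos do, the top-k results are filled with decoys, and the clean photos that could identify the individual are crowded out of the search results.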

Contact Information

Address

Ivan Evtimov
Paul G. Allen School of Computer Science & Engineering
185 Stevens Way
Campus Box 352350
Seattle, WA 98195

E-mail

ie5 {at} cs {.} washington {.} edu
PGP Key

Social

Twitter
GitHub