Ivan Evtimov
Computer Science Researcher


About me

I recently defended my PhD in the Computer Security and Privacy Lab at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where I was advised by Yoshi Kohno.

While I am fascinated by a variety of security and privacy problems, I focus on the intersection of security and machine learning. In particular, I seek to understand how and whether adversarial examples pose a real threat to deployed systems. My early research showed that popular deep learning models for computer vision are vulnerable to physical attacks with stickers that do not require digital access to the system. Some of that work has been covered by Ars Technica, IEEE Spectrum, and others. I have also studied how deployed systems incorporating AI can fail and argued that we should take a systems-wide approach to securing them. In addition, I have looked into the vulnerability of multimodal (text + image) classification models used for content integrity. This work won a Distinguished Paper Award at the Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges.

Furthermore, I take an active interest in the broader implications of this new adversarial capability. As a member of the Tech Policy Lab, I have co-authored a paper on the legal and policy consequences of tricking an automated system without compromising any traditional security mechanisms.

My more recent work has focused on applying adversarial machine learning "for good." In FoggySight, we studied privacy protections against facial searches. In Adversarial Shortcuts, we proposed a mechanism to prevent unauthorized model training by modifying the training dataset.

Before becoming a graduate student at UW, I completed a Bachelor of Science degree in Computer Science with a minor in Mathematics at Lafayette College in the beautiful city of Easton, Pennsylvania.

Publications

  • Disrupting Model Training with Adversarial Shortcuts
    Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
    To appear at the ICML workshop "A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning"
    Previously: arXiv preprint arXiv:2106.06654, June 2021
    When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes. Successful model training may be preventable with carefully designed dataset modifications, and we present a proof-of-concept approach for the image classification setting. We propose methods based on the notion of adversarial shortcuts, which encourage models to rely on non-robust signals rather than semantic features, and our experiments demonstrate that these measures successfully prevent deep learning models from achieving high accuracy on real, unmodified data examples.
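
    The paper's own constructions are more involved, but a minimal sketch of the shortcut idea, assuming PyTorch tensors and a hypothetical add_shortcut helper, could look like this: a fixed, class-specific pixel patch is stamped onto every training image, giving a model a perfectly predictive but non-semantic signal to latch onto, so that accuracy collapses on clean images that lack the patch.

        import torch

        def add_shortcut(images: torch.Tensor, labels: torch.Tensor,
                         patch_size: int = 4, amplitude: float = 0.5) -> torch.Tensor:
            # Stamp a class-specific corner patch onto each training image.
            # The patch is derived deterministically from the label, so it is
            # a shortcut: a non-robust signal that predicts the class perfectly.
            poisoned = images.clone()  # images: (N, C, H, W) floats in [0, 1]
            for cls in labels.unique():
                gen = torch.Generator().manual_seed(int(cls))  # one fixed pattern per class
                patch = amplitude * torch.rand(
                    (images.shape[1], patch_size, patch_size), generator=gen)
                mask = labels == cls
                poisoned[mask, :, :patch_size, :patch_size] = patch
            return poisoned.clamp(0.0, 1.0)

        # Usage: release add_shortcut(images, labels) instead of the raw data.
        # A model trained on the modified set tends to fit the patch rather than
        # the semantics, and so degrades on real, unmodified test images.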

  • Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption
    Ivan Evtimov, Russel Howes, Brian Dolhansky, Hamed Firooz, Cristian Canton Ferrer
    Appeared at the Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges
    Distinguished Paper Award
    Previously: arXiv preprint arXiv:2011.12902, November 2020
    Multimodal (text + image) models are an important component of AI systems for content integrity. We evaluate their vulnerability to adversarial examples and similar attacks under a realistic gray-box threat model.
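
    As a rough illustration of this kind of evaluation, and assuming a hypothetical surrogate model that maps an (image, text embedding) pair to class logits, a gray-box attacker might perturb only the image modality with standard projected gradient descent (PGD). This is a generic sketch, not the paper's exact procedure.

        import torch
        import torch.nn.functional as F

        def pgd_image_attack(model, image, text_emb, label,
                             eps=8 / 255, alpha=2 / 255, steps=10):
            # Perturb only the image modality of a multimodal classifier.
            # `model` is a surrogate: in the gray-box setting the attacker
            # holds an approximation of the deployed model, not the model itself.
            delta = torch.zeros_like(image, requires_grad=True)
            for _ in range(steps):
                loss = F.cross_entropy(model(image + delta, text_emb), label)
                loss.backward()
                with torch.no_grad():
                    delta += alpha * delta.grad.sign()  # ascend the loss
                    delta.clamp_(-eps, eps)             # stay in the L-infinity ball
                    delta.clamp_(-image, 1 - image)     # keep pixels in [0, 1]
                delta.grad.zero_()
            return (image + delta).detach()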

  • FoggySight: A Scheme for Facial Lookup Privacy
    Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno
    In Proceedings on Privacy Enhancing Technologies (PoPETs), 2021(3).
    Previously: arXiv preprint arXiv:2012.08588, December 2020
    Facial recognition technology is ubiquitous and many individuals have had their photos collected in large databases. This enables unchecked facial search with potentially detrimental consequences for privacy and civil liberties. Adversarial examples are an enticing solution that can be used to frustrate such facial searches. However, simply modifying future uploads of facial photos does not increase privacy, as facial lookup can be performed with clean photos that already exist in the facial database. With FoggySight, social media users can coordinate their adversarial modifications in order to poison facial search databases by providing decoys that "crowd out" existing, clean photos that individuals have not had a chance to modify. In this paper, we explore the conditions under which such coordination can be successful.
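
    A minimal sketch of the decoy idea, assuming PyTorch and a hypothetical differentiable face embedding network embed, might optimize a small perturbation that pulls a volunteer's photo toward the protected person's embedding; the actual FoggySight strategies are more elaborate.

        import torch
        import torch.nn.functional as F

        def make_decoy(embed, decoy_img, target_emb,
                       lr=0.01, steps=200, eps=0.05):
            # Nudge a volunteer's photo so the embedding network maps it close
            # to the protected person's embedding. Enough such decoys can crowd
            # the real photos out of a top-k nearest-neighbor facial lookup.
            delta = torch.zeros_like(decoy_img, requires_grad=True)
            opt = torch.optim.Adam([delta], lr=lr)
            for _ in range(steps):
                emb = embed((decoy_img + delta).clamp(0, 1))
                loss = 1 - F.cosine_similarity(emb, target_emb, dim=-1).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    delta.clamp_(-eps, eps)  # keep the modification visually small
            return (decoy_img + delta).clamp(0, 1).detach()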

  • Security and Machine Learning in the Real World
    Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li
    arXiv preprint arXiv:2007.07205 July 2020
    AI/ML components make systems vulnerable to novel attacks, ranging from adversarial examples to far less sophisticated ones. This paper shares the lessons we learned from studying the security of real AI systems at Microsoft Research.

  • Robust Physical-World Attacks on Deep Learning Visual Classification
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
    Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT (supersedes arXiv:1707.08945), June 2018
    Press: IEEE Spectrum, Yahoo News, Wired, Engadget, Telegraph, Car and Driver, CNET, Digital Trends, SCMagazine, Schneier on Security, Ars Technica, Fortune

    For answers to frequently asked questions (FAQs) about this work, please refer to this webpage. Please direct all inquiries about this work to our team e-mail address, roadsigns@umich.edu.


  • Physical Adversarial Examples for Object Detectors
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
    12th USENIX Workshop on Offensive Technologies (WOOT 2018), Baltimore, MD (arXiv:1712.08062), August 2018
    Berkeley Artificial Intelligence Research (BAIR) blog

  • Is Tricking A Robot Hacking?
    Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno, David O'Hair
    Berkeley Tech. Law Journal 34, 891. Previously: Proceedings of WeRobot 2018, Stanford, CA, April 2018
    Press: Quartz, BoingBoing

Contact Information

E-mail
ivanevtimov5 {at} gmail {.} com

PGP Key

Social
Twitter
GitHub