PhD Seminar Course on

Foundations of Adversarial Machine Learning

Cagliari, Thursday, July 3, 2008

Instructor: Daniel Lowd
University of Washington
Duration: 2 hours
Schedule: Thursday, July 3, 2008, 10:00
Venue: Aula X
Topics: As classifiers are deployed to detect malicious behavior ranging from spam to terrorism, adversaries modify their behaviors to avoid detection. This makes the very behavior the classifier is trying to detect a function of the classifier itself. Learners that account for concept drift are not sufficient since they do not allow the change in concept to depend on the classifier. As a result, humans must adapt the classifier with each new attack. Ideally, we would like to see classifiers that are resistant to attack and that respond to successful attacks automatically. In this talk, I argue that the development of such classifiers requires new frameworks combining machine learning and game theory, taking into account the utilities and costs of both the classification system and its adversary. We have recently developed such a framework that allows us to identify weaknesses in classification systems, predict how an adversary could exploit them, and even deploy preemptive defenses against these exploits. Although theoretically motivated, these methods achieve excellent empirical results in realistic email spam filtering domains.
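To make the attacker's side of this game concrete, here is a minimal sketch of an evasion attack against a linear spam filter: the adversary greedily removes the highest-weight "spammy" words until the message slips past the classifier. All weights, words, and function names are invented for illustration; the framework discussed in the talk is more general than this toy example.

```python
# Illustrative sketch only: a linear bag-of-words spam filter and a
# greedy minimal-edit evasion attack. Weights and features are made up.

def classify(weights, bias, features):
    """Linear classifier: positive score means 'spam'."""
    score = bias + sum(weights.get(f, 0.0) for f in features)
    return "spam" if score > 0 else "ham"

def cheapest_evasion(weights, bias, features):
    """Greedily drop the highest-weight spam words until the message
    is classified as ham, minimizing the number of edits."""
    feats = set(features)
    edits = []
    while classify(weights, bias, feats) == "spam":
        removable = [f for f in feats if weights.get(f, 0.0) > 0]
        if not removable:
            return None  # cannot evade by deletions alone
        worst = max(removable, key=lambda f: weights[f])
        feats.remove(worst)
        edits.append(worst)
    return feats, edits

weights = {"viagra": 2.0, "free": 1.0, "meeting": -1.5, "offer": 0.5}
bias = -0.5
message = ["viagra", "free", "offer"]
evaded, edits = cheapest_evasion(weights, bias, message)
```

Because each evasion like this changes the distribution of spam the filter sees, the detected concept becomes a function of the classifier itself, which is exactly why static learners and plain concept-drift methods fall short.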
Organizer: Prof. Fabio Roli
Dept. of Electrical and Electronic Engineering
University of Cagliari, Italy