Privacy & Security

The session Privacy & Security will be held on Wednesday, 2019-09-18, from 16:20 to 17:30, in room 0.002. The session chair is Nikolaj Tatti.


16:20 - 16:40
A Differentially Private Kernel Two-Sample Test (85)
Anant Raj (Max Planck Institute for Intelligent Systems, Tübingen), Ho Chung Leon Law (University of Oxford), Dino Sejdinovic (University of Oxford), Mijung Park (Max Planck Institute for Intelligent Systems, Tübingen)

Kernel two-sample testing is a useful statistical tool for determining whether data samples arise from different distributions without imposing any parametric assumptions on those distributions. However, raw data samples can expose sensitive information about individuals who participate in scientific studies, which makes the current tests vulnerable to privacy breaches. Hence, we design a new framework for kernel two-sample testing conforming to differential privacy constraints, in order to guarantee the privacy of subjects in the data. Unlike existing differentially private parametric tests that simply add noise to data, kernel-based testing imposes a challenge due to a complex dependence of the test statistics on the raw data, as these statistics correspond to estimators of distances between representations of probability measures in Hilbert spaces. Our approach considers finite-dimensional approximations to those representations. As a result, a simple chi-squared test is obtained, where the test statistic depends on the mean and covariance of empirical differences between the samples, which we perturb for a privacy guarantee. We investigate the utility of our framework in two realistic settings and conclude that our method requires only a relatively modest increase in sample size to achieve a level of power similar to the non-private tests in both settings.
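The abstract's recipe — map samples into a finite-dimensional feature space, form the mean and covariance of paired feature differences, perturb for privacy, and compute a chi-squared statistic — can be sketched as follows. This is a minimal illustration, assuming random Fourier features as the finite-dimensional approximation and a Gaussian-mechanism perturbation of the mean; the noise calibration (`sensitivity`, `delta`) is a hypothetical placeholder, not the paper's actual privacy analysis.

```python
import numpy as np
from scipy import stats

def rff(x, W, b):
    # Random Fourier features: a finite-dimensional approximation
    # of an RBF kernel embedding
    return np.sqrt(2.0 / W.shape[1]) * np.cos(x @ W + b)

def private_two_sample_test(X, Y, d=50, eps=1.0, delta=1e-5, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = min(len(X), len(Y))
    W = rng.normal(size=(X.shape[1], d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=d)
    # Empirical differences between feature representations of paired samples
    Z = rff(X[:n], W, b) - rff(Y[:n], W, b)
    mu = Z.mean(axis=0)
    Sigma = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)  # regularized covariance
    # Gaussian-mechanism perturbation of the mean; this sensitivity bound
    # is illustrative only (each feature difference lies in a bounded range)
    sensitivity = 2.0 * np.sqrt(2.0) / n
    noise_scale = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    mu_priv = mu + rng.normal(scale=noise_scale, size=d)
    # Chi-squared test statistic on the (perturbed) mean and covariance
    statistic = n * mu_priv @ np.linalg.solve(Sigma, mu_priv)
    p_value = stats.chi2.sf(statistic, df=d)
    return statistic, p_value, bool(p_value < alpha)
```

A usage pattern would be to call `private_two_sample_test(X, Y)` on two sample matrices and reject the null hypothesis of equal distributions when the returned p-value falls below `alpha`; as the abstract notes, the added noise means a larger sample size is needed to match the power of the non-private test.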

Reproducible Research
16:40 - 17:00
Learning to Signal in the Goldilocks Zone: Improving Adversary Compliance in Security Games (923)
Sarah Cooney (University of Southern California), Kai Wang (University of Southern California), Elizabeth Bondi (University of Southern California), Thanh Nguyen (University of Oregon), Phebe Vayanos (University of Southern California), Hailey Winetrobe (University of Southern California), Edward A. Cranford (Carnegie Mellon University), Cleotilde Gonzalez (Carnegie Mellon University), Christian Lebiere (Carnegie Mellon University), Milind Tambe (University of Southern California)

Many real-world security scenarios can be modeled via a game-theoretic framework known as a security game, in which a defender tries to protect potential targets from an attacker. Recent work in security games has shown that deceptive signaling by the defender can convince an attacker to withdraw his attack. For instance, a warning message to commuters indicating that speed enforcement is in progress ahead might lead them to drive more slowly, even if it turns out no enforcement is in progress. However, the results of this work are limited by the unrealistic assumption that attackers behave with perfect rationality, meaning they always choose the action that gives them the best expected reward. We address the problem of training boundedly rational (human) attackers to comply with signals via repeated interaction with signaling, without incurring a loss to the defender, and offer the following four contributions: (i) We learn new decision-tree- and neural-network-based models of attacker compliance with signaling. (ii) Based on these machine learning models of a boundedly rational attacker's response to signaling, we develop a theory of signaling in the Goldilocks zone, a balance of signaling and deception that increases attacker compliance and improves defender utility. (iii) We present game-theoretic algorithms to solve for signaling schemes based on the learned models of attacker compliance with signaling. (iv) We conduct extensive human subject experiments using an online game. The game simulates the scenario of an inside attacker trying to steal sensitive information from company computers, and the results show that our algorithms based on learned models of attacker behavior lead to better attacker compliance and improved defender utility compared to the state-of-the-art algorithm for rational attackers with signaling.
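The perfectly rational baseline that the abstract contrasts against can be illustrated with a small calculation. Suppose a target is covered with probability `c`, a warning signal is always shown when the target is covered and with probability `q` when it is not, and the attacker gains `reward` on an uncovered attack but loses `penalty` on a covered one. A rational attacker who sees the signal withdraws as long as the expected value of attacking given the signal is non-positive, which bounds how often the defender can bluff. This sketch is illustrative only; the paper's Goldilocks-zone schemes for boundedly rational attackers replace this rationality condition with learned compliance models.

```python
def optimal_signal_rate(c, reward, penalty):
    """Largest probability q of showing a (deceptive) warning on an
    UNCOVERED target such that a rational attacker still withdraws
    on seeing the warning.

    Given the signal, attacking has expected value proportional to
    (1 - c) * q * reward - c * penalty, so the attacker complies iff
    (1 - c) * q * reward <= c * penalty.
    """
    if c >= 1.0:
        return 1.0  # always covered: any amount of signaling is credible
    q = (c * penalty) / ((1.0 - c) * reward)
    return min(1.0, q)
```

For example, with coverage `c = 0.25`, `reward = 3.0`, and `penalty = 1.0`, the defender can bluff on at most one-ninth of uncovered rounds before a rational attacker starts ignoring the warning.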

17:00 - 17:20
Joint Detection of Malicious Domains and Infected Clients (J19)
Paul Prasse, René Knaebel, Lukáš Machlica, Tomáš Pevný, Tobias Scheffer
