A backdoor data poisoning attack is an adversarial attack wherein the
attacker injects several watermarked, mislabeled training examples into a
training set. The watermark does not impact the test-time performance of the
model on typical data; however, the model reliably errs on watermarked
examples.
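
To make the attack concrete, here is a minimal sketch of how such a poisoned training set could be constructed. The trigger pattern, target label, and data below are hypothetical illustrations, not a construction taken from the paper.

```python
import numpy as np

def add_watermark(x, trigger_value=1.0, trigger_size=3):
    """Stamp a small fixed pattern (the watermark/trigger) onto a flattened feature vector."""
    x = x.copy()
    x[:trigger_size] = trigger_value
    return x

def poison_training_set(X, y, target_label, num_poison, seed=0):
    """Append `num_poison` watermarked copies of clean points, mislabeled as `target_label`."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=num_poison, replace=False)
    X_poison = np.stack([add_watermark(X[i]) for i in idx])
    y_poison = np.full(num_poison, target_label)
    return np.concatenate([X, X_poison]), np.concatenate([y, y_poison])

# Usage: a model trained on the poisoned set behaves normally on clean inputs,
# but stamping the same trigger on a test input steers its prediction toward
# the attacker's target label.
X_clean = np.random.rand(1000, 784)            # stand-in for real features
y_clean = np.random.randint(0, 10, size=1000)  # stand-in for real labels
X_train, y_train = poison_training_set(X_clean, y_clean, target_label=0, num_poison=50)
```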

To build a foundational understanding of backdoor data poisoning attacks, we
present a formal theoretical framework within which one can discuss such
attacks for classification problems. We then use this framework to analyze
important statistical and computational questions surrounding these attacks.
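
As a rough illustration of what such a framework formalizes, one can ask that the learner's output be accurate on clean data yet consistently fooled on watermarked data. The notation below is ours and is only meant to convey the flavor of these requirements, not the paper's exact definitions.

```latex
% Illustrative success conditions for a backdoor attack (our notation):
% \hat{h} is the trained classifier, D the clean data distribution,
% patch(.) the watermarking operation, t the attacker's target label,
% and eps a small error tolerance.
\[
  \Pr_{(x,y)\sim\mathcal{D}}\!\bigl[\hat{h}(x) \neq y\bigr] \le \varepsilon
  \qquad \text{and} \qquad
  \Pr_{(x,y)\sim\mathcal{D}}\!\bigl[\hat{h}(\mathrm{patch}(x)) = t \,\big|\, y \neq t\bigr] \ge 1 - \varepsilon .
\]
```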

On the statistical front, we identify a parameter we call the memorization
capacity that captures the intrinsic vulnerability of a learning problem to a
backdoor attack. This allows us to argue about the robustness of several
natural learning problems to backdoor attacks. Our results in favor of the
attacker give explicit constructions of backdoor attacks, while our robustness
results show that some natural problem settings cannot yield successful
backdoor attacks.
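
To convey the flavor of such a parameter (in our own notation, which may differ from the paper's precise definition), memorization capacity can be thought of as the largest number of extra points that some hypothesis can label arbitrarily while still agreeing with the target concept on the natural data distribution.

```latex
% Illustrative sketch of memorization capacity (our notation, not necessarily
% the paper's exact statement): h^* is the target concept, D the natural
% distribution, and H the hypothesis class.
\[
  \mathrm{mcap}(h^{*}, \mathcal{D}, \mathcal{H}) \;=\;
  \max\Bigl\{ k \;:\; \exists\, x_1,\dots,x_k \;\;
  \forall\, b \in \{\pm 1\}^{k} \;\;
  \exists\, h \in \mathcal{H} \text{ with }
  h(x_i) = b_i \text{ for all } i
  \text{ and } \Pr_{x\sim\mathcal{D}}\bigl[h(x) \neq h^{*}(x)\bigr] = 0 \Bigr\}.
\]
```

Intuitively, under this reading, a large memorization capacity means the hypothesis class has enough slack to absorb watermarked, mislabeled points without sacrificing accuracy on natural data, which is the opening a backdoor attacker exploits.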

From a computational standpoint, we show that under certain assumptions,
adversarial training can detect the presence of backdoors in a training set. We
then show that under similar assumptions, two closely related problems we call
backdoor filtering and robust generalization are nearly equivalent. This
implies that, in order to obtain a learning algorithm that both generalizes
well to unseen data and is robust to backdoors, it is asymptotically necessary
and sufficient to design algorithms that can identify the watermarked examples
in the training set.
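
At a high level, the detection idea can be sketched as follows: under the stated assumptions, a clean training set admits a classifier with low robust (worst-case-over-watermarks) training error, while a backdoored set does not, so the robust error reached by adversarial training can serve as a test statistic. The helper names and threshold below are hypothetical placeholders, not an implementation from the paper.

```python
def detect_backdoor(X, y, adversarially_train, robust_error, threshold):
    """Flag a training set as possibly backdoored.

    adversarially_train(X, y) -> model : runs adversarial training on (X, y)
    robust_error(model, X, y) -> float : worst-case training error over the
                                         allowed perturbation (watermark) set
    threshold                          : robust error achievable on clean data
                                         (problem-dependent)
    """
    model = adversarially_train(X, y)
    err = robust_error(model, X, y)
    # If even adversarial training cannot fit the data robustly, the set
    # likely contains watermarked, mislabeled examples.
    return err > threshold
```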

Authors: Naren Sarayu Manoj, Avrim Blum
