Federated Learning (FL) allows multiple clients to collaboratively train a
Neural Network (NN) model on their private data without revealing the data.
Recently, several targeted poisoning attacks against FL have been introduced.
These attacks inject a backdoor into the resulting model that allows
adversary-controlled inputs to be misclassified. Existing countermeasures
against backdoor attacks are inefficient and often merely aim to exclude
deviating models from the aggregation. However, this approach also removes
benign models of clients with deviating data distributions, causing the
aggregated model to perform poorly for such clients.
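
To make the threat concrete, the sketch below is a hedged illustration of one common style of targeted poisoning (a pixel-trigger backdoor), not a specific attack from the literature discussed here. The function names, the trigger shape, and the poison_fraction parameter are assumptions introduced only for illustration.

```python
# Illustrative (assumed) pixel-pattern backdoor: a malicious client stamps a
# small trigger onto a fraction of its local training images and relabels them
# with the attacker's target class, so the trained model learns to associate
# the trigger with that class while behaving normally on clean inputs.
import numpy as np

def add_trigger(image, value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner of the image."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_fraction=0.2, seed=0):
    """Return a copy of the data with a fraction of samples backdoored."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy usage on random "images".
imgs = np.random.rand(10, 28, 28)
lbls = np.random.randint(0, 10, size=10)
p_imgs, p_lbls = poison_dataset(imgs, lbls, target_class=7)
```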

To address this problem, we propose DeepSight, a novel model filtering
approach for mitigating backdoor attacks. It is based on three novel techniques
that allow characterizing the distribution of the data used to train model
updates and measuring fine-grained differences in the internal structure and
outputs of NNs. Using these techniques, DeepSight can identify suspicious model
updates. We also develop a scheme that can accurately cluster model updates.
Combining the results of both components, DeepSight is able to identify and
eliminate model clusters containing poisoned models with high attack impact. We
also show that the backdoor contributions of possibly undetected poisoned
models can be effectively mitigated with existing weight clipping-based
defenses. We evaluate the performance and effectiveness of DeepSight and show
that it can mitigate state-of-the-art backdoor attacks with a negligible impact
on the model’s performance on benign data.
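
As a rough illustration of the overall idea, the following Python snippet combines cluster-level filtering of suspicious model updates with weight clipping before aggregation. This is a minimal sketch, not DeepSight's actual algorithm: the function filter_and_aggregate, the per-update suspicious flags, the cluster labels, and the clipping threshold are all assumptions introduced here for illustration.

```python
# Assumed setup: `updates` is a list of flattened client model updates (NumPy
# arrays), `suspicious` holds 0/1 flags from some classifier, and
# `cluster_labels` assigns each update to a cluster. A cluster is discarded if
# the majority of its members look suspicious; surviving updates are L2-norm
# clipped and averaged.
import numpy as np

def clip_update(update, clip_norm):
    """Scale an update down so its L2 norm does not exceed clip_norm."""
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update

def filter_and_aggregate(updates, suspicious, cluster_labels, clip_norm=1.0):
    """Drop clusters dominated by suspicious updates, clip the rest, average."""
    updates = [np.asarray(u, dtype=np.float64) for u in updates]
    accepted = []
    for c in set(cluster_labels):
        members = [i for i, lab in enumerate(cluster_labels) if lab == c]
        # Reject the whole cluster if most of its members are flagged.
        if sum(suspicious[i] for i in members) > len(members) / 2:
            continue
        accepted.extend(members)
    if not accepted:
        return None  # nothing left to aggregate this round
    clipped = [clip_update(updates[i], clip_norm) for i in accepted]
    return np.mean(clipped, axis=0)

# Toy usage: three small benign updates and one large, suspicious one.
ups = [np.array([0.1, 0.2]), np.array([0.1, 0.1]),
       np.array([0.2, 0.1]), np.array([5.0, -5.0])]
susp = [0, 0, 0, 1]
labels = [0, 0, 0, 1]
print(filter_and_aggregate(ups, susp, labels, clip_norm=0.5))
```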

Authors: Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi
