Diffusion models have achieved great success in a range of tasks, such as
image synthesis and molecule design. Because these successes hinge on large-scale
training data collected from diverse sources, the trustworthiness of the
collected data is hard to control or audit. In this work, we aim to explore the
vulnerabilities of diffusion models under potential training data manipulations
and try to answer: How hard is it to perform Trojan attacks on well-trained
diffusion models? What are the adversarial targets that such Trojan attacks can
achieve? To answer these questions, we propose an effective Trojan attack
against diffusion models, TrojDiff, which optimizes the Trojan diffusion and
generative processes during training. In particular, we design novel
transitions during the Trojan diffusion process to diffuse adversarial targets
into a biased Gaussian distribution and propose a new parameterization of the
Trojan generative process that leads to an effective training objective for the
attack. In addition, we consider three types of adversarial targets: the
Trojaned diffusion model will always output instances belonging to a certain
class from the in-domain distribution (In-D2D attack), instances from an
out-of-domain distribution (Out-D2D attack), or one specific instance (D2I attack). We
evaluate TrojDiff on the CIFAR-10 and CelebA datasets against both the DDPM and DDIM
diffusion models. We show that TrojDiff always achieves high attack performance
under different adversarial targets using different types of triggers, while
the performance in benign environments is preserved. The code is available at
https://github.com/chenweixin107/TrojDiff.
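
For intuition, here is a minimal sketch of the idea described above: a Trojan forward (diffusion) process that drifts a target sample toward a biased Gaussian centered on a trigger delta, paired with a noise-prediction training loss. This is an illustrative stand-in, not the paper's exact parameterization; the coefficient schedule, the gamma scale, and the names trojan_forward_sample, trojan_training_loss, and model are assumptions for the sketch.

```python
import torch

def trojan_forward_sample(x0, delta, alphas_bar, t, gamma=0.6):
    """Diffuse a target image x0 toward a biased Gaussian centered on the
    trigger delta. Illustrative only -- the exact TrojDiff transitions and
    coefficients differ; see the paper and repository for the real ones."""
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)  # cumulative product of alphas at step t
    noise = torch.randn_like(x0)
    # Mean drifts from x0 toward delta as t grows; variance is scaled by gamma,
    # so the terminal distribution is a biased (shifted, rescaled) Gaussian.
    x_t = (a_bar.sqrt() * x0
           + (1 - a_bar.sqrt()) * delta
           + (1 - a_bar).sqrt() * gamma * noise)
    return x_t, noise

def trojan_training_loss(model, x0, delta, alphas_bar, T=1000, gamma=0.6):
    """One step of a hypothetical Trojan objective: the model predicts the
    injected noise from the biased noisy sample, analogous to the standard
    DDPM epsilon-prediction loss."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    x_t, noise = trojan_forward_sample(x0, delta, alphas_bar, t, gamma)
    return ((model(x_t, t) - noise) ** 2).mean()
```

In practice such a Trojan loss would be combined with the ordinary diffusion objective on clean data, which is consistent with the paper's claim that attack performance is achieved while benign generation quality is preserved.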
Authors: Weixin Chen, Dawn Song, Bo Li