Deep Learning (DL) is being applied in various domains, especially in
safety-critical applications such as autonomous driving. Consequently, it is of
great significance to ensure the robustness of these methods and thus
counteract uncertain behaviors caused by adversarial attacks. In this paper, we
use gradient heatmaps to analyze the response characteristics of the VGG-16
model when the input images are mixed with adversarial noise and statistically
similar Gaussian random noise. In particular, we compare the network response
layer by layer to determine where errors occur. This analysis yields several
findings. First, compared to Gaussian random noise, intentionally generated
adversarial noise causes severe behavior deviation by distracting the area of
concentration in the networks. Second, in many cases, adversarial examples only
need to compromise a few intermediate blocks to mislead the final decision.
Third, our experiments reveal that specific blocks are more vulnerable than
others and easier for adversarial examples to exploit. Finally, we demonstrate
that the layers $Block4\_conv1$ and $Block5\_conv1$ of the VGG-16 model are more
susceptible to adversarial attacks. Our work could provide valuable insights
into developing more reliable Deep Neural Network (DNN) models.
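The two core ingredients of the analysis — Gaussian noise statistically matched to an adversarial perturbation, and a layer-by-layer comparison of how far activations drift — can be sketched as follows. This is a minimal illustration only: the five-layer ReLU stack is a hypothetical stand-in for the VGG-16 blocks, the FGSM-style sign perturbation is assumed, and none of this is the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_gaussian_noise(delta, rng):
    """Sample Gaussian noise with the same mean and standard deviation as an
    adversarial perturbation `delta` (the paper's control condition)."""
    return rng.normal(delta.mean(), delta.std(), size=delta.shape)

def layer_deviations(x_clean, x_noisy, weights):
    """Feed both inputs through a toy ReLU network and record the relative
    L2 deviation of the activations after each layer."""
    a, b, devs = x_clean, x_noisy, []
    for W in weights:
        a = np.maximum(W @ a, 0.0)
        b = np.maximum(W @ b, 0.0)
        devs.append(np.linalg.norm(a - b) / (np.linalg.norm(a) + 1e-12))
    return devs

# Toy stand-in for the VGG-16 blocks: five random linear layers.
weights = [rng.normal(0, 1 / np.sqrt(64), size=(64, 64)) for _ in range(5)]
x = rng.normal(size=64)
delta = 0.05 * np.sign(rng.normal(size=64))  # FGSM-style sign perturbation
noise = matched_gaussian_noise(delta, rng)   # statistically similar control

adv_devs = layer_deviations(x, x + delta, weights)
rnd_devs = layer_deviations(x, x + noise, weights)
print([round(d, 3) for d in adv_devs])
print([round(d, 3) for d in rnd_devs])
```

Plotting `adv_devs` against `rnd_devs` per layer is one simple way to see at which depth the two noise types begin to diverge in their effect.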

Authors: Justus Renkhoff, Wenkai Tan, Alvaro Velasquez, William Yichen Wang, Yongxin Liu, Jian Wang, Shuteng Niu, Lejla Begic Fazlic, Guido Dartmann, Houbing Song
