Neural network-based image classifiers are powerful tools for computer vision
tasks, but they can inadvertently reveal sensitive attribute information about
the classes they were trained on, raising privacy concerns. To investigate this
privacy leakage, we introduce the first Class Attribute Inference Attack
(Caia), which leverages recent advances in text-to-image synthesis to infer
sensitive attributes of individual classes in a black-box setting, while
remaining competitive with related white-box attacks. Our extensive experiments
in the face recognition domain show that Caia can accurately infer undisclosed
sensitive attributes, such as an individual’s hair color, gender and racial
appearance, which are not part of the training labels. Interestingly, we
demonstrate that adversarially robust models are even more vulnerable to such
privacy leakage than standard models, indicating that a trade-off between
robustness and privacy exists.
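To make the attack setting concrete, here is a minimal sketch of one plausible black-box inference loop: synthesize candidate images for each possible attribute value with a text-to-image model, query the target classifier, and pick the attribute whose candidates the model most confidently assigns to the target class. This is an illustrative assumption, not the authors' exact Caia implementation; `classify`, `candidates`, and `infer_attribute` are hypothetical names introduced here.

```python
# Hypothetical sketch of class attribute inference in a black-box setting.
# NOT the authors' exact Caia method: `classify` stands in for the target
# model's query interface, and `candidates` for pre-generated text-to-image
# samples (e.g. prompts "a photo of a person with <attribute> hair").
from typing import Callable, Dict, List, Sequence

def infer_attribute(
    classify: Callable[[object], Sequence[float]],
    candidates: Dict[str, List[object]],
    target_class: int,
) -> str:
    """Return the attribute value whose synthetic candidate images the
    black-box model most confidently maps to the target class."""
    scores = {}
    for attribute, images in candidates.items():
        # Average the target-class probability over all candidates
        # generated with this attribute value in the prompt.
        probs = [classify(img)[target_class] for img in images]
        scores[attribute] = sum(probs) / len(probs)
    # The highest-scoring attribute is the inferred sensitive attribute.
    return max(scores, key=scores.get)
```

The key point of the black-box setting is that only prediction scores (or labels) are needed, with no access to model weights or gradients.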

Authors: Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting