LDP (Local Differential Privacy) has recently attracted much attention as a
metric of data privacy that prevents the inference of personal data from
obfuscated data in the local model. However, there are scenarios in this model
in which the adversary instead performs re-identification attacks, linking the
obfuscated data back to the users who generated it. LDP can cause excessive
obfuscation and destroy the utility in these scenarios because it is not
designed to directly prevent
the utility in these scenarios because it is not designed to directly prevent
re-identification. In this paper, we propose a measure of re-identification
risks, which we call PIE (Personal Information Entropy). The PIE is designed so
that it directly prevents re-identification attacks in the local model. It
lower-bounds the lowest possible re-identification error probability (i.e.,
Bayes error probability) of the adversary. We analyze the relation between LDP
and the PIE, and the trade-off between the PIE and utility in distribution
estimation for two obfuscation mechanisms providing LDP. Through experiments, we
consider re-identification as a privacy risk, LDP can cause excessive
obfuscation and destroy the utility. Then we show that the PIE can be used to
guarantee low re-identification risks for the local obfuscation mechanisms
while keeping high utility.
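
As a concrete illustration of the kind of local obfuscation mechanism the abstract refers to, the following is a minimal sketch of k-ary randomized response, a standard mechanism satisfying ε-LDP. The function names and parameters here are my own for illustration; the paper's actual mechanisms and the PIE computation are not reproduced.

```python
import math
import random

def krr_probs(k, eps):
    """Return (p_keep, p_flip) for k-ary randomized response under eps-LDP:
    report the true value with probability e^eps / (e^eps + k - 1),
    otherwise report each of the other k-1 values with equal probability."""
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    p_flip = 1.0 / (math.exp(eps) + k - 1)  # probability of each other value
    return p_keep, p_flip

def krr_perturb(x, k, eps, rng):
    """Obfuscate a value x in {0, ..., k-1} with k-ary randomized response."""
    p_keep, _ = krr_probs(k, eps)
    if rng.random() < p_keep:
        return x
    y = rng.randrange(k - 1)  # uniform over the other k-1 values
    return y if y < x else y + 1

# eps-LDP guarantee: Pr[y | x] / Pr[y | x'] <= e^eps for all inputs x, x'
# and outputs y, since p_keep / p_flip = e^eps exactly.
```

For example, with k = 4 and eps = 1, p_keep ≈ 0.475, so the reported value equals the true value less than half the time; this is the "excessive obfuscation" regime the abstract argues can destroy utility when the privacy goal is really re-identification resistance.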

Authors: <a href="http://arxiv.org/find/cs/1/au:+Murakami_T/0/1/0/all/0/1">Takao Murakami</a>, <a href="http://arxiv.org/find/cs/1/au:+Takahashi_K/0/1/0/all/0/1">Kenta Takahashi</a>
