We provide an end-to-end Rényi-DP-based framework for differentially private
top-$k$ selection. Unlike previous approaches, which require a data-independent
choice of $k$, we propose to privately release a data-dependent choice of $k$
such that the gap between the $k$-th and the $(k+1)$-st “quality” scores is
large. This is achieved by a novel application of the Report-Noisy-Max
mechanism. Not only does this eliminate one hyperparameter, but the adaptive
choice of $k$ also certifies the stability of the unordered set of top-$k$
indices, so we can release them using a variant of propose-test-release (PTR)
without adding noise. We show that our construction improves the
privacy-utility trade-off over previous top-$k$ selection algorithms, both
theoretically and empirically.
Additionally, we apply our algorithm to “Private Aggregation of Teacher
Ensembles (PATE)” in multi-label classification tasks with a large number of
labels and show that it leads to significant performance gains.
Authors: Yuqing Zhu, Yu-Xiang Wang
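To make the two-step construction concrete, here is a minimal Python sketch of
the idea described in the abstract: Report-Noisy-Max over consecutive score
gaps to pick a data-dependent $k$, followed by a PTR-style stability test
that, on success, releases the top-$k$ index set without further noise. This
is a sketch under stated assumptions, not the paper's exact mechanism: the
function name, the `k_max` and `test_threshold` parameters, and the
Gumbel/Laplace noise scales (which presume unit-sensitivity scores) are all
illustrative.

```python
import numpy as np

def private_top_k(scores, epsilon, k_max, test_threshold, rng=None):
    """Illustrative sketch of gap-based private top-k selection.

    Step 1 picks k via Report-Noisy-Max over the gaps between
    consecutive sorted quality scores; step 2 runs a PTR-style test
    and, if it passes, releases the unordered top-k index set exactly.
    Assumes len(scores) > k_max and unit-sensitivity scores; the noise
    scales and threshold are placeholders, not a verified calibration.
    """
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(scores)[::-1]          # indices by descending quality
    sorted_scores = np.asarray(scores, dtype=float)[order]

    # Step 1: Report-Noisy-Max over the gap between the k-th and
    # (k+1)-st scores for k = 1..k_max (Gumbel noise makes this
    # equivalent to the exponential mechanism over the candidates).
    gaps = sorted_scores[:k_max] - sorted_scores[1:k_max + 1]
    noisy_gaps = gaps + rng.gumbel(scale=2.0 / epsilon, size=gaps.shape)
    k = int(np.argmax(noisy_gaps)) + 1        # chosen k (1-indexed)

    # Step 2: PTR-style test. A noisily certified large gap at k
    # implies the unordered top-k set is identical on all neighboring
    # datasets, so the indices can then be released without noise.
    noisy_gap = gaps[k - 1] + rng.laplace(scale=2.0 / epsilon)
    if noisy_gap >= test_threshold:
        return k, set(order[:k].tolist())     # noiseless release
    return k, None                            # test failed: abort
```

The key design point this sketch tries to capture is that the noise is spent
only on choosing and certifying $k$: once the gap test passes, the indices
themselves carry no added noise, which is where the utility gain over
noise-per-coordinate top-$k$ mechanisms comes from.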