TY - UNPB
T1 - Directional privacy for deep learning
AU - Faustini, Pedro
AU - Fernandes, Natasha
AU - Tonni, Shakila
AU - McIver, Annabelle
AU - Dras, Mark
PY - 2022/11/9
Y1 - 2022/11/9
N2 - Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility. In this paper, we apply directional privacy, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance so that gradient direction is broadly preserved. We show that this provides both ϵ-DP and ϵd-privacy for deep learning training, rather than the (ϵ,δ)-privacy of the Gaussian mechanism. Experiments on key datasets then indicate that the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off. In particular, our experiments provide a direct empirical comparison of privacy between the two approaches in terms of their ability to defend against reconstruction and membership inference.
AB - Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility. In this paper, we apply directional privacy, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance so that gradient direction is broadly preserved. We show that this provides both ϵ-DP and ϵd-privacy for deep learning training, rather than the (ϵ,δ)-privacy of the Gaussian mechanism. Experiments on key datasets then indicate that the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off. In particular, our experiments provide a direct empirical comparison of privacy between the two approaches in terms of their ability to defend against reconstruction and membership inference.
UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85142696814&partnerID=MN8TOARS
U2 - 10.48550/arXiv.2211.04686
DO - 10.48550/arXiv.2211.04686
M3 - Preprint
T3 - arXiv
BT - Directional privacy for deep learning
ER -