TY - GEN
T1 - Oriole: thwarting privacy against trustworthy deep learning models
T2 - 26th Australasian Conference on Information Security and Privacy, ACISP 2021
AU - Chen, Liuqiao
AU - Wang, Hu
AU - Zhao, Benjamin Zi Hao
AU - Xue, Minhui
AU - Qian, Haifeng
PY - 2021
Y1 - 2021
N2 - Deep Neural Networks have achieved unprecedented success in the field of face recognition, such that any individual can crawl the data of others from the Internet without their explicit permission and train high-precision face recognition models, creating a serious violation of privacy. Recently, a well-known system named Fawkes [37] (published in USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of their original images. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes, by training the attacker's face recognition model with multi-cloaked images generated by Oriole. Consequently, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are revealed. Experimental results show that our proposed Oriole system is able to effectively interfere with the performance of the Fawkes system, achieving promising attack results. Our ablation study highlights multiple principal factors that affect the performance of the Oriole system, including the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks for each uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the new methodology presented in this paper will inform the security community of the need to design more robust privacy-preserving deep learning models.
KW - Data poisoning
KW - Deep learning privacy
KW - Facial recognition
KW - Multi-cloaks
UR - http://www.scopus.com/inward/record.url?scp=85120090859&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/arc/DP210102670
U2 - 10.1007/978-3-030-90567-5_28
DO - 10.1007/978-3-030-90567-5_28
M3 - Conference proceeding contribution
AN - SCOPUS:85120090859
SN - 9783030905668
T3 - Lecture Notes in Computer Science
SP - 550
EP - 568
BT - Information Security and Privacy
A2 - Baek, Joonsang
A2 - Ruj, Sushmita
PB - Springer
CY - Cham
Y2 - 1 December 2021 through 3 December 2021
ER -