Student surpasses teacher: imitation attack for black-box NLP APIs

Qiongkai Xu*, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

11 Citations (Scopus)

Abstract

Machine-learning-as-a-service (MLaaS) has attracted millions of users with its powerful large-scale models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works have demonstrated that attackers can steal or extract the victim models. Nonetheless, none of the previously stolen models could outperform the original black-box APIs. In this work, we combine unsupervised domain adaptation with multi-victim ensembling to show that attackers could potentially surpass victims, which goes beyond the previous understanding of model extraction. Extensive experiments on both benchmark datasets and real-world APIs validate that the imitators can succeed in outperforming the original black-box models on transferred domains. We consider our work a milestone in the research of imitation attacks, especially on NLP APIs, as the superior performance could influence the defense or even the publishing strategy of API providers.
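The attack pipeline the abstract describes — query black-box victims, aggregate their outputs, and train a student on the resulting pseudo-labels — can be sketched with toy stand-ins. Everything below is illustrative, not the paper's actual method: the victim "APIs" are keyword rules rather than deployed NLP services, and the student is a keyword-count table rather than a fine-tuned neural model.

```python
# Toy sketch of a multi-victim imitation (model-extraction) attack.
# The attacker never sees the victims' internals, only their returned labels.

def victim_api_a(text):
    # Opaque sentiment "API" #1 (illustrative keyword rule).
    return "pos" if any(w in text.split() for w in ("good", "great")) else "neg"

def victim_api_b(text):
    # Opaque sentiment "API" #2.
    return "pos" if any(w in text.split() for w in ("great", "fine")) else "neg"

def victim_api_c(text):
    # Opaque sentiment "API" #3.
    return "pos" if any(w in text.split() for w in ("good", "fine")) else "neg"

def ensemble_label(text, apis):
    # Multi-victim ensemble: majority vote over several black-box APIs.
    votes = [api(text) for api in apis]
    return max(set(votes), key=votes.count)

def train_student(queries, apis):
    # "Distill" the victims: accumulate per-word label counts from the
    # ensembled pseudo-labels (a stand-in for training a neural student).
    table = {}
    for q in queries:
        label = ensemble_label(q, apis)
        for word in q.split():
            pos, neg = table.get(word, (0, 0))
            table[word] = (pos + 1, neg) if label == "pos" else (pos, neg + 1)
    return table

def student_predict(table, text):
    # Score a new input with the student's accumulated counts.
    pos = sum(table.get(w, (0, 0))[0] for w in text.split())
    neg = sum(table.get(w, (0, 0))[1] for w in text.split())
    return "pos" if pos >= neg else "neg"

queries = ["good movie", "great food", "bad film", "fine day", "awful plot"]
apis = [victim_api_a, victim_api_b, victim_api_c]
student = train_student(queries, apis)
print(student_predict(student, "great movie"))  # → pos
```

The majority vote is the simplest form of the multi-victim ensemble idea: each victim may err on some inputs, so aggregating several victims' outputs can yield pseudo-labels better than any single victim, which is one way a student can end up outperforming its teachers.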

Original language: English
Title of host publication: Proceedings of the 29th International Conference on Computational Linguistics
Place of publication: New York
Publisher: International Committee on Computational Linguistics
Pages: 2849-2860
Number of pages: 12
Publication status: Published - 2022
Externally published: Yes
Event: 29th International Conference on Computational Linguistics, COLING 2022 - Gyeongju, Korea, Republic of
Duration: 12 Oct 2022 – 17 Oct 2022

Publication series

ISSN (Electronic): 2951-2093

Conference

Conference: 29th International Conference on Computational Linguistics, COLING 2022
Country/Territory: Korea, Republic of
City: Gyeongju
Period: 12/10/22 – 17/10/22
