TY - GEN
T1 - Privacy preserving text data encoding and topic modelling
AU - Vatsalan, Dinusha
AU - Bhaskar, Raghav
AU - Gkoulalas-Divanis, Aris
AU - Karapiperis, Dimitrios
PY - 2021
Y1 - 2021
AB - Textual data, such as clinical notes, product and movie reviews in online stores, transcripts, chat records, and business documents, are widely collected nowadays and can be used to support a broad spectrum of Big Data applications. At the same time, textual data collected about or from individuals can be susceptible to inference attacks that may leak private and/or sensitive information about those individuals.
Increasing concerns about privacy risks preclude the sharing or exchanging of textual data across parties/organizations for applications such as record linkage, similar-entity matching, natural language processing (NLP), and machine learning on large text collections. This has led to the development of privacy-preserving techniques for applying matching, machine learning, or NLP to textual data that contain personal and sensitive information about individuals. While cryptographic techniques are highly secure and accurate, they incur a significant computational cost for encoding and matching data, especially textual data, due to the complex nature of text.
In this paper, we propose an efficient textual data encoding and matching algorithm that uses probabilistic techniques based on counting Bloom filters combined with differential privacy. We apply our algorithm to a popular use case, privacy-preserving topic modelling (a widely used NLP technique), to identify common or collective topics in texts across multiple parties without learning the individual topics of each party, and we show its effectiveness in supporting this application. Finally, through an extensive experimental evaluation on three large text datasets against a state-of-the-art probabilistic encoding algorithm for privacy-preserving LDA topic modelling, we show that our method provides a better privacy-utility trade-off at the cost of higher computational complexity and memory usage, while remaining computationally efficient for Big Data (log-linear complexity in the size of the documents) compared to cryptographic techniques, which have quadratic complexity.
KW - Differential privacy
KW - counting Bloom filters
KW - distance-preserving encoding
KW - textual data
KW - topic models
UR - http://www.scopus.com/inward/record.url?scp=85125312891&partnerID=8YFLogxK
DO - 10.1109/BigData52589.2021.9671552
M3 - Conference proceeding contribution
T3 - IEEE International Conference on Big Data
SP - 1308
EP - 1316
BT - Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021
A2 - Chen, Yixin
A2 - Ludwig, Heiko
A2 - Tu, Yicheng
A2 - Fayyad, Usama
A2 - Zhu, Xingquan
A2 - Hu, Xiaohua Tony
A2 - Byna, Suren
A2 - Liu, Xiong
A2 - Zhang, Jianping
A2 - Pan, Shirui
A2 - Papalexakis, Vagelis
A2 - Wang, Jianwu
A2 - Cuzzocrea, Alfredo
A2 - Ordonez, Carlos
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - Piscataway, NJ
T2 - 2021 IEEE International Conference on Big Data (Big Data)
Y2 - 15 December 2021 through 18 December 2021
ER -