TY - JOUR
T1 - OFEI: a semi-black-box Android adversarial sample attack framework against DLaaS
AU - Xu, Guangquan
AU - Xin, Guohua
AU - Jiao, Litao
AU - Liu, Jian
AU - Liu, Shaoying
AU - Feng, Meiqi
AU - Zheng, Xi
PY - 2024/4/1
Y1 - 2024/4/1
N2 - With the growing popularity of Android devices, Android malware seriously threatens users' safety. Although such threats can be detected by deep learning as a service (DLaaS), deep neural networks, the weakest part of DLaaS, are often deceived by adversarial samples crafted by attackers. In this paper, we propose a new semi-black-box attack framework called one-feature-each-iteration (OFEI) to craft Android adversarial samples. The framework modifies as few features as possible and requires less information about the classifier to fool it. We conduct a controlled experiment to evaluate our OFEI framework against the benchmark methods JSMF, GenAttack and pointwise attack. The experimental results show that OFEI achieves a higher misclassification rate of 98.25%. Furthermore, OFEI can extend traditional white-box attack methods from the image domain, such as the fast gradient sign method (FGSM) and DeepFool, to craft adversarial samples for Android. Finally, to enhance the security of DLaaS, we combine two uncertainties of a Bayesian neural network into a combined uncertainty, which is used to detect adversarial samples and achieves a high detection rate of 99.28%.
KW - Android adversarial samples
KW - deep learning as a service
KW - malware detection
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=85147302167&partnerID=8YFLogxK
UR - https://doi.org/10.48550/arXiv.2105.11593
U2 - 10.1109/TC.2023.3236872
DO - 10.1109/TC.2023.3236872
M3 - Article
AN - SCOPUS:85147302167
SN - 0018-9340
VL - 73
SP - 956
EP - 969
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
IS - 4
ER -