TY - GEN
T1 - Decamouflage: A Framework to Detect Image-Scaling Attacks on CNN
AU - Kim, Bedeuro
AU - Abuadbba, Alsharif
AU - Gao, Yansong
AU - Zheng, Yifeng
AU - Ahmed, Muhammad Ejaz
AU - Nepal, Surya
AU - Kim, Hyoungshick
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/6
Y1 - 2021/6
AB - Image scaling is a common operation used to preprocess input images before they are fed into convolutional neural network (CNN) models. However, this operation is vulnerable to the recently revealed image-scaling attack. This work presents an image-scaling attack detection framework, Decamouflage, consisting of three independent detection methods: scaling, filtering, and steganalysis, which detect the attack by examining distinct image characteristics. Decamouflage uses a pre-determined detection threshold that is generic: as we have validated, a threshold determined on one dataset is also applicable to other datasets. Extensive experiments show that Decamouflage achieves detection accuracy of 99.9% and 98.5% in the white-box and black-box settings, respectively. We also measured its running-time overhead on a PC with an Intel i5 CPU and 8GB RAM; the experimental results show that image-scaling attacks can be detected within milliseconds. Moreover, Decamouflage is highly robust against adaptive image-scaling attacks (e.g., varying attack image sizes).
KW - Adversarial detection
KW - Backdoor detection
KW - Image-scaling attack
UR - http://www.scopus.com/inward/record.url?scp=85114889824&partnerID=8YFLogxK
U2 - 10.1109/DSN48987.2021.00023
DO - 10.1109/DSN48987.2021.00023
M3 - Conference article published in proceeding or book
AN - SCOPUS:85114889824
T3 - Proceedings - 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2021
SP - 63
EP - 74
BT - Proceedings - 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2021
Y2 - 21 June 2021 through 24 June 2021
ER -
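
A rough illustration of the scaling-based detection idea summarized in the abstract (a sketch, not the authors' reference implementation): downscale an image to a presumed CNN input size, upscale it back, and flag the image when the reconstruction error is large. The target input size (224x224), the interpolation modes, and the MSE threshold below are illustrative assumptions, not values taken from the paper.

# Sketch of a scale-and-compare check for image-scaling attacks.
# Assumptions: OpenCV (cv2) and NumPy are available; the 224x224 input
# size and the MSE threshold are placeholders chosen for illustration.
import cv2
import numpy as np

def scaling_detection(image_path, target_size=(224, 224), mse_threshold=1000.0):
    original = cv2.imread(image_path)
    if original is None:
        raise ValueError(f"Could not read image: {image_path}")
    h, w = original.shape[:2]

    # Downscale to the presumed model input size, then upscale back.
    downscaled = cv2.resize(original, target_size, interpolation=cv2.INTER_NEAREST)
    reconstructed = cv2.resize(downscaled, (w, h), interpolation=cv2.INTER_LINEAR)

    # A benign image changes little under this round trip; an attack image,
    # which hides a different picture that only emerges after downscaling,
    # reconstructs poorly and yields a large error.
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return mse > mse_threshold, mse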