TY - JOUR
T1 - Efficient source camera identification with diversity-enhanced patch selection and deep residual prediction
AU - Liu, Yunxia
AU - Zou, Zeyu
AU - Yang, Yang
AU - Law, Ngai Fong Bonnie
AU - Bharath, Anil Anthony
N1 - Funding Information:
This research was funded by the National Key Research and Development Program of China grant number 2018YFC0831100, Shandong Provincial Natural Science Foundation of China grant number ZR2020MF027 and ZR2020MF143, and the fundamental research funds for the central universities of China, grant number 11170032008069.
Acknowledgments: Yunxia Liu acknowledges the research scholarships provided by the Chinese Scholarship Council funding and the Department of Bioengineering, Imperial College London, where the work was partially done.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2021/7/2
Y1 - 2021/7/2
N2 - Source camera identification has long been a hot topic in the field of image forensics. In addition to conventional feature-engineering algorithms developed by studying the traces left upon shooting, several deep-learning-based methods have also emerged recently. However, identification performance is susceptible to image content and is far from satisfactory for small image patches in demanding real-world applications. In this paper, an efficient patch-level source camera identification method is proposed based on a convolutional neural network. First, in order to obtain improved robustness with reduced training cost, representative patches are selected according to multiple criteria for enhanced diversity in the training data. Second, a fine-grained multiscale deep residual prediction module is proposed to reduce the impact of scene content. Finally, a modified VGG network is proposed for source camera identification at the brand, model, and instance levels. A more critical patch-level evaluation protocol is also proposed for fair performance comparison. Extensive experimental results show that the proposed method achieves better results than the state-of-the-art algorithms.
AB - Source camera identification has long been a hot topic in the field of image forensics. In addition to conventional feature-engineering algorithms developed by studying the traces left upon shooting, several deep-learning-based methods have also emerged recently. However, identification performance is susceptible to image content and is far from satisfactory for small image patches in demanding real-world applications. In this paper, an efficient patch-level source camera identification method is proposed based on a convolutional neural network. First, in order to obtain improved robustness with reduced training cost, representative patches are selected according to multiple criteria for enhanced diversity in the training data. Second, a fine-grained multiscale deep residual prediction module is proposed to reduce the impact of scene content. Finally, a modified VGG network is proposed for source camera identification at the brand, model, and instance levels. A more critical patch-level evaluation protocol is also proposed for fair performance comparison. Extensive experimental results show that the proposed method achieves better results than the state-of-the-art algorithms.
KW - Convolutional neural network
KW - Deep learning
KW - Image forensics
KW - Imaging sensors
KW - Source camera identification
UR - http://www.scopus.com/inward/record.url?scp=85109331482&partnerID=8YFLogxK
U2 - 10.3390/s21144701
DO - 10.3390/s21144701
M3 - Journal article
C2 - 34300441
AN - SCOPUS:85109331482
SN - 1424-8220
VL - 21
SP - 1
EP - 22
JO - Sensors
JF - Sensors
IS - 14
M1 - 4701
ER -