TY - JOUR
T1 - Deep Saliency Hashing for Fine-grained Retrieval
AU - Jin, Sheng
AU - Yao, Hongxun
AU - Sun, Xiaoshuai
AU - Zhou, Shangchen
AU - Zhang, Lei
AU - Hua, Xiansheng
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2020/3
Y1 - 2020/3
N2 - In recent years, hashing methods have proven effective and efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share a similar overall appearance but differ only in subtle details. To address this problem, we introduce, for the first time, an attention mechanism into the learning of fine-grained hash codes. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and learns semantic-preserving hash codes simultaneously. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. As the core of DSaH, the saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on fine-grained datasets, including Oxford Flowers, Stanford Dogs, and CUB Birds, demonstrate that DSaH performs best on the fine-grained retrieval task and outperforms the strongest competitor (DTQ) by approximately 10% on both Stanford Dogs and CUB Birds. DSaH is also comparable to several state-of-the-art hashing methods on CIFAR-10 and NUS-WIDE.
AB - In recent years, hashing methods have proven effective and efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share a similar overall appearance but differ only in subtle details. To address this problem, we introduce, for the first time, an attention mechanism into the learning of fine-grained hash codes. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and learns semantic-preserving hash codes simultaneously. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. As the core of DSaH, the saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on fine-grained datasets, including Oxford Flowers, Stanford Dogs, and CUB Birds, demonstrate that DSaH performs best on the fine-grained retrieval task and outperforms the strongest competitor (DTQ) by approximately 10% on both Stanford Dogs and CUB Birds. DSaH is also comparable to several state-of-the-art hashing methods on CIFAR-10 and NUS-WIDE.
KW - deep supervised hashing
KW - Fine-grained retrieval
KW - salient region mining
UR - http://www.scopus.com/inward/record.url?scp=85083195743&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.2971105
DO - 10.1109/TIP.2020.2971105
M3 - Journal article
SN - 1057-7149
VL - 29
SP - 5336
EP - 5351
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 9037360
ER -