Abstract
Owing to its powerful data representation capability, deep learning has achieved remarkable performance in supervised hash function learning. However, most existing hashing methods rely on point-to-point matching, which is overly strict and unnecessary. In this article, we propose a novel deep supervised hashing method that relaxes the matching between each pair of instances to a point-to-angle form. Specifically, an inner product is introduced to asymmetrically measure the similarity and dissimilarity between the real-valued output and the binary code. Unlike existing methods that strictly force each element of the real-valued output to be either +1 or -1, we only encourage the output to be close to its corresponding semantically related binary code in terms of the angle between them. This asymmetric product not only projects both the real-valued output and the binary code into the same Hamming space but also relaxes the output to a wider range of feasible values. To further exploit the semantic affinity, we propose a novel Hamming-distance-based triplet loss, which efficiently ranks positive and negative pairs. An algorithm is then designed to alternately optimize the deep features and the binary codes. Experiments on four real-world data sets demonstrate the effectiveness of our approach and its superiority over the state of the art.
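To make the abstract's point-to-angle idea concrete, the minimal PyTorch-style sketch below illustrates the two loss terms it describes: an asymmetric inner product between real-valued outputs and binary codes that constrains only the angle between them, and a triplet loss expressed through Hamming distance, together with a simple sign-based code update for the alternating step. The function names, the squared-error objective, the margin value, and the quantization update are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def asymmetric_angle_loss(u, b, s):
    """Asymmetric inner-product loss between real-valued outputs and binary codes.

    u: (N, K) real-valued network outputs
    b: (M, K) binary codes with entries in {-1, +1}
    s: (N, M) pairwise labels, +1 for similar pairs and -1 for dissimilar pairs

    Only the angle between u_i and b_j is constrained: u is length-normalized,
    so <u_i, b_j> / K equals cos(angle) and each element of u is free to take
    any value (the point-to-angle relaxation), instead of being forced to +/-1.
    """
    code_len = u.size(1)
    u_hat = math.sqrt(code_len) * F.normalize(u, dim=1)   # ||u_hat|| = sqrt(K), like a binary code
    cos_angle = u_hat @ b.t() / code_len                   # cosine of the angle, in [-1, 1]
    return ((cos_angle - s) ** 2).mean()

def hamming_triplet_loss(u, b_pos, b_neg, margin=2.0):
    """Hamming-distance-based triplet loss (illustrative form).

    The Hamming distance to a binary code is written through the inner product,
    d_H(u, b) = (K - <u, b>) / 2, and each positive pair is pushed to be closer
    than the corresponding negative pair by at least `margin` bits.
    """
    code_len = u.size(1)
    d_pos = (code_len - (u * b_pos).sum(dim=1)) / 2
    d_neg = (code_len - (u * b_neg).sum(dim=1)) / 2
    return F.relu(d_pos - d_neg + margin).mean()

def update_codes(u):
    """Alternating step (simplified): with the network fixed, refresh the
    binary codes by quantizing the current real-valued outputs."""
    return torch.sign(u.detach())
```

In an alternating scheme of this kind, the network parameters are updated by gradient descent on the two losses while the codes are held fixed, and the codes are then refreshed from the outputs, repeating until convergence.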
| Original language | English |
|---|---|
| Article number | 31902779 |
| Pages (from-to) | 4791-4805 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 31 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - Nov 2020 |
Keywords
- Asymmetric, deep learning, hashing learning, point-to-angle, triplet loss