TY - GEN
T1 - (ML)2P-Encoder: On Exploration of Channel-Class Correlation for Multi-Label Zero-Shot Learning
AU - Liu, Ziming
AU - Guo, Song
AU - Lu, Xiaocheng
AU - Guo, Jingcai
AU - Zhang, Jiewei
AU - Zeng, Yue
AU - Huo, Fushuo
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/8
Y1 - 2023/8
N2 - Recent studies usually approach multi-label zero-shot learning (MLZSL) with visual-semantic mapping on spatial-class correlation, which can be computationally costly and, worse still, fails to capture fine-grained class-specific semantics. We observe that different channels usually have different sensitivities to classes, which can correspond to specific semantics. Such an intrinsic channel-class correlation suggests a potential alternative toward more accurate and class-harmonious feature representations. In this paper, our interest is to fully explore the power of channel-class correlation as the unique base for MLZSL. Specifically, we propose a light yet efficient Multi-Label Multi-Layer Perceptron-based Encoder, dubbed (ML)2P-Encoder, to extract and preserve channel-wise semantics. We reorganize the generated feature maps into several groups, each of which can be trained independently with the (ML)2P-Encoder. On top of that, a global group-wise attention module is further designed to build the multi-label specific class relationships among different classes, which eventually fulfills a novel Channel-Class Correlation MLZSL framework (C3-MLZSL). Released code: github.com/simonzmliu/cvpr23-mlzsl. Extensive experiments on large-scale MLZSL benchmarks, including NUS-WIDE and Open-Images-V4, demonstrate the superiority of our model against other representative state-of-the-art models.
AB - Recent studies usually approach multi-label zero-shot learning (MLZSL) with visual-semantic mapping on spatial-class correlation, which can be computationally costly and, worse still, fails to capture fine-grained class-specific semantics. We observe that different channels usually have different sensitivities to classes, which can correspond to specific semantics. Such an intrinsic channel-class correlation suggests a potential alternative toward more accurate and class-harmonious feature representations. In this paper, our interest is to fully explore the power of channel-class correlation as the unique base for MLZSL. Specifically, we propose a light yet efficient Multi-Label Multi-Layer Perceptron-based Encoder, dubbed (ML)2P-Encoder, to extract and preserve channel-wise semantics. We reorganize the generated feature maps into several groups, each of which can be trained independently with the (ML)2P-Encoder. On top of that, a global group-wise attention module is further designed to build the multi-label specific class relationships among different classes, which eventually fulfills a novel Channel-Class Correlation MLZSL framework (C3-MLZSL). Released code: github.com/simonzmliu/cvpr23-mlzsl. Extensive experiments on large-scale MLZSL benchmarks, including NUS-WIDE and Open-Images-V4, demonstrate the superiority of our model against other representative state-of-the-art models.
KW - Transfer, meta, low-shot, continual, or long-tail learning
UR - http://www.scopus.com/inward/record.url?scp=85160824279&partnerID=8YFLogxK
U2 - 10.1109/CVPR52729.2023.02285
DO - 10.1109/CVPR52729.2023.02285
M3 - Conference article published in proceeding or book
AN - SCOPUS:85160824279
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 23859
EP - 23868
BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
PB - IEEE Computer Society
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Y2 - 18 June 2023 through 22 June 2023
ER -