TY - GEN
T1 - Few-shot Class-agnostic Counting with Occlusion Augmentation and Localization
AU - Su, Yuejiao
AU - Wang, Yi
AU - Yao, Lei
AU - Chau, Lap-Pui
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/5
Y1 - 2024/5
N2 - Most existing few-shot class-agnostic counting (FCAC) methods follow an extract-and-compare pipeline to count all instances of an arbitrary category in a query image given a few exemplars. However, these methods generate a density map rather than exact instance locations, which is less intuitive and less accurate. Moreover, most existing work ignores the problem of occlusion. To address these issues, this paper proposes an Occlusion-Augmented Localization Network (OALNet), which extracts multiple occluded features of the exemplars for comparison and utilizes the precise positions of instances for more accurate and confident counting results. Specifically, OALNet follows an extract-and-attention paradigm. It includes an Occluded Feature Generation module to handle occlusion in query images. In addition, OALNet adopts a Feature Attention module that refines the extracted features via self-attention and models the relationship between exemplar features and query features via cross-attention. Experimental results demonstrate that the proposed OALNet achieves superior performance compared with other FCAC methods.
AB - Most existing few-shot class-agnostic counting (FCAC) methods follow an extract-and-compare pipeline to count all instances of an arbitrary category in a query image given a few exemplars. However, these methods generate a density map rather than exact instance locations, which is less intuitive and less accurate. Moreover, most existing work ignores the problem of occlusion. To address these issues, this paper proposes an Occlusion-Augmented Localization Network (OALNet), which extracts multiple occluded features of the exemplars for comparison and utilizes the precise positions of instances for more accurate and confident counting results. Specifically, OALNet follows an extract-and-attention paradigm. It includes an Occluded Feature Generation module to handle occlusion in query images. In addition, OALNet adopts a Feature Attention module that refines the extracted features via self-attention and models the relationship between exemplar features and query features via cross-attention. Experimental results demonstrate that the proposed OALNet achieves superior performance compared with other FCAC methods.
KW - class-agnostic counting
KW - cross-attention
KW - few-shot learning
KW - localization
KW - occlusion augmentation
KW - self-attention
UR - http://www.scopus.com/inward/record.url?scp=85198537989&partnerID=8YFLogxK
U2 - 10.1109/ISCAS58744.2024.10558069
DO - 10.1109/ISCAS58744.2024.10558069
M3 - Conference article published in proceeding or book
AN - SCOPUS:85198537989
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
BT - ISCAS 2024 - IEEE International Symposium on Circuits and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024
Y2 - 19 May 2024 through 22 May 2024
ER -