Abstract
As one of the vital topics in intelligent surveillance, weakly supervised online video anomaly detection (WS-OVAD) aims to identify ongoing anomalous events moment by moment in streaming videos while being trained with only video-level annotations. Previous studies tended to adopt a unified single-stage framework, which struggled to address the online constraint and the weakly supervised setting simultaneously. To resolve this dilemma, in this paper, we propose a two-stage framework, termed “decouple and resolve” (DAR), which consists of two modules: a temporal proposal producer (TPP) and an online anomaly localizer (OAL). Supervised only by video-level binary labels, the TPP module aims to fully exploit hierarchical temporal relations among snippets to generate precise snippet-level pseudo-labels. Given these fine-grained supervisory signals produced by TPP, the Transformer-based OAL module is trained to aggregate useful cues retrieved from historical observations together with anticipated future semantics to make predictions at the current time step. The TPP and OAL modules are jointly trained in a multi-task learning paradigm so that they share beneficial knowledge. Extensive experimental results on three public datasets validate the superior performance of the proposed DAR framework over competing methods.
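To make the two-stage idea concrete, the sketch below illustrates one way such a decoupled design could be organized in PyTorch: a pseudo-label producer trained from video-level labels, and a causal Transformer that scores the current snippet online. This is only a minimal sketch based on the abstract, not the authors' implementation; all class names, layer sizes, the top-k video pooling, and the thresholding rule for pseudo-labels are illustrative assumptions, and the anticipation of future semantics mentioned in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn


class TemporalProposalProducer(nn.Module):
    """Stage 1 (TPP, assumed form): scores snippets using only video-level
    supervision and thresholds the scores into snippet-level pseudo-labels."""

    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.temporal = nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, snippets):  # snippets: (B, T, feat_dim)
        h = self.temporal(snippets.transpose(1, 2)).transpose(1, 2)
        scores = torch.sigmoid(self.scorer(torch.relu(h))).squeeze(-1)  # (B, T)
        # Video-level score from the top-scoring snippets (assumed pooling rule).
        video_score = scores.topk(k=min(4, scores.size(1)), dim=1).values.mean(1)
        # Snippet-level pseudo-labels (assumed 0.5 threshold).
        pseudo_labels = (scores > 0.5).float()
        return scores, video_score, pseudo_labels


class OnlineAnomalyLocalizer(nn.Module):
    """Stage 2 (OAL, assumed form): a causally masked Transformer encoder that
    attends only to past and current snippets when scoring the current step."""

    def __init__(self, feat_dim=1024, d_model=256, nhead=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, snippets):  # snippets: (B, T, feat_dim)
        T = snippets.size(1)
        # Causal mask: position t may not attend to positions > t.
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.proj(snippets), mask=causal)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # per-step scores (B, T)


def training_step(tpp, oal, snippets, video_label, bce=nn.BCELoss()):
    """Joint multi-task step under these assumptions: the TPP is supervised by
    the video-level label, and its pseudo-labels supervise the OAL."""
    _, video_score, pseudo = tpp(snippets)
    online_scores = oal(snippets)
    return bce(video_score, video_label) + bce(online_scores, pseudo.detach())
```

For example, with I3D-style snippet features of shape `(batch, T, 1024)` and a binary video label per clip, `training_step` returns a single loss that can be backpropagated through both modules, which is one plausible reading of the joint multi-task training described above.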
| Original language | English |
|---|---|
| Article number | 9926133 |
| Pages (from-to) | 1 |
| Number of pages | 1 |
| Journal | IEEE Transactions on Information Forensics and Security |
| Publication status | Accepted/In press - 2022 |
Keywords
- Annotations
- Anomaly detection
- long-short-term context
- multi-task learning
- Online video anomaly detection
- Proposals
- Task analysis
- Training
- Transformers
- Videos
- weakly supervised learning
ASJC Scopus subject areas
- Safety, Risk, Reliability and Quality
- Computer Networks and Communications