TY - GEN
T1 - Restoration of User Videos Shared on Social Media
AU - Luo, Hongming
AU - Zhou, Fei
AU - Lam, Kin Man
AU - Qiu, Guoping
N1 - Funding Information:
In this paper, we have developed a general video restoration method and applied it to the problem of restoring user videos shared on social media. A characteristic of such videos is that the types of degradation they have undergone are unknown. To address this problem, we have developed the video restoration through adaptive degradation sensing (VOTES) framework, in which we first introduced the degradation feature map (DFM) concept to measure the difficulty of restoring each location of a frame, and then used convolutional operations to extract hierarchical degradation features from the DFM to modulate an end-to-end video restoration backbone neural network, such that more attention can be paid to the more heavily degraded areas, which in turn helped improve the restoration results both visually and quantitatively. We have also contributed a large database for researching the problem of restoring user videos shared on social media platforms. ACKNOWLEDGMENTS: This work was supported by the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515011584), the Guangdong Basic and Applied Basic Research Foundation (Grant 2019B151502001), and the Shenzhen R&D Program (Grant JCYJ20200109105008228).
Publisher Copyright:
© 2022 ACM.
PY - 2022/10/10
Y1 - 2022/10/10
N2 - User videos shared on social media platforms usually suffer from degradations caused by unknown proprietary processing procedures, which means that their visual quality is poorer than that of the originals. This paper presents a new general video restoration framework for restoring user videos shared on social media platforms. In contrast to most deep learning-based video restoration methods that perform end-to-end mapping, where feature extraction is mostly treated as a black box and the role each feature plays is often unknown, our new method, termed Video restOration through adapTive dEgradation Sensing (VOTES), introduces the concept of a degradation feature map (DFM) to explicitly guide the video restoration process. Specifically, for each video frame, we first adaptively estimate its DFM to extract features representing the difficulty of restoring its different regions. We then feed the DFM to a convolutional neural network (CNN) to compute hierarchical degradation features that modulate an end-to-end video restoration backbone network, such that more attention is paid explicitly to potentially more difficult-to-restore areas, which in turn leads to enhanced restoration performance. We explain the design rationale of the VOTES framework and present extensive experimental results showing that VOTES outperforms various state-of-the-art techniques both quantitatively and qualitatively. In addition, we contribute a large-scale real-world database of user videos shared on different social media platforms. Code and datasets are available at https://github.com/luohongming/VOTES.git
AB - User videos shared on social media platforms usually suffer from degradations caused by unknown proprietary processing procedures, which means that their visual quality is poorer than that of the originals. This paper presents a new general video restoration framework for restoring user videos shared on social media platforms. In contrast to most deep learning-based video restoration methods that perform end-to-end mapping, where feature extraction is mostly treated as a black box and the role each feature plays is often unknown, our new method, termed Video restOration through adapTive dEgradation Sensing (VOTES), introduces the concept of a degradation feature map (DFM) to explicitly guide the video restoration process. Specifically, for each video frame, we first adaptively estimate its DFM to extract features representing the difficulty of restoring its different regions. We then feed the DFM to a convolutional neural network (CNN) to compute hierarchical degradation features that modulate an end-to-end video restoration backbone network, such that more attention is paid explicitly to potentially more difficult-to-restore areas, which in turn leads to enhanced restoration performance. We explain the design rationale of the VOTES framework and present extensive experimental results showing that VOTES outperforms various state-of-the-art techniques both quantitatively and qualitatively. In addition, we contribute a large-scale real-world database of user videos shared on different social media platforms. Code and datasets are available at https://github.com/luohongming/VOTES.git
KW - degradation sensing
KW - social media
KW - video restoration
UR - http://www.scopus.com/inward/record.url?scp=85151062549&partnerID=8YFLogxK
U2 - 10.1145/3503161.3547847
DO - 10.1145/3503161.3547847
M3 - Conference article published in proceeding or book
AN - SCOPUS:85151062549
T3 - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
SP - 2749
EP - 2757
BT - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
T2 - 30th ACM International Conference on Multimedia, MM 2022
Y2 - 10 October 2022 through 14 October 2022
ER -