TY - GEN
T1 - Motion Feature Extraction and Stylization for Character Animation using Hilbert-Huang Transform
AU - Dong, Ran
AU - Lin, Yangfei
AU - Chang, Qiong
AU - Zhong, Junpei
AU - Cai, Dongsheng
AU - Ikuno, Soichiro
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Number JP20K23352, JP21K17833, JP21K17868.
Publisher Copyright:
© 2021 ACM.
PY - 2021/12/28
Y1 - 2021/12/28
N2 - This paper presents novel insights into feature extraction and stylization of character motion in the instantaneous frequency domain by proposing a method using the Hilbert-Huang transform (HHT). HHT decomposes human motion capture data in the frequency domain into several pseudo-monochromatic signals, so-called intrinsic mode functions (IMFs). We propose an algorithm to reconstruct these IMFs and extract motion features automatically using the Fibonacci sequence in the link-dynamical structure of the human body. Our research revealed that these reconstructed motions can be divided mainly into three parts: a primary motion and a secondary motion, corresponding to the animation principles, and a basic motion consisting of posture and position. Our method helps animators edit target motions by blending in the primary or secondary motions extracted from a source motion. To demonstrate results, we applied our proposed method to general motions (jumping, punching, and walking) to achieve different stylizations.
AB - This paper presents novel insights into feature extraction and stylization of character motion in the instantaneous frequency domain by proposing a method using the Hilbert-Huang transform (HHT). HHT decomposes human motion capture data in the frequency domain into several pseudo-monochromatic signals, so-called intrinsic mode functions (IMFs). We propose an algorithm to reconstruct these IMFs and extract motion features automatically using the Fibonacci sequence in the link-dynamical structure of the human body. Our research revealed that these reconstructed motions can be divided mainly into three parts: a primary motion and a secondary motion, corresponding to the animation principles, and a basic motion consisting of posture and position. Our method helps animators edit target motions by blending in the primary or secondary motions extracted from a source motion. To demonstrate results, we applied our proposed method to general motions (jumping, punching, and walking) to achieve different stylizations.
KW - biomechanics
KW - deep learning
KW - feature extraction
KW - Hilbert-Huang transform
KW - motion stylization
UR - http://www.scopus.com/inward/record.url?scp=85122618597&partnerID=8YFLogxK
U2 - 10.1145/3491396.3506524
DO - 10.1145/3491396.3506524
M3 - Conference article published in proceeding or book
AN - SCOPUS:85122618597
T3 - ACM International Conference Proceeding Series
SP - 16
EP - 21
BT - Proceedings of 2021 ACM International Conference on Intelligent Computing and its Emerging Applications, ICEA 2021
PB - Association for Computing Machinery
T2 - 2021 ACM International Conference on Intelligent Computing and its Emerging Applications, ICEA 2021
Y2 - 28 December 2021 through 29 December 2021
ER -