TY - GEN
T1 - On The Generation and Removal of Speaker Adversarial Perturbation For Voice-Privacy Protection
AU - Guo, Chenyang
AU - Chen, Liping
AU - Li, Zhuhai
AU - Lee, Kong Aik
AU - Ling, Zhen-Hua
AU - Guo, Wu
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/12
Y1 - 2024/12
N2 - Neural networks are commonly known to be vulnerable to adversarial attacks mounted through subtle perturbations of the input data. Recent developments in voice-privacy protection have shown positive use cases of the same technique, concealing a speaker's voice attributes with an additive perturbation signal generated by an adversarial network. This paper examines the reversibility property, where the entity generating the adversarial perturbations is authorized to remove them and restore the original speech (e.g., the speaker him/herself). A similar technique could also be used by an investigator to deanonymize voice-protected speech and restore criminals' identities in security and forensic analysis. In this setting, the perturbation generation module is assumed to be known in the removal process. To this end, joint training of the perturbation generation and removal modules is proposed. Experimental results on the LibriSpeech dataset demonstrate that the subtle perturbations added to the original speech can be predicted from the anonymized speech while achieving the goal of privacy protection. By removing these perturbations from the anonymized sample, the original speech can be restored.
AB - Neural networks are commonly known to be vulnerable to adversarial attacks mounted through subtle perturbations of the input data. Recent developments in voice-privacy protection have shown positive use cases of the same technique, concealing a speaker's voice attributes with an additive perturbation signal generated by an adversarial network. This paper examines the reversibility property, where the entity generating the adversarial perturbations is authorized to remove them and restore the original speech (e.g., the speaker him/herself). A similar technique could also be used by an investigator to deanonymize voice-protected speech and restore criminals' identities in security and forensic analysis. In this setting, the perturbation generation module is assumed to be known in the removal process. To this end, joint training of the perturbation generation and removal modules is proposed. Experimental results on the LibriSpeech dataset demonstrate that the subtle perturbations added to the original speech can be predicted from the anonymized speech while achieving the goal of privacy protection. By removing these perturbations from the anonymized sample, the original speech can be restored.
KW - perturbation removal
KW - speaker adversarial perturbation
KW - speaker recognition
KW - voice-privacy protection
UR - https://www.scopus.com/pages/publications/85217398034
U2 - 10.1109/SLT61566.2024.10832243
DO - 10.1109/SLT61566.2024.10832243
M3 - Conference article published in proceeding or book
AN - SCOPUS:85217398034
T3 - Proceedings of 2024 IEEE Spoken Language Technology Workshop, SLT 2024
SP - 1179
EP - 1184
BT - Proceedings of 2024 IEEE Spoken Language Technology Workshop, SLT 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Spoken Language Technology Workshop, SLT 2024
Y2 - 2 December 2024 through 5 December 2024
ER -