Journal of Applied Sciences ›› 2024, Vol. 42 ›› Issue (4): 709-722. doi: 10.3969/j.issn.0255-8297.2024.04.012

• Computer Science and Applications •

Intelligent Synthetic Voice Speaker Verification Method Based on Group-Res2Block

LI Fei1, SU Zhaopin1,2, WANG Niansong3, YANG Bo3, ZHANG Guofu1,2   

  1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, Anhui, China;
    2. Anhui Province Key Laboratory of Industry Safety and Emergency Technology, Hefei University of Technology, Hefei 230601, Anhui, China;
    3. Institute of Forensic Science, Department of Public Security of Anhui Province, Hefei 230000, Anhui, China
  • Received: 2023-02-27  Published: 2024-08-01
  • Corresponding author: SU Zhaopin, associate professor; research interests: multimedia security and machine learning. E-mail: Szp@hfut.edu.cn
  • Supported by: Key Research and Development Program of Anhui Province (No. 202004d07020011, No. 202104d07020001); Open Project of Guangdong Provincial Key Laboratory of Brain-Inspired Intelligent Computation (No. GBL202117); Fundamental Research Funds for the Central Universities (No. PA2021GDSK0073, No. PA2021GDSK0074, No. PA2022GDSK0037)

Abstract: Existing speaker verification methods are designed for natural speech and are therefore ill-suited to intelligently synthesized speech. To address this problem, this paper proposes an intelligent synthetic voice speaker verification method based on Group-Res2Block. First, the Group-Res2Block structure is designed: building on Res2Block, each group is merged with its adjacent preceding and following groups to form a new group, which strengthens the contextual connections among the speaker's local features. Second, a parallel multi-scale channel attention feature fusion mechanism is designed, which uses convolution kernels of different sizes to perform channel-wise feature selection on features at the same level, yielding more expressive speaker features while avoiding information redundancy. Finally, a serial multi-scale layer attention feature fusion mechanism is designed, which builds a layer structure that fuses deep and shallow features as a whole and assigns them different weights to obtain the optimal feature representation. To verify the effectiveness of the proposed feature extraction network, two intelligent synthetic speech datasets, one Chinese and one English, are constructed for ablation and comparative experiments. The results show that the proposed method achieves the best accuracy (ACC), equal error rate (EER), and minimum detection cost function (minDCF) on this task. In addition, generalization tests verify the applicability of the proposed method to unknown intelligent speech synthesis algorithms.
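
As a rough illustration of the Group-Res2Block idea summarized above, the following minimal PyTorch sketch (not the authors' code) shows one way a Res2Block-style channel split could be extended so that each group is merged with its adjacent preceding and following groups before convolution. The module name, the group count (scale), the kernel size, the concatenation-based merge rule, and the zero-padding at the two boundary groups are all illustrative assumptions; the paper's exact design may differ.

# Hedged sketch, not the authors' implementation: a Res2Block-style module
# in which each channel group is merged with its neighboring groups before
# its convolution, as described in the abstract.
import torch
import torch.nn as nn

class GroupRes2BlockSketch(nn.Module):
    def __init__(self, channels: int, scale: int = 4, kernel_size: int = 3):
        super().__init__()
        assert channels % scale == 0, "channels must split evenly into groups"
        self.scale = scale
        width = channels // scale
        # One conv per group; its input is the current group concatenated with
        # its two neighbors (3 * width channels), zero-padded at the borders.
        self.convs = nn.ModuleList(
            nn.Conv1d(3 * width, width, kernel_size, padding=kernel_size // 2)
            for _ in range(scale)
        )
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. frame-level speaker features
        groups = list(torch.chunk(x, self.scale, dim=1))
        zeros = torch.zeros_like(groups[0])
        outs = []
        for i, conv in enumerate(self.convs):
            prev = groups[i - 1] if i > 0 else zeros
            nxt = groups[i + 1] if i < self.scale - 1 else zeros
            merged = torch.cat([prev, groups[i], nxt], dim=1)
            y = conv(merged)
            # Res2Net-style hierarchical residual: this group's output feeds
            # into the next group's input.
            if i < self.scale - 1:
                groups[i + 1] = groups[i + 1] + y
            outs.append(y)
        return self.act(self.bn(torch.cat(outs, dim=1))) + x

# Example: GroupRes2BlockSketch(64)(torch.randn(2, 64, 100)) -> (2, 64, 100)

Zero tensors stand in for the missing neighbor at the two boundary groups, and the hierarchical residual that feeds each group's output into the next group follows the standard Res2Net pattern; both choices are placeholders for whatever the paper actually specifies.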

Key words: speaker verification, intelligent synthetic voice, Group-Res2Block deep neural network, multi-scale features, attention mechanism

CLC Number: