2022-04-09
Abstract (translated):
This work is devoted to capturing an Emirati-accented speech database (Arabic United Arab Emirates database) in both neutral and shouted talking environments, in order to study and improve text-independent Emirati-accented speaker identification in the shouted environment using First-Order Circular Suprasegmental Hidden Markov Models (CSPHMM1s), Second-Order Circular Suprasegmental Hidden Markov Models (CSPHMM2s), and Third-Order Circular Suprasegmental Hidden Markov Models (CSPHMM3s) as classifiers. In this study, the database was collected from fifty native Emirati speakers (twenty-five per gender), each uttering eight common Emirati sentences in both the neutral and shouted talking environments. The features extracted from the collected database are Mel-Frequency Cepstral Coefficients (MFCCs). The results show that the average Emirati-accented speaker identification performance in the neutral environment is 94.0%, 95.2%, and 95.9% based on CSPHMM1s, CSPHMM2s, and CSPHMM3s, respectively. In the shouted environment, the average performance is 51.3%, 55.5%, and 59.3% based on CSPHMM1s, CSPHMM2s, and CSPHMM3s, respectively. The average identification performance achieved in the shouted environment based on CSPHMM3s is very close to that obtained in a subjective assessment by human listeners.
---
English title:
Emirati-Accented Speaker Identification in each of Neutral and Shouted Talking Environments
---
Authors:
Ismail Shahin, Ali Bou Nassif, Mohammed Bahutair
---
Latest submission year:
2018
---
Classification:

Primary category: Computer Science
Secondary category: Sound
Category description: Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound, analysis and synthesis, audio user interfaces, sonification of data, computer music, and sound signal processing. Includes ACM Subject Class H.5.5, and intersects with H.1.2, H.5.1, H.5.2, I.2.7, I.5.4, I.6.3, J.5, K.4.2.
--
Primary category: Electrical Engineering and Systems Science
Secondary category: Audio and Speech Processing
Category description: Theory and methods for processing signals representing audio, speech, and language, and their applications. This includes analysis, synthesis, enhancement, transformation, classification and interpretation of such signals as well as the design, development, and evaluation of associated signal processing systems. Machine learning and pattern analysis applied to any of the above areas is also welcome. Specific topics of interest include: auditory modeling and hearing aids; acoustic beamforming and source localization; classification of acoustic scenes; speaker separation; active noise control and echo cancellation; enhancement; de-reverberation; bioacoustics; music signals analysis, synthesis and modification; music information retrieval; audio for multimedia and joint audio-video processing; spoken and written language modeling, segmentation, tagging, parsing, understanding, and translation; text mining; speech production, perception, and psychoacoustics; speech analysis, synthesis, and perceptual modeling and coding; robust speech recognition; speaker recognition and characterization; deep learning, online learning, and graphical models applied to speech, audio, and language signals; and implementation aspects ranging from system architecture to fast algorithms.
--

---
English abstract:
  This work is devoted to capturing Emirati-accented speech database (Arabic United Arab Emirates database) in each of neutral and shouted talking environments in order to study and enhance text-independent Emirati-accented speaker identification performance in shouted environment based on each of First-Order Circular Suprasegmental Hidden Markov Models (CSPHMM1s), Second-Order Circular Suprasegmental Hidden Markov Models (CSPHMM2s), and Third-Order Circular Suprasegmental Hidden Markov Models (CSPHMM3s) as classifiers. In this research, our database was collected from fifty Emirati native speakers (twenty-five per gender) uttering eight common Emirati sentences in each of neutral and shouted talking environments. The extracted features of our collected database are called Mel-Frequency Cepstral Coefficients (MFCCs). Our results show that average Emirati-accented speaker identification performance in neutral environment is 94.0%, 95.2%, and 95.9% based on CSPHMM1s, CSPHMM2s, and CSPHMM3s, respectively. On the other hand, the average performance in shouted environment is 51.3%, 55.5%, and 59.3% based, respectively, on CSPHMM1s, CSPHMM2s, and CSPHMM3s. The achieved average speaker identification performance in shouted environment based on CSPHMM3s is very similar to that obtained in subjective assessment by human listeners.
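
The pipeline described in the abstract (per-speaker models trained on MFCC features, with identification by maximum likelihood) can be illustrated with off-the-shelf tools. The sketch below is not the paper's method: plain GaussianHMMs from hmmlearn stand in for the circular suprasegmental CSPHMM classifiers, which have no public library implementation, and librosa is assumed for MFCC extraction; all file paths, speaker labels, and parameter values are hypothetical.

```python
# Minimal closed-set speaker-identification sketch: MFCC features plus one
# HMM per speaker. GaussianHMMs are a stand-in for the paper's CSPHMMs.
import numpy as np
import librosa
from hmmlearn import hmm

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load a wav file and return an (n_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(train_sets, n_states=5):
    """train_sets: dict mapping speaker label -> list of training wav paths."""
    models = {}
    for speaker, paths in train_sets.items():
        feats = [extract_mfcc(p) for p in paths]
        X = np.vstack(feats)                      # stack all frames
        lengths = [f.shape[0] for f in feats]     # per-utterance frame counts
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[speaker] = m
    return models

def identify(models, path):
    """Return the speaker whose model scores the test utterance highest."""
    X = extract_mfcc(path)
    return max(models, key=lambda s: models[s].score(X))

# Hypothetical usage (placeholder paths and labels):
# models = train_speaker_models({"spk01": ["spk01_s1.wav"], "spk02": ["spk02_s1.wav"]})
# print(identify(models, "unknown_utterance.wav"))
```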
---
PDF link:
https://arxiv.org/pdf/1804.00981