2022-03-06
Abstract (translated):
Attention-based encoder-decoder architectures such as Listen, Attend and Spell (LAS) subsume the acoustic, pronunciation, and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear whether they would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model that significantly improve performance. On the structural side, we show that word piece models can be used in place of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, all of which are shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500-hour voice search task, we find that the proposed changes reduce the WER from 9.2% to 5.6%, while the best conventional system achieves 6.7%; on a dictation task, our model achieves a WER of 4.1%, compared to 5% for the conventional system.
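To make the optimization-side ideas above more concrete, here is a toy PyTorch training-loop sketch of label smoothing and scheduled sampling; the small GRU decoder, the sizes, and the 25% sampling probability are illustrative assumptions, not the paper's actual setup.

import torch
import torch.nn as nn

vocab, hidden = 100, 32                       # toy sizes, not the paper's configuration
embed = nn.Embedding(vocab, hidden)
decoder_cell = nn.GRUCell(hidden, hidden)     # stand-in for a real attention decoder
proj = nn.Linear(hidden, vocab)
# Label smoothing: spread a small amount of probability mass over all labels.
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)

targets = torch.randint(0, vocab, (4, 12))    # (batch, length) reference token ids
state = torch.zeros(4, hidden)
prev = targets[:, 0]
sampling_prob = 0.25                          # fraction of steps fed with model output
loss = 0.0

for t in range(1, targets.size(1)):
    state = decoder_cell(embed(prev), state)
    logits = proj(state)
    loss = loss + loss_fn(logits, targets[:, t])
    # Scheduled sampling: occasionally condition on the model's own prediction
    # instead of the ground-truth token, so training better matches inference.
    if torch.rand(1).item() < sampling_prob:
        prev = logits.argmax(dim=-1)
    else:
        prev = targets[:, t]

print(loss / (targets.size(1) - 1))           # average per-step training loss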
---
English title:
《State-of-the-art Speech Recognition With Sequence-to-Sequence Models》
---
Authors:
Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar,
  Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao,
  Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, Michiel Bacchiani
---
Latest submission year:
2018
---
Classification:

Primary category: Computer Science
Secondary category: Computation and Language
Description: Covers natural language processing. Roughly includes material in ACM Subject Class I.2.7. Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.
--
Primary category: Computer Science
Secondary category: Sound
Description: Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound, analysis and synthesis, audio user interfaces, sonification of data, computer music, and sound signal processing. Includes ACM Subject Class H.5.5, and intersects with H.1.2, H.5.1, H.5.2, I.2.7, I.5.4, I.6.3, J.5, K.4.2.
--
Primary category: Electrical Engineering and Systems Science
Secondary category: Audio and Speech Processing
Description: Theory and methods for processing signals representing audio, speech, and language, and their applications. This includes analysis, synthesis, enhancement, transformation, classification and interpretation of such signals as well as the design, development, and evaluation of associated signal processing systems. Machine learning and pattern analysis applied to any of the above areas is also welcome. Specific topics of interest include: auditory modeling and hearing aids; acoustic beamforming and source localization; classification of acoustic scenes; speaker separation; active noise control and echo cancellation; enhancement; de-reverberation; bioacoustics; music signals analysis, synthesis and modification; music information retrieval; audio for multimedia and joint audio-video processing; spoken and written language modeling, segmentation, tagging, parsing, understanding, and translation; text mining; speech production, perception, and psychoacoustics; speech analysis, synthesis, and perceptual modeling and coding; robust speech recognition; speaker recognition and characterization; deep learning, online learning, and graphical models applied to speech, audio, and language signals; and implementation aspects ranging from system architecture to fast algorithms.
--
Primary category: Statistics
Secondary category: Machine Learning
Description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high dimensional inference, etc.) with a statistical or theoretical grounding
--

---
English abstract:
  Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS), subsume the acoustic, pronunciation and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear if such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly-used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, which are all shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500 hour voice search task, we find that the proposed changes improve the WER from 9.2% to 5.6%, while the best conventional system achieves 6.7%; on a dictation task our model achieves a WER of 4.1% compared to 5% for the conventional system.
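As a rough illustration of the architecture the abstract describes, below is a minimal PyTorch sketch of an LAS-style model: a unidirectional LSTM encoder over acoustic frames (compatible with streaming) and a decoder that applies multi-head attention over the encoder outputs before predicting word-piece tokens. The layer counts, dimensions, number of heads, and vocabulary size are placeholder assumptions, and the attention query is simplified (token embeddings rather than decoder states) to keep the sketch short; this is not the paper's implementation.

import torch
import torch.nn as nn


class LASSketch(nn.Module):
    """Minimal LAS-style encoder-decoder; all sizes are illustrative only."""

    def __init__(self, feat_dim=80, hidden=512, vocab=4096, heads=4):
        super().__init__()
        # "Listen": unidirectional LSTM encoder, suitable for streaming input.
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        # "Attend": multi-head attention over encoder states (vs. single-head).
        self.attention = nn.MultiheadAttention(hidden, num_heads=heads, batch_first=True)
        # "Spell": predict word-piece tokens autoregressively.
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.output = nn.Linear(hidden, vocab)

    def forward(self, frames, prev_tokens):
        # frames: (batch, time, feat_dim); prev_tokens: (batch, out_len) word-piece ids.
        enc_out, _ = self.encoder(frames)                   # (B, T, H)
        emb = self.embed(prev_tokens)                       # (B, U, H)
        # Simplification: token embeddings act as the attention queries.
        context, _ = self.attention(emb, enc_out, enc_out)  # (B, U, H)
        dec_out, _ = self.decoder(torch.cat([emb, context], dim=-1))
        return self.output(dec_out)                         # (B, U, vocab) logits


model = LASSketch()
frames = torch.randn(2, 200, 80)               # two utterances of 80-dim filterbank frames
prev_tokens = torch.randint(0, 4096, (2, 10))  # ten previously emitted word pieces each
print(model(frames, prev_tokens).shape)        # torch.Size([2, 10, 4096])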
---
PDF link:
https://arxiv.org/pdf/1712.01769