Abstract:
This paper proposes a Region-based Convolutional Recurrent Neural Network (R-CRNN) for audio event detection (AED). The network is inspired by Faster-RCNN, a well-known region-based convolutional network framework for visual object detection. Unlike the original Faster-RCNN, a recurrent layer is added on top of the convolutional network to capture long-term temporal context from the extracted high-level features. Most previous work on AED first generates predictions at the frame level and then uses post-processing to derive event onset/offset timestamps from the resulting probability sequence; the proposed method instead generates predictions directly at the event level and can be trained end-to-end with a multitask loss that jointly optimizes the classification and localization of audio events. The method is evaluated on the DCASE 2017 Challenge dataset. To the best of our knowledge, R-CRNN is the best-performing single-model (ensemble-free) method on both the development and evaluation sets. Compared with the other region-based network for AED, R-FCN, which reports an event-based error rate (ER) of 0.18 on the development set, our method cuts the ER in half.
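The architecture described above combines a convolutional feature extractor, a recurrent layer for long-term temporal context, and a multitask loss over classification and onset/offset localization. As a concrete illustration, here is a minimal PyTorch sketch of that combination. It is not the paper's implementation: the Faster-RCNN-style region-proposal machinery is omitted, the heads below operate per time step rather than per proposed region, and all layer sizes, the class count, and the loss weighting are illustrative assumptions.

```python
# Minimal sketch of the CRNN backbone plus multitask loss described in the
# abstract. NOTE: an illustration, not the paper's model. Region proposals
# are omitted; the heads predict per time step instead of per region, and
# all hyperparameters below are assumptions.
import torch
import torch.nn as nn

class RCRNNSketch(nn.Module):
    def __init__(self, n_mels=64, n_classes=17, hidden=128):
        super().__init__()
        # Convolutional front end over a log-mel spectrogram; pooling is
        # applied along the frequency axis only, preserving time resolution.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 4)),  # pooling kernel: (time=1, freq=4)
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # Recurrent layer on top of the conv features, capturing long-term
        # temporal context (the key difference from the original Faster-RCNN).
        self.rnn = nn.GRU(64 * (n_mels // 16), hidden,
                          batch_first=True, bidirectional=True)
        # Two heads for the multitask loss: class scores and
        # (onset, offset) regression targets.
        self.cls_head = nn.Linear(2 * hidden, n_classes)
        self.loc_head = nn.Linear(2 * hidden, 2)

    def forward(self, spec):
        # spec: (batch, time, n_mels) log-mel spectrogram
        x = self.conv(spec.unsqueeze(1))            # -> (B, C, T, F')
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)                          # -> (B, T, 2*hidden)
        return self.cls_head(x), self.loc_head(x)

def multitask_loss(cls_logits, loc_pred, cls_target, loc_target, lam=1.0):
    """Joint classification + localization objective, in the spirit of
    Faster-RCNN-style multitask losses. lam (assumed) balances the terms."""
    cls_loss = nn.functional.cross_entropy(
        cls_logits.flatten(0, 1), cls_target.flatten())
    loc_loss = nn.functional.smooth_l1_loss(loc_pred, loc_target)
    return cls_loss + lam * loc_loss

# Smoke test on random data: 2 clips, 500 frames, 64 mel bins.
model = RCRNNSketch()
spec = torch.randn(2, 500, 64)
cls_logits, loc_pred = model(spec)
loss = multitask_loss(cls_logits, loc_pred,
                      torch.randint(0, 17, (2, 500)),
                      torch.rand(2, 500, 2))
print(cls_logits.shape, loc_pred.shape, float(loss))
```

In the paper itself, a region-proposal stage over the recurrent features would replace these per-frame heads, so that classification and onset/offset regression are made once per proposed event region rather than once per time step.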
---
Title:
R-CRNN: Region-based Convolutional Recurrent Neural Network for Audio Event Detection
---
Authors:
Chieh-Chi Kao, Weiran Wang, Ming Sun, Chao Wang
---
Year of latest submission:
2018
---
Classification:
Primary: Computer Science
Secondary: Sound
Description: Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound, analysis and synthesis, audio user interfaces, sonification of data, computer music, and sound signal processing. Includes ACM Subject Class H.5.5, and intersects with H.1.2, H.5.1, H.5.2, I.2.7, I.5.4, I.6.3, J.5, K.4.2.
--
Primary: Electrical Engineering and Systems Science
Secondary: Audio and Speech Processing
Description: Theory and methods for processing signals representing audio, speech, and language, and their applications. This includes analysis, synthesis, enhancement, transformation, classification and interpretation of such signals as well as the design, development, and evaluation of associated signal processing systems. Machine learning and pattern analysis applied to any of the above areas is also welcome. Specific topics of interest include: auditory modeling and hearing aids; acoustic beamforming and source localization; classification of acoustic scenes; speaker separation; active noise control and echo cancellation; enhancement; de-reverberation; bioacoustics; music signals analysis, synthesis and modification; music information retrieval; audio for multimedia and joint audio-video processing; spoken and written language modeling, segmentation, tagging, parsing, understanding, and translation; text mining; speech production, perception, and psychoacoustics; speech analysis, synthesis, and perceptual modeling and coding; robust speech recognition; speaker recognition and characterization; deep learning, online learning, and graphical models applied to speech, audio, and language signals; and implementation aspects ranging from system architecture to fast algorithms.
---
PDF link:
https://arxiv.org/pdf/1808.06627