Title:
Multiple Instance Deep Learning for Weakly Supervised Small-Footprint Audio Event Detection
---
Authors:
Shao-Yen Tseng, Juncheng Li, Yun Wang, Joseph Szurley, Florian Metze,
Samarjit Das
---
Latest submission year:
2018
---
Classification:
Primary: Computer Science
Secondary: Sound
Description: Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound, analysis and synthesis, audio user interfaces, sonification of data, computer music, and sound signal processing. Includes ACM Subject Class H.5.5, and intersects with H.1.2, H.5.1, H.5.2, I.2.7, I.5.4, I.6.3, J.5, K.4.2.
--
Primary: Electrical Engineering and Systems Science
Secondary: Audio and Speech Processing
Description: Theory and methods for processing signals representing audio, speech, and language, and their applications. This includes analysis, synthesis, enhancement, transformation, classification and interpretation of such signals as well as the design, development, and evaluation of associated signal processing systems. Machine learning and pattern analysis applied to any of the above areas is also welcome. Specific topics of interest include: auditory modeling and hearing aids; acoustic beamforming and source localization; classification of acoustic scenes; speaker separation; active noise control and echo cancellation; enhancement; de-reverberation; bioacoustics; music signals analysis, synthesis and modification; music information retrieval; audio for multimedia and joint audio-video processing; spoken and written language modeling, segmentation, tagging, parsing, understanding, and translation; text mining; speech production, perception, and psychoacoustics; speech analysis, synthesis, and perceptual modeling and coding; robust speech recognition; speaker recognition and characterization; deep learning, online learning, and graphical models applied to speech, audio, and language signals; and implementation aspects ranging from system architecture to fast algorithms.
--
---
Abstract:
State-of-the-art audio event detection (AED) systems rely on supervised learning using strongly labeled data. However, this dependence severely limits scalability to large-scale datasets where fine resolution annotations are too expensive to obtain. In this paper, we propose a small-footprint multiple instance learning (MIL) framework for multi-class AED using weakly annotated labels. The proposed MIL framework uses audio embeddings extracted from a pre-trained convolutional neural network as input features. We show that by using audio embeddings the MIL framework can be implemented using a simple DNN with performance comparable to recurrent neural networks. We evaluate our approach by training an audio tagging system using a subset of AudioSet, which is a large collection of weakly labeled YouTube video excerpts. Combined with a late-fusion approach, we improve the F1 score of a baseline audio tagging system by 17%. We show that audio embeddings extracted by the convolutional neural networks significantly boost the performance of all MIL models. This framework reduces the model complexity of the AED system and is suitable for applications where computational resources are limited.
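To make the described framework concrete, below is a minimal PyTorch sketch of a MIL audio tagger operating on pre-extracted embeddings. It assumes 128-dimensional per-second embeddings from a pre-trained CNN (as in AudioSet's released features) and uses max pooling as the MIL aggregation; the class name MILTagger, the layer sizes, and the pooling choice are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class MILTagger(nn.Module):
    """Hypothetical MIL tagger: scores each per-second embedding with a
    shared DNN, then max-pools the instance scores over time to obtain a
    clip-level (bag-level) prediction."""
    def __init__(self, embed_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        # Instance-level DNN, shared across all time steps.
        self.instance_net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, bags):
        # bags: (batch, num_instances, embed_dim); each instance is one
        # per-second embedding from the pre-trained CNN.
        instance_logits = self.instance_net(bags)   # (batch, T, classes)
        bag_logits, _ = instance_logits.max(dim=1)  # MIL max pooling over time
        return bag_logits

# Training uses only weak (clip-level) labels: no per-second annotation.
model = MILTagger()
bags = torch.randn(4, 10, 128)                      # 4 clips, 10 s each, 128-d
weak_labels = torch.randint(0, 2, (4, 10)).float()  # multi-label clip tags
loss = nn.BCEWithLogitsLoss()(model(bags), weak_labels)
loss.backward()

# Late fusion (also a sketch): average the per-class probabilities of
# independently trained models to form the final clip-level scores.
model_b = MILTagger()
fused = torch.stack([torch.sigmoid(m(bags)) for m in (model, model_b)]).mean(dim=0)

Because the temporal aggregation here is a single pooling step over pre-computed embeddings, the trainable model stays a small feed-forward DNN, which is what makes the footprint small relative to a recurrent alternative.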
---
PDF link:
https://arxiv.org/pdf/1712.09673