Abstract (translation):
Estimating the correspondences between pixels in sequences of images is a critical first step for many tasks, including vision-aided navigation (e.g., visual odometry (VO), visual-inertial odometry (VIO), and visual simultaneous localization and mapping (VSLAM)) and anomaly detection. We introduce a new unsupervised deep neural network architecture called the Visual-Inertial Flow (VIFlow) network and demonstrate image correspondence and optical flow estimation by an unsupervised multi-hypothesis deep neural network receiving grayscale imagery and extra-visual inertial measurements. VIFlow learns to combine heterogeneous sensor streams and to sample from an unknown, un-parametrized noise distribution in order to generate several (4 or 8 in this work) probable hypotheses for the pixel-level correspondence mapping between a source image and a target image. We quantitatively benchmark VIFlow against several leading vision-only dense correspondence and flow methods and show a substantial decrease in runtime and an increase in efficiency compared to all methods with similar performance to state-of-the-art (SOA) dense correspondence matching approaches. We also present qualitative results showing how VIFlow can be used to detect anomalous independent motion.
---
Title:
Multi-Hypothesis Visual-Inertial Flow
---
Authors:
E. Jared Shamwell, William D. Nothwang, Donald Perlis
---
Latest submission year:
2018
---
Classification:
Primary category: Electrical Engineering and Systems Science
Secondary category: Image and Video Processing
Category description: Theory, algorithms, and architectures for the formation, capture, processing, communication, analysis, and display of images, video, and multidimensional signals in a wide variety of applications. Topics of interest include: mathematical, statistical, and perceptual image and video modeling and representation; linear and nonlinear filtering, de-blurring, enhancement, restoration, and reconstruction from degraded, low-resolution or tomographic data; lossless and lossy compression and coding; segmentation, alignment, and recognition; image rendering, visualization, and printing; computational imaging, including ultrasound, tomographic and magnetic resonance imaging; and image and video analysis, synthesis, storage, search and retrieval.
--
Primary category: Computer Science
Secondary category: Robotics
Category description: Roughly includes material in ACM Subject Class I.2.9.
--
---
English abstract:
Estimating the correspondences between pixels in sequences of images is a critical first step for a myriad of tasks including vision-aided navigation (e.g., visual odometry (VO), visual-inertial odometry (VIO), and visual simultaneous localization and mapping (VSLAM)) and anomaly detection. We introduce a new unsupervised deep neural network architecture called the Visual Inertial Flow (VIFlow) network and demonstrate image correspondence and optical flow estimation by an unsupervised multi-hypothesis deep neural network receiving grayscale imagery and extra-visual inertial measurements. VIFlow learns to combine heterogeneous sensor streams and sample from an unknown, un-parametrized noise distribution to generate several (4 or 8 in this work) probable hypotheses on the pixel-level correspondence mappings between a source image and a target image. We quantitatively benchmark VIFlow against several leading vision-only dense correspondence and flow methods and show a substantial decrease in runtime and increase in efficiency compared to all methods with similar performance to state-of-the-art (SOA) dense correspondence matching approaches. We also present qualitative results showing how VIFlow can be used for detecting anomalous independent motion.
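The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical sketch of a multi-hypothesis visual-inertial flow predictor, not the authors' implementation. The choice of PyTorch, the class name `VIFlowSketch`, the layer sizes, the 6-dimensional IMU input, and the per-head noise injection are all assumptions made for illustration; only the overall idea (a grayscale image pair plus inertial measurements plus raw noise samples in, K dense flow hypotheses out) comes from the abstract.

```python
# Hypothetical sketch of a multi-hypothesis visual-inertial flow predictor.
# All names, layer sizes, and the use of PyTorch are assumptions; this is not
# the VIFlow architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VIFlowSketch(nn.Module):
    """Maps a grayscale image pair, IMU readings, and noise samples to K flow hypotheses."""

    def __init__(self, num_hypotheses=4, imu_dim=6):
        super().__init__()
        self.k = num_hypotheses
        # Convolutional encoder over the stacked source/target grayscale pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Small MLP for the extra-visual inertial measurements.
        self.imu_net = nn.Sequential(nn.Linear(imu_dim, 64), nn.ReLU(), nn.Linear(64, 128))
        # One decoder head per hypothesis; raw noise is injected per head so each
        # head can learn to cover a different mode of the correspondence map.
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(128 + 128 + 8, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 2, 3, padding=1),  # 2-channel (dx, dy) flow field
            )
            for _ in range(num_hypotheses)
        )

    def forward(self, img_pair, imu):
        b, _, h, w = img_pair.shape
        feat = self.encoder(img_pair)                       # (B, 128, h/8, w/8)
        imu_feat = self.imu_net(imu)[:, :, None, None]      # (B, 128, 1, 1)
        imu_feat = imu_feat.expand(-1, -1, feat.shape[2], feat.shape[3])
        flows = []
        for head in self.heads:
            # Un-parametrized noise: the network learns how to use raw samples.
            noise = torch.randn(b, 8, feat.shape[2], feat.shape[3], device=feat.device)
            flow = head(torch.cat([feat, imu_feat, noise], dim=1))
            flows.append(F.interpolate(flow, size=(h, w), mode="bilinear", align_corners=False))
        return torch.stack(flows, dim=1)                    # (B, K, 2, H, W)


if __name__ == "__main__":
    net = VIFlowSketch(num_hypotheses=4)
    pair = torch.randn(1, 2, 128, 416)   # stacked grayscale source + target
    imu = torch.randn(1, 6)              # e.g., 3-axis gyro + 3-axis accel
    print(net(pair, imu).shape)          # torch.Size([1, 4, 2, 128, 416])
```

Injecting independent noise samples into each head is one common way to let a multi-hypothesis network spread its predictions over several plausible correspondence maps; the paper's actual sampling mechanism and loss may differ.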
---
PDF link:
https://arxiv.org/pdf/1803.05727