Yoav Y. Schechner: Research

Pixels That Sound

People and animals fuse auditory and visual information to obtain robust perception. A particular benefit of such cross-modal analysis is the ability to localize visual events associated with sound sources. We aim to achieve this using computer vision aided by a single microphone.

Past efforts encountered problems stemming from the huge gap between the dimensionality of the data and the number of available temporal samples. This has led to solutions suffering from low spatio-temporal resolution. We present a rigorous analysis of the fundamental problems associated with this task. Then, we present a stable and robust algorithm which overcomes past deficiencies. It captures dynamic audio-visual events with high spatial resolution and derives a unique solution. The algorithm effectively detects pixels that are associated with the sound, while filtering out other dynamic pixels.

The algorithm is based on canonical correlation analysis (CCA), where we remove inherent ill-posedness by exploiting the typical spatial sparsity of audio-visual events. The algorithm is simple and efficient thanks to its reliance on linear programming, and it is free of user-defined parameters. To quantitatively assess the performance, we devise a localization criterion. The algorithm's capabilities were demonstrated in experiments, where it overcame substantial visual distractions and audio noise.
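The full method couples audio and visual weight vectors through CCA; as a simplified illustration of the linear-programming step alone, the Python sketch below solves a basis-pursuit problem: it seeks the sparsest pixel weighting that explains an audio track. The function name, feature choices, and synthetic data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_av_localization(V, a):
    """Toy sketch: find a sparse pixel-weight vector w with V @ w = a.

    V : (T, P) array of per-pixel temporal features (e.g. frame differences)
    a : (T,)   array of per-frame audio features (e.g. band energy)

    Minimizing ||w||_1 subject to V w = a is recast as a linear program
    over variables [w, u], where u bounds |w| elementwise.
    """
    T, P = V.shape
    c = np.concatenate([np.zeros(P), np.ones(P)])  # objective: sum(u)
    I = np.eye(P)
    A_ub = np.block([[I, -I], [-I, -I]])           # w - u <= 0, -w - u <= 0
    b_ub = np.zeros(2 * P)
    A_eq = np.hstack([V, np.zeros((T, P))])        # V w = a (u unused here)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=a,
                  bounds=[(None, None)] * P + [(0, None)] * P)
    return res.x[:P]

# Synthetic check: two "sounding" pixels out of 100 drive the audio track.
rng = np.random.default_rng(0)
T, P = 40, 100
w_true = np.zeros(P)
w_true[[7, 42]] = [1.0, 0.5]
V = rng.standard_normal((T, P))
a = V @ w_true
w = sparse_av_localization(V, a)
print(np.flatnonzero(np.abs(w) > 1e-6))            # expected: [ 7 42]
```

Even with far fewer frames than pixels (here 40 versus 100), the L1 objective singles out the few pixels correlated with the audio, which is the intuition behind resolving the ill-posedness of CCA via sparsity.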

Publications

  1. Einat Kidron, Yoav Y. Schechner and Michael Elad, “Pixels that sound,” Proc. IEEE CVPR, Vol. 1, pp. 88-96 (2005).
  2. Einat Kidron, Yoav Y. Schechner and Michael Elad, “Cross-modal localization via sparsity,” IEEE Trans. Signal Processing, Vol. 55, No. 4, pp. 1390-1404 (2007).

Presentations

  1. “Pixels that sound” (9.5 MB, PowerPoint)

Related Research

  1. Cross-modal Denoising
  2. Audio Inpainting
  3. Harmony in Motion

Movies

  1. Movie #1 - original and results (8.3 MB MPEG, 1.9 MB MPEG)
  2. Movie #2 - original and results (9.0 MB MPEG, 1.9 MB MPEG)