Yoav Y. Schechner: Research

Home
Contact

Foveated Video Extrapolation

Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original footage and visually pleasing is difficult. In this work we aim at very wide video extrapolation, which further increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach reduces artifacts and run time using foveated video extrapolation, but fails to preserve the structure of the scene. This work introduces a multiscale method that combines a coarse-to-fine approach with foveated video extrapolation. Foveation reduces the effective number of pixels that need to be extrapolated, making the extrapolation faster and less prone to artifacts. The coarse-to-fine approach better preserves the global structure of the scene while retaining finer details near the domain of the input video. The combined method improves both visual quality and processing time. We further study the incorporation of mosaicing and other state-of-the-art completion algorithms into our framework, and discuss their advantages and shortcomings.
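The pixel-count saving of foveation can be illustrated with a small back-of-the-envelope sketch. The function below is a hypothetical illustration, not the paper's method: it assumes the coarsest pyramid level fills the entire extrapolated margin at low resolution, while each finer level synthesizes only a narrow ring (of an illustrative width) around the region already covered, and compares that budget with naive full-resolution extrapolation. All sizes and the ring width are made-up parameters.

```python
def foveated_pixel_budget(in_w, in_h, out_w, out_h, levels, ring=16):
    """Return (naive, foveated) counts of pixels that must be synthesized.

    naive    -- extrapolate every output pixel outside the input at full
                resolution
    foveated -- coarsest pyramid level fills the whole margin (cheap at low
                resolution); each finer level synthesizes only a `ring`-wide
                band hugging the input boundary (illustrative assumption)
    """
    naive = out_w * out_h - in_w * in_h
    foveated = 0
    # Work from the coarsest level (largest downsampling factor) inward.
    for lvl in range(levels - 1, -1, -1):
        f = 2 ** lvl                   # downsampling factor at this level
        w, h = out_w // f, out_h // f  # output size at level resolution
        iw, ih = in_w // f, in_h // f  # input size at level resolution
        if lvl == levels - 1:
            # Coarsest level: extrapolate the full margin at low resolution.
            foveated += w * h - iw * ih
        else:
            # Finer levels: only a ring of width `ring` around the input.
            band_w = min(iw + 2 * ring, w)
            band_h = min(ih + 2 * ring, h)
            foveated += band_w * band_h - iw * ih
    return naive, foveated

naive, fov = foveated_pixel_budget(320, 240, 960, 720, levels=4, ring=16)
print(naive, fov)  # the foveated budget is an order of magnitude smaller
```

For example, extrapolating a 320x240 clip to 960x720 with four levels under these assumptions synthesizes roughly 44k pixels per frame instead of about 614k, which is the intuition behind the reported reduction in processing time.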

Publications

  1. Amit Aides, Tamar Avraham and Yoav Y. Schechner, “Multiscale ultrawide foveated video extrapolation,” Proc. IEEE International Conference on Computational Photography (ICCP), 2011.
  2. Tamar Avraham and Yoav Y. Schechner, “Ultrawide foveated video extrapolation,” IEEE Journal of Selected Topics in Signal Processing 5, pp. 321-334 (2011), special issue on Recent Advances in Processing for Consumer Displays.

Presentations

  1. “Foveated Extrapolation” (7 MB, PowerPoint)
  2. The above presentation links to seven movies: SemiTrailer_original (0.25 MB, AVI), SemiTrailer_insideout (0.25 MB, AVI), pad (16.5 MB, AVI), insideout_examples (2.7 MB, AVI), bad_insideout (0.6 MB, AVI), examples (5.3 MB, AVI) and hollywood_examples2 (9.5 MB, AVI).
  3. “Multiscale Ultrawide Foveated Video Extrapolation,” talk given at the IEEE International Conference on Computational Photography (ICCP), 2011.
5.3 MB MPEG
Outside-In and Inside-Out Comparison (using original clips taken from “ReefVid”)


9.5 MB MPEG
Extrapolation using the Outside-In Method (original clips taken from the “Human Actions and Scenes Dataset”)