CGM Project by Yael Berberian & Alina Koifman

Supervised by Dmitry Rudoy

Abstract

Reconstructing a 3D scene from 2D images taken from different directions is a fundamental problem in computer vision and computer graphics.

Our project aims to reconstruct a dynamic scene from synchronized video streams captured by calibrated cameras surrounding the scene.


Algorithm

Our solution addresses motion capture from synchronized cameras as a 3D tracking problem and is based on the algorithm proposed by Furukawa and Ponce [1]. The 3D model is represented by vertices in 3D space. The algorithm includes three optimization processes, of which the first two are local and the third is global. The local optimizations use a rigid motion model for small neighborhoods of vertices, while the global one uses a non-rigid model for the whole mesh.
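The local rigid step can be illustrated as a least-squares rigid fit over a small neighborhood of tracked vertices. Below is a minimal sketch using the Kabsch algorithm in NumPy; the function name and this particular formulation are our illustration, not the project's actual code:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch).

    src, dst: (N, 3) arrays of corresponding vertex positions in two frames.
    Returns rotation R (3x3) and translation t (3,) with dst ~ src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In a tracking setting, such a fit would be computed per neighborhood and used to predict each vertex's position in the next frame before refinement.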

The main steps of the algorithm are described in the following chart:

[Flowchart: main steps of the algorithm]

Results

Vertices are tracked from frame to frame, forming the 3D model.

Example 1:  Vertices’ motion between two frames. The arrows point to the new location of each vertex in the next frame.
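In code, these arrows are simply per-vertex displacement vectors between consecutive frames; a minimal sketch (the array layout is our assumption, not the project's data format):

```python
import numpy as np

def displacements(frames):
    """Motion vectors between consecutive frames.

    frames: list of (N, 3) arrays of tracked vertex positions, one per
    video frame. Returns a list of (N, 3) arrays; entry i holds the
    arrow from each vertex in frame i to its position in frame i + 1.
    """
    return [nxt - cur for cur, nxt in zip(frames, frames[1:])]
```

Each returned array can be drawn as a quiver plot over the corresponding frame to visualize the tracked motion.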


Example 2:  Projection of the reconstructed 3D scene on one camera.
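Projecting the reconstructed vertices onto one of the calibrated cameras is a standard pinhole-camera operation (see Hartley and Zisserman [2]); a minimal sketch, assuming the camera is given as a 3x4 projection matrix P = K[R|t]:

```python
import numpy as np

def project(P, X):
    """Project world points through a calibrated pinhole camera.

    P: (3, 4) camera projection matrix.
    X: (N, 3) array of 3D points.
    Returns (N, 2) pixel coordinates after the perspective divide.
    """
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coordinates
    x = Xh @ P.T                                   # (N, 3) homogeneous pixels
    return x[:, :2] / x[:, 2:3]                    # perspective divide
```

Running this for every frame's vertex set reproduces the kind of overlay shown in this example.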


Example 3: 3D reconstruction - Tracked motion of the vertices.


Downloads

Presentation  •  Report  •  Code  •  Poster  •  Data

 

Bibliography

1. Y. Furukawa and J. Ponce, "Dense 3D Motion Capture from Synchronized Video Streams," 2008.

2. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.

3. L. Carlos, "3D Reconstruction of Dynamic Scenes," 2003.

4. M. Pollefeys, "Visual 3D Modeling from Images," tutorial: https://www.cs.unc.edu/~marc/tutorial/