Project Title:         Coded Aperture Solution Using De-convolution
Student:               Saar Yoskovitz

Supervisors:           Gur Harary (Technion), Omri Govrin (Intel)

 

Semester Registered:                    2009 (Summer)

 


Abstract

A patterned mask inserted at the aperture stop of a camera preserves high-frequency components and increases the sensitivity of detail sharpness to defocus. This can be used to derive a depth map through an estimated blurring function, as well as to control the focal plane, so that different objects in the image can be brought into focus.


The Problem

Traditional photography projects the 3D world onto a 2D plane.

Conventional apertures blur objects far from the focal plane, losing valuable data.

A way to extract the 3D data (depth) of the scene is needed.

The method should be based on a single picture, with minimal sacrifice of image data.

 

 

Aperture in Frequency Domain

 

Taking a picture is equivalent to convolving the scene with the aperture kernel.

Each depth plane has a different (scaled) coded-aperture kernel.

De-convolution with a “wrong” kernel results in ringing artifacts, as illustrated in the sketch below.
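
The following NumPy sketch illustrates these points under simplifying assumptions: a synthetic scene, a hypothetical 3x3 binary mask whose blur kernel grows with depth, and circular (FFT-based) convolution. Blurring with the kernel of one depth and naively inverting with the kernel of another reproduces the ringing artifacts described above.

import numpy as np

def psf2otf(kernel, shape):
    # Zero-pad the blur kernel to the image size and centre it at the origin,
    # so that multiplication in the frequency domain equals circular convolution.
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def scaled_kernel(mask, scale):
    # Crude stand-in for the optics: the coded mask magnified by a depth-dependent scale.
    k = np.kron(mask, np.ones((scale, scale)))
    return k / k.sum()

# Hypothetical 3x3 coded-aperture mask (not the pattern used in the project).
mask = np.array([[1, 0, 1],
                 [1, 1, 0],
                 [0, 1, 1]], dtype=float)

x = np.zeros((128, 128))
x[32:96, 32:96] = 1.0                                # synthetic sharp scene: a bright square

F_true  = psf2otf(scaled_kernel(mask, 3), x.shape)   # kernel of the true depth
F_wrong = psf2otf(scaled_kernel(mask, 7), x.shape)   # kernel of a wrong depth

y = np.real(np.fft.ifft2(np.fft.fft2(x) * F_true))   # taking the picture = convolution

eps = 1e-3                                           # small regularizer for the naive inverse
x_right = np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(F_true)  / (np.abs(F_true) ** 2 + eps)))
x_wrong = np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(F_wrong) / (np.abs(F_wrong) ** 2 + eps)))

print("error with the correct kernel:", np.mean((x_right - x) ** 2))
print("error with a wrong kernel:    ", np.mean((x_wrong - x) ** 2))   # larger: ringing around the edges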

 

System Implementation

 

 

System Inputs:

      Image taken with Coded Aperture

      Coded-aperture filter (blur kernel) for each candidate depth

System Outputs:

      Segmented Depth Image
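
As an outline only, the interface implied by these inputs and outputs could look like the Python sketch below; the function name and data layout are assumptions rather than the project's actual code, and each step is sketched in the sections that follow.

def coded_aperture_depth(image, depth_filters):
    """Sketch of the system interface described above.

    image:         grayscale image taken through the coded aperture (2-D array)
    depth_filters: list of blur kernels, one per candidate depth plane
    returns:       segmented depth map (2-D array of depth indices)
    """
    # 1. Deblur the image once per candidate depth (frequency-domain de-convolution).
    # 2. Re-blur each result with its kernel and compare with the input image to
    #    obtain a per-pixel error map for every depth.
    # 3. Take a weighted per-pixel argmin over the depths (raw depth map).
    # 4. Smooth the raw map with an MRF that allows depth changes only at image
    #    edges (segmented depth map).
    raise NotImplementedError("each step is sketched in the following sections")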

 

 

De-convolution Algorithms

De-convolution is used for image deblurring: given the observed image  y = f * x + n  (the sharp image x convolved with the blur filter f, plus noise n), find x.

Better results are obtained on images by using Natural Image Priors:

                        - Images are mostly smooth, i.e. their derivatives are mostly close to zero.

 

We find the deconvolved image x* by maximizing the posterior probability of x given the observation y. With a Gaussian noise model and a Gaussian smoothness prior on the image derivatives, this is equivalent to minimizing

    | f * x - y |^2  +  η · ( | g_x * x |^2 + | g_y * x |^2 )

where * denotes convolution, g_x and g_y are horizontal and vertical derivative filters, and η weighs the smoothness prior against the data term.
In the frequency domain, this minimization has a closed-form solution:

    X(v,w) = conj(F(v,w)) · Y(v,w) / ( |F(v,w)|^2 + η · |G(v,w)|^2 )

where F(v,w) is the blur filter in the frequency domain, G(v,w) is the derivative filter in the frequency domain (|G|^2 collects both the horizontal and vertical derivative terms), and Y(v,w) is the Fourier transform of the observed image.
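
A minimal NumPy sketch of this closed-form de-convolution, assuming circular boundary conditions, simple [1, -1] derivative filters and a placeholder prior weight eta (the project's actual filters and weights may differ):

import numpy as np

def psf2otf(kernel, shape):
    # Pad the filter to the image size, centre it at the origin and take its FFT.
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def deconv_gaussian_prior(y, f, eta=2e-3):
    # Closed-form deconvolution of y by the kernel f with a Gaussian prior on derivatives.
    F  = psf2otf(f, y.shape)                          # blur filter in the frequency domain
    Gx = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal derivative filter
    Gy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical derivative filter
    Y  = np.fft.fft2(y)
    X  = np.conj(F) * Y / (np.abs(F) ** 2 + eta * (np.abs(Gx) ** 2 + np.abs(Gy) ** 2))
    return np.real(np.fft.ifft2(X))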

 

Error Map calculation

We calculate the error of each pixel i for depth k by re-blurring the deblurred image with that depth's filter and comparing it with the observation:

    E_k(i) = ( (f_k * x_k*)(i) - Y(i) )^2

where

    f_k    is the coded-aperture filter of depth k,
    x_k*   is the deblurred image for depth k,
    Y      is the original (observed) image.
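
In the same sketch style, the error maps can be computed by re-blurring each deblurred image with its depth's kernel and taking the squared residual; the function below assumes the deblurred images x_k* are already available (e.g. from the de-convolution sketch above):

import numpy as np
from scipy.signal import fftconvolve

def error_maps(y, depth_filters, deblurred):
    # y:             observed coded-aperture image (2-D array)
    # depth_filters: list of blur kernels f_k, one per candidate depth
    # deblurred:     list of deblurred images x_k*, one per candidate depth
    # returns:       array of shape (num_depths, H, W) holding E_k(i) at every pixel
    errors = []
    for f_k, x_k in zip(depth_filters, deblurred):
        reblurred = fftconvolve(x_k, f_k, mode="same")   # f_k * x_k*
        errors.append((reblurred - y) ** 2)              # squared residual per pixel
    return np.stack(errors)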

 

Example:

 

 

Raw Depth Map

The raw depth map is obtained by selecting, for each pixel i, the depth with the minimum weighted error:

    d(i) = argmin_k  λ_k · E_k(i)

where the λ_k are a set of per-depth weights which are learned from natural images.
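
The raw depth map is then a weighted per-pixel argmin; in the sketch below the weights lambda_k are placeholders, whereas in the project they are learned from natural images:

import numpy as np

def raw_depth_map(errors, lambdas):
    # errors:  array of shape (num_depths, H, W) from the error-map sketch above
    # lambdas: per-depth weights lambda_k (length num_depths)
    # returns: H x W array of depth indices (the raw depth map)
    weighted = errors * np.asarray(lambdas)[:, None, None]   # lambda_k * E_k(i)
    return np.argmin(weighted, axis=0)                       # per-pixel argmin over depths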

 

 

Depth Map Segmentation

We wish to obtain a very smooth map in which each object has a constant depth. We achieve this with a Markov Random Field (MRF).

In an MRF, each pixel's depth depends only on its neighbors.

A large penalty for two neighboring pixels with different depths leads to a smooth output.

We allow depth changes only at the edges of the original image; a minimal sketch of such a smoothing step is given below.
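
Below is a minimal sketch of this smoothing step, using iterated conditional modes (ICM) as a simple stand-in for whatever MRF optimizer the project actually used; the Potts penalty beta and the binary edge map are assumptions added for illustration:

import numpy as np

def smooth_depth_map(weighted_errors, edges, beta=5.0, iters=10):
    # weighted_errors: (num_depths, H, W) array of lambda_k * E_k(i)   (data term)
    # edges:           (H, W) boolean array, True where the original image has an edge
    #                  (depth is allowed to change there without penalty)
    # beta:            penalty for two neighbouring pixels taking different depths
    # returns:         (H, W) array of depth indices (the segmented depth map)
    num_depths, H, W = weighted_errors.shape
    labels = np.argmin(weighted_errors, axis=0)              # start from the raw depth map
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = weighted_errors[:, i, j].copy()
                if not edges[i, j]:                          # pairwise Potts term, waived on edges
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            costs += beta * (np.arange(num_depths) != labels[ni, nj])
                labels[i, j] = np.argmin(costs)              # greedy local update
    return labels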

 

 

 

 

 

 

More Results

 

 

 

Suggestions for future work

1.      Implement the de-convolution algorithm in C for better performance.

2.      Build a coded-aperture lens and test the system on real-world images.

3.      Use the implemented system to search for better coded-aperture patterns.

4.      Calculate the depth weights (λ_k) from a large set of natural images.

 

 

References

[1] Levin, A., Fergus, R., Durand, F., Freeman, W.: Image and Depth from a Conventional Camera with a Coded Aperture. SIGGRAPH (2007)

[2] Levin, A., Fergus, R., Durand, F., Freeman, W.: Deconvolution Using Natural Image Priors. MIT CS and AI Lab

 

Related Documents

For more information:
    PowerPoint presentation (pdf)
    Project poster (pdf)
    Final report and code (docx)
    Final thoughts (docx)