Removing Image Artifacts Caused by Dirty Camera Lenses

by Boyarski Alexander
Supervised by Harary Gur

Abstract

Dirt and dust on camera lenses cause significant artifacts in digital imaging systems.
These artifacts are not only an annoyance for photographers, but also a hindrance to computer
vision and digital forensics. In this work I present two methods for removing these
artifacts, and I demonstrate the effectiveness of both methods with experimental results.
In addition, I present results of processing pictures taken with non-standard
cameras, such as cell phone and surveillance cameras.


Background:

By "image artifacts" we mean any feature that appears in an image but is not
present in the original imaged object. Artifacts are usually caused by improper
operation of the imaging device, but they can also be a consequence of natural processes
or, as in medical imaging, of properties of the human body.

In this project I concentrate on artifacts caused by spots on the camera
lens due to dirt or dust. Such artifacts are very common in images taken by cell phones
and surveillance cameras: cell phone lenses are exposed to dust and dirty fingers, while
surveillance cameras face harsh weather, rain, and car exhaust.

The main goals of the project were:

1 - Implement an artifact-removal algorithm
2 - Achieve satisfactory results on test images
3 - Run the algorithm on pictures from non-standard cameras



Solution:

There are two main approaches to estimating the attenuation and intensification zones,
the two causes of the spots. The first is a calibration method: we take several pictures
of a structured calibration pattern in order to estimate the spots. In the second approach
we use a number of images taken by the camera and rely on natural image statistics. Each
method has its pros and cons: the first is more intuitive, but needs access to the "dirty"
camera; the second shows better results, but needs more images.


We can represent the recorded image as follows: the original image I (the one we get
to process) consists of the scene image Io, which passes through an obstruction that
attenuates it; in addition, radiance from other light sources reflects off the obstruction
layer and scatters into the camera. Thus, we can say that

    I(x,y) = a(x,y) * Io(x,y) + b(x,y)

where Io is the scene image, I is the original image, a(x,y) is the attenuation map, and
b(x,y) is the scattering map. Both maps are smooth: they are the dirt patterns on the lens
convolved with a blur kernel K, since the obstruction layer isn't in focus.

If we possess the attenuation and scattering maps, we can retrieve the scene image by

    Io(x,y) = (I(x,y) - b(x,y)) / a(x,y)

so now we need to estimate the attenuation map a(x,y) and the scattering map b(x,y).
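
As a concrete illustration, here is a minimal Python/NumPy sketch of this inversion,
assuming the two maps have already been estimated and that all images are float arrays
in [0, 1]; the epsilon guard and the final clipping are my additions, not part of the
method itself:

    import numpy as np

    def recover_scene(I, a, b, eps=1e-6):
        """Invert the dirty-lens model I = a * Io + b.

        I    -- observed (dirty) image, float array in [0, 1]
        a, b -- estimated attenuation and scattering maps
        eps  -- guards against division by near-zero attenuation (assumption)
        """
        Io = (I - b) / np.maximum(a, eps)
        return np.clip(Io, 0.0, 1.0)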


Calibration method:

As said earlier, we take a set of calibration pictures of a black-and-white pattern with
the dirty-lens camera. The pictures are taken so that each camera pixel "sees" both black
and white regions. Black regions give us no information about the scene image and consist
entirely of scattering. White regions, on the other hand, carry both attenuation and
scattering information, so we get pure attenuation by subtracting one from the other.
To get the attenuation and scattering maps we build minimum and maximum images, where each
pixel gets the minimum/maximum value it "saw" during calibration. From those images we
extract a(x,y) and b(x,y).
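
A minimal sketch of this step, assuming a stack of float calibration shots in which every
pixel sees both a black and a white region at least once; the white_level normalization
constant is my assumption:

    import numpy as np

    def estimate_maps_calibration(calib_images, white_level=1.0):
        """Estimate a(x,y) and b(x,y) from black/white calibration shots."""
        stack = np.stack(calib_images, axis=0)
        min_img = stack.min(axis=0)  # each pixel at its darkest: pure scattering
        max_img = stack.max(axis=0)  # brightest: attenuated white plus scattering
        b = min_img                            # black regions carry only b(x,y)
        a = (max_img - min_img) / white_level  # subtracting isolates attenuation
        return a, b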


Non-Calibration method:

Since we don't have access to the camera, we make the assumption that, on average,
neighboring pixels are almost equal to each other. This holds both for the original images
and for the scene images, and especially for their average picture and average picture
gradients. Because the maps are smooth, the difference between two nearby pixels in the
original image equals the corresponding difference in the scene image, scaled by the
attenuation coefficient a(x,y); averaging over many images gives

    a(x,y) = mean|grad I|(x,y) / mean|grad Io|(x,y)

Knowing a(x,y), we can also estimate b(x,y) from the average images:

    b(x,y) = mean(I)(x,y) - a(x,y) * mean(Io)(x,y)



The problem is that we don't have a pure, clean scene image from which to extract the maps,
not even its average... Therefore, we need to estimate this image. Since we assumed that
the average scene image is rather smooth, we can estimate it by a simple iterative
polynomial fitting of the average original image. We use a third-order two-dimensional
polynomial, and after a number of iterations we get the estimated average scene image.
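
The sketch below puts the non-calibration pieces together in Python/NumPy. The outlier
down-weighting between fitting iterations, all parameter values, and the reuse of the same
polynomial fit for the average gradient image are my assumptions about the details:

    import numpy as np

    def fit_poly2d(img, order=3, iters=5):
        """Iteratively fit a 2D polynomial of the given order to an image.

        Pixels that deviate strongly from the current fit (e.g. dirt spots)
        are down-weighted on each iteration, so the fit converges toward
        the smooth, artifact-free average.
        """
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        xx, yy = xx / w, yy / h  # normalized coordinates for conditioning
        cols = [xx**i * yy**j for i in range(order + 1)
                              for j in range(order + 1 - i)]
        A = np.stack([c.ravel() for c in cols], axis=1)
        t = img.ravel()
        wgt = np.ones_like(t)
        for _ in range(iters):
            coef, *_ = np.linalg.lstsq(A * wgt[:, None], t * wgt, rcond=None)
            fit = A @ coef
            resid = np.abs(t - fit)
            wgt = 1.0 / (1.0 + (resid / (resid.std() + 1e-12)) ** 2)
        return fit.reshape(h, w)

    def estimate_maps_no_calibration(images):
        """Estimate a(x,y) and b(x,y) from ordinary photos of varied scenes."""
        stack = np.stack(images, axis=0).astype(float)
        mean_I = stack.mean(axis=0)
        gy, gx = np.gradient(stack, axis=(1, 2))
        mean_grad_I = np.hypot(gx, gy).mean(axis=0)  # average gradient magnitude
        mean_Io = fit_poly2d(mean_I)             # smooth estimate of avg scene
        mean_grad_Io = fit_poly2d(mean_grad_I)   # same trick for avg gradient
        a = mean_grad_I / np.maximum(mean_grad_Io, 1e-6)
        b = mean_I - a * mean_Io
        return a, b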

Results:


original (top) and result (bottom): calibration method


original (top) and result (bottom): non-calibration method


"Surveillance" camera:

I also tried the algorithm on pictures taken with a non-standard camera. I put my camera
in a box whose window was covered with dirty glass, just like a regular surveillance camera
installed in a city. The results I got from this camera are shown here:


original (top) and result (bottom)


original (top) and result (bottom)


Conclusions

My project ends here. It was a very interesting and sometimes challenging experience.
We can say that the dirty-lens problem is solved, but many other artifacts still await a
solution. Leading camera manufacturers are working nonstop on the issue, and universities
worldwide conduct research on the topic.

As for me, this was first of all an opportunity to learn new methods and techniques in
image processing; in addition, I could apply and test previously acquired knowledge.

Acknowledgment

I want to thank my supervisor Gur for directing and helping me throughout the project.
I also want to thank the CGM lab and Hovav Gazit for the equipment and working conditions.


