Saliency Driven Image Manipulation


Roey Mechrez     Eli Shechtman     Lihi Zelnik-Manor

 

Technion – Israel Institute of Technology, Adobe Research
In WACV 2018
Best Paper (People's Choice)

[Paper] [Supplementary] [Data] [arXiv] [video]

Results by application:
[Object Enhancement] [Distractor Attenuation] [Background Decluttering] [Saliency Shift]

 

Top: Our saliency driven image manipulation algorithm can increase or decrease the saliency of a region. In this example the manipulation highlighted the bird while obscuring the leaf. Bottom: Since our framework allows both increasing and decreasing saliency, it enables four applications: (i) Object Enhancement, where the target’s saliency is increased, (ii) Distractor Attenuation, where the target’s saliency is decreased, (iii) Background Decluttering, where the target is unchanged while salient pixels in the background are demoted, and (iv) Saliency Shift, where the saliency of one object is transferred to another (from the plain to the woman on the right).

 

Abstract

Have you ever taken a picture only to find out that an unimportant background object ended up being overly salient? Or one of those team sports photos where your favorite player blends in with the rest? Wouldn’t it be nice if you could tweak these pictures just a little bit, so that the distractor would be attenuated and your favorite player would stand out among her peers? Manipulating images in order to control the saliency of objects is the goal of this paper. We propose an approach that considers the internal color and saliency properties of the image. It changes the saliency map via an optimization framework that relies on patch-based manipulation, using only patches from within the same image to maintain its appearance characteristics. Comparing our method to previous ones shows significant improvement, both in the achieved saliency manipulation and in the realistic appearance of the resulting images.
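The core idea, rewriting a region using only patches from the same image while steering a saliency term up or down, can be illustrated with a short toy example. The Python sketch below is not the authors' implementation: it assumes a crude contrast-based saliency proxy, a greedy nearest-patch search with an arbitrary trade-off weight, and hypothetical helper names (saliency_proxy, extract_patches, manipulate).

# Toy sketch of same-image, patch-based saliency manipulation.
# NOT the paper's algorithm; all names and weights here are illustrative.
import numpy as np

def saliency_proxy(img):
    """Crude saliency stand-in: per-pixel distance from the mean color."""
    return np.linalg.norm(img - img.mean(axis=(0, 1)), axis=2)

def extract_patches(img, size=7, stride=4):
    """Collect flattened patches and their top-left coordinates."""
    H, W, _ = img.shape
    patches, coords = [], []
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            patches.append(img[y:y+size, x:x+size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def manipulate(img, mask, increase=True, size=7, iters=3):
    """Greedily rewrite masked patches with same-image patches,
    trading off appearance similarity against the saliency proxy."""
    out = img.astype(np.float32).copy()
    for _ in range(iters):
        sal = saliency_proxy(out)
        patches, coords = extract_patches(out, size)
        # Score each candidate source patch by its mean saliency.
        scores = np.array([sal[y:y+size, x:x+size].mean() for y, x in coords])
        ys, xs = np.where(mask)
        for y, x in zip(ys[::size], xs[::size]):
            if y + size > out.shape[0] or x + size > out.shape[1]:
                continue
            target = out[y:y+size, x:x+size].ravel()
            # Appearance term (stay close to the current patch) plus a
            # saliency term; 10.0 is an arbitrary illustrative weight.
            d = np.linalg.norm(patches - target, axis=1)
            cost = d + (-scores if increase else scores) * 10.0
            best = patches[np.argmin(cost)].reshape(size, size, 3)
            out[y:y+size, x:x+size] = best
    return out.astype(np.uint8)

In this toy, manipulate(img, mask, increase=True) corresponds to object enhancement and increase=False to distractor attenuation, mirroring the applications above; the paper itself casts this as a joint optimization rather than a greedy loop.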

 

Paper

“Saliency Driven Image Manipulation”, WACV 2018 [pdf] [BibTex] [Supplementary]

 

Dataset

The dataset described in our paper is available for download:
[README], [Data]

 

Try our code

Code to reproduce the experiments described in our paper is available on [GitHub]