Dynamic-Net: Tuning the Objective Without Re-Training for Synthesis Tasks
Alon Shoshan
Roey Mechrez
Lihi Zelnik-Manor
Technion – Israel Institute of Technology
ICCV 2019 [Paper]
[Supplementary]
Code [GitHub]
Dynamic-Net: We propose an approach that enables traversing the “objective-space”, spanned by two different
objectives, at test-time, without re-training, as illustrated by the blue dot moving along the blue curve in the plot. This is
different from the common practice of training a separate network for each objective, represented by X’s on the plot. Using a
single Dynamic-Net we can tune the level of stylization of an image, adjust completion quality per image, or control facial
attributes, all interactively at test-time, without re-training.
Abstract
One of the key ingredients for successful optimization of
modern CNNs is identifying a suitable objective. To date,
the objective is fixed a-priori at training time, and any variation
to it requires re-training a new network. In this paper
we present a first attempt at alleviating the need for
re-training. Rather than fixing the network at training time,
we train a “Dynamic-Net” that can be modified at inference
time. Our approach considers an “objective-space” as the
space of all linear combinations of two objectives, and the
Dynamic-Net emulates traversal of this objective-space at test-time,
without any further training. We show that this upgrades
pre-trained networks by providing an out-of-learning extension,
while maintaining the performance quality. The
solution we propose is fast and allows a user to interactively
modify the network, in real-time, to obtain the desired result.
We show the benefits of such an approach via several different applications.
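Concretely, a point in the objective-space spanned by \({\cal O}_0\) and \({\cal O}_1\) can be written as a weighted combination of the two objectives (this parametrization is an illustration consistent with the text above, using the notation of the framework figure below):
\[ {\cal O}_m \;=\; (1-\alpha_m)\,{\cal O}_0 \;+\; \alpha_m\,{\cal O}_1, \qquad \alpha_m \in [0,1], \]
so choosing \(\alpha_m\) at test-time selects a working point along this line without re-training.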
Proposed framework: Our training has two steps: (i) First the “main” network \(\theta\) (green blocks) is trained to
minimize \({\cal O}_0\). (ii) Then \(\theta\) is fixed, one or more tuning-blocks \(\psi\) are added (orange block), and trained to minimize \({\cal O}_1\). The
output \(\hat y_1\) approximates the output \(y_1\) one would get from training the main network \(\theta\) with objective \({\cal O}_1\). At test-time, we
can emulate results equivalent to a network trained with objective \({\cal O}_m\) by tuning the parameter \(\alpha_m\) (in blue) that determines
the latent representation \(z_m\). Our method can be applied as (a) a single-block framework or (b) a multi-block framework.
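Below is a minimal PyTorch-style sketch of the inference-time behavior, assuming, as one plausible realization rather than the repository's exact implementation, that the tuning-block \(\psi\) contributes a residual to the intermediate representation \(z_0\) scaled by \(\alpha_m\); all module and variable names are illustrative:

```python
import torch
import torch.nn as nn


class DynamicNetSketch(nn.Module):
    """Illustrative single-block Dynamic-Net: a frozen main network theta,
    split around one tuning-block psi, following the two training steps above."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, tuning_block: nn.Module):
        super().__init__()
        self.encoder = encoder    # first part of the main network theta (trained with O_0, then frozen)
        self.decoder = decoder    # remaining part of theta (frozen)
        self.psi = tuning_block   # trained with objective O_1 while theta stays fixed

    def forward(self, x: torch.Tensor, alpha: float = 0.0) -> torch.Tensor:
        z0 = self.encoder(x)                 # latent of the main network
        # alpha = 0 reproduces the main network; alpha = 1 approximates a
        # network trained with O_1; intermediate values traverse the objective-space.
        z_m = z0 + alpha * self.psi(z0)      # interpolated latent z_m, controlled by alpha_m
        return self.decoder(z_m)
```

Sweeping `alpha` over, say, `(0.0, 0.25, 0.5, 0.75, 1.0)` then yields interpolation strips like those shown in the applications below, all at test-time.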
Applications
Dynamic Style Transfer
Control over Stylization level
Interpolation between two Styles
Dynamic DC-GAN: Controlled Image Generation
The proposed method allows us to generate faces with control over facial attributes, e.g., gender or hair color.
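For instance, building on the hypothetical `DynamicNetSketch` above (the generator architecture and shapes below are placeholders, not the DC-GAN used in the paper), sweeping \(\alpha\) at test-time moves a generated face along the learned attribute:

```python
import torch
import torch.nn as nn

# Stand-in DC-GAN-style generator wrapped with the illustrative DynamicNetSketch;
# a real generator would use transposed convolutions, these linear layers are placeholders.
generator = DynamicNetSketch(
    encoder=nn.Linear(100, 256),
    decoder=nn.Sequential(nn.ReLU(), nn.Linear(256, 3 * 64 * 64), nn.Tanh()),
    tuning_block=nn.Linear(256, 256),
)

noise = torch.randn(1, 100)  # one latent noise vector, reused for every alpha
# The same noise with different alpha values yields the same face with a
# gradually changing attribute (e.g. hair color), with no re-training.
faces = [generator(noise, alpha=a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
```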
Image Completion
[Image completion gallery: each row shows the Input followed by completions at \(\alpha=0\), \(\alpha=0.4\) or \(\alpha=0.5\), and \(\alpha=1\); the bottom rows compare the Original result with the Improved result obtained by tuning \(\alpha\) per image.]
Dynamic-Net allows the user to select the best working point for each image, improving results of networks that were trained with sub-optimal objectives.
Paper
[pdf]
Supplementary
[pdf]
Try Our Code
Code and a demo of the experiments described in our paper are available on [GitHub]
Dynamic style transfer demo