Improving Patch-Based Synthesis by Learning Patch Masks


Proceedings of ICCP 2014

Nima Khademi Kalantari (1)        Eli Shechtman (2)        Soheil Darabi (2)
Dan B Goldman (2)        Pradeep Sen (1)


(1) University of California, Santa Barbara        (2) Adobe

Abstract

Patch-based synthesis is a powerful framework for numerous image and video editing applications such as hole-filling, retargeting, and reshuffling. In all these applications, a patch-based objective function is optimized through a patch search-and-vote process. However, existing techniques typically use fixed-size square patches when comparing the distance between two patches in the search process. This presents a fundamental limitation for these methods, since many patches cover multiple regions that can move, occlude, or otherwise behave independently in source and target images. We address this problem by using masks to down-weight some pixels in the patch-comparison operation. The main challenge is to choose the right mask according to the content during the search-and-vote process. We show how simple user assistance can lead to excellent results in challenging hole-filling examples. In addition, we propose a fully automated solution by learning a model to predict an appropriate mask using a set of features extracted around each patch. The model is trained using a manually annotated dataset, augmented with simulated divergence from ground truth. We demonstrate that our proposed method improves over existing approaches for single- and multi-image hole-filling applications.
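As a rough illustration of the masked patch comparison described in the abstract, the sketch below computes a weighted sum-of-squared-differences between two fixed-size patches, where a per-pixel mask down-weights pixels that should not influence the match. This is only a minimal sketch of the general idea; the function name, patch size, and normalization are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def masked_patch_distance(src_patch, tgt_patch, mask, eps=1e-8):
    # src_patch, tgt_patch: (P, P, 3) float arrays, e.g. 7x7 RGB patches.
    # mask: (P, P) array of per-pixel weights in [0, 1]; pixels with
    # weight 0 are effectively ignored in the comparison.
    diff = (src_patch - tgt_patch) ** 2        # per-pixel squared error
    weighted = diff.sum(axis=-1) * mask        # down-weight masked pixels
    # Normalize by the total mask weight so distances stay comparable
    # across patches whose masks keep different numbers of pixels.
    return weighted.sum() / (mask.sum() + eps)

In a search-and-vote framework such as PatchMatch-style nearest-neighbor search, a distance of this form would replace the usual unweighted SSD when ranking candidate source patches for each target patch.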


Paper and Additional Materials


Bibtex

@inproceedings{MaskedPatches,
  author    = {Nima Khademi Kalantari and Eli Shechtman and Soheil Darabi and Dan B Goldman and Pradeep Sen},
  title     = {{Improving Patch-Based Synthesis by Learning Patch Masks}},
  booktitle = {International Conference on Computational Photography (ICCP 2014)},
  year      = {2014},
}