PhD Defense: "Utilizing Machine Learning for Filtering General Monte Carlo Noise"

Nima Khademi Kalantari

September 25th (Friday), 3:00pm
Harold Frank Hall (HFH), Rm 4164

Producing photorealistic images from a scene model requires computing a complex multidimensional integral of the scene function at every pixel of the image. Monte Carlo (MC) rendering systems approximate this integral by tracing light rays (samples) in the multidimensional space to evaluate the scene function. Although an approximation to this integral can be quickly evaluated with just a few samples, the inaccuracy of this estimate relative to the true value appears as unacceptable noise in the resulting image.
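The estimator described above can be sketched in a few lines: a pixel's value is the average of the scene function evaluated at random sample locations, and the error of that average, which appears as visible noise, shrinks only as the square root of the sample count. The 1-D scene function and unit domain below are toy stand-ins, not an actual renderer.

```python
import math
import random

def mc_pixel_estimate(scene_fn, n_samples, rng=random.Random(0)):
    """Estimate a pixel's integral as the average of n_samples random
    evaluations of the scene function over the unit domain."""
    total = 0.0
    for _ in range(n_samples):
        total += scene_fn(rng.random())  # one random sample location
    return total / n_samples

# Toy 1-D "scene function"; its true integral over [0, 1) is 2/pi.
f = lambda x: math.sin(math.pi * x)

coarse = mc_pixel_estimate(f, 8)       # few samples: cheap but noisy
fine = mc_pixel_estimate(f, 100_000)   # many samples: error shrinks as 1/sqrt(N)
```

With only 8 samples the estimate can deviate noticeably from 2/pi, which is exactly the per-pixel noise the thesis sets out to remove.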

One way to mitigate this problem is to quickly render a noisy image with a few samples and then filter it as a post-process to generate an acceptable, noise-free result. This approach has been the subject of extensive research in recent years, and many algorithms have been developed. However, most of these approaches are designed around simple, heuristic rules and, as a result, cannot handle complex scenes.

We begin by studying how standard image denoising techniques can be applied to the problem of Monte Carlo rendering. To do this, we propose a way to use any standard image denoising method (e.g., BM3D) to remove noise from MC rendered images. Our approach estimates the amount of noise at each pixel and couples this estimate with a multilevel algorithm that denoises the image in a spatially varying manner. We then show that although this approach works better than previous color-based schemes (i.e., methods that use only color information), it cannot handle complex scenes with severe noise. This is because the algorithm does not exploit additional scene features, such as world positions, shading normals, and texture values, that are readily available in MC rendering.
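To illustrate the idea of noise-estimate-driven, spatially varying denoising, the sketch below applies a crude Wiener-style shrinkage to a 1-D signal: each sample is pulled toward its local mean by an amount determined by a per-pixel noise estimate. This rule is a hypothetical stand-in for the multilevel algorithm, not the method itself.

```python
def denoise_1d(signal, noise_var, radius=2):
    """Spatially varying shrinkage: pull each sample toward its local mean,
    more strongly where the estimated noise dominates the local variation.
    (A hypothetical stand-in, not the thesis's multilevel algorithm.)"""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        mean = sum(window) / len(window)
        var = sum((v - mean) ** 2 for v in window) / len(window)
        # Wiener-style weight: keep detail where real variation exceeds the noise.
        w = max(var - noise_var, 0.0) / var if var > 0 else 0.0
        out.append(w * signal[i] + (1.0 - w) * mean)
    return out

# A flat region survives untouched; a noisy region is smoothed toward its mean.
flat = denoise_1d([1.0] * 8, noise_var=0.1)
wiggly = denoise_1d([0.5, -0.4, 0.3, -0.2, 0.1, -0.3, 0.4, -0.5], noise_var=1.0)
```

Because the weight is computed per sample, smooth regions are aggressively denoised while detailed regions are preserved, which is the behavior a spatially varying scheme is after.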

To address the filtering problem systematically, we then present a new way of analyzing MC filtering approaches. We observe that the major challenge shared by all filtering techniques is filter parameter estimation, and our key contribution is to address this challenging problem with machine learning. Specifically, we first propose to estimate the optimal filter parameters at each pixel directly from the output of the MC renderer using a neural network, which we train on a set of scenes by minimizing the error between the filtered and ground truth images. Second, we propose an error-minimization filtering approach that selects, at each pixel, the candidate filter parameter set whose filtered result is as close as possible to the ground truth; these candidate parameter sets are themselves optimized on a set of training scenes in the same error-minimizing fashion.
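The first idea can be sketched in miniature with a single scalar "filter parameter" standing in for the neural network's output: the parameter, here a blend weight between the noisy input and a smoothed version, is fit by gradient descent to minimize the mean squared error between the filtered result and the ground truth. The 1-D setup and all names are illustrative assumptions, not the actual filter or network.

```python
import math
import random

def box_mean(sig, radius=2):
    """Local mean over a sliding window (stand-in smoothing filter)."""
    return [sum(sig[max(0, i - radius): i + radius + 1])
            / len(sig[max(0, i - radius): i + radius + 1])
            for i in range(len(sig))]

def apply_filter(noisy, alpha):
    """A one-parameter filter: blend the noisy signal with its local mean."""
    smoothed = box_mean(noisy)
    return [alpha * x + (1.0 - alpha) * s for x, s in zip(noisy, smoothed)]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def train(noisy, clean, steps=200, lr=0.5, eps=1e-4):
    """Fit the filter parameter by gradient descent on the error between the
    filtered result and the ground truth, mirroring the training objective."""
    alpha = 0.5
    for _ in range(steps):
        # Central-difference estimate of d(error)/d(alpha).
        g = (mse(apply_filter(noisy, alpha + eps), clean)
             - mse(apply_filter(noisy, alpha - eps), clean)) / (2 * eps)
        alpha = min(1.0, max(0.0, alpha - lr * g))
    return alpha

# Toy training pair: a smooth "ground truth" signal plus synthetic noise.
rng = random.Random(1)
clean = [math.sin(i / 3.0) for i in range(32)]
noisy = [c + rng.gauss(0.0, 0.3) for c in clean]
alpha = train(noisy, clean)
```

In the thesis a network predicts such parameters per pixel from renderer features; here a single learned weight already beats the unfiltered input on the training pair.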

We show that the proposed approaches outperform state-of-the-art methods in removing general MC noise. In this thesis, we present the first attempt to use machine learning for removing noise from MC rendered images. We believe this opens a new avenue for future work and we hope other researchers can build upon the ideas presented here to further advance the MC filtering field.

About Nima Khademi Kalantari:

Nima Khademi Kalantari is a Ph.D. candidate at UCSB working under the supervision of Dr. Pradeep Sen in the MIRAGE Lab. His research interests span a diverse set of computer graphics and vision applications, with an emphasis on both image synthesis and computational image processing. He has published 10 journal papers in areas including rendering, imaging, and sampling.

Hosted by: Professor Pradeep Sen