Inverse Problems Seminar - Bilevel learning framework for variational image reconstruction with learned convex regularizer
In inverse problems, a classical reconstruction framework is based on the idea of variational regularization. This approach encodes prior information about potential reconstructions through a variational regularizer. Traditional methods often employ hand-crafted regularizers, such as total variation (TV) and total generalized variation (TGV). With the success of deep learning in image reconstruction, there has been increasing interest in data-driven regularizers, with the aim of replacing manually designed regularizers by neural networks. In this talk, we explore learned convex regularizers within the variational setting, specifically when the convex regularizer is parameterized by an Input Convex Neural Network (ICNN). Among the various attempts that have been made to learn regularizers, the adversarial training approach uses a loss designed to separate ground-truth images from those with artifacts. However, the resulting adversarial regularizer is not explicitly trained for the reconstruction task. To address this, we present a bilevel optimization framework that incorporates reconstruction quality directly into the learning process. We will also present a novel approach to solving the variational problem within a primal-dual framework. By introducing auxiliary variables corresponding to the activations of intermediate layers, we eliminate the nested structure of the neural network, reformulating the variational problem as a constrained convex problem.
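As background for the talk, the convexity of an ICNN comes from a simple architectural constraint: the weights acting on hidden activations are kept non-negative, and the activations are convex and non-decreasing. The following is a minimal illustrative sketch (all layer sizes, weight names, and the softplus activation are our own assumptions, not the speaker's implementation), with a numerical check of convexity along random line segments:

```python
import numpy as np

def softplus(z):
    # softplus is convex and non-decreasing, as the ICNN construction requires
    return np.logaddexp(0.0, z)

rng = np.random.default_rng(0)
d, h = 4, 8  # input and hidden dimensions (illustrative choices)

# A_k: unconstrained weights applied to the input x
# W_k: weights on the previous hidden state, constrained to be non-negative
A0 = rng.standard_normal((h, d)); b0 = rng.standard_normal(h)
A1 = rng.standard_normal((h, d)); b1 = rng.standard_normal(h)
W1 = np.abs(rng.standard_normal((h, h)))
A2 = rng.standard_normal((1, d)); b2 = rng.standard_normal(1)
W2 = np.abs(rng.standard_normal((1, h)))

def icnn(x):
    # z1 is convex in x; non-negative W_k and convex non-decreasing
    # activations preserve convexity layer by layer
    z1 = softplus(A0 @ x + b0)
    z2 = softplus(W1 @ z1 + A1 @ x + b1)
    return float(W2 @ z2 + A2 @ x + b2)

# empirical convexity check: f(t x + (1-t) y) <= t f(x) + (1-t) f(y)
for _ in range(100):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    t = rng.uniform()
    assert icnn(t * x + (1 - t) * y) <= t * icnn(x) + (1 - t) * icnn(y) + 1e-9
```

The same non-negativity constraint is what makes the layer-wise reformulation mentioned in the abstract possible: each intermediate activation can be treated as an auxiliary variable under convex constraints.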