Society for Industrial and Applied Mathematics: SIAM Journal on Imaging Sciences: Table of Contents
Table of Contents for SIAM Journal on Imaging Sciences. List of articles from both the latest and ahead of print issues.
https://epubs.siam.org/loi/sjisbi?ai=sd&mi=3bfys9&af=R
Society for Industrial and Applied Mathematics
en-US
SIAM Journal on Imaging Sciences
https://epubs.siam.org/na101/home/literatum/publisher/siam/journals/covergifs/sjisbi/cover.jpg
-
A Variational Model for Nonuniform Low-Light Image Enhancement
https://epubs.siam.org/doi/abs/10.1137/22M1543161?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 1-30, March 2024. <br/> Abstract. Low-light image enhancement is a fundamental low-level task that plays an important role in computer vision and can affect high-level vision tasks. To solve this ill-posed problem, many methods have been proposed to enhance low-light images. However, their performance degrades significantly under nonuniform lighting conditions. Because illuminance can vary rapidly across regions of natural images, it is challenging to enhance low-light parts and retain normal-light parts simultaneously in the same image. Commonly, either the low-light parts are underenhanced or the normal-light parts are overenhanced, accompanied by color distortion and artifacts. To overcome this problem, we propose a simple and effective Retinex-based model with reflectance map reweighting for images under nonuniform lighting conditions. An alternating proximal gradient (APG) algorithm is proposed to solve the model, in which the illumination map, the reflectance map, and the weighting map are updated iteratively. To make our model applicable to a wide range of lighting conditions, we design an initialization scheme for the weighting map. A theoretical analysis of the existence of a solution to our model and of the convergence of the APG algorithm is also established. A series of experiments on real-world low-light images demonstrates the effectiveness of our method.
10.1137/22M1543161
2024-01-04T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Fan Jia
Shen Mao
Xue-Cheng Tai
Tieyong Zeng
Volume 17, Issue 1, Pages 1–30
2024-03-31T07:00:00Z
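The alternating proximal gradient scheme named in the abstract can be illustrated on a generic two-block objective. The model below (quadratic coupling, an l1 term, and a quadratic term) is a hypothetical stand-in for the illumination/reflectance/weighting updates, not the authors' functional:

```python
import numpy as np

def soft(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg(b, lam=0.1, mu=1.0, steps=300, eta=0.5):
    """Alternating proximal gradient for
    F(x, y) = 0.5*||x + y - b||^2 + lam*||x||_1 + 0.5*mu*||y||^2:
    each block takes a gradient step on the smooth coupling term,
    then applies the proximal map of its own regularizer."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    for _ in range(steps):
        x = soft(x - eta * (x + y - b), eta * lam)      # prox of lam*||.||_1
        y = (y - eta * (x + y - b)) / (1.0 + eta * mu)  # prox of 0.5*mu*||.||^2
    return x, y

b = np.array([2.0, -1.5, 0.05])
x, y = apg(b)
```

The l1 block produces exact zeros for small residuals, which is the mechanism reweighting schemes exploit.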
-
Learning Sparsity-Promoting Regularizers Using Bilevel Optimization
https://epubs.siam.org/doi/abs/10.1137/22M1506547?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 31-60, March 2024. <br/> Abstract. We present a gradient-based heuristic method for supervised learning of sparsity-promoting regularizers for denoising signals and images. Sparsity-promoting regularization is a key ingredient in solving modern signal reconstruction problems; however, the operators underlying these regularizers are usually either designed by hand or learned from data in an unsupervised way. The recent success of supervised learning (e.g., with convolutional neural networks) in solving image reconstruction problems suggests that it could be a fruitful approach to designing regularizers. Towards this end, we propose to denoise signals using a variational formulation with a parametric, sparsity-promoting regularizer, where the parameters of the regularizer are learned to minimize the mean squared error of reconstructions on a training set of ground truth image and measurement pairs. Training involves solving a challenging bilevel optimization problem; we derive an expression for the gradient of the training loss using the closed-form solution of the denoising problem and provide an accompanying gradient descent algorithm to minimize it. Our experiments with structured 1D signals and natural images indicate that the proposed method can learn an operator that outperforms well-known regularizers (total variation, DCT-sparsity, and unsupervised dictionary learning) and collaborative filtering for denoising.
10.1137/22M1506547
2024-01-10T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Avrajit Ghosh
Michael McCann
Madeline Mitchell
Saiprasad Ravishankar
Volume 17, Issue 1, Pages 31–60
2024-03-31T07:00:00Z
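The bilevel idea, choosing regularizer parameters by descending the reconstruction loss on training pairs, can be sketched with the simplest closed-form denoiser, soft-thresholding, whose explicit solution makes the gradient of the training loss available in closed form. This is a one-parameter toy, not the paper's learned operator:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy training set: sparse ground-truth signals and noisy measurements
X = rng.standard_normal((50, 64)) * (rng.random((50, 64)) < 0.1)
Y = X + 0.3 * rng.standard_normal((50, 64))

def loss(theta):
    # Mean squared error of the closed-form denoiser soft(y, theta)
    return 0.5 * np.mean((soft(Y, theta) - X) ** 2)

def grad(theta):
    # d/dtheta soft(y, theta) = -sign(y) on the active set {|y| > theta}
    active = np.abs(Y) > theta
    return np.mean((soft(Y, theta) - X) * (-np.sign(Y)) * active)

theta = 0.01                     # initial threshold
for _ in range(300):
    theta -= 0.5 * grad(theta)   # gradient descent on the training loss
```

The learned threshold climbs toward the noise level, lowering the supervised loss relative to the initial guess.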
-
Identification of Sparsely Representable Diffusion Parameters in Elliptic Problems
https://epubs.siam.org/doi/abs/10.1137/23M1565346?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 61-90, March 2024. <br/> Abstract. We consider the task of estimating the unknown diffusion parameter in an elliptic PDE as a model problem to develop and test the effectiveness and robustness to noise of reconstruction schemes with sparsity regularization. To this end, the model problem is recast as a nonlinear infinite dimensional optimization problem, where the logarithm of the unknown diffusion parameter is modeled using a linear combination of the elements of a dictionary, i.e., a known bounded sequence of [math] functions, with unknown coefficients that form a sequence in [math]. We show that the regularization of this nonlinear optimization problem using a weighted [math]-norm has minimizers that are finitely supported. We then propose modifications of well-known algorithms (ISTA and FISTA) to find a minimizer of this weighted [math]-norm regularized nonlinear optimization problem that accounts for the fact that in general the smooth part of the functional being optimized is a functional only defined over [math]. We also introduce semismooth methods (ASISTA and FASISTA) for finding a minimizer, which locally uses Gauss–Newton type surrogate models that additionally are stabilized by means of a Levenberg–Marquardt type approach. Our numerical examples show that the regularization with the weighted [math]-norm indeed does make the estimation more robust with respect to noise. Moreover, the numerical examples also demonstrate that the ASISTA and FASISTA methods are quite efficient, outperforming both ISTA and FISTA.
10.1137/23M1565346
2024-01-17T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Luzia N. Felber
Helmut Harbrecht
Marc Schmidlin
Volume 17, Issue 1, Pages 61–90
2024-03-31T07:00:00Z
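ISTA for a weighted l1-regularized least-squares problem takes the familiar finite-dimensional form below; the paper's modifications handle the nonlinear, infinite-dimensional setting, which this textbook sketch does not attempt:

```python
import numpy as np

def ista(A, y, w, lam=0.05, steps=500):
    """ISTA for min_u 0.5*||A u - y||^2 + lam * sum_i w_i * |u_i|:
    a gradient step on the smooth term, followed by per-coordinate
    soft-thresholding with weighted thresholds."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of A^T(A u - y)
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        v = u - A.T @ (A @ u - y) / L  # forward (gradient) step
        t = lam * w / L                # weighted thresholds
        u = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    return u

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
u_true = np.zeros(10); u_true[[2, 7]] = [1.5, -2.0]
y = A @ u_true
u = ista(A, y, w=np.ones(10))
```

FISTA adds a momentum extrapolation between iterations but keeps the same thresholding step.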
-
Learning Weakly Convex Regularizers for Convergent Image-Reconstruction Algorithms
https://epubs.siam.org/doi/abs/10.1137/23M1565243?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 91-115, March 2024. <br/> Abstract. We propose to learn nonconvex regularizers with a prescribed upper bound on their weak-convexity modulus. Such regularizers give rise to variational denoisers that minimize a convex energy. They rely on few parameters (fewer than 15,000) and offer a signal-processing interpretation, as they mimic handcrafted sparsity-promoting regularizers. Through numerical experiments, we show that such denoisers outperform convex-regularization methods as well as the popular BM3D denoiser. Additionally, the learned regularizer can be deployed to solve inverse problems with iterative schemes that provably converge. For both CT and MRI reconstruction, the regularizer generalizes well and offers an excellent tradeoff between performance, number of parameters, guarantees, and interpretability when compared to other data-driven approaches.
10.1137/23M1565243
2024-01-18T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Alexis Goujon
Sebastian Neumayer
Michael Unser
Volume 17, Issue 1, Pages 91–115
2024-03-31T07:00:00Z
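The weak-convexity modulus bounded in the abstract can be made concrete on a classical nonconvex penalty. For psi(x) = log(1 + x^2), psi''(x) = 2(1 - x^2)/(1 + x^2)^2 is bounded below by -1/4, so psi plus (1/8)x^2-scaled... more precisely psi(x) + (rho/2)x^2 is convex for rho = 1/4. The check below verifies this numerically (a generic example, not the paper's learned regularizer):

```python
import numpy as np

# Classical nonconvex sparsity penalty psi(x) = log(1 + x^2).
# Its second derivative is psi''(x) = 2*(1 - x^2) / (1 + x^2)^2,
# whose minimum over the real line is -1/4 (attained at x = ±sqrt(3)),
# so psi is weakly convex with modulus rho = 1/4.
x = np.linspace(-10.0, 10.0, 100001)
psi2 = 2.0 * (1.0 - x**2) / (1.0 + x**2) ** 2  # second derivative of psi
rho = -psi2.min()                              # empirical weak-convexity modulus
```

Prescribing an upper bound on this modulus is what lets a nonconvex regularizer still yield a convex denoising energy once the quadratic data term is added.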
-
Polynomial Preconditioners for Regularized Linear Inverse Problems
https://epubs.siam.org/doi/abs/10.1137/22M1530355?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 116-146, March 2024. <br/> Abstract. This work aims to accelerate the convergence of proximal gradient methods used to solve regularized linear inverse problems. This is achieved by designing a polynomial-based preconditioner that targets the eigenvalue spectrum of the normal operator derived from the linear operator. The preconditioner does not assume any explicit structure on the linear function and thus can be deployed in diverse applications of interest. The efficacy of the preconditioner is validated on three different Magnetic Resonance Imaging applications, where it is seen to achieve faster iterative convergence (around [math] faster, depending on the application of interest) while achieving similar reconstruction quality.
10.1137/22M1530355
2024-01-22T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Siddharth S. Iyer
Frank Ong
Xiaozhi Cao
Congyu Liao
Luca Daniel
Jonathan I. Tamir
Kawin Setsompop
Volume 17, Issue 1, Pages 116–146
2024-03-31T07:00:00Z
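A polynomial preconditioner approximates the inverse of the normal operator by a polynomial in the operator itself. The simplest such construction is a truncated Neumann series; this is a textbook baseline, not the spectrum-tailored polynomial designed in the paper:

```python
import numpy as np

def neumann_preconditioner(N, degree, alpha):
    """Polynomial approximation of N^{-1} by a truncated Neumann series,
    p(N) = alpha * sum_{k=0}^{degree} (I - alpha*N)^k,
    valid when the eigenvalues of alpha*N lie in (0, 2)."""
    n = N.shape[0]
    term = np.eye(n)
    P = np.eye(n)
    M = np.eye(n) - alpha * N
    for _ in range(degree):
        term = term @ M   # (I - alpha*N)^k
        P = P + term
    return alpha * P

# Normal operator with condition number 100 (diagonal for transparency)
lam = np.logspace(-2, 0, 50)
N = np.diag(lam)
P = neumann_preconditioner(N, degree=10, alpha=1.0)
```

Applied to N, the preconditioner maps each eigenvalue lambda to 1 - (1 - alpha*lambda)^(degree+1), compressing the spectrum toward 1 and so reducing the condition number that governs proximal-gradient convergence.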
-
Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks
https://epubs.siam.org/doi/abs/10.1137/23M1562536?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 147-187, March 2024. <br/> Abstract. In this work, we develop a novel approach using deep neural networks (DNNs) to reconstruct the conductivity distribution in elliptic problems from one measurement of the solution over the whole domain. The approach is based on a mixed reformulation of the governing equation and utilizes the standard least-squares objective, with DNNs as ansatz functions to approximate the conductivity and flux simultaneously. We provide a thorough analysis of the DNN approximations of the conductivity for both continuous and empirical losses, including rigorous error estimates that are explicit in terms of the noise level, various penalty parameters, and neural network architectural parameters (depth, width, and parameter bounds). We also provide multiple numerical experiments in two dimensions and multidimensions to illustrate distinct features of the approach, e.g., excellent stability with respect to data noise and capability of solving high-dimensional problems.
10.1137/23M1562536
2024-01-23T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Bangti Jin
Xiyao Li
Qimeng Quan
Zhi Zhou
Volume 17, Issue 1, Pages 147–187
2024-03-31T07:00:00Z
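The mixed reformulation replaces the second-order equation -div(a grad u) = f by the first-order system sigma = a grad u, div sigma = -f, and penalizes both residuals in least squares. The 1D check below evaluates that objective on an exact pair, plugging known functions where the paper trains DNN ansatz functions; the choices a(x) = 1 + x and u(x) = sin(pi x) are hypothetical:

```python
import numpy as np

# 1D model problem: -(a u')' = f with a(x) = 1 + x and u(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 2001)
a = 1.0 + x
u = np.sin(np.pi * x)
sigma = a * np.pi * np.cos(np.pi * x)  # exact flux a*u'
f = np.pi**2 * (1.0 + x) * np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)

# Mixed least-squares residuals (derivatives by finite differences)
r1 = sigma - a * np.gradient(u, x)     # flux equation sigma = a u'
r2 = np.gradient(sigma, x) + f         # balance equation sigma' = -f
loss = np.mean(r1**2) + np.mean(r2**2)
```

In the paper this loss (plus a data-fit term and penalties) is minimized over DNN parameters representing the conductivity and the flux simultaneously; here it only confirms that the exact pair drives both residuals to discretization-level error.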
-
Direct Imaging Methods for Reconstructing a Locally Rough Interface from Phaseless Total-Field Data or Phased Far-Field Data
https://epubs.siam.org/doi/abs/10.1137/23M1571393?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 188-224, March 2024. <br/> Abstract. This paper is concerned with the problem of inverse scattering of time-harmonic acoustic plane waves by a two-layered medium with a locally rough interface in two dimensions. A direct imaging method is proposed to reconstruct the locally rough interface from the phaseless total-field data measured on the upper half of the circle with a large radius at a fixed frequency or from the phased far-field data measured on the upper half of the unit circle at a fixed frequency. The presence of the locally rough interface poses challenges in the theoretical analysis of the imaging methods. To address these challenges, a technically involved asymptotic analysis is provided for the relevant oscillatory integrals involved in the imaging methods, based mainly on the techniques and results in our recent work [L. Li, J. Yang, B. Zhang, and H. Zhang, arXiv:2208.00456, 2022] on the uniform far-field asymptotics of the scattered field for acoustic scattering in a two-layered medium. Finally, extensive numerical experiments are conducted to demonstrate the feasibility and robustness of our imaging algorithms.
10.1137/23M1571393
2024-01-24T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Long Li
Jiansheng Yang
Bo Zhang
Haiwen Zhang
Volume 17, Issue 1, Pages 188–224
2024-03-31T07:00:00Z
-
Robust Tensor CUR Decompositions: Rapid Low-Tucker-Rank Tensor Recovery with Sparse Corruptions
https://epubs.siam.org/doi/abs/10.1137/23M1574282?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 225-247, March 2024. <br/> Abstract. We study the tensor robust principal component analysis (TRPCA) problem, a tensorial extension of matrix robust principal component analysis, which aims to split the given tensor into an underlying low-rank component and a sparse outlier component. This work proposes a fast algorithm, called robust tensor CUR decompositions (RTCUR), for large-scale nonconvex TRPCA problems under the Tucker rank setting. RTCUR is developed within a framework of alternating projections that projects between the set of low-rank tensors and the set of sparse tensors. We utilize the recently developed tensor CUR decomposition to substantially reduce the computational complexity in each projection. In addition, we develop four variants of RTCUR for different application settings. We demonstrate the effectiveness and computational advantages of RTCUR against state-of-the-art methods on both synthetic and real-world datasets.
10.1137/23M1574282
2024-01-25T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
HanQin Cai
Zehan Chao
Longxiu Huang
Deanna Needell
Volume 17, Issue 1, Pages 225–247
2024-03-31T07:00:00Z
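The computational saving in RTCUR comes from CUR-type decompositions, which rebuild a low-rank object from a few sampled rows and columns. The matrix case, which the tensor version generalizes mode by mode, is easy to state: X = C U R exactly whenever the sampled intersection has the same rank as X:

```python
import numpy as np

rng = np.random.default_rng(2)

def cur(X, rows, cols):
    """CUR decomposition X ≈ C @ U @ R from sampled rows and columns;
    exact when the sampled intersection block has the same rank as X.
    Matrix analogue of the tensor CUR decomposition used by RTCUR."""
    C = X[:, cols]
    R = X[rows, :]
    U = np.linalg.pinv(X[np.ix_(rows, cols)])  # pseudoinverse of the core
    return C, U, R

# Exactly rank-3 matrix; sample 3r rows/columns for robustness
X = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 80))
rows = rng.choice(60, size=9, replace=False)
cols = rng.choice(80, size=9, replace=False)
C, U, R = cur(X, rows, cols)
```

Only the sampled entries and a small pseudoinverse are touched, which is why each alternating projection in RTCUR becomes cheap.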
-
Image Segmentation Using Bayesian Inference for Convex Variant Mumford–Shah Variational Model
https://epubs.siam.org/doi/abs/10.1137/23M1545379?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 248-272, March 2024. <br/> Abstract. The Mumford–Shah model is a classical segmentation model, but its objective function is nonconvex. The smoothing and thresholding (SaT) approach is a convex variant of the Mumford–Shah model, which seeks a smoothed approximation solution to the Mumford–Shah model. The SaT approach separates the segmentation into two stages: first, a convex energy function is minimized to obtain a smoothed image; then, a thresholding technique is applied to segment the smoothed image. The energy function consists of three weighted terms and the weights are called the regularization parameters. Selecting appropriate regularization parameters is crucial to achieving effective segmentation results. Traditionally, the regularization parameters are chosen by trial-and-error, which is a very time-consuming procedure and is not practical in real applications. In this paper, we apply a Bayesian inference approach to infer the regularization parameters and estimate the smoothed image. We analyze the convex variant Mumford–Shah variational model from a statistical perspective and then construct a hierarchical Bayesian model. A mean field variational family is used to approximate the posterior distribution. The variational density of the smoothed image is assumed to have a Gaussian density, and the hyperparameters are assumed to have Gamma variational densities. All the parameters in the Gaussian density and Gamma densities are iteratively updated. Experimental results show that the proposed approach is capable of generating high-quality segmentation results. Although the proposed approach contains an inference step to estimate the regularization parameters, it requires less CPU running time to obtain the smoothed image than previous methods.
10.1137/23M1545379
2024-01-30T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Xu Xiao
Youwei Wen
Raymond Chan
Tieyong Zeng
Volume 17, Issue 1, Pages 248–272
2024-03-31T07:00:00Z
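The two-stage SaT pipeline can be sketched end to end. Here explicit heat-equation steps stand in for the model's convex smoothing stage (the paper minimizes a three-term convex energy with inferred weights), followed by the thresholding stage; all parameter choices below are illustrative:

```python
import numpy as np

def smooth_then_threshold(img, iters=20, step=0.2):
    """Two-stage segmentation in the SaT spirit:
    (1) smooth the noisy image (heat-equation steps as a stand-in for the
        convex smoothing stage);
    (2) threshold the smoothed image into two phases."""
    u = img.astype(float).copy()
    for _ in range(iters):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += step * lap                    # one explicit diffusion step
    thr = 0.5 * (u.min() + u.max())        # midpoint threshold
    return (u > thr).astype(int), u

rng = np.random.default_rng(3)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
labels, u = smooth_then_threshold(noisy)
```

The Bayesian contribution of the paper is to infer the smoothing-stage weights automatically instead of hand-tuning them, leaving the cheap thresholding stage unchanged.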
-
A Majorization-Minimization Algorithm for Neuroimage Registration
https://epubs.siam.org/doi/abs/10.1137/22M1516907?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 273-300, March 2024. <br/> Abstract. Intensity-based image registration is critical for neuroimaging tasks, such as 3D reconstruction, time-series alignment, and common coordinate mapping. The gradient-based optimization methods commonly used to solve this problem require a careful selection of step-length. This limitation imposes substantial time and computational costs. Here we propose a gradient-independent rigid-motion registration algorithm based on the majorization-minimization (MM) principle. Each iteration of our intensity-based MM algorithm reduces to a simple point-set rigid registration problem with a closed form solution that avoids the step-length issue altogether. The details of the algorithm are presented, and an error bound for its more practical truncated form is derived. The MM algorithm is shown to be more effective than gradient descent on simulated images and Nissl-stained coronal slices of mouse brain. We also compare and contrast the similarities and differences between the MM algorithm and another gradient-free registration algorithm called the block-matching method. Finally, extensions of this algorithm to more complex problems are discussed.
10.1137/22M1516907
SIAM Journal on Imaging Sciences
2024-02-05T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Gaiting Zhou
Daniel Tward
Kenneth Lange
17
1
273
300
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/22M1516907
https://epubs.siam.org/doi/abs/10.1137/22M1516907?ai=sd&mi=3bfys9&af=R
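The "closed-form solution" of the point-set rigid registration subproblem that each MM iteration reduces to is, in the standard rigid case, the classical Kabsch/Procrustes estimator. A minimal NumPy sketch of that classical step (illustrative only, not the authors' code; function name is made up):

```python
import numpy as np

def rigid_align(X, Y):
    """Closed-form least-squares rigid alignment (Kabsch/Procrustes).

    Finds rotation R and translation t minimizing ||R @ X + t - Y||_F
    for paired point sets X, Y of shape (d, n).
    """
    cx, cy = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Y - cy) @ (X - cx).T)
    # Reflection guard: force det(R) = +1 so the result is a rotation.
    D = np.diag([1.0] * (X.shape[0] - 1) + [np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = cy - R @ cx
    return R, t

# Example: recover a known 2D rotation + translation from paired points.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 50))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([[1.0], [-2.0]])
R, t = rigid_align(X, R_true @ X + t_true)
```

Because each such subproblem is solved exactly in closed form, no step-length tuning is needed, which is the point the abstract emphasizes.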
-
Posterior-Variance–Based Error Quantification for Inverse Problems in Imaging
https://epubs.siam.org/doi/abs/10.1137/23M1546129?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 301-333, March 2024. <br/> Abstract. In this work, a method for obtaining pixelwise error bounds in Bayesian regularization of inverse imaging problems is introduced. The proposed method employs estimates of the posterior variance together with techniques from conformal prediction in order to obtain coverage guarantees for the error bounds, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independent, e.g., of the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained when only approximate sampling from the posterior is possible. In particular, this enables the proposed framework to incorporate any learned prior in a black-box manner. Guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, experiments with multiple regularization approaches presented in the paper confirm that, in practice, the obtained error bounds are rather tight. For realizing the numerical experiments, a novel primal-dual Langevin algorithm for sampling from nonsmooth distributions is also introduced in this work, showing promising results in practice. While a proof of convergence for this primal-dual algorithm is still open, the theoretical guarantees of the proposed method do not require a guaranteed convergence of the sampling algorithm.
10.1137/23M1546129
SIAM Journal on Imaging Sciences
2024-02-07T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Dominik Narnhofer
Andreas Habring
Martin Holler
Thomas Pock
17
1
301
333
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1546129
https://epubs.siam.org/doi/abs/10.1137/23M1546129?ai=sd&mi=3bfys9&af=R
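The coverage mechanism described in this abstract builds on the generic split-conformal recipe: calibrate a scaling factor for the posterior standard deviation on held-out data, with a finite-sample quantile correction. A toy sketch of that standard technique (names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def conformal_scale(abs_err, post_std, alpha=0.1):
    """Split-conformal calibration of posterior-std-based error bounds.

    Given calibration-set absolute errors and posterior standard deviations,
    returns a factor q such that intervals mean +/- q*std cover a fresh
    sample with probability >= 1 - alpha (exchangeability assumed).
    """
    scores = abs_err / post_std                   # per-pixel conformity scores
    n = scores.size
    level = np.ceil((n + 1) * (1 - alpha)) / n    # finite-sample correction
    return np.quantile(scores, min(level, 1.0), method="higher")

# Toy calibration set: heteroscedastic Gaussian errors.
rng = np.random.default_rng(1)
std = rng.uniform(0.5, 2.0, 1000)
err = np.abs(rng.normal(0.0, std))
q = conformal_scale(err, std, alpha=0.1)
```

The guarantee is distribution-free precisely because q is learned from the data, which is why, as the abstract notes, the magnitude of the bounds cannot be known in advance.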
-
Reduced Order Modeling Inversion of Monostatic Data in a Multi-scattering Environment
https://epubs.siam.org/doi/abs/10.1137/23M1564365?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 334-350, March 2024. <br/> Abstract. Data-driven reduced order models (ROMs) have recently emerged as an efficient tool for the solution of inverse scattering problems with applications to seismic and sonar imaging. One requirement of this approach is that it uses the full square multiple-input/multiple-output (MIMO) matrix-valued transfer function as the data for multidimensional problems. The synthetic aperture radar (SAR), however, is limited to the single-input/single-output (SISO) measurements corresponding to the diagonal of the matrix transfer function. Here we present a ROM-based Lippmann–Schwinger approach overcoming this drawback. The ROMs are constructed to match the data for each source-receiver pair separately, and these are used to construct internal solutions for the corresponding source using only the data-driven Gramian. Efficiency of the proposed approach is demonstrated on 2D and 2.5D (3D propagation and 2D reflectors) numerical examples. The new algorithm not only suppresses multiple echoes seen in the Born imaging but also takes advantage of their illumination of some back sides of the reflectors, improving the quality of their mapping.
10.1137/23M1564365
SIAM Journal on Imaging Sciences
2024-02-08T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Vladimir Druskin
Shari Moskow
Mikhail Zaslavsky
17
1
334
350
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1564365
https://epubs.siam.org/doi/abs/10.1137/23M1564365?ai=sd&mi=3bfys9&af=R
-
The [math]-Laplace “Signature” for Quasilinear Inverse Problems
https://epubs.siam.org/doi/abs/10.1137/22M1527192?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 351-388, March 2024. <br/> Abstract. This paper addresses an imaging problem in the presence of nonlinear materials. Specifically, the problem we address falls within the framework of Electrical Resistance Tomography and involves two different materials, one or both of which are nonlinear. Tomography with nonlinear materials is in the early stages of development, although breakthroughs are expected in the not-too-distant future. The original contribution this work makes is that the nonlinear problem can be approximated by a weighted [math]-Laplace problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the [math]-Laplacian in inverse problems with nonlinear materials. Moreover, when [math], this result allows all the imaging methods and algorithms developed for linear materials to be brought into the arena of problems with nonlinear materials. The main result of this work is that for “small” Dirichlet data, (i) one material can be replaced by a perfect electric conductor and (ii) the other material can be replaced by a material giving rise to a weighted [math]-Laplace problem.
10.1137/22M1527192
SIAM Journal on Imaging Sciences
2024-02-15T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Antonio Corbo Esposito
Luisa Faella
Gianpaolo Piscitelli
Vincenzo Mottola
Ravi Prakash
Antonello Tamburrino
17
1
351
388
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/22M1527192
https://epubs.siam.org/doi/abs/10.1137/22M1527192?ai=sd&mi=3bfys9&af=R
-
The Cortical V1 Transform as a Heterogeneous Poisson Problem
https://epubs.siam.org/doi/abs/10.1137/23M1555958?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 389-414, March 2024. <br/> Abstract. Receptive profiles of the primary visual cortex (V1) cortical cells are very heterogeneous and act by differentiating the stimulus image as operators changing from point to point. In this paper we aim to show that the distribution of cells in V1, although not complete enough to reconstruct the original image, is sufficient to reconstruct the perceived image with subjective constancy. We show that a color constancy image can be reconstructed as the solution of the associated inverse problem, which is a Poisson equation with heterogeneous differential operators. At the neural level, the weights of short-range connectivity constitute the fundamental solution of the Poisson problem, adapted point by point. A first demonstration of convergence of the result towards homogeneous reconstructions is proposed by means of homogenization techniques.
10.1137/23M1555958
SIAM Journal on Imaging Sciences
2024-02-21T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Alessandro Sarti
Mattia Galeotti
Giovanna Citti
17
1
389
414
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1555958
https://epubs.siam.org/doi/abs/10.1137/23M1555958?ai=sd&mi=3bfys9&af=R
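In the homogeneous special case, recovering an image from second-derivative measurements is the classical FFT Poisson solve; the toy sketch below (not the paper's heterogeneous model) illustrates why such data determine the image up to an additive constant:

```python
import numpy as np

def laplacian(img):
    """Periodic 5-point discrete Laplacian."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def poisson_reconstruct(lap):
    """Recover an image (up to its mean) from its periodic 5-point
    Laplacian by dividing out the operator's Fourier symbol."""
    H, W = lap.shape
    ky = 2 * np.pi * np.fft.fftfreq(H)
    kx = 2 * np.pi * np.fft.fftfreq(W)
    # Eigenvalues of the periodic 5-point Laplacian (zero only at DC).
    denom = 2 * np.cos(ky)[:, None] + 2 * np.cos(kx)[None, :] - 4
    F = np.fft.fft2(lap)
    F[0, 0] = 0.0        # the constant component is lost: zero-mean gauge
    denom[0, 0] = 1.0
    return np.real(np.fft.ifft2(F / denom))

rng = np.random.default_rng(4)
img = rng.standard_normal((32, 32))
rec = poisson_reconstruct(laplacian(img))
```

The reconstruction matches the original up to its mean, mirroring the "up to a constant" indeterminacy typical of Poisson-type inverse problems.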
-
Analysis of View Aliasing for the Generalized Radon Transform in [math]
https://epubs.siam.org/doi/abs/10.1137/23M1554746?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 415-440, March 2024. <br/> Abstract. In this paper we consider the generalized Radon transform [math] in the plane. Let [math] be a piecewise smooth function that has a jump across a smooth curve [math]. We obtain a formula that accurately describes view aliasing artifacts away from [math] when [math] is reconstructed from the data [math] discretized in the view direction. The formula is asymptotic; it is established in the limit as the sampling rate [math]. The proposed approach does not require that [math] be band-limited. Numerical experiments with the classical Radon transform and the generalized Radon transform (which integrates over circles) demonstrate the accuracy of the formula.
10.1137/23M1554746
SIAM Journal on Imaging Sciences
2024-02-23T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Alexander Katsevich
17
1
415
440
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1554746
https://epubs.siam.org/doi/abs/10.1137/23M1554746?ai=sd&mi=3bfys9&af=R
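The setting analyzed here, classical Radon data discretized in the view direction, can be reproduced with a simple sinogram sampler applied to a phantom with a jump discontinuity; a small illustrative sketch (not the author's asymptotic formula):

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(img, n_views):
    """Classical Radon data sampled at finitely many view angles:
    for each angle, rotate the image and sum along one axis."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    proj = [rotate(img, a, reshape=False, order=1).sum(axis=0)
            for a in angles]
    return np.stack(proj), angles

# Disc phantom: piecewise constant, with a jump across a smooth curve,
# as in the abstract's setting.
n = 64
y, x = np.mgrid[:n, :n] - n / 2
img = (x**2 + y**2 < (n // 4) ** 2).astype(float)
sino, angles = sinogram(img, n_views=90)
```

Reconstructing from such a sinogram with too few views produces the streak-like view aliasing artifacts away from the jump curve that the paper's formula quantifies.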
-
Learnable Nonlocal Self-Similarity of Deep Features for Image Denoising
https://epubs.siam.org/doi/abs/10.1137/22M1536996?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 441-475, March 2024. <br/> Abstract. High-dimensional deep features extracted by convolutional neural networks have nonlocal self-similarity. However, incorporating this nonlocal prior of deep features into deep network architectures with an interpretable variational framework is rarely explored. In this paper, we propose a learnable nonlocal self-similarity deep feature network for image denoising. Our method is motivated by the fact that the high-dimensional deep features obey a mixture probability distribution based on the Parzen–Rosenblatt window method. Then a regularizer with learnable nonlocal weights is proposed by considering the dual representation of the log-probability prior of the deep features. Specifically, the nonlocal weights are introduced as dual variables that can be learned by unrolling the associated numerical scheme. This leads to nonlocal modules (NLMs) in newly designed networks. Our method provides a statistical and variational interpretation for the nonlocal self-attention mechanism widely used in various networks. By adopting nonoverlapping window and region decomposition techniques, we can significantly reduce the computational complexity of nonlocal self-similarity, thus enabling parallel computation of the NLM. The solution to the proposed variational problem can be formulated as a learnable nonlocal self-similarity network for image denoising. This work offers a novel approach for constructing network structures that consider self-similarity and nonlocality. The improvements achieved by this method are predictable and partially controllable. Compared with several closely related denoising methods, the experimental results show the effectiveness of the proposed method in image denoising.
10.1137/22M1536996
SIAM Journal on Imaging Sciences
2024-02-23T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Junying Meng
Faqiang Wang
Jun Liu
17
1
441
475
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/22M1536996
https://epubs.siam.org/doi/abs/10.1137/22M1536996?ai=sd&mi=3bfys9&af=R
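The nonlocal self-similarity prior this paper lifts to deep features can be illustrated in the pixel domain by classical nonlocal means, where each pixel is averaged over pixels with similar surrounding patches; a baseline sketch (the classical method, not the paper's learned deep-feature version):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.6):
    """Baseline nonlocal means: each pixel becomes a weighted average of
    pixels in a search window, with weights decaying in patch distance."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - p:ci + di + p + 1,
                               cj + dj - p:cj + dj + p + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(2)
clean = np.zeros((24, 24)); clean[:, 12:] = 1.0   # step edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = nlm_denoise(noisy)
```

The paper's NLM modules can be read as a learnable, deep-feature analogue of these fixed Gaussian patch weights.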
-
Fractional Fourier Transforms Meet Riesz Potentials and Image Processing
https://epubs.siam.org/doi/abs/10.1137/23M1555442?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 476-500, March 2024. <br/> Abstract. Via chirp functions from fractional Fourier transforms, we introduce fractional Riesz potentials related to chirp functions, which are further used to give a new image encryption method with double phase coding. Through a series of image encryption and decryption experiments, we demonstrate that, compared with the image encryption method based on fractional Fourier transforms, the symbols of fractional Riesz potentials related to chirp functions and the order of fractional Fourier transforms provide greater flexibility and information security. We also establish the relations of fractional Riesz potentials related to chirp functions with fractional Fourier transforms, fractional Laplace operators, and fractional Riesz transforms, and we obtain their boundedness on rotation invariant spaces.
10.1137/23M1555442
SIAM Journal on Imaging Sciences
2024-02-27T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Zunwei Fu
Yan Lin
Dachun Yang
Shuhui Yang
17
1
476
500
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1555442
https://epubs.siam.org/doi/abs/10.1137/23M1555442?ai=sd&mi=3bfys9&af=R
-
A Deep Learning Framework for Diffeomorphic Mapping Problems via Quasi-conformal Geometry Applied to Imaging
https://epubs.siam.org/doi/abs/10.1137/22M1516099?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 501-539, March 2024. <br/> Abstract. Many imaging problems can be formulated as mapping problems. A general mapping problem aims to obtain an optimal mapping that minimizes an energy functional subject to the given constraints. Existing methods to solve the mapping problems are often inefficient and can sometimes get trapped in local minima. An extra challenge arises when the optimal mapping is required to be diffeomorphic. In this work, we address the problem by proposing a deep-learning framework based on the Quasiconformal (QC) Teichmüller theories. The main strategy is to learn the Beltrami coefficient (BC) that represents a mapping as the latent feature vector in the deep neural network. The BC measures the local geometric distortion under the mapping, with which the interpretability of the deep neural network can be enhanced. Under this framework, the diffeomorphic property of the mapping can be controlled via a simple activation function within the network. The optimal mapping can also be easily regularized by integrating the BC into the loss function. A crucial advantage of the proposed framework is that once the network is successfully trained, the optimized mapping for each input can be obtained in real time. To examine the efficacy of the proposed framework, we apply the method to the diffeomorphic image registration problem. Experimental results show that our method outperforms other state-of-the-art registration algorithms in both efficiency and accuracy, demonstrating the effectiveness of the proposed framework for solving mapping problems.
10.1137/22M1516099
SIAM Journal on Imaging Sciences
2024-03-05T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Qiguang Chen
Zhiwen Li
Lok Ming Lui
17
1
501
539
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/22M1516099
https://epubs.siam.org/doi/abs/10.1137/22M1516099?ai=sd&mi=3bfys9&af=R
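The Beltrami coefficient used here as the latent representation is computable from a discrete planar map by finite differences, as mu = f_zbar / f_z; a small sketch of that standard definition (illustrative only, not the authors' network):

```python
import numpy as np

def beltrami(u, v):
    """Beltrami coefficient mu = f_zbar / f_z of a planar map f = u + i v,
    via centered finite differences. |mu| < 1 at a point means the map is
    locally orientation-preserving (quasiconformal) there."""
    u_y, u_x = np.gradient(u)   # np.gradient returns (axis-0, axis-1) parts
    v_y, v_x = np.gradient(v)
    f_z = 0.5 * ((u_x + v_y) + 1j * (v_x - u_y))
    f_zbar = 0.5 * ((u_x - v_y) + 1j * (v_x + u_y))
    return f_zbar / f_z

# Affine stretch f(x, y) = (2x, y): its Beltrami coefficient is 1/3 everywhere.
y, x = np.mgrid[0:32, 0:32].astype(float)
mu = beltrami(2 * x, y)
```

Penalizing |mu| (or clamping it below 1, e.g. with an activation) is what lets such frameworks enforce the diffeomorphic property the abstract describes.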
-
PottsMGNet: A Mathematical Explanation of Encoder-Decoder Based Neural Networks
https://epubs.siam.org/doi/abs/10.1137/23M1586355?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 540-594, March 2024. <br/> Abstract. For problems in image processing and many other fields, a large class of effective neural networks has encoder-decoder-based architectures. Although these networks have shown impressive performance, mathematical explanations of their architectures are still underdeveloped. In this paper, we study the encoder-decoder-based network architecture from the algorithmic perspective and provide a mathematical explanation. We use the two-phase Potts model for image segmentation as an example for our explanations. We associate the segmentation problem with a control problem in the continuous setting. Then, the continuous control model is time discretized by an operator-splitting scheme, the PottsMGNet, and space discretized by the multigrid method. We show that the resulting discrete PottsMGNet is equivalent to an encoder-decoder-based network. With minor modifications, it is shown that a number of the popular encoder-decoder-based neural networks are just instances of the proposed PottsMGNet. By incorporating soft-threshold dynamics into the PottsMGNet as a regularizer, the PottsMGNet has been shown to be robust with respect to network parameters, such as width and depth, and has achieved remarkable performance on datasets with very large noise. In nearly all our experiments, the new network performs as well as or better than existing image segmentation networks in terms of accuracy and Dice score.
10.1137/23M1586355
SIAM Journal on Imaging Sciences
2024-03-07T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Xue-Cheng Tai
Hao Liu
Raymond Chan
17
1
540
594
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1586355
https://epubs.siam.org/doi/abs/10.1137/23M1586355?ai=sd&mi=3bfys9&af=R
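The operator-splitting view of the two-phase Potts model can be illustrated by a plain MBO-style threshold-dynamics scheme: alternate region-mean updates with a diffuse-then-threshold step. This is a hand-rolled toy of that classical scheme (all parameter values are arbitrary), not PottsMGNet itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def potts_two_phase(f, lam=1.0, sigma=1.5, iters=20):
    """MBO-style threshold dynamics for two-phase segmentation of image f:
    diffuse the current indicator, then threshold against the data term."""
    u = (f > f.mean()).astype(float)   # initial labeling
    for _ in range(iters):
        c0 = f[u == 0].mean() if (u == 0).any() else f.min()
        c1 = f[u == 1].mean() if (u == 1).any() else f.max()
        smooth = gaussian_filter(u, sigma)             # diffusion step
        # Threshold: balance data fit against the smoothed indicator.
        score = (2 * smooth - 1) + lam * ((f - c0) ** 2 - (f - c1) ** 2)
        u = (score > 0).astype(float)
    return u

rng = np.random.default_rng(3)
truth = np.zeros((40, 40)); truth[10:30, 10:30] = 1.0
noisy = truth + 0.5 * rng.standard_normal(truth.shape)
seg = potts_two_phase(noisy)
```

The diffuse and threshold substeps play roles loosely analogous to the convolution and activation layers in the encoder-decoder correspondence the paper makes precise.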
-
Numerical Implementation of Generalized V-Line Transforms on 2D Vector Fields and their Inversions
https://epubs.siam.org/doi/abs/10.1137/23M1573112?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 595-631, March 2024. <br/> Abstract. The paper discusses numerical implementations of various inversion schemes for generalized V-line transforms on vector fields introduced in [G. Ambartsoumian, M. J. Latifi, and R. K. Mishra, Inverse Problems, 36 (2020), 104002]. It demonstrates the possibility of efficient recovery of an unknown vector field from five different types of data sets, with and without noise. We examine the performance of the proposed algorithms in a variety of setups, and illustrate our results with numerical simulations on different phantoms.
10.1137/23M1573112
SIAM Journal on Imaging Sciences
2024-03-07T08:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Gaik Ambartsoumian
Mohammad J. Latifi Jebelli
Rohit K. Mishra
17
1
595
631
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1573112
https://epubs.siam.org/doi/abs/10.1137/23M1573112?ai=sd&mi=3bfys9&af=R
© 2024 Society for Industrial and Applied Mathematics
-
Polarimetric Fourier Phase Retrieval
https://epubs.siam.org/doi/abs/10.1137/23M1570971?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 632-671, March 2024. <br/> Abstract. This work introduces polarimetric Fourier phase retrieval (PPR), a physically inspired model to leverage polarization of light information in Fourier phase retrieval problems. We provide a complete characterization of its uniqueness properties by unraveling equivalencies with two related problems, namely, bivariate phase retrieval and a polynomial autocorrelation factorization problem. In particular, we show that the problem admits a unique solution, which can be formulated as a greatest common divisor (GCD) of measurement polynomials. As a result, we propose algebraic solutions for PPR based on approximate GCD computations using the null-space properties of Sylvester matrices. Alternatively, existing iterative algorithms for phase retrieval, semidefinite positive relaxation and Wirtinger flow, are carefully adapted to solve the PPR problem. Finally, a set of numerical experiments permits a detailed assessment of the numerical behavior and relative performances of each proposed reconstruction strategy. They further demonstrate the fruitful combination of algebraic and iterative approaches toward a scalable, computationally efficient, and robust to noise reconstruction strategy for PPR.
Polarimetric Fourier Phase Retrieval
10.1137/23M1570971
SIAM Journal on Imaging Sciences
2024-03-11T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Julien Flamant
Konstantin Usevich
Marianne Clausel
David Brie
Polarimetric Fourier Phase Retrieval
17
1
632
671
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1570971
https://epubs.siam.org/doi/abs/10.1137/23M1570971?ai=sd&mi=3bfys9&af=R
© 2024 Society for Industrial and Applied Mathematics
-
A Boundary Integral Equation Method for the Complete Electrode Model in Electrical Impedance Tomography with Tests on Experimental Data
https://epubs.siam.org/doi/abs/10.1137/23M1585696?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 672-705, March 2024. <br/> Abstract. We develop a boundary integral equation–based numerical method to solve for the electrostatic potential in two dimensions, inside a medium with piecewise constant conductivity, where the boundary condition is given by the complete electrode model (CEM). The CEM is seen as the most accurate model of the physical setting where electrodes are placed on the surface of an electrically conductive body, currents are injected through the electrodes, and the resulting voltages are measured again on these same electrodes. The integral equation formulation is based on expressing the electrostatic potential as the solution to a finite number of Laplace equations which are coupled through boundary matching conditions. This allows us to re-express the solution in terms of single-layer potentials; the problem is thus recast as a system of integral equations on a finite number of smooth curves. We discuss an adaptive method for the solution of the resulting system of mildly singular integral equations. This forward solver is both fast and accurate. We then present a numerical inverse solver for electrical impedance tomography which uses our forward solver at its core. To demonstrate the applicability of our results we test our numerical methods on an open electrical impedance tomography data set provided by the Finnish Inverse Problems Society.
A Boundary Integral Equation Method for the Complete Electrode Model in Electrical Impedance Tomography with Tests on Experimental Data
10.1137/23M1585696
SIAM Journal on Imaging Sciences
2024-03-20T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Teemu Tyni
Adam R. Stinchcombe
Spyros Alexakis
A Boundary Integral Equation Method for the Complete Electrode Model in Electrical Impedance Tomography with Tests on Experimental Data
17
1
672
705
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1585696
https://epubs.siam.org/doi/abs/10.1137/23M1585696?ai=sd&mi=3bfys9&af=R
© 2024 Society for Industrial and Applied Mathematics
-
Bijective Density-Equalizing Quasiconformal Map for Multiply Connected Open Surfaces
https://epubs.siam.org/doi/abs/10.1137/23M1594376?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/17/1">Volume 17, Issue 1</a>, Page 706-755, March 2024. <br/> Abstract. This paper proposes a novel method for computing bijective density-equalizing quasiconformal flattening maps for multiply connected open surfaces. In conventional density-equalizing maps, shape deformations are solely driven by prescribed constraints on the density distribution, defined as the population per unit area, while the bijectivity and local geometric distortions of the mappings are uncontrolled. Also, prior methods have primarily focused on simply connected open surfaces but not surfaces with more complicated topologies. Our proposed method overcomes these issues by formulating the density diffusion process as a quasiconformal flow, which allows us to effectively control the local geometric distortion and guarantee the bijectivity of the mapping by solving an energy minimization problem involving the Beltrami coefficient of the mapping. To achieve an optimal parameterization of multiply connected surfaces, we develop an iterative scheme that optimizes both the shape of the target planar circular domain and the density-equalizing quasiconformal map onto it. In addition, landmark constraints can be incorporated into our proposed method for consistent feature alignment. The method can also be naturally applied to simply connected open surfaces. By changing the prescribed population, a large variety of surface flattening maps with different desired properties can be achieved. The method is tested on both synthetic and real examples, demonstrating its efficacy in various applications in computer graphics and medical imaging.
Bijective Density-Equalizing Quasiconformal Map for Multiply Connected Open Surfaces
10.1137/23M1594376
SIAM Journal on Imaging Sciences
2024-03-28T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Zhiyuan Lyu
Gary P. T. Choi
Lok Ming Lui
Bijective Density-Equalizing Quasiconformal Map for Multiply Connected Open Surfaces
17
1
706
755
2024-03-31T07:00:00Z
2024-03-31T07:00:00Z
10.1137/23M1594376
https://epubs.siam.org/doi/abs/10.1137/23M1594376?ai=sd&mi=3bfys9&af=R
© 2024 Society for Industrial and Applied Mathematics
-
Subaperture-Based Digital Aberration Correction for Optical Coherence Tomography: A Novel Mathematical Approach
https://epubs.siam.org/doi/abs/10.1137/22M1543240?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 1857-1885, December 2023. <br/> Abstract. In this paper, we consider subaperture-based approaches for the digital aberration correction (DAC) of optical coherence tomography (OCT) images. In particular, we introduce a mathematical framework for describing this class of approaches, leading to new insights for the subaperture-correlation method. Furthermore, we propose a novel DAC approach requiring only minimal statistical assumptions on the spectral phase of the scanned object. Finally, we demonstrate the applicability of our novel DAC approach via numerical examples based on both simulated and experimental OCT data.
Subaperture-Based Digital Aberration Correction for Optical Coherence Tomography: A Novel Mathematical Approach
10.1137/22M1543240
SIAM Journal on Imaging Sciences
2023-10-11T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Simon Hubmer
Ekaterina Sherina
Ronny Ramlau
Michael Pircher
Rainer Leitgeb
Subaperture-Based Digital Aberration Correction for Optical Coherence Tomography: A Novel Mathematical Approach
16
4
1857
1885
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/22M1543240
https://epubs.siam.org/doi/abs/10.1137/22M1543240?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
[math] Minimization for Signal and Image Recovery
https://epubs.siam.org/doi/abs/10.1137/22M1525363?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 1886-1928, December 2023. <br/> Abstract. The nonconvex optimization method has attracted increasing attention due to its excellent ability to promote sparsity in signal processing, image restoration, and machine learning. In this paper, we consider a new minimization method [math] [math] and its applications in signal recovery and image reconstruction because [math] minimization provides an effective way to solve the [math]-ratio sparsity minimization model. Our main contributions are to establish a convex hull decomposition for [math] and investigate RIP-based conditions for stable signal recovery and image reconstruction by [math] minimization. For one-dimensional signal recovery, our derived RIP condition extends existing results. For two-dimensional image recovery under [math] minimization of image gradients, we provide the error estimate of the resulting optimal solutions in terms of sparsity and noise level, which is missing in the literature. Numerical results of the limited angle problem in computed tomography imaging and image deblurring are presented to validate the efficiency and superiority of the proposed minimization method among the state-of-the-art image recovery methods.
[math] Minimization for Signal and Image Recovery
10.1137/22M1525363
SIAM Journal on Imaging Sciences
2023-10-11T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Limei Huo
Wengu Chen
Huanmin Ge
Michael K. Ng
[math] Minimization for Signal and Image Recovery
16
4
1886
1928
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/22M1525363
https://epubs.siam.org/doi/abs/10.1137/22M1525363?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
A Data-Assisted Two-Stage Method for the Inverse Random Source Problem
https://epubs.siam.org/doi/abs/10.1137/23M1562561?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 1929-1952, December 2023. <br/> Abstract. We propose a data-assisted two-stage method for solving an inverse random source problem of the Helmholtz equation. In the first stage, the regularized Kaczmarz method is employed to generate initial approximations of the mean and variance based on the mild solution of the stochastic Helmholtz equation. A dataset is then obtained by sampling the approximate and corresponding true profiles from a certain a priori criterion. The second stage is formulated as an image-to-image translation problem, and several data-assisted approaches are utilized to handle the dataset and obtain enhanced reconstructions. Numerical experiments demonstrate that the data-assisted two-stage method provides satisfactory reconstruction for both homogeneous and inhomogeneous media with fewer realizations.
A Data-Assisted Two-Stage Method for the Inverse Random Source Problem
10.1137/23M1562561
SIAM Journal on Imaging Sciences
2023-10-12T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Peijun Li
Ying Liang
Yuliang Wang
A Data-Assisted Two-Stage Method for the Inverse Random Source Problem
16
4
1929
1952
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/23M1562561
https://epubs.siam.org/doi/abs/10.1137/23M1562561?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
Convolutional Forward Models for X-Ray Computed Tomography
https://epubs.siam.org/doi/abs/10.1137/21M1464191?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 1953-1977, December 2023. <br/> Abstract. This paper presents a framework for efficient and accurate computation of X-ray optics, a key ingredient in optimization-based computed tomography (CT) reconstruction algorithms. Based on an algebraic framework for directional convolution in image space and detector space, we construct forward models for X-ray imaging whose computational cost can be optimized for each specific CT geometry. While the framework allows for modeling various sources of blur in the X-ray imaging process for any CT geometry, we demonstrate and characterize its effectiveness in fan-beam and cone-beam geometries with flat detectors. The experiments show improvements in computational efficiency as well as accuracy, in optics calculations and reconstruction error, of the proposed projector compared to the state-of-the-art methods used in forward- and back-projection algorithms.
Convolutional Forward Models for X-Ray Computed Tomography
10.1137/21M1464191
SIAM Journal on Imaging Sciences
2023-10-12T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Kai Zhang
Alireza Entezari
Convolutional Forward Models for X-Ray Computed Tomography
16
4
1953
1977
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/21M1464191
https://epubs.siam.org/doi/abs/10.1137/21M1464191?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
A Common Lines Approach for Ab Initio Modeling of Molecules with Tetrahedral and Octahedral Symmetry
https://epubs.siam.org/doi/abs/10.1137/22M150383X?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 1978-2014, December 2023. <br/> Abstract. A main task in cryo-electron microscopy single particle reconstruction is to find a three-dimensional model of a molecule given a set of its randomly oriented and positioned projection-images. In this work, we propose an algorithm for ab initio reconstruction for molecules with tetrahedral or octahedral symmetry. The algorithm exploits the multiple common lines between each pair of projection-images as well as self common lines within each image, and integrates the information from all images at once. The applicability of the proposed algorithm is demonstrated using simulated and experimental cryo-electron microscopy data.
A Common Lines Approach for Ab Initio Modeling of Molecules with Tetrahedral and Octahedral Symmetry
10.1137/22M150383X
SIAM Journal on Imaging Sciences
2023-10-18T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Adi Shasha Geva
Yoel Shkolnisky
A Common Lines Approach for Ab Initio Modeling of Molecules with Tetrahedral and Octahedral Symmetry
16
4
1978
2014
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/22M150383X
https://epubs.siam.org/doi/abs/10.1137/22M150383X?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
Sequential Model Correction for Nonlinear Inverse Problems
https://epubs.siam.org/doi/abs/10.1137/23M1549286?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2015-2039, December 2023. <br/> Abstract. Inverse problems are in many cases solved with optimization techniques. When the underlying model is linear, first-order gradient methods are usually sufficient. With nonlinear models, due to nonconvexity, one must often resort to second-order methods that are computationally more expensive. In this work we aim to approximate a nonlinear model with a linear one and correct the resulting approximation error. We develop a sequential method that iteratively solves a linear inverse problem and updates the approximation error by evaluating it at the new solution. This treatment convexifies the problem and allows us to benefit from established convex optimization methods. We separately consider cases where the approximation is fixed over iterations and where the approximation is adaptive. In the fixed case we show theoretically under what assumptions the sequence converges. In the adaptive case, particularly considering the special case of approximation by first-order Taylor expansion, we show that with certain assumptions the sequence converges to a critical point of the original nonconvex functional. Furthermore, we show that with quadratic objective functions the sequence corresponds to the Gauss–Newton method. Finally, we showcase numerical results superior to the conventional model correction method. We also show that a fixed approximation can provide competitive results with considerable computational speed-up.
Sequential Model Correction for Nonlinear Inverse Problems
10.1137/23M1549286
SIAM Journal on Imaging Sciences
2023-10-19T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Arttu Arjas
Mikko J. Sillanpää
Andreas S. Hauptmann
Sequential Model Correction for Nonlinear Inverse Problems
16
4
2015
2039
2023-12-31T08:00:00Z
2023-12-31T08:00:00Z
10.1137/23M1549286
https://epubs.siam.org/doi/abs/10.1137/23M1549286?ai=sd&mi=3bfys9&af=R
© 2023 Society for Industrial and Applied Mathematics
-
The Split Gibbs Sampler Revisited: Improvements to Its Algorithmic Structure and Augmented Target Distribution
https://epubs.siam.org/doi/abs/10.1137/22M1506122?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2040-2071, December 2023. <br/> Abstract. Developing efficient Bayesian computation algorithms for imaging inverse problems is challenging due to the dimensionality involved and because Bayesian imaging models are often not smooth. Current state-of-the-art methods often address these difficulties by replacing the posterior density with a smooth approximation that is amenable to efficient exploration by using Langevin Markov chain Monte Carlo (MCMC) methods. Such methods rely on gradient or proximal operators to exploit geometric information about the target posterior density and scale efficiently to large problems. An alternative approach is based on data augmentation and relaxation, where auxiliary variables are introduced in order to construct an approximate augmented posterior distribution that is amenable to efficient exploration by Gibbs sampling. This paper proposes a new accelerated proximal MCMC method called latent space SK-ROCK (ls-SK-ROCK), which tightly combines the benefits of the two aforementioned strategies. Additionally, instead of viewing the augmented posterior distribution as an approximation of the original model, we propose to consider it as a generalization of this model. Following on from this, we empirically show that there is a range of values for the relaxation parameter for which the accuracy of the model improves and propose a stochastic optimization algorithm to automatically identify the optimal amount of relaxation for a given problem. In this regime, ls-SK-ROCK converges faster than competing approaches from the state of the art, and it also achieves better accuracy since the underlying augmented Bayesian model has a higher Bayesian evidence. The proposed methodology is demonstrated with a range of numerical experiments related to image deblurring and inpainting, as well as with comparisons with alternative approaches from the state of the art. An open-source implementation of the proposed MCMC methods is available from https://github.com/luisvargasmieles/ls-MCMC.
The Split Gibbs Sampler Revisited: Improvements to Its Algorithmic Structure and Augmented Target Distribution
10.1137/22M1506122
SIAM Journal on Imaging Sciences
2023-11-10T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Marcelo Pereyra
Luis A. Vargas-Mieles
Konstantinos C. Zygalakis
16
4
2040
2071
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/22M1506122?ai=sd&mi=3bfys9&af=R
-
Spherical Framelets from Spherical Designs
https://epubs.siam.org/doi/abs/10.1137/22M1542362?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2072-2104, December 2023. <br/> Abstract. In this paper, we investigate in detail the structures of the variational characterization [math] of the spherical [math]-design, its gradient [math], and its Hessian [math] in terms of fast spherical harmonic transforms. Moreover, we propose solving the minimization problem of [math] using the trust-region method to provide spherical [math]-designs with large values of [math]. Based on the obtained spherical [math]-designs, we develop (semidiscrete) spherical tight framelets as well as their truncated systems and their fast spherical framelet transforms for practical spherical signal/image processing. Thanks to the large spherical [math]-designs and the localization property of our spherical framelets, we are able to provide signal/image denoising using local thresholding techniques based on a fine-tuned spherical cap restriction. Many numerical experiments are conducted to demonstrate the efficiency and effectiveness of our spherical framelets and spherical designs, including Wendland function approximation, ETOPO data processing, and spherical image denoising.
10.1137/22M1542362
SIAM Journal on Imaging Sciences
2023-11-14T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Yuchen Xiao
Xiaosheng Zhuang
16
4
2072
2104
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/22M1542362?ai=sd&mi=3bfys9&af=R
-
An Operator Theory for Analyzing the Resolution of Multi-illumination Imaging Modalities
https://epubs.siam.org/doi/abs/10.1137/23M1551730?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2105-2143, December 2023. <br/> Abstract. By introducing a new operator theory, we provide a unified mathematical theory for general source resolution in the multi-illumination imaging problem. Our main idea is to transform multi-illumination imaging into single-snapshot imaging with a new imaging kernel that depends on both the illumination patterns and the point spread function of the imaging system. We therefore prove that the resolution of multi-illumination imaging is approximately determined by the essential cutoff frequency of the new imaging kernel, which is roughly limited by the sum of the cutoff frequency of the point spread function and the maximum essential frequency in the illumination patterns. Our theory provides a unified way to estimate the resolution of various existing super-resolution modalities and results in the same estimates as those obtained in experiments. In addition, based on the reformulation of the multi-illumination imaging problem, we also estimate the resolution limits for resolving both complex and positive sources by sparsity-based approaches. We show that the resolution of multi-illumination imaging is approximately determined by the new imaging kernel from our operator theory and better resolution can be realized by sparsity-promoting techniques in practice but only for resolving very sparse sources. This explains experimentally observed phenomena in some sparsity-based super-resolution modalities.
10.1137/23M1551730
SIAM Journal on Imaging Sciences
2023-11-15T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Ping Liu
Habib Ammari
16
4
2105
2143
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/23M1551730?ai=sd&mi=3bfys9&af=R
-
Transionospheric Autofocus for Synthetic Aperture Radar
https://epubs.siam.org/doi/abs/10.1137/22M153570X?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2144-2174, December 2023. <br/> Abstract. Turbulent fluctuations of the electron number density in the Earth’s ionosphere may hamper the performance of spaceborne synthetic aperture radar (SAR). Previously, we have quantified the extent of the possible degradation of transionospheric SAR images as it depends on the state of the ionosphere and parameters of the SAR instrument. Yet no attempt has been made to mitigate the adverse effect of the ionospheric turbulence. In the current work, we propose a new optimization-based autofocus algorithm that helps correct the turbulence-induced distortions of spaceborne SAR images. Unlike the traditional autofocus procedures available in the literature, the new algorithm allows for the dependence of the phase perturbations of SAR signals not only on slow time but also on the target coordinates. This dependence is central for the analysis of image distortions due to turbulence, but in the case of traditional autofocus where the distortions are due to uncertainties in the antenna position, it is not present.
10.1137/22M153570X
SIAM Journal on Imaging Sciences
2023-11-20T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Mikhail Gilman
Semyon V. Tsynkov
16
4
2144
2174
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/22M153570X?ai=sd&mi=3bfys9&af=R
-
IFF: A Superresolution Algorithm for Multiple Measurements
https://epubs.siam.org/doi/abs/10.1137/23M1568569?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2175-2201, December 2023. <br/> Abstract. We consider the problem of reconstructing one-dimensional point sources from their Fourier measurements in a bounded interval [math]. This problem is known to be challenging in the regime where the spacing of the sources is below the Rayleigh length [math]. In this paper, we propose a superresolution algorithm, called iterative focusing-localization and filtering, to resolve closely spaced point sources from their multiple measurements that are obtained by using multiple unknown illumination patterns. The proposed algorithm has the distinct feature that it reconstructs the point sources one by one in an iterative manner and hence requires no prior information about the number of sources. This feature also allows for a subsampling strategy that can reconstruct sources using small-sized Hankel matrices and thus circumvent the computation of singular-value decomposition for large matrices as in the usual subspace methods. In addition, the algorithm can be parallelized. A theoretical analysis of the methods behind the algorithm is also provided. The derived results imply a phase transition phenomenon in the reconstruction of source locations, which is confirmed in the numerical experiments. Numerical results show that the algorithm can achieve a stable reconstruction for point sources with a minimum separation distance that is close to the theoretical limit. The efficiency and robustness of the algorithm have also been tested. This algorithm can be generalized to higher dimensions.
10.1137/23M1568569
SIAM Journal on Imaging Sciences
2023-11-27T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Zetao Fei
Hai Zhang
16
4
2175
2201
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/23M1568569?ai=sd&mi=3bfys9&af=R
-
Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling
https://epubs.siam.org/doi/abs/10.1137/23M1552486?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2202-2246, December 2023. <br/> Abstract. We introduce a method for the fast estimation of data-adapted, spatially and temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. The proposed approach is inspired by recent developments in algorithm unrolling using deep neural networks (NNs) and relies on two distinct subnetworks. The first subnetwork estimates the regularization parameter-map from the input data. The second subnetwork unrolls [math] iterations of an iterative algorithm which approximately solves the corresponding TV-minimization problem incorporating the previously estimated regularization parameter-map. The overall network is then trained end-to-end in a supervised learning fashion using pairs of clean and corrupted data but crucially without the need for access to labels for the optimal regularization parameter-maps. We first prove consistency of the unrolled scheme by showing that the unrolled minimizing energy functional used for the supervised learning [math]-converges, as [math] tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. Then, we apply and evaluate the proposed method on a variety of large-scale and dynamic imaging problems with retrospectively simulated measurement data for which the automatic computation of such regularization parameters has been so far challenging using the state-of-the-art methods: a 2D dynamic cardiac magnetic resonance imaging (MRI) reconstruction problem, a quantitative brain MRI reconstruction problem, a low-dose computed tomography problem, and a dynamic image denoising problem. 
The proposed method consistently improves upon TV reconstructions that use scalar regularization parameters, and the obtained regularization parameter-maps adapt well to the imaging problems and data, leading to the preservation of detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the subsequent reconstruction algorithm is interpretable since it inherits the properties (e.g., convergence guarantees) of the iterative reconstruction method from which the network is implicitly defined.
10.1137/23M1552486
SIAM Journal on Imaging Sciences
2023-11-29T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Andreas Kofler
Fabian Altekrüger
Fatima Antarou Ba
Christoph Kolbitsch
Evangelos Papoutsellis
David Schote
Clemens Sirotenko
Felix Frederik Zimmermann
Kostas Papafitsoros
16
4
2202
2246
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/23M1552486?ai=sd&mi=3bfys9&af=R
-
Self-Supervised Deep Learning for Image Reconstruction: A Langevin Monte Carlo Approach
https://epubs.siam.org/doi/abs/10.1137/23M1548025?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page 2247-2284, December 2023. <br/> Abstract. Deep learning has proved to be a powerful tool for solving inverse problems in imaging, and most of the related work is based on supervised learning. In many applications, collecting truth images is a challenging and costly task, and the prerequisite of having a training dataset of truth images limits its applicability. This paper proposes a self-supervised deep learning method for solving inverse imaging problems that does not require any training samples. The proposed approach is built on a reparametrization of latent images using a convolutional neural network, and the reconstruction is motivated by approximating the minimum mean square error estimate of the latent image using a Langevin dynamics–based Monte Carlo (MC) method. To efficiently sample the network weights in the context of image reconstruction, we propose a Langevin MC scheme called Adam-LD, inspired by the well-known optimizer in deep learning, Adam. The proposed method is applied to solve linear and nonlinear inverse problems, specifically, sparse-view computed tomography image reconstruction and phase retrieval. Our experiments demonstrate that the proposed method outperforms existing unsupervised or self-supervised solutions in terms of reconstruction quality.
10.1137/23M1548025
SIAM Journal on Imaging Sciences
2023-11-30T08:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Ji Li
Weixi Wang
Hui Ji
16
4
2247
2284
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/23M1548025?ai=sd&mi=3bfys9&af=R
-
Short Communication: Localized Adversarial Artifacts for Compressed Sensing MRI
https://epubs.siam.org/doi/abs/10.1137/22M1503221?ai=sd&mi=3bfys9&af=R
SIAM Journal on Imaging Sciences, <a href="https://epubs.siam.org/toc/sjisbi/16/4">Volume 16, Issue 4</a>, Page SC14-SC26, December 2023. <br/> Abstract. As interest in deep neural networks (DNNs) for image reconstruction tasks grows, their reliability has been called into question [V. Antun, F. Renna, C. Poon, B. Adcock, and A. C. Hansen, Proc. Natl. Acad. Sci. USA, 117 (2020), pp. 30088–30095; N. M. Gottschling, V. Antun, B. Adcock, and A. C. Hansen, The Troublesome Kernel: Why Deep Learning for Inverse Problems Is Typically Unstable, preprint, arXiv:2001.01258, 2020]. However, recent work has shown that, compared to total variation (TV) minimization, when appropriately regularized, DNNs show similar robustness to adversarial noise in terms of [math]-reconstruction error [M. Genzel, J. Macdonald, and M. März, IEEE Trans. Pattern Anal., 45 (2022), pp. 1119–1134]. We consider a different notion of robustness, using the [math]-norm, and argue that localized reconstruction artifacts are a more relevant defect than the [math]-error. We create adversarial perturbations to undersampled magnetic resonance imaging measurements (in the frequency domain) which induce severe localized artifacts in the TV-regularized reconstruction. Notably, the same attack method is not as effective against DNN-based reconstruction. Finally, we show that this phenomenon is inherent to reconstruction methods for which exact recovery can be guaranteed, as with compressed sensing reconstructions with [math]- or TV-minimization.
10.1137/22M1503221
SIAM Journal on Imaging Sciences
2023-10-10T07:00:00Z
© 2023 Society for Industrial and Applied Mathematics
Rima Alaifari
Giovanni S. Alberti
Tandri Gauksson
16
4
SC14
SC26
2023-12-31T08:00:00Z
https://epubs.siam.org/doi/abs/10.1137/22M1503221?ai=sd&mi=3bfys9&af=R