Society for Industrial and Applied Mathematics: SIAM Review: Table of Contents
Table of Contents for SIAM Review. List of articles from both the latest and ahead-of-print issues.
https://epubs.siam.org/loi/siread?ai=s5&mi=3bfys9&af=R
Society for Industrial and Applied Mathematics
en-US
SIAM Review
https://epubs.siam.org/na101/home/literatum/publisher/siam/journals/covergifs/siread/cover.jpg
-
Survey and Review
https://epubs.siam.org/doi/abs/10.1137/24N975827?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 1-1, February 2024. <br/> Numerical methods for partial differential equations can only be successful if their numerical solutions reflect fundamental properties of the physical solution of the respective PDE. For convection-diffusion equations, the conservation of some specific scalar quantities is crucial. When physical solutions satisfy maximum principles representing physical bounds, then the numerical solutions should respect the same bounds. In a mathematical setting, this requirement is known as the discrete maximum principle (DMP). Discretizations which fail to fulfill the DMP are prone to numerical solutions with unphysical values, e.g., spurious oscillations. However, when convection largely dominates diffusion, many discretization methods do not satisfy a DMP. In the only article of the Survey and Review section of this issue, “Finite Element Methods Respecting the Discrete Maximum Principle for Convection-Diffusion Equations,” Gabriel R. Barrenechea, Volker John, and Petr Knobloch analyze finite element methods that succeed in complying with the DMP while providing accurate numerical solutions at the same time. This is a nontrivial task and, thus, even for the steady-state problem there are only a few such discretizations, all of them nonlinear. Most of these methods have been developed quite recently, so the presentation reflects the state of the art and spotlights the substantial progress accomplished in recent years. The paper provides a survey of finite element methods that satisfy local or global DMPs for linear elliptic or parabolic problems. It is worth reading for a broad audience.
Survey and Review
10.1137/24N975827
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Marlis Hochbruck
Survey and Review
66
1
1
1
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/24N975827
https://epubs.siam.org/doi/abs/10.1137/24N975827?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Finite Element Methods Respecting the Discrete Maximum Principle for Convection-Diffusion Equations
https://epubs.siam.org/doi/abs/10.1137/22M1488934?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 3-88, February 2024. <br/> Convection-diffusion-reaction equations model the conservation of scalar quantities. From the analytic point of view, solutions of these equations satisfy, under certain conditions, maximum principles, which represent physical bounds of the solution. That the same bounds are respected by numerical approximations of the solution is often of utmost importance in practice. The mathematical formulation of this property, which contributes to the physical consistency of a method, is called the discrete maximum principle (DMP). In many applications, convection dominates diffusion by several orders of magnitude. It is well known that standard discretizations typically do not satisfy the DMP in this convection-dominated regime. In fact, in this case it turns out to be a challenging problem to construct discretizations that, on the one hand, respect the DMP and, on the other hand, compute accurate solutions. This paper presents a survey on finite element methods, with the main focus on the convection-dominated regime, that satisfy a local or a global DMP. The concepts of the underlying numerical analysis are discussed. The survey reveals that for the steady-state problem there are only a few discretizations, all of them nonlinear, that at the same time both satisfy the DMP and compute reasonably accurate solutions, e.g., algebraically stabilized schemes. Moreover, most of these discretizations have been developed in recent years, showing the enormous progress that has been achieved lately. Similarly, methods based on algebraic stabilization, both nonlinear and linear, are currently the only finite element methods that combine the satisfaction of the global DMP and accurate numerical results for the evolutionary equations in the convection-dominated scenario.
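The failure of standard discretizations in the convection-dominated regime is easy to reproduce. The sketch below is an illustrative setup of our own, not taken from the paper; it applies central and upwind finite differences to the 1D model problem $-\varepsilon u'' + u' = 0$, $u(0)=0$, $u(1)=1$. Once the mesh Péclet number $h/(2\varepsilon)$ exceeds 1, the central scheme violates the DMP and oscillates, while simple upwinding stays within the physical bounds $[0,1]$ at the price of extra numerical diffusion.

```python
# Illustrative 1D convection-diffusion experiment (all names are ours):
# solve  -eps u'' + u' = 0  on (0,1) with u(0)=0, u(1)=1 by finite
# differences, with the convection term u' discretized either centrally
# or by backward (upwind) differences.
import numpy as np

def solve(eps, n, upwind=False):
    """Solve on n subintervals; returns the n-1 interior values."""
    h = 1.0 / n
    if upwind:   # backward difference for u' keeps an M-matrix (DMP holds)
        lo, di, up = -eps / h**2 - 1.0 / h, 2 * eps / h**2 + 1.0 / h, -eps / h**2
    else:        # central difference for u' (DMP lost when h/(2 eps) > 1)
        lo, di, up = -eps / h**2 - 1.0 / (2 * h), 2 * eps / h**2, -eps / h**2 + 1.0 / (2 * h)
    m = n - 1                                  # number of interior unknowns
    A = np.diag([di] * m) + np.diag([lo] * (m - 1), -1) + np.diag([up] * (m - 1), 1)
    b = np.zeros(m)
    b[-1] = -up                                # boundary value u(1) = 1
    return np.linalg.solve(A, b)

u_c = solve(eps=1e-3, n=20)                    # mesh Peclet number = 25 >> 1
u_u = solve(eps=1e-3, n=20, upwind=True)
print(u_c.min(), u_u.min())                    # central undershoots below 0; upwind stays in [0, 1]
```

The exact solution lies in $[0,1]$ and is monotone; the central scheme's oscillatory undershoots are precisely the unphysical values the survey's DMP-respecting methods are designed to rule out.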
Finite Element Methods Respecting the Discrete Maximum Principle for Convection-Diffusion Equations
10.1137/22M1488934
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Gabriel R. Barrenechea
Volker John
Petr Knobloch
Finite Element Methods Respecting the Discrete Maximum Principle for Convection-Diffusion Equations
66
1
3
88
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/22M1488934
https://epubs.siam.org/doi/abs/10.1137/22M1488934?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Research Spotlights
https://epubs.siam.org/doi/abs/10.1137/24N975839?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 89-89, February 2024. <br/> As modeling, simulation, and data-driven capabilities continue to advance and be adopted for an ever-expanding set of applications and downstream tasks, there is an increasing need to quantify the uncertainty in the resulting predictions. In “Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output,” authors Eva-Maria Walz, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting provide a methodology for moving beyond deterministic scalar-valued predictions to obtain statistical distributions for these predictions. The approach relies on training data of model output-observation pairs of scalars, and hence does not require access to higher-dimensional inputs or latent variables. The authors use numerical weather prediction as a particular example, where one can obtain repeated forecasts, and corresponding observations, of temperatures at a specific location. Given a predicted temperature, the EasyUQ approach provides a nonparametric distribution of temperatures around this value. EasyUQ uses the training data to effectively minimize an empirical score subject to a stochastic monotonicity constraint, which ensures that the predictive distribution values become larger as the model output value grows. In doing so, the approach inherits the theoretical properties of optimality and consistency enjoyed by so-called isotonic distributional regression methods. The authors emphasize that the basic version of EasyUQ does not require elaborate hyperparameter tuning. They also introduce a more sophisticated version that relies on kernel smoothing to yield predictive probability densities while preserving key properties of the basic version.
The paper demonstrates how EasyUQ compares with the standard technique of applying a Gaussian error distribution to a deterministic forecast, as well as how EasyUQ can be used to obtain uncertainty estimates for artificial neural network outputs. The approach will be of particular interest in settings where inputs or other latent variables are unreliable or unavailable, since it offers a straightforward yet statistically principled and computationally efficient way of working only with outputs and observations.
Research Spotlights
10.1137/24N975839
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Stefan M. Wild
Research Spotlights
66
1
89
89
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/24N975839
https://epubs.siam.org/doi/abs/10.1137/24N975839?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output
https://epubs.siam.org/doi/abs/10.1137/22M1541915?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 91-122, February 2024. <br/> How can we quantify uncertainty if our favorite computational tool---be it a numerical, statistical, or machine learning approach, or just any computer model---provides single-valued output only? In this article, we introduce the Easy Uncertainty Quantification (EasyUQ) technique, which transforms real-valued model output into calibrated statistical distributions, based solely on training data of model output--outcome pairs, without any need to access model input. In its basic form, EasyUQ is a special case of the recently introduced isotonic distributional regression (IDR) technique that leverages the pool-adjacent-violators algorithm for nonparametric isotonic regression. EasyUQ yields discrete predictive distributions that are calibrated and optimal in finite samples, subject to stochastic monotonicity. The workflow is fully automated, without any need for tuning. The Smooth EasyUQ approach supplements IDR with kernel smoothing, to yield continuous predictive distributions that preserve key properties of the basic form, including both stochastic monotonicity with respect to the original model output and asymptotic consistency. For the selection of kernel parameters, we introduce multiple one-fit grid search, a computationally much less demanding approximation to leave-one-out cross-validation. We use simulation examples and forecast data from weather prediction to illustrate the techniques. In a study of benchmark problems from machine learning, we show how EasyUQ and Smooth EasyUQ can be integrated into the workflow of neural network learning and hyperparameter tuning, and we find EasyUQ to be competitive with conformal prediction as well as more elaborate input-based approaches.
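A rough from-scratch sketch of the mechanics behind basic EasyUQ (this is not the authors' implementation, and all function names here are ours): for each threshold $t$, isotonic distributional regression fits $P(Y \le t \mid X = x)$ by least squares subject to the constraint that the fitted CDF values are nonincreasing in the model output $x$, which is exactly the stochastic monotonicity requirement, and each such constrained fit is solved by the pool-adjacent-violators algorithm.

```python
# Sketch of IDR as used by basic EasyUQ (illustrative names, not the
# authors' code): antitonic least-squares fits of threshold indicators
# via the pool-adjacent-violators (PAV) algorithm.

def pav_decreasing(values, weights=None):
    """Weighted least-squares fit of a nonincreasing sequence (PAV)."""
    w = weights or [1.0] * len(values)
    blocks = []  # each block: [total_weight, weighted_sum, length]
    for v, wi in zip(values, w):
        blocks.append([wi, wi * v, 1])
        # Pool adjacent blocks while the fitted means would increase
        # (cross-multiplied comparison avoids repeated division).
        while len(blocks) > 1 and blocks[-1][1] * blocks[-2][0] > blocks[-2][1] * blocks[-1][0]:
            wt, ws, ln = blocks.pop()
            blocks[-1][0] += wt
            blocks[-1][1] += ws
            blocks[-1][2] += ln
    fit = []
    for wt, ws, ln in blocks:
        fit.extend([ws / wt] * ln)
    return fit

def idr_cdf(pairs, t):
    """Fitted P(Y <= t | X = x) at each training x, nonincreasing in x
    (a larger model output predicts a stochastically larger outcome)."""
    pairs = sorted(pairs)                            # sort by model output x
    indicators = [1.0 if y <= t else 0.0 for _, y in pairs]
    return [x for x, _ in pairs], pav_decreasing(indicators)

xs, cdf = idr_cdf([(1, 1), (2, 3), (3, 2), (4, 4)], t=2)
print(cdf)  # nonincreasing CDF values at threshold t = 2
```

Sweeping $t$ over the observed outcomes yields the full discrete predictive distribution at each training point; handling ties in $x$ and producing predictions at unseen model outputs, which a real implementation needs, are omitted from this sketch.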
Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output
10.1137/22M1541915
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Eva-Maria Walz
Alexander Henzi
Johanna Ziegel
Tilmann Gneiting
Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output
66
1
91
122
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/22M1541915
https://epubs.siam.org/doi/abs/10.1137/22M1541915?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
SIGEST
https://epubs.siam.org/doi/abs/10.1137/24N975840?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 123-123, February 2024. <br/> The SIGEST article in this issue is “A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators,” by Bjørn Fredrik Nielsen and Zdeněk Strakoš. This paper studies the eigenvalues of second-order self-adjoint differential operators in the continuum and discrete settings. In particular, the authors investigate second-order diffusion with a diffusion tensor preconditioned by the inverse Laplacian. They prove that there is a one-to-one correspondence between the spectrum of the preconditioned system and the eigenvalues of the diffusion tensor. Moreover, they investigate the relationship between the spectrum of the preconditioned operator and the generalized eigenvalue problem for its discretized counterpart and show that the latter asymptotically approximates the former. The results presented in the paper are fundamental to anyone wanting to solve elliptic PDEs. Understanding the distribution of eigenvalues is crucial for solving associated linear systems via, e.g., the conjugate gradient method, whose convergence rate depends on the spread of the spectrum of the system matrix. The approach of operator preconditioning as done here with the inverse Laplacian turns the unbounded spectrum of a second-order diffusion operator into one that is completely characterized by the diffusion tensor itself. This carries over to the discrete setting, where the support of the spectrum without preconditioning grows like one over the squared mesh size, while in the operator-preconditioned case mesh-independent bounds for the eigenvalues, completely determined by the diffusion tensor, can be obtained. The original version of this article appeared in the SIAM Journal on Numerical Analysis in 2020 and has been recognized as an outstanding and well-presented result in the community.
In preparing this SIGEST version, the authors have added new material to sections 1 and 2 in order to increase accessibility, added clarifications to sections 6 and 7, and added the new section 8, which contains a description of more recent results concerning the numerical approximation of the continuous spectrum. It also comments on the related differences between the (generalized) PDE eigenvalue problems for compact and noncompact operators and provides several new references.
SIGEST
10.1137/24N975840
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
The Editors
SIGEST
66
1
123
123
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/24N975840
https://epubs.siam.org/doi/abs/10.1137/24N975840?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators
https://epubs.siam.org/doi/abs/10.1137/23M1600992?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 125-146, February 2024. <br/> We analyze the spectrum of the operator $\Delta^{-1} [\nabla \cdot (K\nabla u)]$ subject to homogeneous Dirichlet or Neumann boundary conditions, where $\Delta$ denotes the Laplacian and $K=K(x,y)$ is a symmetric tensor. Our main result shows that this spectrum can be derived from the spectral decomposition $K=Q \Lambda Q^T$, where $Q=Q(x,y)$ is an orthogonal matrix and $\Lambda=\Lambda(x,y)$ is a diagonal matrix. More precisely, provided that $K$ is continuous, the spectrum equals the convex hull of the ranges of the diagonal function entries of $\Lambda$. The domain involved is assumed to be bounded and Lipschitz. In addition to studying operators defined on infinite-dimensional Sobolev spaces, we also report on recent results concerning their discretized finite-dimensional counterparts. More specifically, even though $\Delta^{-1} [\nabla \cdot (K\nabla u)]$ is not compact, it turns out that every point in the spectrum of this operator can, to an arbitrary accuracy, be approximated by eigenvalues of the associated generalized algebraic eigenvalue problems arising from discretizations. Our theoretical investigations are illuminated by numerical experiments. The results presented in this paper extend previous analyses which have addressed elliptic differential operators with scalar coefficient functions. Our investigation is motivated by both preconditioning issues (efficient numerical computations) and the need to further develop the spectral theory of second order PDEs (core analysis).
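A quick numerical sanity check of the main result in the simplest setting (our own finite difference setup on the unit interval with a scalar coefficient $k$; the paper works in far greater generality): the generalized eigenvalues of $Au = \lambda Mu$, with $A$ discretizing $-(k u')'$ and $M$ the discretized Laplacian, should lie in $[\min k, \max k]$, the convex hull of the range of $k$, independently of the mesh.

```python
# Illustrative check (our setup, not the paper's code): 1D Dirichlet
# problem on (0,1), k(x) = 1 + x, so the spectrum of the Laplacian-
# preconditioned operator should fill [min k, max k] = [1, 2].
import numpy as np

def stiffness(kfun, n):
    """Tridiagonal FD matrix for -(k u')' with k sampled at midpoints."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)           # grid including the boundary
    km = kfun((x[:-1] + x[1:]) / 2)            # k at the n+1 cell midpoints
    A = (np.diag(km[:-1] + km[1:])
         - np.diag(km[1:-1], 1) - np.diag(km[1:-1], -1)) / h**2
    return A

n = 50
A = stiffness(lambda x: 1.0 + x, n)            # diffusion coefficient in [1, 2]
M = stiffness(lambda x: np.ones_like(x), n)    # plain Laplacian (k = 1)
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, A)).real)
print(lam.min(), lam.max())                    # near min k = 1 and max k = 2
```

Refining the mesh leaves these bounds unchanged, in contrast to the unpreconditioned stiffness matrix, whose extreme eigenvalues blow up like $h^{-2}$.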
A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators
10.1137/23M1600992
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Bjørn Fredrik Nielsen
Zdeněk Strakoš
A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators
66
1
125
146
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/23M1600992
https://epubs.siam.org/doi/abs/10.1137/23M1600992?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Education
https://epubs.siam.org/doi/abs/10.1137/24N975852?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 147-147, February 2024. <br/> In this issue the Education section presents two contributions. The first paper, “Resonantly Forced ODEs and Repeated Roots,” is written by Allan R. Willms. The resonant forcing problem is as follows: find $y(\cdot)$ such that $L[y(x)]=u(x)$, where $L[u(x)]=0$ and $L=a_0(x) + \sum_{j=1}^n a_j(x) \frac{d^j}{dx^j}$. The repeated roots problem is to find $mn$ linearly independent solutions to $L^m[y(x)]=0$ under the assumption that $n$ linearly independent solutions to $L[y(x)]= 0$ are known. A recent article by B. Gouveia and H. A. Stone, “Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods” [SIAM Rev., 64 (2022), pp. 485--499], discusses a method for finding solutions to these two problems. This new contribution observes that by applying the same mathematical justifications, one may get similar results in a simpler way. The starting point is to define operators $L_\lambda := \hat L -g(\lambda)$ with $L_{\lambda_0}=L$ for some $\lambda_0$, together with a parameter-dependent family of solutions to the homogeneous equations $L_\lambda[y(x;\lambda)]=0$. Under appropriate assumptions on $g$, differentiating this equality allows one to get solutions to the problems of interest. This approach is illustrated on nine examples, seven of which are the same as in the paper by B. Gouveia and H. A. Stone, where for each example $g$ and $\hat L$ are appropriately chosen. This approach may be included in a course on ordinary differential equations (ODEs) as a methodology for finding solutions to these two particular classes of ODEs. It can also be used by undergraduate students for individual training as an alternative to variation of parameters.
The second paper, “NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators,” is presented by Zongren Zou, Xuhui Meng, Apostolos Psaros, and George E. Karniadakis. In machine learning, uncertainty quantification (UQ) is a very active research topic, driven by various questions arising in computer vision and natural language processing, and by risk-sensitive applications. Machine learning models such as physics-informed neural networks and deep operator networks are used to solve partial differential equations and to learn operator mappings, respectively. However, some data may be noisy and/or sampled at random locations. This paper presents an open-source Python library (https://github.com/Crunch-UQ4MI) that provides a reliable toolbox of UQ methods for scientific machine learning. It is designed for both educational and research purposes and is illustrated on four examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs. The authors plan to keep NeuralUQ continually updated.
SIAM Review, Volume 66, Issue 1, Page 147-147, February 2024. <br/> In this issue the Education section presents two contributions. The first paper, “Resonantly Forced ODEs and Repeated Roots,” is written by Allan R. Willms. The resonant forcing problem is as follows: find $y(\cdot)$ such that $L[y(x)]=u(x)$, where $L[u(x)]=0$ and $L=a_0(x) + \sum_{j=1}^n a_j(x) \frac{d^j}{dx^j}$. The repeated roots problem consists in finding $mn$ linearly independent solutions to $L^m[y(x)]=0$ under the assumption that $n$ linearly independent solutions to $L[y(x)]= 0$ are known. A recent article by B. Gouveia and H. A. Stone, “Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods” [SIAM Rev., 64 (2022), pp. 485--499], discusses a method for solving these two problems. The present contribution observes that applying the same mathematical justification directly yields similar results in a simpler way. The starting point is to define operators $L_\lambda := \hat L -g(\lambda)$, with $L_{\lambda_0}=L$ for some $\lambda_0$, together with a parameter-dependent family of solutions to the homogeneous equations $L_\lambda[y(x;\lambda)]=0$. Under appropriate assumptions on $g$, differentiating this equality with respect to $\lambda$ yields solutions to the problems of interest. The approach is illustrated on nine examples, seven of which are the same as in the publication of B. Gouveia and H. A. Stone, with $g$ and $\hat L$ chosen appropriately for each example. This approach could be included in a course on ordinary differential equations (ODEs) as a methodology for solving these two particular classes of ODEs. It can also be used by undergraduate students for individual study as an alternative to variation of parameters.
The second paper, “NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators,” is presented by Zongren Zou, Xuhui Meng, Apostolos Psaros, and George E. Karniadakis. In machine learning, uncertainty quantification (UQ) is a very active research topic, driven by questions arising in computer vision and natural language processing and by risk-sensitive applications. Machine learning models such as physics-informed neural networks and deep operator networks help in solving partial differential equations and in learning operator mappings, respectively. However, the data may be noisy and/or sampled at random locations. This paper presents an open-source Python library (https://github.com/Crunch-UQ4MI) that provides a reliable toolbox of UQ methods for scientific machine learning. It is designed for both educational and research purposes and is illustrated on four examples involving dynamical systems and high-dimensional parametric and time-dependent PDEs. NeuralUQ is planned to be constantly updated.
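The differentiation device behind the first paper can be sketched on the classical resonant equation $y''+y=\cos x$; the specific choices of $\hat L$ and $g$ below are our own illustration in the notation above, not an example quoted from the paper:

```latex
\[
\hat L = \frac{d^2}{dx^2}, \qquad g(\lambda) = -\lambda^2, \qquad
L_\lambda = \hat L - g(\lambda) = \frac{d^2}{dx^2} + \lambda^2, \qquad \lambda_0 = 1 .
\]
The family $y(x;\lambda)=\cos(\lambda x)$ satisfies $L_\lambda[y(x;\lambda)]=0$
for every $\lambda$; differentiating this identity in $\lambda$ gives
\[
0 = \partial_\lambda\bigl(\hat L[y] - g(\lambda)\,y\bigr)
  = L_\lambda[\partial_\lambda y] - g'(\lambda)\,y ,
\qquad\text{i.e.}\qquad
L_\lambda[\partial_\lambda y] = g'(\lambda)\,y(x;\lambda) .
\]
With $\partial_\lambda y = -x\sin(\lambda x)$ and $g'(\lambda) = -2\lambda$,
evaluating at $\lambda_0 = 1$ yields $L[-x\sin x] = -2\cos x$, so
$y(x) = \tfrac{1}{2}\,x\sin x$ solves the resonantly forced equation
$y'' + y = \cos x$.
\]
```

The forcing term $\cos x$ is itself a homogeneous solution, which is exactly the resonant situation the paper addresses; the $\lambda$-derivative of the solution family supplies the particular solution without variation of parameters.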
Education
10.1137/24N975852
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Helene Frankowska
Education
66
1
147
147
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/24N975852
https://epubs.siam.org/doi/abs/10.1137/24N975852?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Resonantly Forced ODEs and Repeated Roots
https://epubs.siam.org/doi/abs/10.1137/23M1545148?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 149-160, February 2024. <br/> In a recent article in this journal, Gouveia and Stone [“Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods,” SIAM Rev., 64 (2022), pp. 485--499] described a method for finding exact solutions to resonantly forced linear ordinary differential equations, and for finding the general solution of repeated root linear systems. It is shown here that applying their mathematical justification directly yields a method that is faster and algebraically simpler than the method they described. This method seems to be unknown in the undergraduate textbook literature, although it certainly should be present there as it is elegant and simple to apply, generally giving solutions with much less work than variation of parameters.
Resonantly Forced ODEs and Repeated Roots
10.1137/23M1545148
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Allan R. Willms
Resonantly Forced ODEs and Repeated Roots
66
1
149
160
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/23M1545148
https://epubs.siam.org/doi/abs/10.1137/23M1545148?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators
https://epubs.siam.org/doi/abs/10.1137/22M1518189?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 161-190, February 2024. <br/> Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest, driven by the rapid deployment of deep neural networks across different fields, such as computer vision and natural language processing, and by the need for reliable tools in risk-sensitive applications. Recently, various machine learning models have also been developed to tackle problems in the field of scientific computing with applications to computational science and engineering (CSE). Physics-informed neural networks and deep operator networks are two such models for solving partial differential equations (PDEs) and learning operator mappings, respectively. In this regard, a comprehensive study of UQ methods tailored specifically for scientific machine learning (SciML) models has been provided in [A. F. Psaros et al., J. Comput. Phys., 477 (2023), art. 111902]. Nevertheless, and despite their theoretical merit, implementations of these methods are not straightforward, especially in large-scale CSE applications, hindering their broad adoption in both research and industry settings. In this paper, we present an open-source Python library (https://github.com/Crunch-UQ4MI), termed NeuralUQ and accompanied by an educational tutorial, for employing UQ methods for SciML in a convenient and structured manner. The library, designed for both educational and research purposes, supports multiple modern UQ methods and SciML models. It is based on a succinct workflow and facilitates flexible employment and easy extensions by the users. We first present a tutorial of NeuralUQ and subsequently demonstrate its applicability and efficiency in four diverse examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs.
NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators
10.1137/22M1518189
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Zongren Zou
Xuhui Meng
Apostolos F. Psaros
George E. Karniadakis
NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators
66
1
161
190
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/22M1518189
https://epubs.siam.org/doi/abs/10.1137/22M1518189?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics
-
Book Reviews
https://epubs.siam.org/doi/abs/10.1137/24N975864?ai=s5&mi=3bfys9&af=R
SIAM Review, <a href="https://epubs.siam.org/toc/siread/66/1">Volume 66, Issue 1</a>, Page 193-201, February 2024. <br/> If you are keen to understand the world around us by developing mathematical or data-driven models, or if you are interested in the methodologies that can be used to analyze those models, this collection of reviews may help you identify a useful book or two. Our featured review was written by Tim Hoheisel, on the book Convex Optimization: Introductory Course, written by Mikhail Moklyachuk. Hoheisel argues that convex optimization is not “solved” and certainly not “dead,” as had been deemed by some academics. Indeed, he believes that the explosive growth of machine learning problems, which often rely on convexity, poses new challenges and renders convex optimization all the more relevant. Hoheisel notes pros and cons of the book, and concludes that it “can serve as an introductory text for students who want to learn the fundamentals of convex analysis and some theoretical aspects of convex optimization,” even though it may not necessarily be useful for researchers. After making a brief appearance in the first review, machine learning is featured in the second review, written by Diyora Salimova, on the volume Mathematical Aspects of Deep Learning, edited by Philipp Grohs and Gitta Kutyniok. The edited volume encompasses a collection of topics concerning the mathematics of deep learning. After describing each of the eleven chapters, Salimova concludes that “it is nice to have this book in one's library,” given the increasing popularity and applications of deep learning everywhere. While some edited volumes lack cohesiveness, Salimova notes that a strength of the book is that “it approaches modern deep learning from many different perspectives and provides various theoretical insights.” Continuing on the theme of data science, the next book is Optimization for Data Analysis, by Stephen J. Wright and Benjamin Recht. 
The review was written by our former section editor Volker Schulz, who commends the authors for providing “a very good basis for a course on optimization algorithms in data science.” Outside of the classroom, the book is also suitable for self-learning, as helpful exercises are provided to deepen understanding. I reviewed the next book, Foundations of Computational Imaging: A Model-Based Approach, written by Charles A. Bouman. The author first started writing the book 20 years ago for a course that he was teaching---at a time when “Computational Imaging” did not exist as a field. What I like most about this book is that Bouman has succeeded in his stated goal of providing “a foundation for a collection of theoretical material that can serve as a common language for both researchers and practitioners of Computational Imaging.” The next review was written by Shaun Hendy, on the book Climate, Chaos and COVID: How Mathematical Models Describe the Universe, by Chris Budd. The book describes recent examples of how mathematical modeling has helped us navigate the world and formulate critical policies on issues such as climate change and COVID. While the book is engaging, Hendy notes the limited representation of women and mathematicians from minority groups. We conclude with a review of the book An Introduction to the Numerical Simulation of Stochastic Differential Equations, authored by Desmond J. Higham and Peter E. Kloeden. Minh-Binh Tran calls the book “a marvelous introduction into the theory of numerical SDEs for undergraduate students and young researchers.” Tran also notes that the book gives excellent instructions on how to efficiently implement SDE-based models and simulations.
Book Reviews
10.1137/24N975864
SIAM Review
2024-02-08T08:00:00Z
© 2024, Society for Industrial and Applied Mathematics
Anita T. Layton
Book Reviews
66
1
193
201
2024-02-05T08:00:00Z
2024-02-05T08:00:00Z
10.1137/24N975864
https://epubs.siam.org/doi/abs/10.1137/24N975864?ai=s5&mi=3bfys9&af=R
© 2024, Society for Industrial and Applied Mathematics