Abstract.

The regularity of images generated by a class of convolutional neural networks, such as the U-net, generative networks, or the deep image prior, is analyzed. In a resolution-independent, infinite-dimensional setting, it is shown that such images, represented as functions, are always continuous and, in some circumstances, even continuously differentiable, contradicting the widely accepted modeling of sharp edges in images via jump discontinuities. While such statements require an infinite-dimensional setting, the connection to the (discretized) neural networks used in practice is made by considering the limit as the resolution approaches infinity. As a practical consequence, the results of this paper provide analytical evidence that basic \(L^2\) regularization of network weights (also known as weight decay) may lead to oversmoothed outputs.
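The practical consequence can be illustrated with a minimal numpy sketch. This is not the paper's construction: the layer below (nearest-neighbor upsampling followed by a convolution, here named `generator_layer`) is only a hypothetical stand-in for one stage of a convolutional generator. By linearity, shrinking the convolution kernel, as \(L^2\) weight decay does, proportionally damps the largest jump the output can exhibit at a sharp edge in the input:

```python
import numpy as np

def generator_layer(code, kernel):
    # Nearest-neighbor upsampling by a factor of 2, followed by a 1D
    # convolution -- a simplified stand-in for one layer of a
    # convolutional generator (illustrative only, not the paper's model).
    up = np.repeat(code, 2)
    return np.convolve(up, kernel, mode="same")

rng = np.random.default_rng(0)
code = np.concatenate([np.zeros(8), np.ones(8)])  # input with a "sharp edge"
kernel = rng.normal(size=5)

out_full = generator_layer(code, kernel)
out_decayed = generator_layer(code, 0.1 * kernel)  # weight decay shrinks weights

def max_jump(x):
    # Largest difference between adjacent samples: a discrete proxy
    # for how sharp an edge the output can represent.
    return np.max(np.abs(np.diff(x)))

print(max_jump(out_full), max_jump(out_decayed))
```

Because the layer is linear in the kernel, `max_jump(out_decayed)` is exactly one tenth of `max_jump(out_full)`: the smaller the weight norm, the smoother the output, consistent with the oversmoothing effect the abstract describes.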

Keywords

  1. convolutional neural networks
  2. machine learning
  3. functional analysis
  4. mathematical imaging

MSC codes

  1. 65J20
  2. 68U10
  3. 94A08


Information & Authors

Information

Published In

SIAM Journal on Mathematics of Data Science
Pages: 670 - 692
ISSN (online): 2577-0187

History

Submitted: 29 September 2022
Accepted: 1 March 2023
Published online: 21 July 2023


Authors

Affiliations

Andreas Habring
Department of Mathematics and Scientific Computing, University of Graz, Graz, Austria.

