- ASA-SIAM Series on Statistics and Applied Mathematics (20)
- Advances in Design and Control (39)
- Bentley Institute Press (1)
- CBMS-NSF Regional Conference Series in Applied Mathematics (94)
- Classics in Applied Mathematics (89)
- Computational Science & Engineering (28)
- Data Science (2)
- Discrete Mathematics and Applications (11)
- Financial Mathematics (2)
- Frontiers in Applied Mathematics (32)
- Fundamentals of Algorithms (20)
- IAS/Park City Mathematics Series/AMS (4)
- Johns Hopkins University Press (2)
- MOS-SIAM Series on Optimization (33)
- Mathematical Modeling and Computation (23)
- Mathematics in Industry (6)
- Other Titles in Applied Mathematics (167)
- SIAM Spotlights (6)
- Software, Environments, and Tools (31)
- Studies in Applied and Numerical Mathematics (16)
- University of California Press (1)
- Wellesley-Cambridge Press (12)
- Applied Geometry (25)
- Applied Mathematics Education (44)
- Applied Probability (57)
- Astronomy, Planetary Sciences, and Optics (9)
- Atmospheric and Oceanographic Sciences (16)
- Biological Sciences (29)
- Chemical Kinetics (13)
- Communication Theory (6)
- Computational Mathematics (238)
- Computer Sciences (53)
- Control and Systems Theory (103)
- Data Mining (1)
- Data Mining and Information Retrieval (22)
- Data Sciences (15)
- Discrete Mathematics and Graph Theory (46)
- Dynamical Systems (54)
- Economics and Finance (19)
- Electromagnetic Theory and Semiconductor and Circuit Analysis (17)
- Environmental Sciences (14)
- Fluid Mechanics (64)
- Functional Analysis (40)
- General Interest (21)
- Geophysical Sciences (15)
- Image Processing (36)
- Life Sciences (32)
- Linear Algebra and Matrix Theory (151)
- Management Sciences and Operations Research (34)
- Materials Science (12)
- Math and Computation in Industrial Applications (62)
- Mathematical Physics (17)
- Mechanics of Solids (28)
- Nonlinear Waves and Coherent Structures (15)
- Numerical Analysis (167)
- Optimization Theory and Mathematical Programming (125)
- Ordinary Differential Equations (82)
- Partial Differential Equations (126)
- Real and Complex Analysis (55)
- Simulation and Modeling (104)
- Social Sciences (12)
The conjugate gradient (CG) algorithm is almost always the iterative method of choice for solving linear systems with symmetric positive definite matrices. This book describes and analyzes techniques, based on Gauss quadrature rules, for cheaply computing bounds on norms of the error; these techniques can be used to derive reliable stopping criteria. The book also shows how to compute estimates of the smallest and largest eigenvalues during the CG iterations. The algorithms are illustrated by many numerical experiments and can easily be incorporated into existing CG codes.
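The basic CG iteration is only a few lines long. The following is a minimal, illustrative implementation in Python (a textbook sketch, not the book's code, and without the quadrature-based error bounds described above), applied to a small symmetric positive definite system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimal CG iteration for a symmetric positive definite matrix A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # naive residual-based stopping test
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Note that the stopping test here uses only the residual norm; the book's point is precisely that the residual can be a misleading proxy for the error, which motivates the quadrature-based error bounds.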
Today the conjugate gradient (CG) algorithm is almost always the iterative method of choice for solving linear systems with symmetric positive definite matrices. It was developed independently by M. R. Hestenes in the United States and E. L. Stiefel in Switzerland at the beginning of the 1950s. After a 1951 visit by Stiefel to the Institute of Numerical Analysis (INA), located on the campus of the University of California at Los Angeles, where Hestenes was working, they wrote a joint paper, which was published in the December 1952 issue of the Journal of the National Bureau of Standards. For the early history of CG, we refer the reader to  and the references therein, as well as to [38, 79, 80].
The biologist J. B. S. Haldane, when asked what we can learn about the Creator by examining the world, is said to have replied “an inordinate fondness for beetles”. Today's biologists could be forgiven for pointing to an inordinate fondness for networks. Networks are ubiquitous in the life sciences: examples include genetic regulatory networks, neural circuitry, ecological food webs, phylogenetic trees, and epidemic networks. Networks are also common in many other branches of science, including physics, chemistry, computer science, electrical and electronic engineering, psychology, and sociology. In recent years there has been an explosion of interest in network-based modeling, and the research literature, including both theory and applications, now extends to many thousands of papers. The questions addressed and the techniques involved are extremely diverse, reflecting the broad range of backgrounds and interests of researchers.
This second edition of Matrix Analysis and Applied Linear Algebra differs substantially from the first: it has been completely rewritten to include reformulations, extensions, and pedagogical enhancements. The goal in preparing this edition was to create an easily readable and flexible textbook, adaptable for a single-semester course or a more complete two-semester course. The following are some of this edition's characteristic features.
This is a different sort of book (indeed, we're a bit doubtful about calling it a “book,” even), intended for a different sort of course. The content is intended to be outside the normal mathematics curriculum. The book won't teach calculus or linear algebra, for example, although it will reinforce, support, and illuminate those subjects, which we imagine you will be taking eventually (possibly even concurrently). We assume only high school mathematics to start, although we get pretty deep pretty quickly. What does this mean? We assume that you have been exposed to algebra, matrices, functions, and basic calculus (informal limits, continuity, some rules for taking derivatives and antiderivatives). We assume no prior programming knowledge. We assume you've met with mathematical induction. What we mostly assume is that you're willing to take charge of your own learning. You should be prepared to actually do things as you read. There are activities to do, ranging from simple (with reports on what we did in the Reports section so you have more models of the process) to hard, namely projects that can be done by teams (and if you answer them, you could contribute a chapter to this book, maybe), and some actually open. If you answer one of those, you should publish a paper in a journal. Maple Transactions1 might be a good place.
The development of automatic digital computers has made it possible to carry out computations involving a very large number of arithmetic operations and this has stimulated a study of the cumulative effect of rounding errors. In this book I have given an elementary introduction to this subject which is based on the work which has been done in the Mathematics Division of N.P.L. in the past few years. Some of the material presented here has already appeared in published papers, but much of it has hitherto been available only in the form of rough notes for lectures given in this country and the United States.
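The cumulative effect of rounding errors is easy to observe even in the simplest computations. As an illustrative sketch (not taken from the book), repeatedly adding the floating-point number 0.1 drifts measurably away from the exact result:

```python
# Each addition of 0.1 incurs a tiny rounding error, because 0.1 has no
# exact binary floating-point representation; over many operations the
# individual errors accumulate into a visible discrepancy.
total = 0.0
for _ in range(10000):
    total += 0.1

exact = 1000.0
error = abs(total - exact)   # small but nonzero accumulated error
```

Analyzing how such per-operation errors compound through long computations is precisely the subject of the book.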
Recent technological advances in communications and computation have spurred a broad interest in control of networks and control over networks. Network systems involve distributed decision-making for coordination of networks of dynamic agents and address a broad area of applications, including cooperative control of unmanned air vehicles, microsatellite clusters, mobile robotics, battle space management, congestion control in communication networks, intelligent vehicle/highway systems, large-scale manufacturing systems, and biological networks, to cite but a few examples. To address the problem of autonomy and complexity for control and coordination of network systems, in this monograph we look to system thermodynamics and dynamical systems theory for inspiration in developing innovative architectures for controlling network systems.
Moment and polynomial optimization has attracted considerable attention in recent decades. It has a beautiful theory and efficient methods, as well as broad applications in mathematics, science, and engineering. Recent developments have substantially advanced the field, and moment and polynomial optimization is now an important technique in many areas.
Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second Edition
Preface to the Second Edition: The second edition features two significant enhancements to the first edition.
1. Python codes were added on top of the existing MATLAB codes to illustrate and demonstrate different aspects of the algorithmic and applicative nature of nonlinear optimization. Since the first edition's publication, Python has become one of the leading software languages for scientific computing and is used in many applications, most notably those arising in data science. Readers interested in implementation may choose to follow either the MATLAB or Python codes which appear, sometimes literally, side by side. A new section on the Python module CVXPY (Section 8.5) describes how to solve convex optimization problems using Python.
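For a flavor of the kind of Python material involved, here is a small, self-contained sketch (not taken from the book) that minimizes a smooth convex quadratic with constant-stepsize gradient descent, one of the basic algorithms of nonlinear optimization:

```python
import numpy as np

# Illustrative example: minimize f(x) = (1/2) x^T Q x - b^T x,
# a smooth convex function with gradient Q x - b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)
step = 1.0 / np.linalg.norm(Q, 2)   # stepsize 1/L, with L the Lipschitz
                                    # constant of the gradient (||Q||_2)
for _ in range(500):
    x = x - step * grad(x)
# At convergence, x approximately solves the optimality condition Q x = b.
```

The same example is equally easy to express in MATLAB, which is the sense in which the book's codes appear side by side in the two languages.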
Financial derivatives, such as stock options, are indispensable instruments in modern financial markets. The introduction of options markets in the early 1970s, and the continuous appearance of new types of complex derivative contracts, resulted in a rapid growth of interest in the theoretical aspects of options valuation. Since its inception in 1900 in the work of the French mathematician Bachelier, options pricing theory has evolved into a modern discipline with a vast literature and dedicated university courses. While still considered in large part an advanced topic, courses on options pricing theory have become more and more common in the undergraduate programs of major universities around the world. Moreover, financial institutions have long recognized the pivotal role of mathematical models in their proper functioning and consistently offer employment opportunities for mathematicians.
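As a small taste of the subject, here is an illustrative Python implementation of the Black-Scholes formula for a European call option, one of the best-known results of options pricing theory (a standard-formula sketch, not code from the book):

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF via the error function
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# At-the-money call, one year to maturity
price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```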
The classical analysis of real-valued functions of one or more variables includes (along with other disciplines) elementary number theory, sequences and series, continuity and differentiability, proper and improper Riemann (Riemann-Stieltjes) integrals, elementary functions, investigation of graphs, implicit functions and dependence, uniform convergence, and integrals depending on a parameter. This book contains all these subjects and combines them based on a unified approach, starting with the theory of real numbers. In addition it contains Lebesgue measure and Lebesgue integration. The approach here is similar to Jordan measure and corresponding Riemann integration. Usually this material is not included in university courses for first- and second-year students, but we include it in this book on classical analysis due to the importance of Lebesgue's ideas and their connections with classical analysis.
This textbook is intended for a two-semester course on calculus of one variable. The target audience comprises students in biology, chemistry, mathematics, physics, and related disciplines, as well as professionals in these areas. It grew out of the Symbiosis Project at East Tennessee State University.
This monograph is based on my personal lecture notes for the graduate course on Mathematical Theory of Finite Elements (EM394H) that I have been teaching at ICES (now the Oden Institute), at the University of Texas at Austin, since 2005. The class has been offered in two versions. The first version is devoted to a study of the energy spaces corresponding to the exact grad-curl-div sequence. The class is rather involved mathematically, and I taught it only every three or four years; see  for the corresponding lecture notes. The second, more popular version is covered in the presented notes.
Problems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete Optimization
For a long time, the authors of this book wanted to have at hand a large collection of problems with solutions for use in their teaching. The first author has taught IE303 Modeling and Methods in Optimization to Bilkent University students for more than 20 years. A graduate assistant (a PhD candidate) joined next and started compiling solved problems to be used in exams, quizzes, and homework assignments. The first author added some problems of his own. The result is this book. There are not many books dedicated to problems in integer optimization and related topics. The book focuses on the topics covered in IE303 Modeling and Methods in Optimization, a third-year required course for Industrial Engineering students at Bilkent University. However, it should be useful for any undergraduate student in industrial engineering or related disciplines. These are the motivations behind the preparation of this book.
Since we will never really know why the prices of financial assets move, we should at least make a faithful model of how they move. This was the motivation of Bachelier in 1900, when he wrote on the very first page of his thesis that “contradictory opinions in regard to [price] fluctuations are so diverse that at the same instant buyers believe the market is rising and sellers that it is falling.”1 He went on to propose the first mathematical model of prices: the Brownian motion. He then built an option pricing theory that he compared to empirical data available to him—which already revealed, quite remarkably, what is now called the volatility smile.2
2 Looking at Bachelier's table on page 30, one clearly sees a smile that flattens with the maturity of the options, as routinely observed nowadays! As we now understand, this flattening comes from the slow convergence of returns toward Gaussian random variables as the time lag increases.
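Bachelier's model is straightforward to simulate. The following sketch (illustrative only, not from the book) generates one arithmetic-Brownian-motion price path on a uniform time grid, with parameter values chosen arbitrarily for the example:

```python
import numpy as np

# Bachelier-style model: S_t = S_0 + sigma * W_t, where W is a standard
# Brownian motion, discretized with independent Gaussian increments.
rng = np.random.default_rng(0)
S0, sigma, T, n = 100.0, 2.0, 1.0, 1000
dt = T / n
increments = sigma * np.sqrt(dt) * rng.standard_normal(n)
path = S0 + np.concatenate(([0.0], np.cumsum(increments)))
# path[k] is the simulated price at time k * dt
```

Note that in this arithmetic model prices can go negative, one of the reasons later models (such as geometric Brownian motion) were preferred.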
Convex analysis, convex optimization, and algorithms are important topics in modern applied mathematics. In this text, we provide an introduction to a selection of these topics accessible at the advanced undergraduate or beginning graduate level. The only background required is some core knowledge of calculus, linear algebra, and analysis.
Over seventy years ago, Richard Bellman coined the term the curse of dimensionality to describe phenomena and computational challenges that arise in high dimensions. These challenges, in tandem with the ubiquity of high-dimensional functions in real-world applications, have led to a lengthy, focused research effort on high-dimensional approximation—that is, the development of methods for approximating functions of many variables accurately and efficiently from data. This book is about one of the latest chapters in this long and ongoing story: sparse polynomial approximation methods. Such methods differ from more classical techniques in high-dimensional approximation in that they combine unstructured grids, typically obtained via Monte Carlo sampling, with ideas from best s-term approximation, least squares, sparse recovery, and compressed sensing. This allows one to address high- or even formally infinite-dimensional problems in which the target function is smooth, with rates of convergence that can, in certain settings, be independent of the dimension. For suitable classes of problems, such methods provably mitigate the curse of dimensionality to a substantial extent. It is due in part to these desirable theoretical properties that sparse polynomial approximation methods have emerged over the past 10–15 years as useful tools for various high-dimensional approximation tasks arising in a range of applications in computational science and engineering. This includes problems involving parametric models and, in particular, parametric differential equations.
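To fix ideas, here is a deliberately simple sketch of the basic ingredient: a least-squares polynomial fit of a smooth multivariate function from unstructured Monte Carlo samples. (This is an illustration only; the book's methods use much richer polynomial bases, sparsity-promoting solvers, and higher dimensions.)

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, y):
    return np.exp(0.3 * x + 0.2 * y)       # smooth target function

# Unstructured Monte Carlo sample points in [-1, 1]^2
m = 200
x, y = rng.uniform(-1, 1, m), rng.uniform(-1, 1, m)

# Total-degree-2 polynomial basis in two variables
A = np.column_stack([np.ones(m), x, y, x * y, x**2, y**2])

# Least-squares fit of the coefficients from the sampled values
coeffs, *_ = np.linalg.lstsq(A, f(x, y), rcond=None)

# Evaluate the polynomial approximation at a test point
xt, yt = 0.5, -0.25
approx = coeffs @ np.array([1.0, xt, yt, xt * yt, xt**2, yt**2])
```

For a smooth target like this, even a tiny basis fitted from random samples gives a usable approximation; the sparse methods of the book extend this idea to very large candidate bases in high dimensions, where only a few coefficients matter.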
This book gives an introduction to iterative methods and preconditioning for solving discretized elliptic partial differential equations (PDEs) and optimal control problems governed by elliptic PDEs, for which the use of matrix-free procedures is crucial. It grew out of a set of lecture notes going back to the Ph.D. days of the second author at Stanford, prepared for CS137. The lecture notes evolved through lectures given at McGill University and the University of Geneva by the second author, where the topic became a specialized advanced undergraduate/early graduate course. In 2017, when the first author was a teaching assistant in Geneva, the lecture notes were restructured, expanded, and enriched into an earlier form of the book, without optimal control yet. After that, the same subject was taught again (and in several different forms) by the second author at the University of Geneva and the first author as professor at the University of Konstanz. These last years allowed the authors to further improve the manuscript, further enrich it with several examples, and add new insights, new sections, and the new chapter about optimal control problems.
G. H. Hardy published A Mathematician's Apology in 1940. The sense of the term is that of apologia, a defense of a field. It could be said (and some of my friends have said) that a more accurate title for the present piece would have been Confessions of a Numerical Analyst. To be sure, this essay differs in many ways from Hardy's, containing more biographical material and also more mathematics, especially in the second half. But its purpose is the same: a serious and personal meditation about mathematics.