Generalizing Koopman Theory to allow for inputs and control

We develop a new generalization of Koopman operator theory that incorporates the effects of inputs and control. Koopman spectral analysis is a theoretical tool for the analysis of nonlinear dynamical systems. It is intimately connected to Dynamic Mode Decomposition (DMD), a method that discovers spatio-temporal coherent modes from data, connects local linear analysis to nonlinear operator theory, and, importantly, creates an equation-free architecture for the investigation of complex systems. In actuated systems, standard Koopman analysis and DMD are incapable of producing input-output models; moreover, the dynamics and the modes will be corrupted by the external forcing. Our new theoretical developments extend Koopman operator theory to systems with nonlinear input-output characteristics. We show how this formulation is rigorously connected to, and generalizes, a recent development called Dynamic Mode Decomposition with control (DMDc). We demonstrate the new theory on nonlinear dynamical systems, including a standard Susceptible-Infectious-Recovered (SIR) model, with relevance to the analysis of infectious disease data under mass vaccination (actuation).

1. Introduction. We introduce a new method called Koopman with inputs and control (KIC) that generalizes Koopman spectral theory to allow for the analysis of complex, input-output systems. Koopman operator theory, which is built on the seminal contribution of Bernard Koopman in 1931 [25], is a powerful and increasingly prominent theory that allows one to transform a nonlinear dynamical system into an infinite-dimensional, linear system [25,29,38]. Linear operator theory [12], specifically eigenfunction expansion techniques, can then be used to construct solutions of the original system. As such, Koopman theory is perhaps an early theoretical predecessor of what is now called nonlinear manifold learning, i.e. discovering nonlinear manifolds on which data live. In Koopman theory, the data considered are generated from a nonlinear dynamical system, and candidate manifolds are constructed from observables of the original state-space variables. In our KIC innovation, we consider a nonlinear dynamical system with inputs and outputs, thus requiring a generalization of Koopman's original definition. We demonstrate the method on a number of examples to illustrate its effectiveness. Importantly, the Koopman method is a data-driven, model-free method that is capable of constructing the best (in a least-squares sense) underlying dynamics and control of a given system from data alone. This makes it an attractive data-driven architecture in modern dynamical systems theory.
Although proposed more than eight decades ago, few results followed the original formulation by Koopman [25]. This was partly due to the fact that no efficient way to compute the Koopman operator itself had been proposed. Moreover, even if an algorithm had been available, no computers of that era could have carried out the computation in practice. Interest was revived in 2004/05 by Igor Mezić et al. [31,29], who showed that Koopman theory could be used for the spectral analysis of nonlinear dynamical systems. Two critical and enabling breakthroughs came shortly after: in 2008/10, Schmid and Sesterhenn [43] and Schmid [40] proposed the Dynamic Mode Decomposition (DMD) algorithm for decomposing complex, spatio-temporal data into coherent modes. The rapid adoption of Koopman theory across a number of scientific and engineering fields [8,30] is not surprising. Its fundamental success stems from the fact that it is an equation-free method, relying on data alone to reconstruct a linear dynamical system characterizing the nonlinear system under consideration. Such linear systems may be characterized using basic methods from ordinary differential equations and spectral analysis, as shown by Mezić [29]. The method can be applied to high-dimensional measurement data collected from complex systems where governing equations are not readily available, and the numerical instantiation of Koopman can be orders of magnitude faster than solving PDEs on complex domains. KIC inherits these advantageous characteristics, but extends the domain of applicability to input-output systems.
The control of high-dimensional, nonlinear systems is a challenging task that is of paramount importance for applications such as flow control [7] and the eradication of infectious diseases [34]. The construction of effective controllers typically relies on relatively few states, a computationally feasible model to implement, and fast solvers to minimize latencies introduced by computing estimates of the system [1]. Further, control laws often rely on solving a single large Riccati equation (H2) or iterating through another set of equations (H∞). For modern engineering systems with high-dimensional measurement data and possibly high-dimensional input data, these requirements are too restrictive. Thus, most practical methods for handling such systems rely heavily on dimensionality-reduction techniques. These model-reduction techniques typically employ the singular value decomposition to discover low-dimensional subspaces where the dynamics evolve [19]. On these low-dimensional subspaces, controllers can be described, constructed, and implemented [32,23,19,37,39,36,50,17]. This paradigm is exemplified in the classic method of balanced truncation, which utilizes both the low-dimensional controllable and observable subspaces to produce a balanced, reduced-order model for control [32]. Notably, balanced truncation has been extended and generalized to handle high-dimensional measurement data by a method called balanced proper orthogonal decomposition (BPOD), but the method requires a linear adjoint calculation [27,46,39,20], which is not possible in many data-driven experiments.
The models produced by BPOD have previously been shown to be equivalent to the balanced input-output models produced by the Eigensystem Realization Algorithm (ERA), a method developed for linear and low-dimensional systems [28]. ERA and the Observer/Kalman filter Identification method (OKID) are part of a class of methods developed for system identification [23,24,11]. Similar to DMD and DMDc, system identification methods are inherently equation-free, acting only on measurement and input data. In fact, these modal decomposition methods have been shown to be intimately connected to ERA, OKID, and subspace identification methods such as Numerical algorithms for Subspace State Space System Identification (N4SID) [35,45,33]. In this manuscript, we demonstrate how KIC reduces to DMDc for linear input-output systems. KIC can be interpreted in terms of nonlinear system identification, since the architecture allows for the analysis of nonlinear systems.
The outline of the paper is as follows: §2 describes the background on Koopman operator theory and its connections to DMD. §3 describes the new development, KIC, and its strong connections to DMDc. §4 then presents a number of numerical examples, including nonlinear input-output systems.
2. Background: Koopman and Dynamic Mode Decomposition. Koopman operator theory and DMD are powerful and intimately connected methods for analyzing complex systems. Data collected from numerical simulations, experiments, or historical records can be utilized by Koopman and DMD to extract important dynamic characteristics relevant for prediction, bifurcation analysis, and parameter optimization. This section provides the mathematical background for Koopman operator theory, DMD, and how they are connected [29,43,42,38,45].

2.1. The Koopman operator for dynamical systems. The Koopman operator is a linear operator defined for any nonlinear system [25]. Spectral analysis of this linear operator provides an analytic and numerical tool to analyze flows arising from nonlinear dynamical systems [29,38,8,30]. In this section, we describe the background on Koopman operator theory.
Consider the discrete nonlinear dynamical system

x_{k+1} = f(x_k), (2.1)

evolving on a smooth manifold M, where x_k ∈ M, f is a map from M to itself, and k is an integer index. For most practical engineering problems, we consider the state and manifold to be x ∈ R^{n_x}. We could equivalently describe Koopman operator theory for continuous-time systems, but here we restrict ourselves to the discrete-time setting, as most engineering problems involve discrete-time data. We also define a set of scalar-valued observable functions g : M → R, each an element of an infinite-dimensional Hilbert space H. This space consists of the Lebesgue square-integrable functions on M. The Koopman operator K acts on this set of observable functions:

Kg(x_k) ≜ g(f(x_k)). (2.2)

The Koopman operator is linear and infinite-dimensional, as defined in Eq. (2.2). The nonlinear dynamical system is often finite-dimensional, but can be infinite-dimensional. The linear characteristics of the Koopman operator allow us to perform an eigendecomposition of K:

Kϕ_j(x) = λ_j ϕ_j(x), j = 1, 2, . . . . (2.3)

Consider a vector-valued observable function g : M → R^{n_y}. Using the infinite expansion implied by Eq. (2.3), and provided the n_y components of g lie within the span of the eigenfunctions ϕ_j, the observable can be rewritten:

g(x) = Σ_{j=1}^∞ ϕ_j(x) v_j, (2.4)
where the vector-valued coefficients v_j are called Koopman modes. Measure-preserving flows, as originally considered in [25], allow for a specific description of the Koopman modes based on projections of the observables onto the span of the eigenfunctions of K. The Koopman operator K is defined for all observable functions; we later denote a finite-dimensional approximation of the Koopman operator (computed from data) as K. Rearranging terms from Eqs. (2.2) and (2.3) provides a new representation of the observable function g in terms of Koopman modes and the corresponding Koopman eigenvalues λ_j:

K^k g(x_0) = Σ_{j=1}^∞ λ_j^k ϕ_j(x_0) v_j, (2.6)

where the Koopman eigenvalues λ_j provide the growth/decay rates and frequency content of the corresponding Koopman modes v_j. For DMD, ϕ_j(x) is a constant and is typically absorbed into each of the modes. For a linear operator and identity observable functions, e.g. g(x) = x, the eigenfunctions ϕ_j(x) can be shown to be the inner product of the state x with the left eigenvectors w_j of the linear Koopman operator [38].
A significant amount of recent work has focused on the choice of observable functions g that uncover a Koopman operator describing the nonlinear vector field [47,4]. In particular, the measured state is expanded into a set of augmented states that either capture nonlinearities, e.g. x², x³, sin(x), etc., or use the eigenfunctions of the underlying system. In the examples of this manuscript, we utilize these ideas to explore KIC.
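To make this concrete, here is a minimal numerical sketch (the scalar map and observable library are our own hypothetical choices, not taken from [47,4]): lift the state into augmented observables and fit a finite-dimensional linear operator by least squares.

```python
import numpy as np

# Hypothetical toy map for illustration: x_{k+1} = 0.9*x_k - 0.1*x_k**2
def f(x):
    return 0.9 * x - 0.1 * x**2

def lift(x):
    # Augmented observables capturing nonlinearities: x, x^2, x^3, sin(x)
    return np.array([x, x**2, x**3, np.sin(x)])

# Snapshot pairs (g(x_k), g(f(x_k))) sampled across phase space
xs = np.linspace(-1.0, 1.0, 50)
Y = np.column_stack([lift(x) for x in xs])
Z = np.column_stack([lift(f(x)) for x in xs])

# Least-squares finite-dimensional Koopman approximation: Z ≈ K Y
K = Z @ np.linalg.pinv(Y)

# The first row of K reproduces f, since f lies in the span of the library
print(np.allclose((K @ lift(0.5))[0], f(0.5)))
```

Because f is a linear combination of the library elements, the first output component is propagated exactly; observables outside an invariant subspace (e.g. x² here, whose image is quartic) are only approximated.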

2.2. Koopman and DMD. We describe how Koopman operator theory connects to DMD, thus intimately connecting measurement data with Koopman spectral analysis. Here, we follow the recent description provided in [45]. We describe a set of internal states x_k, k = 1, 2, . . . , m, by their respective measurements provided by Eqs. (2.4) and (2.6):

y_k = g(x_k), z_k = g(f(x_k)), k = 1, 2, . . . , m.

Remark: The measurements y_1, y_2, . . . , y_m do not have to be sequentially sampled. The important relationship is between a current measurement and its future measurement, for example y_1 and z_1. The states x_i do not have to come from a single trajectory of f, but can be sampled from the phase space. Of course, when collecting data from an experiment or from historical records, the data will often come from a single trajectory.
We can then compute DMD modes from the measurement pairs by finding the eigenvectors and eigenvalues that satisfy the standard eigenvalue problem:

A v_j = λ_j v_j. (2.9)

Assuming the matrix A has a full set of eigenvectors, each measurement column y_k can be represented by an expansion in the eigenvectors of A:

y_k = Σ_j c_jk v_j. (2.10)

If we have linearly consistent data, the relationship A y_k = z_k is satisfied, allowing us to apply the operator A to Eq. (2.10):

z_k = A y_k = Σ_j c_jk A v_j = Σ_j λ_j c_jk v_j. (2.11)

In the case of linearly consistent data matrices, the DMD modes and eigenvalues of Eq. (2.11) correspond to the Koopman modes of Eq. (2.6). We refer the reader to [45] for a more detailed description.

Fig. 2. The top row describes an underlying nonlinear dynamical system that is measured through an observable function g. The second row shows how the inputs, which can either be exogenous or part of a controller, are also measured. The last row indicates the goal of the Koopman operator with inputs and control: to find an operator that takes each observable function g_j(x, u) to the same observable function at a future internal state, g_j(f(x, u), u).
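Returning to the DMD computation above, a minimal numerical sketch (with a hypothetical A_true, and snapshot columns deliberately not drawn from a single trajectory) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear dynamics generating the snapshot pairs
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.5]])

# Columns sampled across phase space (not a single trajectory)
Y = rng.standard_normal((2, 20))
Z = A_true @ Y          # z_k = A y_k, i.e. linearly consistent data

# DMD operator and the eigenvalue problem A v_j = lambda_j v_j
A = Z @ np.linalg.pinv(Y)
eigenvalues, modes = np.linalg.eig(A)

print(np.sort(eigenvalues.real))  # recovers the spectrum of A_true: 0.5 and 0.9
```

Since Y has full row rank, the regression recovers A_true exactly; with noisy or rank-deficient data one would first project onto a truncated SVD basis, as in the standard DMD algorithm.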

3. Generalizing Koopman to allow for inputs and control. In this section, Koopman operator theory is generalized to allow exogenous inputs to the system. In the first subsection, we show how Koopman operator theory can be generalized to include inputs. Then, we show how this formulation can be applied to linear systems. We describe the connection of this analysis to DMDc, resulting in a perspective on KIC that describes how to define a different output space for the Koopman operator.

3.1. Koopman with inputs and control.
Consider a nonlinear dynamical system that allows for external inputs:

x_{k+1} = f(x_k, u_k), (3.1)

where x ∈ M and u ∈ N, and both M and N are smooth manifolds. As before, we dispense with the manifolds and consider x ∈ R^{n_x} and u ∈ R^{n_u}. Further, we do not need u to be constrained to a manifold.

Fig. 1. An illustration of one of the goals of Koopman operator theory, with or without inputs. The first row shows that there may be an unknown system evolving according to some dynamical system. The second row shows that we can measure the system experimentally, as in the case of optical systems, or historically, as in the case of historical infectious disease data (e.g., Google Flu Trends). The last row shows one of the goals of Koopman operator theory: to discover an operator that can propagate a set of measurements forward in time for prediction and control.

We define a set of scalar-valued observable functions, but now the functions depend on both the state and the input, g : M ⊗ N → R. Each observable function is an element of an infinite-dimensional Hilbert space H. Again, we choose the Hilbert space of Lebesgue square-integrable functions. Note that H can be decomposed into three separate subspaces: observables g(x, u) = g(x) in H_X, observables g(x, u) = g(u) in H_U, and the complement H_XU containing observables with mixed terms, such as g(x, u) = x_1 u_1. Thus, the Hilbert space can be considered to be composed of three components, H = H_X ⊕ H_U ⊕ H_XU. This partitioning could be extended to distinguish linear identity observables, e.g. g(x) = x_1 where x_1 is the first element of x, from nonlinear observables. We take advantage of this construction later in this section to determine how the Koopman operator projects onto different partitions of H, allowing us to connect KIC to DMDc.
The Koopman operator with inputs and control, K : H → H, acts on the Hilbert space of observable functions as follows:

Kg(x_k, u_k) ≜ g(f(x_k, u_k), *), (3.2)

where * indicates a choice of definition. Consider the following choices:
1. * = u: The inputs are evolving dynamically, whether from state-dependent controllers or from externally evolving systems such as those found in multi-scale modeling.
2. * = 0: The inputs affect the state evolution, but the inputs are not evolving dynamically. This is the case with impulse-response measurements and random exogenous disturbances.
The linear characteristics of the Koopman operator allow us to perform an eigendecomposition of K, given in the standard form:

Kϕ_j(x, u) = λ_j ϕ_j(x, u). (3.3)

The operator is now spanned by eigenfunctions that are defined in terms of both the inputs and the state. Consider a vector-valued observable function g : M ⊗ N → R^{n_y}, where n_y is the number of measurements. Using the infinite expansion shown in Eq. (2.3), the observable functions g_j can be rewritten in terms of the right eigenfunctions ϕ_j:

g(x, u) = Σ_{j=1}^∞ ϕ_j(x, u) v_j. (3.4)

The new Koopman operator can be applied to this representation of the measurements. Note that the expansion is in terms of Koopman eigenfunctions with vector-valued coefficients v_j, which we call Koopman modes. The terminology of Koopman operator theory now allows for measurement functions that accept inputs.
Remark 1: In (3.2), we could use * = 0 for the definition if we are not attempting to discover dynamics for the inputs. Considering the discrete dynamical system of (3.1), the definition could also be Kg(x_k, u_k) ≜ g(f(x_k, u_k), u_k), where the operator K will discover an identity map from u_k to u_k instead of a map from u_k to 0. This choice requires some a priori information about the system and helps define a family of Koopman operators for the set of observable functions. This general perspective is discussed in more detail in §3.4.

Remark 2: In (3.2), we could use * = u for the definition if there is prior information that the inputs are evolving according to a set of dynamics. The discrete dynamical system of (3.1) would then define the following operator:

Kg(x_k, u_k) ≜ g(f(x_k, u_k), u_{k+1}).

In this case, u_k could technically be adjoined to the state x_k and the original definition of the Koopman operator applied. We believe, however, that partitioning H according to state and input observables also helps partition the operator K, disambiguating the impact of the state dynamics from that of the inputs. Further, if the system being measured is multi-scale in nature, such that the inputs to one scale of the system evolve according to their own dynamics, then we could define the operator with * = h(u). The Koopman operator would then discover how u evolves on one scale as well as how x evolves with u as an input.

3.2. KIC for linear systems. In this subsection, we demonstrate how this new definition of KIC with * = u can be applied to linear systems. Consider the linear dynamical system with inputs:

x_{k+1} = A x_k + B u_k. (3.5)

We consider full-state and full-input access by choosing observable functions that are the identity, i.e. g(x) = x and g(u) = u. The linear dynamical system can be rewritten in terms of a new state z = [x; u]:

z_{k+1} = G z_k, (3.6)

with

G = [A B; G_21 G_22]. (3.7)

The eigenvalues of G are also the eigenvalues of K, and the left and right eigenvectors of G are related to the eigenfunctions of K. The description of this linear system as an input-output system is clearly not canonical: typically, the future state would not include the future input. There are exceptions, though, especially the common method for analyzing non-autonomous dynamical systems in which time is treated as a state (creating an augmented state) and the vector field f is augmented with the simple ODE ṫ = 1 [15]. Further, the inputs may actually have dynamics. Later in this section, we comment on modifying the Koopman operator with inputs and control to recover a more canonical view of an input-output system, thus connecting to previous work on DMDc [33]. The eigendecomposition of G, with eigenvalues λ_j and eigenvectors v_j, is

G v_j = λ_j v_j.

The state can be represented by an expansion in terms of the eigenvectors v_j:

z = Σ_j ⟨z, w_j⟩_H v_j,

where ⟨·, ·⟩_H denotes the inner product defined on the Hilbert space H, and w_j are the left eigenvectors of the operator G. The eigenfunctions ϕ_j(z) = ⟨z, w_j⟩_H are thus projections of the state onto the left eigenvectors w_j. For linear systems with these observables, the Koopman operator is equivalent to the linear map G, and the Koopman modes (both left and right) coincide with the eigenvectors of G.
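Numerically, the square operator G can be recovered by the same least-squares regression used for DMD, now on the augmented state z_k = [x_k; u_k]. In this sketch (the matrices are hypothetical, chosen for illustration), the inputs evolve linearly, u_{k+1} = F u_k, so the lower blocks come out as G_21 = 0 and G_22 = F:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[0.95]])   # assumed linear input dynamics: u_{k+1} = F u_k

# Augmented snapshot pairs z_k = [x_k; u_k] sampled across phase space
Zs, Zp = [], []
for _ in range(30):
    x, u = rng.standard_normal(2), rng.standard_normal(1)
    Zs.append(np.concatenate([x, u]))
    Zp.append(np.concatenate([A @ x + B @ u, F @ u]))
Zs, Zp = np.array(Zs).T, np.array(Zp).T

# Square KIC operator G = [[A, B], [G21, G22]] via least squares
G = Zp @ np.linalg.pinv(Zs)
print(np.round(G, 2))
```

The upper blocks of the recovered G reproduce A and B; the lower blocks reproduce the assumed input dynamics, illustrating the * = u definition.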

Remark 1: The analysis of the linear system with the new definition of the Koopman operator illustrates how the previous definition can be extended to handle inputs. However, the example also indicates a challenge presented by this choice. We may not be interested in finding a Koopman operator that predicts the future input u_{k+1}. Further, if the inputs are random disturbances or exogenous inputs, there is unlikely to be any operator that can predict the future input. Thus, G_21 and G_22 create issues when solving for the approximate Koopman operator. There are a number of ways to alleviate these challenges, which are addressed in §3.4.

3.3. KIC and connections to DMDc. In this subsection, we describe how KIC is connected to DMDc. As stated in the previous subsection, the definition of KIC does not appear to fit the canonical view of linear input-output systems. We will illustrate the flexibility of the new definition by demonstrating how to connect the new theory with the recently developed DMDc [33]. This connection parallels the link between Koopman operator theory and DMD [38].
Similar to §2.2, we describe a set of internal states x_k, k = 1, 2, . . . , m, now with a set of inputs u_k, with linear, identity measurements given by the following:

y_k = x_k, γ_k = u_k, z_k = x_{k+1}.

As with Exact DMD, the set of states x_k and inputs u_k do not need to come from a single trajectory of the dynamical system [45]. The measurements can be collected to form three large data matrices:

Y = [y_1 y_2 · · · y_m], Υ = [γ_1 γ_2 · · · γ_m], Z = [z_1 z_2 · · · z_m].

DMDc is defined for the three measurement matrices (Z, Y, Υ), providing a non-square operator G̃ = [A B], satisfying G̃ [y_k; γ_k] = z_k, that helps identify input-output characteristics. The operator for the measurement trio can be found by computing G̃ = Z Ω†, where Ω = [Y; Υ] and the pseudoinverse Ω† is evaluated with the singular value decomposition of Ω. Each measurement column ω_k = [y_k; γ_k] of Ω can be represented by an expansion in vectors v_j spanning the input space:

ω_k = Σ_j c_jk v_j.

If we have linearly consistent data, the relationship G̃ [y_k; γ_k] = z_k is satisfied, allowing us to apply the operator G̃ to this expansion:

z_k = G̃ ω_k = Σ_j c_jk G̃ v_j.

Note the difference between DMDc and KIC: in DMDc, the input and output spaces of the expansion differ, since the operator G̃ is not square. In the previous subsection, we illustrated how G is effectively square, stepping forward not only observables on the state but also the inputs. In the following section, we describe how we synthesize these two perspectives.
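The DMDc regression itself is a short computation. In this sketch (A and B are hypothetical), the state, input, and shifted-state snapshot matrices are stacked as Ω = [Y; Υ], and the non-square operator G̃ = [A B] is recovered via the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[1.5, 0.0],
              [0.0, 0.1]])
B = np.array([[0.0],
              [1.0]])

# Snapshot matrices: states Y, inputs Ups, shifted states Z
Y = rng.standard_normal((2, 40))
Ups = rng.standard_normal((1, 40))
Z = A @ Y + B @ Ups

# Non-square DMDc operator Gtilde = [A B], from Z ≈ Gtilde @ [Y; Ups]
Omega = np.vstack([Y, Ups])
Gtilde = Z @ np.linalg.pinv(Omega)

A_id, B_id = Gtilde[:, :2], Gtilde[:, 2:]
print(np.allclose(A_id, A), np.allclose(B_id, B))  # True True
```

Note that no rows are wasted attempting to predict the future inputs; the operator maps the stacked state-input measurements only onto the future state.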

3.4. Adapting KIC to allow for different input and output spaces. In this subsection, we connect the KIC architecture to DMDc. Further, we illustrate how the Koopman operator can be viewed as projecting from the complete Hilbert space H to a subspace of H. This perspective of the Koopman operator with inputs allows for different input and output spaces, thus facilitating the connection not only to DMDc, but also to recent developments such as kernel DMD [48,47] and the Sparse Identification of Nonlinear Dynamics (SINDy) [5]. We begin with a slightly different definition of the Koopman operator itself to show the connection to DMDc; we then demonstrate how the Koopman operator can be viewed in terms of different input and output spaces. Consider the definition

Kg(x, u) ≜ g(f(x, u), 0). (3.15)

In this case, the operator no longer attempts to fit a future input prediction. Instead, this modified Koopman operator only propagates the observable functions at the current state and input to the future observable functions on the state. Numerically, this definition modifies (3.7) for linear systems so that

[x_{k+1}; 0] = [A B; 0 0] [x_k; u_k],

which can be reduced to

x_{k+1} = [A B] [x_k; u_k].

This interpretation of the Koopman operator connects the non-canonical form found in §3.2 with the canonical version of the system identification methods described in §3.3. This construction forces a closer inspection of the eigenfunction expansion in (3.3): there is no longer a requirement that the input and output spaces of the operator K share the same eigenfunctions ϕ_j. Here, the eigenfunctions ϕ_j could be mapped to a restricted subspace of H that concerns only the prediction of the future state (without the future input).

3.4.1. Input and output spaces for the Koopman operator. We investigate how the output space of the Koopman operator can be restricted to a subspace of H. In §3.1, we illustrated how the Koopman operator is defined on H for all observable functions g(x, u), and how this space can be broken up into different subspaces. We now illustrate how these subspaces can be utilized to describe the output space of the Koopman operator. We could expand the input and output spaces of K as follows:

Kg(x, u) = Σ_{j=1}^∞ λ_j ψ_j(x, u) q_j,

where the ψ_j are eigenfunctions that span a subspace H_X; they are a subset of the eigenfunctions on H. The span of H includes the eigenfunctions of the linear measurements on x, but also the nonlinear measurements on x and u. The vector of observable functions g(x, u) can still be defined as in (3.4), but now the Koopman operator applied to this set of observables returns only n_x observable functions, a smaller set than the n_y of the input space, with vector-valued coefficients q_j that we call left Koopman modes. This allows there to be different input and output spaces for the Koopman operator expansion. The distinction allows the practitioner to investigate how the Koopman operator projects a large input space of observables, including linear, nonlinear, and mixed terms, onto a restricted output space of only linear observables. Clearly, in the case of DMDc, measurements of the state and input are collected, but the Koopman analog is focused only on determining the future measurements of the state (with the impact of the input included). This perspective expands that view to allow for many more types of measurements than in the DMDc context. Namely, the input space can include a large functional library expanding the measurement set, as in the work by Williams et al. [47], which has been shown to recover the underlying nonlinear dynamics with more clarity.
In this framework the input space of the Koopman operator can be expanded in linear measurements of the state and input, nonlinear measurements of the state and inputs, and mixed state-input terms. The output space, though, can be restricted to a set that spans, for example, the linear state measurements.
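A sketch of this asymmetry (the scalar dynamics and library are our own toy choices): the input space carries linear, nonlinear, and mixed observables of (x, u), while the output space is restricted to the linear state measurement alone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dynamics with a mixed state-input nonlinearity
def f(x, u):
    return 0.5 * x + 0.2 * x * u

def library(x, u):
    # Input-space observables: linear, nonlinear, and mixed terms
    return np.array([x, u, x**2, u**2, x * u])

X = rng.standard_normal(50)
U = rng.standard_normal(50)
Yin = np.column_stack([library(x, u) for x, u in zip(X, U)])
Yout = f(X, U)[None, :]        # output space: the linear state measurement only

# Restricted (non-square) Koopman approximation: Yout ≈ K Yin
K = Yout @ np.linalg.pinv(Yin)
print(np.round(K, 3))  # weight 0.5 on x and 0.2 on x*u
```

Because the mixed term x·u is included in the input library, the restricted operator identifies the nonlinear dynamics exactly; omitting it would leave only a poor linear fit.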

3.4.2. On the linear case for different input and output spaces of the Koopman operator. We are interested only in how the Koopman operator maps to observables of the form g(x, u) = g(x); this is exactly the case for DMDc. For observables that are the identity on the state and input measurements, the input space can be represented by an expansion similar to that of §3.1, in terms of the right Koopman modes v_j:

z = Σ_j ϕ_j(z) v_j.

The Koopman operator with control K can then be applied to the input state z:

Kz = Σ_j λ_j ϕ_j(z) q_j,

where now the output space is expanded by the left Koopman modes q_j. In this setting the Koopman operator and the DMDc operator are equivalent; the equivalence required restricting the output space to a subspace H_X of H.

4. Applications. This section explores the theoretical development of KIC on various linear and nonlinear examples.
For Examples 1-3, we assume the perspective of an applied scientist taking a finite set of measurements that are intuitive for the system at hand. The first example shows KIC when we assume there are dynamics on the inputs. The second example explores a nonlinear dynamical system with a quadratic nonlinearity, well studied in the Koopman and DMD literature, where we assume there are no dynamics on the inputs. The final example considers a more difficult problem inspired by the study of infectious disease. This example illustrates a difficulty facing the community applying Koopman and extended DMD to certain types of nonlinear problems; it also illustrates a potential advantage of this framework.

Example 1 - Linear system with inputs. Consider the following linear dynamical system:

x_{1,k+1} = μ x_{1,k},
x_{2,k+1} = λ x_{2,k} + δ u_k. (4.1)

A similar example can be found in [33]. If |λ| and/or |μ| is greater than 1, the system is unstable. The goal is to recover the underlying dynamics and input matrix when there are various types of inputs, including random disturbances, a state-feedback controller, or a multi-scale system. We assume full access to the state and inputs, giving the measurements y_1 = x_1, y_2 = x_2, and y_3 = u. The dynamical system can be rewritten in the KIC form with the definition * = u, where we are interested in finding dynamics for the inputs:

[x_{1,k+1}; x_{2,k+1}; u_{k+1}] = [μ 0 0; 0 λ δ; a b c] [x_{1,k}; x_{2,k}; u_k], (4.2)

where a, b, and c depend on the type of inputs. We first investigate the case where the inputs are random disturbances, collecting measurements of the state and inputs over the snapshots of a single realization. The parameters used for this example are μ = 0.1, λ = 1.5, and δ = 1, for which the linear system is unstable. The random disturbances for the input are zero-mean and Gaussian distributed with a variance of 0.01, and six snapshots of data are used for the computation. Using these data matrices, a restricted Koopman operator can be constructed; see (3.7) for an example. Solving with these data matrices reconstructs the underlying system of (4.2) with random disturbances for inputs: G_11 and G_12 are accurately reconstructed from the data. The restricted Koopman operator also attempts to fit G_21 and G_22 as a propagator on the random inputs, which, by construction, will not be accurate.
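The random-disturbance computation can be reproduced in a few lines (a sketch under our reading of (4.1), x_{1,k+1} = μ x_{1,k} and x_{2,k+1} = λ x_{2,k} + δ u_k, with our own realization of the disturbances and our own initial condition):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, lam, delta = 0.1, 1.5, 1.0

# Zero-mean Gaussian disturbances with variance 0.01
u = rng.normal(0.0, 0.1, size=8)

# Simulate the (unstable) linear system
x = np.zeros((2, 8))
x[:, 0] = [4.0, 7.0]          # hypothetical initial condition
for k in range(7):
    x[0, k + 1] = mu * x[0, k]
    x[1, k + 1] = lam * x[1, k] + delta * u[k]

# Augmented snapshots z_k = [x1; x2; u] with * = u (fit input dynamics too)
Zs = np.vstack([x[:, :6], u[:6]])
Zp = np.vstack([x[:, 1:7], u[1:7]])

G = Zp @ np.linalg.pinv(Zs)
print(np.round(G[:2, :], 2))  # state rows recover [[0.1, 0, 0], [0, 1.5, 1]]
```

The last row of the recovered G, which tries to propagate the random disturbances forward, carries no meaning by construction; the state rows, however, are recovered accurately.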
If the controller has state feedback, for example u_k = −K x_{2,k} with K = 1, the data in the last row become correlated with the second row. In order to disambiguate the control signal from y_2, a small disturbance is added to the input u in the snapshot matrix Ω only. This provides an approximate restricted Koopman operator in which the dynamics of the controller now mimic the actual dynamics of x_2. In this example, the restricted Koopman operator recovers the unstable underlying dynamics and discovers that the inputs are being generated by a controller dependent on x_2.
Consider the final input type: the input has dynamics but is not state dependent, for example u̇ = −r u with r = 0.01 and u(0) = 1. As with the other input types, we collect the data and find a restricted Koopman operator for the discretized system. The KIC architecture discovers the underlying dynamics of x and the impact of u, but also finds the dynamics of u itself. This perspective could be beneficial in multi-scale modeling, where one scale is considered a forcing on another.
The restricted KIC operator can be recovered from the data despite the unstable eigenvalue and the various types of inputs. Note that both the operator A and the operator B are recovered from the underlying dynamical system (4.2). The left Koopman modes, as in (3.12), of this operator are

q_1 = [1; 0], q_2 = [0; 1], (4.8)

and these Koopman modes can be used to construct the eigenfunctions Ψ_j described in §3.4.2.
A similar procedure can be utilized to find the right Koopman modes v j and eigenfunctions ϕ j . The right Koopman modes span both the states and the inputs.

Example 2 - Nonlinear system with inputs. We investigate how Koopman with control can be used to analyze a nonlinear example with inputs. In this example, we take the KIC form with the definition * = 0. Consider the following nonlinear dynamical system from [45] and [4], modified to include an input u:

ẋ_1 = μ x_1,
ẋ_2 = λ (x_2 − x_1²) + γ u, (4.9)

where λ = 0.5, μ = 2, and γ = 2. We use this example to investigate the effect of inputs or control on the nonlinear system. The observable functions are carefully chosen, as in [4], to investigate this dynamical system:

y_1 = x_1, y_2 = x_2, y_3 = x_1²,

where the nonlinear observable y_3 = x_1² has a convenient derivative, ẏ_3 = 2μ y_3, which allows for closure of the dynamical system defined on the observables; see [4] for more about the closure of these dynamical systems. We can transform the problem to include the inputs:

[ẏ_1; ẏ_2; ẏ_3] = [μ 0 0; 0 λ −λ; 0 0 2μ] [y_1; y_2; y_3] + [0; γ; 0] u.

Now, we can collect measurement data in terms of the input and output variables y_i and u; in this case we used fifteen iterations with an initial condition of [5, 2]^T. The restricted Koopman operator on these observables can then be reconstructed from the data. The left Koopman modes can be constructed similarly to [4], as described in (3.12). These Koopman modes q_j can then be used to construct eigenfunctions Ψ_j(x) = ⟨x, q_j⟩. These eigenfunctions span the Koopman operator for this nonlinear dynamical system. The right Koopman modes and eigenfunctions can also be computed, as described by (3.12). Despite the nonlinear dynamics, the KIC perspective constructs a linear dynamical system on the measurements that can be used for prediction and control.

Example 3 - An infectious disease model with vaccination. Consider the classic SIR (Susceptible-Infectious-Recovered) model with a vaccination input:

Ṡ = ν(S + I + R) − β S I − μ S − Vacc,
İ = β S I − γ I − μ I,
Ṙ = γ I − μ R + Vacc,

where β = 10 is an infection parameter, ν = 1 is a birthrate parameter depending on the total population of the community S + I + R = 1, μ = 1 is the death rate, γ = 1 is the recovery rate from infection, and Vacc is a rate of vaccination.

Fig. 3. The left panel shows the output of the model with a 1% infection seeded at time zero and a small random amount of vaccination added at each time step. The nonlinearity in this example, SI, is a mixed-state quadratic nonlinearity.
We transform this continuous nonlinear dynamical system into a discrete-time linear dynamical system with a simple forward-Euler scheme and augment the input space to include the nonlinearity SI and the input Vacc:

$$\mathbf{y}_o = \begin{bmatrix} S \\ I \\ R \end{bmatrix}, \tag{4.14a} \qquad \mathbf{y}_i = \begin{bmatrix} SI \\ \mathrm{Vacc} \end{bmatrix}, \tag{4.14b}$$

giving the dynamical system

$$\begin{bmatrix} S_{k+1} \\ I_{k+1} \\ R_{k+1} \end{bmatrix} = \mathbf{K} \begin{bmatrix} S_k \\ I_k \\ R_k \\ (SI)_k \\ \mathrm{Vacc}_k \end{bmatrix}, \tag{4.15a}$$

which is solved from snapshot data by

$$\mathbf{K} = \mathbf{Y}_o' \begin{bmatrix} \mathbf{Y}_o \\ \mathbf{Y}_i \end{bmatrix}^{\dagger}, \tag{4.15b}$$

where K is the KIC operator from data, Y$_o$ is the data in the output observables, Y$_i$ is the data in the input observables, and Y$_o'$ is the output data shifted one step forward in time. If the term SI is included in the output observables, its derivative does not lend itself to a closed form: $\frac{d}{dt}(SI) = \dot S I + S \dot I$ introduces the need for even more nonlinearities, increasing the number of augmented observable functions required, namely $S^2 I$, $SI^2$, and $I^2$. We have not included a row in (4.15a) for the time evolution of $y_4 = SI$ for this reason. This introduces a practical difficulty in the implementation of this method on realistic complex systems. For example, if we try to solve (4.15b) with the augmented row as in Example 2, we do not find the correct operator; indeed, the last row bears no semblance to the correct values. The right panel of Fig. 3 shows how this formulation (dotted line) provides an incorrect operator and thus is not able to predict accurately into the future after being trained on 200 time snapshots. The dashed line shows the correct prediction after solving (4.15b) with the input and output observables defined as in (4.14a) and (4.14b).
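Solving (4.15b) for the SIR example can be sketched as below. The exact model form is an assumption here: we use one standard vaccinated SIR form consistent with the parameter description above, with outputs (S, I, R) and augmented inputs (SI, Vacc), and the step size and vaccination magnitude are illustrative choices.

```python
import numpy as np

# Sketch of the SIR fit. Assumed model (one standard vaccinated SIR form
# consistent with the stated parameters): S' = nu*(S+I+R) - beta*S*I - mu*S
# - Vacc, I' = beta*S*I - (gam+mu)*I, R' = gam*I - mu*R + Vacc, forward Euler.
beta, nu, mu, gam, dt, n = 10.0, 1.0, 1.0, 1.0, 0.01, 200
rng = np.random.default_rng(1)

S, I, R = 0.99, 0.01, 0.0               # seed a 1% infection at time zero
V = 0.01 * rng.random(n)                # small random vaccination input
data = []
for v in V:
    data.append([S, I, R, S*I])
    S, I, R = (S + dt*(nu*(S+I+R) - beta*S*I - mu*S - v),
               I + dt*(beta*S*I - (gam+mu)*I),
               R + dt*(gam*I - mu*R + v))
data.append([S, I, R, S*I])
data = np.array(data).T                  # rows: S, I, R, SI

Yo, Yo_next = data[:3, :-1], data[:3, 1:]         # output observables
Yi = np.vstack([data[3, :-1], V])                 # input observables (SI, Vacc)
K = Yo_next @ np.linalg.pinv(np.vstack([Yo, Yi])) # KIC operator, 3 x 5
```

With SI moved to the input observables, the Euler-discretized dynamics are exactly linear in the stacked data, so the least-squares solve recovers the operator; no row for the evolution of SI itself is needed.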
This particular equation can readily be solved with enough snapshot data. Thus, choosing the correct observable functions is of paramount importance for both the input and the output space. A similar sentiment is expressed in [47], though without considering separate input and output spaces. Further, recent work has demonstrated a statistical framework for determining which nonlinearities to include by sparsely selecting terms from a large library of candidate dynamical terms [5].
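The library-selection idea cited above can be sketched with sequentially thresholded least squares, in the spirit of [5]. Everything here is illustrative: we reuse the assumed Example 2 dynamics ($\dot x_1 = \mu x_1$, $\dot x_2 = \lambda(x_2 - x_1^2) + \gamma u$, forward Euler), build a candidate library that deliberately includes a spurious term $y_1^3$, and let the thresholding discard the terms the dynamics do not need; the threshold value is an arbitrary choice.

```python
import numpy as np

# Sequentially thresholded least squares, sketching the sparse library
# selection of [5]. Illustrative system (an assumption): x1' = mu*x1,
# x2' = lam*(x2 - x1^2) + gam*u, forward Euler. We fit the update for
# y2 = x2 from a library that includes a spurious candidate y1^3.
mu, lam, gam, dt, n = 2.0, 0.5, 2.0, 0.01, 40
rng = np.random.default_rng(2)

x1, x2, rows, U = 5.0, 2.0, [], rng.standard_normal(n)
for u in U:
    rows.append([x1, x2, x1**2, x1**3, u])
    x1, x2 = x1 + dt*mu*x1, x2 + dt*(lam*(x2 - x1**2) + gam*u)
    rows[-1].append(x2)                  # next-step y2 as regression target
rows = np.array(rows)
Theta, target = rows[:, :5], rows[:, 5]  # library [y1, y2, y1^2, y1^3, u]

xi = np.linalg.lstsq(Theta, target, rcond=None)[0]
for _ in range(5):                       # threshold small terms, then refit
    active = np.abs(xi) >= 1e-3
    xi = np.zeros(5)
    xi[active] = np.linalg.lstsq(Theta[:, active], target, rcond=None)[0]
```

The surviving coefficients sit on y2, y1^2, and u, matching the closed observable dynamics, while the spurious y1 and y1^3 entries are zeroed out.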

Example 4 – A nonlinear example with periodic solutions.
Here, we consider a nonlinear example that contains periodic solutions such that $x_k = x_{k+m}$ for all $k$. The same example was considered in [38] to illustrate that the discrete Fourier transform, a common method for analyzing periodic solutions, can be described in terms of Koopman operator theory, as discussed more generally in [29]. They illustrate that the Fourier expansion on the periodic orbit yields eigenfunctions of the Koopman operator.
In this subsection, we consider a slightly different problem where the periodic orbit also has external inputs $u_k$, with each of the states $x_k$ and inputs $u_k$ contained in a set $S$. Similar to [38], we can define a set of transformed states using the Fourier decomposition, where the Z-transform generalizes the Fourier decomposition through the infinite expansion

$$\sum_{k=-\infty}^{\infty} x_k z^{-k} = \sum_{k=-\infty}^{\infty} x_k e^{-i\omega k}$$

for values of $|z| = 1$, i.e. $z = e^{i\omega}$ on the unit circle. The Z-transform allows for a variety of different inputs and is significant when considering input-output transfer functions for linear systems. Define a set of functions $\varphi_j(x_k, u_k) : S \to \mathbb{C}$ by

$$\varphi_j(x_k, u_k) = z_j^{\,k}, \qquad z_j = e^{2\pi i j/m}, \qquad j, k = 0, \ldots, m-1; \tag{4.18}$$

then the $\varphi_j$ are right eigenfunctions of the Koopman operator $K$, with eigenvalues $z_j$ on the unit circle, and the left eigenfunctions $\psi_j$ are defined by the Fourier decomposition:

$$K \varphi_j(x_k, u_k) = \varphi_j(f(x_k, u_k), u_{k+1}) = \varphi_j(x_{k+1}, u_{k+1}) = e^{2\pi i j/m}\, \varphi_j(x_k, u_k). \tag{4.19}$$

Thus, the expansions of the input and output spaces can be written in terms of two different (but related) Hilbert spaces. By restricting the phase space to just the periodic orbit $S$, the Koopman modes are given by the Fourier transform and the Z-transform. Without inputs, the analysis reduces to the discrete Fourier transform; with inputs, the choice of the Z-transform allows exogenous inputs to be considered in analyzing the periodic orbit.
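In the input-free case, this construction can be checked numerically in a few lines: on an $m$-periodic orbit, the functions $\varphi_j(x_k) = e^{2\pi i jk/m}$ advance by exactly the factor $e^{2\pi i j/m}$ under one step of the dynamics, and the (scaled) DFT coefficients of the orbit serve as the Koopman modes. The specific orbit below is an arbitrary illustrative choice.

```python
import numpy as np

m = 8
k = np.arange(m)
x = np.cos(2*np.pi*k/m) + 0.1*k        # an arbitrary m-periodic orbit x_0..x_{m-1}

# Eigenfunction check: advancing the orbit one step multiplies phi_j by
# exp(2*pi*i*j/m), the Koopman eigenvalue on the unit circle.
j = 3
phi = np.exp(2*np.pi*1j*j*k/m)
step = np.roll(phi, -1)                # Koopman action: evaluate at x_{k+1}

# Koopman modes on the orbit are the scaled DFT coefficients of the data,
# and they reconstruct the orbit in the eigenfunction expansion.
modes = np.fft.fft(x) / m
recon = sum(modes[jj] * np.exp(2*np.pi*1j*jj*k/m) for jj in range(m))
```

Without inputs this is exactly the discrete Fourier transform; the Z-transform version replaces the unit-circle evaluation points with general $z$.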
5. Discussion. A wealth of modern applications are nonlinear and high-dimensional, including distribution systems, internet traffic, and the vaccination of human populations in the developing world. The need to develop quantitative and automatic methods to characterize and control these systems is of paramount importance for solving these large-scale problems. In order to construct effective controllers, the complex system has to be well understood. When we do not have well-established, physics-based governing equations, equation-free methods can help characterize these systems and offer insight into their control.
Koopman operator theory and DMD offer data-driven methods for characterizing complex systems [29,38]. These methods are strongly grounded in the analysis of nonlinear systems and have been successfully applied in a number of fields such as fluid dynamics [29,43,42,2], epidemiology [34], video processing [14], and neuroscience [3]. Further, this architecture has allowed for the incorporation of recent innovations from compressive sensing, offering insight into how to measure a system optimally [21,45,6]. Generalizing Koopman theory for input-output systems allows a broader set of systems to be considered. KIC is closely connected to DMDc, which is already having an impact on the analysis of input-output characteristics for systems with linear observables [33,10]. The extension to Koopman operator theory allows a larger set of observable functions to be included, enabling nonlinear system identification and the design of controllers.
Theoretical innovations such as KIC will play an ever-increasing role in the characterization and control of complex systems. We believe KIC and DMDc are well poised to be integrated into a diverse set of engineering and science applications, and KIC is positioned to have a significant impact on the analysis and control of large-scale, complex, input-output systems.