SIMPLE ADAPTIVE CONTROL FOR POSITIVE LINEAR SYSTEMS WITH APPLICATIONS TO PEST MANAGEMENT

Abstract. Pest management is vitally important for modern arable farming, but models for pest species are often highly uncertain. In the context of pest management, control actions are naturally described by a nonlinear feedback that is generally unknown, which thus motivates a robust control approach. We argue that adaptive approaches are well suited for the management of pests and propose a simple high-gain adaptive tuning mechanism so that the nonlinear feedback achieves exponential stabilization. Furthermore, a switched adaptive controller is proposed, cycling through a set of given control actions, that also achieves global asymptotic stability. Such a model in practice allows for the possibility of rotating between different courses of management action. In developing our control strategies we appeal to comparison and monotonicity arguments. Interestingly, componentwise nonnegativity of the model, combined with an irreducibility assumption, implies that several issues typically associated with high-gain adaptive controllers do not arise and usual high-gain structural assumptions are not required.

are straightforward to compute (and thus, in theory, implement). The nomenclature "simple" refers to the nonidentifier property of the controller, meaning that it does not seek to update the underlying dynamical model over time (for instance, by inferring or estimating parameters). That said, in an applied context the management strategy determined by a simple adaptive controller changes over time in response to how the measured variable changes. In that sense, there are parallels between adaptive control, an example of feedback control more generally, and (at least the technical, modelling aspects of) the academic discipline of adaptive management [32,33,34], as known in the resource and ecological management literature. The connections between robust feedback control and adaptive management have been explored by other authors as well: Heinimann [35] proposes principles from control theory as a concept for scholars and practitioners in adaptive ecosystem management.
The application of simple adaptive control to pest management is to the best of our knowledge novel and, owing to its potential societal and economic value, worthy of consideration. The present mathematical investigation is additionally motivated by the observation that existing simple adaptive controllers are typically not designed for models where the state variables are constrained to take nonnegative values, as is evidently the case not only in applied pest management contexts, but also in models for harvesting, scavenging, culling, or predation. One exception that we are aware of is [36, Chap. 15], where adaptive control for positive dynamical systems is considered, and we compare those results with ours. Dynamical systems that leave a positive cone invariant, such as the nonnegative orthant in n-dimensional Euclidean space, are called positive dynamical systems. Correspondingly, the state variables of positive dynamical systems take only nonnegative values, typically denoting abundances, densities, or concentrations. Positive dynamical systems arise as models in a diverse range of fields from biology, chemistry, ecology, and economics to genetics, medicine, and engineering [36, p. xv], and are understandably well studied, with textbooks [37,38,39]. Control of positive dynamical systems leads to positive input control systems [40], where the input variables are assumed to be positive as well. In a pest management context, the input variable is naturally allowed to take negative values, provided that a nonnegative number or distribution of pests remains, leading to the concept of so-called positive state systems [41]. Such a framework allows the modelling of control actions that are essential for pest management but, importantly, fall outside the existing positive input systems theory.
The purpose of the present paper is twofold. On the one hand, for those interested in applications, but possibly not familiar with adaptive control theory, we seek to describe how ideas from adaptive control may be used in pest management. We present a suite of tools that explore the utility of adaptive control techniques in pest management to eradicate or reduce pest abundance. Ultimately, we seek to use adaptive control methods to reduce pesticide usage, which in turn reduces the cost of food production and decreases consequential but undesirable costs of pesticide usage such as the humanitarian and economic cost of human pesticide poisoning and other pesticide-related illnesses, the destruction of beneficial insects (predators, pollinators), the evolution of pesticide resistance, and the contamination of ground and surface water [42]. On the other hand, for those familiar with classical nonidentifier-based adaptive control, we describe how the typical structural assumptions on the set of (unknown) systems, such as minimum phase, relative degree, minimality, and knowledge of the high-gain response, can essentially be replaced by componentwise nonnegativity and an irreducibility or primitivity assumption. Therefore, we present and develop two ideas from adaptive control in the context of pest management, developed for discrete-time, positive state linear systems. First, we derive a so-called high-gain result (section 2). We present several extensions, including a continuous-time analogue and an input-to-state stability-type estimate. Second, we present a so-called switching or universal adaptive control scheme (section 3). The results are applied to a model for Diaprepes root weevil, an invasive species threatening U.S. citrus production (section 4). Section 5 contains some concluding remarks. Proofs of certain results appear in the appendix.
Notation. We collect mathematical notation and terminology used in what follows.
The symbols N and R denote the sets of positive integers and real numbers, respectively. The symbol N_0 denotes the set of nonnegative integers. For n, m ∈ N, R^n and R^{n×m} denote usual n-dimensional Euclidean space and the set of n×m matrices with real entries, respectively. The superscript T denotes both matrix and vector transposition. With the conventions that R^{n×1} = R^n and R^1 = R, for M, N ∈ R^{n×m} with entries m_ij and n_ij, respectively, we write M ≤ N if m_ij ≤ n_ij for all i and j, M < N if M ≤ N and M ≠ N, and M ≫ N if m_ij > n_ij for all i and j. We recall that a square matrix M ∈ R^{n×n} with entries m_ij is said to be reducible if there exist nonempty, disjoint subsets J_1, J_2 ⊆ {1, . . . , n} such that J_1 ∪ J_2 = {1, . . . , n} and m_ij = 0 for all (i, j) ∈ J_1 × J_2. If M is not reducible, then it is said to be irreducible. If M is additionally nonnegative, then M is irreducible if and only if for each i, j ∈ {1, 2, . . . , n} there exists k ∈ N such that the (i, j)th entry of M^k is positive. A nonnegative matrix M is primitive if there exists k ∈ N such that M^k ≫ 0, and the index of primitivity is the smallest such k. The matrix M is called Metzler (also known as quasi-positive or essentially nonnegative) if every off-diagonal entry is nonnegative (see, for example, [39, Chap. 6]). We let r(M) and α(M) denote the spectral radius and spectral abscissa of M, respectively, which we recall are given by r(M) = max{|λ| : λ an eigenvalue of M} and α(M) = max{Re λ : λ an eigenvalue of M}; see, for example, [43, p. 172].
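For intuition, irreducibility of a nonnegative matrix M can be checked numerically via the classical criterion that M ∈ R^{n×n} with M ≥ 0 is irreducible if and only if (I + M)^{n−1} ≫ 0. The following sketch implements this test in pure Python; the two small matrices are illustrative choices of ours, not taken from the paper.

```python
def matmul(X, Y):
    # plain-Python matrix product
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def is_irreducible(M):
    # M (nonnegative) is irreducible iff (I + M)^(n-1) is entrywise positive
    n = len(M)
    P = [[(1.0 if i == j else 0.0) + M[i][j] for j in range(n)] for i in range(n)]
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(n - 1):
        Q = matmul(Q, P)
    return all(Q[i][j] > 0 for i in range(n) for j in range(n))

A_cyc = [[0.0, 1.0], [1.0, 0.0]]  # cyclic structure: irreducible
A_tri = [[1.0, 1.0], [0.0, 1.0]]  # upper triangular: reducible
```

For the cyclic matrix the directed graph is strongly connected, so the test succeeds; for the triangular matrix stage one never receives input from stage two, and the test fails.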

2. Adaptive control schemes for pest management.
In the spirit of high-gain. We assume a discrete-time, age- or stage-structured model for the pest population, which, when the state space is finite-dimensional, is called a matrix Population Projection Model (PPM) in the ecology literature. We refer the reader to the monograph [44] for more background on matrix PPMs. Specifically, the structured population x, effect of the pest control v, and measurement y are modelled as

(2.1) x(t + 1) = Ax(t) + v(t), x(0) = x^0, y(t) = Cx(t), t ∈ N_0,

where n, p ∈ N, A ∈ R^{n×n}_+ is a population projection matrix, and C ∈ R^{p×n}_+ records the stages (or combinations of stages) measured. In pest control applications, r(A) > 1, so that, uncontrolled, the pest population increases exponentially, at least asymptotically. The variable y(t) in (2.1) denotes some measured portion of the population and is the variable that is available to inform management strategies; in applications we do not expect to have exact knowledge of the state variables x(t) at all times owing to, for example, the difficulty or unreliability of measuring certain stage-classes.
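To illustrate the open-loop model (2.1), the sketch below simulates x(t + 1) = Ax(t) for an illustrative three-stage projection matrix of our own choosing (not one from the paper) with r(A) > 1, so that the uncontrolled pest population grows.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Hypothetical 3-stage projection matrix: fecundities in the first row,
# survival/growth below the diagonal, stasis of the adult stage on the diagonal.
A = [[0.0, 2.0, 6.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.6, 0.7]]
C = [[0.0, 0.0, 1.0]]   # only the adult stage-class is measured

x = [1.0, 1.0, 1.0]
totals = []
for t in range(20):
    totals.append(sum(x))   # total abundance across stage-classes
    x = matvec(A, x)
```

Since r(A) > 1 for this matrix, the total abundance grows by roughly a constant factor per time-step once the stage distribution settles.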
For a biologically realistic and meaningful model, the control effect v satisfies the following three properties: (i) v is targeted in that it affects only specific life-stages; (ii) for those stages targeted, v is proportional to the pest population present; (iii) v is a (possibly nonlinear but) increasing function of some applied control effort u.
Specifically, we take the control effect to be of the form

(2.2) v(t) = −BΦ(u(t))Fx(t), t ∈ N_0,

for some m ∈ N, where B ∈ R^{n×m}_+ and F ∈ R^{m×n}_+, and u(t) ∈ R_+ denotes the applied control effort. The B in (2.2) is determined by which age- or stage-classes are affected by the pest control, and F is determined by the underlying biology. The inclusion of the function Φ : R_+ → R^{m×m}_+ in (2.2) models the efficacy on various life stages of the control effort at time-step t, i.e., u(t), which, for example, could denote the mass of active pesticide ingredient used. For what follows we record an assumption on functions Φ arising in (2.2): (A1) Φ : R_+ → R^{m×m}_+ has nonzero components belonging to K and mapping R_+ → [0, 1]. We note that if Φ satisfies (A1), then the limit Φ_∞ := lim_{ω→∞} Φ(ω) exists.
Example 2.1. If a pest management strategy (2.2) targets only the first stage-class, usually denoting juveniles in an animal model, seeds or seedlings in a plant model, or egg or larval stages in an insect model, then m = 1 and B = (1, 0, . . . , 0)^T in (2.2). The row vector F in (2.2) has entries taken from the corresponding entries in the first row of A. Possible Φ : R_+ → R_+ are sigmoid-type functions taking values between zero and one, parameterized by positive constants k_1, k_2, k_3. We comment further on possible choices of Φ in applications in section 4.
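As a hedged illustration, one function consistent with (A1) is the saturating exponential below; the specific form and the constant k1 are our assumptions, standing in for the sigmoid-type functions with constants k_1, k_2, k_3 mentioned above.

```python
import math

def phi(w, k1=1.0):
    # Hypothetical efficacy function: continuous, strictly increasing,
    # phi(0) = 0, and phi(w) -> 1 as w -> infinity, as (A1) requires.
    return 1.0 - math.exp(-k1 * w)
```

A Hill-type response such as w^2 / (k2 + w^2) would serve equally well; any member of K mapping into [0, 1] fits the assumption.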
The combination of (2.1) and (2.2) yields

(2.4) x(t + 1) = (A − BΦ(u(t))F)x(t), x(0) = x^0, y(t) = Cx(t), t ∈ N_0,

where the sequence of control efforts u is still to be determined. The premise of the present contribution is that A, B, F, and Φ are uncertain or unknown, and so it is generally not possible to determine a constant choice û > 0 in (2.2) or (2.4) such that

(2.5) r(A − BΦ(û)F) < 1,

which, in pest management applications, would lead to eradication of the pest. However, we seek to exploit the idea that the choice of pest management (2.2) is able to eradicate the pest, at least for a sufficiently large, but crucially unknown, control effort. To capture this property mathematically, we record a second assumption, which pertains to the triple (A, B, F) ∈ R^{n×n}_+ × R^{n×m}_+ × R^{m×n}_+ that appears in (2.4) and a function Φ : R_+ → R^{m×m}_+ satisfying (A1): (A2) for every ω ∈ R_+, the matrix A − BΦ(ω)F is nonnegative and irreducible, and lim_{ω→∞} r(A − BΦ(ω)F) < 1. Assumption (A2) is in the spirit of "high-gain" in the control engineering literature as it posits that when u(t) (the "gain") is sufficiently large, the "closed loop" matrix A − BΦ(u(t))F is stable. Assuming that A − BΦ(ω)F is irreducible for every positive, finite ω is not restrictive for reasonable ecological models [45]; in other words, the directed graph (see, for example, [39, Definition 2.4, p. 29]) associated with the population projection matrix A − BΦ(ω)F is strongly connected for every ω. Recalling that the directed graph of a population projection matrix models the life-cycle of a stage-structured population (see [44, p. 57]), loosely speaking, a consequence of the irreducibility assumption in (A2) is that effects of the pest management strategy are (eventually) felt by every stage-class (although they perhaps target only a single stage-class).
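Assumption (A2) can be probed numerically. For illustrative A, B, F, and Φ of our own choosing (control targeting recruitment, with F equal to the first row of A as in Example 2.1), the spectral radius of A − BΦ(ω)F, estimated here by power iteration (which converges for primitive nonnegative matrices), falls below one as the effort ω grows.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def spectral_radius(M, iters=500):
    # power iteration with max-norm normalization; valid for primitive M >= 0
    v = [1.0] * len(M)
    r = 1.0
    for _ in range(iters):
        w = matvec(M, v)
        r = max(abs(c) for c in w)
        v = [c / r for c in w]
    return r

def closed_loop(w):
    # A - B Phi(w) F with B = e1 and F = first row of A: the recruitment
    # row is scaled by (1 - phi). All numbers are illustrative assumptions.
    phi = 1.0 - math.exp(-w)
    A = [[0.0, 2.0, 6.0],
         [0.5, 0.0, 0.0],
         [0.0, 0.6, 0.7]]
    return [[A[i][j] * (1.0 - phi) if i == 0 else A[i][j] for j in range(3)]
            for i in range(3)]

r_open = spectral_radius(closed_loop(0.0))    # zero effort: unstable
r_high = spectral_radius(closed_loop(10.0))   # large effort: stable
```

For this example r(A) ≈ 1.7 without control, while for large ω the closed-loop spectral radius approaches the adult stasis rate 0.7, so (A2) holds.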
To exploit the biologically reasonable assumptions (A1) and (A2) in pest management we propose an adaptive control scheme:

(2.6) x(t + 1) = (A − BΦ(u(t))F)x(t), x(0) = x^0, y(t) = Cx(t), u(t + 1) = u(t) + ψ(‖y(t)‖), u(0) = u_0, t ∈ N_0,

where u_0 ≥ 0 and ψ ∈ K are design parameters. The term u_0 is the initial control effort, and the function ψ provides a recipe for converting the currently measured pest abundance y(t) into an increase of control effort at the next time-step and dictates "how fast" the control effort u increases. In practice, ψ will depend on the units of the (currently dimensionless) control effort u, such as mass or volume. The upshot is that the control scheme (2.6) continues to increase the control effort at time-step t, i.e., u(t), so long as the measured variable y(t) is nonzero. Our first result demonstrates that, on the basis of only (A1) and (A2) holding and a minor constraint on ψ, the adaptive control scheme (2.6) is globally asymptotically stable and the rate of convergence of the state to zero is exponential. In particular, the nondecreasing sequence of control efforts u converges.

Theorem 2.3. Given the adaptive control scheme (2.6), assume that Φ satisfies (A1), that (A, B, F) and Φ satisfy (A2), that ψ ∈ K, and that C ∈ R^{p×n}_+ with C ≠ 0. For positive x^0 and nonnegative u_0, let (x, u) denote the solution of (2.6). Then, there exist M > 0 and γ ∈ (0, 1) such that

(2.7) ‖x(t)‖ ≤ Mγ^t ‖x^0‖ for all t ∈ N_0.

If additionally ψ has the property that

(2.8) Σ_{t∈N_0} ψ(aγ^t) < ∞ for all a > 0 and γ ∈ (0, 1),

then u converges, and its limit u_∞ ≥ 0 (which depends on x^0, u_0, and ψ) is stabilizing, that is,

(2.9) r(A − BΦ(u_∞)F) < 1.

Proof. For any nonnegative sequence u, the state x satisfies 0 ≤ x(t) ≤ A^t x^0 for all t ∈ N_0, and thus (2.7) trivially holds if r(A) < 1. We therefore assume that r(A) ≥ 1. Assumption (A2) now implies that Φ ≠ 0, and thus, by (A1), Φ is componentwise strictly increasing in its nonzero components, so that

(2.10) A − BΦ(ω_2)F ≤ A − BΦ(ω_1)F with A − BΦ(ω_2)F ≠ A − BΦ(ω_1)F for all 0 ≤ ω_1 < ω_2.

In the arguments that follow we shall make use of monotonicity of the spectral radius:

(2.11) if 0 ≤ M ≤ N with M ≠ N and M or N irreducible, then r(M) < r(N).

Define κ : R_+ → R by κ(ω) := r(A − BΦ(ω)F) − 1. That κ is continuous follows from the continuity of the spectral radius and the continuity of the nonzero components of Φ. The monotonicity of κ follows from (2.10) and (2.11).
Clearly, κ(0) = r(A) − 1 ≥ 0 and, by assumption (A2), lim_{ω→∞} κ(ω) < 0. By continuity and monotonicity of κ, there exists u* > 0 such that A* := A − BΦ(u*)F satisfies r(A*) = 1. We consider two exhaustive cases: either (a) u(t) ≤ u* for all t ∈ N_0, or (b) there exists some τ ∈ N such that u(τ) > u*. Seeking a contradiction, we assume that (a) holds. By its construction in (2.6), u is a nondecreasing sequence that is assumed to be bounded from above and hence convergent to some z ≤ u*. Consequently, the sequence (ψ(‖y(j)‖))_{j∈N_0} is summable, that is, belongs to ℓ^1, as necessarily Σ_{j=0}^∞ ψ(‖y(j)‖) = lim_{t→∞} (u(t) − u_0) = z − u_0 < ∞, implying that

(2.12) ψ(‖y(j)‖) → 0 as j → ∞.
Since ψ ∈ K, the convergence in (2.12) yields that

(2.13) ‖y(j)‖ = ‖Cx(j)‖ → 0 as j → ∞.

The assumption that u(t) ≤ u* for every t ∈ N_0 and (2.10) together imply that A − BΦ(u(t))F ≥ A − BΦ(u*)F = A*, and thus

(2.14) x(t + 1) ≥ A*x(t) ≥ 0 for all t ∈ N_0.

From the convergence in (2.13) and the difference inequality (2.14) we infer the following: the combined properties that C and x^0 are positive and that A* is irreducible with r(A*) = 1 yield the contradiction ‖y(t)‖ ≥ ‖CA*^t x^0‖ ↛ 0 as t → ∞. We deduce that (b) holds, and so there exists τ ∈ N such that

(2.15) u(t) ≥ u(τ) > u* for all t ≥ τ,

as u is a nondecreasing sequence. Invoking (2.10) again, it follows from (2.15) that

(2.16) A_τ := A − BΦ(u(τ))F ≤ A* with A_τ ≠ A*,

and, as A − BΦ(u(τ))F is irreducible, (2.11) implies that

(2.17) r(A − BΦ(u(τ))F) < r(A*) = 1.

Hence, for t ∈ N, t > τ, we estimate 0 ≤ x(t) ≤ (A − BΦ(u(τ))F)^{t−τ} x(τ), from which the bound (2.7) readily follows. To see that u converges, we prove that u is bounded. For T ∈ N we have that u(T) = u_0 + Σ_{j=0}^{T−1} ψ(‖y(j)‖), which is bounded in T by (2.7) and the additional property imposed on ψ, and the claim follows.

The irreducibility assumption in (A2) cannot be dropped in general, as the following situation illustrates. Here f_1 > 1 and f_2 ∈ (0, 1), and A − δBF is reducible for every δ ∈ [0, 1). Furthermore, y(t) = f_2^t x_2^0 → 0 exponentially as t → ∞, and thus u converges. However, for all x_1^0 > 0, u_0, x_2^0 ≥ 0, and f_2 ∈ (0, 1), there exists Φ : R_+ → [0, 1), Φ ∈ K, such that f_1(1 − Φ(u_∞)) > 1, and thus u_∞ is not stabilizing.

(iv) If x^0 = 0, then x(t) = 0 for all t ∈ N_0, and hence (2.7) trivially holds. In this case, u(t) = u_0 for all t ∈ N, and so (0, u_0) is an equilibrium of (2.6) for all u_0 ∈ R_+. Note that u_∞ = u_0 need not satisfy (2.9). However, one striking property is that, for all nonzero x^0 and any u_0 ≥ 0, the limiting control effort u_∞ is stabilizing.
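To make Theorem 2.3 concrete, the following simulation of the scheme (2.6) uses an illustrative three-stage projection matrix, efficacy Φ(w) = 1 − e^{−w}, gain ψ(s) = s/2, and measurement of the third stage-class only; all of these choices are our assumptions for illustration, not the paper's. The control effort ratchets up while pests are measured, and the state decays once the unknown stabilizing effort level is exceeded.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[0.0, 2.0, 6.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.6, 0.7]]

x = [10.0, 10.0, 10.0]   # initial pest abundances per stage-class, x^0
u = 0.0                  # initial control effort u_0
pop = []
for t in range(80):
    pop.append(sum(x))
    y = x[2]                       # measured adult abundance, y = Cx
    phi = 1.0 - math.exp(-u)       # efficacy Phi(u(t)), satisfies (A1)
    xn = matvec(A, x)
    xn[0] *= (1.0 - phi)           # x(t+1) = (A - B Phi(u) F) x(t), control on recruitment
    x = xn
    u += 0.5 * y                   # u(t+1) = u(t) + psi(||y(t)||), psi(s) = s/2
```

After a short transient the closed-loop spectral radius drops below one and the population decays exponentially, while the nondecreasing effort u converges.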

Extensions.
Here we present directions in which the simple adaptive control model (2.6) may be extended. The motivation for doing so is, first, to show how the model (2.6) may be tailored to accommodate some of the nuanced scenarios likely to be encountered in pest management (Propositions 2.5 and 2.7) and, second, to demonstrate how the ideas behind (2.6) may be extended to different classes of model (Propositions 2.11 and 2.14).
A disadvantage of the scheme (2.6) is that the limiting control effort u ∞ ensured by Theorem 2.3 may be impractically large, becoming prohibitively expensive or risking environmental damage. Therefore, the first two extensions proposed are alternative strategies that seek to reduce a pest population with a smaller control effort.
A rate-limited adaptive control scheme. Consider first the adaptive control scheme

(2.18a) x(t + 1) = (A − BΦ(u(t))F)x(t), x(0) = x^0, y(t) = Cx(t), t ∈ N_0,
(2.18b) u(t + 1) = u(t) + ψ(‖y(t)‖) if u(t) ≤ θ(t), u(t + 1) = u(t) if u(t) > θ(t), u(0) = u_0,

where the strictly increasing, unbounded sequence θ is an additional design parameter. The terms in (2.18a) and their interpretation are the same as those in (2.6). The scheme (2.18) differs from (2.6) by the inclusion of θ in the update law for u in (2.18b). The interpretation of (2.18b) is that θ acts as a bound on the size of permitted control actions. While u(t) is no greater than θ(t), the control effort of the adaptive scheme (2.18) is updated in the same manner as that in (2.6). When u(t) is larger than θ(t), then u(t + 1) = u(t), that is, u(t) is maintained at its current level. In effect, the inclusion of θ further limits the rate at which u may increase and in applications may reflect economic, capacity, or legislative constraints. Simulations of the rate-limited adaptive control scheme (2.18) are contained in Figure 3, where it is demonstrated that, although the exponential stabilization may take longer than under (2.6), the rate of increase of the control effort is further controlled and the limiting control effort u_∞ may be smaller than that produced by (2.6).
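A sketch of the rate-limited scheme (2.18), under the same illustrative matrices and gain as before (our assumptions), needs only one extra line: the effort is updated only while it sits below the cap sequence θ.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[0.0, 2.0, 6.0], [0.5, 0.0, 0.0], [0.0, 0.6, 0.7]]
theta = lambda t: 0.2 * (t + 1)   # strictly increasing, unbounded cap on u

x = [10.0, 10.0, 10.0]
u = 0.0
pop = []
for t in range(150):
    pop.append(sum(x))
    y = x[2]
    phi = 1.0 - math.exp(-u)
    xn = matvec(A, x)
    xn[0] *= (1.0 - phi)
    x = xn
    if u <= theta(t):              # (2.18b): increase u only while below the cap
        u += 0.5 * y
```

Once u overshoots θ(t), it is frozen until θ catches up; by then the pest abundance, and hence the drive ψ(‖y‖), is small, so the limiting effort can be lower than under (2.6).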
Proposition 2.5. Given the rate-limited adaptive control system (2.18), assume that Φ satisfies (A1), that (A, B, F) ∈ R^{n×n}_+ × R^{n×m}_+ × R^{m×n}_+ and Φ satisfy (A2), that ψ ∈ K, that θ is a strictly increasing, unbounded sequence, and that C ∈ R^{p×n}_+ with C ≠ 0. Then, for every positive x^0 and nonnegative u_0, the conclusions of Theorem 2.3 hold.
Proof. The proof is very similar to that of Theorem 2.3, and so we provide only an outline. Letting u* > 0 be such that A* := A − BΦ(u*)F has r(A*) = 1, we again seek a contradiction and assume that u(t) ≤ u* for all t ∈ N_0. Note that u defined by (2.18b) is still a nondecreasing sequence, and the assumption that it is bounded implies that it converges, say with limit z ≤ u*. The assumptions that θ is unbounded and strictly increasing imply that there exists s ∈ N such that z < θ(s) < θ(t) for all t ∈ N, t > s. Therefore, for such t, u(t) ≤ z ≤ θ(t), and hence the sequence (ψ(‖y(t)‖))_{t∈N_0} is summable, as from time s onwards u is updated as in (2.6) and is bounded. The proof now follows that of Theorem 2.3, joining at the convergence in (2.12).
Remark 2.6. The conclusions of Proposition 2.5 do not hold in general if the increasing threshold θ for u is bounded or replaced by a constant. Although θ may increase arbitrarily slowly, loosely speaking, the smaller θ is and the slower it increases, the larger we might expect transient growth of the state x to be.
A neutral-zone adaptive control scheme. The second extension is

(2.19a) x(t + 1) = (A − BΦ(u(t))F)x(t), x(0) = x^0, y(t) = Cx(t), t ∈ N_0,
(2.19b) u(t + 1) = u(t) + ψ(‖y(t)‖) if ‖y(t)‖ ≥ L_1, u(t + 1) = u(t) if L_2 ≤ ‖y(t)‖ < L_1, u(t + 1) = μu(t) if ‖y(t)‖ < L_2, u(0) = u_0,

where L_1 ≥ L_2 > 0 and μ ∈ (0, 1) are additional design parameters. The terms in (2.19a) and their interpretation are the same as those in (2.6). The interpretation of the update law for u in (2.19b) is that when the output y(t) is larger than some prescribed level, meaning L_1 ≤ ‖y(t)‖, the control effort u increases as before. When the output is deemed small enough (smaller than the threshold L_2), the control effort decreases geometrically. Between L_1 and L_2 a constant control effort is maintained: there is no adaptation of u, hence the nomenclature "neutral-zone." In control engineering the terms "dead-zone" or "deadband" are also sometimes used. The motivation for (2.19) is to trade off the cost of increasing the control effort u against the possible increase in observed pest population y.
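The neutral-zone logic of (2.19) can be sketched as follows, again with illustrative matrices, thresholds, and gain of our own choosing. In line with the discussion below, one should expect bounded oscillation rather than eradication: the effort is wound down whenever the pest appears scarce, allowing regrowth.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[0.0, 2.0, 6.0], [0.5, 0.0, 0.0], [0.0, 0.6, 0.7]]
L1, L2, mu = 5.0, 1.0, 0.8        # thresholds and geometric decay factor (assumed)

x = [10.0, 10.0, 10.0]
u = 0.0
pop, effort = [], []
for t in range(300):
    pop.append(sum(x))
    effort.append(u)
    y = x[2]
    phi = 1.0 - math.exp(-u)
    xn = matvec(A, x)
    xn[0] *= (1.0 - phi)
    x = xn
    if y >= L1:
        u += 0.5 * y              # outbreak observed: increase effort
    elif y < L2:
        u *= mu                   # pest scarce: wind the effort down
    # otherwise (neutral zone): hold u constant
```

Both the abundance and the effort remain bounded, consistent with the result stated next, but neither need converge.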
Simulations of the neutral-zone adaptive control scheme (2.19) are contained in section 4. Proposition 2.7. Given the neutral-zone adaptive control system (2.19), under the assumptions of Theorem 2.3, both the state x and the control effort u of every solution are bounded. The proof of Proposition 2.7 is elementary, but tedious, and hence is relegated to the appendix.
Remark 2.8. The proof of Proposition 2.7 is in fact independent of how u in (2.19b) is defined to decay when ‖y(t)‖ < L_2. Consequently, the geometric decay of u proposed in (2.19b) could be replaced by any scheme with u(t) → 0 when ‖y(t)‖ ≤ L_2. Furthermore, Proposition 2.7 remains true in the case that the update law u(t + 1) = u(t) is omitted, by taking L_1 = L_2. Although the above result guarantees boundedness of the state and control effort, it does not give any indication of what these bounds are or how they depend on L_1, L_2, or μ. Again, for practical applications, one would need to trade off the cost of eradicating a pest against the cost of not (sufficiently) reducing pest abundance. For example, when the lower threshold L_2 in (2.19b) is small, then the control effort u is only reduced once a pest abundance less than L_2 is observed, and until that point u is either increasing or kept at a fixed level, which may be costly.
Input-to-state stability. The models considered thus far have overlooked external signals acting on the state or output. In the context of pest management the former could denote previously unmodelled immigration, and the latter measurement or sampling error. To incorporate such a situation the model (2.6) becomes

(2.20) x(t + 1) = (A − BΦ(u(t))F)x(t) + d_1(t), x(0) = x^0, y(t) = Cx(t) + d_2(t), u(t + 1) = u(t) + ψ(‖y(t)‖), u(0) = u_0, t ∈ N_0,

where u_0 ≥ 0 and ψ ∈ K are design parameters and d_1 and d_2 are external signals. We shall assume that d_1 and d_2 are (locally) bounded and respect the nonnegativity constraints of state and output. Without further assumptions on the class of disturbances, however, we should not expect convergence of the state x or control effort u of (2.20). The following result demonstrates that the state x of (2.20) admits an input-to-state stability (ISS)-type estimate. We refer the reader to [46] for more background and information on ISS.

Proposition 2.9. Given the disturbed adaptive control system (2.20), assume that the hypotheses of Theorem 2.3 hold and that the disturbance condition (2.21) holds for some δ > 0. Then, for all positive x^0 and nonnegative u_0, and all disturbance signals d_1 and d_2, the ISS-type estimate (2.22) holds. The constant M_2 may be chosen to depend only on x^0 and δ.

Proof. We restrict to the case that r(A) ≥ 1 (the result is immediate otherwise). Let u* > 0 be such that A* := A − BΦ(u*)F satisfies r(A*) = 1. Since r(A*) = 1, A* is irreducible, δ > 0, and C ≠ 0, the summation on the right-hand side of (2.23) is greater than u* at some time τ, independently of u_0, d_1, and d_2. Then, since u is nondecreasing, u(t) ≥ u(τ) > u* for all t ≥ τ. Arguing as in the proof of Theorem 2.3, writing A_τ := A − BΦ(u(τ))F, we obtain the estimate r(A_τ) < 1, and thus, for t ≥ τ + 1, an exponentially weighted bound on ‖x(t)‖ holds for positive constants N_1, N_2, N_3 depending only on A and A_τ. For t ∈ {1, 2, . . . , τ} we may estimate ‖x(t)‖ directly, and bounding the right-hand side of (2.26) yields the claim for t ∈ {1, 2, . . . , τ}.

Consider, for instance, a measured output of the form y(t) = (1 + ε(t))Cx(t), where ε is an unknown sequence of measurement errors with the property that ε(t) > ε_0 > −1 for some constant ε_0. In this case, the condition (2.21) holds with δ = 1 + ε_0 > 0.
(iii) The term robustness is used in control to quantify how much uncertainty (in some sense) a control scheme may tolerate and still function qualitatively as intended. The ISS estimate (2.22) is a robustness property with respect to the additive disturbances as in (2.20). There are other notions of robustness in adaptive control, and we refer the reader to [19], in which a framework is constructed for describing and analyzing robustness of adaptive control schemes (in the continuous-time case).
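A numerical sketch of the disturbed scheme (2.20) illustrates the ISS-type behaviour: under a constant (illustrative, assumed) immigration disturbance d_1 into the first stage-class, the state neither converges to zero nor diverges, but settles near a level determined by the disturbance, while the control effort keeps increasing.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[0.0, 2.0, 6.0], [0.5, 0.0, 0.0], [0.0, 0.6, 0.7]]
d1 = [1.0, 0.0, 0.0]              # constant unmodelled immigration (assumed)

x = [10.0, 10.0, 10.0]
u = 0.0
pop = []
for t in range(300):
    pop.append(sum(x))
    y = x[2]
    phi = 1.0 - math.exp(-u)
    xn = matvec(A, x)
    xn[0] = xn[0] * (1.0 - phi) + d1[0]   # disturbed dynamics as in (2.20)
    xn[1] += d1[1]
    xn[2] += d1[2]
    x = xn
    u += 0.5 * y
```

Since y is bounded away from zero, ψ(‖y‖) is not summable and u grows without bound, consistent with the remark above that convergence of u should not be expected; the state, however, remains bounded in terms of the disturbance.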

Infinite-dimensional state space.
Here we demonstrate that in certain circumstances the assumption of a finite-dimensional state space may be relaxed. A key ingredient in the proofs so far is strict monotonicity of the spectral radius with respect to the ordering of operators; that is, that

(2.28) 0 ≤ A_1 ≤ A_2 with A_1 ≠ A_2 implies r(A_1) < r(A_2)

holds. When A_1, A_2 : R^n_+ → R^n_+ and R^n is equipped with the usual partial ordering 0 ≤ x ⟺ x ∈ R^n_+, then irreducibility of A_2 is sufficient for (2.28) (indeed, see (2.11)). The monograph [38] contains nonequivalent sufficient conditions for (2.28) depending on what one is prepared to assume about the (possibly infinite-dimensional) Banach space X, its partial ordering, and the operators A_1, A_2 : X → X. We appeal to the results and terminology of [38], noting that the authors there use the term positive when we would instead use nonnegative.
Let X denote an ordered real Banach space, so that X is equipped with a partial order ≤ (also ≥) that respects vector space addition and multiplication by nonnegative scalars. The positive cone C induced by (X, ≥) is the set of x ∈ X such that x ≥ 0 and is a closed, convex set (so that if x, y ∈ C and δ ≥ 0, then x + y, δx ∈ C) with the property that −x ∈ C implies that x = 0. The cone C is called reproducing if X = C − C, meaning that every z ∈ X may be written as z = x_1 − x_2 with x_1, x_2 ∈ C. For real Banach spaces X_1, X_2 with respective positive cones C_1, C_2, a bounded linear operator T : X_1 → X_2 is called positive if T C_1 ⊆ C_2. Consider now the adaptive control system (2.6), where A : X → X, B : R^m → X, F : X → R^m, and C : X → R^p are bounded, positive operators. For Φ satisfying (A1), we record the following assumption: Proof. The proof is identical to that of Theorem 2.3 once (2.17) is established. Under our assumptions, this follows from [38, Theorems 16.2 and 16.3] with A in those results taken as A_τ := A − BΦ(u(τ))F, δ = 1, and y_0 = w, the positive eigenvector of A* = A − BΦ(u*)F corresponding to r(A*) = 1, the existence of which is ensured by [38, Theorem 11.5]. A careful argument using A_τ ≤ A*, A_τ ≠ A* shows that A_τ w ≤ w with A_τ w ≠ w.
Remark 2.12. The assumptions that we have made in formulating Proposition 2.11 are motivated by the class of Integral Projection Models (IPMs) to which we wish the results to apply. An IPM is a discrete-time linear difference equation on the natural state space L^1(Ω) specified by the integral operator

(2.30) (Af)(ξ) = ∫_Ω k(ξ, ζ)f(ζ) dζ, ξ ∈ Ω,

for some nonnegative-valued kernel k, where, for simplicity say, Ω is the closure of some bounded set in R^n, n ∈ N. IPMs were introduced by [47] (see also [48,49,50]) as a tool for population modelling where the discrete age- or stage-class variable of a matrix model is replaced by a continuous variable, such as stem width of a plant species.
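A standard way to compute with an IPM of the form (2.30) is to discretize the integral operator by a midpoint rule, which yields a nonnegative matrix, recovering a finite-dimensional PPM. The Gaussian-type growth kernel below is purely illustrative, an assumption of ours rather than a kernel from the paper.

```python
import math

def ipm_matrix(kernel, lo, hi, n):
    # midpoint-rule discretization of (A f)(xi) = integral of k(xi, zeta) f(zeta)
    h = (hi - lo) / n
    mids = [lo + (i + 0.5) * h for i in range(n)]
    # entry (i, j): contribution of size-class j to size-class i, times mesh width
    return [[kernel(mids[i], mids[j]) * h for j in range(n)] for i in range(n)]

def kernel(xi, zeta):
    # hypothetical growth/survival kernel: individuals of size zeta
    # move to sizes concentrated near zeta + 0.5
    return 2.0 * math.exp(-((xi - zeta - 0.5) ** 2) / 0.1)

A = ipm_matrix(kernel, 0.0, 4.0, 40)
```

The resulting matrix is componentwise nonnegative, so the discrete-time theory of section 2 applies directly to the discretized model.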

Continuous-time.
Here we demonstrate that the continuous-time counterpart of Theorem 2.3 holds. In continuous time the adaptive control system (2.6) becomes

(2.32) ẋ(t) = (A − BΦ(u(t))F)x(t), x(0) = x^0, y(t) = Cx(t), u̇(t) = ψ(‖y(t)‖), u(0) = u_0, t ∈ R_+,

where A ∈ R^{n×n} is Metzler. To motivate our insistence that A in (2.32) is Metzler, we recall that for M ∈ R^{n×n} the continuous-time linear system of differential equations

(2.33) ż(t) = Mz(t), z(0) = z^0,

has unique solution z(t) = e^{Mt}z^0 for t ∈ R_+, where e^{Mt} denotes the matrix exponential of Mt. It is well known that the solution z of the differential equation (2.33) is (componentwise) nonnegative for all z^0 ≥ 0 if and only if M is a Metzler matrix. In the applied context of pest management, for a meaningful model we naturally require that the x component of a solution of (2.32) is nonnegative. Returning to (2.32), as before the function ψ ∈ K and initial control effort u_0 ≥ 0 are design parameters. For continuous-time models, Φ : R_+ → R^{m×m}_+ is still assumed to satisfy (A1), and the analogous assumption to (A2) on (A, B, F) and Φ is the following: (A3) for every ω ∈ R_+, the matrix A − BΦ(ω)F is Metzler and irreducible, and lim_{ω→∞} α(A − BΦ(ω)F) < 0. The interpretation of the continuous-time pest model and control action in (2.32) is the same as that in (2.6). Unlike difference equations, when considering differential equations, existence of (global) solutions must be addressed. So as to not detract from the purpose and flow of the present section, the proof of the next lemma is postponed to the appendix. The main result of this section is Proposition 2.14, which mirrors Theorem 2.3.
Proposition 2.14. Given the adaptive control system (2.32), assume that Φ satisfies (A1), that (A, B, F) and Φ satisfy (A3), that ψ ∈ K, and that C ∈ R^{p×n}_+ with C ≠ 0. For positive x^0 and nonnegative u_0, let (x, u) denote the solution of (2.32). Then, there exist M > 0 and γ > 0 such that

(2.34) ‖x(t)‖ ≤ Me^{−γt}‖x^0‖ for all t ∈ R_+.

If additionally ψ has the property required in Theorem 2.3, then u converges and its limit u_∞ ≥ 0 (which depends on x^0, u_0, and ψ) is stabilizing, that is,

(2.36) α(A − BΦ(u_∞)F) < 0.

The proof of Proposition 2.14 is very similar to that of Theorem 2.3, mutatis mutandis, and hence is relegated to the appendix. Here we instead provide two remarks. The first contains further comments on the adaptive control scheme (2.32) and compares and contrasts it with other simple adaptive controllers in the literature, such as [31, Lemma 3.4] or [26, Theorems 3.5 and 3.11]. That said, there are two key differences, which means that, in the context of positive state systems, Proposition 2.14 is (to the best of our knowledge) novel. First, in the aforementioned results it is required that BF (in our notation) has simple null structure (meaning that R^n = im(BF) ⊕ ker(BF)), which by [26, Proposition 3.9] is equivalent to FB being invertible and is an assumption that cannot be dropped in general [26, Example 3.10]. When B = b, F = f ∈ R^n, the matrix bf^T having simple null structure is equivalent to the triple (A, b, f^T) having relative degree one. This assumption is not required for Proposition 2.14 and is replaced by the irreducibility assumption in (A3). For example, a triple with entries a_i > 0 for each i ∈ {1, 2, 3, 4}, together with R_+ ∋ x ↦ Φ(x) = 1 − e^{−x}, may satisfy (A3) even though BF does not have simple null structure. Second, in Proposition 2.14, the measured variable y = Cx and the output feedback Fx used in (2.32) are not equal in general. This comment applies to the discrete-time system (2.6) as well. (iii) If x^0 = 0, then x(t) = 0 for all t ∈ R_+, and hence (2.34) trivially holds, but u(t) = u_0 for all t ∈ R_+ and thus u_∞ = u_0 need not satisfy (2.36). However, if x^0 ≠ 0, then the limiting control effort u_∞ satisfies (2.36) for every u_0 ≥ 0. See also Remark 2.4(iv).
It is known that a (continuous-time) high-gain adaptive controller applied to a linear model converges to a limit v_∞, and v_∞ is itself generically stabilizing [51,52]. Still, even this assertion admits the case that there are infinitely many x^0 for which v_∞ is not stabilizing. We comment that many continuous-time population models are in fact specified by partial or delay differential equations, such as the McKendrick-von Foerster partial differential equation [56,57] or models for blowfly populations [58], respectively. These models fall beyond the scope of (2.32) and require more sophisticated mathematical treatment, but their semidiscretizations (leading to systems of ordinary differential equations) may result in models of the form (2.32). Thus (2.32) may be seen as a starting point for considering simple adaptive control for continuous-time models that arise in pest management.
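A forward-Euler discretization gives a quick numerical sketch of the continuous-time scheme (2.32). The Metzler matrix, the efficacy Φ(u) = 1 − e^{−u}, the gain ψ(s) = s/2, and the step size are all illustrative assumptions of ours.

```python
import math

# Metzler, irreducible system matrix: negative diagonal, nonnegative couplings
A = [[-1.0, 2.0, 6.0],
     [0.5, -1.0, 0.0],
     [0.0, 0.6, -0.3]]

dt = 0.01
x = [10.0, 10.0, 10.0]
u = 0.0
for k in range(20000):            # integrate up to t = 200
    y = x[2]                      # measured adult density, y = Cx
    phi = 1.0 - math.exp(-u)      # efficacy Phi(u(t))
    # dx/dt = (A - B Phi(u) F) x: the control removes a fraction phi of recruitment
    dx = [A[0][0] * x[0] + (1.0 - phi) * (A[0][1] * x[1] + A[0][2] * x[2]),
          A[1][0] * x[0] + A[1][1] * x[1],
          A[2][1] * x[1] + A[2][2] * x[2]]
    x = [x[i] + dt * dx[i] for i in range(3)]
    u += dt * 0.5 * y             # du/dt = psi(||y||), psi(s) = s/2
```

Because the closed-loop matrix stays Metzler, the Euler iterates remain componentwise nonnegative for this step size, and the state decays to zero once u exceeds the (unknown) stabilizing level, mirroring Proposition 2.14.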

3. A switching adaptive control mechanism for pest management. The approach taken to pest management so far in this paper is predicated on the assumption that the single management strategy v = −BΦ(u)F in (2.1) is capable of eradicating a pest once the control effort u is sufficiently large, formulated as assumption (A2). A high-gain-like adaptive controller for increasing u based on the measured output y is shown to achieve this task in Theorem 2.3. Here we consider the realistic situation wherein the control law v = −BΦ(u)F may not eradicate the pest, irrespective of how large the control effort u is: a biological possibility illustrated in the example below.
Example 3.1. A general 3 × 3 Lefkovitch [59] matrix has the structure

(3.1) A = [ f_1  f_2  f_3 ; g_1  s_2  0 ; 0  g_2  s_3 ],

where the f_i are recruitment rates into the population and the s_i and g_i are rates of stasis within a stage-class and growth to the next stage-class, respectively. Suppose that a pest management strategy targets the third stage-class exclusively, so that B = (0, 0, 1)^T in (2.2). If the leading principal 2 × 2 submatrix of A has spectral radius at least one, then assumption (A2) fails for any F ≥ 0 and Φ satisfying (A1), since this submatrix is unaffected by the control and the spectral radius of a nonnegative matrix is bounded below by that of any principal submatrix. In other words, controlling the third stage-class alone here cannot eradicate the pest.

Motivated by the above example, we assume that access is available to several different pest management strategies, for example, targeting distinct life-cycle stages of a pest. Owing to the problematic uncertainty in both the pest dynamics and the efficacy of any given control strategy, we propose a so-called switching adaptive control scheme (also known as a universal adaptive control scheme). The design is in the spirit of Mårtensson [26,27] or Helmke, Prätzel-Wolters, and Schmid [60] and, as we proceed to demonstrate, cycles through potentially (a priori infinitely) many controllers (that is, courses of management action) before converging in finite time to a stabilizing controller.
We consider the switching system

(3.2a) x(t + 1) = (A − BK(s(t))F)x(t), t ∈ N_0,
(3.2b) y(t) = Cx(t), t ∈ N_0,
(3.2c) s(t + 1) = s(t) + ψ(‖y(t)‖), t ∈ N_0,
(3.2d) x(0) = x^0, s(0) = s_0.

The function K in (3.2a) depends on two chosen sequences: the matrix sequence K = (K(j))_{j∈N} ⊆ R^{m×m} and the strictly increasing, unbounded sequence of nonnegative numbers τ = (τ(j))_{j∈N_0} ⊆ [0, ∞) with τ(0) = 0. The terms K, τ, and s_0 are all design parameters, and, once chosen, K is defined as K(z) := K(j) for z ∈ [τ(j − 1), τ(j)), j ∈ N. We note that the assumptions τ(0) := 0, τ is strictly increasing, and τ is unbounded together imply that K(z) is well defined for all z ≥ 0. Given a solution (x, s) of (3.2a)-(3.2d), it is convenient to introduce the "counting" function i : N_0 → N, defined as i(t) := j when s(t) ∈ [τ(j − 1), τ(j)). It is readily seen that i is a nondecreasing function. Example 3.2.
(i) In an applied pest management context, the interpretations of x(t) in (3.2a) and y(t) in (3.2b) are as defined in (2.1) or (2.4): they denote the pest population and some measured portion of the population, respectively. The scalar variable s(t) in (3.2a) and (3.2c) is an increasing switching variable, as it determines which control law (that is, management strategy) is applied at time-step t ∈ N_0, through the matrix function K and the sequence of switches τ. (ii) To connect (3.2) with the single control strategy (2.6) proposed in section 2, here we might assume that

K(j) = diag(φ_1(g_1(j)), . . . , φ_m(g_m(j))),

where each φ_i ∈ K maps R_+ → [0, 1] and g = (g(j))_{j∈N} ⊆ R^m_+ is a sequence of control efforts whose components are eventually increasing, in the sense of (3.3). For example, if three strategies are available that target stages one, two, and n > 2 of an n-stage model, then, denoting the jth row of A by a_j^T, we may take B with columns e_1, e_2, e_n and F with rows a_1^T, a_2^T, a_n^T. Informally, the control scheme (3.2) tries control effort g(1) via the matrix K(1) for some length of time, determined by τ and the switching signal s. The control effort g(1) may be sufficient to stabilize the pest population. If not, then the switching signal s must increase, and the control scheme (3.2) switches to g(2), then g(3), and so on (although possibly bypassing some g(k) for k ∈ N altogether). The aim is to provide conditions on (3.2) and, in this example, on the progressively "more effective" sequence g such that a stabilizing controller k^* ∈ N is reached but not switched away from, meaning that r(A − BK(k^*)F) < 1 and s(t) is never greater than τ(k^*). Of course, as highlighted by Mårtensson [26, Remark 5, p. 41], cycling through infinitely many controllers is physically impossible. The interpretation of (3.2a), highlighted by Example 3.2, is that there are in fact only finitely many courses of management action, parameterized by an (infinite) sequence of control efforts that are "eventually increasing," in a sense such as in (3.3).
As we demonstrate in our main result of this section, Theorem 3.5, under reasonable assumptions on (3.2), only finitely many control actions are required to stabilize the state to zero (in other words, to eradicate the pest). As one might expect when the model parameters are unknown, the theorem does not say which control effort shall stabilize (3.2), only that one shall, and in finite time.
To that end, the first assumption is an underlying structural assumption: (B1) For each j ∈ N, A_j := A − BK(j)F is nonnegative and primitive, with an index of primitivity ℓ ∈ N that is independent of j.
Intuitively, the interpretation of a common index of primitivity in (B1) is that the location of the zero/nonzero entries of A_j is independent of j (of course, the nonzero entries of A_j are expected to have values that do depend on j). The next result is a technical lemma that is a consequence of assumption (B1), and its proof is relegated to the appendix. Lemma 3.3. Given the switching system (3.2), assume that (A, B, F) and the sequence K satisfy (B1), that C ≠ 0 and F ≠ 0, and let (x, s) denote the solution of (3.2). Then there exists a positive, nondecreasing sequence β satisfying (3.4). The second assumption pertains to the existence of stabilizing controllers: (B2) There exist R ∈ R^{n×n}_+ with r(R) < 1 and a subsequence ρ ⊆ N such that A_{ρ(j)} ≤ R for all j ∈ N. The terms of the subsequence ρ enumerate the stabilizing controllers, and assumption (B2) implies that there are infinitely many such control schemes. The third assumption relates the operator C in (3.2c) to the triple (A, B, F) and the sequence K, and enables the conclusions of Theorem 3.5 below to hold under a weaker assumption on τ: (B3) There exist q ∈ N, S ∈ R^{n×n}_+ with r(S) < 1, and a bounded sequence D = (D_j)_{j∈N} ⊆ R^{n×p}_+ such that for every j ∈ N with j ≥ q, 0 ≤ A_j − D_jC ≤ S. In other words, (B3) means that for every control strategy j after the qth there is a matrix D_j such that the output feedback −D_jy = −D_jCx stabilizes A_j.
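Assumption (B1) is straightforward to verify numerically for a given closed-loop matrix. The sketch below (with illustrative matrix values chosen here; Wielandt's bound (n − 1)² + 1 limits the search) computes the index of primitivity of a nonnegative matrix as the smallest power whose entries are all positive.

```python
# Check primitivity of a nonnegative matrix, as required of A_j in (B1):
# A is primitive iff A^k is entrywise positive for some k <= (n-1)^2 + 1
# (Wielandt's bound), and the index of primitivity is the smallest such k.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def primitivity_index(A):
    """Smallest k with A^k > 0 entrywise, or None if A is not primitive."""
    n = len(A)
    P = [row[:] for row in A]
    for k in range(1, (n - 1) ** 2 + 2):
        if all(v > 0 for row in P for v in row):
            return k
        P = matmul(P, A)
    return None

# Illustrative Lefkovitch-type closed-loop matrix (values hypothetical):
A1 = [[0.7, 0.9, 1.2], [0.45, 0.6, 0.0], [0.0, 0.3, 0.5]]
# A reducible matrix, for contrast: the third class is decoupled entirely.
A2 = [[0.7, 0.9, 0.0], [0.45, 0.6, 0.0], [0.0, 0.0, 0.5]]
```

For A1 every entry of the square is positive, so the index is 2; A2 is not even irreducible, and the search correctly fails.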
Theorem 3.5. Given the switching adaptive control system (3.2), assume that (A, B, F) ∈ R^{n×n}_+ × R^{n×m}_+ × R^{m×n}_+ and K satisfy (B1) and (B2), and that C ∈ R^{p×n}_+ is nonzero. For a given strictly increasing, unbounded sequence τ, let (x, s) denote the solution of (3.2). For all positive x_0 and nonnegative s_0, if τ has the growth property (3.5), where β is defined as in Lemma 3.3, then x(t) → 0 as t → ∞ and s converges in an interval (τ(r − 1), τ(r)] for some r ∈ N with r(A_r) < 1. If (B3) holds and τ has the growth property (3.6), then the same conclusions hold.
Proof. An argument similar to that used in the proof of Theorem 2.3 proves that s cannot converge in an interval (τ(j − 1), τ(j)] for which j ∈ N satisfies r(A_j) ≥ 1.
If s converges in an interval (τ(ρ(j) − 1), τ(ρ(j))] for some j ∈ N, where, we recall, the subsequence ρ is from assumption (B2), then there is nothing to prove. Therefore, it suffices to consider the case that r ∈ N and T ∈ N are such that (3.7) holds, with T = T(r) ≥ ℓ + 2, where ℓ is the common index of primitivity from (B1). In other words, (3.7) means that T ∈ N is the first time-step at which (3.2) has entered the stabilizing control scheme ρ(r). We rewrite the dynamics in (3.2a) as

(3.8)  x(t + 1) = A_{ρ(r)} x(t) + B[K(ρ(r)) − K(i(t))] F x(t).

While t ∈ N is such that s(T + t) ≤ τ(ρ(r)), we have i(T + t) = ρ(r), and thus (3.9) follows from (3.8), by assumption (B2) and as A_{ρ(r)−1} ≤ A. A telescoping series argument shows that (3.10) holds for some constant M_0 > 0 that is independent of r and T. The variation of parameters formula applied to (3.8) yields that

(3.11)  x(T + t) = A_{ρ(r)}^t x(T) + Σ_{j=T}^{T+t−1} A_{ρ(r)}^{T+t−1−j} B[K(ρ(r)) − K(i(j))] F x(j).
Assumption (B1) implies that for every k_1, k_2 ∈ N the operator B(K(k_1) − K(k_2))|_{im F} is bounded, as each z ∈ R^n may be expressed as z = x_1 − x_2 with x_1, x_2 ∈ R^n_+. Consequently, the sequence of operator norms ‖B(K(ρ(r)) − K(i(j)))|_{im F}‖ belongs to ℓ^∞, and thus the second term on the right-hand side of (3.12) may be estimated by the Hölder inequality, with complementary exponents p = 1 for the ‖Fx(j)‖ term and q = ∞ for the other. We infer the estimate (3.13) for some constants c_1, c_2 > 0, where to obtain (3.13) we have used (3.7) and the property that β is nondecreasing. Another telescoping-sequence argument for s applied to (3.13) demonstrates (3.14). Substituting the estimate (3.14) into (3.10) yields constants d_1, d_2 > 0 that are independent of r and T such that (3.15) holds, valid, we recall, for all t ∈ N such that s(T + t) < τ(ρ(r)). Dividing both sides of (3.15) by τ(ρ(r)) yields (3.16), and thus, by assumption (3.5), there exists r ∈ N such that the right-hand side of (3.16) is no greater than one. Therefore, s(T + t) ≤ τ(ρ(r)) for all t ∈ N_0, and hence s is bounded and thus convergent (in a stabilizing interval), as required. It follows now from (3.9) that x(t) → 0 as t → ∞. Now suppose that (B3) holds. Recapping, we know that s cannot converge in an interval (τ(j − 1), τ(j)] for which j ∈ N satisfies r(A_j) ≥ 1. If s converges in (τ(j − 1), τ(j)] with j ≤ q, then there is nothing to prove. Therefore, it suffices to consider r, T ∈ N such that (3.7) and, additionally, (3.17) hold. The inequality (3.17) means that at each time-step t ≥ T − 2 a control law with index greater than q is being applied, and thus assumption (B3) applies. We fix t^* ∈ N, t^* < T − 2, as the first time-step at which the qth control law was applied, and note that t^* is independent of r and T. The argument deriving (3.10) is as before and uses (3.7).
With the quantities introduced via (3.18), since the sequence D is assumed bounded, r(S) < 1, and t^* ∈ N is fixed, we estimate (3.18) in a similar manner to (3.11)-(3.14) to obtain (3.19) for constants d_3, d_4 > 0. Combining (3.10) and (3.19), it follows that (3.20) holds, valid for all t ∈ N_0 such that s(T + t) < τ(ρ(r)). Dividing both sides of (3.20) by τ(ρ(r)) yields (3.21). The growth property (3.6) of τ implies that the right-hand side of (3.21) is no greater than one for r ∈ N_0 sufficiently large. Therefore, s(T + t) ≤ τ(ρ(r)) for all t ∈ N_0, and hence s is bounded and thus convergent (in a stabilizing interval), as required. The proof concludes in the same manner as when only (B2) holds. Remark 3.6.
(i) In section 2, the adaptation law for u in (2.6) included a K function ψ. The role of ψ is to control the rate of increase of adaptation, which corresponds to the rate of increase of pest control effort. The switching variable s in (3.2c) does not contain such a function. Instead, the rate of switching between strategies is determined by the sequence τ: the faster τ grows, in principle, the longer each control strategy is persisted with before switching. (ii) The key difference between the switching adaptive controllers considered here and those elsewhere in the literature (such as [26,27] or [60]) is that in general C ≠ F here, and so the output Fx(t) driving the state in (3.2a), via the feedback −BK(i(t))Fx(t), is not equal to the measured output Cx(t), the variable which is driving the switching signal s(t). However, as we have demonstrated, a consequence of the assumed (and realistically nonrestrictive) primitivity is that there is a coupling between Fx(t) and Cx(t) which is sufficient to stabilize (3.2), provided that τ grows sufficiently fast, that is, (3.5) holds.

Example.
We illustrate the present results using a stage-structured model from [61] of the economically important citrus pest Diaprepes root weevil (DRW; Diaprepes abbreviatus). The species is native to Caribbean islands and arrived in Florida, U.S., in the 1960s [62]. Females deposit egg clusters on the leaves of host plants. Upon hatching, the larvae drop to the soil surface and burrow underground, where they feed on roots for several months until they pupate. The root damage caused by the larvae can lead to plant death or decline to an unproductive state. In the field it is challenging to study the developmental rate and survival of DRW, since larvae and pupae live underground, and eggs are hard to find because of their small size (1.2 mm long and 0.4 mm wide), making the estimates of model parameters highly uncertain. Populations are generally monitored via traps that capture adults emerging from the soil. Recommended insecticide treatments are targeted for specific insect stages. For instance, adults are killed with foliar insecticide applications (e.g., fenpropathrin), egg hatching can be prevented with diflubenzuron applied with oil, and larvae attempting to burrow into the soil are killed with a chemical soil barrier (imidacloprid or thiamethoxam). An alternative control strategy is the use of biocontrol nematodes (Heterorhabditis indica, Steinernema riobrave), which attack and kill larvae underground. However, nematodes are not persistent because, as soon as they kill most of the larvae in an area, they die from a lack of food. Hence, they are similar to insecticide applications in the sense that they need to be reapplied, and so nematode dynamics need not be considered here.
A matrix model for DRW is given in [61], with six stage-classes denoting, in increasing order, eggs, two larval stages, pupae, and two adult stages. The time-steps in this model denote weeks. We refer the reader to [61] for more details. The projection matrix A [61, Table 2] has uncertain entries, but each belongs to the intervals in [61, Table 1]. We let A denote the set of all DRW matrices. Every A ∈ A satisfies r(A) > 1. In the following simulations, we numerically generate A ∈ A and an initial condition x_0 with ‖x_0‖ = 4000. We assume that total adult DRW abundance is measured, so that C := (0 0 0 0 1 1).
Pest management strategies for DRW include the chemical pesticides described above, with a candidate substance for the egg, larval, and adult stages, as well as entomopathogenic nematodes, which attack the larval stage-classes. Let e_j and a_j^T denote the jth standard basis vector in R^6 and the jth row of A, respectively. The effect of the pest control targeting egg, larval, and adult stages may be respectively modelled by (2.4) with the pairs (B_j, F_j) specified in (4.1). The efficacy functions Φ in (2.4) are in reality unknown but, as always, are assumed to satisfy (A1). In modelling Φ there is a trade-off between biologically motivated explanations and phenomenological descriptions. For instance, in the present setting, if we assume that locally DRW larvae are distributed according to a Poisson distribution, and that death via nematodes or insecticide application is random and independent, then e^{−k_1 u} is the probability of a DRW larva surviving nematode attack or insecticide application. In the case of nematodes, k_1 > 0 is the search efficiency (or "area of discovery"), and u is the number of nematodes applied. In the case of insecticides, the probability of surviving a chemical application is also e^{−k_1 u}, in which case u is the chemical concentration, and k_1 > 0 is some lethality term. In light of the above discussion, we shall assume that the efficacy function Φ is given by

(4.2)  Φ(ω) = 1 − e^{−0.6ω},  ω ∈ R_+.

More generally, when Φ : R_+ → R^{m×m}_+ is such that lim_{ω→∞} Φ(ω) = I, the m × m identity matrix, then it is straightforward to verify that the triple (A, B_j, F_j) satisfies assumption (A2) for each A ∈ A and j ∈ {1, 2, 3}.
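The Poisson-survival argument above can be made concrete in a few lines. The sketch below (with k_1 = 0.6, the efficacy parameter used in the simulations) confirms that the resulting efficacy ω ↦ 1 − e^{−k_1 ω} is zero at zero effort, strictly increasing, and saturates at one, consistent with the qualitative properties assumed of Φ.

```python
import math

K1 = 0.6  # search efficiency / lethality parameter (the value used in the simulations)

def survival(u):
    """Probability that a larva survives an application of effort u (Poisson argument)."""
    return math.exp(-K1 * u)

def efficacy(u):
    """Phi(u) = 1 - e^{-k1 u}: fraction of the targeted class removed."""
    return 1.0 - survival(u)

# Efficacy is 0 at zero effort, strictly increasing, and saturates at 1:
samples = [efficacy(u) for u in range(0, 21)]
```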
Recall that the ψ_j functions control the rate of increase of the control effort u(t) in (2.6). Here ψ_1 is a power law with the small scaling parameter 0.05 chosen for ease of comparison with ψ_2 and ψ_3. The function ψ_2 (sometimes called a Beverton-Holt or Holling type II nonlinearity) is a bounded function, and ψ_3 is a logarithm, which is unbounded but grows rather slowly. Figure 1 contains simulations of the adaptive control scheme (2.6) with u_0 = 0. Recall that in this simulation, and throughout, each time-step denotes one week, so 100 time-steps is roughly two years. As expected from Theorem 2.3, for each ψ_j the state x converges to zero, as seen in Figure 1(a). The control efforts u(t) are plotted in Figure 1(b), and we see that they also converge to a finite limit. Of more relevance is Figure 1(c), which shows the efficacy Φ(u(t)) (solid lines) and the growth rates r(A − BΦ(u(t))F) (dashed-dotted lines). It is observed that for each ψ_j, u(t) rapidly reaches a level whereby Φ(u(t)) ≈ 1, but that the limiting growth rate r(A − BΦ(u(t))F) ≈ 0.95 < 1, which bounds the achievable rate of decline. By way of comparison, the black line in Figure 1(a) denotes a comparison solution that provides a lower bound in this example for every solution of (2.6) and would be achieved if, from the outbreak of infestation, the first stage-class were targeted with a strategy that was "100% efficacious." Therefore, in this example ψ_1 and ψ_3 evidently use too great a control effort. We comment, however, that these conclusions may be made only retrospectively (and with knowledge of the to-be-controlled model), and they motivate pest management strategies that target other stage-classes.
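To illustrate the adaptive mechanism behind Figure 1, the following scalar sketch runs a high-gain loop of the assumed form u(t + 1) = u(t) + ψ(‖y(t)‖); the precise shape of (2.6), the growth rate, and the target gain used here are illustrative assumptions, not the DRW model. The effort u increases while the measured abundance is large, and converges once the abundance, and with it the increments ψ(‖y(t)‖), decay to zero.

```python
import math

# Scalar caricature of a high-gain adaptive scheme: assumed update rules
#   u(t+1) = u(t) + psi(y(t)),   x(t+1) = (A - c * Phi(u(t+1))) * x(t),
# with Phi(u) = 1 - exp(-0.6 u) as in (4.2) and psi a slowly growing K-function.

A, c = 1.1, 0.5                        # open-loop growth 1.1; full efficacy gives 0.6 < 1
psi = lambda y: math.log(1.0 + y)      # logarithmic adaptation rate (like psi_3)
Phi = lambda u: 1.0 - math.exp(-0.6 * u)

x, u = 100.0, 0.0
for _ in range(300):
    y = x                              # full-state measurement in this toy
    u += psi(y)
    x *= (A - c * Phi(u))

# u converges (its increments psi(y) -> 0 as y -> 0) and the pest state is eradicated.
```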
The power of Theorem 2.3 is not fully exhibited in Figure 1, as there the matrix A is fixed (but random). To demonstrate the important robustness of the adaptive control scheme (2.6) to model uncertainty, Figure 2 contains simulations of the DRW projection model for 30 pseudorandomly generated projection matrices A ∈ A, with the efficacy parameter 0.6 from (4.2) replaced by a pseudorandomly drawn number from the interval [0.6, 0.8]. Biologically, these variations may be a consequence of variable environmental factors, such as temperature, sunlight, or rainfall, varying soil quality, or citrus tree health. In each simulation u_0 = 10 and ψ = ψ_2, and it is observed that the same control strategy achieves stabilization of the state (Figure 2(a)). We reiterate that assumption (A2) holding is crucial for the success of (2.6), yet (A2) does not require explicit knowledge of A, B, F, or Φ. Note that across the simulations the control effort increases approximately linearly and is approximately equal for the first 150-180 time-steps. This is a consequence of the choice of ψ = ψ_2, which has two properties: (a) ψ_2 is bounded by one, and (b) ψ_2(100) = 0.9901 ≈ 1. As such, the choice of ψ_2 does not particularly distinguish large arguments, and so while y(t) ≥ 100 (the first 150 time-steps or so) each control effort u grows similarly. Figure 3 contains simulations of the adaptive control scheme with rate limitation (2.18). For variation, here we simulated the effects of targeting the larval stage-class, so that B = B_2 and F = F_2 in (4.1). We have chosen the same efficacy function Φ as in (4.2), but with the parameter 0.6 replaced by 0.05 (so that the control effort is hardly efficacious). Our somewhat carefree approach to parameter selection is in part to illustrate the argument that precise parameter values are not essential for our main results to hold.
Here we have chosen u_0 = 10, fixed ψ = ψ_3 from (4.3), and compared the results of the adaptive control scheme (2.6) (blue curve) with those of the rate-limited adaptive control system (2.18) for two different choices of θ of the form θ(x) = ax + b; see the figure legend (red and green curves). Both of the rate-limited adaptive control schemes achieve stabilization of the state (Figure 3(a)) at lower asymptotic control effort than that of (2.6) (Figure 3(b)). We cautiously comment that this need not hold in general, however. Figure 3(c) plots the efficacy Φ(u(t)) (solid lines) and growth rates r(A − BΦ(u(t))F) (dashed-dotted lines). We see in Figure 3(a) that the choice θ = θ_1 leads to the largest transient growth, in part, we suspect, because the resulting growth rate r(A − BΦ(u(t))F) in Figure 3(c) is greater than one for longer periods than with θ_2 or with no θ. However, the difference in performance between θ_2 and the unconstrained scheme (2.6) is negligible, which would motivate using the saturation rate θ_2 in this application.
The examples considered thus far have all invoked control schemes that result in asymptotic eradication of the pest. A consequence of the model (2.1), with state x denoting the structured pest population, is that the pest abundance is predicted to grow again once the control action is stopped. This is a limitation of assuming an unstable linear model for the pest, where the basin of attraction of the zero equilibrium is the zero state alone. In applications one might expect (or hope) that once pest abundance is reduced to a sufficiently low level, it would not recover, which would be captured by, say, a nonlinear model with zero as an asymptotically stable equilibrium. Allee effects (see, for example, [63]) provide a biological explanation for why sufficiently small populations may not survive. Simple adaptive controllers for nonlinear pest models are beyond the scope of the present contribution, and instead Figure 4 contains simulations of the neutral-zone adaptive control scheme (2.19), which, we recall, seeks to reduce the (cumulative) control effort by reducing u(t) when the observed pest abundance y(t) is low. Specifically, for the simulation in Figure 4 we fix the thresholds L_1, L_2 and the decay rate γ in (2.19), and take the efficacy function Φ given by (4.2) (with parameter 0.6). Recall that the choices of the thresholds L_1, L_2 and the decay rate in control effort γ are the choices of the modeller, and in practice would be influenced by ecological and economic criteria. In this example, we observe in Figure 4(a) that each state trajectory x is bounded, as expected from Proposition 2.7. To illustrate the dynamics of the control scheme (2.19), Figure 4(b) plots u(t) against t, while Figure 4(c) plots Φ(u(t)) (solid lines) and r(A − BΦ(u(t))F) (dashed-dotted lines), both against t. As u(t) increases, so does Φ(u(t)), and thus r(A − BΦ(u(t))F), the asymptotic growth rate of the pest, decreases. However, once y(t) is sufficiently small, u(t) decreases and the process reverses.
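The qualitative behaviour of a neutral-zone law can be sketched in a scalar caricature; the thresholds, the decay rate, and the exact form of the update rule below are illustrative assumptions, not the values or the scheme (2.19) used for Figure 4. Effort increases while the measured abundance exceeds the upper threshold, is held within the neutral zone, and decays geometrically below the lower threshold, producing bounded cycling of the state.

```python
import math

# Scalar caricature of a neutral-zone adaptive law: assumed rule
#   y >= L2      : u <- u + psi(y)   (abundance high: increase effort)
#   L1 <= y < L2 : u unchanged       (neutral zone)
#   y < L1       : u <- gamma * u    (abundance low: relax effort)

A, c = 1.1, 0.5
L1, L2, gamma = 1.0, 5.0, 0.7
psi = lambda y: math.log(1.0 + y)
Phi = lambda u: 1.0 - math.exp(-0.6 * u)

x, u = 2.0, 0.0
xs = []
for _ in range(2000):
    y = x
    if y >= L2:
        u += psi(y)
    elif y < L1:
        u *= gamma
    x *= (A - c * Phi(u))
    xs.append(x)

# The state is neither eradicated nor divergent: it cycles in a bounded band.
```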
Broadly speaking, when r(A − BΦ(u(t))F) > 1, the pest abundance increases, and the cycle repeats. From Figure 4(b) we see that the bounded function ψ_2 requires considerably less cumulative effort than the other two strategies, yet maintains total pest abundance comparable to that of the other choices of ψ.
The simulations in Figure 5 illustrate the performance of the adaptive control scheme with additive disturbances (2.20). The motivation for such a model, we recall, is to explore the effect of previously unmodelled dynamics on (2.6), which could denote immigration or emigration when the state is disturbed, or sampling error when the output is disturbed. We choose ψ = ψ_1 in (4.3), and Φ is still given by (4.2). We assume that d_1 denotes periodic immigration into the adult stage-classes, with three different amplitudes, as in (4.4). The blue curves in Figure 5 are generated by (2.20) with d_2 = 0 (no measurement error) and d_1 = d_{1,k} as above. The red curves are generated by (2.20) with d_1 as in (4.4) and with ε(t) (pseudo)randomly drawn from a truncated uniform distribution between −0.2 and 0.2; that is, a proportional measurement error of up to 20% is made at each time-step. In Figure 5(a) the ISS estimate (2.22) is exhibited: the norm of the state is bounded and decreases, asymptotically linearly, with decreasing ‖d_1‖_∞. The trajectories in Figures 5(a) and 5(c) may be compared to those in Figure 1 (blue curves), where the same model is projected with no disturbances. We note that, apart from on the outputs themselves, the effects of the output errors d_2 (red curves) are seemingly much smaller than those of d_1 (different line styles).
As an illustration of the theory, our final example considers cycling between control strategies that target different stage-classes, with an increasing effort over time, and thus uses the switching adaptive control scheme (3.2). We set the gain sequence K as in (4.5a), where φ_j is the efficacy function of control action j ∈ {1, 2, 3} and g = (g(j))_{j∈N} ⊆ R^3_+ is the predetermined sequence of control efforts in (4.5b). In other words, if at a given starting point the egg stage-class is targeted (g(3j) in (4.5b)), then this course of action is persevered with until the switching variable s(t) reaches the next switch τ(3j). In this case, the management action switches to target both the egg and larval stage-classes (g(3j + 1) in (4.5b)). This course of management action is persisted with until the switching variable reaches the next switch τ(3j + 1). As before, if this occurs, then the management action switches to the third strategy, in which egg, larval, and adult stage-classes are all targeted (g(3j + 2) in (4.5b)). Should the switching variable s(t) reach the next switch τ(3j + 2), then the above cycle repeats, but with a subsequently larger control effort. The assumptions (B1) and (B2) hold, by construction, for (A, B, F) and K for every A ∈ A, with B, F, and K given by (4.5). Moreover, assumption (B3) holds with D = (D_j)_{j∈N} satisfying 0 ≤ D_j ≤ (a_{1,5}, 0, 0, 0, 0, a_{6,5})^T for j ∈ N, chosen so that, in particular, the (6,5)th entry of A_j − D_jC is equal to zero. For the following numerical simulation we take ψ = ψ_3 from (4.3) and φ_1, φ_2, and φ_3 in (4.5a) given by (4.5c), which, we comment, have been chosen somewhat arbitrarily to illustrate the theory. Theorem 3.5 holds for the model (4.5), and hence we expect eradication of the pest for any sequence τ that satisfies (3.6).
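The cycling parameterization in (4.5b) can be written compactly: the strategy applied by controller j depends only on j mod 3, while the effort level scales up with each completed cycle. The sketch below is a hypothetical instance; in particular, the doubling of effort per cycle is an assumption made here for illustration, since the sequence g is only required to be eventually increasing.

```python
# Hypothetical instance of the cyclic effort sequence g in (4.5b):
# strategy j targets a growing set of stage-classes according to j mod 3,
# and the scalar effort level doubles with each completed three-strategy cycle.

TARGETS = [("egg",), ("egg", "larval"), ("egg", "larval", "adult")]

def strategy(j):
    """Map controller index j (0-based here) to (targeted stage-classes, effort level)."""
    cycle, phase = divmod(j, 3)
    return TARGETS[phase], 2.0 ** cycle

# The efforts along each course of action are eventually increasing, as in (3.3):
egg_efforts = [strategy(3 * j)[1] for j in range(6)]
```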
Figure 6 contains simulations of the switching adaptive control system (3.2), with model data (4.5), for five random initial states x_0, s_0 = 1/2, and a sequence τ that evidently satisfies (3.6). Recall that the sequence τ determines the rate at which the control strategies are changed or, in other words, how long a control strategy is persevered with before changing. Figure 6(a) demonstrates that each state trajectory converges to zero over time, in much less time than in Figure 1(a), for example, where only the egg stage-class is targeted. Figure 6(b) plots the switching signal s(t) against t. We see that each signal s converges and, moreover, converges in either (τ(7), τ(8)] or (τ(10), τ(11)]. As expected, these intervals are stabilizing inasmuch as A_8 := A − BK(8)F has r(A_8) = 0.7720 < 1 and r(A_11) = 0.7719 < 1.

Discussion.
Simple adaptive control has been considered for discrete-time positive state linear systems, motivated by potential applications in pest management. Pest management is a timely and hugely important societal challenge that shall significantly contribute to the future success of economically and environmentally viable agriculture and horticulture. Several different adaptive control schemes have been proposed as strategies for reducing pest abundance. We believe that adaptive controllers are ideally suited for such a task for two reasons. Foremost is the (theoretical) ability, and guarantee, of adaptive control schemes to achieve their objectives in spite of considerable anticipated uncertainty in the pest dynamics. Recall that (A1) and (A2) are the crucial assumptions in section 2, which cover a wide range of matrix models and do not require specific knowledge of A, B, F, or Φ. To recap, if A, B, F, and Φ are known with certainty, then it is possible to choose the (possibly smallest) û such that (2.5) holds, which would also ensure that the constant control effort u(t) = û for every t ∈ N eradicates the pest population over time. The caveat in the previous sentence is how certain "known with certainty" really is. The second reason, and still of importance, is the ease of computation of management strategies determined by simple adaptive control schemes. We reiterate that finding optimal controls is not necessarily straightforward, either analytically or computationally.

© 2016 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license.
A downside of adaptive control schemes is that they are usually conservative (they are not optimal control strategies), which is the price paid for the additional robustness to model uncertainty. In other words, simple adaptive control as presented here trades off a loss of optimality against a gain in robustness. What this means in practice is that the adaptive control law may use more control effort than is required, which is costlier both economically and environmentally. In practical applications, however, the conservatism of adaptive control schemes must be traded off against the economic and social costs of not sufficiently reducing pest abundance: a possible consequence of a supposedly optimal management strategy not functioning as intended owing to a lack of robustness, say, to model uncertainty. As we have argued elsewhere [64], when seeking to control uncertain population models, open-loop control strategies, that is, those based on (estimated) model parameters and not on measured variables, may not achieve the control objective, whereas robust feedback controls do. The same is true of certain optimal control strategies; see [64] and the references therein.
Although borrowing from the tradition of simple, nonidentifier-based adaptive control [30,31], as a consequence of the naturally assumed componentwise nonnegativity there are mathematical differences between the material presented here and the situations usually considered (such as in [26] or [31]). The most striking difference is which structural assumptions are sufficient for global asymptotic stability of the closed-loop feedback system. For example, prescribed relative degree, minimum phase, and/or minimality may be replaced by irreducibility of the closed-loop system, specifically assumption (A2) (or (A3) in continuous-time). We have sought to explain these structural differences in more detail in Remarks 2.4, 2.15, and 3.6. Another crucial difference is that in typical adaptive control scenarios, the measured output y = Cx is the variable used in the feedback law (here v in (2.2)) to subsequently control the state. In pest management applications, there is no biological reason to assume that the effect of the pest control action (which must depend on the current state) is proportional to the measured population, meaning that presently the feedback takes the form v = −BΦ(u)Fx, with F ≠ C in general.
Our first result, Theorem 2.3, was extended in different directions, including a rate-limited adaptive control scheme, Proposition 2.5, and a neutral-zone adaptive control scheme, Proposition 2.7. Both of these schemes have been designed to seek to reduce the control effort which, as we mentioned above, may be unnecessarily conservative. The extension of Theorem 2.3 to a class of infinite-dimensional, discrete-time systems was straightforward for Banach spaces with partial orders and positive operators where monotonicity of the spectral radius (2.28) holds. A class of such operators was described in Remark 2.12. We suspect that the analogous continuous-time result, Proposition 2.14, would extend to classes of infinite-dimensional systems where A in (2.32) denotes the generator of a positive operator semigroup (on an ordered Banach space) and the analogous monotonicity of the growth bound holds. The upshot of this seems to be that the additional technical difficulties (see [65]) encountered in establishing high-gain adaptive control in infinite dimensions may again be replaced by an irreducibility or primitivity assumption (understood appropriately).
In the context of pest management, there are several extensions that we seek to address in future work. First, as an alternative to chemical pesticide use, we desire strategies for the adaptive deployment of biocontrol agents [66]. Because they are living organisms, with life-cycles that occupy a variety of different time-scales relative to the pest, the dynamics of biocontrol agents should be modelled, as well as those of the pest. There are also likely to be interactions between competing biocontrol agents [67], as well as between the biocontrol and pest populations. Second, the model (2.1) does not directly incorporate a spatial component, the inclusion of which would model the spread of a pest population spatially as well as its change in abundance temporally. Indeed, it is argued that the stages of ecological invasion are (broadly) arrival (or establishment or colonization), spread, and then community level changes [68,69]. Much attention has been devoted in the mathematical ecology literature to the spread of an invasive population; see, for example, [70] and the references therein. In effect, here we have considered adaptive control for local pest management, where the in situ pest population is well specified by (2.1). There are many cases where management is focused on local control, typically when organized by individual farmers or stakeholders, and the present description is useful. There is also interest in how local adaptive control and observation affect the global properties of the pest in a spatial model, such as abundance and dispersal.
In closing, it is our hope that techniques from adaptive control theory shall be added to the toolbox of available pest management strategies and motivate further research from both mathematicians and ecologists. The joint efforts of both groups shall undoubtedly be required in the future modelling of effective pest control strategies to help tackle the crucial issue of food security. As we have sought to explain, there are many possible extensions to the work presented here: the present paper is a starting point.
In other words, u cannot eventually be greater than or less than u^*, as in these regions the state (and thus the output) is exponentially decreasing or increasing, respectively. Arguing in a similar vein,

(A.3)  for all T ∈ N there exist t_3, t_4 ∈ N with t_3, t_4 ≥ T such that y(t_3) < L_1 and y(t_4) ≥ L_1.
Seeking a contradiction, suppose that u is unbounded. Then there exists a subsequence (τ_t)_{t∈N_0} with τ_t → ∞ as t → ∞ such that

(A.4)  u(τ_t) ≥ t,  t ∈ N_0.

In light of (A.1) and (A.4), for t ∈ N let U_t := {p_t, . . . , q_t}, sets of successive integers, be such that (i) τ_t ∈ U_t; (ii) u > u^* on U_t; and (iii) ∪_{k∈N} U_k = N. We record the fact that, for t ∈ {0, 1, . . . , τ_t − p_t}, the estimate (A.5) holds. For t ∈ N we then obtain the estimate (A.6) for each r ∈ N and some constant M^*(r), where we have used (A.5). However, from the growth condition (2.8) on ψ, it follows from (A.6) that, for each r ∈ N,

(A.7)  ‖x(p_t − r)‖ → ∞ as t → ∞.
Letting A(t) := A − BΦ(u(t))F, we note that the estimate (A.8) holds. By the assumption of a common period of A(t), (A.7) implies that k may always be chosen bounded so that the left-hand side of (A.8) is arbitrarily large. However, by (A.3), there exists r ∈ N such that the right-hand side of (A.8) is bounded, which is a contradiction.
We conclude that u is bounded, say by v < ∞, and thus x satisfies the bounds in (A.9). Clearly, if x is bounded, then so is y. Conversely, from the irreducibility of A_v and A, and as C ≠ 0, it follows from (A.9) that if x is unbounded, then so is y and, furthermore, that the property (A.10) holds for each L > 0 and t ∈ N. Seeking another contradiction, we now suppose that x is unbounded, or equivalently that y is unbounded. Similarly to our arguments for u, such a supposition implies the existence of (ρ_t)_{t∈N_0} with ρ_t → ∞ as t → ∞ such that

(A.11)  ‖y(ρ_t)‖ ≥ t,  t ∈ N_0.
In light of (A.3) and (A.11), let Y_t := {v_t, . . . , w_t}, sets of successive integers, be such that (i) ρ_t ∈ Y_t; (ii) y > L_1 on Y_t; and (iii) ∪_{k∈N} Y_k = N. In particular, the sets Y_t may be taken maximal with these properties (else Y_t may be enlarged and relabelled). For t ∈ N, t ≥ L_1, a first estimate shows that ρ_t − v_t → ∞ as t → ∞.
Seeking a contradiction, if u is such that u(t) ≤ u^* for all t ∈ R_+, then, as t ↦ (d/dt)u(t) is nonnegative, u is nondecreasing and bounded from above, and thus u converges to some z ≤ u^*. Consequently, by the fundamental theorem of calculus, for T ≥ 0,

∫_0^T ψ(‖y(t)‖) dt = u(T) − u_0 ≤ z − u_0,

and so t ↦ ψ(‖y(t)‖) ∈ L^1. On the other hand, u ≤ u^* implies that ẋ(t) ≥ A_* x(t), and hence

x(t) ≥ e^{A_* t} x_0, t ∈ R_+  ⇒  y(t) ≥ C e^{A_* t} x_0, t ∈ R_+  ⇒  lim inf_{t→∞} ‖y(t)‖ ≥ lim inf_{t→∞} ‖C e^{A_* t} x_0‖.

The combined properties that C and x_0 are nonnegative and nonzero and that A_* is irreducible with α(A_*) = 0 yield that lim inf_{t→∞} ‖C e^{A_* t} x_0‖ > 0, which, in light of (A.17) and the fact that y is continuous and ψ ∈ K, contradicts t ↦ ψ(‖y(t)‖) ∈ L^1. We conclude that there exists τ ≥ 0 such that u(t) ≥ u(τ) > u^* for all t ≥ τ, as u is nondecreasing. The proof of the estimate (2.34) now mirrors that of Theorem 2.3, instead noting that, for Metzler A_1, A_2 ∈ R^{n×n}, the conditions A_2 ≤ A_1, A_1 ≠ A_2, and A_2 irreducible imply that α(A_2) < α(A_1). To prove that u converges, we prove that it is bounded. To that end, for t ∈ R_+ we estimate as in (A.18). Therefore, as F ≠ 0 and C ≠ 0, the estimate (A.18) yields the required bound for all t sufficiently large.