University of Birmingham

On special cases of the generalized max-plus eigenproblem

Abstract. We study the generalized eigenproblem A ⊗ x = λ ⊗ B ⊗ x, where A, B ∈ R^{m×n}, in the max-plus algebra. It is known that if A and B are symmetric, then there is at most one generalized eigenvalue, but no description of this unique candidate is known in general. We prove that if C = A − B is symmetric, then the common value of all saddle points of C (if any) is the unique candidate for λ. We also explicitly describe the whole spectrum in the case when B is an outer product. It follows that when A is symmetric and B is constant, the smallest column maximum of A is the unique candidate for λ. Finally, we provide a complete description of the spectrum when n = 2.

1. Introduction. We start with four motivational examples. They are all variants of a model called the multiprocessor interactive system.

Example 1. Products P_1, . . . , P_m are prepared using n processors, with every processor potentially contributing to the completion of each product. It is assumed that every processor can work on all products simultaneously and that all these actions on a processor start as soon as the processor starts to work. Let a_ij be the duration of the work of the jth processor needed to complete the partial product for P_i (i = 1, . . . , m; j = 1, . . . , n). Let us denote by x_j the starting time of the jth processor (j = 1, . . . , n). Then all partial products for P_i (i = 1, . . . , m) will be ready at time max(x_1 + a_i1, . . . , x_n + a_in).
Hence if b_1, . . . , b_m are given completion times of the products that have to be met exactly, then the starting times have to satisfy the system of equations
(1) max(x_1 + a_i1, . . . , x_n + a_in) = b_i, i = 1, . . . , m.
If we denote a ⊕ b = max(a, b) and a ⊗ b = a + b, and the pair of operations (⊕, ⊗) is extended to matrices and vectors in the same way as in linear algebra, then this system can be written as a compact equation A ⊗ x = b, where x = (x_1, . . . , x_n)^T and b = (b_1, . . . , b_m)^T. The matrix A = (a_ij) is called the production matrix.
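To make the notation concrete, here is a minimal sketch (not from the paper; the matrix and starting times are ad hoc) of the operations ⊕ = max, ⊗ = + and of the product A ⊗ x from Example 1:

```python
# A minimal sketch (not from the paper; data ad hoc) of the max-plus
# operations (+) = max, (x) = + and the product A (x) x from Example 1.
def oplus(a, b):
    return max(a, b)

def otimes(a, b):
    return a + b  # conventional addition

def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return [max(otimes(a, xj) for a, xj in zip(row, x)) for row in A]

# Production matrix: a_ij = work of processor j needed for product P_i.
A = [[3, 1],
     [2, 4]]
x = [0, 1]                  # starting times of the two processors
print(maxplus_mv(A, x))     # completion times: [3, 5]
```

The ith component of the result is exactly the completion time max(x_1 + a_i1, . . . , x_n + a_in).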
Example 2. Consider the system in which processors P_1, . . . , P_n work interactively and in stages. In each stage all processors simultaneously produce components necessary for the work of some or all of the other processors in the next stage. Let x_i(k) denote the starting time of the kth stage on P_i (i = 1, . . . , n), and let a_ij denote the duration of the operation at which processor P_j prepares the component necessary for processor P_i in the (k + 1)st stage (i, j = 1, . . . , n). Then, avoiding any delay, we have
(2) x_i(k + 1) = max(x_1(k) + a_i1, . . . , x_n(k) + a_in) (i = 1, . . . , n; k = 0, 1, . . .).
We say that the system reaches a steady regime [15], [16] if it eventually moves forward in regular steps; that is, for some λ and k_0 we have x(k + 1) = λ ⊗ x(k) for all k ≥ k_0. Equivalently, the time between the starts of consecutive stages eventually stabilizes at the same constant for every processor. If this happens, then we have A ⊗ x(k) = λ ⊗ x(k) for all k ≥ k_0, and so x(k) is a max-plus eigenvector of A with associated eigenvalue λ.
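The steady regime of Example 2 can be observed numerically. The following hedged sketch (the matrix is chosen ad hoc) iterates x(k + 1) = A ⊗ x(k) and prints the per-stage increments:

```python
# A hedged illustration (matrix chosen ad hoc) of the steady regime of
# Example 2: iterate x(k+1) = A (x) x(k) and watch the per-stage increments.
def maxplus_mv(A, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

A = [[2, 5],
     [0, 3]]
x = [0, 0]
for k in range(5):
    y = maxplus_mv(A, x)
    print(k, [yi - xi for yi, xi in zip(y, x)])
    x = y
# after a transient first step the increments are (3, 3): the system moves
# in regular steps with lam = 3, a max-plus eigenvalue of this A
```

For this particular A the increments stabilize after one stage; in general, periodic rather than constant behavior can also occur.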
Example 3. Now suppose that in addition to the assumptions of Example 1, k other machines independently prepare partial products for products Q_1, . . . , Q_m, with durations b_ij and starting times y_j, respectively. Then the synchronization problem is to find starting times of all n + k machines so that each pair (P_i, Q_i) (i = 1, . . . , m) is completed at the same time. This task is equivalent to solving the system of equations
(3) max(x_1 + a_i1, . . . , x_n + a_in) = max(y_1 + b_i1, . . . , y_k + b_ik), i = 1, . . . , m.
Again, using max-algebra and denoting K = {1, . . . , k} (and N = {1, . . . , n}), we can write this system as a system of max-linear equations:
(4) ⊕_{j∈N} a_ij ⊗ x_j = ⊕_{j∈K} b_ij ⊗ y_j, i = 1, . . . , m.
In matrix-vector notation it has the form A ⊗ x = B ⊗ y.

Example 4. A variant of (3) is the task when n = k and the starting times are linked; for instance, it is required that they be the same or, more generally, that there be a fixed interval between the starting times of the first and second systems; that is, the starting times x_j, y_j of each pair of machines differ by the same value. If we denote this (unknown) value by λ, then y = λ ⊗ x and the equations read
(5) max(x_1 + a_i1, . . . , x_n + a_in) = max(λ + x_1 + b_i1, . . . , λ + x_n + b_in) for i = 1, . . . , m.
In max-algebraic notation, this system takes the form of a "generalized eigenproblem": A ⊗ x = λ ⊗ B ⊗ x.

The examples above give rise to the following systems:
(6) A ⊗ x = b,
(7) A ⊗ x = λ ⊗ x,
(8) A ⊗ x = B ⊗ y,
(9) A ⊗ x = λ ⊗ B ⊗ x.
Systems (6) are historically the first problem studied in max-algebra [14], and their solution set can easily be described [12] both algebraically (using residuation) and combinatorially (in terms of set coverings; see below). For the origins of max-algebra see also [26], [28], and [22]. Systems (7) have also been intensively studied since the 1960s [15] (see also [22], [19]). It is known [19], [13], [12] that every n × n matrix over R ∪ {−∞} has up to n eigenvalues, with the maximum cycle mean of A always being the biggest eigenvalue. All eigenvalues and bases of all eigenspaces can be found in O(n^3) time.
Systems (8) have been studied since 1978 [7], [8], [9], [10], [11]. It has been proved that the solution set is finitely generated [10]. These systems have been shown to be equivalent to mean payoff games [2]. A number of solution methods exist [17], [20], [3], [27]. Although none of them is polynomial, this problem is known to be in NP ∩ co-NP [5], and it is therefore expected that a polynomial solution method will eventually be found.
Systems (9) for fixed λ reduce to (8), so the effort is usually concentrated on finding the spectrum. In contrast to (7), no polynomial method seems to be known in general for finding even a single eigenvalue or deciding whether one exists. The task of finding the spectrum is made more complicated by the fact that, on the one hand, there may be no eigenvalue at all, and on the other hand, there is no upper limit on the number of eigenvalues; there may even be a continuum of them, and any union of closed intervals is the spectrum of some generalized eigenproblem [25]. Nevertheless, there is a pseudopolynomial method for finding the whole spectrum based on Newton-type iterations [21]. This problem was first studied independently in [6] and [18] and then by many other authors. A fast method for narrowing the search for a generalized eigenvalue is presented in [12, section 9.3].
Paper [6] presents a number of conditions that are either necessary or sufficient for the existence of either a solution or a unique solution to (9). In particular, it follows that there is at most one generalized eigenvalue if A and B are finite and symmetric. The paper also includes specific conditions and a graphical method for 2 × 2 matrices.
Note that systems (6)-(9) may be considered over R ∪ {−∞}. The aim of the present paper is to study the spectrum of (9) for finite matrices (that is, matrices over R); more precisely, we do the following: (a) We prove that if C = A − B is symmetric, then the common value of all saddle points of C (if any) is the unique candidate for λ (section 3).
(b) We explicitly describe the whole spectrum provided that B is an outer product (section 4).
(c) We show that it follows that when A is symmetric and B is constant, the smallest column maximum of A is the unique candidate for λ (section 4).
(d) Finally, we provide an alternative, algebraic description of the whole spectrum and eigenspaces when x is two-dimensional (section 5).

2. Prerequisites.
In this section we give the definitions and some basic results which will be used in the formulations and proofs of our results. For the proofs and more information about max-algebra, the reader is referred to [1], [4], [12], and [23].
It will be useful first to recall a simple property of matrices, well known in the theory of matrix games. Let A = (a_ij) ∈ R^{m×n}, let M = {1, . . . , m} and N = {1, . . . , n}, and denote v_1(A) = max_{i∈M} min_{j∈N} a_ij (usually called the "gain-floor") and v_2(A) = min_{j∈N} max_{i∈M} a_ij (the "loss-ceiling"). A pair (r, s) is called a saddle point of A if a_rs = min_{j∈N} a_rj = max_{i∈M} a_is, that is, if a_rs is both the smallest entry in its row and the biggest entry in its column.

Theorem 5 (see [24]). The inequality v_1(A) ≤ v_2(A) holds for every A ∈ R^{m×n}. Equality holds if and only if there is a saddle point in A. If (r, s) is any saddle point, then v_1(A) = a_rs = v_2(A) (and this value is called the value of the game A).
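The quantities v_1, v_2 and saddle points are easy to compute directly. A small illustrative implementation (matrix ad hoc, indices 0-based) might look as follows:

```python
# A small illustrative implementation (matrix ad hoc, 0-based indices) of the
# gain-floor v1, the loss-ceiling v2, and the saddle points of a matrix.
def v1(A):
    # gain-floor: the greatest row minimum
    return max(min(row) for row in A)

def v2(A):
    # loss-ceiling: the smallest column maximum
    return min(max(row[j] for row in A) for j in range(len(A[0])))

def saddle_points(A):
    # (r, s) such that a_rs is minimal in row r and maximal in column s
    return [(r, s)
            for r, row in enumerate(A)
            for s, a in enumerate(row)
            if a == min(row) and a == max(A[i][s] for i in range(len(A)))]

A = [[1, 1],
     [2, 0]]
print(v1(A), v2(A), saddle_points(A))  # v1 == v2 == 1; saddle point (0, 1)
```

Here v_1(A) = v_2(A) = 1 and (0, 1) is a saddle point, illustrating Theorem 5.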
We will use the following notation: for A ∈ R^{m×n},
M_j(A) = {i ∈ M : a_ij = max_{r∈M} a_rj}, j ∈ N,
N_i(A) = {j ∈ N : i ∈ M_j(A)}, i ∈ M.
We will also write M_j, N_i instead of M_j(A), N_i(A) if no confusion can arise. The following will be useful.

Proposition 6. Let A ∈ R^{m×n}. Then ∪_{j∈N} M_j = M if and only if N_i ≠ ∅ for every i ∈ M.

Throughout the paper we denote −∞ by ε (the neutral element with respect to ⊕); for convenience we also denote by the same symbol any vector with all components −∞, as well as any matrix with all entries −∞. If a ∈ R, then the symbol a^{-1} stands for −a.
The symbol a^k (k ≥ 1 integer) stands for the iterated product a ⊗ a ⊗ · · · ⊗ a in which the symbol a appears k times (that is, ka in conventional notation). By max-algebra (recently also called "tropical linear algebra") we understand the analogue of linear algebra developed for the pair of operations (⊕, ⊗), extended to matrices and vectors as in conventional linear algebra. That is, if A = (a_ij), B = (b_ij), and C = (c_ij) are matrices of compatible sizes with entries from R, we write C = A ⊕ B if c_ij = a_ij ⊕ b_ij for all i and j, and C = A ⊗ B if c_ij = ⊕_k a_ik ⊗ b_kj for all i and j. If α ∈ R, then α ⊗ A = (α ⊗ a_ij). Although the use of the symbols ⊗ and ⊕ is common in max-algebra, we will apply the usual convention of not writing the symbol ⊗. Thus, in what follows the symbol ⊗ will not be used and, unless explicitly stated otherwise, all multiplications indicated are in max-algebra.

A vector or matrix is called finite if all its entries are real numbers. A square matrix is called diagonal, written diag(d_1, . . . , d_n), if its diagonal entries are real numbers d_1, . . . , d_n and all its off-diagonal entries are ε. The matrix diag(0, . . . , 0) is called the unit matrix and denoted by I. Obviously, AI = IA = A whenever A and I are of compatible sizes. A matrix obtained from a diagonal matrix (unit matrix) by permuting the rows and/or columns is called a generalized permutation matrix (permutation matrix). It is known that in max-algebra, generalized permutation matrices are the only type of invertible matrices [16], [12].

The following theorem is probably the historically first result in max-algebra; here we denote, for A ∈ R^{m×n} and b ∈ R^m, the vector x̄ = x̄(A, b) with components x̄_j = min_{i∈M} (b_i − a_ij), j ∈ N.

Theorem 7 (see [14], [12]). If A ∈ R^{m×n} is a matrix with no ε columns and b ∈ R^m, then A ⊗ x = b has a solution if and only if x̄(A, b) is a solution.

Proof. The proof is straightforward from the definitions.
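The solvability criterion of Theorem 7 can be tested mechanically: the principal vector x̄ is the greatest x with A ⊗ x ≤ b, and the system is solvable exactly when x̄ itself satisfies it. A hedged sketch (data ad hoc):

```python
# A hedged sketch (data ad hoc) of Theorem 7: the principal vector
# xbar_j = min_i (b_i - a_ij) is the greatest x with A (x) x <= b, and the
# one-sided system A (x) x = b is solvable iff xbar itself satisfies it.
def principal_solution(A, b):
    n = len(A[0])
    return [min(bi - row[j] for bi, row in zip(b, A)) for j in range(n)]

def maxplus_mv(A, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

A = [[3, 1],
     [2, 4]]
b = [3, 5]
xbar = principal_solution(A, b)
print(xbar, maxplus_mv(A, xbar) == b)  # [0, 1] True: the system is solvable
```

Replacing b by, say, (3, 6)^T would make the check fail, certifying unsolvability.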
The generalized eigenproblem is the following: given A, B ∈ R^{m×n}, find x = (x_1, . . . , x_n)^T ≠ ε (a generalized eigenvector, or just eigenvector) and λ ∈ R (generalized eigenvalue, or just eigenvalue) such that A ⊗ x = λ ⊗ B ⊗ x. Note that the case λ = ε is trivial and is not discussed here. We denote V(A, B, λ) = {x : A ⊗ x = λ ⊗ B ⊗ x} and Λ(A, B) = {λ ∈ R : A ⊗ x = λ ⊗ B ⊗ x for some x ≠ ε}. The set Λ(A, B) will be called the spectrum of the pair (A, B). In the rest of the paper we will assume that A, B ∈ R^{m×n}. It is easy to see that then a generalized eigenvector exists if and only if a finite generalized eigenvector exists. Note that some statements remain valid if the finiteness requirement is removed or replaced by the condition that there are no ε columns. The next two statements have previously been proved and provide useful information about the spectrum. Here and in the rest of the paper we denote, with c_ij = a_ij − b_ij,
λ̲(A, B) = max_{i∈M} min_{j∈N} c_ij and λ̄(A, B) = min_{i∈M} max_{j∈N} c_ij.
We will use the shorthand λ̲ and λ̄ if no confusion can arise.
The interval [λ̲, λ̄] will be called the feasibility interval for the generalized eigenproblem.

Proposition 10 (see [6], [12]). Λ(A, B) ⊆ [λ̲, λ̄].
The following statement follows from the results in [6].

Proposition 11. If A and B are symmetric, then Λ(A, B) contains at most one element.
This paragraph presents a sketch of a numerical method of [21] for solving the generalized eigenproblem for arbitrary matrices A and B; it will not be used in this paper and may be skipped. The method is based on the following idea. Let us define (see [16]) min-algebra over R ∪ {+∞} by a ⊕′ b = min(a, b) and a ⊗′ b = a + b for all a and b except when one of a and b is +∞ and the other −∞; in this case a ⊗′ b = +∞ (whereas in max-algebra a ⊗ b = −∞). We also define A# = −A^T. As in max-algebra, we extend the pair of operations (⊕′, ⊗′) to matrices and vectors. We will not write the operator ⊗′, and for matrices the convention applies that a product is in min-algebra whenever it follows the symbol #; otherwise it is in max-algebra. In this way a residuated pair of operations (a special case of a Galois connection) has been defined; namely, A ⊗ x ≤ y if and only if x ≤ A# y for all x, y. By residuation, A(A# y) ≤ y holds for every y, and it follows immediately that a one-sided system A ⊗ x = b has a solution if and only if A(A# b) = b. It will be convenient to denote P_A(z) = A(A# z) for any A and z. Finding a solution to a two-sided system with separated variables A ⊗ x = B ⊗ y then means finding a z ≠ ε such that P_A(z) = z = P_B(z). Applied to the generalized eigenproblem with a fixed λ, this leads to an order-preserving, additively homogeneous, and continuous map h_λ; as such, it has a largest "eigenvalue" r(h_λ). It is also proved that the function s(λ) = r(h_λ) is piecewise linear and Lipschitz continuous, and therefore in the case of integer matrices A and B its zero-level set can be found by a pseudopolynomial number of calls to an oracle that computes the value of a mean payoff game.
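For illustration, the residuation operators can be implemented directly for finite matrices. The following is a hypothetical mini-implementation (not the algorithm of [21] itself):

```python
# A hypothetical mini-implementation (finite matrices only, not the actual
# algorithm of [21]) of the residuation operators used above: A# y in
# min-algebra and the projector P_A(z) = A (A# z); a vector z lies in the
# max-plus column span of A exactly when P_A(z) == z.
def sharp_apply(A, y):
    # (A# y)_j = min_i (y_i - a_ij), the min-algebra product with A# = -A^T
    return [min(yi - row[j] for yi, row in zip(y, A)) for j in range(len(A[0]))]

def maxplus_mv(A, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def P(A, z):
    return maxplus_mv(A, sharp_apply(A, z))

A = [[3, 1],
     [2, 4]]
print(P(A, [3, 5]))  # [3, 5]: fixed, so (3, 5)^T is of the form A (x) x
print(P(A, [0, 5]))  # [0, 3]: not fixed, so (0, 5)^T is not
```

Note that P_A(z) ≤ z always holds, with equality exactly on the max-plus column span of A.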
3. Generalized eigenproblem for symmetric matrices with saddle point. Note that in general even if the premise of Proposition 11 is satisfied, it is not clear what the unique candidate for the generalized eigenvalue is, and even if such a candidate is known, it is not clear how to check in polynomial time whether it is such an eigenvalue.
Let C ∈ R^{m×n} be any matrix with C = A − B, that is, c_ij = a_ij − b_ij. Directly from the definitions, λ̲(A, B) = max_{i∈M} min_{j∈N} c_ij = v_1(C) and λ̄(A, B) = min_{i∈M} max_{j∈N} c_ij; if C is symmetric, then min_{i∈M} max_{j∈N} c_ij = min_{j∈N} max_{i∈M} c_ij = v_2(C).

Hence λ̲(C) = v_1(C) and λ̄(C) = v_2(C) if C is symmetric, in particular when C = A − B and both A and B are symmetric. Thus, if C is symmetric and has a saddle point, say (r, s), then λ̲ = c_rs = λ̄ by Theorem 5, and so either c_rs is the unique generalized eigenvalue or there is no generalized eigenvalue at all. The next two examples confirm that both cases are possible.

Example 12. If
then C = A is symmetric and has a saddle point (1, 2) of value 1, which is therefore the unique candidate for a generalized eigenvalue. If there were an associated eigenvector x = (x_1, x_2)^T, we could assume without loss of generality that x_1 = 0, and the individual terms in Ax and λBx are as described in the following matrices. The first equation implies 1 + x_2 ≥ 2 and the second 1 + x_2 ≤ 1; thus Λ(A, B) = ∅.
Example 13. If A and B are such that C is the same as in the previous example, then λ = 1 is again the unique candidate for a generalized eigenvalue. Now the zero vector x = (0, 0)^T is an associated eigenvector, and thus Λ(A, B) = {1}.
The following example shows that Λ(A, B) may be nonempty even if C is symmetric and has no saddle point.

Example 14. Here C = A − B is a symmetric matrix without a saddle point, but λ = 2 is a generalized eigenvalue with associated eigenvector x = (0, 1)^T.
We summarize as follows.

Theorem 15. Let A, B ∈ R^{m×n} be such that C = A − B is symmetric. If C has a saddle point (r, s), then Λ(A, B) ⊆ {c_rs}.

4. Generalized eigenproblem when B is an outer product. Suppose that B = v ⊗ w^T, where v = (v_1, . . . , v_m)^T ∈ R^m and w = (w_1, . . . , w_n)^T ∈ R^n, that is, b_ij = v_i ⊗ w_j for all i and j. Let V = diag(v) and W = diag(w), and let 0 denote the zero matrix (the matrix with all entries equal to 0), so that B = V ⊗ 0 ⊗ W. Then (10) reads Ax = λV 0W x and is equivalent to V^{-1}Ax = λ0W x. Set x = W^{-1}y, where y = (y_1, . . . , y_n)^T; then (10) is equivalent to (V^{-1}AW^{-1})y = λ0y. Hence for solving (10) with B an outer product it can be assumed without loss of generality that B is the zero matrix 0. In such a system the right-hand side of each equation is λ(x_1 ⊕ x_2 ⊕ · · · ⊕ x_n), and thus an x ≠ ε satisfying Ax = λ0x exists if and only if there is a z ∈ R satisfying
(11) Ā ⊗ x = (z, z, . . . , z)^T,
where Ā is obtained from A by adding an extra row whose every entry is λ. Clearly, Ā is an (m + 1) × n matrix. Theorem 7 and Proposition 9 enable us to solve such systems, and we use them in the next two propositions. It will be useful to denote λ_0 = min_{j∈N} max_{i∈M} a_ij, M̄_j = M_j(Ā) and N̄_i = N_i(Ā) (with M̄ = M ∪ {m + 1}), L = min_{i∈M} min_{j∈N_i} a_ij, and U = min_{i∈M} max_{j∈N_i} a_ij.

Proposition 16. Let λ ∈ R. Then λ ∈ Λ(A, 0) if and only if ∪_{j∈N} M̄_j = M̄.

Proof. The proof follows straightforwardly from the previous discussion, Theorem 7, and Proposition 9.
Proposition 17. For any A ∈ R^{m×n}, Λ(A, 0) ⊆ [min_{j∈N} max_{i∈M} a_ij, max_{j∈N} max_{i∈M} a_ij].

Proof. A real number λ is in Λ(A, 0) if and only if the system (11) has a solution for some z ∈ R. This is a one-sided system whose solvability does not depend on z, because the right-hand side is a constant vector. The solvability criterion is given in Theorem 7. We apply this criterion to Ā using Proposition 9, first to each of the first m rows and then to the last row. The latter is equivalent to λ ≥ min_{j∈N} max_{i∈M} a_ij, which proves the lower bound. The first is equivalent to the requirement that for every i ∈ M there exists a j ∈ N satisfying a_ij ≥ max_{r∈M} a_rj ⊕ λ.
Since max_{r∈M} a_rj ⊕ λ ≥ max_{r∈M} a_rj ≥ a_ij, it follows that a_ij = max_{r∈M} a_rj ⊕ λ = max_{r∈M} a_rj and λ ≤ max_{r∈M} a_rj ≤ max_{j∈N} max_{i∈M} a_ij, which proves the upper bound.
Proposition 17 does not provide any tool for checking whether Λ(A, 0) is nonempty. We give this answer next.

Proposition 18. The following statements are equivalent: (a) Λ(A, 0) ≠ ∅; (b) ∪_{j∈N} M_j = M; (c) N_i ≠ ∅ for every i ∈ M.

Proof. The equivalence of (b) and (c) has been shown in Proposition 6. We prove the equivalence of (a) and (b). The second implication is trivial, so suppose now that λ ∈ Λ(A, 0). Hence (11) has a solution with this value of λ, and thus ∪_{j∈N} M̄_j = M̄. Let i ∈ M; then i ∈ M̄_j for some j ∈ N and therefore also i ∈ M_j, because for any j ∈ N the set M̄_j either coincides with M_j, or is M_j ∪ {m + 1}, or is just {m + 1}. Statement (b) now follows.

Proposition 20. L = λ_0 for any A ∈ R^{m×n}.

Proof. There exist r ∈ M and s ∈ N_r such that a_rs = max_{i∈M} a_is = min_{j∈N} max_{i∈M} a_ij = λ_0; so a_rs is the smallest column maximum in A. Let k ∈ M. The quantity min_{l∈N_k} a_kl is the smallest of all column maxima appearing in row k of A (recall that this value is +∞ if N_k = ∅). Hence min_{l∈N_k} a_kl ≥ a_rs, and therefore also L = min_{k∈M} min_{l∈N_k} a_kl ≥ a_rs = λ_0.
On the other hand, L = min_{k∈M} min_{l∈N_k} a_kl ≤ min_{l∈N_r} a_rl = a_rs = λ_0.

Proposition 21. If Λ(A, 0) ≠ ∅, then L = min Λ(A, 0) and U = max Λ(A, 0).

Proof. The lower bounds in Propositions 17 and 21 coincide by Proposition 20, so we need only prove the upper bound and its tightness. Suppose without loss of generality that Λ(A, 0) ≠ ∅, and so N_i ≠ ∅ for every i ∈ M. Let λ = U = min_{i∈M} max_{j∈N_i} a_ij. Then λ = max_{i∈M} a_is for some s ∈ N, and thus m + 1 ∈ M̄_s. At the same time, if r ∈ M, then max_{j∈N_r} a_rj ≥ min_{i∈M} max_{j∈N_i} a_ij = λ. Therefore, r ∈ M̄_j for some j ∈ N_r. This shows that U ∈ Λ(A, 0) by Proposition 16.
Suppose now that λ > U. Then λ > max_{j∈N_i} a_ij for some i ∈ M. Hence λ > a_ij for all j ∈ N_i, and so i ∉ M̄_j for all j ∈ N_i. Since also i ∉ M_j for all j ∉ N_i, and for every j ∈ N the set M̄_j is M_j, M_j ∪ {m + 1}, or {m + 1}, we have that i ∉ M̄_j for all j ∈ N, and thus λ ∉ Λ(A, 0) by Proposition 18.

Theorem 22. If Λ(A, 0) ≠ ∅, then Λ(A, 0) = [L, U].

Proof. Suppose that Λ(A, 0) ≠ ∅. Due to Proposition 21 we may also assume that L < U, and we need only prove that (L, U) ⊆ Λ(A, 0). Let λ ∈ (L, U). If i ∈ M, then λ < U ≤ max_{j∈N_i} a_ij = a_it for some t ∈ N_i. Hence i ∈ M_t = M̄_t, and so i ∈ ∪_{j∈N} M̄_j. On the other hand, λ > L = a_rs = max_{i∈M} a_is, where r and s are as in the proof of Proposition 20; hence λ is the maximum of column s in Ā, and thus m + 1 ∈ M̄_s. We conclude that ∪_{j∈N} M̄_j = M̄, and so λ ∈ Λ(A, 0) by Proposition 16.
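The description of Λ(A, 0) can be checked numerically on small examples. The sketch below (matrix ad hoc; the sets N_i are assumed nonempty) computes L and U and tests membership of λ by solving the one-sided system (11) via its principal solution (Theorem 7):

```python
# A numerical sanity check (matrix ad hoc) for the case B = 0: L and U are
# computed from the sets N_i (columns whose maximum is attained in row i;
# assumed nonempty here), and membership of lam in Lambda(A, 0) is tested by
# solving the one-sided system (11) via its principal solution (Theorem 7).
def maxplus_mv(A, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def is_eigenvalue(A, lam):
    Abar = A + [[lam] * len(A[0])]          # extra row of lam's
    z = [0] * len(Abar)                     # any constant right-hand side
    xbar = [min(zi - row[j] for zi, row in zip(z, Abar))
            for j in range(len(A[0]))]
    return maxplus_mv(Abar, xbar) == z

def L_and_U(A):
    m, n = len(A), len(A[0])
    colmax = [max(A[i][j] for i in range(m)) for j in range(n)]
    N = [[j for j in range(n) if A[i][j] == colmax[j]] for i in range(m)]
    L = min(min(A[i][j] for j in N[i]) for i in range(m))
    U = min(max(A[i][j] for j in N[i]) for i in range(m))
    return L, U

A = [[0, 3],
     [2, 1]]
print(L_and_U(A))                                   # (2, 2)
print([lam for lam in (1, 2, 3) if is_eigenvalue(A, lam)])  # [2]
```

For this A we get L = U = 2, and among the sampled values only λ = 2 passes the membership test, in agreement with the interval description.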
5. Two-dimensional generalized eigenproblem. In this section we give a complete description of the spectrum of (A, B) and all eigenspaces, where A, B ∈ R^{m×2}.
The case when x_1 = ε reduces the generalized eigenproblem to the question of whether the second columns of A and B are proportional (the coefficient of proportionality is then the unique generalized eigenvalue). This holds similarly for x_2 = ε, so we will restrict our attention to the task of finding finite x. By homogeneity of V(A, B, λ) we can assume that x_1 = 0. We will therefore study the one-variable problem
(14) a_i1 ⊕ a_i2 x_2 = λb_i1 ⊕ λb_i2 x_2, i ∈ M.
Before we answer the main question, we will show in subsection 5.1 how to find all solutions of the two-sided systems
(15) Ax = Bx, x ∈ R^2,
for A, B ∈ R^{m×2}, and then in subsection 5.2 we show how to solve (14) for m = 2.
The following technical lemma will be useful.
Lemma 23 (cancellation rule). Let v, w, a, b ∈ R, a < b. Then for any real x we have v ⊕ ax = w ⊕ bx if and only if v = w ⊕ bx.

Proof. The proof is straightforward; see also [12, Lemma 7.4.1].
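Our reading of the cancellation rule — for a < b, max(v, a + x) = max(w, b + x) holds exactly when v = max(w, b + x) — can be sanity-checked on random data (a hedged check, not a proof):

```python
# A hedged random check (not a proof) of our reading of the cancellation
# rule: for a < b, max(v, a + x) == max(w, b + x) iff v == max(w, b + x).
import random

random.seed(0)
for _ in range(1000):
    v, w, a, b, x = (random.randint(-5, 5) for _ in range(5))
    if a >= b:
        continue
    lhs = max(v, a + x) == max(w, b + x)
    rhs = v == max(w, b + x)
    assert lhs == rhs
print("cancellation rule holds on all sampled instances")
```

Intuitively, when a < b the term a + x can never be the strict maximum on the left without forcing a contradiction on the right, so it can be dropped.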
This section can be seen as an independent generalization of the results for 2 × 2 matrices in [6] to m × 2 matrices.

5.1. Two-dimensional two-sided systems. Suppose that a system
(16) a_i1 ⊕ a_i2 x_2 = b_i1 ⊕ b_i2 x_2, i ∈ M,
is given (as explained before, x_1 = 0 is assumed without loss of generality). Let us denote V_i = {x_2 ∈ R : a_i1 ⊕ a_i2 x_2 = b_i1 ⊕ b_i2 x_2} and V = ∩_{i∈M} V_i. We will explicitly describe each V_i. Let us apply the cancellation rule of Lemma 23 to (16). For every i ∈ M there are either two, one, or no cancellations.
(i) If there is no cancellation, then a_i1 = b_i1 and a_i2 = b_i2, and so V_i = R.
(ii) If there is exactly one cancellation, then we can assume without loss of generality that it takes place on the left-hand side, and we consider two cases. Either a_i1 < b_i1 and a_i2 = b_i2, so that (16) reduces to a_i2 x_2 = b_i1 ⊕ b_i2 x_2 and V_i = [b_i1 b_i2^{-1}, +∞), or a_i1 = b_i1 and a_i2 < b_i2, so that (16) reduces to a_i1 = b_i1 ⊕ b_i2 x_2 and V_i = (−∞, b_i1 b_i2^{-1}].

(iii) If there are two cancellations and they take place on the same side, then this side becomes ε, yielding V_i = ∅. If the two cancellations take place on different sides, then either a_i1 > b_i1 and a_i2 < b_i2, so that (16) reduces to a_i1 = b_i2 x_2 and V_i = {a_i1 b_i2^{-1}}, or a_i1 < b_i1 and a_i2 > b_i2, so that (16) reduces to a_i2 x_2 = b_i1 and V_i = {b_i1 a_i2^{-1}}.

Since each V_i obtained above is a closed interval (possibly a singleton or the empty set) and can be found in a constant number of operations, the intersection V = ∩_{i∈M} V_i is also a closed interval (possibly a singleton or the empty set) and can be found in O(m) time.
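The case analysis above translates directly into code. The following hedged sketch (function names hypothetical) returns each V_i as a pair of interval endpoints, with None for the empty set, and intersects them in O(m) time:

```python
import math

# A hedged implementation (function names hypothetical) of the case analysis
# above: each V_i is returned as the pair of endpoints of a closed interval,
# or None for the empty set; V intersects all of them in O(m) time.
def Vi(a1, a2, b1, b2):
    if a1 == b1 and a2 == b2:        # (i) no cancellation
        return (-math.inf, math.inf)
    if a1 < b1 and a2 == b2:         # (ii) constant term cancels
        return (b1 - b2, math.inf)
    if a1 > b1 and a2 == b2:
        return (a1 - a2, math.inf)
    if a1 == b1 and a2 < b2:         # (ii) x2-term cancels
        return (-math.inf, b1 - b2)
    if a1 == b1 and a2 > b2:
        return (-math.inf, a1 - a2)
    if a1 > b1 and a2 < b2:          # (iii) cancellations on different sides
        return (a1 - b2, a1 - b2)
    if a1 < b1 and a2 > b2:
        return (b1 - a2, b1 - a2)
    return None                      # (iii) both cancellations on one side

def V(A, B):
    lo, hi = -math.inf, math.inf
    for (a1, a2), (b1, b2) in zip(A, B):
        iv = Vi(a1, a2, b1, b2)
        if iv is None:
            return None
        lo, hi = max(lo, iv[0]), min(hi, iv[1])
    return (lo, hi) if lo <= hi else None

print(V([[0, 1], [2, 0]], [[1, 1], [2, 2]]))  # (0, 0): the singleton x2 = 0
```

In the sampled system the first row forces x_2 ≥ 0 and the second forces x_2 ≤ 0, so V is the singleton {0}.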
We conclude with the following.
Proposition 24. The set of finite solutions of (15) is {(x_1, x_1 ⊗ v)^T : x_1 ∈ R, v ∈ V}, where V is a closed interval (possibly a singleton or the empty set) and can be found in O(m) time as described above.
5.2. Generalized eigenproblem for 2 × 2 matrices. Our aim in this subsection is to describe the whole spectrum for the 2 × 2 generalized eigenproblem where all a ij and b ij are real numbers. This is a very special case, already solved in [6], but it will be of key importance for solving the general two-dimensional case in the next subsection. We use a different methodology from that in [6] for solving the general case in subsection 5.3.
Recall that by Proposition 10, Λ(A, B) ⊆ [λ̲, λ̄] (see section 2). Both λ̲ and λ̄ can easily be found (in O(mn) time). We will therefore assume that λ̲ < λ̄: otherwise either Λ(A, B) = ∅ (if λ̲ > λ̄), or λ̲ = λ̄ is the unique candidate for a value in Λ(A, B), and this candidate can be verified easily, for instance, using the tools of subsection 5.1. We will distinguish four cases and describe the spectrum in each of them. The feasibility interval [λ̲, λ̄] has exactly one of four forms. Note that the first case can be equivalently described by the inequalities c_11 < c_12 and c_21 < c_22; similarly for the other cases. The first two cases can be transformed into each other by swapping the variables x_1 and x_2, and similarly for the last two cases. So we essentially have only two cases. In fact we will deal only with the third (and thus also the fourth) case, as the first two will be covered by the discussion in subsection 5.3.
(v) The proof of this part is similar to that of (iv) and is omitted here.

5.3. Generalized eigenproblem: the two-dimensional case. As before, due to the finiteness of x and homogeneity, we assume that x_1 = 0 and therefore study system (17).
We will distinguish three cases. Case 1: If c_i1 = c_i2 for some i ∈ M, then by Proposition 10 this value is the unique candidate for the generalized eigenvalue. Using the method of subsection 5.1 it can readily be checked whether this is indeed an eigenvalue.
Case 2: If c_{i_1 1} < c_{i_1 2} and c_{i_2 1} > c_{i_2 2} for some i_1, i_2 ∈ M, then the 2 × 2 system consisting of rows i_1 and i_2 has a unique eigenvalue by Proposition 25, which is therefore the unique candidate for an eigenvalue of the whole system. This can easily be checked by the method of subsection 5.1.

Case 3: If c_i1 < c_i2 for all i ∈ M (the case when c_i1 > c_i2 for all i ∈ M can be discussed similarly), then for any i ∈ M the feasibility interval for the ith equation alone is [c_i1, c_i2]. Suppose λ ∈ Λ(A, B) ∩ (c_i1, c_i2). Then a_i1 < λb_i1 and a_i2 > λb_i2, and the equation
(21) a_i1 ⊕ a_i2 x_2 = λb_i1 ⊕ λb_i2 x_2
using cancellations reduces to a_i2 x_2 = λb_i1.
Hence x_2 = λb_i1 a_i2^{-1}, and thus the dependence of x_2 on λ over (c_i1, c_i2) is expressed by a linear function (with slope 1). This settles the case when λ is strictly between c_i1 and c_i2. To finish Case 3, suppose now that λ = c_i1 ∈ Λ(A, B). Then a_i1 = λb_i1 and a_i2 > λb_i2, and (21) using cancellations reduces to a_i1 ⊕ a_i2 x_2 = λb_i1, that is, x_2 ≤ λb_i1 a_i2^{-1}. Hence the graph of the dependence of x_2 on λ over [c_i1, c_i2] is a continuous, piecewise linear map; see Figure 1. This is consistent with the previous paragraph, since a_i1 a_i2^{-1} ≤ λb_i1 a_i2^{-1} is equivalent to λ ≥ c_i1.

Finally, we note that for the whole system in Case 3 we have λ̲ = max_{i∈M} c_i1 and λ̄ = min_{i∈M} c_i2. Thus if λ ∈ Λ(A, B) ∩ (λ̲, λ̄), then λ ∈ (c_i1, c_i2) for every i ∈ M, and so x_2 must be the common value of all λb_i1 a_i2^{-1}, i ∈ M. If all the values b_i1 a_i2^{-1}, i ∈ M, coincide, this in turn implies (λ̲, λ̄) ⊆ Λ(A, B). We have proved the following.
Proposition 26. If A, B ∈ R^{m×2} and c_i1 < c_i2 for every i ∈ M, then a generalized eigenvalue in (λ̲, λ̄) exists if and only if all values in (λ̲, λ̄) are generalized eigenvalues. This is equivalent to the requirement that all the values b_i1 a_i2^{-1}, i ∈ M, coincide.
If the condition in Proposition 26 is satisfied, then by continuity also λ̲, λ̄ ∈ Λ(A, B), and in this case Λ(A, B) = [λ̲, λ̄]. If not, then λ̲ and λ̄ have to be examined separately to see whether they are generalized eigenvalues. Figures 2-6 indicate that all possibilities may occur (both λ̲ and λ̄ in Λ(A, B), exactly one of them, or neither).
Summarizing all cases, we have that if A, B ∈ R^{m×2}, then Λ(A, B) can be found in O(m) time and has one of the following forms (the illustrating figures are drawn for m = 2):
• [λ̲, λ̄] (see Figure 2);
• {λ̲, λ̄} (see Figure 3);
• {λ} for a single λ ∈ [λ̲, λ̄] (see Case 2 and Figures 4 and 5);
• ∅ (see Figure 6 and Case 1).
In all cases the eigenspace associated with a fixed generalized eigenvalue is described in Proposition 24.
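For Case 3 the criterion of Proposition 26 is easy to implement. The sketch below (names hypothetical, data ad hoc) returns the endpoints of the open feasibility interval when all b_i1 a_i2^{-1} coincide, and None otherwise:

```python
# A hedged check (names hypothetical, data ad hoc) of the Case 3 criterion
# in Proposition 26: if c_i1 < c_i2 for all i and all values b_i1 - a_i2
# coincide, the whole open feasibility interval consists of eigenvalues
# (with x2 = lam + b_i1 - a_i2); otherwise it contains none.
def case3_spectrum(A, B):
    C = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    assert all(c1 < c2 for c1, c2 in C)   # Case 3 assumption
    lam_lo = max(c1 for c1, _ in C)       # lower feasibility bound
    lam_hi = min(c2 for _, c2 in C)       # upper feasibility bound
    common = {b[0] - a[1] for a, b in zip(A, B)}
    if len(common) == 1 and lam_lo < lam_hi:
        return (lam_lo, lam_hi)           # endpoints of the open interval
    return None

print(case3_spectrum([[0, 2], [1, 3]], [[0, 0], [1, 0]]))  # (0, 2)
print(case3_spectrum([[0, 2], [1, 3]], [[0, 0], [0, 0]]))  # None
```

Whether the endpoints themselves belong to the spectrum must still be examined separately, as discussed above.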