Symmetry-independent stability analysis of synchronization patterns

The field of network synchronization has seen tremendous growth following the introduction of the master stability function (MSF) formalism, which enables the efficient stability analysis of synchronization in large oscillator networks. However, to make further progress we must overcome the limitations of this celebrated formalism, which focuses on global synchronization and requires both the oscillators and their interactions to be identical, while many systems of interest are inherently heterogeneous and exhibit complex synchronization patterns. Here, we establish a generalization of the MSF formalism that can characterize the stability of any cluster synchronization pattern, even when the oscillators and/or their interactions are nonidentical. The new framework is based on finding the finest simultaneous block diagonalization of matrices and does not rely on information about network symmetry. This leads to an algorithm that is error-tolerant and orders of magnitude faster than existing symmetry-based algorithms. As an application, we rigorously characterize the stability of chimera states in networks with multiple types of interactions.

1. Introduction. Coupled oscillator networks have been extensively studied as a fundamental model of collective behavior in complex systems [63,13,40,14,5,15,2]. The field is unique in its close interaction between theoretical developments [48,51,65,46,4] and practical applications [69,19,7,33,38]. A central theme of current research is how to characterize the stability of increasingly complex synchronization patterns in arbitrary network structures. Such patterns can be regarded as forms of cluster synchronization, in which the oscillators form one or more internally synchronized clusters that exhibit mutually distinct dynamics [8,22,9,17,52,70,45,26]. The stability of a synchronization pattern is important because it usually cannot be observed if it is unstable, and thus a bifurcation leading to the loss or restoration of stability has significant ramifications in various biological and technological systems.
In order to perform efficient stability analysis for large networks of coupled oscillators, the key is to first divide the full state space of the variational equation into minimal flow-invariant subspaces (defined below) and then calculate the maximum Lyapunov exponent in each flow-invariant subspace to determine whether perturbations within that subspace would grow. To achieve this for the global synchronization of identical oscillators, the MSF formalism [49] finds a coordinate transformation that diagonalizes the coupling matrix, which in turn decouples the high-dimensional variational equation of the full network into low-dimensional equations describing the evolution of independent perturbation modes. The full equation has a dimension that grows linearly with the network size, while the decoupled equations all have a fixed dimension equal to that of an individual oscillator, irrespective of the network size.
However, when one considers cluster synchronization states, nonidentical oscillators, or disparate interactions, all of which are common in real systems, there are in general two or more non-commuting matrices in the variational equation. Since non-commuting matrices cannot be diagonalized simultaneously (even when the individual matrices can), the MSF formalism is not applicable to these cases. The goal of the current paper is to extend the MSF formalism by proposing a fundamentally new framework based on the theory of matrix *-algebras that addresses these important cases. In particular, we present a highly scalable algorithm that finds a finest simultaneous block diagonalization (SBD) of any given set of self-adjoint matrices, leading to an optimal separation of the perturbation modes and efficient stability analysis of arbitrary synchronization patterns.
Our framework applies to the general class of network dynamical systems described by

(1)  ẋ_i = F_i(x_i) + Σ_{r=1}^{R} σ_r Σ_{j=1}^{n} C_r(i, j) H_r(x_j),  i = 1, …, n,

where x_i is the d-dimensional state vector of the i-th oscillator, n is the number of oscillators, R is the number of interaction types, and the overdot represents the time derivative. Here, F_i : R^d → R^d is the vector field governing the uncoupled dynamics of the i-th oscillator, and C_r = {C_r(i, j)} is a self-adjoint coupling matrix representing interactions of the form H_r and strength σ_r. The synchronization patterns we study can be derived from any balanced equivalence relation [22,28], which is the most general class of patterns for which oscillators in the same cluster can admit equal dynamics for generic F_i and H_r. In general, nodes in a cluster can be separated by nodes from other clusters and do not necessarily form a connected component of the network. The resulting synchronization patterns describe a wide range of network dynamics, including remote synchronization and chimera states. 1

A related extension of the MSF formalism to study cluster synchronization patterns was previously proposed by Pecora and colleagues [50]. That framework was originally developed for networks with adjacency-matrix coupling and has since been extended to diffusively coupled networks [60] and multilayer networks [11]. In those studies, the authors simplified the stability analysis using the machinery of irreducible representations (IRRs) [66], which decouples the variational equation according to symmetries present in the system.
Both the IRR framework and our SBD framework reduce to the MSF formalism for global synchronization of identical oscillators with a single type of interaction.
The key difference between the two frameworks is that the former relies on network symmetry to perform stability analysis, whereas the latter does not. 2 As a result, the IRR framework has to resort to ad hoc modifications when a cluster synchronization pattern is not induced by network symmetry [60,58]. In contrast, our SBD framework does not require any symmetry information to be known in advance and is directly applicable to all cluster synchronization patterns. Moreover, it forgoes the calculation of irreducible representations of network symmetry, which becomes computationally prohibitive very quickly as the number of symmetries grows. This leads to a faster, simpler, and more robust algorithm than existing ones based on the IRR framework and enables the study of complex synchronization patterns in large networks.

1 For ease of presentation, we ground the discussions in this paper on (1), but we note that it is straightforward to generalize our methods beyond ODE settings. For instance, it is possible to introduce coupling delay into the interaction functions, and the resulting delay differential equations can still be analyzed within our framework. Naturally, the framework also applies to discrete-time dynamical systems.

2 We note that identifying synchronization patterns is computationally inexpensive compared to the cost of determining their stability. We thus assume that the synchronization patterns of interest are given before the stability analysis is performed.
The paper is organized as follows. In section 2, we present the concept of matrix *-algebra and a fast algorithm for finding a finest simultaneous block diagonalization for any set of self-adjoint matrices. Then, in section 3, we develop a symmetry-independent framework for the stability analysis of arbitrary cluster synchronization patterns and compare our algorithm with state-of-the-art algorithms based on irreducible representations. We further show in section 4 that our algorithm can be applied to analyze cluster synchronization patterns of nonidentical oscillators and oscillators with multiple types of interactions. The strength of this unified framework is demonstrated with the characterization of permanently stable chimera-like states in multilayer networks. A discussion on open problems and future directions is presented in section 5.

2. Finest simultaneous block diagonalization. Given a set of n × n matrices B = {B_1, …, B_K}, we seek a single coordinate transformation that brings all of them into a common block-diagonal form that is as fine as possible.
An invertible matrix T is said to give a finest simultaneous block diagonalization of the matrix set B if it brings all matrices in B into a common block-diagonal form that cannot be further refined. Equivalently, T decomposes C^n into minimal invariant subspaces under B, such that the j-th common blocks of the matrices in T^{-1} B T only have 0 and C^{n_j} as common invariant subspaces, where n_j is the dimension of the j-th blocks.
To make progress in finding a finest simultaneous block diagonalization, it is beneficial to consider an algebraic structure called a matrix *-algebra. Letting M_n denote the set of all n × n matrices with complex entries, a subset T of M_n is said to be a matrix *-algebra over C if the identity matrix I_n belongs to T and

(2)  A, B ∈ T  ⟹  αA + βB ∈ T, AB ∈ T, A* ∈ T  for all α, β ∈ C,

where * denotes the conjugate transpose. 3 Matrix *-algebras enjoy better properties than matrix algebras because they are closed under the conjugate transpose operation. This makes matrix *-algebras semisimple and thus characterizable by the Artin-Wedderburn theorem [32]. According to structure theorems based on the Artin-Wedderburn theorem (Theorem 3.1 and Theorem 6.1 in Ref. [39]), a matrix *-algebra T can always be decomposed through a unitary transformation P into the direct sum of irreducible matrix *-algebras T_j:

(3)  P* T P = ⊕_{j=1}^{ℓ} ( I_{m_j} ⊗ T_j ),

where T_j ⊆ M_{n_j}, m_j is the multiplicity of T_j, and Σ_{j=1}^{ℓ} n_j m_j = n. The ⊗ symbol denotes the tensor product of matrices (i.e., the Kronecker product), and the summand I_{m_j} ⊗ T_j = { ⊕_{k=1}^{m_j} B : B ∈ T_j } represents m_j copies of the irreducible matrix *-algebra T_j arranged diagonally (not to be confused with ⊕_{k=1}^{m_j} T_j). We say that a matrix *-algebra T_j is irreducible if its matrices only share trivial invariant subspaces (i.e., 0 and C^{n_j}). Equation (3) is the canonical form of an irreducible decomposition of a matrix *-algebra, which is unique up to block permutations and unitary transformations localized within each block.

As a consequence, all the matrices in T can be transformed simultaneously into a block-diagonal form of Σ_{j=1}^{ℓ} m_j blocks through a single unitary matrix P.

3 The results remain applicable if the matrix *-algebras are over R, as in various examples considered throughout the paper. However, working in C both allows complex coupling matrices, which arise for oscillator networks that are naturally expressed using complex vector fields (such as coupled Stuart-Landau oscillators), and can lead to finer block structures when networks are directed.
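For intuition, the canonical form in (3) can be assembled explicitly with Kronecker products. The following toy example is our own (the blocks T_1 and T_2 are made up for illustration): it builds a 5 × 5 matrix with one irreducible 2 × 2 block of multiplicity 2 and one 1 × 1 block of multiplicity 1, so that n = n_1 m_1 + n_2 m_2 = 2·2 + 1·1 = 5.

```python
import numpy as np

T1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])   # an irreducible 2x2 block T_1 with multiplicity m_1 = 2
T2 = np.array([[3.0]])        # a 1x1 block T_2 with multiplicity m_2 = 1

# I_{m_1} (x) T_1 places m_1 copies of T_1 along the diagonal
canonical = np.block([
    [np.kron(np.eye(2), T1), np.zeros((4, 1))],
    [np.zeros((1, 4)),       T2],
])   # total size n = 2*2 + 1*1 = 5
```

Any element of the decomposed algebra has this shape, with the two copies of T_1 forced to be identical.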
A matrix * -algebra T is said to be generated by a set of matrices B if B ⊆ T and every matrix in T can be constructed from I n and B using the operations of matrix * -algebras (i.e., scalar multiplication, matrix addition, matrix multiplication, and conjugate transpose). In order to calculate a transformation matrix P that gives a finest simultaneous block diagonalization of all matrices in T , we propose a new algorithm (Algorithm 1) and refer to the corresponding coordinate transformation as an SBD transformation. The algorithm involves only numerical linear-algebraic calculations and does not require any algebraic structure (e.g., symmetries) to be known in advance.
Algorithm 1 Finding an SBD transformation for a matrix *-algebra generated by a set of n × n matrices B = {B_k} in O(n^3) time. (A MATLAB implementation is available as a GitHub repository at https://github.com/y-z-zhang/net-sync-sym/.)

step 1: generate a self-adjoint matrix B by combining the matrices in B and their conjugate transposes with random coefficients c_k and d_k
step 2: find an eigenvector v_1 of B
steps 3 to 9: find (a basis of) the minimal invariant subspace that contains v_1
3: perform Gram-Schmidt orthonormalization on {v_1, B_k v_1, B*_k v_1}, k = 1, …, K, to obtain a set of orthonormal vectors V
4: let v be a random linear combination of the vectors from V
5: while the images {B_k v, B*_k v}, k = 1, …, K, include vectors that are linearly independent from V do
6:   make these new vectors orthonormal to V and to each other
7:   expand V to include the new vectors
8:   let v be a random combination of the vectors from the expanded V
9: end while
10: let P be a matrix whose columns are made of the vectors from V
steps 11 to 16: find the rest of the minimal invariant subspaces
11: while the matrix P has fewer than n columns do
12:   find an eigenvector v_j of B outside the span of P's column vectors
13:   make v_j orthonormal to the column vectors of P
14:   repeat steps 3 to 9 with v_1 replaced by v_j
15:   add the vectors from V to P as additional columns
16: end while

The idea behind the algorithm is simple. First, pick an eigenvector v_1 of a self-adjoint matrix B = Σ_k (c_k B_k + d_k B*_k), where c_k and d_k are random coefficients drawn from a Gaussian distribution. This eigenvector lies inside one of the minimal invariant subspaces of B with probability 1. Furthermore, all the images of v_1 under {B_k} and {B*_k} must also be inside the same minimal invariant subspace. By running the Gram-Schmidt process on {v_1, B_1 v_1, B*_1 v_1, …, B_K v_1, B*_K v_1} and discarding the linearly redundant vectors, we can obtain a set of orthonormal vectors all inside the same minimal invariant subspace. If these vectors span the entire invariant subspace, then we have discovered a common block and can repeat the process starting from another eigenvector of B outside the discovered minimal invariant subspace. Otherwise, we generate a random linear combination v of the existing orthonormal vectors and "explore" the invariant subspace further by generating images of v under {B_k} and {B*_k}. It is easy to see that a complete basis for a minimal invariant subspace can always be reached after no more than n such iterations. The computational complexity of the algorithm scales as O(n^3)-it can easily handle n × n matrices with n in the range of thousands. 4 This distinguishes Algorithm 1 from the best competing algorithms available [39,34,35], which run in O(n^4) time and are already slow at n ≈ 100. Moreover, applications of those algorithms to network synchronization have been limited to the study of global synchronization [27,72], while our framework enables application of the new algorithm to general synchronization patterns.
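To make the procedure concrete, here is a minimal NumPy sketch of Algorithm 1 for real matrices. This is our own simplified rendering, not the released MATLAB implementation; it favors readability over the optimizations needed to realize the O(n^3) scaling in practice.

```python
import numpy as np

def sbd_transformation(mats, tol=1e-8, seed=0):
    """Sketch of Algorithm 1 for real matrices: returns an orthogonal P whose
    columns span the minimal common invariant subspaces of the matrix
    *-algebra generated by `mats`."""
    rng = np.random.default_rng(seed)
    n = mats[0].shape[0]
    # step 1: a random self-adjoint combination of the generators and their
    # transposes (use conjugate transposes in the complex case)
    B = sum(rng.normal() * (M + M.T) for M in mats)
    _, eigvecs = np.linalg.eigh(B)            # step 2: eigenvectors of B as seeds
    P = np.zeros((n, 0))
    while P.shape[1] < n:                     # steps 11-16
        for v in eigvecs.T:                   # steps 12-13: pick an eigenvector
            w = v - P @ (P.T @ v)             # outside the span of P's columns
            if np.linalg.norm(w) > tol:
                break
        V = [w / np.linalg.norm(w)]
        grew = True
        while grew:                           # steps 3-9: grow an orthonormal
            grew = False                      # basis of the minimal invariant
            v = sum(rng.normal() * u for u in V)   # subspace containing w
            for M in mats:
                for img in (M @ v, M.T @ v):
                    Q = np.column_stack(V + [P])
                    r = img - Q @ (Q.T @ img)      # project out known directions
                    if np.linalg.norm(r) > tol * max(1.0, np.linalg.norm(img)):
                        V.append(r / np.linalg.norm(r))  # new independent vector
                        grew = True
        P = np.column_stack([P] + V)          # steps 10 and 15
    return P

# Example: for a symmetric A, the set {A, A@A} generates a commutative algebra,
# so all minimal invariant subspaces are one-dimensional and P diagonalizes both
A = np.array([[2., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 5.]])
P = sbd_transformation([A, A @ A])
```

For the commuting pair above every common block is 1 × 1; for the matrix sets of the form {E_1, …, E_M, A} considered below, the block sizes returned by the same routine identify how far the perturbation modes decouple.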
In most cases, we are interested in a given set of matrices instead of the full matrix * -algebra. When is Algorithm 1 guaranteed to find a finest simultaneous block diagonalization for a given matrix set B? A sufficient condition is that the matrices in B are self-adjoint.
Proposition 2.1. Given a set of n × n self-adjoint matrices B = {B 1 , · · · , B K }, let T be the matrix * -algebra generated by B. If a unitary matrix P leads to an irreducible decomposition of T , then it also gives rise to a finest simultaneous block diagonalization of B.
Proof. Assume that an invertible matrix T yields a finest simultaneous block diagonalization of the set of self-adjoint matrices B. Since T is generated by B, all matrices in T^{-1} T T admit the same block structure shared by the matrices in the generating set T^{-1} B T. Since this block structure is the finest for B, it is also the finest for T. By definition, the unitary matrix P yields a finest simultaneous block diagonalization of the matrices in T. Because B and T have the same finest common block structure, P also gives a finest simultaneous block diagonalization of the matrix set B.
Taken together, Proposition 2.1 and Algorithm 1 establish a powerful framework for finding a finest simultaneous block diagonalization for any set of self-adjoint matrices. In the sections below, we show how SBD transformations can be used to characterize the stability of synchronization patterns in an efficient and unified fashion. We will focus mainly on oscillators coupled through undirected networks (i.e., self-adjoint coupling matrices). The possibility of extending the current framework to directed networks will be discussed in section 5.

3. Cluster synchronization from a symmetry-independent perspective.
Consider a network of n identical d-dimensional oscillators forming a synchronization pattern of M clusters. The cluster synchronization subspace can be defined as an M d-dimensional subspace of the full nd-dimensional state space, in which oscillators from the same cluster have exactly the same dynamics. Parallel perturbations are perturbations inside the cluster synchronization subspace-they do not destroy the cluster synchronization pattern. Transverse perturbations are the ones that are perpendicular to the cluster synchronization subspace-all of them must have negative Lyapunov exponents under the evolution of the variational equation in order for the cluster synchronization pattern to be stable.
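As a minimal illustration of this split, consider scalar states (d = 1) and a hypothetical two-cluster pattern: the parallel component of a perturbation is its projection onto the normalized cluster indicator vectors, and the transverse component is the remainder.

```python
import numpy as np

# Hypothetical pattern: n = 5 nodes with scalar states (d = 1), forming
# M = 2 clusters, C_1 = {1, 3, 5} and C_2 = {2, 4} (0-indexed below)
n, clusters = 5, [[0, 2, 4], [1, 3]]

# Orthonormal basis of the M-dimensional cluster synchronization subspace:
# one normalized indicator vector per cluster
S = np.column_stack([np.eye(n)[:, c].sum(axis=1) / np.sqrt(len(c))
                     for c in clusters])

x = np.random.default_rng(1).normal(size=n)  # a generic perturbation
parallel = S @ (S.T @ x)       # parallel component: preserves the pattern
transverse = x - parallel      # transverse component: must decay for stability

# `parallel` is constant within each cluster; `transverse` sums to zero there
```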
One key step in analyzing the stability of a synchronization pattern amounts to finding a coordinate system that separates the evolution of transverse and parallel perturbation modes. The coordinate transformation should also decouple the transverse perturbation modes as much as possible. The current state-of-the-art method exploits symmetries in the network structure and uses the machinery of group representation theory [50]. In this section, we establish a symmetry-independent framework based on SBD transformations and compare Algorithm 1 with symmetry-based algorithms in terms of speed, simplicity, and error tolerance.
3.1. The symmetry perspective. A network of n identical oscillators with adjacency-matrix coupling can be described as the following special case of (1):

(4)  ẋ_i = F(x_i) + σ Σ_{j=1}^{n} A(i, j) H(x_j),  i = 1, …, n,

where A = {A(i, j)} is the self-adjoint adjacency matrix encoding the structure of the underlying network.
To study the stability of cluster synchronization states, it is necessary to first identify possible synchronization patterns supported by (4), a subset of which is determined by the symmetries of the network. The network symmetries, described by the graph automorphism group Aut(A), can be computed using discrete algebra software [61]. The nodes can be partitioned into disjoint clusters: two nodes belong to the same cluster if there is a symmetry operation (i.e., a node permutation that respects the adjacency matrix) from the automorphism group that maps one node to the other. In other words, nodes are partitioned according to the orbits under the action of Aut(A) [50]. This is the coarsest synchronization pattern that can be derived from network symmetry. If one instead considers a subgroup G of Aut(A), the nodes can then be partitioned into finer clusters according to the orbits under the action of G. We call these partitions the orbital partitions of the network and refer to the corresponding clusters as symmetry clusters. Each element g ∈ G can be represented by a permutation matrix R_g, under whose action the adjacency matrix A stays invariant, i.e., R*_g A R_g = A. The set of matrices {R_g} forms a permutation representation of the subgroup G. As a result of network symmetry, nodes in each symmetry cluster receive the same input from other clusters and admit equal dynamics. In other words, cluster synchronization patterns based on symmetry clusters are guaranteed to be flow invariant (i.e., the corresponding subspaces of the state space are invariant under the time evolution of the system).
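A small, self-contained example (our own, using a 5-node path graph) illustrates the invariance condition R*_g A R_g = A and the resulting orbital partition:

```python
import numpy as np

# Hypothetical example: the 5-node path graph 1-2-3-4-5. Its only nontrivial
# automorphism is the reflection g = (1 5)(2 4), which fixes the middle node.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

perm = [4, 3, 2, 1, 0]       # node i is mapped to node perm[i]
R = np.eye(5)[perm]          # permutation matrix R_g representing g

print(np.allclose(R.T @ A @ R, A))   # True: R_g* A R_g = A, so g is a symmetry

# Orbits under the group {e, g}: {1, 5}, {2, 4}, {3} -> three symmetry
# clusters, i.e., the orbital partition induced by this subgroup of Aut(A)
```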
Once a subgroup G and the corresponding orbital partition have been identified, one can find the associated cluster synchronization manifold by evolving the dynamical equation on a quotient network in which each symmetry cluster is represented by a single node. Equation (4) can then be linearized around the cluster synchronization manifold, leading to a variational equation that determines the stability of the corresponding synchronization pattern:

(5)  δẊ = [ Σ_{m=1}^{M} E_m ⊗ JF(s_m) + σ Σ_{m=1}^{M} A E_m ⊗ JH(s_m) ] δX,

where s_m is the synchronization trajectory of the m-th cluster, δX = (δx_1, …, δx_n) is the nd-dimensional perturbation vector, and J is the Jacobian operator. Let C_m denote the set of nodes in the m-th cluster. Then E_m, defined by E_m(i, i) = 1 if i ∈ C_m and E_m(i, i) = 0 otherwise, is an n × n diagonal matrix encoding the nodes in the m-th cluster. It follows that Σ_{m=1}^{M} E_m = I_n. A key insight from Ref. [50] is that there exists a coordinate choice under which the transformed adjacency matrix Ã = Q* A Q has a block-diagonal form that matches the cluster structure. They termed it the IRR coordinates, since the transformation matrix Q decomposes the permutation representation {R_g} into the direct sum of irreducible representations of G. In particular,

(6)  Ã = Q* A Q = ⊕_{j=1}^{ℓ} ( I_{m_j} ⊗ Ã^(j) ),

where ℓ is the number of distinct IRRs present in {R_g}, the j-th block Ã^(j) is an n_j × n_j matrix with n_j equal to the multiplicity of the j-th IRR {R̃^(j)_g}, and m_j is the multiplicity of Ã^(j) as well as the dimension of R̃^(j)_g. The trivial IRR (which maps every g ∈ G to 1) is always present with multiplicity M, so there is always an M × M block in Ã corresponding to the dynamics inside the cluster synchronization subspace [50]. In this way, Q completely decouples the transverse perturbations from the parallel ones and also separates the transverse perturbation modes.
3.2. The symmetry-independent perspective. The IRR transformation decouples (5) by exploiting the network symmetry and its irreducible representations. The end result of this transformation is a block diagonalization of the matrix set A = {E 1 , · · · , E M , A}. But is finding the irreducible representations the most effective way to block diagonalize these matrices? Below we show that the answer is negative.
The readers might have noticed the parallel between the block forms in (3) and (6), which hints at a deep connection between the IRR and the SBD transformations. We now make this parallel precise by looking at the IRR transformation through the lens of matrix * -algebras.
To proceed, we introduce the commutant algebra T′ of a matrix *-algebra T ⊆ M_n, which is the set of all matrices C ∈ M_n that commute with every element in T. T′ is also a matrix *-algebra and enjoys the following dual relations with T [35]: (a) T′′ = T, known as the double commutant property. (b) If the irreducible decomposition of T has blocks of sizes n_j and multiplicities m_j, then the irreducible decomposition of T′ is the direct sum of blocks of sizes m_j and multiplicities n_j. Given a subgroup G of Aut(A), let S be the set of all n × n matrices over C that are invariant under the action of G. That is,

(8)  S = { S ∈ M_n : R*_g S R_g = S for all g ∈ G }.

First, we note that S is a matrix *-algebra. Second, we make a key observation involving the commutant algebra S′. It is clear from (8) that S is the commutant algebra of the matrix *-algebra R generated by {R_g}. According to the double commutant property, we have S′ = R′′ = R, and hence {R_g} is a generating set of S′. Since the IRR transformation Q decomposes {R_g} into the form R̃_g = Q* R_g Q = ⊕_{j=1}^{ℓ} ( R̃^(j)_g ⊗ I_{n_j} ), the irreducible decomposition of S′ has blocks of sizes m_j and multiplicities n_j. 5 Next, we utilize the relation S R_g = R_g S, or equivalently, S̃ R̃_g = R̃_g S̃, to show that Q performs the irreducible decomposition of S. Writing out R̃_g and S̃ more explicitly, one can see that the commutativity relation implies S̃_ij ( R̃^(j)_g ⊗ I_{n_j} ) = ( R̃^(i)_g ⊗ I_{n_i} ) S̃_ij. According to Schur's lemma [66,32], S̃_ij = 0 when i ≠ j (i.e., when the irreducible representations {R̃^(i)_g} and {R̃^(j)_g} are non-isomorphic), and S̃_ij = I_{m_j} ⊗ S̃^(j) when i = j, where S̃^(j) is an n_j × n_j complex matrix. Taken together,

S̃ = Q* S Q = ⊕_{j=1}^{ℓ} ( I_{m_j} ⊗ S̃^(j) ).

Thus, Q simultaneously block diagonalizes all matrices in S into blocks of sizes n_j, each of multiplicity m_j. Based on the dual relation (b) between the commutant algebras, we see that this is the irreducible decomposition of S.
Accordingly, the IRR transformation can be interpreted within the framework of matrix * -algebra: it performs the irreducible decomposition of the matrix * -algebra S formed by all n×n matrices satisfying the symmetry condition (8). This interpretation explains the parallel between (3) and (6). However, S may not always be the best matrix * -algebra to work with for the stability analysis of synchronization patterns.
In particular, notice that the matrix *-algebra T generated by the matrix set A = {E_1, …, E_M, A} is always a subalgebra of S, as E_m and A all share the symmetries defined by the subgroup G:

R*_g E_m R_g = E_m  and  R*_g A R_g = A  for all g ∈ G and m = 1, …, M.

This means that the IRR transformation could be considering the simultaneous block diagonalization of an unnecessarily large set of matrices, and S might have a coarser irreducible decomposition than its subalgebra T. Thus, an SBD transformation applied directly to A will always give a block structure on par with or finer than the one found by the IRR transformation.
3.3. Optimal separation of perturbation modes. Next, we further characterize the decoupling among perturbation modes achieved by an SBD transformation. Given an adjacency matrix A and a flow-invariant synchronization pattern described by {E m }, we divide the perturbation modes into three classes according to their dynamical characteristics: (I) perturbation modes inside the cluster synchronization subspace; (II) perturbation modes transverse to the cluster synchronization subspace and belonging to a d-dimensional (the dimension of a single oscillator) flow-invariant subspace under the variational equation (5); (III) perturbation modes transverse to the cluster synchronization subspace and that do not belong to a d-dimensional flow-invariant subspace under the variational equation (5). Class I perturbation modes do not destroy the cluster synchronization pattern, while those from classes II and III do.
From an algebraic point of view, the class I perturbation modes correspond to the M -dimensional invariant subspace spanned by the diagonal vectors of matrices in {E m }. Perturbation modes of class II are associated with a one-dimensional invariant subspace under the matrix set A, whereas perturbation modes of class III are induced by higher-dimensional invariant subspaces.
In particular, perturbation modes in Class II are localized inside individual clusters and each of them is decoupled from all other perturbation modes. This is the basis of the so-called isolated desynchronization [50], in which an individual cluster can desynchronize without destroying synchronization in other clusters despite their mutual influence through inter-cluster coupling. Class III perturbation modes arise from intertwined clusters [50,16]. Two clusters are intertwined if there exist transverse perturbations inside one cluster that are coupled to transverse perturbations in the other. It is worth noting that not all transverse perturbations inside intertwined clusters belong to Class III, as some of them form d-dimensional invariant subspaces on their own and are thus Class II.
An SBD transformation finds the optimal separation of perturbation modes that can be inferred from the network structure and cluster patterns. In particular, it is guaranteed to separate the parallel perturbations (Class I) from the transverse ones (Class II and III), completely decouple the perturbation modes in Class II, and separate the ones in Class III as much as possible. This separation is "robust" in the sense that it works for any intrinsic dynamics F and coupling function H, since it is induced solely by the algebraic structure of the system. For some special F and H, the flow-invariant subspaces (under the variational equation) induced by the minimal invariant subspaces (under the matrix set A) may not be minimal and can be further decomposed. 6 But such special flow-invariant subspaces are not robust and will be destroyed by small changes to F and/or H.

3.4. Treating clusters not induced by symmetry.
The strength of the SBD framework becomes even more evident when the oscillators are diffusively coupled, which is another natural class of coupling schemes [49,41] featured prominently in real systems, such as consensus networks [33,42]. This class of systems is a special case of (1) and can be described by

(10)  ẋ_i = F(x_i) − σ Σ_{j=1}^{n} L(i, j) H(x_j),  i = 1, …, n,

where the Laplacian matrix L = {L(i, j)} is defined as L(i, j) = δ_ij μ_i − A(i, j), for δ_ij denoting the Kronecker delta and μ_i = Σ_j A(i, j) representing the indegree of node i. The main difference between systems with Laplacian-matrix coupling and those with adjacency-matrix coupling is that the interaction between two oscillators vanishes when they synchronize.
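For concreteness, the Laplacian of a small hypothetical network can be built directly from this definition:

```python
import numpy as np

# Hypothetical 4-node undirected network (symmetric, binary adjacency matrix)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# L(i,j) = delta_ij * mu_i - A(i,j), with mu_i = sum_j A(i,j)
L = np.diag(A.sum(axis=1)) - A

print(L.sum(axis=1))   # zero row sums: the coupling vanishes on synchronized states
```

The zero row sums are exactly the non-symmetry feature of L referred to below: a uniform state vector lies in the kernel of L, so the diffusive coupling term drops out once the oscillators synchronize.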
As a consequence of the diffusive coupling, additional flow-invariant synchronization patterns can emerge that are not predicted by network symmetry. These additional patterns are called Laplacian clusters, and they can be formed by merging some of the symmetry clusters [60]. Since an adjacency matrix and its corresponding Laplacian matrix have exactly the same symmetry (i.e., Aut(A) = Aut(L)), the original IRR transformation cannot distinguish the systems described by equations (4) and (10). Thus, it fails to decouple the parallel and transverse perturbations if applied directly. In Ref. [60], it was proposed that one can apply the IRR transformation to the adjacency matrix of the diffusive network first, then perform additional local coordinate transformations to account for the merging of symmetry clusters induced by the diffusive coupling. This method provides valuable insight, but at the same time it adds an additional layer of complexity on top of the irreducible representation calculations. In fact, all necessary information for the separation of perturbation modes is already encoded in the Laplacian matrix L and Laplacian clusters {C m }. Accordingly, neither network symmetry nor local coordinate transformations are needed in order to properly decouple the variational equation.
What the IRR transformation misses is the diffusive nature of the Laplacian-matrix coupling. Due to the IRR transformation's inability to detect non-symmetry features (e.g., the zero row sums of the Laplacian matrix), one has to perform local coordinate transformations to "manually" incorporate that information. An SBD transformation, on the other hand, does not assume any symmetry a priori. It can thus be applied directly to the matrix set L = {E_1, …, E_M, L} and automatically takes the additional features of L into account. As in the case of adjacency-matrix coupling, an SBD transformation can find the optimal separation of perturbation modes for any flow-invariant synchronization pattern under the Laplacian-matrix coupling.
More recently, Ref. [53] introduced the concept of external equitable partition as a new way of finding flow-invariant synchronization patterns in Laplacian-matrix coupled systems. An external equitable partition splits a network into input clusters such that all nodes inside a cluster connect to the same number of nodes in each of the other clusters. This definition guarantees that any external equitable partition corresponds to a flow-invariant synchronization pattern and is more general than orbital partitions. One example of an external equitable partition that is not an orbital partition is presented in Figure 1(a). Ref. [53] also proposed the only other widely known symmetry-independent method for the stability analysis of cluster synchronization patterns, which is based on the concept of quotient graphs and uses results from algebraic graph theory. While it succeeds in decoupling the parallel and transverse perturbations, it in general fails to further separate the transverse perturbation modes. In contrast, an SBD transformation not only separates the parallel perturbations from the transverse ones but also optimally decouples the transverse perturbations (compare Figure 1(b) to Figure 1(c)). It is still possible to modify the symmetry-based IRR framework for the stability analysis of input clusters by introducing additional local coordinate transformations [58]. However, the SBD framework can be applied more directly to the problem and, as we show below, leads to a much more scalable algorithm.
3.5. Computational efficiency and error tolerance. An SBD transformation is not only directly applicable to more synchronization patterns but is also demonstrably more efficient to compute. The computational complexity of the SBD algorithm (Algorithm 1) scales with the network size as O(n 3 ). Moreover, the computational cost is independent of the number of symmetries in the network, as demonstrated in Figure 2(a). This gives the SBD algorithm a huge computational advantage over the algorithm based on the IRR transformation [23], which relies on the computation of irreducible group representations and becomes inefficient when a large number of symmetries is present. Since the complete graph with n nodes has the symmetric group S n as its automorphism group and |S n | = n!, the number of symmetries can grow as the factorial of network size n. Combined with the observation that the CPU time scales with the number of symmetries as a power law for the IRR algorithm (orange dots in Figure 2(a)), it follows that the computational cost of the IRR algorithm can grow super-exponentially with the network size. This is further illustrated in Figure 2(b), where the gap between the worst-case CPU time for the two algorithms grows rapidly with n, and the IRR algorithm can be more than six orders of magnitude slower than the SBD algorithm even for networks of moderate sizes (e.g., n = 14). We note that another polynomial-time algorithm exists, which applies to symmetry clusters [16]. However, that algorithm was designed to separate clusters that synchronize independently of each other, and thus is not intended to have the same decoupling power as the IRR and SBD algorithms for intertwined clusters and their generalizations.
Another aspect in which the SBD algorithm excels is its error tolerance. Algorithm 1 can be easily adapted to treat cases in which the coupling matrices contain small errors. In this case one can simply replace the linear dependence tests by approximate linear dependence tests. That is, a vector is regarded as linearly independent from a set of vectors if it cannot be expressed as a linear combination of the existing vectors within some preset tolerance. Unlike the IRR algorithm, which works best when the entries of the adjacency matrix are exact, the SBD algorithm, with its error control capability, has the flexibility to deal with noise and uncertainty in real data.
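Such an approximate linear-dependence test can be implemented with a least-squares fit; a minimal sketch (the function name and the relative-residual criterion are our own choices, not a prescription from the paper):

```python
import numpy as np

def is_approx_dependent(basis, v, tol=1e-6):
    """Approximate linear-dependence test: regard v as a linear
    combination of the columns of `basis` if its least-squares
    residual is below `tol` (scaled by ||v||). With tol -> 0 this
    reduces to an exact linear dependence test."""
    if basis.size == 0:
        return np.linalg.norm(v) <= tol
    # Best approximation of v within the span of the existing vectors.
    coeffs, *_ = np.linalg.lstsq(basis, v, rcond=None)
    residual = np.linalg.norm(basis @ coeffs - v)
    return residual <= tol * max(1.0, np.linalg.norm(v))
```

A vector lying in the span up to a tiny perturbation is then accepted as dependent, which is exactly the behavior needed when the coupling matrices carry small measurement errors.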
As an example, we consider a 30-node network generated by deleting 6 randomly selected edges from a complete graph. To each entry of the otherwise binary adjacency matrix, we add a mismatch term drawn from a normal distribution with zero mean and a standard deviation of 10^{-3}. These mismatches can model hardware imperfections and measurement errors in real systems. We then equip each node with the dynamics of the electro-optic oscillator used for the first experimental demonstration of chimera states [24], described by equation (11), where θ_i is the phase of the i-th oscillator, β is the strength of the self-feedback coupling, and δ = 0.525 is an offset introduced to suppress the trivial solution at the origin. The nonlinear function I(θ) = [1 − cos(θ)]/2 models the dynamics of the individual oscillators as well as their interaction function. To demonstrate the robustness of our approach in the context of (4), here we also introduce noise terms ξ_i to mimic experimental conditions. The noise terms are Gaussian with intensity 10^{-5} and are independent for each oscillator.
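The perturbed network in this example can be generated in a few lines. The sketch below follows the construction described above; the random seed and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Start from the complete graph on n nodes.
A = np.ones((n, n)) - np.eye(n)

# Delete 6 randomly selected undirected edges.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
for k in rng.choice(len(edges), size=6, replace=False):
    i, j = edges[k]
    A[i, j] = A[j, i] = 0.0

# Add an independent Gaussian mismatch (mean 0, std 1e-3) to each
# entry, modeling hardware imperfections and measurement errors.
A = A + rng.normal(0.0, 1e-3, size=(n, n))
```

The resulting matrix is close to, but not exactly, a binary adjacency matrix, which is precisely the situation that defeats exact symmetry detection but is handled by the approximate linear-dependence tests.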
The network admits a flow-invariant synchronization pattern of 4 clusters, as shown in Figure 3 (clusters are indicated by node colors), which is induced by the orbital partition of Aut(A). The IRR algorithm is not practical for this system due to the mismatch terms in the adjacency matrix and the huge number of symmetries present, which is generally the case for dense random networks. This particular network has approximately 1.557 × 10^{20} symmetries, and thus extrapolation from Figure 2 suggests around a billion years of CPU time for the IRR algorithm to find the right transformation. In contrast, Algorithm 1 finds an SBD transformation of {E_1, · · · , E_4, A} within one CPU second. Under the SBD coordinates, the matrices share one 4 × 4 block corresponding to class I perturbation modes, twenty-four 1 × 1 blocks corresponding to class II perturbation modes, and one 2 × 2 block corresponding to class III perturbation modes (the red and blue clusters are intertwined).
Based on this decomposition, we calculate the maximal transverse Lyapunov exponent (MTLE) for each cluster over a range of the parameter β. We further verify these stability predictions by directly simulating equation (11) for β slowly increasing from 3 to 6.5 and calculating the synchronization error in each cluster. We define the synchronization error e_m of the m-th cluster C_m with n_m nodes as the standard deviation of the phases θ_i in that cluster:

e_m = [ (1/n_m) Σ_{i∈C_m} (θ_i − θ̄_m)^2 ]^{1/2},   θ̄_m = (1/n_m) Σ_{i∈C_m} θ_i.

Figure 3(b) and (c) show a sequence of three symmetry-breaking bifurcations as β is increased: it starts with the isolated desynchronization of the magenta cluster around β = 3.9, followed by the concurrent loss of stability of the red and blue clusters around β = 4.9, and ends with a transition to incoherence in the green cluster just below β = 5.5.
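The synchronization error of a cluster is a one-liner to compute; a sketch (function and variable names are ours, and we use the plain standard deviation as in the definition above, which is adequate when the phases stay away from the branch cut):

```python
import numpy as np

def sync_error(theta, cluster):
    """Synchronization error e_m of a cluster: the standard deviation
    of the phases theta_i over the nodes in that cluster. Vanishes
    exactly when the cluster is fully synchronized."""
    phases = np.asarray(theta, dtype=float)[list(cluster)]
    return float(np.std(phases))
```

Tracking e_m for each cluster as β is ramped reproduces the bifurcation sequence described above: a cluster's error departs from zero at the parameter value where its MTLE crosses zero.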

4. Extension to nonidentical oscillators and coupling functions.
The techniques developed in the previous sections can be easily extended to study cluster synchronization of nonidentical oscillators with disparate coupling functions. In this section, we establish such a generalized formalism and use it to discover permanently stable chimera states in multilayer networks.
A system of (possibly nonidentical) oscillators diffusively coupled through a multilayer network with R different types of interactions can be described by

(12)   ẋ_i = F_{k(i)}(x_i) − Σ_{r=1}^{R} σ_r Σ_{j=1}^{n} (L_r)_{ij} H_r(x_j),   i = 1, · · · , n,

where F_{k(i)} : R^d → R^d is the vector field governing the uncoupled dynamics of the i-th oscillator, k indexes the K different functions {F_k} that can be assigned to each oscillator, σ_r is the strength of the r-th coupling, and L_r is the Laplacian matrix representing the r-th type of interaction H_r. Other special cases of (1), corresponding to different choices of the coupling matrices in (12), can be treated similarly, as outlined below.
For any flow-invariant synchronization pattern, a variational equation governing the evolution of δX = (δx_1, · · · , δx_n) can be obtained by linearizing (12) around the corresponding cluster synchronization manifold:

(13)   δẊ = [ Σ_{m=1}^{M} Σ_{k=1}^{K} (E_m D_k) ⊗ JF_k(s_m) − Σ_{r=1}^{R} σ_r Σ_{m=1}^{M} (L_r E_m) ⊗ JH_r(s_m) ] δX,

where J denotes the Jacobian and s_m is the synchronization trajectory of the m-th cluster. Recall that E_m is an n × n diagonal matrix encoding the nodes inside the m-th cluster. Similarly, let N_k be the set of nodes equipped with the k-th function F_k. Then the matrices D_k, defined by (D_k)_{ii} = 1 if i ∈ N_k and (D_k)_{ii} = 0 otherwise, are n × n diagonal matrices encoding the assignment of heterogeneous nodes, whose sum satisfies Σ_{k=1}^{K} D_k = I_n. In order to find the coordinates that optimally decouple (13), one can apply Algorithm 1 to the following matrix set: {E_1, · · · , E_M, D_1, · · · , D_K, L_1, · · · , L_R}.
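The matrix set passed to Algorithm 1 can be assembled directly from the cluster and node-dynamics assignments. A sketch with a hypothetical 5-node example (the label-list encoding of the assignments is our own):

```python
import numpy as np

def indicator_matrices(labels, n):
    """One n x n diagonal indicator matrix per distinct label, with
    diagonal entry 1 exactly at the nodes carrying that label."""
    mats = []
    for lab in sorted(set(labels)):
        d = np.array([1.0 if x == lab else 0.0 for x in labels])
        mats.append(np.diag(d))
    return mats

# Hypothetical 5-node example: two clusters {0,1} and {2,3,4}, and
# two node dynamics, F_1 on nodes 0-2 and F_2 on nodes 3-4.
E = indicator_matrices([0, 0, 1, 1, 1], 5)  # cluster indicators E_m
D = indicator_matrices([0, 0, 0, 1, 1], 5)  # dynamics indicators D_k
# Together with the Laplacians L_1, ..., L_R, the list E + D + [L_r...]
# is the matrix set that Algorithm 1 simultaneously block diagonalizes.
```

By construction the D_k sum to the identity, matching the constraint Σ_k D_k = I_n stated above.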
Our formalism can be used, in particular, to search for permanently stable chimera states in multilayer networks. Broadly speaking, chimera states and their generalizations refer to states in which coherence and incoherence coexist in a system. In the context of coupled oscillators, a network in a chimera state splits into one group of synchronized oscillators and one group of incoherent oscillators [47,43]. Over the past two decades, chimera states have been shown to be a general phenomenon [29,31,3,1,44,71,56,54,6,55] that arises robustly in physical systems [24,67,37,10,68]. Meanwhile, multilayer and multiplex networks have recently emerged as suitable descriptions of many real systems [30,12]. In the synchronization community, such networks are often used to represent oscillators coupled through multiple types of interactions [59,27,57,18,11,64].
Given the relevance of these developments, it is of interest to consider chimera and chimera-like states in networks with two or more types of interactions. There have been previous reports of chimera states in multiplex networks based on numerical simulations [20,36]. However, an analytical treatment of their stability is still lacking. The formalism developed here bridges this gap, since many chimera and chimera-like states can be seen as special cluster synchronization patterns [25,16].

Fig. 4. Chimera states in a multilayer network. (a) Two-layer network of Lorenz oscillators with different intralayer and interlayer interactions, given by H_1 = (0, 0, z)^T and H_2 = (0, 0, x)^T, respectively. The color-coded nodes represent a chimera state in which the first layer is incoherent and the second layer is synchronized. (b) Finest common block structure for the two-cluster state in which both layers are synchronized, obtained through an SBD transformation. (c) Finest common block structure for the seven-cluster state depicted in (a), also obtained through an SBD transformation. (d) Diagram in the ρ-σ plane characterizing the stability of the two patterns. The three regions correspond to parameters for which both patterns are unstable (red), both patterns are stable (blue), and only the seven-cluster pattern (i.e., the chimera state) is stable (green).
As an example, we consider the multilayer network depicted in Figure 4(a). Each layer consists of six identical Lorenz oscillators interacting through eight connections with the coupling function H_1 = (0, 0, z)^T. We represent the intralayer connections in the first (second) layer using the Laplacian matrix L_1 (L_2). In addition, the two layers are all-to-all coupled through the coupling function H_2 = (0, 0, x)^T. Following (12), the oscillators in the first layer are thus described by the equations

ẋ_i = α(y_i − x_i),
ẏ_i = x_i(ρ − z_i) − y_i,
ż_i = x_i y_i − β z_i − σ_1 Σ_{j=1}^{6} (L_1)_{ij} z_j − σ_2 Σ_{j=7}^{12} (x_i − x_j),   1 ≤ i ≤ 6,

where we set α = 10, β = 2, σ_1 = σ, and σ_2 = 0.2σ, leaving the parameters ρ and σ to be varied. The oscillators in the second layer are described by similar equations.
To search for chimera states in which one layer is synchronized and the other is incoherent, we need to analyze the linear stability of two different cluster synchronization patterns (both formed by input clusters). Specifically, the two-cluster state in which both layers are internally coherent should be unstable, while the seven-cluster state in which only the second layer is synchronized (each cluster is represented by a different color in Figure 4(a)) should be stable. In both cases, the cluster synchronization manifold can be found by simulating Lorenz oscillators coupled through the corresponding quotient network. Applying the SBD algorithm to the two-cluster state leads to a common block structure for the matrices in the variational equation (13), as shown in Figure 4(b), where a diagonal entry is colored orange if the corresponding perturbation mode belongs to the first layer and blue if it belongs to the second layer. In this case, the transverse perturbation modes are fully decoupled, and the 2 × 2 block corresponds to perturbations inside the cluster synchronization subspace. Similarly, the common block structure for the seven-cluster state is shown in Figure 4(c). In this case, we have a 7 × 7 block representing the parallel perturbations and five 1 × 1 blocks related to the transverse perturbation modes of the coherent layer. It is straightforward to perform the stability analysis under these SBD coordinates. We show the results in the ρ-σ diagram of Figure 4(d). Red indicates parameters for which both patterns are unstable, while blue indicates parameters for which both patterns are stable. Chimera states are found in the green region, where only the seven-cluster pattern is stable.
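Once a candidate SBD transformation P is in hand, the shared block structure shown in panels (b) and (c) can be read off by overlaying the nonzero patterns of the transformed matrices. A minimal sketch, assuming an orthogonal P as produced by Algorithm 1 (the function name is ours):

```python
import numpy as np

def common_block_pattern(mats, P, tol=1e-8):
    """Entrywise OR of the nonzero patterns of the matrices P^T M P.
    Under SBD coordinates this combined pattern is block diagonal,
    and the blocks delimit the decoupled perturbation modes."""
    k = P.shape[1]
    pattern = np.zeros((k, k), dtype=bool)
    for M in mats:
        pattern |= np.abs(P.T @ M @ P) > tol
    return pattern
```

For matrices that are already simultaneously diagonal, the identity transformation yields a purely diagonal combined pattern, i.e., all blocks are 1 × 1.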
A representative trajectory of the chimera state for ρ = 60 and σ = 2 is shown in Figure 5. The lower and upper panels show the dynamics of the x variables for oscillators in each layer, while the middle panel shows their respective synchronization errors. This chimera state is permanently stable and can emerge from random initial conditions.

5. Concluding remarks. The framework established here utilizes the finest simultaneous block diagonalization of matrices to study cluster synchronization patterns in complex networks. This framework has its theoretical foundation rooted in the theory of matrix *-algebra and does not rely on symmetry information about the system. The result is an algorithm that is faster, simpler, and more robust than the state-of-the-art algorithm based on irreducible representations of network symmetry. In particular, the SBD framework enjoys the following advantages over the IRR framework and its variants:
1. It applies straightforwardly to any flow-invariant synchronization pattern, including those formed by symmetry clusters, Laplacian clusters, and input clusters.
2. It can easily treat nonidentical oscillators and oscillators coupled through multiple types of interactions.
3. It is highly scalable, because SBD transformations can be calculated much more efficiently than IRR transformations; this enables the stability analysis of complex synchronization patterns in large networks and in networks with a high degree of symmetry.
4. It is especially suited for practical applications because Algorithm 1 is robust against the uncertainties in network structure that are typical of real systems.
A Matlab implementation of Algorithm 1 is available online and comes with illustrative examples of use. The utility of this algorithm is not limited to network synchronization problems. It can be applied, for instance, to reduce the complexity of many problems in which multiple matrices are involved, such as the control of network systems and semidefinite programming [39].
An important open problem for future research concerns the case of directed networks. When considering cluster synchronization patterns in directed networks, two complications arise. The first concerns the identification of valid clusters. Directed networks support many flow-invariant synchronization patterns that do not result from orbital partitions; indeed, any partition of the nodes that satisfies the balanced equivalence relations [21,62] gives rise to a flow-invariant cluster synchronization pattern. Thus, it is often the case that a synchronization pattern of interest will not be identified by software based on computational group theory.
The second difficulty involves finding an optimal coordinate system to separate perturbation modes in the stability analysis. Since in directed networks the coupling matrices are no longer self-adjoint, one must consider the corresponding matrix algebra (as opposed to the matrix *-algebra) to obtain a finest simultaneous block diagonalization of the matrices in the variational equation. Unlike matrix *-algebras, matrix algebras need not be closed under the conjugate transpose operation and are generally not semisimple. This renders the Artin-Wedderburn theorem inapplicable and introduces "bad" elements called radicals [32]; as a result, matrix algebras cannot in general be decomposed into a direct sum of irreducible matrix algebras. Thus, a promising direction for future research is to generalize the current algorithm to find a finest simultaneous block diagonalization for matrices that are not necessarily self-adjoint.