Masuda, N., & Rocha, L. E. C. (2018). A Gillespie Algorithm for Non-Markovian Stochastic Processes. SIAM Review, 60(1), 95-115.

Abstract. The Gillespie algorithm provides statistically exact methods for simulating stochastic dynamics modeled as interacting sequences of discrete events, including systems of biochemical reactions or earthquake occurrences, networks of queuing processes or spiking neurons, and epidemic and opinion formation processes on social networks. Empirically, the inter-event times of various phenomena obey long-tailed distributions. The Gillespie algorithm and its variants either assume Poisson processes (i.e., exponentially distributed inter-event times), use particular functions for time courses of the event rate, or work for non-Poissonian renewal processes, including the case of long-tailed distributions of inter-event times, but at a high computational cost. In the present study, we propose an innovative Gillespie algorithm for renewal processes on the basis of the Laplace transform. The algorithm makes use of the fact that a class of point processes is represented as a mixture of Poisson processes with different event rates. The method is applicable to multivariate renewal processes whose survival function of inter-event times is completely monotone. It is an exact algorithm and works faster than a recently proposed Gillespie algorithm for general renewal processes, which is exact only in the limit of infinitely many processes. We also propose a method to generate sequences of event times with a tunable amount of positive correlation between inter-event times. We demonstrate our algorithm with exact simulations of epidemic processes on networks, finding that a realistic amount of positive correlation in inter-event times only slightly affects the epidemic dynamics.


1. Introduction.
Various complex systems are driven by interactions between subsystems via time-stamped discrete events. For example, in chemical systems, a chemical reaction event changes the number of reagents of particular types in a discrete manner. Stochastic point processes, in particular Poisson processes, which assume that events occur independently and at a constant rate over time, are a central tool for emulating the dynamics of chemical systems [67]. They are also useful in simulating epidemic processes in a population [14] and many other systems. However, the inter-event times of many empirical phenomena obey long-tailed distributions, which Poisson processes cannot reproduce. To handle such processes, Boguñá and colleagues extended the Gillespie algorithm to be applicable to general renewal processes [9] (also see [70] for further developments). However, the algorithm has practical limitations. First, it is not accurate when the number of ongoing renewal processes is small [9]. This can result in a considerable amount of approximation error in the beginning or final stages of the dynamics of epidemic or opinion formation models, for example, in which only a small number of processes is active, even in a large population [70]. Second, it is necessary to recalculate the instantaneous event rate of each process following the occurrence of every event in the entire population, a procedure that can be computationally expensive.
In the present study, we propose an innovative Gillespie algorithm, the Laplace Gillespie algorithm, which is applicable to non-Poissonian renewal processes. It exploits the mathematical properties of the Laplace transform, is accurate for an arbitrary number of ongoing renewal processes, and runs faster than the previous algorithm [9]. This article is organized as follows. In section 2, we review the original Gillespie algorithm for Poisson processes. In section 3, we review the previous extension of the Gillespie algorithm to general renewal processes [9]. In section 4, we introduce the Laplace Gillespie algorithm, together with theoretical underpinnings and examples. In section 5, we numerically compare the previous algorithm [9] and the Laplace Gillespie algorithm. In section 6, we introduce a method to generate event sequences with positive correlation in inter-event times, as is typically observed in human behavior and natural phenomena [24]. In section 7, we demonstrate our methods by performing exact simulations of an epidemic process in which inter-event times follow a power-law distribution. In section 8, we discuss the results, including limitations of the proposed algorithm. The codes used in our numerical simulations are available in the Supplementary Materials.

2. Gillespie Algorithm.
The original Gillespie algorithm [21,22,39] assumes N independent Poisson processes with rates λ_i (1 ≤ i ≤ N) running in parallel. Because of the independence of the different Poisson processes, the superposition of the N processes is a Poisson process with rate Σ_{i=1}^N λ_i. Therefore, we first draw Δt, an increment in time to the next event of the superposed Poisson process, from the exponential distribution given by φ(Δt) = (Σ_{i=1}^N λ_i) exp(−Σ_{i=1}^N λ_i Δt). Because the survival function (i.e., the probability that a random variable is larger than a certain value) of φ(Δt) is given by exp(−Σ_{i=1}^N λ_i Δt), we can generate Δt as

(1) Δt = −ln u / Σ_{j=1}^N λ_j,

where u is a random variate drawn from the uniform density on the interval [0, 1]. Then, we determine the process i that has produced the event with probability

(2) Π_i = λ_i / Σ_{j=1}^N λ_j.

Finally, we advance the time by Δt and repeat the procedure. Following the occurrence of an event, any λ_i is permitted to change.
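To make the two steps concrete, here is a minimal Python sketch (not the authors' code; the function name gillespie_poisson and the example rates are hypothetical) that draws Δt according to (1) and selects the firing process according to (2) for N independent Poisson processes with constant rates.

import numpy as np

def gillespie_poisson(rates, t_max, rng=None):
    """Simulate N independent Poisson processes with the original Gillespie algorithm.

    rates : array of the N event rates lambda_i (kept constant here for simplicity).
    Returns a list of (time, process index) pairs up to time t_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    t, events = 0.0, []
    while True:
        # Time to the next event of the superposed process: exponential with rate sum(rates).
        dt = -np.log(rng.random()) / total
        t += dt
        if t > t_max:
            break
        # Select the process that fired, with probability proportional to its rate.
        i = rng.choice(len(rates), p=rates / total)
        events.append((t, i))
    return events

# Example: three Poisson processes with different rates.
print(len(gillespie_poisson([0.5, 1.0, 2.0], t_max=100.0)))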
3. Non-Markovian Gillespie Algorithm. Now consider N renewal processes running in parallel, and denote by ψ_i(τ) the probability density function of inter-event times for the ith process (1 ≤ i ≤ N). If the process is Poissonian, we obtain

(3) ψ_i(τ) = λ_i exp(−λ_i τ).

For such a population of general renewal processes, Boguñá and colleagues proposed an extension of the Gillespie algorithm, which they called the non-Markovian Gillespie algorithm (nMGA) [9].
To begin with, we explain their exact Gillespie algorithm for general renewal processes, which is the basis of the nMGA. A short derivation of the exact Gillespie algorithm is given in Appendix B. We denote by t_i the time elapsed since the last event of the ith process and by Ψ_i(t_i) = ∫_{t_i}^∞ ψ_i(τ) dτ the survival function of the ith process (i.e., the probability that the inter-event time is larger than t_i). We also set

(4) Φ(Δt|{t_j}) = ∏_{j=1}^N [Ψ_j(t_j + Δt) / Ψ_j(t_j)],

which is in fact equal to the probability that no process generates an event for time Δt (see Appendix B). The exact Gillespie algorithm for general renewal processes is given as follows:
1. Initialize the elapsed times t_j (1 ≤ j ≤ N).
2. Draw the time until the next event, Δt, by solving Φ(Δt|{t_j}) = u, where u is a random variate uniformly distributed over [0, 1].
3. Select the process i that has generated the event with probability

(5) Π_i = λ_i(t_i + Δt) / Σ_{j=1}^N λ_j(t_j + Δt),

where λ_i(t_i) ≡ ψ_i(t_i)/Ψ_i(t_i) is equal to the instantaneous rate of the ith process.
4. Update the time since the last event, t_j, to t_j + Δt (j ≠ i) and set t_i = 0.
5. Repeat steps 2-4.
Although this algorithm is statistically exact, step 2 can be time-consuming [9,70]. To improve performance, Boguñá and colleagues introduced the nMGA. The nMGA is an approximation to the aforementioned algorithm and is exact as N → ∞. When Δt is small, as is the case when N is large, (4) is approximated as

Φ(Δt|{t_j}) ≈ exp(−Δt Σ_{j=1}^N λ_j(t_j)).

With this approximation, the time until the next event is determined by Φ(Δt|{t_j}) = u, which yields Δt = −ln u / Σ_{j=1}^N λ_j(t_j). The process that generates the event is determined by setting Δt = 0 in (5), i.e., process i is selected with probability λ_i(t_i) / Σ_{j=1}^N λ_j(t_j). For a Poisson process, we set λ_i(t_i) = λ_i to recover the original Gillespie algorithm (cf. (1) and (2)).
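The nMGA approximation described above can be sketched in a few lines. The following minimal Python illustration uses our own naming (psi, Psi, and nmga_step are hypothetical), shares a single pair of ψ and Ψ across all processes for simplicity, and evaluates the instantaneous rates λ_j(t_j) = ψ_j(t_j)/Ψ_j(t_j), draws Δt, and selects the firing process.

import numpy as np

def nmga_step(t_elapsed, psi, Psi, rng):
    """One iteration of the nMGA: approximate Delta t and the firing process.

    t_elapsed : array of times t_j since the last event of each process.
    psi, Psi  : inter-event time density and survival function (same for all processes here).
    Returns (dt, i): the waiting time and the index of the process that fires.
    """
    lam = psi(t_elapsed) / Psi(t_elapsed)      # instantaneous rates lambda_j(t_j)
    lam_sum = lam.sum()
    dt = -np.log(rng.random()) / lam_sum       # approximation valid when N is large
    i = rng.choice(len(lam), p=lam / lam_sum)  # firing process, using the rates at Delta t = 0
    return dt, i

# Example with power-law inter-event times, Psi(tau) = 1 / (1 + tau)^2.
rng = np.random.default_rng(1)
Psi = lambda tau: 1.0 / (1.0 + tau) ** 2
psi = lambda tau: 2.0 / (1.0 + tau) ** 3
t_elapsed = rng.random(1000)                   # elapsed times of N = 1000 processes
print(nmga_step(t_elapsed, psi, Psi, rng))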

4. Laplace Gillespie Algorithm.

4.1. Algorithm.
In the nMGA, we update the instantaneous event rates for all the processes λ j (t j ) (1 ≤ j ≤ N ) and their sum following the occurrence of each event. This is because t j (1 ≤ j ≤ N ) is updated following an event. This procedure is time-consuming when N is large; we have to update λ j (t j ) even if the probability density of the inter-event times for the jth process is not perturbed by an event that has occurred elsewhere.
To construct an efficient Gillespie algorithm for non-Markovian point processes, we start by considering the following renewal process, called the event-modulated Poisson process. When an event occurs, we first draw the rate of the Poisson process, denoted by λ, according to a fixed probability density function p(λ). Then, we draw the time until the next event according to the Poisson process with rate λ. Upon the occurrence of the next event, we renew the rate λ by redrawing it from p(λ). We then repeat these steps.
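A single event-modulated Poisson process can be generated directly from this description. The sketch below is a minimal Python illustration under our own naming (event_modulated_poisson is hypothetical), with a gamma density for p(λ) chosen purely as an example.

import numpy as np

def event_modulated_poisson(draw_rate, n_events, rng):
    """Generate n_events inter-event times of an event-modulated Poisson process.

    draw_rate : function returning one sample of lambda from the density p(lambda).
    """
    taus = []
    for _ in range(n_events):
        lam = draw_rate()                        # renew the rate upon each event
        taus.append(rng.exponential(1.0 / lam))  # exponential waiting time at rate lambda
    return np.array(taus)

rng = np.random.default_rng(0)
# Example: p(lambda) is a gamma density with shape 1.5 and scale 1.
taus = event_modulated_poisson(lambda: rng.gamma(1.5, 1.0), 10_000, rng)
print(taus.mean())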
The event-modulated Poisson process is a mixture of Poisson processes of different rates. It is, in general, a non-Poissonian renewal process and is slightly different from a mixed Poisson process, in which a single rate is initially drawn from a random ensemble and used throughout a realization [27,38]. It is also different from a doubly stochastic Poisson process (also known as a Cox process), in which the rate of the Poisson process is a stochastic process [27,38,40], or its subclass called the Markov-modulated Poisson process, in which the event rate switches in time according to a Markov process [18]. In these processes, the dynamics of the event rate are independent of the occurrence of events. In contrast, for event-modulated Poisson processes, the event rate changes upon the occurrence of events.
An event-modulated Poisson process is a Poisson process when conditioned on the current value of λ. Therefore, when we simulate N event-modulated Poisson processes, they are independent of each other and of the past event sequences if we are given the instantaneous rate of the ith process, denoted by λ_i, for all i (1 ≤ i ≤ N). This property enables us to construct a Gillespie algorithm. By engineering p(λ), we can emulate a range of renewal processes with different distributions of inter-event times. The new Gillespie algorithm, which we call the Laplace Gillespie algorithm (the reason for this name will be made clear in section 4.2, where we discuss the algorithm's theoretical basis in the Laplace transform), is defined as the Gillespie algorithm for event-modulated Poisson processes. We denote the density of the event rate for the ith process by p_i(λ_i). The Laplace Gillespie algorithm proceeds as follows: 1. Initialize each of the N processes by drawing the rate λ_i (1 ≤ i ≤ N) according to its density function p_i(λ_i). 2. Draw the time until the next event, Δt = −ln u / Σ_{j=1}^N λ_j, where u is a random variate uniformly distributed over [0, 1]. 3. Select the process i that has generated the event with probability λ_i / Σ_{j=1}^N λ_j. 4. Draw a new rate λ_i according to p_i(λ_i). If there are processes j (1 ≤ j ≤ N) for which the statistics of inter-event times have changed following the occurrence of the event generated in steps 2 and 3 (e.g., a decrease in the rate of being infected owing to the recovery of an infected neighbor), modify p_j(λ_j) accordingly and draw a new rate λ_j from the modified p_j(λ_j). The event rates of the remaining processes remain unchanged. 5. Repeat steps 2-4.
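A direct transcription of steps 1-5 might look as follows. This is a minimal Python sketch under our own naming (laplace_gillespie is hypothetical), not the authors' code; the example uses noninteracting processes, so the modification of p_j(λ_j) in step 4 is omitted.

import numpy as np

def laplace_gillespie(draw_rates, n_events, rng):
    """Laplace Gillespie algorithm for N noninteracting event-modulated Poisson processes.

    draw_rates : list of N functions; draw_rates[i]() samples lambda_i from p_i(lambda_i).
    Returns a list of (time, process index) pairs for n_events events.
    """
    n = len(draw_rates)
    lam = np.array([draw_rates[i]() for i in range(n)])  # step 1: initialize the rates
    t, events = 0.0, []
    for _ in range(n_events):
        lam_sum = lam.sum()
        t += -np.log(rng.random()) / lam_sum             # step 2: time to the next event
        i = rng.choice(n, p=lam / lam_sum)               # step 3: firing process
        lam[i] = draw_rates[i]()                         # step 4: redraw lambda_i
        events.append((t, i))                            # (step 4 would also update any
    return events                                        #  process whose p_j has changed)

rng = np.random.default_rng(0)
# Example: 100 processes, each with gamma-distributed rates (shape alpha = 1, scale kappa = 1),
# which yields a power-law distribution of inter-event times, psi(tau) ~ tau^(-2).
samplers = [lambda: rng.gamma(1.0, 1.0)] * 100
print(laplace_gillespie(samplers, 10, rng))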

4.2. Theory.
An event-modulated Poisson process is a renewal process. The renewal process is fully characterized by the probability density of inter-event times, ψ(τ). For an event-modulated Poisson process with probability density of the event rate p(λ), we obtain

(7) ψ(τ) = ∫_0^∞ p(λ) λ exp(−λτ) dλ.

Integration of both sides of (7) yields the survival function of the inter-event times as follows:

(8) Ψ(τ) = ∫_τ^∞ ψ(τ′) dτ′ = ∫_0^∞ p(λ) exp(−λτ) dλ.

Equation (8) indicates that Ψ(τ) is the Laplace transform of p(λ). Therefore, the necessary and sufficient condition for a renewal process to be simulated by the Laplace Gillespie algorithm is that Ψ(τ) is the Laplace transform of a probability density function of a random variable taking nonnegative values. Although this statement can be made more rigorous if we replace p(λ)dλ by the probability distribution function, we will use the probability density representation for simplicity. A necessary and sufficient condition for the existence of p(λ) is that Ψ(τ) is completely monotone and Ψ(0) = 1 [15]. The complete monotonicity is defined by

(9) (−1)^n d^n Ψ(τ)/dτ^n ≥ 0 (n = 0, 1, 2, . . . ; τ ≥ 0).

Condition Ψ(0) = 1 is satisfied by any survival function. With n = 0, (9) reads Ψ(τ) ≥ 0, which all survival functions satisfy. With n = 1, (9) reads ψ(τ) ≥ 0, which is also always satisfied. However, (9) with n ≥ 2 imposes nontrivial conditions.

4.3. Examples.
In this section, we give examples of distributions of inter-event times ψ(τ ) for which the Laplace Gillespie algorithm can be used. These examples are summarized in Table 1.

Power-Law Distribution Derived from a Gamma Distribution of λ.
Consider the case in which p(λ) is the gamma distribution given by

(10) p(λ) = λ^{α−1} exp(−λ/κ) / (Γ(α) κ^α),

where Γ(α) is the gamma function, α is the shape parameter, and κ is the scale parameter. By combining (8) and (10), we obtain

(11) Ψ(τ) = (1 + κτ)^{−α}.

The probability density of inter-event times is given by the following power-law distribution:

(12) ψ(τ) = −dΨ(τ)/dτ = ακ (1 + κτ)^{−(α+1)}.

When α = 1, p(λ) is the exponential distribution and ψ(τ) ∝ τ^{−2} [31]. The same mathematical relationship connecting the gamma distribution and a power-law distribution has been used in superstatistics in statistical mechanics [8] and in a reinforcement learning model for generating discount rates that decay with time according to a power law [43].
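As a sanity check of (10)-(12), one can compare the empirical survival function of inter-event times generated with gamma-distributed rates against Ψ(τ) = (1 + κτ)^{−α}. The snippet below is a minimal Python sketch with illustrative parameter values of our own choosing.

import numpy as np

rng = np.random.default_rng(0)
alpha, kappa = 1.5, 1.0

# Generate inter-event times by drawing lambda ~ Gamma(alpha, kappa) before each event.
lam = rng.gamma(alpha, kappa, size=100_000)
taus = rng.exponential(1.0 / lam)

# Compare the empirical survival function with Psi(tau) = (1 + kappa * tau)^(-alpha), eq. (11).
for tau in [0.1, 1.0, 10.0]:
    empirical = (taus > tau).mean()
    theory = (1.0 + kappa * tau) ** (-alpha)
    print(f"tau = {tau:5.1f}: empirical {empirical:.4f}, theory {theory:.4f}")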
Power-Law Distribution Derived from a Power-Law Distribution of λ. Consider the distribution

p(λ) = (α + 1) λ^α, 0 ≤ λ ≤ 1,

where α > −1 [31]. We obtain Ψ(τ) = (α + 1) ∫_0^1 λ^α exp(−λτ) dλ, which behaves as Ψ(τ) ∝ τ^{−(α+1)} for large τ, such that ψ(τ) ∝ τ^{−(α+2)}.

Weibull Distribution. The Weibull distribution is defined by

ψ(τ) = (α/κ) (τ/κ)^{α−1} exp(−(τ/κ)^α),

whose survival function is Ψ(τ) = exp(−(τ/κ)^α). The Weibull distribution with α = 1 is an exponential distribution. The Weibull distribution has a longer and shorter tail than the exponential distribution when α < 1 and α > 1, respectively. The Weibull distribution can be expressed as the Laplace transform of a p(λ) if and only if 0 < α ≤ 1 [35,75]. When α = 1/2, p(λ) is the so-called stable distribution of order 1/2, for which we obtain [15,27,35]

(23) p(λ) = λ^{−3/2} exp(−1/(4κλ)) / (2√(πκ)).

For other values of α (i.e., 0 < α < 1/2 or 1/2 < α < 1), the explicit form of p(λ) is complicated [35] such that the use of the Laplace Gillespie algorithm is impractical. For these α values, a mixture of a small number of exponential distributions may resemble the Weibull distribution [36], such that we may be able to use p(λ) with point masses at some discrete values of λ to approximate the Weibull distribution of inter-event times.
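For the Weibull case with α = 1/2, the rate can be sampled without evaluating (23) explicitly, because an order-1/2 stable (Lévy) variate can be obtained from a standard normal variate. The sketch below assumes the scaling Ψ(τ) = exp(−(τ/κ)^{1/2}) used above; the function name is hypothetical.

import numpy as np

rng = np.random.default_rng(0)
kappa = 1.0

def draw_rate_weibull_half(kappa, rng):
    """Sample lambda from the order-1/2 stable (Levy) density whose Laplace transform
    is exp(-sqrt(tau / kappa)), i.e., the Weibull survival function with alpha = 1/2."""
    z = rng.standard_normal()
    return 1.0 / (2.0 * kappa * z * z)   # Levy(0, c) variate with c = 1 / (2 * kappa)

# Check: the mixture of exponentials reproduces Psi(tau) = exp(-sqrt(tau / kappa)).
lam = np.array([draw_rate_weibull_half(kappa, rng) for _ in range(200_000)])
taus = rng.exponential(1.0 / lam)
for tau in [0.25, 1.0, 4.0]:
    print(f"tau = {tau}: empirical {(taus > tau).mean():.4f}, "
          f"theory {np.exp(-np.sqrt(tau / kappa)):.4f}")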
Gamma Distribution. When inter-event times obey the gamma distribution, i.e.,

(24) ψ(τ) = τ^{α−1} exp(−τ/κ) / (Γ(α) κ^α),

Ψ(τ) can be expressed as the Laplace transform of a probability density function p(λ) if and only if 0 < α ≤ 1 [23,74]. We obtain [23]

(25) p(λ) = (λ − 1/κ)^{−α} / (Γ(α) Γ(1 − α) κ^α λ) for λ > 1/κ, and p(λ) = 0 otherwise.

Mittag-Leffler Distribution. Consider the distribution of inter-event times defined in terms of the survival function given by

(26) Ψ(τ) = E_β(−(λτ)^β),

where E_β(z) = Σ_{n=0}^∞ z^n / Γ(1 + βn) is the so-called Mittag-Leffler function. When 0 < β < 1, Ψ(τ) is completely monotone, and we obtain [25,26]

(27) p(λ′) = (sin(βπ)/(πλ)) (λ′/λ)^{β−1} / [(λ′/λ)^{2β} + 2(λ′/λ)^β cos(βπ) + 1].

When β = 1, (26) and (27) imply that Ψ(τ) = e^{−λτ}, yielding a Poisson process. When 0 < β < 1, Ψ(τ) is long-tailed with the asymptotics [19,26] Ψ(τ) ≈ (λτ)^{−β} / Γ(1 − β) or, equivalently, ψ(τ) ≈ β λ^{−β} τ^{−β−1} / Γ(1 − β) as τ → ∞. Therefore, this class of ψ(τ) produces long-tailed distributions of inter-event times with a power-law exponent lying between one and two.

Integral of a Valid Survival Function. The function given by

(31) Ψ_w(τ) = (1/⟨τ⟩_ψ) ∫_τ^∞ Ψ(τ′) dτ′

is well-defined if and only if ⟨τ⟩_ψ = ∫_0^∞ τ ψ(τ) dτ, i.e., the mean inter-event time with respect to density ψ(τ), is finite. Assume that the renewal process generated by ψ(τ) permits use of the Laplace Gillespie algorithm. Because dΨ_w(τ)/dτ = −Ψ(τ)/⟨τ⟩_ψ and Ψ(τ) is completely monotone, it follows that Ψ_w(τ) is completely monotone. In addition, (31) implies that Ψ_w(0) = 1. Therefore, the renewal process with survival function Ψ_w(τ) can also be simulated by the Laplace Gillespie algorithm.
The corresponding probability density of inter-event times is given by

(32) ψ_w(τ) = −dΨ_w(τ)/dτ = Ψ(τ)/⟨τ⟩_ψ.

In terms of p(λ), we obtain

(33) Ψ_w(τ) = (1/⟨τ⟩_ψ) ∫_0^∞ [p(λ)/λ] exp(−λτ) dλ,

where

(34) ⟨τ⟩_ψ = ∫_0^∞ [p(λ)/λ] dλ.

Therefore, in each update of the Laplace Gillespie algorithm with the density of inter-event times given by ψ_w(τ), the rate of the Poisson process λ should be sampled according to density p_w(λ), where

(35) p_w(λ) = p(λ) / (λ ⟨τ⟩_ψ).

For example, if ψ(τ) is an exponential distribution, then ψ_w(τ) is an exponential distribution of the same mean. If ψ(τ) is the power-law distribution given by (12), then ψ_w(τ) is a power-law distribution of the same form, with α replaced by α − 1.

4.4. Empirical Distributions of Inter-Event Times.
We are often interested in informing multivariate point processes by empirical data of event sequences. A standard numerical approach is to emulate the dynamics (e.g., epidemic processes) on top of empirical event sequences, i.e., use empirically observed events with time stamps to induce, for example, infection events [33,50]. There exists a Gillespie algorithm to run dynamical processes on such empirical temporal networks [70]. Another approach is to estimate ψ(τ ) from empirical data and then use a variant of the Gillespie algorithm (e.g., the nMGA or the Laplace Gillespie algorithm) to simulate stochastic point processes (e.g., epidemic processes).
The Laplace Gillespie algorithm requires the survival function, Ψ(τ), to be completely monotone. Under this condition, we may be able to compute the inverse Laplace transform to obtain p(λ) at a reasonable computational cost [2]. However, because it is likely that an empirical Ψ(τ) is not completely monotone, we propose two alternative methods to estimate p(λ) from empirical data. The first method is to fit a completely monotone survival function of inter-event times, such as (11), to given data. The second method is to estimate a finite mixture of exponential distributions of different means to approximate the empirical ψ(τ) or Ψ(τ). Likelihood or other cost-function methods are available for performing this estimation [29,35,36,42,58]. If the empirical Ψ(τ) is completely monotone, the approximation error is guaranteed to decay in inverse proportion to the number of constituent distributions [44]. For both of these methods, we should be mindful of the bias in the estimation caused by a finite time window of observation [41].
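The second method can be illustrated as follows: fit a two-component mixture of exponentials to an empirical survival function by nonlinear least squares. This is a hypothetical minimal sketch with synthetic data standing in for empirical inter-event times; the maximum-likelihood and EM estimators cited above would typically be preferable in practice.

import numpy as np
from scipy.optimize import curve_fit

def mixture_survival(tau, w1, lam1, lam2):
    """Survival function of a two-component mixture of exponential distributions."""
    return w1 * np.exp(-lam1 * tau) + (1.0 - w1) * np.exp(-lam2 * tau)

# Synthetic stand-in for an empirical sequence of inter-event times (illustration only).
rng = np.random.default_rng(0)
taus = np.sort(rng.gamma(0.7, 2.0, size=5000))
emp_survival = 1.0 - np.arange(1, taus.size + 1) / taus.size

# Least-squares fit of the two-exponential survival function to the empirical one.
params, _ = curve_fit(mixture_survival, taus, emp_survival, p0=[0.5, 0.2, 2.0],
                      bounds=([0.0, 1e-6, 1e-6], [1.0, np.inf, np.inf]))
print("w1, lambda1, lambda2 =", params)

The fitted weights and rates define a p(λ) with point masses at λ_1 and λ_2, which can then be used directly as the rate density in the Laplace Gillespie algorithm.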

4.5. Initial Conditions.
When we begin to run N processes, one approach is to initially draw the inter-event time for each process from ψ(τ ). This initial condition defines the so-called ordinary renewal process [13]. An alternative model, called the equilibrium renewal process, assumes that the process has begun at time t = −∞, such that the first inter-event time for each process, drawn at t = 0, uses the equilibrium distribution of waiting times to the next event, rather than ψ(τ ) (i.e., the distribution of inter-event times) [13]. In fact, the equilibrium distribution of waiting times to the next event coincides with ψ w (τ ) given by (32) [13,15,51]. To simulate the equilibrium renewal process, we start by drawing the rates of the Poisson processes according to p w (λ) given by (35). Afterwards, we draw the event rates according to p(λ).
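For instance, for the gamma density (10) with shape α > 1 and scale κ, it follows from (35) that p_w(λ) ∝ p(λ)/λ is again a gamma density, with shape α − 1. The snippet below is a minimal Python sketch of this equilibrium initialization with illustrative parameters of our own choosing; the longer mean waiting time to the first event reflects the inspection paradox.

import numpy as np

rng = np.random.default_rng(0)
alpha, kappa, n_proc = 3.0, 1.0, 10_000   # alpha > 1 so that the mean inter-event time is finite

# Equilibrium renewal process: the first rate of each process is drawn from
# p_w(lambda), which for the gamma density (10) is Gamma(alpha - 1, kappa);
# after the first event of each process, rates are redrawn from p(lambda) = Gamma(alpha, kappa).
lam_first = rng.gamma(alpha - 1.0, kappa, size=n_proc)
lam_later = rng.gamma(alpha, kappa, size=n_proc)
first_waits = rng.exponential(1.0 / lam_first)   # waiting times to the first event
later_taus = rng.exponential(1.0 / lam_later)    # subsequent inter-event times
print(first_waits.mean(), later_taus.mean())     # the first wait is longer on average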

5. Numerical Performance.
In this section, we compare the performances of the nMGA and the Laplace Gillespie algorithm. We use the power-law distribution of inter-event times given by (12). Because κ only controls the scale of inter-event times, we set κ = 1 without loss of generality. To generate gamma-distributed random variates, we use a well-known algorithm [48] and adapt an open source code [1] for our purposes. We generate ⌊N/3⌋ processes obeying (12) with α = 1, another ⌊N/3⌋ processes with α = 1.5, and the remaining N − 2⌊N/3⌋ ≈ N/3 processes with α = 2. We continue the simulation until one of the N processes generates 10^6 inter-event times. We employ the ordinary renewal process such that the initial inter-event time for each process is drawn from ψ(τ).
The survival functions for one of the processes with α = 1, one of those with α = 1.5, and one of those with α = 2 are shown by the solid curves for the nMGA and the Laplace Gillespie algorithm in Figures 1(a) and 1(b), respectively, for N = 10. The theoretical survival function (11) is plotted with the dashed curves. The results obtained from the Laplace Gillespie algorithm (Figure 1(b)) are more accurate than those obtained from the nMGA. This is because the nMGA is exact only in the limit of N → ∞, whereas the Laplace Gillespie algorithm is exact for any N. When N = 10^3, the nMGA is sufficiently accurate (Figure 1(c)), as is the Laplace Gillespie algorithm (Figure 1(d)). The results shown in Figures 1(a) and 1(c) are consistent with the numerical results obtained in [9].
The nMGA may require substantial time to update the instantaneous event rates for all processes every time an event occurs in one of the N processes. The Laplace Gillespie algorithm avoids this rate recalculation, although it might be costly to generate a gamma variate each time an event occurs. We compare numerically the computation times for the two algorithms by varying N. The other parameters remain the same as those used in Figures 1(a)-(d). For the Laplace Gillespie algorithm, we use a binary tree data structure to store and update λ_i (1 ≤ i ≤ N) to accelerate the selection of the i value with probability Π_i upon the occurrence of each event [20]. In short, each λ_i occupies a leaf of the tree, and each non-leaf node stores the sum of its left child and its right child. This data structure is useful when only a small number of λ_i are changed following the occurrence of each event [20]. This is not the case for the nMGA, for which all the N instantaneous event rates must be updated upon each event. Therefore, for the nMGA, we use a simple linear search, which is computationally less expensive than updating a binary tree every time an event occurs. We use codes written in C++, compiled with a standard g++ compiler without an optimization option on a MacBook Air with a 1.7 GHz Intel Core i7 processor and 8 GB of 1600 MHz DDR3 RAM. The computation time in seconds plotted as a function of N in Figure 1(e) indicates that the Laplace Gillespie algorithm runs substantially faster than the nMGA as N increases.
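The binary tree can be realized as a simple array-based sum tree. The sketch below is a minimal Python version under our own naming (SumTree is hypothetical), not the authors' C++ implementation; it supports updating a single λ_i and selecting i with probability λ_i / Σ_j λ_j, both in O(log N) time.

import numpy as np

class SumTree:
    """Array-based binary sum tree over n nonnegative weights."""
    def __init__(self, weights):
        self.n = len(weights)
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        self.tree = np.zeros(2 * self.size)
        self.tree[self.size:self.size + self.n] = weights
        for k in range(self.size - 1, 0, -1):            # fill internal nodes bottom-up
            self.tree[k] = self.tree[2 * k] + self.tree[2 * k + 1]

    def update(self, i, weight):
        """Set weight i and refresh its O(log n) ancestors."""
        k = self.size + i
        self.tree[k] = weight
        k //= 2
        while k >= 1:
            self.tree[k] = self.tree[2 * k] + self.tree[2 * k + 1]
            k //= 2

    def total(self):
        return self.tree[1]

    def sample(self, u):
        """Return index i with probability weight_i / total, given u uniform in [0, 1)."""
        target = u * self.tree[1]
        k = 1
        while k < self.size:
            k *= 2                       # descend to the left child
            if target >= self.tree[k]:   # otherwise move to the right sibling
                target -= self.tree[k]
                k += 1
        return k - self.size

rng = np.random.default_rng(0)
tree = SumTree(rng.gamma(1.0, 1.0, size=10))
tree.update(3, 5.0)
print(tree.sample(rng.random()), tree.total())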
Both the nMGA and the Laplace Gillespie algorithm require two uniformly distributed random variates per event, as does the standard Gillespie algorithm. In addition, the Laplace Gillespie algorithm requires drawing one random variate from p_j(λ_j) for each updated λ_j; if k denotes the number of processes whose rate must be redrawn following an event, the per-event cost of the Laplace Gillespie algorithm with the binary tree is of order k log N, whereas the nMGA must recompute all N instantaneous rates. Because k is usually much smaller than N in practice [20,33], the Laplace Gillespie algorithm is expected to run faster than the nMGA, which is consistent with the numerical results shown in Figure 1(e). Additionally, the Laplace Gillespie algorithm outperforms the nMGA in the sense that the Laplace Gillespie algorithm is exact for any N, whereas the nMGA is not. On the other hand, for the Laplace Gillespie algorithm, the form of ψ(τ) is limited, whereas the nMGA allows for any ψ(τ) to be used.

6. Positively Correlated Sequences of Inter-Event Times.
We have considered renewal processes, i.e., stationary point processes without correlation between interevent times. However, inter-event times are positively correlated for human activity and earthquakes [24,52]. The Laplace Gillespie algorithm provides a method for generating point processes with positive correlation, without changing ψ(τ ). To generate such event sequences, we redraw a new event rate for the Poisson process, λ i , with probability 1 − q (0 ≤ q < 1), when the ith process has generated an event. With probability q, we continue to use the same value of λ i until the ith process generates another event. We call this algorithm the correlated Laplace Gillespie algorithm. The standard Laplace Gillespie algorithm is recovered when q = 0. The correlation between inter-event times grows as q increases. Although the same λ i value may be used for generating consecutive inter-event times, the corresponding inter-event times are different because they are generated from a Poisson process. The computation time for the correlated Laplace Gillespie algorithm decreases as q increases because the number of times that λ i is redrawn is proportional to 1 − q.
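The correlated variant requires only a small change to a Laplace Gillespie implementation: the rate of the firing process is redrawn only with probability 1 − q. The following minimal Python sketch (hypothetical naming, gamma-distributed rates as an example) illustrates this.

import numpy as np

def correlated_laplace_gillespie(draw_rates, n_events, q, rng):
    """Correlated Laplace Gillespie algorithm: with probability q the firing
    process keeps its current rate; with probability 1 - q the rate is redrawn."""
    n = len(draw_rates)
    lam = np.array([draw_rates[i]() for i in range(n)])
    t, events = 0.0, []
    for _ in range(n_events):
        lam_sum = lam.sum()
        t += -np.log(rng.random()) / lam_sum
        i = rng.choice(n, p=lam / lam_sum)
        if rng.random() > q:                 # redraw lambda_i with probability 1 - q
            lam[i] = draw_rates[i]()
        events.append((t, i))
    return events

rng = np.random.default_rng(0)
samplers = [lambda: rng.gamma(1.0, 1.0)] * 50
events = correlated_laplace_gillespie(samplers, 1000, q=0.9, rng=rng)
print(events[-1])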
In a continuous-time Markov process with a state-dependent hopping rate, the inter-event time defined as the time between two consecutive hops, regardless of the state, is generally correlated across inter-event times [63]. The correlated Laplace Gillespie algorithm can be interpreted as a special case of this scenario such that the state is continuous, the process hops back to the current state with probability q, and it hops to any other state with probability proportional to (1 − q) × p(λ). The correlated Laplace Gillespie algorithm can be alternatively built on top of a finitestate [63] or an infinite-state [42] Markov process with a general transition probability between states. This variant of the correlated Laplace Gillespie algorithm is similar to a two-state cascading Poisson process in which the two states correspond to different event rates [47].
Two remarks are now in order. First, ψ(τ) is independent of the q value. This is because the stationary density of the corresponding continuous-time Markov process in the λ-space is equal to p(λ), irrespective of the q value. Of course, the distribution of τ conditioned on the previous inter-event time, τ′, is different from ψ(τ) and depends on τ′ in general. Second, the correlated Laplace Gillespie algorithm cannot be used to generate correlated event sequences when ψ(τ) is the exponential distribution. In this case, the event rate λ must be kept constant over time and therefore cannot be modulated in a temporally correlated manner.
We measure the so-called memory coefficient [24] to quantify the amount of correlation in a sequence of inter-event times generated by the correlated Laplace Gillespie algorithm. The memory coefficient for a sequence of inter-event times, {τ_1, τ_2, . . . , τ_n}, where n is the number of inter-event times, is defined by

M = [1/(n − 1)] Σ_{i=1}^{n−1} (τ_i − m_1)(τ_{i+1} − m_2) / (σ_1 σ_2),

where m_1 and σ_1 are the mean and the standard deviation of {τ_1, . . . , τ_{n−1}}, and m_2 and σ_2 are those of {τ_2, . . . , τ_n}. For the power-law distribution of inter-event times given by (12) with κ = 1, we generate a sequence of n = 10^5 inter-event times and calculate M. The mean and standard deviation of M, calculated on the basis of 10^3 sequences, are plotted for α = 1 and α = 2 in Figures 2(a) and 2(b), respectively. For both α values, M monotonically increases with q, and a range of M values between 0 and ≈ 0.4 is produced. In empirical data, M lies between 0 and 0.1 for human activity and between 0.1 and 0.25 for natural phenomena [24]. These ranges of M are produced using approximately 0 ≤ q ≤ 0.2 and 0.2 ≤ q ≤ 0.5, respectively.
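The memory coefficient can be computed directly from its definition. The sketch below (our own helper name, memory_coefficient) evaluates M for a single correlated event-modulated Poisson process with gamma-distributed rates; the parameter values are illustrative.

import numpy as np

def memory_coefficient(taus):
    """Memory coefficient M for a sequence of inter-event times."""
    x, y = taus[:-1], taus[1:]
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

# Inter-event times of a single correlated process: keep lambda with probability q.
rng = np.random.default_rng(0)
alpha, kappa, q, n = 1.0, 1.0, 0.5, 100_000
lam = rng.gamma(alpha, kappa)
taus = np.empty(n)
for k in range(n):
    taus[k] = rng.exponential(1.0 / lam)
    if rng.random() > q:
        lam = rng.gamma(alpha, kappa)
print(memory_coefficient(taus))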

7. Epidemic Processes.
Previous numerical efforts suggested that the dynamics of epidemic processes in well-mixed populations or networks were altered if contact events were generated by non-Poissonian renewal processes [34,53,61,68]. The nMGA and the Laplace Gillespie algorithm can be used for implementing such models of epidemic processes. To demonstrate the use of the Laplace Gillespie algorithm, we simulate a node-centric susceptible-infected-recovered (SIR) epidemic process model, which is similar to previous models [57,61,62].
Consider a static network composed of N nodes. At any point in time, each node assumes one of the three states: susceptible, infected, or recovered. An infected node i transmits the disease to a susceptible node j upon the activation of link (i, j).
To activate links, we initially assign to each node i (1 ≤ i ≤ N) an independent and identical point process whose probability density of inter-event times is given by ψ(τ). When an event occurs at node i, we select a neighbor of i, denoted by j, with equal probability and activate link (i, j). If either i or j is infected and the other is susceptible, the disease is transmitted such that the susceptible node becomes infected. An infected node transitions to the recovered state according to a Poisson process of rate μ. A recovered node neither infects nor is infected by other nodes. The mean time to node activation, which enables infection, is given by ⟨τ⟩ = ∫_0^∞ τ ψ(τ) dτ. The mean time for an infected node to recover is equal to 1/μ. We define the effective infection rate by λ_eff = (1/μ)/⟨τ⟩ [9]. We control λ_eff by changing μ for a given ψ(τ). This definition is justified because multiplying ⟨τ⟩ and 1/μ by the same factor only changes the time scale of the dynamics.
We assume an equilibrium point process, i.e., we start simulations from the equilibrium state. This is equivalent to drawing the first event time for each node from the waiting-time distribution, ψ_w(τ), rather than from ψ(τ), and drawing subsequent event times from ψ(τ). The population structure is assumed to be either well mixed (i.e., a complete graph) or a regular random graph in which all nodes have degree five and all links are randomly formed. In both cases, we set N = 10^4. Each simulation starts from the same initial condition, in which a particular node, which is the same in all simulations, is infected and all the other N − 1 nodes are susceptible. We measure the number of recovered nodes at the end of the simulation normalized by N, called the final size, averaged over 10^4 simulations. We consider four point processes for node activation: the exponential distribution of inter-event times, corresponding to a Poisson process, and three processes with the power-law distribution of inter-event times given by (12) with α = 1.5, κ = 1, and q = 0, 0.2, and 0.9, respectively.
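For concreteness, the following is a minimal Python sketch of a node-centric SIR simulation driven by the (correlated) Laplace Gillespie algorithm. The function name, the small example network, and the parameter values are our own illustrative choices, not the authors' setup; time increments are not tracked because the final size depends only on the order of events.

import numpy as np

def sir_laplace_gillespie(adj, alpha, kappa, q, mu, seed_node, rng):
    """Node-centric SIR model on a static network. Node activations follow the
    power-law inter-event time distribution (12), generated with the correlated
    Laplace Gillespie algorithm; recovery is Poissonian with rate mu.
    adj : list of neighbor lists. Returns the final size (fraction of recovered nodes)."""
    n = len(adj)
    S, I, R = 0, 1, 2
    state = np.full(n, S)
    state[seed_node] = I
    n_inf = 1
    # Equilibrium initial condition: first rates from p_w(lambda) = Gamma(alpha - 1, kappa)
    # (requires alpha > 1); subsequent rates are redrawn from p(lambda) = Gamma(alpha, kappa).
    lam_act = rng.gamma(alpha - 1.0, kappa, size=n)
    while n_inf > 0:
        act_sum = lam_act.sum()
        total = act_sum + mu * n_inf
        if rng.random() < act_sum / total:
            # Activation event at node i: activate the link to a uniformly chosen neighbor j.
            i = rng.choice(n, p=lam_act / act_sum)
            j = adj[i][rng.integers(len(adj[i]))]
            if (state[i] == S and state[j] == I) or (state[i] == I and state[j] == S):
                state[i if state[i] == S else j] = I   # transmission
                n_inf += 1
            if rng.random() > q:                       # redraw lambda_i with probability 1 - q
                lam_act[i] = rng.gamma(alpha, kappa)
        else:
            # Recovery event at a uniformly chosen infected node.
            infected = np.flatnonzero(state == I)
            state[infected[rng.integers(len(infected))]] = R
            n_inf -= 1
    return (state == R).mean()

# Small illustrative example: a ring network in which every node has four neighbors.
n = 300
adj = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
rng = np.random.default_rng(0)
print(sir_laplace_gillespie(adj, alpha=1.5, kappa=1.0, q=0.0, mu=0.3, seed_node=0, rng=rng))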
The final size for the well-mixed population and the regular random graph is shown in Figures 3(a) and 3(b), respectively. For both population structures, and across the entire range of the effective infection rate, λ_eff, the final size is larger when ψ(τ) is the power-law distribution than when it is the exponential distribution. Consistent with this result, the epidemic threshold, i.e., the value of λ_eff at which the final size becomes positive, is smaller for the power-law ψ(τ) than for the exponential ψ(τ).
The final size is larger with a strong positive correlation between inter-event times (q = 0.9) than with no correlation (q = 0). Results for q = 0.2 are almost identical to those for q = 0. Because realistic values of the memory coefficient, M, for human activity are produced by 0 ≤ q ≤ 0.2 (section 6), we conclude that a realistic amount of positive correlation in inter-event times hardly affects the final size.
8. Discussion. We have provided a generalization of the Gillespie algorithm for non-Poissonian renewal processes, which we call the Laplace Gillespie algorithm. Our algorithm is exact for any number of processes running in parallel and is faster than the nMGA [9]. Although it is only applicable to renewal processes whose survival function is completely monotone, it applies to several renewal processes of interest. We have also proposed a method to simulate nonrenewal point processes with tunable positive correlation between inter-event times.
Previous studies numerically explored Poissonian explanations of long-tailed distributions of inter-event times. Examples include a nonhomogeneous Poisson process whose event rate switches between two values and is also periodically modulated [47]. Another example is a self-exciting Hawkes process with an exponential memory kernel [52]. We have shown that a power-law distribution of inter-event times (12) is generated when the rate of an event-modulated Poisson process is drawn from the gamma distribution upon the occurrence of every event. This observation provides a theoretical underpinning of the fact that nonhomogeneous Poisson processes and Hawkes processes can generate long-tailed distributions of inter-event times. In other words, switching between different rates is a general mechanism to produce long-tailed distributions of inter-event times. Although the present results indicate that, in theory, we require a mixture of an infinite number of Poisson processes of different rates to produce a power-law distribution, in practice a small number of Poisson processes may be sufficient. In fact, a mixture of a small number of exponential distributions is sometimes employed to fit empirical distributions of inter-event times [29,35,36,58].
We have applied the Laplace Gillespie algorithm to an epidemic model in wellmixed and networked populations. The applicability of the Laplace Gillespie algorithm, as well as of the modified next reaction method [3] and of the nMGA, extends far beyond epidemic modeling. In fact, these algorithms can simulate systems of earthquakes, spiking neurons, financial time series, crimes, and so on (see section 1 for references). In particular, empirical data corresponding to these applications suggest long-tailed distributions of inter-event times (section 1), thus yielding a CV (coefficient of variation, i.e., the standard deviation divided by the mean) larger than unity, and therefore not excluding the use of the Laplace Gillespie algorithm. It is also straightforward to include births and deaths of nodes [6,61] and links [32] of contact networks, or rewiring of links [28,71], as long as these events obey renewal processes or nonrenewal point processes with positive correlation, as emulated by the correlated Laplace Gillespie algorithm.
(ii) CV is smaller than unity. Complete monotonicity implies that the CV of τ is larger than or equal to unity [75]. This condition again excludes the gamma and Weibull distributions with α > 1. In practice, a CV value less than unity indicates that events occur more regularly than in a Poisson process, which would yield CV = 1. Therefore, renewal processes producing relatively periodic event sequences are also excluded.
In epidemiology, evidence suggests that empirical recovery times are less dispersed than the exponential distribution, implying a CV value less than unity. Therefore, a gamma distribution with shape parameter α > 1 or even a delta distribution is often employed instead [45,72]. These distributions cannot be simulated by our algorithm.
Empirical evidence from online correspondence of humans suggests that, apart from very small values of τ, the distribution of τ follows a power law for small τ and decays exponentially for large τ [73]. Such a ψ(τ) monotonically decreases with τ, such that dψ(τ)/dτ ≤ 0. However, the sign of d²ψ(τ)/dτ² depends on the value of τ, such that the corresponding survival function is not completely monotone.
where

(40) Ψ_j(Δt|t_j) = ∫_{Δt}^∞ ψ_j(τ|t_j) dτ = Ψ_j(t_j + Δt) / Ψ_j(t_j)

is the probability that the time until the next event for the hypothetically isolated jth process is larger than Δt, conditioned on the assumption that the last event has occurred time t_j before. Using (38) and (40), we rewrite (39), i.e., the probability that the ith process generates the next event within time [Δt, Δt + dΔt), as

(41) λ_i(t_i + Δt) Φ(Δt|{t_j}) dΔt,

where Φ(Δt|{t_j}) is given by (4). Equation (4) represents the probability that no process generates an event for time Δt. By equating this quantity to u, a random variate over the unit interval, we can determine Δt, i.e., the time until the next event in the entire population of the N renewal processes. Equation (41) implies that, once Δt is determined, λ_i(t_i + Δt) = ψ_i(t_i + Δt)/Ψ_i(t_i + Δt) is the instantaneous rate of the ith process and is proportional to the probability that the ith process generates this event. Therefore, the exact Gillespie algorithm for general renewal processes is produced as given in section 3.