Splitting methods in the numerical integration of non-autonomous dynamical systems

We present a procedure leading to efficient splitting schemes for the time integration of explicitly time-dependent partitioned linear differential equations, such as those arising when certain partial differential equations are first discretized in space. In the first stage we analyze the order conditions of the corresponding autonomous problem and construct new sixth-order methods. In the second stage, by following a procedure previously designed by the authors, we generalize the methods to the time-dependent case in such a way that no order reduction is present. The resulting schemes compare favorably with other integrators previously available.


Introduction
The evolution of many physical systems is usually described by an ordinary differential equation (ODE)

x′ = f(x), x(0) = x_0, (1)

whose formal solution can be written as x(t) = e^{t D_f} x_0. Here D_f stands for the Lie derivative associated with f(x), i.e. D_f ≡ f(x) · ∇. Describing the physical problem at hand then requires formulating an appropriate mathematical model in this setting; in other words, finding a suitable function f(x) such that (1) reproduces the most salient features of the real system. In this respect, since the real system often involves one or more symmetries, and these symmetries can be mathematically formulated in terms of Lie groups, a necessary condition is that the Lie derivative D_f possesses the appropriate Lie-algebraic structure. Of course, one is also interested in solving (1), but with the exception of a very few simple cases, only numerical approximations are usually available. Generally speaking, standard numerical integrators (Runge-Kutta formulae, linear multistep methods) produce approximate solutions that do not take into account the special algebraic structure of f and therefore do not preserve the corresponding symmetries. In consequence, much effort has been devoted during the last two decades to the design of numerical integrators preserving those qualitative (geometric) properties of the exact solution. Examples of such algorithms include symplectic integrators, volume-preserving methods, Lie-group integrators, variational methods in mechanics, etc. [13,14,20]. All of them are now put into the more general category of geometric numerical integrators. In geometric integration, in fact, it is crucial to identify significant (geometric) properties of the dynamical system (1) and construct numerical integration algorithms that preserve those features. In addition, of course, one is interested in building efficient methods with the usual properties of accuracy and stability.
Although research in geometric numerical integrators for differential equations has experienced a tremendous boost during the last decades, it is fair to say that it has been mainly restricted to autonomous problems, whereas nonlinear systems of the form

x′ = f(x, t), (2)

i.e., when time appears explicitly in the formulation of the problem, have been up to some point disregarded. In the linear case X′ = A(t)X, with A and X n × n matrices, several options have been widely explored, and indeed the Magnus expansion has been shown to be an extremely useful device to get analytical as well as numerical approximations [7]. A usual procedure in the numerical analysis of explicitly time-dependent problems consists in transforming (2) into an autonomous differential equation by the introduction of a new variable x_t satisfying

x_t′ = 1, x_t(0) = t_0, (3)

or, equivalently,

y′ = F(y), (4)

with y = (x, x_t) and F(y) = (f(x, x_t), 1). Notice that formulation (3) introduces the auxiliary variable x_t aimed at eliminating the explicit time dependence, so that numerical integrators designed for (1) can, in principle, be used in this setting. This process exhibits, nevertheless, several drawbacks. First, the algebraic structures of D_f and D_F may differ, so that methods specifically designed for (1) cannot simply be used for the integration of the new enlarged system (4). Second, even when they can be applied, their efficiency is very often considerably reduced. For this reason, during the last years the authors have analyzed different procedures to adapt effective splitting schemes to systems that are explicitly time-dependent without losing their efficiency (see, e.g., [1,3,9]). In the particular but important case of the linear differential equations arising from the space discretization of the Schrödinger equation, it is shown in [3] how to generalize methods previously designed in [11] when an explicit time dependence is introduced.
The results achieved in [3] motivate the search for new and more powerful integration methods for time-independent systems especially designed to be used in the non-autonomous case. The goal of the present paper is precisely this: to carry out a systematic analysis of the order conditions to be satisfied by splitting methods when integrating partitioned linear systems, to which the techniques presented in [3] are subsequently applied to render schemes also valid in the time-dependent case. The plan of the paper is as follows. In Sect. 2 we briefly review some of the available techniques to adapt splitting methods designed for autonomous problems to the explicitly time-dependent case. In Sect. 3 we focus on partitioned linear systems of differential equations arising, in particular, in the time integration of the Schrödinger equation previously discretized in space. There we present a procedure to derive and solve the resulting order conditions for methods of order ≤ 6, to which the approach presented in [3] is applied to construct efficient splitting schemes in the time-dependent case. The validity of the treatment is then illustrated in Sect. 4, where the performance of a new sixth-order method constructed along these lines is compared with other standard integrators.

General treatment
Although splitting methods have been used for a long time in the numerical treatment of differential equations, they have experienced a revival with the advent of geometric integration; in fact, a good deal of geometric integrators are based on the idea of splitting. The idea is fairly simple: suppose f in Eq. (1) can be decomposed as f = f^[A] + f^[B], in such a way that each of the sub-problems

x′ = f^[A](x), x′ = f^[B](x) (5)

can either be solved in closed form or accurately integrated. Then one combines these partial solutions into an approximate solution for (1), often of high accuracy. To make this statement precise, let us denote by ϕ_h^[A], ϕ_h^[B] the flows corresponding to Eqs. (5) and consider the composition

ψ_h = ϕ_{b_m h}^[B] ∘ ϕ_{a_m h}^[A] ∘ ··· ∘ ϕ_{b_1 h}^[B] ∘ ϕ_{a_1 h}^[A], (6)

with appropriately chosen real coefficients a_i, b_i to ensure that ψ_h is an approximation of order p to the exact flow ϕ_h with respect to the time step h, i.e. ψ_h = ϕ_h + O(h^{p+1}).
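To make the composition (6) concrete, the following sketch (our own illustration, not taken from the paper; the flows, coefficients, and step size are example choices, with leapfrog fractions) applies a generic splitting step to the harmonic oscillator split into its kinetic and potential parts:

```python
import numpy as np

def splitting_step(phi_A, phi_B, a, b, x, h):
    """One step of a composition like (6): alternate the exact flows of
    the two sub-systems with step fractions a_i*h and b_i*h."""
    for ai, bi in zip(a, b):
        x = phi_A(ai * h, x)
        x = phi_B(bi * h, x)
    return x

# Example: harmonic oscillator q' = p, p' = -q split into two parts,
# each with an exact (shear) flow.
phi_A = lambda t, x: np.array([x[0] + t * x[1], x[1]])   # q' = p
phi_B = lambda t, x: np.array([x[0], x[1] - t * x[0]])   # p' = -q

# Strang/leapfrog fractions: a = (1/2, 1/2), b = (1, 0), order p = 2
a, b = [0.5, 0.5], [1.0, 0.0]
x = np.array([1.0, 0.0])
h, n = 0.01, 628                  # integrate up to t = 6.28
for _ in range(n):
    x = splitting_step(phi_A, phi_B, a, b, x, h)
# exact solution: q(t) = cos(t), p(t) = -sin(t)
```

The same driver works for any coefficient set (a_i, b_i); only the two partial flows change from problem to problem.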
A great number of methods of this class exist in the literature, of different orders and tailored to different structures of the vector field: general separable problems, systems arising from second-order differential equations x″ = g(x), near-integrable systems, etc. (see [5,13,15,17] and references therein). One could say that the performance of the different splitting methods strongly depends on the particular problem at hand, so that a previous analysis is highly recommended to select the most appropriate scheme for its numerical treatment [5]. The non-autonomous separable problem is indeed a case in point. One might think of two procedures to adapt scheme (6) to this setting. The first one consists in replacing the maps ϕ_{a_i h}^[A], ϕ_{b_i h}^[B] by the maps associated to the exact flows of the non-autonomous equations x′ = f^[A](x, t) and x′ = f^[B](x, t) over the corresponding sub-intervals of length a_i h and b_i h (Eqs. (8) and (9)). Here c_i = Σ_{j=0}^{i−1} a_j, d_i = Σ_{j=0}^{i−1} b_j, with a_0 = 0, b_0 = 0, and the initial conditions are given by the solution obtained from the previous flow. This approach can be considered as a time-average on each stage of the composition. Obviously, obtaining the exact solution of the non-autonomous equations (8) and (9) is by no means trivial due to the explicit time dependence. In any case, the formal solution can be obtained by using the Magnus expansion, as shown in [7].
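The intermediate coefficients c_i, d_i are simply cumulative sums of the a_j, b_j; a minimal sketch (leapfrog fractions used as a hypothetical example) computes them directly:

```python
import numpy as np

# Intermediate time offsets for the 'averaging' technique:
# c_i = a_0 + ... + a_{i-1} (with a_0 = 0), and analogously d_i.
a = np.array([0.5, 0.5])          # leapfrog fractions, as a toy example
b = np.array([1.0, 0.0])
c = np.concatenate(([0.0], np.cumsum(a)[:-1]))
d = np.concatenate(([0.0], np.cumsum(b)[:-1]))
```

For a consistent method the fractions sum to 1, so the last sub-interval ends exactly at t + h.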
The second procedure is perhaps simpler. It consists in taking the maps ϕ_{a_i h}^[A], ϕ_{b_i h}^[B] in (6) as the (a_i h)-flow and the (b_i h)-flow associated, respectively, to the autonomous equations

x′ = f^[A](x, t + d_i h), (10)
x′ = f^[B](x, t + c_i h). (11)

Notice that the coefficients c_i, d_i appear interchanged in the vector fields with respect to (8) and (9). These two strategies, which could be dubbed the 'averaging' and the 'frozen' technique, respectively, may differ considerably both in the accuracy attained by the methods and in their computational cost. Let us illustrate them on a simple but important example arising in applications.
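As an illustration of the 'frozen' technique (a sketch under our own choices: Strang splitting, midpoint freezing, and a Mathieu-type frequency, none of them taken from the paper), one can check numerically that freezing the time dependence at intermediate times retains the second order of the basic scheme:

```python
import numpy as np

def frozen_strang_step(t, x, h, omega2):
    """One Strang/leapfrog step for q' = p, p' = -omega2(t) q, with the
    time-dependent frequency 'frozen' at the midpoint of the step."""
    q, p = x
    w2 = omega2(t + 0.5 * h)      # freeze the explicit time dependence
    q = q + 0.5 * h * p           # half A-flow: q' = p
    p = p - h * w2 * q            # full B-flow with frozen coefficient
    q = q + 0.5 * h * p           # half A-flow
    return np.array([q, p])

def integrate(h, T, omega2):
    x, t = np.array([1.0, 0.0]), 0.0
    for _ in range(int(round(T / h))):
        x = frozen_strang_step(t, x, h, omega2)
        t += h
    return x

omega2 = lambda t: 1.0 + 0.1 * np.cos(t)   # Mathieu-type frequency
ref = integrate(1e-4, 2.0, omega2)         # fine-step reference solution
e1 = np.linalg.norm(integrate(0.05, 2.0, omega2) - ref)
e2 = np.linalg.norm(integrate(0.025, 2.0, omega2) - ref)
# halving h should divide the error by roughly 4 (second order)
```

The error ratio e1/e2 close to 4 confirms that no order reduction occurs for this second-order scheme when the potential is frozen at the midpoint of each step.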

Linear non-autonomous separable systems
When discretizing in space the time-dependent Schrödinger equation involving a time-dependent potential V (t) (for instance, with a pseudospectral method), the following system of ODEs arises:

i u′ = H(t) u, (12)

where u ∈ C^N and H is a real symmetric matrix. If the real and imaginary parts of u are considered, u = q + i p, the N-dimensional linear complex system (12) can be written as the 2N-dimensional real system

q′ = H(t) p, p′ = −H(t) q. (13)

These, in fact, can be interpreted as the classical Hamilton equations corresponding to the Hamiltonian

H(q, p, t) = (1/2) p^T H(t) p + (1/2) q^T H(t) q. (14)

Although we limit ourselves to this problem, most of the discussion is also valid, with minor modifications, for the more general system

q′ = M(t) p, p′ = N(t) q, (15)

for q ∈ R^{d_1}, p ∈ R^{d_2}. Equation (15) arises, in particular, when the Maxwell equations are discretized in space [19]. Equations (13) can be written in the compact form

z′ = (A(t) + B(t)) z, (16)

where z = (q, p)^T and

A(t) = ( 0 H(t) ; 0 0 ), B(t) = ( 0 0 ; −H(t) 0 ). (17)

Let us consider first the autonomous problem, i.e., when H does not depend explicitly on t, in which case the corresponding equations possess the formal solution

z(t) = e^{t(A+B)} z(0). (18)

Typically, as a result of the discretization in space, the value of N is large, and thus the exact computation of the exponential is exceedingly costly. In consequence, it makes sense to construct approximations requiring a much reduced computational effort. This can be achieved when the scheme only involves the computation of Hq and H p in a particular sequence and only a few times per step. But this is precisely what splitting methods effectively do, as becomes evident when the composition (6) is written for this particular problem: each stage reduces to the updates q ← q + a_i h H p and p ← p − b_i h H q (Eq. (20)). The scheme requires m matrix-vector products Hq and H p (the last product at each step can be reused in the first stage of the following step) and is referred to as an m-stage method.
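A minimal sketch of such an m-stage scheme (our own illustration; the random symmetric matrix H, the leapfrog coefficients, and the step size are arbitrary choices) shows that only matrix-vector products H p and H q are needed, and compares the result with the exact solution u(t) = e^{−itH} u_0 of the autonomous version of (12):

```python
import numpy as np

def split_step(q, p, H, h, a, b):
    """One step of the splitting for q' = H p, p' = -H q:
    only matrix-vector products H p and H q appear."""
    for ai, bi in zip(a, b):
        q = q + ai * h * (H @ p)
        p = p - bi * h * (H @ q)
    return q, p

rng = np.random.default_rng(0)
N = 16
M = rng.standard_normal((N, N))
H = 0.05 * (M + M.T)                       # small real symmetric matrix

q, p = rng.standard_normal(N), rng.standard_normal(N)
u0 = q + 1j * p

h, nsteps = 0.05, 100                      # integrate up to t = 5
for _ in range(nsteps):
    q, p = split_step(q, p, H, h, [0.5, 0.5], [1.0, 0.0])

# exact solution of i u' = H u is u(t) = exp(-i t H) u0
w, V = np.linalg.eigh(H)
u_exact = V @ (np.exp(-1j * h * nsteps * w) * (V.T @ u0))
err = np.linalg.norm((q + 1j * p) - u_exact)
```

In a real application H is never diagonalized; the eigendecomposition here only serves to produce a reference solution for the comparison.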
Although any of the splitting methods for separable systems collected in [13,15,17,18] can be used for carrying out numerical integrations here, Eq. (16) in the autonomous case possesses the following crucial simplifying property:

[A, [A, [A, B]]] = [B, [B, [B, A]]] = 0, (21)

where [·, ·] stands for the usual commutator: [X, Y] = XY − YX. This feature allows one to build very efficient methods indeed [2,3,11]. Moreover, it has been shown that any nonsymmetric method for this problem is conjugate to a symmetric method [4], so that one may restrict the analysis to symmetric compositions, i.e., to coefficients satisfying a_{m+2−i} = a_i, b_{m+1−i} = b_i. The resulting scheme is sometimes referred to as an ABA composition. Since the roles of A and B can be naturally interchanged, BAB compositions are not studied separately. Let us turn our attention now to the time-dependent case. As is well known, the Magnus expansion allows one to formally write the solution of (16) as

z(t) = e^{Ω(t)} z(0), (22)

where Ω(t) is given by an infinite series involving A(t), B(t), multivariate integrals and nested commutators, with a finite radius of convergence [7,16]. Although it is indeed possible to derive numerical integration algorithms from the Magnus expansion [8], they still require the computation of the exponential of a full matrix of high dimension involving iterated integrals and commutators. We apply instead the two procedures pointed out in Sect. 2.1. The first one (the 'averaging' technique) uses composition (6) with maps corresponding to the exact solutions of (8)-(9), which for this particular problem read (taking t_0 = 0): since the matrices A(t) (respectively B(t)) evaluated at different times commute among themselves, each stage amounts to replacing a_i h H in (20) by the integral of H(s) over [c_i h, (c_i + a_i) h], and analogously for b_i h H. Notice that the resulting scheme can be seen as the composition method (6) applied to the autonomous Hamiltonian obtained by considering time as two different additional coordinates.
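The vanishing of the triple nested commutators behind this simplifying structure is easy to confirm numerically for the block matrices A and B associated with (16) (a sketch with a random symmetric H, our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
M = rng.standard_normal((N, N))
H = M + M.T                                  # real symmetric matrix
Z = np.zeros((N, N))

# Block matrices of the split z' = (A + B) z, z = (q, p):
A = np.block([[Z, H], [Z, Z]])               # generates q' = H p
B = np.block([[Z, Z], [-H, Z]])              # generates p' = -H q

comm = lambda X, Y: X @ Y - Y @ X

c3a = comm(A, comm(A, comm(A, B)))           # [A,[A,[A,B]]]
c3b = comm(B, comm(B, comm(B, A)))           # [B,[B,[B,A]]]
```

Both triple commutators vanish identically (the double commutators are block matrices that commute with A and B, respectively), which is what reduces the number of independent order conditions for this class of problems.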
On the other hand, with the 'frozen' technique (10) and (11) one has the stage equations with H frozen at the corresponding intermediate times (Eq. (25)), and the resulting scheme, as before, is nothing but composition (6) applied to the extended autonomous Hamiltonian (26). Now the Hamilton equations are no longer linear and, moreover, the simplifying property (21) no longer holds in terms of the Poisson bracket. In consequence, the highly efficient schemes of type (20) designed for systems verifying (21) lose their appealing accomplishments when applied in the non-autonomous case.
Another possibility is suggested in [9]: a combination of (24) and (26) of the form (27), or equivalently (28), with associated maps (29), so that Runge-Kutta-Nyström methods can be used (even with modified potentials) [5]. Significant improvements in efficiency with respect to the previous choices can thus be achieved.

A new class of splitting methods for non-autonomous linear systems
Our purpose here is to generalize the 'averaging' technique (23) by using the Magnus expansion, and to formulate splitting methods of the form (20) directly in terms of the new maps, in such a way that the resulting schemes do not suffer from a degradation in their performance. More specifically, the new methods have the form (for a time step of size h)

with Ã_i(t, h), B̃_i(t, h) taken as the weighted averages (31) of A and B over the time step.
Here t_{1/2} = t + h/2 and p_{a_i}(τ), p_{b_i}(τ) are filters (scalar functions). As we will see in the sequel, to get integrators of even order n = 2s it is enough to consider p_{a_i}, p_{b_i} as polynomials of degree s − 1. Since in our case the time dependence enters through the matrices A(t), B(t), the averages (31) reduce to integrals of scalar functions over the time step. These integrals can either be computed analytically or numerically approximated by using some appropriate quadrature rule. If the method (32) is of order 2s, then a quadrature rule of order 2s or higher must be used to retain the original order.
The numerical scheme is then determined by the values of the coefficients a_i^{[k]}, b_i^{[k]} (k = 0, …, s − 1; i = 1, …, m). The problem of designing efficient methods of a prescribed order 2s is then equivalent to determining coefficients such that the composition (30) achieves the desired order of accuracy and, at a given cost, provides the most accurate results among a number of possible choices. The method is of order 2s if the coefficients a_i^{[k]}, b_i^{[k]} satisfy a system of algebraic equations (the order conditions), which have to be first formulated and then solved (usually by numerical tools). A subset of such order conditions corresponds precisely to the particular case where H(t) does not actually depend on t, so that the matrix (32) reduces to the matrix of the autonomous scheme (20). When constructing a method of order 2s, we proceed as follows: (i) we first determine the values of the coefficients a_i, b_i in such a way that (20) gives a good method of order 2s for the autonomous case, and (ii) we then choose the coefficients a_i^{[k]}, b_i^{[k]} so that the remaining order conditions hold (once the relation between the coefficients a_i, b_i and a_i^{[k]}, b_i^{[k]} is taken into account).

Order conditions for the autonomous case
When considering the matrix (20) used to propagate the numerical solution in the autonomous case (18), one observes that diagonalizing the matrix H with an appropriate linear change of variables transforms the system into N uncoupled harmonic oscillators with frequencies ω_1, …, ω_N. Although in practice one wants to avoid diagonalizing the matrix H, numerically solving the system (18) by a splitting method is mathematically equivalent to applying the splitting method to each of these harmonic oscillators (and then rewriting the result in the original variables). Clearly, the exact solution of each individual harmonic oscillator with frequency ω is propagated by the 2 × 2 rotation matrix

O(ωh) = ( cos(ωh) sin(ωh) ; −sin(ωh) cos(ωh) ).

As for the numerical solution, it is propagated by a matrix K(ωh), defined as the corresponding product (35) of the shear matrices of the individual stages. It is straightforward to check that the entries of K(x) are polynomials in x whose coefficients k_{i,j} are homogeneous polynomials in the parameters a_i, b_i. The integrator (35) will typically be stable for |hω| < x* for some value x* (which we call the stability threshold) depending on the coefficients a_i, b_i. It has been shown [4] that if x* > 0 for a given splitting method, then the method applied to (18), for H a constant matrix, is conjugate, for hρ(H) < x*, to the solution of a modified system in which H is replaced by a series of the form H + φ_3 h^2 H^3 + φ_5 h^4 H^5 + ··· for some constants φ_{2i+1}, i = 1, 2, …, provided that the method is of order 2n for the harmonic oscillator (see [4] for more details). Here ρ(H) denotes the spectral radius of H. We thus intend to construct accurate symmetric schemes with large stability intervals (−x*, x*). Notice that, for a fair comparison of the stability intervals of splitting methods with different numbers of stages, one must consider the relative stability threshold x*/m.
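The stability threshold x* can be estimated directly from the stability matrix K(x), built as a product of shear matrices (a sketch with our own scan-based estimate; leapfrog coefficients are shown, for which the exact threshold x* = 2 is known):

```python
import numpy as np

def K(x, a, b):
    """Stability matrix of the splitting applied to a harmonic
    oscillator with frequency omega, where x = omega * h."""
    M = np.eye(2)
    for ai, bi in zip(a, b):
        M = np.array([[1.0, ai * x], [0.0, 1.0]]) @ M   # A-shear
        M = np.array([[1.0, 0.0], [-bi * x, 1.0]]) @ M  # B-shear
    return M

def stability_threshold(a, b, xmax=20.0, dx=1e-3):
    """Scan for the largest x* with |trace K(x)| <= 2 on (0, x*)."""
    x = dx
    while x < xmax and abs(np.trace(K(x, a, b))) <= 2.0 + 1e-12:
        x += dx
    return x

# Leapfrog: trace K(x) = 2 - x**2, hence x* = 2
xs = stability_threshold([0.5, 0.5], [1.0, 0.0])
```

Since det K(x) = 1 for any shear product, the condition |trace K(x)| ≤ 2 is the standard criterion for the eigenvalues of K(x) to stay on the unit circle.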
For this class of schemes, the elements of the stability matrix have to satisfy certain polynomial conditions. Since we are dealing with symmetric compositions, we found it more appropriate (due to the ill-conditioned equations to be solved numerically) to consider the decomposition

K(x) = U(−x)^{−1} U(x), (41)

where U_1, U_2, U_3, U_4 denote the entries of U(x), with U_1, U_4 even polynomial functions and U_2, U_3 odd polynomial functions. Since we are interested in matrices K, U that can be decomposed as products (35), they must in addition satisfy the compatibility condition (42). Whence, one has an approximation of order 2s if conditions (43) hold. Observe that to obtain a method of order 2n one needs a composition with m ≥ 2n − 1 stages, as already noticed in [11]. The matrix U has 4n − 2 parameters that can be used to solve the required system of 4n − 2 equations: indeed, equations (43) originate 2n linear equations and condition (42) gives 2n − 2 quadratic equations. However, the methods with the minimal number of stages obtained by solving these 4n − 2 equations have stability thresholds x* ≈ π, and thus the relative stability threshold x*/m ≈ π/(4n − 2) becomes very small for high-order methods.
A simple trick to get methods with a larger relative stability threshold is to add the interpolation condition (44) for some positive integer l. For moderate values of l relative to s, this typically gives a method with stability threshold x* > lπ. In addition to improving stability, due to its interpolatory nature condition (44) also contributes to improving the precision of the method when applied to (18). In terms of the matrix U(x), condition (44) reads as condition (45). Given positive integers s, l, we impose conditions (42), (43), and (45) on the matrix U(x), which gives a system of 4(n + l) − 2 linear and quadratic equations in terms of the coefficients of the polynomials U_j(x), j = 1, 2, 3, 4. The required number of free parameters can be obtained by choosing the degrees d(U_j(x)) appropriately, where d(P(x)) denotes the degree of the polynomial P(x). For a given matrix U(x) satisfying the required conditions, if there exists a splitting method associated with the matrix (41) (when it exists, it is unique [4]), then in general it will have m = 2(n + l) − 1 stages. We have obtained (with the help of the software Mathematica) all the solutions of the equations corresponding to moderate values of n and l (n + l ≤ 6). For each n and l, we choose among all the real solutions of the corresponding system of polynomial equations the best methods with respect to suitable criteria based on the rigorous error estimates (for the application to (18)) derived in [6]. Once an appropriate matrix U(x) is chosen for given n and l, we compute the coefficients {a_1, b_1, a_2, b_2, …} of the splitting scheme corresponding to K(x) = U(−x)^{−1} U(x) by following the algorithm presented in [4]. We collect in Table 1 the coefficients of two of the best methods obtained in this way with m = 11 stages, corresponding to n = l = 3 and to n = 4, l = 2 (in the latter case the method is of order eight, at the cost of being slightly less stable due to the smaller value of l).

Determining the coefficients for the non-autonomous case
The order conditions to be satisfied by the coefficients a_i^{[k]}, b_i^{[k]}, i = 1, …, m, k = 0, 1, …, s − 1, for the polynomials in the scheme (33) to give a method of order p = 2s can be determined by following the approach of [3]. First one considers the formal solution of Eqs. (16) at time h as furnished by the Magnus expansion,

K(t, h) = exp(Ω(t, h)), Ω(t, h) = Σ_{k≥1} Ω_k(t, h),

[7,16]. The explicit expression of Ω_k(t, h) can be obtained by inserting into the recurrence defining the Magnus expansion a Taylor series of the matrices A(t) and B(t) around the midpoint t_{1/2} = t + h/2 (to take advantage of the time-symmetry property of the solution), i.e.,

A(t_{1/2} + τ) = Σ_{j≥1} (α_j / h^j) τ^{j−1}, B(t_{1/2} + τ) = Σ_{j≥1} (β_j / h^j) τ^{j−1},

with α_j = h^j A^{(j−1)}(t_{1/2})/(j − 1)! and β_j = h^j B^{(j−1)}(t_{1/2})/(j − 1)!.

Then each Ω_k(t, h) is a multiple integral of combinations of nested commutators containing k matrices A(t) and B(t).
Then, the Baker-Campbell-Hausdorff (BCH) formula is repeatedly applied to (30), so that K(t, h) is expressed as the exponential of only one operator, built from the matrices Ã_i(t, h), B̃_i(t, h) (i = 1, …, m) and nested commutators of these matrices. In consequence, the numerical scheme is of order p if this operator agrees with Ω(t, h) up to terms of order O(h^{p+1}). The analysis is simplified by imposing time symmetry on the composition (32), which leads to conditions (47) for i = 1, 2, …, m. Expanding Ã_i(t, h), B̃_i(t, h) in (31) in terms of α_j, β_j, it is clear that conditions (47) are fulfilled as soon as the coefficients a_i^{(n)}, b_i^{(n)} satisfy the symmetry relations (49) for n ≥ 1, i = 1, 2, …, m.

Table 1 Coefficients a_i^{(1)} for the 11-stage sixth-order methods with s = 3 and n = 3, l = 3 (SM 11 6), and n = 4, l = 2 (SM 11 86). The schemes are written as ABA time-reversible compositions. The corresponding coefficients a_i can be obtained from (58) for s = 3, and then using the coefficients r_{i,j}^{(3)} given in (56).
In the autonomous case one has α_1 = hA, β_1 = hB and α_j = β_j = 0 for j > 1, so that the scheme reduces to composition (20) with a_i = a_i^{(1)}, b_i = b_i^{(1)}. In any case, the actual choice of a_i^{(1)}, b_i^{(1)} plays an essential role in obtaining coefficients a_i, b_i leading to efficient methods, since the constant parts α_1 = hA(t_{1/2}), β_1 = hB(t_{1/2}) usually represent the dominant contributions to the evolution of the system.
Once a set of values for a_i^{(1)}, b_i^{(1)}, i = 1, …, m, satisfying the symmetry condition (49) is chosen, we have to determine the coefficients a_i^{(n)}, b_i^{(n)}, n ≥ 2, i = 1, …, m, which satisfy the remaining order conditions. This can produce different methods with the same values for the coefficients a_i^{(1)}, b_i^{(1)}. Let us now write Ã_i, B̃_i as linear combinations of the integrals A^{(j)}, B^{(j)} of the matrices A and B over the time step, for j = 0, …, s − 1.
At this point the following remark is worth stating. Suppose that b̂_i, c_i (i = 1, …, k) are the weights and nodes of a particular quadrature rule for integrals. Then it is possible to approximate all the integrals A^{(j)} (up to the required order) by using only the evaluations A_i at the nodes c_i of the quadrature rule required to compute A^{(0)}, as in (52), with A_i ≡ A(t_n + c_i h). In particular, if fourth- and sixth-order Gauss-Legendre quadrature rules are considered, we have s = k = 2 with b̂_1 = b̂_2 = 1/2, c_1 = 1/2 − √3/6, c_2 = 1/2 + √3/6. To order six we have s = k = 3 with b̂_1 = b̂_3 = 5/18, b̂_2 = 4/9, c_1 = 1/2 − √15/10, c_2 = 1/2, c_3 = 1/2 + √15/10. Now, a simple relationship can be established between the coefficients a_i^{(n)} of a given method and the coefficients multiplying the integrals A^{(j)}, by taking into account how the matrices A^{(j)}, B^{(j)} and α_j, β_j are related. Specifically, one has (neglecting higher-order terms) the linear relation (55) for 0 ≤ i ≤ s − 1, and an analogous expression relating B^{(i)} with β_i. If this relation is inverted (to order four, s = 2, and six, s = 3) we obtain the matrices (r_{ij}^{(2)}) ≡ (T^{(2)})^{−1} and (r_{ij}^{(3)}) ≡ (T^{(3)})^{−1} collected in (56). From this analysis we proceed as follows. We first compute the coefficients a_i, b_i by applying the procedure shown before for the autonomous case. Then we determine the coefficients a_i^{(n)}, b_i^{(n)} from these relations. In this work we have considered symmetric methods of order six. For the autonomous case the schemes necessarily have m ≥ 5 stages. In consequence, we have analyzed compositions with m = 5, 7, 9, 11, which corresponds to taking n = 3 and l = 0, 1, 2, 3 in Sect. 3.1, but we have also analyzed the case n = 4 with l = 0, 1, 2. In each case we have taken the optimal choices according to the criterion considered before.
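This reuse of quadrature nodes can be sketched as follows (our own illustration: the midpoint-centered moments below are an assumed form of the integrals A^{(j)}, and a scalar function stands in for A(t)); the same three sixth-order Gauss-Legendre evaluations serve for all the moments:

```python
import numpy as np

# Sixth-order Gauss-Legendre nodes and weights on [0, 1]
c = np.array([0.5 - np.sqrt(15) / 10, 0.5, 0.5 + np.sqrt(15) / 10])
bw = np.array([5 / 18, 4 / 9, 5 / 18])

def averaged_moments(f, t, h, s=3):
    """Approximate the scaled moments (an assumed, illustrative form)
        m_j = (1/h) * int_0^h ((tau/h) - 1/2)**j f(t + tau) dtau
    for j = 0..s-1, reusing the same s evaluations f(t + c_i h)."""
    fvals = f(t + c * h)              # only s evaluations per step
    return np.array([np.sum(bw * (c - 0.5) ** j * fvals)
                     for j in range(s)])

m = averaged_moments(np.cos, 0.0, 0.1)
```

For a smooth integrand the three-node rule reproduces these low-order moments essentially to machine accuracy on a step of this size, which is more than the order 2s = 6 required above.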
On the other hand, obtaining sixth-order methods for the non-autonomous case requires solving a system of 10 nonlinear equations in the variables a_i^{(2)}, b_i^{(2)} and an additional linear system of 8 equations in the variables a_i^{(3)}, b_i^{(3)}, subject to the symmetry (49). To obtain real solutions of these equations it was necessary to consider methods with at least 11 stages.
Here we have carried out an exhaustive search of solutions and examined how all of them behave in practice; the best choice turns out to be the set of coefficients which minimizes the sum of the absolute values of the coefficients. The corresponding sets are collected in Table 1. For future reference, we refer to these (optimal in the previous sense) methods as SM 11 6 and SM 11 86. The coefficients a_i can be obtained from (58) for s = 3, and then using the coefficients (r_{i,j}^{(3)}) given in (56). As a matter of fact, in [3] we already included in the numerical examples the results achieved by SM 11 6, but without indicating how the scheme was obtained.

Numerical examples
Our purpose in this section is to illustrate the performance of the new specially adapted 11-stage sixth-order splitting methods (SM 11 6 and SM 11 86) for partitioned non-autonomous linear systems. To do so, we carry out comparisons with other well-established general-purpose geometric schemes. Given a basic symmetric second-order method, we consider symmetric compositions of the basic method with fractional steps. The most efficient compositions of this family of orders 6 to 10 known to the authors (with real coefficients and without processor or corrector) correspond to the following methods: the 13-stage sixth-order (S 13 6), the 21-stage eighth-order (S 21 8), and the 35-stage tenth-order (S 35 10) methods obtained in [22]. The coefficients of S 35 10 are also collected, for instance, in [13, chapter V]. As basic second-order scheme we take the well-known leapfrog composition S_h^{[2]} = ϕ_{h/2}^[A] ∘ ϕ_h^[B] ∘ ϕ_{h/2}^[A]. The explicit time dependence is treated by taking time as two new coordinates, as shown in (10)-(11) or (25). We also consider the 11-stage sixth-order Runge-Kutta-Nyström splitting method RKN 11 6 given in [10] and adapted to explicitly time-dependent problems [9] similarly as in (28)-(29); it gives a performance similar to the scheme proposed in [1] but is simpler to implement. This method is also implemented taking time as two new coordinates, as in the S m p schemes, to illustrate the advantage of treating the time dependence appropriately (in this case it is referred to as RKN 11 64).
Since the main interest of this family of methods lies in the numerical integration of differential equations originating from space discretizations of PDEs, the computational cost of the methods is measured by the number of stages required. The integrals (31) appearing in the new scheme are approximated by the sixth-order Gauss-Legendre quadrature rule, as indicated in (52) and (54). Notice that with this quadrature rule only three evaluations of the time-dependent functions are required per step, whereas the cost of the algorithm is counted as 11 stages.
Perturbed harmonic oscillators. We first consider as a simple test-bench problem the Mathieu equation, q″ + (ω² + ε cos(t)) q = 0, with q ∈ R, which corresponds to a time-dependent linear harmonic oscillator with Hamiltonian

H(q, p, t) = (1/2) p² + (1/2) (ω² + ε cos(t)) q².

We take as initial conditions q(0) = 1, p(0) = 1, integrate up to t = 200π/ω and measure the average error in phase space (at t = 2π/ω, 4π/ω, …, 200π/ω) in terms of the number of force evaluations for different time steps (in logarithmic scale). We compare the relative error for ω = 3/2 with ε = 1/40 and ε = 1/4, and for ω = 5 with ε = 1/40 and ε = 4. The results are collected in Fig. 1. Since the kinetic part is time-independent, the RKN 11 6 method can be used in this case by just taking time as a new coordinate. The superiority of the new schemes is manifest for all accuracies of practical interest. We also observe that this superiority is more pronounced when the dominant contribution to the Hamiltonian originates from the constant part, since the new scheme is built to be highly efficient for small perturbations of the autonomous harmonic oscillator. In this setting the scheme SM 11 86 is superior, and it behaves as an eighth-order method in the case ω = 5, ε = 1/40. We observe that when the time-dependent functions are significant, the errors associated to the new method are dominated by the Magnus expansion treatment. This is an important observation to take into account when building new improved methods in this class.

The Schrödinger equation. Let us now consider the one-dimensional Schrödinger equation (in units where ℏ = 1)

i ∂ψ/∂t = ( −(1/(2μ)) ∂²/∂x² + V(x) + f(t) x ) ψ, (59)

with ψ(x, 0) = ψ_0(x). We take the Morse potential V(x) = D (1 − e^{−αx})² in a laser field described by f(t) x = A cos(ωt) x. This corresponds to the Walker-Preston model of a diatomic molecule in a strong laser field [23], used as a test bench for the numerical methods presented in [12,21]. The problem is defined in the interval x ∈ [−0.8, 4.32], which is split into N = 64 parts of length Δx = 0.08, and periodic boundary conditions are imposed.
After space discretization, Eq. (59) leads to the complex linear equation (12) with u ∈ C^N and u_k(t) = ψ(x_k, t)(Δx)^{1/2}, k = 0, 1, …, N − 1. Here x_k = x_0 + kΔx and H(t) = T + V̂(t) is a Hermitian (in fact, real and symmetric) matrix. As initial condition we take the ground state of the Morse potential, with γ = 2D/ω_0, ω_0 = α√(2D/μ), and σ a normalizing constant. Notice that V̂(t) is a diagonal matrix with elements V̂_jj = V(x_j) + f(t) x_j, and that Tq, Tp can be efficiently computed using FFTs [2,11]. Notice also that in (33) each stage involves the matrices h a_i^{(1)} T and h b_i^{(1)} V̂ together with the correction terms F_{a/b}(t, h) built from the integrals of the scalar function f(t); here X is a diagonal matrix with diagonal elements X_jj = x_j. The products H_i q and H_i p only require one FFT and its inverse, and thus an m-stage method requires 4m FFTs per step. The split (25) used for the general-purpose methods S m p was already proposed in [21], showing a clear improvement with respect to the second-order Magnus integrator (combined with a third-order splitting scheme) given in [12]. This split is also used for the RKN 11 6 method, but there it produces a fourth-order scheme. This is corrected by using the averaging (28)-(29), which only requires replacing the scalar function f(t) by its corresponding integral along each fractional time step in one part of the separable Hamiltonian system.
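The FFT-based product T u can be sketched as follows (our own illustration; μ = 1 is an arbitrary choice, not the paper's value, while the grid matches the interval and N = 64 quoted above):

```python
import numpy as np

# Pseudospectral kinetic-energy matvec: T u = -(1/(2*mu)) u_xx computed
# with FFTs on a periodic grid, so no dense N x N product is needed.
N, L = 64, 5.12                       # grid size and box length as above
x = -0.8 + L * np.arange(N) / N       # x_k = x_0 + k * dx, dx = 0.08
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def T_apply(u, mu=1.0):
    """One FFT and one inverse FFT per product T u (mu = 1 assumed)."""
    return np.fft.ifft((k ** 2 / (2 * mu)) * np.fft.fft(u))

# sanity check: a plane wave at an exact grid frequency is an eigenvector
k0 = 2 * np.pi * 3 / L
u = np.exp(1j * k0 * x)
Tu = T_apply(u)                       # should equal (k0**2 / 2) * u
```

Each product Tq or Tp costs one FFT plus one inverse FFT, consistent with the count of 4m FFTs per step for an m-stage method quoted above.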
First, we integrate the system in the time interval t ∈ [0, 2τ] with τ = 2π/ω. The exact solution u_ex(2τ) at the final time is obtained numerically using a sufficiently small time step. We measure the error in the wave function, ‖u_ex(2τ) − u(2τ)‖, versus the number of FFTs required by each method, and this is repeated for different values of the time step, starting with a very small time step and increasing it until reaching a value close to the stability limit of the method (an overflow appears if the time step is increased slightly further). We repeated the same experiment for a longer integration interval, t ∈ [0, 200τ]. Figure 2 shows the efficiency plots for the methods. The superiority of the new splitting methods is manifest both with respect to efficiency and to the stability limit. In addition, we observe that this superiority increases for longer integration intervals. This constitutes indeed an interesting property of the new methods, which is currently under investigation [6]. Scheme SM 11 6 is more stable (it was built with a larger value of l), but SM 11 86 can be more efficient when high accuracy is desired and the time-dependent functions constitute a small perturbation of the remaining autonomous problem, because in that limit the method behaves as an eighth-order method.