On the Linear Stability of Splitting Methods

A comprehensive linear stability analysis of splitting methods is carried out by means of a 2 × 2 matrix K(x) with polynomial entries (the stability matrix) and the stability polynomial p(x) (the trace of K(x) divided by two). An algorithm is provided for determining the coefficients of all possible time-reversible splitting schemes for a prescribed stability polynomial. It is shown that p(x) carries essentially all the information needed to construct processed splitting methods for numerically approximating the evolution of linear systems. By conveniently selecting the stability polynomial, new integrators with processing for linear equations are built which are orders of magnitude more efficient than other algorithms previously available.


Introduction
Splitting methods are frequently used in practice to numerically integrate differential equations. They constitute a natural choice when the vector field associated with the differential equation can be split into a sum of two or more parts that are each simpler to integrate than the original problem. Suppose we have an ordinary differential equation (ODE) of the form

    x′ = f(x) = f^[A](x) + f^[B](x),   (1.1)

such that the h-flows ϕ_h^[A] and ϕ_h^[B] corresponding to f^[A] and f^[B], respectively, can be either exactly computed or accurately approximated. Then the exact flow ϕ_h of equation (1.1) can be approximated by a composition of the flows of the parts,

    ψ_h = ϕ_{b_k h}^[B] ∘ ϕ_{a_k h}^[A] ∘ ··· ∘ ϕ_{b_1 h}^[B] ∘ ϕ_{a_1 h}^[A],   (1.2)

where the 2k coefficients a_i, b_i are chosen so as to ensure that ψ_h is a suitable approximation to the exact flow ϕ_h, typically in such a way that ψ_h = ϕ_h + O(h^{p+1}); the numerical integrator ψ_h is then said to be accurate to order p in the time step h.
Perhaps the most frequently used splitting methods are

    ψ_{h,1} = ϕ_h^[B] ∘ ϕ_h^[A]   and   ψ_{h,2} = ϕ_{h/2}^[A] ∘ ϕ_h^[B] ∘ ϕ_{h/2}^[A],   (1.3)

corresponding to the first order Lie-Trotter method and the second order leapfrog (also called Störmer, Verlet, Strang splitting, etc.) method, respectively. Their straightforward implementation and low storage requirements have made them common tools for the numerical treatment of ODEs and partial differential equations (PDEs). In addition, they are very fast if the system is split into easily computable parts. Splitting schemes have proved to be especially useful in the context of geometric integration, since they preserve structural features of the flow of f as long as the basic methods ϕ_h^[A] and ϕ_h^[B] also share these properties. Important examples include symplecticity, volume preservation, symmetries, etc. In this sense, schemes (1.3) can be considered as geometric integrators, and as such, they show smaller error growth than standard integrators. It is not surprising, then, that a systematic search for splitting methods of higher order of accuracy has taken place during the last two decades, and a large number of them exist in the literature (see [11,16,23,24,27] and references therein) which have been specifically designed for different families of problems.
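The two schemes of (1.3) can be sketched for the harmonic oscillator q′ = p, p′ = −q as follows (a minimal Python illustration of ours, not taken from the paper; all function names are hypothetical), verifying the orders 1 and 2 numerically:

```python
import math

def flow_A(q, p, h):
    # exact h-flow of the A-part q' = p (a drift)
    return q + h * p, p

def flow_B(q, p, h):
    # exact h-flow of the B-part p' = -q (a kick)
    return q, p - h * q

def lie_trotter(q, p, h):
    # first-order scheme psi_{h,1} = phi^[B]_h o phi^[A]_h
    q, p = flow_A(q, p, h)
    return flow_B(q, p, h)

def leapfrog(q, p, h):
    # second-order scheme psi_{h,2} = phi^[A]_{h/2} o phi^[B]_h o phi^[A]_{h/2}
    q, p = flow_A(q, p, h / 2)
    q, p = flow_B(q, p, h)
    return flow_A(q, p, h / 2)

def global_error(step, h, t_end=1.0):
    # integrate q'' = -q with q(0) = 1, p(0) = 0 and compare with
    # the exact solution (cos t, -sin t)
    q, p = 1.0, 0.0
    for _ in range(round(t_end / h)):
        q, p = step(q, p, h)
    return math.hypot(q - math.cos(t_end), p + math.sin(t_end))

# halving h divides the error by ~2 (order 1) and ~4 (order 2)
e1, e1h = global_error(lie_trotter, 0.01), global_error(lie_trotter, 0.005)
e2, e2h = global_error(leapfrog, 0.01), global_error(leapfrog, 0.005)
```

The observed error ratios under step halving confirm the stated orders of accuracy.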
Another characteristic of a numerical integration method for differential equations is stability. Roughly speaking, the numerical solution provided by a stable numerical integrator does not tend to infinity when the exact solution is bounded. Although important, this feature has received considerably less attention in the specific case of splitting methods. In fact, in the process of building high order schemes, stability is not usually taken into account, ending up with methods which are rather less stable than ψ_{h,1} and ψ_{h,2}, which, as is well known, attain the highest possible stability (at a given cost) among all methods given by (1.2) [8].
Usually, the k-stage scheme (1.2) is unstable for h > kL for some L > 0, so that stability imposes restrictions on the step size. In this sense, the length L of the stability interval should be as large as possible for a given order and a high order method possessing a tiny L could be useless in practice.
As an illustration, the application of schemes (1.3) to (1.5) provides

    ψ_{h,1}:  K(x) = ( 1, x ; −x, 1 − x² ),      ψ_{h,2}:  K(x) = ( 1 − x²/2, x − x³/4 ; −x, 1 − x²/2 ),   (1.6)

respectively, where x = hλ. Now, as both matrices have unit determinant, one concludes that ψ_{h,1} and ψ_{h,2} are stable if |tr K(x)| = |2 − x²| < 2, or |x| < 2, and thus L = 2 is the greatest value one can obtain for any splitting method with k = 1. With respect to ψ_{h,2}, although k = 2, it requires in practice only one evaluation per step, as one ϕ_{h/2}^[A] can be concatenated with the next step.
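The stability claim for ψ_{h,2} is easy to check numerically. The sketch below (our own illustration; the shear factors are written for the split q′ = λp, p′ = −λq) builds the leapfrog stability matrix of (1.6) and compares the trace criterion |2 − x²| < 2 with boundedness of the powers K(x)^n:

```python
import numpy as np

def K_leapfrog(x):
    # stability matrix of psi_{h,2} for the harmonic oscillator, x = h*lambda:
    # product of the shears generated by the A- and B-parts (cf. (1.6))
    A = lambda s: np.array([[1.0, s], [0.0, 1.0]])
    B = lambda s: np.array([[1.0, 0.0], [-s, 1.0]])
    return A(x / 2) @ B(x) @ A(x / 2)

# unit determinant and trace 2 - x^2, as stated in the text
assert abs(np.linalg.det(K_leapfrog(1.3)) - 1.0) < 1e-12
assert abs(np.trace(K_leapfrog(1.3)) - (2 - 1.3**2)) < 1e-12

def bounded_powers(x, n=500):
    # stability in the sense of the text: K(x)^n bounded independently of n
    return np.max(np.abs(np.linalg.matrix_power(K_leapfrog(x), n))) < 1e2
```

For |x| < 2 the powers stay bounded, while just outside the stability interval they blow up.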
One of the aims of this paper is precisely to analyze the composition (1.2) to get high order schemes with relatively large stability intervals (as close as possible to those of (1.3)) with the ultimate goal of improving the performance of the existing splitting methods.
The analysis carried out here also allows us, in a natural way, to construct new methods of any order and with large stability intervals for the harmonic oscillator. They have the special structure

    ψ̂_h = π_h ∘ ψ_h ∘ π_h^{−1},   (1.7)

and, as such, could be considered as generalizations of ψ_{h,2}. In (1.7), ψ_h is referred to as the kernel and π_h^{−1}, π_h are the pre- and postprocessors or correctors. Application of ψ̂_h over n integration steps with constant step size h gives

    ψ̂_h^n = π_h ∘ ψ_h^n ∘ π_h^{−1}.

The cost of ψ̂_h is measured by the cost of the kernel, because π_h^{−1} is computed only once to start the integration and π_h, in most cases, is only required occasionally. One says that ψ_h has effective order p if there exist maps π_h^{−1}, π_h such that ψ̂_h has order p. This processing technique was first introduced by Butcher [7] and later received renewed attention in the context of geometric integration [26,29,21,18,19,4,5,6,1,2]. The stability analysis of such a class of methods is mainly concerned with the kernel, since the processor does not affect the stability (it is just a change of variables). Here we consider a composition (1.2) for the kernel.
Obviously, there is not much point in designing new and somewhat sophisticated numerical methods for the harmonic oscillator (1.5). It turns out, however, that splitting methods especially tailored for this system can be of great interest for the numerical treatment of non-trivial problems appearing, for instance, in quantum mechanics, electrodynamics and structural dynamics [9,14,15,30]. We should also remark that many PDEs, once spatially discretized, give rise to large systems of coupled harmonic oscillators where the techniques and methods proposed in this work directly apply.
The paper is organized as follows. In section 2 we introduce and characterize the stability matrix of a splitting method as applied to the one-dimensional harmonic oscillator. In particular, we show that any splitting method is uniquely determined by its stability matrix. In section 3 the analysis is generalized to certain classes of linear systems relevant in applications. In section 4, by proposing different families of stability polynomials and stability matrices, we get particular splitting methods of the form (1.7) with high accuracy and enlarged stability domain for linear Hamiltonian systems. Finally, the new methods are illustrated in section 5 on some numerical examples.

Analysis of splitting methods applied to the harmonic oscillator
When applying the splitting method (1.2) to the one-dimensional harmonic oscillator (1.5) we approximate the exact 2 × 2 solution matrix

    O(x) = ( cos x, sin x ; −sin x, cos x ),   x = hλ,   (2.1)

by the product

    K(x) = ( 1, 0 ; −b_k x, 1 ) ( 1, a_k x ; 0, 1 ) ··· ( 1, 0 ; −b_1 x, 1 ) ( 1, a_1 x ; 0, 1 ),   (2.2)

or alternatively, in terms of the matrices A and B introduced in (1.5),

    K(x) = e^{b_k x B} e^{a_k x A} ··· e^{b_1 x B} e^{a_1 x A},   (2.3)

with appropriate coefficients a_i, b_i ∈ R. The matrix K(x) (the stability matrix of the splitting method) has the form

    K(x) = ( K_1(x), K_2(x) ; K_3(x), K_4(x) )   (2.4)

with polynomial elements

    K_i(x) = Σ_j k_{i,j} x^j,   i = 1, …, 4.   (2.5)

Here the k_{i,j} are homogeneous polynomials in the parameters a_i, b_i. Since each individual matrix in the composition (2.2) has unit determinant, it must hold that det K(x) ≡ 1. It is not difficult to check the degree relations (2.6) satisfied by the degrees d(K_i) of the polynomials K_i(x) (i = 1, …, 4). As a matter of fact, the linear stability analysis of splitting methods is made easier if one considers a generic 2 × 2 matrix (2.4) such that K_1(x) and K_4(x) (respectively, K_2(x) and K_3(x)) are even (respectively, odd) polynomials in x and

    det K(x) ≡ 1.   (2.7)

In addition, since one is interested in consistent methods, we also require that

    K_1(0) = K_4(0) = 1,   (2.8)
    K_2′(0) = 1,   K_3′(0) = −1.   (2.9)

In the application of a splitting method to the harmonic oscillator, an essential role is played by the so-called stability polynomial, defined as

    p(x) = (1/2) tr K(x) = (1/2) (K_1(x) + K_4(x)).

Clearly, the stability polynomial of a consistent splitting method is an even polynomial p(x) satisfying

    p(0) = 1,   p″(0) = −1.   (2.10)

In particular, for schemes (1.3) one has, from (1.6), p(x) = 1 − x²/2.
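The polynomial entries of K(x) can be computed directly with polynomial arithmetic. The following sketch (ours, under the sign conventions written in the comments: the A-part generates the upper shear ( 1, a_i x ; 0, 1 ), the B-part the lower shear ( 1, 0 ; −b_i x, 1 ), and the a_1-flow acts first) builds K(x) for the leapfrog coefficients and checks det K(x) ≡ 1 and p(x) = 1 − x²/2:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def stability_matrix(a, b):
    """Stability matrix K(x) of the composition (2.2) as a 2x2 matrix of
    polynomials in x; a, b are the splitting coefficients a_i, b_i."""
    x = P([0.0, 1.0])
    one, zero = P([1.0]), P([0.0])
    K = [[one, zero], [zero, one]]
    for ai, bi in zip(a, b):
        # left-multiply by the stage factors: first the A-shear, then the B-shear
        for M in ([[one, ai * x], [zero, one]],
                  [[one, zero], [-bi * x, one]]):
            K = [[M[i][0] * K[0][j] + M[i][1] * K[1][j]
                  for j in (0, 1)] for i in (0, 1)]
    return K

# leapfrog: a = (1/2, 1/2), b = (1, 0)
K = stability_matrix([0.5, 0.5], [1.0, 0.0])
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
p = 0.5 * (K[0][0] + K[1][1])      # stability polynomial p(x)
```

The coefficients of p confirm p(x) = 1 − x²/2, and the determinant reduces to the constant polynomial 1.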

From the stability matrix to the splitting method
Obviously, analyzing splitting methods through a generic stability matrix is useful as long as one is able to factorize K(x) as (2.2) and determine uniquely the coefficients a_i, b_i of the splitting method from a particular K(x) with polynomial entries. Only in such circumstances could one say that any splitting method of the form (1.2) is completely characterized by the result of applying one step of the method to the harmonic oscillator. What the following result shows is that, under certain assumptions, each matrix (2.4) can be decomposed uniquely in a certain form that includes (2.2) as a particular case.
Proposition 2.1 Given a matrix (2.4) such that K_1(x), K_4(x) are even polynomials and K_2(x), K_3(x) are odd polynomials verifying conditions (2.7)-(2.9), there exists a unique decomposition of K(x) of the form

    K(x) = ( 1, 0 ; −B_m(x), 1 ) ( 1, A_m(x) ; 0, 1 ) ··· ( 1, 0 ; −B_1(x), 1 ) ( 1, A_1(x) ; 0, 1 ),   (2.11)

where the A_j(x), B_j(x) are odd polynomials.

Proof: We prove the result by induction on the sum of the degrees of the two polynomials K_1(x) and K_4(x). In the trivial case, where the sum of their degrees is 0, it holds by assumption that K_1(x) = K_4(x) = 1, and the decomposition is immediate. Otherwise, two possibilities occur. If d(K_2) > d(K_3), application of polynomial division uniquely determines the odd polynomials A_1(x) and K̃_2(x) such that

    K_2(x) = K_1(x) A_1(x) + K̃_2(x),

where d(K_2) = d(K_1) + d(A_1) and d(K̃_2) < d(K_1). Now, we define the even polynomial K̃_4(x) = K_4(x) − K_3(x)A_1(x), so that

    K(x) = ( K_1(x), K̃_2(x) ; K_3(x), K̃_4(x) ) ( 1, A_1(x) ; 0, 1 ).

Clearly, K̃_4(0) = 1, the sum of the degrees of the diagonal entries decreases, and the required result follows by induction.
If instead d(K_3) > d(K_2), one similarly obtains the decomposition

    K(x) = ( K̃_1(x), K_2(x) ; K̃_3(x), K_4(x) ) ( 1, 0 ; −B_1(x), 1 ),

where the odd polynomials B_1(x) and K̃_3(x) are determined from the polynomial division of K_3(x) by K_4(x). Again the sum of the degrees of the diagonal entries decreases, and the required result follows by induction.
This completes the proof. □

Remarks:
1. Notice from the proof of Proposition 2.1 that K̃_4(x) (respectively, K̃_1(x)) can also be obtained as the remainder of the polynomial division of K_4(x) by K_3(x) (respectively, K_1(x) by K_2(x)), which (in exact arithmetic) must give the same quotient A_1(x) (resp. B_1(x)) due to the fact that det K(x) ≡ 1.
2. Obviously, the decomposition (2.2) corresponds to (2.11) with A_j(x) = a_j x and B_j(x) = b_j x for j = 1, …, m, and m = k.
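The division-based algorithm in the proof can be rendered in a few lines (our own Python sketch with hypothetical helper names, exercised only on the leapfrog kernel; shear factors are peeled off from the right by `divmod` on `numpy` polynomials):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def peel(K, kind):
    # remove the rightmost shear factor of K(x) by polynomial division
    # (the constructive step in the proof of Proposition 2.1)
    K1, K2, K3, K4 = K[0][0], K[0][1], K[1][0], K[1][1]
    if kind == "A":   # factor ( 1, S ; 0, 1 ):  K2 = K1*S + K2~,  K4~ = K4 - K3*S
        S, r2 = divmod(K2, K1)
        return S, [[K1, r2], [K3, K4 - K3 * S]]
    else:             # factor ( 1, 0 ; S, 1 ):  K1 = K2*S + K1~,  K3~ = K3 - K4*S
        S, r1 = divmod(K1, K2)
        return S, [[r1, K2], [K3 - K4 * S, K4]]

def split_coefficients(K, steps=10):
    """Recover shear parameters right to left; lower shears carry -b_i x,
    matching the sign convention of (2.2)."""
    a, b, kind = [], [], "A"
    for _ in range(steps):
        if all(K[i][j].trim(1e-9).degree() == 0 for i in (0, 1) for j in (0, 1)):
            break                                  # K has been reduced to the identity
        S, K = peel(K, kind)
        c = S.trim(1e-9).coef
        coeff = c[1] if len(c) > 1 else 0.0
        (a if kind == "A" else b).append(coeff if kind == "A" else -coeff)
        kind = "B" if kind == "A" else "A"
    return a[::-1], b[::-1]

# build the leapfrog stability matrix K(x) = A(x/2) B(x) A(x/2) and factorize it
x, one, zero = P([0.0, 1.0]), P([1.0]), P([0.0])
shear_A = lambda s: [[one, s], [zero, one]]
shear_B = lambda s: [[one, zero], [s, one]]
mul = lambda M, K: [[M[i][0] * K[0][j] + M[i][1] * K[1][j]
                     for j in (0, 1)] for i in (0, 1)]
K = mul(shear_A(0.5 * x), mul(shear_B(-x), shear_A(0.5 * x)))
a, b = split_coefficients(K)
```

On this example the algorithm recovers the leapfrog coefficients a = (1/2, 1/2), b = (1).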
Example: Let us illustrate this result with two different stability matrices, leading to different types of decomposition. Consider first the matrix K(x) given in (2.13), which satisfies conditions (2.6)-(2.9). By applying the constructive proof of Proposition 2.1 it is straightforward to check that K(x) can be decomposed as in (2.14), and this is in fact the only decomposition of the form (2.2) for the matrix (2.13). As a second example, consider now the matrix K̄(x) of (2.15), with the same stability polynomial as K(x). In this case, although the degrees of the entries coincide with those of (2.13) (and thus condition (2.6) holds), K̄(x) cannot be decomposed in the form (2.2). Instead it admits the unique decomposition (2.16).

From the stability polynomial to the splitting method
We have seen that, under certain circumstances, the stability matrix K(x) allows us to obtain the composition (2.2) in a unique way, and therefore the values of the coefficients of the splitting method. The question we analyze now is whether something similar can be done starting from the stability polynomial. We show that, given a stability polynomial p(x), there exists a finite number of different time-reversible splitting schemes having p(x) as their stability polynomial.
To begin with, suppose one has an even polynomial p(x) satisfying (2.10). Obviously, there exist infinitely many splitting methods having p(x) as their stability polynomial. Nevertheless, for each arbitrary even polynomial r(x) with r(0) = 0 there exists a finite number of consistent splitting methods with stability matrix (2.4) verifying K_1(x) = p(x) + r(x) and K_4(x) = p(x) − r(x), or equivalently r(x) = (K_1(x) − K_4(x))/2. The remaining entries of (2.4) are obtained by considering all possible decompositions of the polynomial p(x)² − r(x)² − 1 (= K_1(x)K_4(x) − 1) as the product of two odd polynomials K_2(x) and K_3(x) satisfying (2.9). Proposition 2.1 then gives a decomposition of the form (2.11) for each choice of the stability matrix K(x). Finally, all possible consistent splitting methods corresponding to the polynomials p(x) and r(x) are obtained by selecting, among the finite number of different decompositions of the form (2.11) obtained in that way, those that are actually of the form (2.2).
With the simplest choice r(x) ≡ 0 one has K_1(x) ≡ K_4(x). In that case the stability matrix verifies the identity K(x)^{−1} ≡ K(−x), and this is precisely the characterization of a time-reversible method [11]. In terms of the composition (2.2) it corresponds to choosing the coefficients a_i, b_i so that the composition is left-right palindromic. In this way it is possible to construct explicitly all consistent time-reversible splitting methods with a prescribed stability polynomial p(x) satisfying (2.10).
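The characterization K(x)^{−1} ≡ K(−x) is easily tested numerically; in the sketch below (ours, with the same shear conventions as before) the palindromic leapfrog composition passes the test while the non-palindromic Lie-Trotter composition fails it:

```python
import numpy as np

def K_matrix(a, b, x):
    # stability matrix of the composition with coefficients a_i, b_i at a given x
    K = np.eye(2)
    for ai, bi in zip(a, b):
        K = K @ np.array([[1.0, ai * x], [0.0, 1.0]])   # A-shear
        K = K @ np.array([[1.0, 0.0], [-bi * x, 1.0]])  # B-shear
    return K

def time_reversible(a, b, x=0.7):
    # K(x)^{-1} = K(-x)  <=>  K(x) K(-x) = I
    return np.allclose(K_matrix(a, b, x) @ K_matrix(a, b, -x), np.eye(2))
```

For instance, `time_reversible([0.5, 0.5], [1.0, 0.0])` (leapfrog) holds, whereas `time_reversible([1.0], [1.0])` (Lie-Trotter) does not.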
Example: Let us consider the stability polynomial p(x) of K(x) and K̄(x) in (2.13) and (2.15), respectively, given in (2.17). There are six different ways of factorizing p(x)² − 1 as a product of two odd polynomials K_2(x) and K_3(x): (i) one choice gives the stability matrix (2.13); (ii) a second choice gives the stability matrix (2.15) (with unique decomposition (2.16), and thus not corresponding to a splitting method of the form (1.2)); and (iii) K_3(x) = −x, which leads to the stability matrix (2.18). Application of the algorithm used in the proof of Proposition 2.1 allows us to factorize the matrix (2.18), which again does not correspond to a splitting method of the form (1.2), as expected, because it does not satisfy conditions (2.6). The remaining three stability matrices are obtained by interchanging the roles of the polynomials K_2(x) and −K_3(x). □

Stability
According to the notion of stability given in the introduction, a splitting method is stable when applied to the harmonic oscillator if K(x)^n can be bounded independently of n ≥ 1. As is well known, if the method is stable for a given x ∈ R, then |p(x)| ≤ 1. The converse is not in general true: if |p(x)| = 1 and K(x) is not diagonalizable, then K(x)^n grows linearly with n, and thus K(x) is linearly unstable. The following proposition gives a useful characterization of the stability of K(x).
Proposition 2.2 Let K(x) be a 2 × 2 matrix with det K(x) = 1, and p(x) = (1/2) tr K(x). Then K(x) is stable for a given x ∈ R if and only if one of the following two conditions holds: (i) |p(x)| < 1; (ii) K(x) = ±I.

Proof: This is in fact an elementary consequence of the symplecticity of the matrix K(x). In more detail, let λ_1(x) and λ_2(x) be the eigenvalues of K(x), so that λ_1(x)λ_2(x) = 1. If K(x) is diagonalizable with eigenvalues of modulus 1, then |p(x)| ≤ 1 and

    λ_{1,2}(x) = e^{±iΦ(x)},   (2.21)

where Φ(x) = arccos(p(x)) and

    S(x) = ( p(x), sin Φ(x) ; −sin Φ(x), p(x) ).   (2.20)

In consequence, K(x) is similar to the matrix S(x) given in (2.20), as S(x) is also diagonalizable with eigenvalues (2.21). Finally, the second condition clearly implies that the matrices S(x) and K(x) are both stable. □

Observe that S(x) can be considered as the stability matrix of a time-reversible method. In consequence, the above proposition allows us to conclude that for sufficiently small values of x each matrix K(x) with det K(x) = 1 is similar to a time-reversible matrix, and this provides an additional justification for taking into account only time-reversible splitting schemes.
In the stability analysis, the real parameters x_* and x^* defined next play a crucial role.

Definition 2.3 Given a 2 × 2 matrix K(x) depending on a real parameter x such that det K(x) ≡ 1, we denote by x_* the largest non-negative real number such that K(x) is stable for all x ∈ [−x_*, x_*]. We say that x_* is the stability threshold of K(x), and that [−x_*, x_*] is the stability interval of K(x).

Definition 2.4
Let p(x) be an even polynomial depending on a real parameter x with p(0) = 1. We denote by x^* the largest positive real number such that |p(x)| ≤ 1 for all x ∈ [0, x^*].

Remarks:
1. From the proof of Proposition 2.2, it is clear that (−x^*, x^*) is the largest interval including 0 such that K(x) has eigenvalues of modulus 1.
2. If p″(0) < 0, then p(x)² − 1 ≤ 0 for sufficiently small |x|, and in that case x^* is the smallest positive real zero with odd multiplicity of the polynomial p(x)² − 1 (for this x^*, the sign of the polynomial p(x)² − 1 does not change in the interval x ∈ (−x^*, x^*), but it does when crossing x = ±x^*).
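Definition 2.4 suggests a direct numerical computation of x^*. The sketch below (our own, with hypothetical helper names) estimates x^* by a fine scan for the leapfrog polynomial and for the order-4 Taylor truncation of cos x (taken here purely as an example polynomial), illustrating that raising the degree can enlarge the stability interval from 2 to 2√3:

```python
def threshold(p, x_max=10.0, n=100000):
    # numerical version of Definition 2.4: the largest x^* with |p(x)| <= 1
    # on [0, x^*], located on a grid of spacing x_max / n
    dx = x_max / n
    for k in range(1, n + 1):
        if abs(p(k * dx)) > 1.0 + 1e-12:
            return (k - 1) * dx
    return x_max

p2 = lambda x: 1 - x**2 / 2                  # leapfrog polynomial, x^* = 2
p4 = lambda x: 1 - x**2 / 2 + x**4 / 24      # order-4 truncation of cos, x^* = 2*sqrt(3)
```

Both computed thresholds agree with the exact values 2 and 2√3 up to grid resolution.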
If p(x) is the stability polynomial of K(x) then it is clear that x_* ≤ x^*. Next we analyze which conditions have to be imposed on K(x) to get the optimal stability threshold, i.e., to ensure that x_* = x^*.

Proposition 2.5 Assume that a 2 × 2 matrix K(x) depending on a real parameter x is such that det K(x) ≡ 1, K_1(x) and K_4(x) are even polynomials, K_2(x) and K_3(x) are odd polynomials, K_1(0) = K_4(0) = 1, and K_2′(0)K_3′(0) < 0. Let p(x) = (1/2) tr K(x) be the corresponding stability polynomial and let 0 = x_0 < x_1 < ··· < x_l be the real zeros with even multiplicity of the polynomial p(x)² − 1 in [0, x^*]. Then x_* = x^* provided that

    K_2(x_j) = K_3(x_j) = 0   (2.22)

for each j = 1, …, l. Otherwise, x_* is the smallest x_j that violates condition (2.22).
Proof: On the one hand, by differentiating det K(x) ≡ 1 twice, replacing x by 0 and taking into account that K_1(0) = K_4(0) = 1 and the parity of the entries, one obtains the behaviour at x_0 = 0. On the other hand, at each x_j with j ≥ 1 one has p(x_j) = ±1, and by Proposition 2.2 the matrix K(x_j) is stable precisely when K(x_j) = ±I, that is, when K_2(x_j) = K_3(x_j) = 0. □

Example: Consider the polynomial p(x) given in (2.17). Then l = 1, x_1 = 2√2, and x^* = 4. For the stability matrix (2.13) we have K_2(x_1) = K_3(x_1) = 0, and thus the corresponding stability threshold is x_* = x^* = 4. For the stability matrices (2.15) and (2.18), condition (2.22) fails at x_1, so that x_* = x_1 = 2√2 < x^*.

The above proposition allows us to easily build a stability matrix corresponding to a time-reversible splitting method with the optimal stability threshold:

Proposition 2.6 Let p(x) be an even polynomial satisfying (2.10). Then there exists a stability matrix of the form (2.4), with K_1(x) = K_4(x) = p(x), for which x_* = x^*.
Proof: Let 0 = x_0 < x_1 < ··· < x_l be the real zeros with even multiplicity of the polynomial p(x)² − 1. Since K_1(x) = K_4(x) = p(x), the condition det K(x) ≡ 1 amounts to K_2(x)K_3(x) = p(x)² − 1. The polynomial p(x)² − 1 can then be decomposed as a product of two odd polynomials K_2(x), K_3(x) satisfying (2.9) and such that each x_j, j = 1, …, l, is a zero of both K_2(x) and K_3(x); it suffices to distribute the even-multiplicity factors (x² − x_j²) evenly between K_2(x) and K_3(x). Condition (2.22) then holds, and Proposition 2.5 gives x_* = x^*. This completes the proof. □

Accuracy
As mentioned in the introduction, an accurate high order method can be useless in practice if it possesses a tiny stability domain. Similarly, a very stable but poorly accurate method is also of no interest in practical applications. In consequence, when designing new integration schemes of the form (2.2) the goal is to achieve the right balance between accuracy and stability. This, of course, is not an easy task in general, although a partial analysis has been done for the harmonic oscillator [9,22]. Obviously, the accuracy of a splitting method depends on the difference between the matrix K(x) and the exact solution O(x). If this difference is small, then the eigenvalues of K(x) must be close to the eigenvalues of O(x). According to Proposition 2.2, the stability polynomial p(x) must then be an approximation to cos(x); in particular, for a splitting method of order 2n it necessarily holds that p(x) = cos(x) + O(x^{2n+2}) as x → 0. It is worth stressing that if one is interested in processed splitting methods of the form (1.7), then, according to Proposition 2.2, their accuracy when applied to the harmonic oscillator depends only on the quality of the approximation p(x) ≈ cos(x) of their stability polynomial p(x) in the stability interval [−x_*, x_*].
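For a processed method the relevant error measure is therefore the angular error Φ(x) − x with Φ(x) = arccos(p(x)). A one-line check (ours) confirms, for the leapfrog polynomial, that this error behaves like O(x³), consistent with effective order 2:

```python
import math

def phase_error(p, x):
    # Phi(x) - x, with Phi = arccos(p(x)): the angular error per step that
    # remains after processing (cf. Proposition 2.2)
    return math.acos(p(x)) - x

p2 = lambda x: 1 - x * x / 2      # leapfrog: p(x) = cos(x) + O(x^4)
# an O(x^3) error is divided by ~2^3 = 8 when x is halved
ratio = phase_error(p2, 0.2) / phase_error(p2, 0.1)
```

Indeed, arccos(1 − x²/2) = x + x³/24 + ···, so the leading phase error per step is x³/24.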

Geometric properties
Consider the application of a consistent splitting method (with stability matrix K(x) and stability polynomial p(x)) to the harmonic oscillator (1.4) split as (1.5). According to Proposition 2.2, if hλ ∈ (−x_*, x_*), one step of the method is conjugate to the exact h-flow of a modified harmonic oscillator with frequency Φ(hλ)/h. In other words, there exists a well-defined matrix P(x) such that P(x)K(x)P(x)^{−1} = S(x), where S(x) is given in (2.20). In the particular case of time-reversible methods (i.e., satisfying K_1(x) = K_4(x) = p(x)), P(x) can be chosen diagonal. Obviously, these expressions are only valid when K_2(x) ≠ 0 ≠ K_3(x). The exact solution O(x) is an orthogonal matrix. Alternatively, the complex vector u = q + i p evolves through a unitary operator, i.e. u(h) = U(x)u(0) with U(x) a unitary matrix (in fact, u verifies the differential equation i u′ = λu). Although a splitting method of the form (2.2) does not preserve the unitarity of U(x) or, equivalently, the orthogonality of O(x), the previous considerations show that the average relative errors due to the lack of preservation of unitarity or orthogonality do not grow with time, since the scheme is conjugate (when |hλ| < x_*) to orthogonal or unitary methods.

Definition 2.7 We denote by r* the radius of convergence of the expansion in powers of x of Φ(x) = arccos(p(x)) = arcsin(√(1 − p(x)²)), that is, the minimum of the modulus of the (not necessarily real) zeroes of 1 − p(x)² with odd multiplicity.
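For a time-reversible kernel the diagonal change of variables P(x) can be written down explicitly. The sketch below (ours; the formula g² = sin Φ(x)/K_2(x) is derived from the requirement P K P^{−1} = S(x), and is valid when K_2(x) > 0 and |p(x)| < 1) verifies that one processed leapfrog step is an exact rotation by Φ(x):

```python
import math

def K_leapfrog(x):
    # time-reversible stability matrix (K1 = K4 = p) of the leapfrog scheme
    p = 1 - x * x / 2
    return [[p, x - x**3 / 4], [-x, p]]

def gamma(x):
    # diagonal processor P(x) = diag(g, 1/g): requiring g^2 K2 = sin(Phi)
    # gives P K P^{-1} = S(x) of (2.20)
    K = K_leapfrog(x)
    return math.sqrt(math.sqrt(1 - K[0][0] ** 2) / K[0][1])

def processed_step(q, p, x):
    g = gamma(x)
    q, p = q / g, p * g                        # preprocessor pi^{-1}
    K = K_leapfrog(x)
    q, p = K[0][0] * q + K[0][1] * p, K[1][0] * q + K[1][1] * p
    return q * g, p / g                        # postprocessor pi

# one processed step applied to (1, 0) must give (cos Phi, -sin Phi)
x = 1.0
Phi = math.acos(1 - x * x / 2)
q1, p1 = processed_step(1.0, 0.0, x)
```

In the processed variables the Euclidean norm is therefore preserved exactly, which is the mechanism behind the bounded unitarity errors mentioned above.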

Application of splitting methods to linear systems
One could reasonably argue that there is not much interest in designing new splitting methods with high accuracy and enlarged stability for the numerical integration of the simple harmonic oscillator. There are, however, at least two different issues to be taken into account regarding this assertion. First, it is unlikely that a splitting method applied to an arbitrary nonlinear system provides good efficiency if it performs poorly when applied to the harmonic oscillator. In particular, good stability and accuracy for the harmonic oscillator is a necessary condition for a good performance when applied to systems that can be considered as perturbations of harmonic oscillators. Second, there are several PDEs modeling highly relevant physical phenomena that, once spatially discretized, give rise to systems of coupled harmonic oscillators where the previous analysis can be used to build accurate and stable algorithms for their numerical treatment. In the sequel we briefly consider four classes of linear systems for which the results in Section 2 are of interest.
(i) As a first instance, we consider the time-dependent Schrödinger equation (3.1), where ψ(x, t) : R^D × R → C is the wave function associated with the system. A common procedure for numerically solving this problem consists in first taking a discrete spatial representation of the wave function. For simplicity, let us consider the one-dimensional case on a given interval x ∈ [x_0, x_d] (with ψ(x_0, t) = ψ(x_d, t) = 0, or periodic boundary conditions). The interval is split into d parts of length ∆x = (x_d − x_0)/d and the vector u = (u_0, …, u_{d−1})^T ∈ C^d is formed, with u_j = ψ(x_j, t) and x_j = x_0 + j∆x, j = 0, 1, …, d − 1. The partial differential equation (3.1) is then replaced by the d-dimensional linear ODE (3.2), where H ∈ R^{d×d} represents the (real symmetric) matrix associated with the Hamiltonian. Complex vectors can be avoided by writing u = q + i p, with q, p ∈ R^d. Equation (3.2) is then equivalent to [9,10,22,32,3,17]

    q′ = Hp,   p′ = −Hq.   (3.3)

Since by assumption H is a symmetric matrix, the solution operator of (3.3) is orthogonal. Equivalently, the vector u associated with the wave function evolves through a unitary operator, and the norm of u is preserved. Then, as we have seen in section 2.5, when a splitting method is applied, the averaged errors in the preservation of unitarity do not grow with time (a fact already noticed numerically in [28]).
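The behaviour just described is easy to reproduce for system (3.3). In the sketch below (ours; the symmetric matrix H is a random test matrix, an assumption for illustration only) the leapfrog scheme keeps the norm of (q, p) bounded when hρ(H) lies inside the stability interval and blows up outside it:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
H = rng.standard_normal((d, d))
H = (H + H.T) / 2                      # real symmetric test "Hamiltonian"
rho = max(abs(np.linalg.eigvalsh(H)))  # spectral radius of H

def strang_step(q, p, h):
    # leapfrog applied to q' = Hp, p' = -Hq (equation (3.3))
    q = q + (h / 2) * (H @ p)
    p = p - h * (H @ q)
    q = q + (h / 2) * (H @ p)
    return q, p

def norm_ratio(h, n):
    # growth of the Euclidean norm of (q, p) over n steps
    q = rng.standard_normal(d)
    p = rng.standard_normal(d)
    s0 = np.sqrt(np.dot(q, q) + np.dot(p, p))
    for _ in range(n):
        q, p = strang_step(q, p, h)
    return np.sqrt(np.dot(q, q) + np.dot(p, p)) / s0
```

Diagonalizing H decouples (3.3) into harmonic oscillators with frequencies the eigenvalues of H, so the scalar stability threshold x_* = 2 of leapfrog translates into the condition hρ(H) < 2.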
Let us analyze this issue in more detail. Clearly, applying a splitting method to (3.3) is equivalent to applying it simultaneously to a collection of harmonic oscillators whose frequencies are the eigenvalues of H. If |h|ρ(H) < min(r*, x_*), where ρ(H) denotes the spectral radius of H, one step of the method is conjugate to the exact h-flow of the modified equation

    q′ = H̃(h)p,   p′ = −H̃(h)q,   with   H̃(h) = H + h² φ_3 H³ + h⁴ φ_5 H⁵ + ···.   (3.6)

Notice that H̃(h) is obviously symmetric, provided that H is symmetric.
(ii) Another system which can also be decoupled into a number of one-dimensional harmonic oscillators is

    y″ + Ky = 0,   (3.7)

where y ∈ R^d and the matrix K ∈ R^{d×d} is diagonalizable with real positive eigenvalues (K can be, in particular, a real symmetric positive definite matrix). Systems of this form arise after a semidiscretization of some hyperbolic PDEs whose linear part has to be efficiently computed (see Chapter XIII of [11] and references therein).
In this case, one can use a change of variables of the form P = Λ^{−1} U y′, Q = U y (where Λ is the diagonal matrix such that K = U^{−1} Λ² U) to show that the application of one step of size h of a splitting method to the system (3.7) is, provided that |h| √ρ(K) < x_*, conjugate to the exact h-flow of the modified system y″ + K̃(h)y = 0, where K̃(h) is a perturbation of K defined in terms of Φ. In particular, K̃(h) = K + h² φ_3 K² + h⁴ φ_5 K³ + ··· whenever |h| √ρ(K) < min(r*, x_*).
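The threshold |h|√ρ(K) < x_* for system (3.7) can be checked directly. A short sketch (ours; K is a random symmetric positive definite test matrix, an assumption for illustration) applies the leapfrog scheme to y″ + Ky = 0 in first-order form and locates the blow-up at h√ρ(K) = 2:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
A = rng.standard_normal((d, d))
Kmat = A @ A.T + d * np.eye(d)                    # symmetric positive definite
w_max = np.sqrt(max(np.linalg.eigvalsh(Kmat)))    # largest frequency, sqrt(rho(K))

def leapfrog(y, v, h, n):
    # Strang splitting for y'' + K y = 0, written as y' = v, v' = -K y
    for _ in range(n):
        v = v - (h / 2) * (Kmat @ y)
        y = y + h * v
        v = v - (h / 2) * (Kmat @ y)
    return y, v

def blow_up(h, n=400):
    y = rng.standard_normal(d)
    v = rng.standard_normal(d)
    y, v = leapfrog(y, v, h, n)
    return float(np.linalg.norm(y) + np.linalg.norm(v)) > 1e6
```

Step sizes with h√ρ(K) slightly below 2 remain bounded, while those slightly above it diverge, in agreement with the scalar analysis of section 2.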
(iii) A more general class of problems which can be reduced to a system of decoupled harmonic oscillators is

    q′ = M p,   p′ = −N q,   q ∈ R^{d_1},  p ∈ R^{d_2},   (3.8)

where the matrices M ∈ R^{d_1×d_2} and N ∈ R^{d_2×d_1} are such that NM (equivalently, MN) is diagonalizable with real positive eigenvalues. We will next show that there exists r_* > 0 (satisfying r_* ≤ min(r*, x_*)), depending on the parameters a_j, b_j of the splitting method, such that the approximate solution of (3.8) so obtained is conjugate to the solution of the modified system (3.9)-(3.11) whenever |h| √ρ_{M,N} < r_*, where

    ρ_{M,N} = min(ρ(NM), ρ(MN)).   (3.12)

As this assertion can be proved for arbitrary matrices N and M (without any constraint on the square matrices NM and MN), it is worth considering a more general class of systems.

(iv) Let us examine now a linear system of the form (3.8) with arbitrary matrices N and M. Studying the application of splitting methods to the system (3.8) is of interest, for instance, when considering the linearization around a stationary point of a Hamiltonian system with Hamiltonian function of the form H(q, p) = T(p) + V(q). One step of the splitting method (1.2) applied to that system with step size h gives a map z → K(h)z, z = (q, p)^T, with K(h) given by (3.13). Clearly, K(h) can be rewritten in the form (3.14). Here the coefficients k_{i,j} are those of the polynomial entries (2.5) of the stability matrix (2.4) of the splitting method. It is worth noting that one step ψ_h(z) of an arbitrary partitioned Runge-Kutta (PRK) method applied to the partitioned system (3.8) is also of the form ψ_h(z) = K(h)z, where the matrix K(h) is of the form (3.14). For implicit PRK methods there is an infinite number of non-zero coefficients k_{i,j}, and the expressions are obtained by expanding in powers of x certain rational functions (the entries of the stability matrix K(x) of the method); for explicit PRK methods, only a finite number of the coefficients k_{i,j} are non-zero. The next result shows that any partitioned method (splitting method, partitioned Runge-Kutta method) applied to the system (3.8) is, for a sufficiently small step size h, conjugate to a non-partitioned method (defined as a power series expansion).
Proposition 3.1 Consider a 2 × 2 matrix (2.4) depending on the variable x whose entries are defined as power series (3.15) with non-zero radius of convergence. For arbitrary matrices M ∈ R^{d_1×d_2}, N ∈ R^{d_2×d_1}, consider the square matrix of dimension (d_1 + d_2) given by (3.14).
Then, there exists r > 0 and a power series Σ_{j≥1} s_j x^j such that the matrix (3.14) is, provided that |h| √ρ_{M,N} < r, similar to the matrix S(h) in (3.16). In addition, if det K(x) ≡ 1 (in particular, if K(x) is the stability matrix of a splitting method), then S(h) is the exponential of the matrix (3.17), where M̃(h) and Ñ(h) are given by (3.10)-(3.11), and x + φ_3 x³ + φ_5 x⁵ + ··· is the power series expansion of Φ(x) = arccos(p(x)).
Indeed, one can directly check that P (x)K(x)P (x) −1 = S(x) with and r can be taken as the minimum among the radii of convergence of the series K j (x), S j (x), P j (x). According to (3.18), S 1 (x) + iS 2 (x) = s j (ix) j , and thus s j x j is the series expansion of S 1 (−ix) + iS 2 (−ix) where S 1 (x) and S 2 (x) are given by (3.20). If det K(x) ≡ 1 (in particular, if K(x) is the stability matrix of a splitting method), we have that and thus s j x j is the series expansion of exp(iΦ(−ix)). If x + φ 3 x 3 + φ 5 x 5 + · · · is the power series expansion of Φ(x), it is clear that iΦ(−ix) = x − φ 3 x 3 + φ 5 x 5 − · · · and a simple calculation shows that S(h) is the exponential of thus completing the proof. 2 Notice that, in the proof above, there are other choices for P (x). For instance, we could have required that P 2 (x) = 0, P 4 (x) = 1/P 1 (x). Different choices for P (x) in general will give a different value of r. Let us denote as r * the maximal r for which the statement of Proposition 3.1 holds.
For splitting methods, we always have that r_* ≤ r* and r_* ≤ x_*. The following generalization of Proposition 2.6 will be useful in the next section.
Proposition 3.2 Suppose we have an even polynomial p(x) satisfying (2.10), so that 0 < r* ≤ x^*. Then, there exists a time-reversible stability matrix of the form (2.4), with K_1(x) = K_4(x) = p(x), for which x_* = x^* and r_* = r*.
Proof: According to Definition 2.7, r* > 0 is the minimum of the modulus of the zeroes of 1 − p(x)² with odd multiplicity. Now let 0, ±x_1, …, ±x_l be all the zeros with even multiplicity of the polynomial 1 − p(x)² with modulus |x_j| < r*. Then the polynomial 1 − p(x)² can be decomposed as

    1 − p(x)² = x^{m_0} (x² − x_1²)^{m_1} ··· (x² − x_l²)^{m_l} Q(x),

where m_0, m_1, …, m_l are the multiplicities of the zeroes 0, ±x_1, …, ±x_l, respectively, and Q(x) is an even polynomial satisfying Q(0) = 1 that has no zeroes with even multiplicity, so that r* is the minimum of the modulus of the zeroes of Q(x). We then choose a decomposition Q(x) = Q_2(x)Q_3(x) into even polynomials such that Q_2(0) = Q_3(0) = 1, and determine K_2(x), K_3(x) as in (3.22). This completes the proof, as, according to (2.29), P_1(x) = (Q_3(x)/Q_2(x))^{1/4} and P_4(x) = (Q_2(x)/Q_3(x))^{1/4}, and the radius of convergence of their power series expansions is precisely the minimum of the modulus of the zeroes of Q(x) = Q_2(x)Q_3(x), that is, r*. □

Construction of processed splitting methods with high accuracy and enlarged stability domain
The theoretical analysis carried out in the previous sections shows in particular that the stability polynomial p(x) carries all the information needed to construct processed splitting methods for numerically approximating the evolution of the harmonic oscillator. Our aim in this section is precisely to use this analysis to obtain efficient processed splitting methods to solve numerically the linear system (3.8). In other words, our goal is to approximate the exponential

    exp( t ( 0, M ; −N, 0 ) ) ≈ P(h) K(h)^m P(h)^{−1},   (4.1)

where h = t/m is sufficiently small, the kernel K(h) is given by (3.13) (which can be rewritten as (3.14)), and P(h) is defined in terms of power series expansions as in the proof of Proposition 3.1, provided that h √ρ_{M,N} < r_* (where ρ_{M,N} is given by (3.12)).
We will concentrate on the case where either NM or MN is diagonalizable with positive eigenvalues, so that the performance of the method will be directly related to its accuracy and stability when applied to the harmonic oscillator.
In practice, P(h) and P(h)^{−1} will be approximated by polynomials, and one will be able to approximate the action of the exponential (4.1) on the vector (q, p)^T by means of matrix-vector products of the form M p and N q.
Taking the previous considerations into account, we consider a composition of the form (3.13) whose stability polynomial p(x) fulfills the following requirements:

C.1 p(x) = cos(x) + O(x^{2n+2}) as x → 0 for certain n ≥ 1 (thus achieving effective order q = 2n).
C.2 There exist l ≥ 1 and x_j ∈ R, j = 1, …, l, such that 0 < x_1 < ··· < x_l, each x_j is a double zero of the polynomial p(x) − (−1)^j, and all the (not necessarily real) zeroes of odd multiplicity of the polynomial p(x)² − 1 have modulus greater than x_l (so that x^* ≥ r* > x_l).
Once the polynomial p(x) is determined, it still remains to determine a stability matrix K(x). In general, we have that x_* ≤ x^* and r_* ≤ r*. The optimal case is achieved when x_* = x^* and r_* = r* hold, and all processed methods for which these two equalities hold are equivalent. We thus construct symmetric stability matrices K(x) for which x_* = x^* and r_* = r* (this is always possible according to Proposition 3.2). Once this matrix K(x) is determined, the algorithm described in the proof of Proposition 2.1 will give the coefficients a_i, b_i of the kernel, leading to the composition (3.13). Finally, we provide simple procedures to obtain pre- and postprocessors from the polynomials K_i(x).
It is important to keep in mind that the cost of a composition method with stability polynomial p(x) is proportional to the degree of p(x). We can always improve both the stability and the accuracy of p(x) by increasing its degree, but this only makes sense if the improvement compensates for the extra computational cost.

Construction of stability polynomials
According to the previous requirements, we take as candidates for p(x) polynomials in the following family. For each n, l ≥ 0 we consider the even polynomial p_{n,l}(x) of minimal degree among those satisfying (i) p_{n,l}(x) = cos(x) + O(x^{2n+2}) as x → 0, and (ii) for each 1 ≤ j ≤ l, x_j = jπ is a double zero of p_{n,l}(x) − (−1)^j. For arbitrary n, l ≥ 1 we take p_{n,l}(x) such that the coefficients d_j are uniquely determined by requiring that (4.3) holds for p(x) = p_{n,l}(x). Notice the interpolatory nature of p_{n,l}(x) (since cos(jπ) = (−1)^j and cos′(jπ) = −sin(jπ) = 0). Using a symbolic algebra package one obtains the numerical values of d_j, j = 1, ..., 2l, with the desired accuracy for any value of l of practical interest. With the polynomial p_{n,l}(x) it is possible to estimate the relative performance of the splitting methods which can be obtained from it. Since a k-stage method has a stability polynomial of degree 2k, we can take the degree of p_{n,l}(x) (d(p_{n,l}) = 2(n + 2l)) as twice the number of stages needed for the composition method. The accuracy can be measured by evaluating the error function E_{n,l}(x) ≡ arccos(p_{n,l}(x)) − x for different values of x ∈ [0, x^*]. Figure 1(a) shows E_{n,l}(x) versus COST = (n + 2l)/x (to measure the accuracy at a given cost, where x plays the role of the time step) for n = 5, l = 3, 5, 7 and n = 10, l = 6, 10, 14, corresponding to approximations of order 10 and 20 (each choice of p_{n,l}(x) is denoted by (n, l, 0)). The curves lying lowest in the picture show the highest performance, i.e. they give the smallest error at a given cost. From the figures we observe the surprising fact that the performance seems to improve with l for each fixed value of n, and for most values of COST. This occurs for nearly all values of n and l checked, up to n + 2l = 50, corresponding to methods with up to 50 stages.
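As a minimal, self-contained illustration of this error measure (using the leapfrog method, whose stability polynomial is p(x) = 1 − x²/2, rather than the polynomials p_{n,l}(x) constructed above), the following sketch evaluates E(x) = arccos(p(x)) − x inside the stability interval |x| ≤ 2:

```python
import numpy as np

# Stability polynomial of the leapfrog method: half the trace of its
# stability matrix, matching cos(x) to O(x^4) (effective order 2).
def p(x):
    return 1.0 - 0.5 * x**2

# Error measure E(x) = arccos(p(x)) - x, well defined while |p(x)| <= 1,
# i.e. on the stability interval [0, 2].
def E(x):
    return np.arccos(p(x)) - x

xs = np.linspace(0.1, 1.9, 10)
errs = E(xs)
# For small x, E(x) ~ x^3 / 24, consistent with a second order method.
```

Plotting errs against COST = 1/x (one stage per step) reproduces, for this one-stage method, the kind of efficiency curve shown in Figure 1.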
Figure 1(b) shows the results for (n, l) = (5, 7), (8, 12), (10, 14), which correspond to methods with 19, 32 and 38 stages.
On the other hand, if one is interested in approximating the solution matrix O(x) accurately for x ∈ (−x^*, x^*) with |x| as large as possible, we may require, in addition to C.1-C.3 above:
C.4 For a relatively low effective order q = 2n, the stability polynomial p(x) approximates cos(x) with acceptable precision for all x ∈ (−x_l, x_l).
Figure 1: … (b) results for p_{n,l}(x) with (n, l) = (5, 7), (8, 12), (10, 14); (c) the same for p_{n,l,m}(x) with (n, l, m) = (1, 7, 4), (1, 12, 7), (1, 14, 9), corresponding to polynomials of the same degree as in (b); (d) comparison of the most efficient stability polynomials.
With the purpose of fulfilling this goal, we propose the following polynomials as candidates for the stability polynomial p(x). For each n, l, m ≥ 0 we take p_{n,l,m}(x) satisfying (4.3) for p(x) = p_{n,l,m}(x), so that the coefficients d_j (1 ≤ j ≤ 2l) verify the same conditions as before (and thus are uniquely determined). We now have free parameters e_i (1 ≤ i ≤ m), which we determine by minimizing in the least squares sense the coefficients of the Chebyshev series expansion of (p_{n,l,m}(x) − cos(x))/x^{2n+2}; since these coefficients depend linearly on e_i, their values can be determined exactly. In practice, this is conveniently done with the help of a symbolic algebra program. Following this approach, we have analyzed stability polynomials with up to n + 2l + m = 50 (corresponding to methods with up to 50 stages) for different values of n, l, m. Analogously, to measure their relative performance, we compare the error E_{n,l,m}(x) ≡ arccos(p_{n,l,m}(x)) − x versus COST = (n + 2l + m)/x for different real values of x. In Figure 1(c) we show the results obtained for some representative choices of p_{n,l,m}(x), denoted by (n, l, m), corresponding to polynomials of the same degree as those shown in Figure 1(b). In both cases, the performance seems to improve with the number of stages. Figure 1(d) compares the performance of the best stability polynomial of each family (both are polynomials of degree 76, associated with 38-stage methods). It seems that, up to very high accuracy, the best performance corresponds to the second order method (1, 14, 9).
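The linear least-squares determination of free parameters of this kind can be sketched as follows. The fixed low-order part, the correction basis of higher even powers, and the fitting interval below are all illustrative stand-ins (not the actual p_{n,l,m}); the point is that parameters entering linearly are obtained from a single linear solve over a Chebyshev-node sample:

```python
import numpy as np

x_l = 3 * np.pi   # fitting interval endpoint (illustrative)
m = 4             # number of free parameters e_i

# Fixed part: low-order Taylor polynomial of cos (hypothetical stand-in
# for the constrained part of the stability polynomial).
p0 = lambda x: 1.0 - x**2 / 2 + x**4 / 24

# Correction basis entering linearly (hypothetical: higher even powers).
basis = [lambda x, k=k: x**(6 + 2 * k) for k in range(m)]

# Chebyshev nodes on [-x_l, x_l] give a well-conditioned sampling.
nodes = x_l * np.cos(np.pi * (np.arange(200) + 0.5) / 200)

# Solve the linear least-squares problem for the free parameters e_i.
A = np.column_stack([g(nodes) for g in basis])
rhs = np.cos(nodes) - p0(nodes)
e, *_ = np.linalg.lstsq(A, rhs, rcond=None)

p = lambda x: p0(x) + sum(ei * g(x) for ei, g in zip(e, basis))
```

The fitted polynomial reduces the residual against cos(x) over the whole interval while leaving the prescribed low-order behavior at x = 0 untouched.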

Construction of the stability matrix
Once a polynomial p(x) satisfying conditions C.1-C.3 (and possibly C.4) is determined, the next step is to choose the remaining entries of the stability matrix K(x). As previously stated, two different k-stage left-right palindromic compositions are considered. In both cases p(x) has degree 2k, K_1(x) = K_4(x) = p(x), and (K_2(x), K_3(x)) have degrees (2k − 1, 2k + 1) for (4.6) and (2k + 1, 2k − 1) for (4.7). The polynomials (K_2(x), K_3(x)) are chosen to satisfy (2.24), which guarantees that det K(x) = 1, that K(x) is a consistent approximation to (2.1), and that the optimal stability threshold and radius are attained. Clearly, there is a finite number of different choices of such pairs (K_2(x), K_3(x)); among them, we choose a pair such that the corresponding matrix K(x) admits a decomposition of the form (2.2). As representatives of the methods which can be obtained, Table 1 shows the coefficients a_i, b_i of the kernels of a pair of processed methods. Denoting a k-stage processed method of order q by P_k^q, the first set of coefficients corresponds to P_19^10, given by the composition (4.6) with k = 19, whereas the second belongs to P_32^16, given by the composition (4.7) with k = 32. Coefficients for a method P_38^2 can be found in [3]. It is worth noticing the small size of all the coefficients.
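For the harmonic oscillator, the decomposition of K(x) into the stages of the composition reduces to a product of 2 × 2 shear matrices built from the coefficients a_i, b_i. A sketch (using the leapfrog coefficients a = (1/2, 1/2), b = (1, 0) for concreteness, not the processed-method coefficients of Table 1):

```python
import numpy as np

def stability_matrix(a, b, x):
    """K(x) as a product of shears, one pair per stage, for the
    harmonic oscillator q' = p, p' = -q."""
    K = np.eye(2)
    for ai, bi in zip(a, b):
        K = K @ np.array([[1.0, ai * x], [0.0, 1.0]])   # drift stage
        K = K @ np.array([[1.0, 0.0], [-bi * x, 1.0]])  # kick stage
    return K

# Leapfrog kernel: a = (1/2, 1/2), b = (1, 0).
a, b = (0.5, 0.5), (1.0, 0.0)
x = 0.7
K = stability_matrix(a, b, x)
px = 0.5 * np.trace(K)   # stability polynomial value: 1 - x^2/2
detK = np.linalg.det(K)  # always 1: each shear has unit determinant
```

The trace of K(x) divided by two recovers the stability polynomial, and det K(x) = 1 reflects the symplecticity of the scheme.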

Construction of the processor
As for the processor, since the kernel is time-reversible, it can be chosen as in (2.28). Recall that, for their practical implementation, P_1(x) and P_4(x) must be replaced by polynomial approximations, say by (4.8) for a given s. In this way the constraint P_4 = P_1^{-1} is relaxed to P_4 = P_1^{-1} + O(x^{2s+2}). If we assume that S(x) = P(x)K(x)P(x)^{-1} with S(x) given by (2.20), then the coefficients c_i, d_i in (4.8) can be obtained, for instance, by truncating the Taylor expansion of the expressions (2.29). Note that, by construction, the radius of convergence of the series P_1(x) and P_4(x) is r^* > x_l. Coefficients c_i, d_i for P_38^2 can be found in [3]. There are many other ways to approximate the processor which might be more convenient for a given problem. For instance, one may consider even polynomials P̃_j(x) of degree 2(n + m) such that P̃_j(x) − P_j(x) = O(x^{2n+2}) as x → 0, which minimize a suitable least-squares error functional. There is still another procedure which may be useful when the output is required frequently: the post-processor can be made virtually cost-free by approximating it using the intermediate stages obtained during the computation of the kernel (see [1] for more details).

Table 1: Coefficients of the 19-stage and 32-stage symmetric kernels of orders 10 and 16, P_19^10 and P_32^16, respectively.

Numerical examples
We have analyzed the relative performance of the stability polynomials when approximating the function cos(x). This study has allowed us to choose some representative stability polynomials among those showing the best performance, and we have subsequently built processed splitting methods from them. It is then important to check whether this relative performance carries over to practical applications of the splitting methods obtained.
The following standard non-processed splitting methods from the literature are chosen for comparison:
• The 1-stage second order leapfrog method, ψ_{h,2}, given in (1.3) and denoted by LF_1^2.
We have also considered as a reference the standard 4-stage fourth order non-symplectic Runge-Kutta method, RK_4^4.

The harmonic oscillator
As a first example we consider again the one-dimensional harmonic oscillator. This trivial example is well suited as a test bench for the following purposes: (i) to check that all coefficients of the kernel and postprocessor are correct to sufficient accuracy; (ii) to see whether the relative performance shown by the stability polynomials remains valid for the processed splitting methods obtained from them; and (iii) to compare the performance of the new processed methods with other well-established methods from the literature. We take as initial conditions (q, p) = (1, 1) and integrate for t ∈ [0, 2000π] using different (constant) time steps. The largest time step corresponds to the stability limit (we repeated the experiment, increasing the time step until an overflow appeared). We measure the average error in the Euclidean norm of (q, p) versus the number of evaluations (for processed methods, this number corresponds to the kernel). Figure 2(a) shows the results obtained for the standard non-processed methods from the literature (LF_1^2, YS_3^4, GM_4^4, M_17^8, GM_10^10, GM_12^12). We clearly see that LF_1^2 is the most stable and GM_12^12 is the most efficient when accurate results are desired (recall that the latter is a method designed for the harmonic oscillator), so these two are chosen for comparison with the new processed schemes. Figure 2(b) shows the results obtained with the high order processed methods P_19^10, P_32^16 and P_38^20, while Figure 2(c) repeats the experiments for the second order processed schemes P_19^2, P_32^2 and P_38^2. Finally, Figure 2(d) illustrates the performance achieved by the most efficient methods in each case. The superiority of P_38^2 when accurate results are desired is clear.
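The LF_1^2 baseline of this experiment can be reproduced in a few lines (a sketch; the step count and the tolerances implied below are illustrative, not the paper's exact setup):

```python
import numpy as np

def leapfrog_step(q, p, h):
    """One leapfrog (LF_1^2) step for q' = p, p' = -q."""
    q = q + 0.5 * h * p
    p = p - h * q
    q = q + 0.5 * h * p
    return q, p

# Integrate from (q, p) = (1, 1) with a stable step h = 0.1.
q, p = 1.0, 1.0
h, steps = 0.1, 1000
for _ in range(steps):
    q, p = leapfrog_step(q, p, h)

# Exact solution at t = steps * h for these initial conditions.
t = steps * h
qe, pe = np.cos(t) + np.sin(t), np.cos(t) - np.sin(t)
err = np.hypot(q - qe, p - pe)

# Beyond the stability limit x* = 2 (i.e. h > 2) the iteration blows up.
qb, pb = 1.0, 1.0
for _ in range(100):
    qb, pb = leapfrog_step(qb, pb, 2.1)
```

The error is dominated by the accumulated phase error of the numerical rotation angle arccos(1 − h²/2), exactly the quantity measured by E(x) above, while beyond h = 2 the amplitude grows geometrically until overflow.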
For the processor we have considered (2.28)-(2.29), where P_1, P_2 are approximated using (4.8) and the coefficients c_i, d_i, i = 1, ..., s, are obtained from the Taylor series expansion. The error introduced by the processor is local in character and does not propagate with time. For most problems it is enough to take s = k for the pre-processor of 2k-stage methods. For the post-processor, we can take either the same value of s or a smaller one (depending on the accuracy required at intermediate outputs and the length of the integration interval), since this error does not propagate. If output is required frequently, we can always approximate the post-processor using the intermediate stages obtained during the computation of the kernel [1].
Finally, it is important to notice the agreement between the relative performance shown by the processed methods given in Figure 2 and the results obtained for the stability polynomials in Figure 1.

The Schrödinger equation
As a second example we now consider the one-dimensional time-dependent Schrödinger equation (3.1) with the Morse potential V(x) = D(1 − e^{−αx})². We fix the parameters to the values µ = 1745 a.u., D = 0.2251 a.u. and α = 1.1741 a.u., frequently used for modelling the HF molecule. As initial condition we take the Gaussian wave function ψ(x, 0) = ρ exp(−β(x − x̄)²), with β = √(kµ)/2, k = 2Dα², x̄ = −0.1, and ρ a normalizing constant. Assuming that the system is defined on the interval x ∈ [−0.8, 4.32], we split it into d = 128 parts of length ∆x = 0.04, take periodic boundary conditions, and integrate over t ∈ [0, 20 · 2π/w_0] with w_0 = α√(2D/µ) (see [3] for more details on the implementation of splitting methods for this problem). As we have seen, the splitting methods considered in this work preserve symplecticity but not unitarity. In Figure 3 we show the error in the preservation of unitarity, |q^T(t)q(t) + p^T(t)p(t) − 1|, and the relative error in energy, |(E(t) − E(0))/E(0)| with E(t) = q^T(t)Hq(t) + p^T(t)Hp(t), for the 4-stage fourth order methods RK_4^4 and GM_4^4. Both methods require the same number of FFT calls per step (GM_4^4 has lower storage requirements) and are used with the same time step h = (2π/w_0)/250. The scheme GM_4^4 is orders of magnitude more accurate and, as expected for a symplectic integrator, its energy error does not grow secularly in time. A similar behavior is observed for the error in unitarity, in agreement with the results shown in this work. The integrations are started from a sufficiently small time step, repeating the computation with slightly larger time steps until an overflow occurs, which we identify with the stability limit. We present the results for the 2nd- and 12th-order non-processed splitting methods chosen in the previous example (the results for the remaining methods lie between them).
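The unitarity and energy error measures used here can be sketched on a small model problem. Writing ψ = q + ip, the semi-discretized equation iψ′ = Hψ becomes the real system q′ = Hp, p′ = −Hq, to which splitting methods apply directly; the H below is an illustrative real symmetric tridiagonal matrix, not the actual Morse discretization:

```python
import numpy as np

d = 32
# Illustrative Hamiltonian: discrete Laplacian plus a diagonal potential.
x = np.linspace(-1.0, 1.0, d)
H = (np.diag(2.0 + x**2) - np.diag(np.ones(d - 1), 1)
     - np.diag(np.ones(d - 1), -1))

# Real, normalized Gaussian initial wave function: psi = q + i p.
q = np.exp(-20.0 * x**2)
q /= np.linalg.norm(q)
p = np.zeros(d)

E0 = q @ H @ q + p @ H @ p   # initial energy q^T H q + p^T H p

# Leapfrog applied to q' = Hp, p' = -Hq (symplectic, not unitary).
h = 0.01
for _ in range(2000):
    q = q + 0.5 * h * (H @ p)
    p = p - h * (H @ q)
    q = q + 0.5 * h * (H @ p)

unit_err = abs(q @ q + p @ p - 1.0)                   # unitarity error
energy_err = abs((q @ H @ q + p @ H @ p) / E0 - 1.0)  # rel. energy error
```

Both errors remain small and bounded over the integration (no secular growth), which is the qualitative behavior reported for the symplectic scheme GM_4^4 in Figure 3.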
For the processed methods we take the 19- and 38-stage methods of order two (P_19^2 and P_38^2) and of orders ten (P_19^10) and twenty (P_38^20), respectively. We observe that these methods show a relative performance similar to that obtained from the study of the harmonic oscillator and of the corresponding stability polynomials. For reference, we have also included the results for the RK_4^4 method.

Figure 4: Error of the solution vector for the Schrödinger equation versus the number of FFT calls, in logarithmic scale, for the 2nd- and 12th-order non-processed methods LF_1^2 and GM_12^12, the standard RK_4^4, and the processed methods P_19^2, P_19^10, P_38^2 and P_38^20.