Fourier-Splitting methods for the dynamics of rotating Bose-Einstein condensates

We present a new method to propagate rotating Bose-Einstein condensates subject to explicitly time-dependent trapping potentials. Using algebraic techniques, we combine Magnus expansions and splitting methods to yield methods of any order for the multivariate and nonautonomous quadratic part of the Hamiltonian that can be computed using only Fourier transforms, at the cost of solving a small system of polynomial equations. The resulting scheme solves the challenging component of the (nonlinear) Hamiltonian and can be combined with optimized splitting methods to yield efficient algorithms for rotating Bose-Einstein condensates. The method is particularly efficient for potentials that can be regarded as perturbed rotating and trapped condensates, e.g., for small nonlinearities, since it retains the near-integrable structure of the problem. For large nonlinearities, the method remains highly efficient if higher order ($p > 2$) is sought. Furthermore, we show how it can be adapted to the presence of dissipation terms. Numerical examples illustrate the performance of the scheme.

1. Introduction. The centerpiece of this work is the construction of an efficient geometric integrator for the two-dimensional harmonically trapped rotational Schrödinger equation in atomic units ($\hbar = m = 1$) subject to periodic boundary conditions,

$$i\partial_t \psi(\mathbf{r}, t) = H_A(t)\psi(\mathbf{r}, t), \qquad \psi(\mathbf{r}, 0) = \psi_0 \in L^2([-\pi,\pi]^2), \quad (1.1)$$

with the explicitly time-dependent Hamiltonian

$$H_A(t) = \tfrac{1}{2}\mathbf{p}^T\mathbf{p} + \tfrac{1}{2}\big(\omega_x(t)^2 x^2 + \omega_y(t)^2 y^2\big) + \Omega L_z,$$

where $\mathbf{r} = (x, y)^T$, $\mathbf{p} = (p_x, p_y)^T$, $L_z = xp_y - yp_x$ denotes the angular momentum operator and $p_k = -i\partial_k$, $k = x, y$. This includes the case of unbounded domains, since the harmonic trapping potential makes the solution vanish up to round-off on sufficiently large spatial intervals. The presented techniques are also valid for more general quadratic time-dependencies, which are used to model collisions of atoms and molecules [23,15]. The generalization to three dimensions is straightforward and will be briefly addressed in Section 2. The efficient solution of (1.1) is paramount to the computation of the dynamics of rotating Bose-Einstein condensates, as we will see below.
For sufficiently small $h$ and any order $p > 1$, we show that there exist cheaply computable coefficients $f_j, g_k, e_l \in i\mathbb{R}$ obeying a small system of polynomial equations such that

$$e^{f_0 x^2}\, e^{f_1 y^2 + g_1 p_x^2 - e_1 y p_x}\, e^{f_2 x^2 + g_2 p_y^2 + e_2 x p_y}\, e^{f_3 y^2 + g_3 p_x^2 - e_3 y p_x}\, \psi(t) = \varphi^{H_A}_{t,t+h}\psi(t) + \mathcal{O}(h^{p+1}), \quad (1.2)$$

where $\varphi^{H_A}_{t,t+h}$ denotes the exact flow of (1.1) from $t$ to $t + h$. This approximation preserves unitarity and gauge invariance of the exact solution and can thus be considered a geometric integrator in the sense of Ref. [14]. After discretization, this decomposition, denoted $\Phi^{[p]}_{t,t+h}$, requires only six (one-dimensional) changes from coordinate to momentum space and vice versa per time-step in order to diagonalize the exponents; these can be performed by fast Fourier transforms (FFT) and hence suggest the name Fourier-splitting.
The method is particularly successful for perturbed problems of the form $H = H_A(t) + \varepsilon B(t, \mathbf{r}, |\psi|)$, $\varepsilon \ll 1$, with a small parameter $\varepsilon$ and some real-valued function $B$, which includes the Gross-Pitaevskii equation for Bose-Einstein condensates as a special case. The (nonlinear) Hamiltonian $H$ with $B = g|\psi|^2 + V$ describes the evolution of a rotating Bose-Einstein condensate (BEC) subject to a harmonic (parabolic) trapping potential plus some perturbation $\varepsilon V$. After the first experimental realization of BECs [1,11,13] and the consequently awarded Nobel prize in 2001, continuous attention of numerical analysts [12,3,25,4,5,21,17] has been drawn to the solution of the autonomous version of (1.1), which is obtained by dropping all time-dependencies in the Hamiltonian. The flow of the perturbation $B$ can be easily computed since $B$ is diagonal in coordinate space and leaves the modulus $|\psi|$ constant, see Lemma 2.1 for details. Using (1.2), the exact flow can be approximated by Strang's method,

$$\varphi^{\varepsilon\tilde B}_{h/2} \circ \Phi^{[p]}_{t,t+h} \circ \varphi^{\varepsilon\tilde B}_{h/2} = \varphi^{H}_{t,t+h} + \mathcal{O}(\varepsilon h^3 + h^{p+1}), \quad (1.3)$$

where the tildes, $\tilde B$, indicate frozen (nonlinear) operators, which also justifies the appearance of only one index, i.e., $\varphi^{\tilde B}_h$. The term proportional to $h^{p+1}$ originates from the error in the approximation of the part $H_A$ by the $p$th order method $\Phi^{[p]}$. Observe that the outer exponentials of (1.2) are diagonal in coordinate space and no further FFT is necessary to solve the full problem (1.3). An alternative approach [3,25] splits the system into simultaneously diagonalizable parts

$$T_x = \tfrac{1}{2}p_x^2 - \Omega y p_x, \qquad T_y = \tfrac{1}{2}p_y^2 + \Omega x p_y, \qquad W = \tfrac{1}{2}\big(\omega_x(t)^2 x^2 + \omega_y(t)^2 y^2\big) + \varepsilon B(t),$$

and then applies a symmetric composition of the form

$$\varphi^{\tilde W}_{h/2} \circ \varphi^{\tilde T_x}_{h/2} \circ \varphi^{\tilde T_y}_{h} \circ \varphi^{\tilde T_x}_{h/2} \circ \varphi^{\tilde W}_{h/2} = \varphi^{H}_{t,t+h} + \mathcal{O}(h^3), \quad (1.4)$$

which also requires six FFTs, but the small factor $\varepsilon$ in the error is lost. If the time is frozen in $H_A$, Laguerre transforms [4,5,21,17] or a decomposition similar to (1.2) [12] can be used to advance $H_A$ without recovering the small factor [8].
The decomposition is built upon earlier works for rotating but autonomous BEC [12] and explicitly time-dependent one-dimensional harmonic oscillators [2], where Fourier-splittings have been used for simpler Hamiltonians.
In the following section, we give a short introduction to some numerical concepts, in particular, splitting methods and the Magnus expansion, which will culminate in the derivation of our method. The efficiency is demonstrated by a series of numerical examples.
2. Derivation of the new method. Splitting methods are frequently recommended for the integration of (nonlinear) Schrödinger equations due to their fast computability and high accuracy [6,22,24]. Furthermore, they preserve geometric features of the exact solution, such as norm-conservation (unitarity) and gauge-invariance.
The simplest example is the splitting of the flow of $H = A + B$ into a composition

$$\Psi_h = e^{h a_s A}\, e^{h b_s B} \cdots e^{h a_1 A}\, e^{h b_1 B}, \quad (2.1)$$

for a suitable choice of coefficients $a_j, b_j \in \mathbb{R}$. The choice $a_1 = 1$, $a_2 = 0$ and $b_1 = b_2 = 1/2$ with $s = 2$ corresponds to Strang's second-order method (1.3). Higher-order methods can be designed using the Baker-Campbell-Hausdorff (BCH) formula, $e^{hA}e^{hB} = e^{\mathrm{bch}(hA, hB)}$, whose first terms are given by

$$\mathrm{bch}(hA, hB) = h(A + B) + \tfrac{h^2}{2}[A, B] + \tfrac{h^3}{12}\big([A, [A, B]] + [B, [B, A]]\big) + \cdots,$$

solely in terms of commutators. Applied repeatedly to (2.1), it yields a modified vector field $\tilde H$ whose exact solution $\varphi^{\tilde H}_h$ coincides with the result of the method; the discrepancy between this modified vector field and the original problem is called the backward error. Typically, the split is done such that $A = \tfrac{1}{2}\mathbf{p}^T\mathbf{p}$ and $B = B(t, \mathbf{r}, |\psi|)$, because then $A$ is diagonalized by the Fourier transform, where $\mathcal{F}$ is the Fourier transform w.r.t. all spatial variables, $(\mathcal{F}\psi)(k_x, k_y) = \frac{1}{2\pi}\iint \psi(x, y)e^{i(k_x x + k_y y)}\,dx\,dy$.
After spatial discretization, the integral can be evaluated using fast Fourier transforms, and it remains to compute the exponential of a diagonal matrix constituted by the wave-numbers, with entries $e^{-ih(k_x^2 + k_y^2)/2}$. The flow of part $B$ can be computed as detailed in the following well-known lemma.

Lemma 2.1. Let $B(t, \mathbf{r}, |\psi|)$ be a real-valued function; then

$$i\partial_t \psi(\mathbf{r}, t) = B(t, \mathbf{r}, |\psi|)\,\psi(\mathbf{r}, t), \qquad \psi(\mathbf{r}, t_0) = \psi_0,$$

has the solution

$$\psi(\mathbf{r}, t) = \exp\Big(-i\int_{t_0}^{t} B(s, \mathbf{r}, |\psi_0|)\,ds\Big)\,\psi_0.$$

The proof relies on the simple calculation that $\frac{d}{dt}|\psi|^2 = \dot\psi^*\psi + \psi^*\dot\psi = 0$ after plugging in the definition of the derivative. The system then reduces to a linear non-autonomous ODE.
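These two building blocks - the kinetic part diagonalized by FFTs and the $B$-part applied pointwise in coordinate space (Lemma 2.1) - can be sketched as a single Strang step in NumPy. This is a minimal illustration, not the paper's implementation; the function name and grid handling are our own:

```python
import numpy as np

def strang_step(psi, h, B, x, y):
    """One Strang step exp(-ihB/2) exp(-ihA) exp(-ihB/2) for
    i dpsi/dt = (A + B) psi with A = (p_x^2 + p_y^2)/2.
    B(x, y, |psi|) is real-valued and diagonal in coordinate space;
    A is diagonal after a 2D Fourier transform."""
    # Half-step with the potential/nonlinear part, pointwise in coordinate space.
    psi = np.exp(-0.5j * h * B(x, y, np.abs(psi))) * psi
    # Full kinetic step: diagonalize -Laplacian/2 via FFT.
    kx = 2 * np.pi * np.fft.fftfreq(psi.shape[0], d=x[1, 0] - x[0, 0])
    ky = 2 * np.pi * np.fft.fftfreq(psi.shape[1], d=y[0, 1] - y[0, 0])
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    psi = np.fft.ifft2(np.exp(-0.5j * h * (KX**2 + KY**2)) * np.fft.fft2(psi))
    # Second half-step with B, re-evaluated at the updated modulus.
    psi = np.exp(-0.5j * h * B(x, y, np.abs(psi))) * psi
    return psi
```

Each factor is a unit-modulus multiplier in the appropriate representation, so the step conserves the discrete norm up to round-off.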
Suppose now that the Hamiltonian has explicit time-dependencies, $H = H_A(t) + \varepsilon B(t, \mathbf{r}, |\psi|)$. Introducing time as two new coordinates $t_1, t_2$ makes the system treatable with splitting methods. We write the Lie derivative corresponding to the (nonlinear) vector field $H\psi$ as

$$\mathcal{L}_H = \Big(H_A(t_1) + \frac{d}{dt_1}\Big) + \Big(\varepsilon B(t_2) + \frac{d}{dt_2}\Big). \quad (2.2)$$

The derivatives $d/dt_j$ are responsible for the evolution of the time-coordinates $t_j$.
There are several possibilities to split this enhanced operator: the computationally simplest pairs the operators depending on $t_j$ with the evolution of $t_i$, $i \neq j$, e.g., $H_A(t_1)$ with $d/dt_2$ and $\varepsilon B(t_2)$ with $d/dt_1$. In this way, the splitting works exactly as in the autonomous situation since, in each internal step, the main operator $H_A(t_1)$ (or $B(t_2)$) is frozen and the other time-coordinate $t_2$ (or $t_1$) is advanced accordingly. A closer look at the error terms reveals that this split, albeit simple, loses a factor $\varepsilon$ in accuracy, since the derivatives w.r.t. $t_j$ mix large and small terms; we briefly examine how the factor can be recovered [8] after a short interlude on higher-order compositions in the presence of a small parameter.

Near-integrable structure.
The appearance of the extra factor $\varepsilon$ in the error terms is due to the separation of large and small parts in the splitting, and extremely successful composition methods have been developed for this problem class [20,10]. The basic idea is to express the local error in a power series in the time-step and in the small parameter,

$$\Phi_h = \varphi_h + \mathcal{O}\big(\varepsilon h^{s_1+1} + \varepsilon^2 h^{s_2+1} + \cdots + \varepsilon^m h^{s_m+1}\big),$$

where the $s_j$ start from the first non-vanishing error coefficient; such a method $\Phi_h$ is said to be of generalized order $(s_1, s_2, \ldots, s_m)$ (where $s_1 \geq s_2 \geq \cdots \geq s_m$). Then, the coefficients $a_j, b_j$ of a splitting method (2.1) can be chosen to construct methods of any generalized order, given that either of the two parts $A$ or $B$ is proportional to $\varepsilon$.
This proportionality automatically carries over to the commutators, $[A, B] \propto \varepsilon$, but some additional consideration is required in the presence of time-dependencies, since the operators causing the time-evolution are not necessarily proportional to $\varepsilon$. The simplest treatment makes use of the formulation in Lie derivatives (2.2), for which we have already pointed out that freezing both parts will necessarily reduce the generalized order of a scheme. On the upside, there is a remedy which has motivated this study: if the large part is advanced non-autonomously, the generalized order is preserved [8]. In terms of Lie derivatives, this corresponds to taking the large part of (2.2) to be either $A = H_A(t_1) + d/dt_1$ or $A = H_A(t_1) + d/dt_1 + d/dt_2$. The latter option implies freezing the remainder $B$ and is usually preferred for simplicity and efficiency. Our aim is to apply the highly efficient splitting methods for near-integrable systems [20] to the problem at hand, which is of similar structure. As discussed above, however, a proper application of splittings means to solve $\varphi^{H_A}_{t_j, t_j + a_j h}$ for a fractional time-step $a_j h$.
We stress that a propagation of $H_A$ in the autonomous (or frozen) setting using Laguerre polynomials [6,4,17] would lose (at least) a factor $\varepsilon$ in accuracy, and the basis would have to be recomputed in each internal step.

Time averaging.
A cornerstone of the construction is the formal solution of a non-autonomous (linear) initial value problem, $\partial_t u(t) = A(t)u(t)$, in the form $u(t+h) = \exp(\Theta(t, t+h))u(t)$. The Magnus expansion [19] gives an expression for the exponent $\Theta$ using integrals over commutators of increasing length of the operator $A$ evaluated at different instances of time. Its first two terms are

$$\Theta(t, t+h) = \int_t^{t+h} A(\tau)\,d\tau + \frac{1}{2}\int_t^{t+h}\!\!\int_t^{\tau_1} [A(\tau_1), A(\tau_2)]\,d\tau_2\,d\tau_1 + \cdots, \quad (2.3)$$

and recursive procedures exist to obtain higher-order corrections [7]. The integrals can be efficiently computed by quadrature rules [18,7], which will be exemplified in the numerical section.
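The fourth-order member of this family, based on two-point Gauss-Legendre quadrature (used again in the numerical section), can be sketched for a generic matrix ODE $\dot u = A(t)u$ as follows. This is a minimal illustration under our own naming, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(A, t, h):
    """One step of the fourth-order Magnus integrator for u' = A(t) u,
    with two-point Gauss-Legendre nodes c = 1/2 -/+ sqrt(3)/6:
        Theta = (h/2)(A1 + A2) + (sqrt(3) h^2 / 12) [A2, A1].
    Returns the propagator exp(Theta) over [t, t + h]."""
    c = np.sqrt(3) / 6
    A1 = A(t + (0.5 - c) * h)
    A2 = A(t + (0.5 + c) * h)
    theta = 0.5 * h * (A1 + A2) + (np.sqrt(3) * h**2 / 12) * (A2 @ A1 - A1 @ A2)
    return expm(theta)
```

For a 2x2 system this reproduces the solution of a parametrically driven oscillator to fourth order in the step size.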
It is easy to verify that the components of $H_A$ generate, via commutation, a ten-dimensional Lie algebra $\mathfrak{g}$ with basis

$$\{x^2,\; y^2,\; p_x^2,\; p_y^2,\; xy,\; p_xp_y,\; xp_y,\; yp_x,\; xp_x,\; yp_y\},$$

and since the Magnus expansion only operates by summation and commutation, we have $\Theta, \Theta^{[p]} \in \mathfrak{g}$ at any truncation order $p$, where the truncation is performed within the algebra, s.t. $\Theta^{[p]} = \Theta + \mathcal{O}(h^{p+1})$. We stress that this yields a geometric integrator, as staying in the correct algebra $\mathfrak{g}$ assures unitarity of the exponential. Evaluating the commutators, the (truncated) Magnus expansion can be interpreted as an averaged Hamiltonian $\bar H$, a linear combination of the basis elements with coefficients $m_x, m_y, w_x$, etc. that depend on $h$, $p$ and on integrals over the time-dependent frequencies, such that $e^{-ih\bar H}$ coincides with the truncated Magnus expansion $\exp(\Theta^{[p]}_{t,t+h})$. In principle, one could think about using some standard splitting method for this operator, but the mixed terms $xp_x, yp_y$ cannot be diagonalized by means of Fourier transforms. Notice the relationship between the decomposition (1.2) and commutator-free Magnus methods [9], since the computationally difficult terms arise after commutation only. These methods, however, require a higher number of FFTs (double for order 4).

The decomposition.
Nevertheless, it is possible to accurately compute $e^{\Theta^{[p]}}$ without having to evaluate mixed operators. The key to our endeavor is the finiteness of the underlying algebra to which any such $\Theta$ belongs. Together with the BCH formula, we deduce that for sufficiently small $h$, the left-hand side of (1.2) can be expressed as a single exponential of a linear combination of the basis elements. The coefficients in (1.2) are, in principle, determined by equating the resulting exponent to the averaged Hamiltonian $-ih\bar H$. Instead, we propose an alternative procedure which extends results from Ref. [2] and relies on finding a faithful (injective) representation of the operator Lie algebra $\mathfrak{g}$. This Lie algebra isomorphism drastically simplifies all calculations, since we will be able to verify the decomposition (1.2) by computations in a (low-dimensional) matrix setting. The correspondence principle $i[\cdot,\cdot] \to \{\cdot,\cdot\}$ between the quantum Lie bracket and the Poisson bracket gives an elegant method to find the isomorphism by considering the equivalent classical Hamiltonian system for $\bar H(x, y, p_x, p_y)$. For convenience of the reader, we recall that the classical equations of motion are given by $\dot{\mathbf{r}} = \partial\bar H/\partial\mathbf{p}$, $\dot{\mathbf{p}} = -\partial\bar H/\partial\mathbf{r}$, which translate for our Hamiltonian $\bar H$ into a linear system $\dot u = Mu$, $u = (x, y, p_x, p_y)^T$, whose matrix $M$ is linear in the coefficients of $\bar H$. The injectivity is a consequence of the uniqueness of each matrix element w.r.t. the coefficients in $\bar H$, and it is easy to verify that the matrices indeed form an isomorphic Lie algebra with the standard matrix commutator.
Although it is too cumbersome to evaluate the solution operator $\exp(hM)$ in closed form, it is a straightforward numerical task.
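As an illustration, for frozen frequencies $\omega_x, \omega_y$ and rotation $\Omega$, the classical matrix $M$ and the numerical evaluation of $\exp(hM)$ can be sketched as follows (helper names are ours, not the paper's). Conservation of the quadratic Hamiltonian along the flow serves as a consistency check:

```python
import numpy as np
from scipy.linalg import expm

def classical_matrix(wx2, wy2, Om):
    """Matrix M of the classical system udot = M u, u = (x, y, p_x, p_y)^T, for
    H = (p_x^2 + p_y^2)/2 + (wx2*x^2 + wy2*y^2)/2 + Om*(x*p_y - y*p_x)."""
    return np.array([
        [0.0,  -Om,  1.0,  0.0],   # xdot  = p_x - Om*y
        [Om,   0.0,  0.0,  1.0],   # ydot  = p_y + Om*x
        [-wx2, 0.0,  0.0, -Om],    # pxdot = -wx2*x - Om*p_y
        [0.0, -wy2,  Om,   0.0],   # pydot = -wy2*y + Om*p_x
    ])

def energy(u, wx2, wy2, Om):
    """Classical Hamiltonian, conserved exactly along the autonomous flow."""
    x, y, px, py = u
    return 0.5 * (px**2 + py**2) + 0.5 * (wx2 * x**2 + wy2 * y**2) + Om * (x * py - y * px)
```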
Next, we take a look at the left-hand side of (1.2); for illustration, we compute the rightmost exponent, $\exp(f_3 y^2 + g_3 p_x^2 - e_3 y p_x)$, in the matrix algebra: the corresponding matrix $N$ is nilpotent, $N^2 = 0$, and hence $\exp(f_3 y^2 + g_3 p_x^2 - e_3 y p_x) = \exp(N) = 1 + N$. The remaining matrices in the exponents of $\Phi^{[p]}_{t,t+h}$ are also nilpotent and can be trivially exponentiated, yielding matrix factors of the form $1 + N$ (2.4). Notice that we have changed the exponents by introducing factors $1/2$ in the previous equation to get a more compact notation; later on, we revert to the original form for better readability. Furthermore, the multiplication order of the matrices has to be reversed w.r.t. the exponentials to account for their nature as Lie derivatives (cf. Vertauschungssatz [16]), and equality holds if both sides are understood as flows.
Multiplication of the matrix exponentials (2.4) according to (1.2) yields a $4\times4$ matrix of multivariate polynomials of maximum degree four, which has to be equated to $\exp(hM)$. The system $\dot u = Mu$, $u = (x, y, p_x, p_y)^T$, has ten degrees of freedom, originating from the linearly independent basis terms, and the same number of variables has been introduced in (1.2). It is clear that if we had allowed the appearance of the mixed terms $xp_x, yp_y$, the composition could be solved easily, since we would have a free variable for each basis element [26].
With the aim of obtaining a Fourier-diagonalizable decomposition, i.e., terms that are diagonal after Fourier transform, we examine the structure constants of the algebra. It turns out that $xp_x$ and $yp_y$ can be generated only by the commutators $[x^2, p_x^2]$, $[y^2, p_y^2]$, $[xy, p_xp_y]$ and $[xp_y, yp_x]$. Our ansatz hence includes two free variables multiplying $x^2$ ($f_0, f_2$), $y^2$ ($f_1, f_3$), $p_x^2$ ($g_1, g_3$) and $yp_x$ ($e_1, e_3$). One in each pair will satisfy the equation for the corresponding basis element, whereas the other can be used to create the terms $xy, p_xp_y, xp_x$ and $yp_y$. We deduce that all basis terms can be generated and thus, for sufficiently small $h$, a solution of the only formally overdetermined $4\times4$ nonlinear algebraic system can be computed, e.g., with the Gauss-Newton algorithm. With the help of a Gröbner basis and simple algebra, the number of variables can be reduced to accelerate the algorithm.
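The coefficient solve can be sketched in the $4\times4$ matrix representation with a standard least-squares (Gauss-Newton-type) iteration. All helper names, the chosen factor-ordering convention (reversed w.r.t. the operator exponentials, as noted above) and the test frequencies are ours; this is a sketch of the idea, not the paper's code:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import least_squares

def classical_matrix(wx2, wy2, Om):
    """Classical matrix of the (frozen) Hamiltonian H_A on u = (x, y, p_x, p_y)^T."""
    return np.array([[0.0, -Om, 1.0, 0.0], [Om, 0.0, 0.0, 1.0],
                     [-wx2, 0.0, 0.0, -Om], [0.0, -wy2, Om, 0.0]])

I4 = np.eye(4)

def F_x2(a):                       # flow of a*x^2: only p_x changes
    N = np.zeros((4, 4)); N[2, 0] = -2 * a
    return I4 + N                  # N nilpotent, exp(N) = 1 + N

def F_y_px(a, b, c):               # flow of a*y^2 + b*p_x^2 + c*y*p_x
    N = np.zeros((4, 4))
    N[0, 1], N[0, 2] = c, 2 * b    # xdot  = c*y + 2b*p_x
    N[3, 1], N[3, 2] = -2 * a, -c  # pydot = -2a*y - c*p_x
    return I4 + N

def F_x_py(a, b, c):               # flow of a*x^2 + b*p_y^2 + c*x*p_y
    N = np.zeros((4, 4))
    N[1, 0], N[1, 3] = c, 2 * b
    N[2, 0], N[2, 3] = -2 * a, -c
    return I4 + N

def solve_coefficients(h, wx2, wy2, Om):
    """Match the four-factor composition (1.2), in the matrix representation,
    to exp(h*M); returns the ten coefficients and the residual."""
    target = expm(h * classical_matrix(wx2, wy2, Om))
    def residual(v):
        f0, f1, g1, e1, f2, g2, e2, f3, g3, e3 = v
        P = F_y_px(f3, g3, -e3) @ F_x_py(f2, g2, e2) @ F_y_px(f1, g1, -e1) @ F_x2(f0)
        return (P - target).ravel()
    sol = least_squares(residual, np.zeros(10), xtol=1e-15, ftol=1e-15, gtol=1e-15)
    return sol.x, np.max(np.abs(residual(sol.x)))
```

In line with the text, the iteration starts from $0 \in \mathbb{R}^{10}$, which is close to the solution for small time-steps.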
Notice that there are multiple choices of possible compositions. For example, at the same cost, we could have replaced the outer exponential by $e^{f_0 xy}$ or introduced the terms $e^{k p_x p_y}$ before and after the center exponential (setting $f_0 = 0$). It is not clear whether other choices for the decomposition are more advantageous.
For simplicity of the presentation, we have chosen a simple form of the Hamiltonian (1.1), but using our methodology, analogous methods can be derived for virtually all relevant polynomial Hamiltonians of degree $\leq 2$ in any dimension with arbitrary time-dependencies.
Summarizing, a time-step of the algorithm consists of the following parts: compute the Magnus expansion (2.3) of $H_A$ up to the desired order $p$, solve the resulting (small) polynomial system to obtain the coefficients $e_j, f_k, g_l$, and apply the composition $\Phi^{[p]}_{t,t+h}$. It is worth pointing out that the extra effort is virtually independent of the order chosen for the Magnus expansion and can be neglected as the number of grid points increases.
In total, one step of the algorithm requires the application of two 1D and two 2D Fourier transforms, which can be implemented at the cost of three 2D-FFTs; prior to evolving the wave function, the coefficients are determined by exponentiating a $4\times4$ matrix and solving a small nonlinear system. The effort for the solution of the (formally) overdetermined system, which can be done by a least-squares algorithm, is marginal since - for small time-steps - the solution is not far from $0 \in \mathbb{R}^{10}$.

Special cases.
In passing, we mention some further cases, for which the algebra simplifies.
Due to cancellations, the commutators of $H$ at different instances lie in the span of $\{p_x^2 + p_y^2,\; x^2 + y^2,\; L_z,\; (xp_x + p_xx) + (yp_y + p_yy)\}$, and any Magnus integrator can be written as an effective Hamiltonian

$$\tilde H_{t,t+h} = a_{t,t+h}(p_x^2 + p_y^2) + b_{t,t+h}(x^2 + y^2) + c_{t,t+h}L_z + d_{t,t+h}(xp_x + yp_y) + e_{t,t+h}\,xy.$$
Linear interaction. For time-dependencies proportional to the linear components only, the following terms in the algebra do not appear: $xy, p_xp_y, xp_x, yp_y$. It is therefore sufficient to employ a symmetric composition including the linear terms in the exponents,

$$e^{f_1y^2 + g_1p_x^2 - e_1yp_x + d_1y + c_1p_x}\; e^{f_2x^2 + g_2p_y^2 + e_2xp_y + d_2x + c_2p_y}\; e^{f_1y^2 + g_1p_x^2 - e_1yp_x + d_1y + c_1p_x}.$$
The equations for the parameters have to be obtained in a slightly different way which is described below, preceding (2.6).
General quadratics. For more complicated Hamiltonians in the algebra $\mathfrak{g}$, in particular when linear terms are involved, the described procedure fails to a certain degree: as for the one-dimensional harmonic oscillator problem [2], the phase relation cannot be recovered, since its classical mechanical equivalent does not enter the equations of motion. In principle, one could approximate the phase numerically by introducing a new variable proportional to the phase, $E_{15}$, and derive a system of differential equations for all parameters by interpreting them as time-dependent functions and plugging the ansatz into the Schrödinger equation with Hamiltonian (2.5) after Magnus averaging. Then, the resulting scalar functions are evaluated using the presented algorithm on a fixed grid which partitions the time-step interval $[t_n, t_n + h]$, and we finally solve the differential equation for the free phase parameter numerically; alternatively, we make use of the BCH formula.
This effort can be spared, since only the global phase information is lost, which is not observable. The polynomial system to be solved then needs additional degrees of freedom to cater for the linear contributions, and to close the discussion, we conjecture that there exist (under mild assumptions) imaginary coefficients such that

$$\Psi_h = e^{n_1x^2 + m_1x}\, e^{f_1y^2 + g_1p_x^2 - e_1yp_x + k_1p_x}\, e^{f_2x^2 + g_2p_y^2 + e_2xp_y + k_2p_y}\, e^{f_3y^2 + g_3p_x^2 - e_3yp_x + k_3p_x}\, e^{n_2x^2 + m_2x}$$

is the solution of the Schrödinger equation with Hamiltonian (2.5) for small values of the $\alpha_j$. Note that a slightly different methodology has to be applied to account adequately for the linear terms. The exact solution for the corresponding classical mechanical system is obtained using the variation-of-constants formula, which handles the linear contributions in the Hamiltonian. The decomposition, on the other hand, can no longer be written as a product of matrix exponentials; instead, the exponentials are interpreted as flows and computed accordingly. For clarity, we describe how to compute the flow of $H = n_1x^2 + m_1x$, which corresponds to the action of $e^{n_1x^2 + m_1x}$ on some initial value $(x, y, p_x, p_y)^T$. Trivially, we get

$$e^{n_1x^2 + m_1x}(x, y, p_x, p_y)^T = (x,\; y,\; p_x - (2n_1x + m_1),\; p_y)^T, \quad (2.6)$$

and following through in the same fashion, a polynomial system in the parameters is obtained.
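The affine flow (2.6) is trivial to implement; a one-line sketch with a hypothetical helper name:

```python
import numpy as np

def flow_x2_linear(n1, m1, u):
    """Flow of H = n1*x^2 + m1*x on u = (x, y, p_x, p_y), cf. (2.6):
    only p_x changes, by -(2*n1*x + m1)."""
    x, y, px, py = u
    return np.array([x, y, px - (2 * n1 * x + m1), py])
```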

Higher dimensions.
It is straightforward to generalize the results to arbitrary spatial dimensions $n$, given that the potentials remain quadratic. The only noteworthy detail is that the dimensions of the matrices that yield the polynomial system scale as $2n \times 2n$. In the particular case of a harmonic trap in the $z$-axis - ceteris paribus - the Hamiltonian can be written as a sum $H = H_A(t) + H_z + G$, where $H_z = \tfrac{1}{2}p_z^2 + \tfrac{1}{2}\omega_z^2z^2$. A sensible splitting groups commuting terms, $A = H_A + H_z$, leaving the (small) remainder $B = G$. In order to compute one splitting step, $e^{-ihH_z}$ can be applied using two FFTs in the $z$-direction, since

$$e^{-ihH_z} = e^{-i\omega_z\tan(h\omega_z/2)\,z^2/2}\; e^{-i\sin(h\omega_z)\,p_z^2/(2\omega_z)}\; e^{-i\omega_z\tan(h\omega_z/2)\,z^2/2},$$

see [2].
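The exactness of such a three-factor splitting can be checked in the classical phase-space picture, where each factor is a $2\times2$ shear and their product must equal the harmonic-oscillator rotation. The following sketch (our own naming) uses explicit $\omega_z$-scalings of the coefficients, which reduce to the unscaled formula for $\omega_z = 1$:

```python
import numpy as np

def exact_ho_split(h, w):
    """Classical check of the splitting e^{-ihH_z} = V T V for
    H_z = p^2/2 + w^2 z^2/2: V is the flow of (w*tan(h*w/2)) z^2/2
    and T the flow of (sin(h*w)/w) p^2/2."""
    a = w * np.tan(h * w / 2)                # coefficient of z^2/2 (outer factors)
    b = np.sin(h * w) / w                    # coefficient of p^2/2 (middle factor)
    V = np.array([[1.0, 0.0], [-a, 1.0]])    # (z, p) -> (z, p - a z)
    T = np.array([[1.0, b], [0.0, 1.0]])     # (z, p) -> (z + b p, p)
    return V @ T @ V

def ho_rotation(h, w):
    """Exact classical flow of the harmonic oscillator over time h."""
    th = h * w
    return np.array([[np.cos(th), np.sin(th) / w], [-w * np.sin(th), np.cos(th)]])
```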
3. Numerical results. To numerically verify the proposed algorithm, we choose the Hamiltonian $H_L = H_A(t)$ from (1.1) with $\omega_x(t)^2 = 4(1 + \sin(t/2))$, $\omega_y(t)^2 = 4 - \sin(t/2)$ and $\Omega = 1/10$. The spatial domain is discretized with $128 \times 128$ grid points on $[-10, 10]^2$ and we integrate the normalized initial condition $\psi_0 \propto (x + iy)e^{-(x^2+y^2)/2}$ until the final time $T = 3$. For the time-averaging, we choose a fourth-order Magnus integrator that is in turn based on the fourth-order Gauss-Legendre quadrature,

$$\Theta^{[4]}_{t,t+h} = -\frac{ih}{2}\big(H_A(t_1) + H_A(t_2)\big) - \frac{\sqrt{3}\,h^2}{12}\big[H_A(t_2), H_A(t_1)\big],$$

where $t_j = t + hc_j$ with the standard Gauss nodes $c_{1,2} = (1 \mp 1/\sqrt{3})/2$. Evaluating the commutator leads to the averaged Hamiltonian $i\Theta^{[4]}_{t,t+h}$ in the algebra $\mathfrak{g}$. In a first experiment, we compare the split (1.2) against a standard symmetric approach of the form (1.4), denoted $\Psi^{[2]}_{t,t+h}$ (3.2), which uses the same number of FFTs per step. The results are shown in Fig. 1 and clearly show the correct order and high accuracy of the new method.

[Figure: first row, real (left) and imaginary (center) parts of the initial condition; second row, real (left) and imaginary (center) parts of the evolution at time $T = 3$. The imaginary part is shown from above and the colormaps are kept constant in each row, i.e., the same color corresponds to the same value.]

In a second experiment, the Hamiltonian $H_L$ is perturbed by a small cubic nonlinearity $g|\psi|^2$,
with $g = 1$, and the experimental setup is taken as above with $N_x = N_y = 256$ grid points. Instead of embedding our method within a higher-order splitting, we chose a simple Strang-type approach where the perturbation $H - H_L$ is appended on both sides of the method,

$$\varphi^{H - H_L}_{h/2} \circ \Psi \circ \varphi^{H - H_L}_{h/2},$$

where $\Psi = \Phi^{[4]}_{t,t+h}$ as in (1.3), or $\Psi = \Psi^{[2]}_{t,t+h}$ for the standard approach from (1.4), respectively. Of course, despite the fourth-order Magnus expansion, we only expect a second-order integrator, however, with much smaller error terms when compared to (3.2) due to the smallness of the perturbation. The results in Fig. 2 confirm the predicted behavior. At any given precision, roughly half the steps are needed by the new algorithm using the same number of FFTs. As the perturbation, in this case the nonlinearity, becomes weaker, the benefits increase - on the other hand, when the remainder cannot be considered a perturbation, the curves will eventually overlap.

4. Conclusions.
We have designed an efficient algorithm to integrate time-dependent rotating BECs using only Fourier transforms and an iterative solver for a small algebraic system. The method solves any quadratic Hamiltonian, such as, for example, rotating condensates subject to time-dependent harmonic trappings, up to the desired precision, and is thus particularly useful if the full Hamiltonian can be regarded as a perturbation thereof. The proof technique is based on the finiteness of the Lie algebra generated by the Hamiltonian, on which Magnus averaging has been performed. The quantum mechanical algebra has been identified with its classical mechanical counterpart, from which a matrix representation has been derived. The generalization of this technique to other quantum mechanical systems is the subject of future work.