Next: The Runge-Kutta Method Up: Ordinary Differential Equations Previous: Application to Vector Equations


The Leap-Frog Method

How can we improve on the Euler method? The most obvious way would be to replace the forward difference in (1.12) with a centred difference (1.13) to get the formula
\begin{displaymath}
\bi{y}_{n+1} = \bi{y}_{n-1} + 2 \delta t
\mathop{\bi{f}}(\bi{y}_n, t_n).
\end{displaymath} (1.28)
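The centred-difference update is easy to code directly; below is a minimal sketch in Python (the function name `leapfrog`, the single Euler step used to generate the second starting value, and the scalar form are our own choices, not from the text):

```python
import numpy as np

def leapfrog(f, y0, t0, dt, nsteps):
    """Integrate dy/dt = f(y, t) by the centred-difference (leap-frog)
    update y[n+1] = y[n-1] + 2*dt*f(y[n], t[n])."""
    y = np.empty(nsteps + 1)
    y[0] = y0
    # The scheme needs two starting values; here a single Euler step
    # (a common but arbitrary choice) supplies the second one.
    y[1] = y0 + dt * f(y0, t0)
    for n in range(1, nsteps):
        y[n + 1] = y[n - 1] + 2.0 * dt * f(y[n], t0 + n * dt)
    return y
```

For example, with $f = 1$ and $y(0) = 0$ the sketch reproduces $y = t$ exactly, since both the Euler bootstrap and the centred difference are exact for this case.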

If we expand both $\bi{y}_{n+1}$ and $\bi{y}_{n-1}$ as before, (1.28) becomes
\begin{displaymath}
y_n + \delta t \left.{\d y\over\d t}\right\vert_n
+ {\delta t^2\over 2} \left.{\d^2 y\over\d t^2}\right\vert_n
+ {\delta t^3\over 6} \left.{\d^3 y\over\d t^3}\right\vert_n + \ldots
= y_n - \delta t \left.{\d y\over\d t}\right\vert_n
+ {\delta t^2\over 2} \left.{\d^2 y\over\d t^2}\right\vert_n
- {\delta t^3\over 6} \left.{\d^3 y\over\d t^3}\right\vert_n
+ \ldots + 2\delta t f_n
\end{displaymath} (1.29)
and, substituting $f_n = \left.\d y/\d t\right\vert_n$, the right-hand side is
\begin{displaymath}
y_n + \delta t \left.{\d y\over\d t}\right\vert_n
+ {\delta t^2\over 2} \left.{\d^2 y\over\d t^2}\right\vert_n
- {\delta t^3\over 6} \left.{\d^3 y\over\d t^3}\right\vert_n
+ \ldots
\end{displaymath} (1.30)

from which all terms up to $\delta t^2$ cancel: the left-hand side of (1.29) first differs from (1.30) at order $\delta t^3$, so the method is clearly 2nd order accurate. Note in passing that when using (1.28) two consecutive values are required in order to calculate the next one: $y_{n-1}$ and $y_n$ are required to calculate $y_{n+1}$. Hence 2 boundary conditions are required, even though (1.28) was derived from a 1st order differential equation. This so-called leap-frog method is more accurate than the Euler method, but is it stable? Repeating the same analysis as for the Euler method we again obtain a linear equation for $\delta y_n$
\begin{displaymath}
\delta y_{n+1} = \delta y_{n-1}
+ 2 \delta t \left.{\partial f\over\partial y}\right\vert_n \delta y_n.
\end{displaymath} (1.31)

We analyse this equation by writing $\delta y_{n} = g \delta y_{n-1}$ and $\delta y_{n+1} = g^2 \delta y_{n-1}$ to obtain
\begin{displaymath}
g^2 = 1 + 2 \delta t \left.{\partial f\over\partial y}\right\vert_n g
\end{displaymath} (1.32)

which has the solutions
\begin{displaymath}
g = \delta t {\partial f\over\partial y}
\pm\sqrt{\left(\delta t{\partial f\over\partial y}\right)^2 + 1}.
\end{displaymath} (1.33)

The product of the 2 solutions is equal to the constant term in the quadratic equation, i.e. $-1$. Hence, unless both solutions lie exactly on the unit circle, one of them always has magnitude $>1$. Since for a small random error it is impossible to guarantee that there will be no contribution from the solution with $\vert g\vert>1$, that contribution will tend to dominate as the equation is iterated. Hence the method is generally unstable. There is an important exception to this instability: when the partial derivative is purely imaginary (but not when it has some general complex value), the quantity under the square root in (1.33) can be negative, so that both $g$'s have modulus unity. Hence, for the case of oscillation (1.1c), where $\partial f/\partial y =
\pm\i\omega$, the algorithm is just stable, as long as
\begin{displaymath}
\delta t \le 1/\omega.
\end{displaymath} (1.34)
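These conclusions are easy to check numerically from (1.33); a short sketch (the sample values $\delta t\,\partial f/\partial y = -0.1$ for decay and $0.5\i$ for oscillation are our own choices):

```python
import cmath

def growth_factors(a):
    """The two amplification factors g = a +/- sqrt(a^2 + 1)
    given by (1.33), where a = dt * (df/dy)."""
    s = cmath.sqrt(a * a + 1.0)
    return a + s, a - s

# Decay equation, df/dy = -1 with dt = 0.1: a is real and negative.
g1, g2 = growth_factors(-0.1)
# Their product is -1, so max(|g1|, |g2|) > 1: the parasitic
# solution grows and the method is unstable.

# Oscillation, df/dy = i*omega with omega*dt = 0.5 <= 1: a is imaginary.
h1, h2 = growth_factors(0.5j)
# Here |h1| = |h2| = 1: both solutions stay bounded, so the
# method is (just) stable.
```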

The stability properties of the leap-frog method are summarised below:

Decay      Growth     Oscillation
unstable   unstable   stable for $\delta t \le 1/\omega$
Again the growth equation should be analysed somewhat differently.
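The 2nd order accuracy derived above can also be confirmed numerically: halving $\delta t$ should reduce the global error by roughly a factor of 4. A minimal sketch (the test problem $\d y/\d t = \cos t$ integrated to $t = 1$, and the step sizes, are our own choices):

```python
import math

def leapfrog_error(dt):
    """Global error at t = 1 when integrating dy/dt = cos(t), y(0) = 0
    (exact solution sin(t)) with the leap-frog update."""
    n = int(round(1.0 / dt))
    y_prev, y = 0.0, math.sin(dt)   # exact value as the second starting value
    for k in range(1, n):
        y_prev, y = y, y_prev + 2.0 * dt * math.cos(k * dt)
    return abs(y - math.sin(1.0))

e1 = leapfrog_error(0.01)
e2 = leapfrog_error(0.005)
ratio = e1 / e2   # close to 4 for a 2nd order method
```

Note that $\partial f/\partial y = 0$ for this test problem, so the instability discussed above does not contaminate the result.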