
The Gauss-Seidel Method

Any implementation of the Jacobi Method above will involve a loop over the matrix elements in each column of $\bss{X}^{n+1}$. Instead of calculating a completely new matrix $\bss{X}^{n+1}$ at each iteration and then replacing $\bss{X}^n$ with it before the next iteration, as in the above code, it might seem sensible to replace the elements of $\bss{X}^n$ with those of $\bss{X}^{n+1}$ as soon as they have been calculated. Naïvely we would expect faster convergence. This is equivalent to rewriting the Jacobi equation as
\begin{displaymath}
\bss{X}^{n+1} = \left(\bss{D} + \bss{L}\right)^{-1}\left[\bss{B} - \bss{U}\bss{X}^n\right].
\end{displaymath} (3.15)
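Written out row by row, for a single column $\bss{x}$ of $\bss{X}$ and the corresponding column $\bss{b}$ of $\bss{B}$ (the element notation $a_{ij}$, $b_i$, $x_i$ is introduced here purely for illustration), equation (3.15) is just forward substitution through the lower triangular matrix $\bss{D}+\bss{L}$:
\begin{displaymath}
x_i^{n+1} = \frac{1}{a_{ii}}\left[\, b_i - \sum_{j<i} a_{ij}\,x_j^{n+1}
 - \sum_{j>i} a_{ij}\,x_j^{n} \,\right],
\end{displaymath}
i.e. the Jacobi update, but with the elements $x_j^{n+1}$, $j<i$, already computed during the current sweep used in place of the old ones.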

This Gauss-Seidel method converges faster than the Jacobi method, but only marginally so. As before we consider the example of the solution of the 1D Poisson equation. As a C program the basic structure might be something like
      int i;
      const int N = ??;              // incomplete code: number of unknowns
      double x[N+2], b[N+2];         // x[0] and x[N+1] hold the boundary values
      while ( ... )                  // incomplete code: convergence test
      {
         for ( i = 1; i <= N; i++ )  // sweep through the grid, overwriting x in place
            x[i] = (b[i] - x[i-1] - x[i+1]) * 0.5;
      }
Note that only one array is now required to store $\bss{X}$, whereas the Jacobi method needed two. The time required for each iteration is proportional to $N$ for each column of $\bss{B}$, assuming $\bss{A}$ is sparse. The number of iterations required depends on the details of the problem, on the quality of the initial guess for $\bss{X}$, and on the accuracy required of the solution.
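To make the missing pieces concrete, the following is a minimal self-contained sketch of the same scheme for the 1D problem. It assumes zero boundary values stored in x[0] and x[N+1] and a simple stopping test based on the largest change in any element during a sweep; the values of N and TOL and the choice of right-hand side are purely illustrative.
      #include <stdio.h>
      #include <math.h>

      #define N   50        // number of unknowns (illustrative value)
      #define TOL 1.0e-8    // stop when no element changes by more than this

      int main(void)
      {
         double x[N+2] = {0.0};   // initial guess zero; x[0], x[N+1] hold boundary values
         double b[N+2];
         double change;
         int i, iter = 0;

         for ( i = 1; i <= N; i++ )    // illustrative right-hand side
            b[i] = 1.0;

         do                            // one Gauss-Seidel sweep per pass
         {
            change = 0.0;
            for ( i = 1; i <= N; i++ )
            {
               double xnew = (b[i] - x[i-1] - x[i+1]) * 0.5;
               if ( fabs(xnew - x[i]) > change )
                  change = fabs(xnew - x[i]);
               x[i] = xnew;            // overwrite immediately: this is Gauss-Seidel
            }
            iter++;
         } while ( change > TOL );

         printf("Converged after %d sweeps; x[N/2] = %g\n", iter, x[N/2]);
         return 0;
      }
The do ... while loop plays the role of the incomplete while ( ... ) above; in practice one might instead stop when the residual $\bss{B} - \bss{A}\bss{X}$ is sufficiently small.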