An Implicit Algorithm For Solving A Sparse Linear System

Esmond G. Ng
Lawrence Berkeley National Laboratory
One Cyclotron Road
Mail Stop 50F
Berkeley, CA 94720
U.S.A.
EGNg@lbl.gov

Direct methods for solving a sparse linear system are based on triangular factorizations of the coefficient matrix. Some of the zero entries in the coefficient matrix become nonzero during the factorization process, so the amount of storage required for the triangular factors is often larger than that for the coefficient matrix itself. This phenomenon, known as fill-in, becomes more pronounced as the coefficient matrix grows. As a result, iterative methods are often used for solving very large sparse linear systems, but their success depends on the spectral properties of the coefficient matrix. On the other hand, when storage is not an issue, direct methods provide the most robust approach for computing the solution of a sparse linear system.
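As a small illustration of fill-in (a sketch of our own, not taken from the talk), consider an "arrowhead" matrix whose only off-diagonal nonzeros lie in the first row and column; eliminating that row and column first fills in the entire Cholesky factor:

```python
import numpy as np

# Arrowhead matrix: nonzeros only on the diagonal and in the
# first row/column.  Diagonally dominant, hence positive definite.
n = 6
A = n * np.eye(n)
A[0, 1:] = 1.0
A[1:, 0] = 1.0

L = np.linalg.cholesky(A)            # lower triangular factor
nnz_A = np.count_nonzero(A)          # nonzeros in the matrix
nnz_L = np.count_nonzero(np.tril(L)) # nonzeros in the factor

# Eliminating the dense first row/column makes the trailing
# Schur complement, and hence the rest of the factor, dense.
print(nnz_A, nnz_L)  # 16 vs. 21: the factor is completely full
```

Reordering the same matrix so that the dense row and column come last would leave the factor as sparse as the original, which is why the ordering of the matrix plays such a large role in direct methods.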

In [1], Eisenstat, Schultz, and Sherman proposed a scheme for solving a sparse symmetric positive definite linear system in which only a small portion of the Cholesky factor is kept in core. The rest of the Cholesky factor is computed but discarded, so any discarded portion must be recomputed whenever it is needed. We will refer to the Eisenstat-Schultz-Sherman scheme as a fully implicit scheme. The idea of implicit solution also appeared in [2].

In this talk, we propose a generalization of the approach in [1]. In our scheme, certain diagonal blocks of the Cholesky factor are computed and stored, while the nonzero entries of the off-diagonal blocks are computed but discarded. As in [1], discarded nonzero entries are recomputed whenever they are needed. We refer to our scheme as a partially implicit scheme. It is a compromise between the explicit scheme (in which all the nonzero entries of the Cholesky factor are stored) and the fully implicit scheme, in terms of both storage requirements and execution time. Our partially implicit scheme is very similar to the approach described in [3,4,6] and in Chapter 6 of [5]. The major difference is that our scheme can work with any ordering of the matrix, whereas the approaches in [3,4,6] and Chapter 6 of [5] work only for certain orderings.
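The idea can be sketched on a symmetric positive definite matrix partitioned into 2x2 blocks (a hypothetical NumPy illustration of ours; the block sizes, names, and dense arithmetic are assumptions, not the authors' implementation): the diagonal factor blocks L11 and L22 are stored, while the off-diagonal block L21 is discarded after the factorization and recomputed from A21 and L11 during each solve.

```python
import numpy as np

# Build a small SPD test matrix and partition it into 2x2 blocks.
rng = np.random.default_rng(0)
n1, n2 = 3, 4
M = rng.standard_normal((n1 + n2, n1 + n2))
A = M @ M.T + (n1 + n2) * np.eye(n1 + n2)
A21, A22 = A[n1:, :n1], A[n1:, n1:]

# Stored: the diagonal factor blocks L11 and L22.
L11 = np.linalg.cholesky(A[:n1, :n1])

def recompute_L21():
    # L21 = A21 * L11^{-T}: discarded after factorization and
    # reconstructed on demand from the stored pieces.
    return np.linalg.solve(L11, A21.T).T

L21 = recompute_L21()
L22 = np.linalg.cholesky(A22 - L21 @ L21.T)
del L21  # the off-diagonal block is not kept

def solve(b):
    # Forward and backward block substitution with L = [[L11, 0], [L21, L22]],
    # recomputing L21 each time it is needed.
    b1, b2 = b[:n1], b[n1:]
    y1 = np.linalg.solve(L11, b1)
    y2 = np.linalg.solve(L22, b2 - recompute_L21() @ y1)
    x2 = np.linalg.solve(L22.T, y2)
    x1 = np.linalg.solve(L11.T, y1 - recompute_L21().T @ x2)
    return np.concatenate([x1, x2])

b = rng.standard_normal(n1 + n2)
x = solve(b)
print(np.allclose(A @ x, b))  # True
```

Each solve pays for the extra triangular solves that reconstruct L21, trading execution time for the storage of the off-diagonal block.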

In this talk, we will provide analyses and numerical experiments to demonstrate the performance of our scheme.

This is joint work with Alan George at the University of Waterloo.

References:

[1] S.C. Eisenstat, M.H. Schultz, and A.H. Sherman, "Software for sparse Gaussian elimination with limited core storage", in Sparse Matrix Proceedings 1978, eds. Iain S. Duff and G.W. Stewart, SIAM (1979), pp. 135-153.

[2] Alan George, "On block elimination for sparse linear systems", SIAM J. Numer. Anal. (11), pp. 585-603, 1974.

[3] Alan George, "An automatic one-way dissection algorithm for irregular finite element problems", SIAM J. Numer. Anal. (17), pp. 740-751, 1980.

[4] Alan George and Joseph Liu, "Algorithms for matrix partitioning and the numerical solution of finite element systems", SIAM J. Numer. Anal. (15), pp. 297-327, 1978.

[5] Alan George and Joseph Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall (1981).

[6] Esmond Ng, "On one-way dissection schemes", M.Math. Thesis, The University of Waterloo (1979).