JacobiIter, SORForwIter, SORBackwIter, SSORIter, ChebyshevIter, CGIter, CGNIter, GMRESIter, BiCGIter, QMRIter, CGSIter, BiCGSTABIter, SetGMRESRestart -- classical iterative, semi-iterative, CG, and CG-like solvers
#include <laspack/itersolv.h>

typedef Vector *(*IterProcType)(QMatrix *, Vector *, Vector *, int,
                                PrecondProcType, double);

Vector *JacobiIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                   PrecondProcType Dummy, double Omega);
Vector *SORForwIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                    PrecondProcType Dummy, double Omega);
Vector *SORBackwIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                     PrecondProcType Dummy, double Omega);
Vector *SSORIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                 PrecondProcType Dummy, double Omega);
Vector *ChebyshevIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                      PrecondProcType PrecondProc, double OmegaPrecond);
Vector *CGIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
               PrecondProcType PrecondProc, double OmegaPrecond);
Vector *CGNIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                PrecondProcType PrecondProc, double OmegaPrecond);
Vector *GMRESIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                  PrecondProcType PrecondProc, double OmegaPrecond);
Vector *BiCGIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                 PrecondProcType PrecondProc, double OmegaPrecond);
Vector *QMRIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                PrecondProcType PrecondProc, double OmegaPrecond);
Vector *CGSIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                PrecondProcType PrecondProc, double OmegaPrecond);
Vector *BiCGSTABIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                     PrecondProcType PrecondProc, double OmegaPrecond);
Vector *SetGMRESRestart(int Steps);
A x = b
by the Jacobi and the successive over-relaxation (SOR) method, respectively. For the SOR method, lexicographically forward, lexicographically backward, and symmetric (SSOR) variants are implemented. The parameter Omega is used as the relaxation parameter. The unused parameter Dummy keeps the procedure prototype consistent with the other solvers and should be set to NULL.
If a procedure for preconditioning the system of equations is specified by the parameter PrecondProc (i.e. if PrecondProc != NULL), the corresponding preconditioned variant of the method is applied. The procedure used should have the prototype
Vector *(*PrecondProcType)(QMatrix *, Vector *, Vector *, double)

The parameter OmegaPrecond will be passed as the relaxation parameter to the preconditioner.
All procedures terminate either after MaxIter iteration steps or when the residual satisfies the condition
|| r ||_2 = || b - A x ||_2 <= eps || b ||_2,
where eps is the accuracy defined by LASPack termination control (module RTC).
For systems with singular matrices, all solvers orthogonalize the solution vector against the null space of the matrix (which has to be specified, cf. Q_SetKer) to ensure a unique solution as well as convergence in the resulting subspace.
The algorithms used are based on:
R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, Ch. Romine, H. van der Vorst: Templates for the Solution of Linear Systems: Building Blocks for Iterative Solvers, SIAM, Philadelphia, 1994.
For the theoretical foundation of the above iterative methods, see the references therein or e.g.:
W. Hackbusch: Iterative Solution of Large Sparse Systems of Equations, Springer-Verlag, Berlin, 1994.
itersolv.h ... header file
itersolv.c ... source file
The implementation of all of the above iterative solvers is based on matrix-vector operations, especially the procedures Mul_QV and MulInv_QV from the module OPERATS. This approach allows the algorithms to be formulated independently of the matrix storage format. Moreover, since the residuals have to be computed for convergence control in any case, the computational costs are comparable to those of point-wise solvers. Numerical experiments showed differences of about 10 %, depending on the computer architecture. The following example demonstrates how e.g. the Jacobi method is implemented in LASPack.
Vector *JacobiIter(QMatrix *A, Vector *x, Vector *b, int MaxIter,
                   PrecondProcType Dummy, double Omega)
{
    int Iter;
    double bNorm;
    size_t Dim;
    Vector r;

    Q_Lock(A);
    V_Lock(x);
    V_Lock(b);

    Dim = Q_GetDim(A);
    V_Constr(&r, "r", Dim, Normal, True);

    if (LASResult() == LASOK) {
        bNorm = l2Norm_V(b);

        Iter = 0;
        /* r = b - A * x(i) */
        if (!IsZero(l1Norm_V(x) / Dim)) {
            if (Q_KerDefined(A))
                OrthoRightKer_VQ(x, A);
            Asgn_VV(&r, Sub_VV(b, Mul_QV(A, x)));
        } else {
            Asgn_VV(&r, b);
        }
        while (!RTCResult(Iter, l2Norm_V(&r), bNorm, JacobiIterId)
               && Iter < MaxIter) {
            Iter++;
            /* x(i+1) = x(i) + Omega * D^(-1) * r */
            AddAsgn_VV(x, Mul_SV(Omega, MulInv_QV(Diag_Q(A), &r)));
            if (Q_KerDefined(A))
                OrthoRightKer_VQ(x, A);
            /* r = b - A * x(i) */
            if (Iter < MaxIter)
                Asgn_VV(&r, Sub_VV(b, Mul_QV(A, x)));
        }
    }

    V_Destr(&r);

    Q_Unlock(A);
    V_Unlock(x);
    V_Unlock(b);

    return(x);
}
The following code fragment shows how the system of linear equations A x = b can be solved by the conjugate gradient method with SSOR preconditioning. The solution process terminates either when an accuracy of 1e-5 is reached or, at the latest, after 100 iterations. The relaxation parameter for the SSOR preconditioner is set to 1.2.
QMatrix A;
Vector x, b;
size_t Dim;

Dim = ... ;
Q_Constr(&A, "A", Dim, True, Rowws, Normal, True);
V_Constr(&x, "x", Dim, Normal, True);
V_Constr(&b, "b", Dim, Normal, True);

/* generation of the matrix A and the right hand side vector b */
...

/***/

SetRTCAccuracy(1e-5);
V_SetAllCmp(&x, 0.0);
CGIter(&A, &x, &b, 100, SSORPrecond, 1.2);

/* output of the solution vector x */
...

Q_Destr(&A);
V_Destr(&x);
V_Destr(&b);
If the matrix A is singular and symmetric and its null space consists, e.g., of vectors with equal components, the code should be extended at the point marked by /***/ with the following two statements:
V_SetAllCmp(&x, 1.0);
Q_SetKer(&A, &x, NULL);
qmatrix(3LAS), vector(3LAS), precond(3LAS), rtc(3LAS), operats(3LAS), errhandl(3LAS)