CG -- Eigenvalues (Eigenvectors) and convergence

Applying a matrix to one of its eigenvectors <=> scaling that vector by the corresponding eigenvalue: Bv = λv

Iterative methods often depend on applying a matrix B to a vector over and over again.

When B is repeatedly applied to an eigenvector v, one of two things can happen: if |λ| < 1, then B^i v = λ^i v vanishes as i approaches infinity; if |λ| > 1, then B^i v grows without bound.

-- cited from *An Introduction to the Conjugate Gradient Method Without the Agonizing Pain*
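A small numerical sketch of this (the matrix values below are made up for illustration): repeatedly applying B to an eigenvector only rescales it by powers of the eigenvalue.

```python
import numpy as np

# Illustrative matrix with eigenvalues 0.5 and 2.0 (values are made up).
B = np.array([[0.5, 0.0],
              [0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(B)
v = eigenvectors[:, 0]          # eigenvector with eigenvalue 0.5
lam = eigenvalues[0]

w = v.copy()
for i in range(1, 6):
    w = B @ w                   # after i applications, w = B^i v
    assert np.allclose(w, lam**i * v)   # B^i v = lam^i v
    print(i, np.linalg.norm(w))         # shrinks, since |lam| < 1
```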

Any vector can be written as a sum of eigenvectors if the matrix B is symmetric

If B is symmetric (and often if it is not), then there exists a set of n linearly independent eigenvectors of B, denoted v1, v2, ..., vn.

Thus, one can examine the effect of B on each eigenvector separately.
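A quick sketch of this idea, again with an arbitrary symmetric matrix: express a vector in the eigenbasis, and applying B scales each eigenvector component by its eigenvalue.

```python
import numpy as np

# An arbitrary symmetric matrix (values chosen for illustration).
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, V = np.linalg.eigh(B)     # eigh: eigendecomposition for symmetric matrices

x = np.array([3.0, -1.0])
coeffs = V.T @ x                # coordinates of x in the eigenbasis (V is orthogonal)

# Applying B scales each eigenvector component by its eigenvalue.
Bx_via_eigen = V @ (lams * coeffs)
assert np.allclose(Bx_via_eigen, B @ x)
```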

Example: Jacobi iterations

The matrix A is split into two parts: A = D + E, where D is the diagonal part of A and E contains the remaining off-diagonal elements. The Jacobi method then iterates x_(i+1) = B x_(i) + z, where B = -D^(-1) E and z = D^(-1) b.

Suppose we start with some arbitrary vector x_(0). For each iteration, we apply B to this vector, then add z to the result.
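A minimal sketch of the Jacobi iteration, assuming a small diagonally dominant A and a b chosen for illustration:

```python
import numpy as np

# Illustrative system; A is diagonally dominant, so Jacobi converges here.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

D = np.diag(np.diag(A))         # diagonal part of A
E = A - D                       # off-diagonal part, so A = D + E

B = -np.linalg.inv(D) @ E       # iteration matrix B = -D^(-1) E
z = np.linalg.solve(D, b)       # z = D^(-1) b

x = np.zeros_like(b)            # arbitrary starting vector x_(0)
for _ in range(50):
    x = B @ x + z               # x_(i+1) = B x_(i) + z

print(x, np.linalg.solve(A, b))  # iterate approaches the true solution
```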

Spectral radius ρ of a matrix

ρ(B) = max |λi|, where λi is an eigenvalue of B

Each iteration multiplies the error term e_(i) = x_(i) - x by B, so e_(i) = B^i e_(0). Thus, if ρ(B) < 1, the error term e_(i) will converge to zero as i approaches infinity. Hence, we have a ==guarantee of convergence== regardless of the initial vector x_(0).
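To tie this together, a sketch that computes ρ(B) for the illustrative Jacobi example above and watches the error norm shrink by roughly a factor of ρ(B) per iteration:

```python
import numpy as np

# Same illustrative system as in the Jacobi sketch above.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
D = np.diag(np.diag(A))
B = -np.linalg.inv(D) @ (A - D)         # B = -D^(-1) E
z = np.linalg.solve(D, b)

rho = max(abs(np.linalg.eigvals(B)))    # spectral radius rho(B)
print("rho(B) =", rho)                  # < 1, so convergence is guaranteed

x_exact = np.linalg.solve(A, b)
x = np.array([10.0, -7.0])              # arbitrary x_(0)
for i in range(10):
    x = B @ x + z
    print(i, np.linalg.norm(x - x_exact))  # error e_(i) decays like rho(B)^i
```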