$$
\newcommand{\RR}{\mathbb{R}}
\newcommand{\GG}{\mathbb{G}}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\PS}{\mathcal{P}}
\newcommand{\SS}{\mathbb{S}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\CC}{\mathbb{C}}
\newcommand{\HH}{\mathbb{H}}
\newcommand{\ones}{\mathbb{1\hspace{-0.4em}1}}
\newcommand{\alg}[1]{\mathfrak{#1}}
\newcommand{\mat}[1]{ \begin{pmatrix} #1 \end{pmatrix} }
\renewcommand{\bar}{\overline}
\renewcommand{\hat}{\widehat}
\renewcommand{\tilde}{\widetilde}
\newcommand{\inv}[1]{ {#1}^{-1} }
\newcommand{\eqdef}{\overset{\text{def}}=}
\newcommand{\block}[1]{\left(#1\right)}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\trace}[1]{\mathrm{tr}\block{#1}}
\newcommand{\norm}[1]{ \left\| #1 \right\| }
\newcommand{\argmin}[1]{ \underset{#1}{\mathrm{argmin}} }
\newcommand{\argmax}[1]{ \underset{#1}{\mathrm{argmax}} }
\newcommand{\st}{\ \mathrm{s.t.}\ }
\newcommand{\sign}[1]{\mathrm{sign}\block{#1}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\inner}[1]{\langle #1 \rangle}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ddd}[2]{\frac{\partial #1}{\partial #2} }
\newcommand{\db}{\dd^b}
\newcommand{\ds}{\dd^s}
\newcommand{\dL}{\dd_L}
\newcommand{\dR}{\dd_R}
\newcommand{\Ad}{\mathrm{Ad}}
\newcommand{\ad}{\mathrm{ad}}
\newcommand{\LL}{\mathcal{L}}
\newcommand{\Krylov}{\mathcal{K}}
\newcommand{\Span}[1]{\mathrm{Span}\block{#1}}
\newcommand{\diag}{\mathrm{diag}}
\newcommand{\tr}{\mathrm{tr}}
\newcommand{\sinc}{\mathrm{sinc}}
\newcommand{\cat}[1]{\mathcal{#1}}
\newcommand{\Ob}[1]{\mathrm{Ob}\block{\cat{#1}}}
\newcommand{\Hom}[1]{\mathrm{Hom}\block{\cat{#1}}}
\newcommand{\op}[1]{\cat{#1}^{op}}
\renewcommand{\hom}[2]{\cat{#1}\block{#2}}
\newcommand{\id}{\mathrm{id}}
\newcommand{\Set}{\mathbb{Set}}
\newcommand{\Cat}{\mathbb{Cat}}
\newcommand{\Hask}{\mathbb{Hask}}
\renewcommand{\lim}{\mathrm{lim}\ }
\newcommand{\funcat}[1]{\left[\cat{#1}\right]}
\newcommand{\natsq}[6]{
\begin{matrix}
& #2\block{#4} & \overset{#2\block{#6}}\longrightarrow & #2\block{#5} & \\
{#1}_{#4} \hspace{-1.5em} &\downarrow & & \downarrow & \hspace{-1.5em} {#1}_{#5}\\
& #3\block{#4} & \underset{#3\block{#6}}\longrightarrow & #3\block{#5} & \\
\end{matrix}
}
\newcommand{\comtri}[6]{
\begin{matrix}
#1 & \overset{#4}\longrightarrow & #2 & \\
#6 \hspace{-1em} & \searrow & \downarrow & \hspace{-1em} #5 \\
& & #3 &
\end{matrix}
}
\newcommand{\natism}[6]{
\begin{matrix}
& #2\block{#4} & \overset{#2\block{#6}}\longrightarrow & #2\block{#5} & \\
{#1}_{#4} \hspace{-1.5em} &\downarrow \uparrow & & \downarrow \uparrow & \hspace{-1.5em} {#1}_{#5}\\
& #3\block{#4} & \underset{#3\block{#6}}\longrightarrow & #3\block{#5} & \\
\end{matrix}
}
\newcommand{\cone}[1]{\mathcal{#1}}
$$
Iterative Methods
Consider the following iteration for solving \(Mx + q = 0\):
\[x_{k+1} = x_k - \inv{P}\block{Mx_k +q}\]
We look for conditions on \(P\) such that the iteration matrix \(B =
I - \inv{P}M\) is convergent, that is:
\[\rho(B) < 1\]
A matrix \(B\) is convergent when \(B^k \to 0\) as \(k \to \infty\), which holds exactly when \(\rho(B) < 1\). Since the error \(e_k = x_k - x^\star\) with respect to the solution \(x^\star\) satisfies \(e_{k+1} = B e_k\), this guarantees convergence from any starting point. See:
- https://en.wikipedia.org/wiki/Convergent_matrix
- https://en.wikipedia.org/wiki/Spectral_radius#Power_sequence
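As a concrete illustration, here is a minimal NumPy sketch of the iteration; the system \(M\), \(q\) and the choice of \(P\) below are made-up examples, not taken from the text.

```python
import numpy as np

# Illustrative positive definite system (made-up numbers).
M = np.array([[4., 1., 0.],
              [1., 4., 1.],
              [0., 1., 4.]])
q = np.array([1., -2., 3.])
P = np.diag(np.diag(M))   # one simple choice of P (see the Jacobi section below)

# Iteration matrix B = I - P^{-1} M and its spectral radius rho(B)
B = np.eye(3) - np.linalg.solve(P, M)
print(max(abs(np.linalg.eigvals(B))))       # ~0.354 < 1: convergent

x = np.zeros(3)
for _ in range(100):
    x = x - np.linalg.solve(P, M @ x + q)   # x_{k+1} = x_k - P^{-1}(M x_k + q)

print(np.allclose(M @ x + q, 0))            # True: x solves Mx + q = 0
```

In practice \(P\) is chosen so that systems of the form \(Pz = r\) are cheap to solve, e.g. diagonal or triangular.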
We start by splitting \(M = P - N\), so that \(B = I - \inv{P}M =
\inv{P}\block{P - M} = \inv{P}N\). Now let \(\lambda\) be an
eigenvalue of \(B\) with corresponding eigenvector \(x\), that is:
\[\inv{P}Nx = \lambda x\]
Multiplying both sides by \(P\) on the left and using \(N = P - M\), this is equivalent to:
\[\block{P - M}x = \lambda P x\]
We now look for sufficient conditions for:
\[|\lambda| < 1\]
Case 1: \(M\) is positive definite
In this case \(x^*Mx > 0\), where we take \(M\) symmetric so that
\(x^*Mx\) is real even for a complex eigenvector \(x\), and we may
normalize \(x\) so that \(x^*Mx = 1\). Multiplying the previous
equation on the left by \(x^*\) gives:
\[x^*\block{P - M}x = \lambda x^*P x\]
We let \(x^*Px = a + ib\) and, using \(x^*Mx = 1\), rewrite the above as:
\[a + ib - 1 = \lambda \block{a + ib}\]
Taking squared modulus on both sides gives:
\[(a - 1)^2 + b^2 = |\lambda|^2 \block{a^2 + b^2}\]
That is:
\[a^2 + b^2 + 1 - 2a = |\lambda|^2 \block{a^2 + b^2}\]
Note that \(x^*Px \neq 0\): otherwise \(a + ib - 1 = \lambda\block{a + ib}\)
would give \(-1 = 0\). The convergence condition \(|\lambda| < 1\)
therefore reduces to \(1 - 2a < 0\), i.e. \(x^*Mx = 1 < 2a\). Since
\(P\) is real, \(\bar{x^*Px} = x^*P^Tx\), and the condition rewrites
as:
\[x^*Mx < x^*Px + \bar{x^*Px} = x^*\block{P + P^T} x\]
A sufficient condition is then:
\[P + P^T - M > 0\]
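This can be checked numerically; a small sketch, assuming NumPy, with randomly generated matrices built so that the condition holds:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)   # a positive definite M

# A nonsymmetric P built so that P + P^T - M = 2I + R + R^T remains
# positive definite (R is a small strictly upper triangular perturbation).
R = 0.1 * np.triu(rng.standard_normal((n, n)), 1)
P = M / 2 + np.eye(n) + R

print(np.all(np.linalg.eigvalsh(P + P.T - M) > 0))   # True: condition holds
B = np.eye(n) - np.linalg.solve(P, M)
print(max(abs(np.linalg.eigvals(B))) < 1)            # True: rho(B) < 1
```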
Jacobi Iteration
The Jacobi iteration uses the diagonal part of \(M\) as the
preconditioner \(P\). Since \(P\) is diagonal, \(P + P^T = 2P\), and
for positive definite \(M\) the sufficient condition above reduces to:
\[2P > M\]
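A quick numerical check of this condition on an illustrative matrix, again assuming NumPy:

```python
import numpy as np

# A positive definite M for which 2P - M is also positive definite.
M = np.array([[4., 1., 1.],
              [1., 3., 0.],
              [1., 0., 2.]])
P = np.diag(np.diag(M))   # Jacobi preconditioner: the diagonal of M

print(np.all(np.linalg.eigvalsh(2 * P - M) > 0))   # True: 2P > M
B = np.eye(3) - np.linalg.solve(P, M)
print(max(abs(np.linalg.eigvals(B))) < 1)          # True: Jacobi converges
```

For symmetric positive definite \(M\) the condition \(2P > M\) is in fact also necessary, since \(B\) is similar to the symmetric matrix \(I - P^{-1/2}MP^{-1/2}\), whose eigenvalues lie in \((-1, 1)\) exactly when \(2P - M > 0\).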
Gauss-Seidel Iteration
Given a positive definite matrix \(M = L + D + L^T\), where \(D\) is
diagonal and \(L\) is strictly lower triangular, the Gauss-Seidel
preconditioner is \(P = L + D\) and the sufficient convergence
condition becomes:
\[P + P^T - M = L + 2D + L^T - M = \block{M + D} - M = D > 0\]
which always holds, since the diagonal entries of a positive definite
matrix are positive.
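In code, \(P\) is just the lower triangle of \(M\) (including the diagonal); a minimal sketch, assuming NumPy, with a random positive definite example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

A = rng.standard_normal((n, n))
M = A @ A.T + 0.1 * np.eye(n)   # positive definite by construction

P = np.tril(M)                  # Gauss-Seidel preconditioner P = L + D
B = np.eye(n) - np.linalg.solve(P, M)
print(max(abs(np.linalg.eigvals(B))) < 1)   # True for any positive definite M
```

Unlike Jacobi, no condition beyond positive definiteness of \(M\) is needed: in particular, diagonal dominance is not required.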