Solve $f(x) = 0$. $f$ is once differentiable.
1. $f(x) = 0$
2. $f(x) + J_f(x)(x - x) = 0$
$J_f(x)$ is the Jacobian of $f$ evaluated at $x$. The added term is identically zero, so step two merely rewrites $f(x)$ as its own first-order Taylor expansion about $x$.
3. $f(x_t) + J_f(x_t)(x_{t+1} - x_t) = 0$
4. $J_f(x_t)(x_{t+1} - x_t) = -f(x_t)$
Finding the step $x_{t+1} - x_t$ involves solving the system of linear equations $J_f(x_t)(x_{t+1} - x_t) = -f(x_t)$. This iterative scheme for solving nonlinear systems of equations is Newton's method.
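As a sanity check, here is a minimal NumPy sketch of the scheme above. The names `newton`, `f`, and `J`, the test problem, and the tolerance-based stopping rule are my own illustrative choices, not anything fixed by the derivation.

```python
import numpy as np

def newton(f, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0.

    Each step solves J_f(x_t)(x_{t+1} - x_t) = -f(x_t)
    for the step, then updates x_{t+1} = x_t + step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # The linear solve from step 4 of the derivation.
        step = np.linalg.solve(jacobian(x), -fx)
        x = x + step
    return x

# Toy system: x0^2 + x1^2 = 1 and x0 = x1.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton(f, J, x0=[1.0, 0.5]))  # approx [0.7071, 0.7071]
```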
The Jacobi method is also an iterative method, but it's for solving diagonally dominant systems of linear equations.
Solve $Ax = b$. $A$ is diagonally dominant.
1. $Ax = b$
2. $(D + L + U)x = b$
$D$ is the diagonal of $A$. $L$ is the strictly lower triangular portion of $A$. $U$ is the strictly upper triangular portion of $A$.
3. $Dx_{t+1} = b - Lx_t - Ux_t$
4. $x_{t+1} = D^{-1}(b - (L + U)x_t)$
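And a matching sketch of the Jacobi update, under the same caveats: `jacobi` and the example system are illustrative, and $A$ is assumed strictly diagonally dominant so the iteration converges.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{t+1} = D^{-1}(b - (L + U) x_t)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)                # diagonal entries of A
    R = A - np.diagflat(D)        # L + U: A with its diagonal zeroed out
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = (b - R @ x) / D  # elementwise division applies D^{-1}
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])  # strictly diagonally dominant
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))              # matches np.linalg.solve(A, b)
```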
Notice the similarity between steps two and three in the derivations of Jacobi's method and Newton's method. Step two exploits an expansion/decomposition, and then a "leap" is made to go from a system in the variable $x$ to a system in $x_{t+1}$ and $x_t$.
But surely, one can't just use any decomposition one likes. The iterative method has to be consistent (its fixed points must solve the original system) and convergent (the iterates must actually approach a fixed point).
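For splitting-based iterations like Jacobi's, the standard convergence criterion is that the spectral radius of the iteration matrix $D^{-1}(L + U)$ be less than one. Here is a minimal sketch of that check; the helper name and example matrices are hypothetical.

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix D^{-1}(L + U).

    The Jacobi iteration converges from every starting point
    if and only if this value is less than 1.
    """
    D_inv = np.diag(1.0 / np.diag(A))
    R = A - np.diagflat(np.diag(A))  # L + U
    return max(abs(np.linalg.eigvals(D_inv @ R)))

good = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant
bad  = np.array([[1.0, 3.0], [4.0, 1.0]])  # not diagonally dominant
print(jacobi_spectral_radius(good))  # < 1: Jacobi converges
print(jacobi_spectral_radius(bad))   # > 1: Jacobi diverges
```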
In the next post, I will prove that this is the case for both methods, and then go back to these derivations to recover the train of thought that led to performing steps two and three in the first place. This will let me contrive whatever iterative methods I'd like. That is very important, because many systems of equations and optimization programs exhibit lots of structure that can be exploited.