Gaussian elimination is a systematic way to solve systems of linear equations in a finite number of steps. Iterative methods for solving linear systems require an infinite number of steps in theory, but may find solutions faster in practice.
Gaussian elimination tells you nothing about the final solution until it’s almost done. The first phase, factorization, takes O(n^3) steps, where n is the number of unknowns. This is followed by the back-substitution phase, which takes O(n^2) steps. The factorization phase tells you nothing about the solution; the back-substitution phase starts filling in the components of the solution one at a time. In applications n is often so large that the time required for back-substitution is negligible compared to factorization.
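To make the two phases concrete, here is a minimal sketch, assuming NumPy and SciPy are available; the 4×4 matrix and right-hand side are arbitrary illustrations, not taken from any particular application.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# An arbitrary example system Ax = b with n = 4 unknowns.
A = np.array([[4.0, 1.0, 0.0, 2.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 6.0, 1.0],
              [2.0, 0.0, 1.0, 7.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# Phase 1: factorization (LU with partial pivoting), O(n^3) work.
# At the end of this phase you still know nothing about x.
lu, piv = lu_factor(A)

# Phase 2: forward and back substitution, O(n^2) work.
# This is where the components of x get filled in.
x = lu_solve((lu, piv), b)

print(x, np.allclose(A @ x, b))
```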
Iterative methods start by taking a guess at the final solution. In some contexts, this guess may be fairly good. For example, when solving differential equations, the solution from one time step gives a good initial guess at the solution for the next time step. Similarly, in sequential Bayesian analysis the posterior distribution mode doesn’t move much as each observation arrives. Iterative methods can take advantage of a good starting guess while methods like Gaussian elimination cannot.
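As a sketch of how a warm start might look in code, here is a conjugate gradient solve that reuses the previous step’s solution as its initial guess. The diffusion-like matrix, the slowly drifting right-hand side, and the tolerance are all invented for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# A symmetric positive definite matrix, e.g. a 1-D Laplacian stencil.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

rng = np.random.default_rng(0)
b = rng.random(n)
x_prev = np.zeros(n)                         # no information before the first step
for step in range(5):
    b = b + 0.01 * rng.random(n)             # right-hand side drifts slowly between steps
    x, info = cg(A, b, x0=x_prev, atol=1e-8) # warm start from the previous solution
    x_prev = x                               # becomes the initial guess next time
```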
Iterative methods take an initial guess and repeatedly refine it, producing a sequence of approximations that converges to the exact solution. In theory, Gaussian elimination produces an exact answer in a finite number of steps, while iterative methods never produce an exact solution after any finite number of steps. But in actual computation with finite precision arithmetic, no method, iterative or not, ever produces an exact answer. The question is not which method is exact but which method produces an acceptably accurate answer first. Often the iterative method wins.
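Here is a bare-bones picture of “acceptably accurate first”: a Jacobi iteration that stops as soon as the residual falls below a tolerance. The test matrix (chosen diagonally dominant so the iteration converges) and the tolerance are arbitrary choices for the sketch.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Refine x0 until the residual ||Ax - b|| is acceptably small."""
    D = np.diag(A)              # diagonal of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    x = x0.copy()
    for _ in range(max_iter):
        x = (b - R @ x) / D     # one refinement step
        if np.linalg.norm(A @ x - b) < tol:
            break               # good enough; stop here
    return x

A = np.array([[10.0, 1.0, 2.0],
              [1.0, 12.0, 3.0],
              [2.0, 3.0, 15.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b, np.zeros(3))
```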
Successful projects often work like iterative numerical methods. They start with an approximate solution and iteratively refine it. All along the way they provide a useful approximation to the final product. Even if, in theory, there is a more direct approach to a final product, the iterative approach may work better in practice.
Algorithms iterate toward a solution because that approach may reach a sufficiently accurate result sooner. That may apply to people too, but more important for people is the psychological benefit of having something to show for themselves along the way. Also, iterative methods, whether for linear systems or human projects, are robust to changes in requirements because they can take advantage of progress made toward a slightly different goal.
The factored operator can be re-used.
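For example, something along these lines (a sketch using SciPy’s lu_factor/lu_solve; the matrix and right-hand sides are made up): pay the O(n^3) factorization cost once, and each new right-hand side afterwards costs only O(n^2).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 500
A = rng.random((n, n)) + n * np.eye(n)   # a well-conditioned example matrix

lu, piv = lu_factor(A)                   # factor once: O(n^3)

for _ in range(10):                      # reuse the factorization for many b's
    b = rng.random(n)
    x = lu_solve((lu, piv), b)           # each solve is only O(n^2)
```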
I like the visualization on the slide. Do you have a better version of this image?
Unfortunately no. And I don’t know the provenance of the photo either.
That’s a nice thought. But would you really buy a car that was built from a motorcycle, which was itself an improved bicycle that someone, somehow, improvised from a skateboard?
In my experience, evolutionary rapid prototyping does work great for nontrivial real-life projects, but only if the process is accompanied by fairly frequent refactoring phases, in which the existing functionality is rebuilt using a well-thought-out design based on the better understanding of the complexities and pitfalls gained during prototyping. It looks nothing like that Spotify ad, which I hope came from their marketing people and not from their engineers…
It works only if you can actually transform a bicycle into a car, and sometimes you just can’t.
This is a nice metaphor.
On the mathematics side, it’s interesting to note that Gaussian elimination can be used as an iterative algorithm too. I’ll say no more than to refer to the beautifully written SIAM News article Gaussian Elimination as an Iterative Algorithm by Townsend and Trefethen.
It looks like what algebraic multigrid does.
There is a similar dichotomy between primal and dual algorithms for optimization. Primal algorithms start by finding something you could do, then sequentially finding better things that you could do, until they (hopefully) find the best thing you could do. At every stage, the current best solution is feasible, but only the last one is optimal. If you stop in the middle, you might have a solution that is “good enough”.
Dual algorithms go the other way ’round. Every intermediate solution is an optimal solution to the wrong problem, but the difference between the problem solved and the problem you’re _trying_ to solve goes to zero. When you are solving the right problem, you are done.
The magic of duality theory is that this asymmetry is illusory.
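Sketched in standard LP notation (a textbook statement, not specific to any particular solver): a primal-feasible x and a dual-feasible y bracket the optimum, and the gap between them closing to zero certifies optimality.

```latex
% Primal and dual linear programs in standard form:
\[
\text{(P)}\quad \min_{Ax = b,\ x \ge 0} c^\top x
\qquad\qquad
\text{(D)}\quad \max_{A^\top y \le c} b^\top y
\]
% Weak duality: for any primal-feasible x and dual-feasible y,
\[
b^\top y \;\le\; c^\top x ,
\]
% so the duality gap c^T x - b^T y bounds how far either iterate is from optimal.
% Strong duality says the gap closes at the optimum, which is the sense in which
% the asymmetry between the primal and dual views is illusory.
```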