Consider the following system of equations:

    sum_{j=1}^{n} A_{ij} x_{j} = b_{i},   i = 1, ..., n    (5.1)
When the process is finished we get the following new system, in which each row has a unit diagonal element and zeros below it:

    x_{i} + sum_{j=i+1}^{n} A_{ij} x_{j} = b_{i},   i = 1, ..., n    (5.2)

(here A_{ij} and b_{i} denote the transformed coefficients, not the original ones).
The coefficients b_{i} are quite different now, but
the system can be solved trivially by back-substitution. And so, we get:
    x_{n} = b_{n}    (5.3)

and, working upward from the last row,

    x_{i} = b_{i} - sum_{j=i+1}^{n} A_{ij} x_{j},   i = n-1, ..., 1    (5.4)
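The elimination and back-substitution steps (5.1)-(5.4) can be sketched as follows. This is a minimal illustration in Python, without the pivoting discussed later; the function name `solve` is our own choice, not from the text.

```python
def solve(A, b):
    """Solve A x = b: forward elimination to the unit upper triangular
    form of eq. (5.2), then back-substitution, eqs. (5.3)-(5.4).
    A and b are lists of floats and are modified in place.
    No pivoting here, so A[i][i] must never become zero."""
    n = len(A)
    # Forward elimination: make A[i][i] == 1, then clear the
    # entries below it in column i.
    for i in range(n):
        piv = A[i][i]
        for j in range(i, n):
            A[i][j] /= piv
        b[i] /= piv
        for k in range(i + 1, n):
            factor = A[k][i]
            for j in range(i, n):
                A[k][j] -= factor * A[i][j]
            b[k] -= factor * b[i]
    # Back-substitution: x_n = b_n (5.3), then eq. (5.4) upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    return x
```

For example, solving the 2x2 system 2x_{1} + x_{2} = 3, x_{1} + 3x_{2} = 4 yields x_{1} = x_{2} = 1.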
Now, if during the computation we were to perform all of the same row operations not only on the vector b, but also on a second matrix that has been initialized to the identity, we would end up with A^{-1} inside that matrix when the whole thing is over.
When these computations are carried out, solutions can be found simultaneously to systems of equations with various right-hand sides (but always the same left-hand side A). The vectors b are therefore often assembled into a matrix, which need not be square, and that matrix is in turn accompanied by a square matrix initialized to the identity, so as to yield A^{-1} as well.
Although the above comprises the heart of the method, there is one complication that we have to incorporate. It may happen that a particular diagonal element A_{kk} is zero or very small. In that case, dividing the remaining elements A_{ki} by A_{kk} may lead to overflow or to severe loss of accuracy. The solution is to interchange rows or columns so as to place the largest available element of the column, which is called a pivot, in the A_{kk} position, and put the small element in the pivot's old location. This is an essential part of the Gauss-Jordan elimination technique, and the program should never be written without pivoting.
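The row-interchange step described above (partial pivoting: searching only the current column for the largest element) can be sketched as follows; the function name `solve_with_pivoting` is illustrative.

```python
def solve_with_pivoting(A, b):
    """Gauss-Jordan elimination with partial pivoting.
    Before normalizing column i, swap in the row (at or below
    row i) whose entry in column i has the largest magnitude,
    so we never divide by a zero or tiny A[i][i].
    Returns the solution x; A and b are modified in place."""
    n = len(A)
    for i in range(n):
        # Partial pivoting: find the largest |A[k][i]| for k >= i
        # and swap that row into the pivot position.
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        # Normalize the pivot row.
        piv = A[i][i]
        for j in range(n):
            A[i][j] /= piv
        b[i] /= piv
        # Clear column i in all other rows.
        for k in range(n):
            if k != i:
                f = A[k][i]
                for j in range(n):
                    A[k][j] -= f * A[i][j]
                b[k] -= f * b[i]
    return b
```

Note that the system x_{2} = 2, x_{1} + x_{2} = 3 has A_{11} = 0, so the un-pivoted routine would divide by zero on the very first step; with the row interchange it is solved without difficulty.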