Invertible matrix
In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular, nondegenerate, or, rarely, regular) if there exists an n-by-n square matrix B such that AB = BA = In, where In denotes the n-by-n identity matrix. In this case, B is uniquely determined by A and is called the inverse of A, denoted A−1.
Over a field, a square matrix that is not invertible is called singular or degenerate. A square matrix with entries in a field is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any bounded region on the number line or complex plane, the probability that the matrix is singular is 0; that is, it will "almost never" be singular.

Non-square matrices, i.e. m-by-n matrices for which m ≠ n, do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n (n ≤ m), then A has a left inverse, an n-by-m matrix B such that BA = In. If A has rank m (m ≤ n), then it has a right inverse, an n-by-m matrix B such that AB = Im.
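As a brief illustration of a left inverse, here is a minimal Python sketch, assuming NumPy; the matrix is chosen arbitrarily, and the construction B = (ATA)−1AT is one standard choice of left inverse, not the only one:

    import numpy as np

    # A is 3-by-2 (m = 3, n = 2) with rank n = 2, so it has a left inverse
    # but, being non-square, no ordinary inverse.
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

    # One left inverse: B = (A^T A)^{-1} A^T, an n-by-m (2-by-3) matrix.
    B = np.linalg.inv(A.T @ A) @ A.T

    print(np.allclose(B @ A, np.eye(2)))  # True: BA = In
    print(np.allclose(A @ B, np.eye(3)))  # False: AB != Im, so B is not a right inverse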
While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any algebraic structure equipped with addition and multiplication (i.e. rings). However, when the ring is commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than the determinant being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for the existence of a left inverse or right inverse are more complicated, since a notion of rank does not exist over rings.
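For example, over the ring of integers Z the units are ±1, so an integer matrix has an integer inverse exactly when its determinant is ±1. A minimal sketch, assuming NumPy; the matrices are illustrative:

    import numpy as np

    # det = 1, a unit in Z: the inverse again has integer entries.
    A = np.array([[2, 1],
                  [1, 1]])
    print(np.linalg.inv(A))   # [[ 1. -1.]
                              #  [-1.  2.]]

    # det = 2, nonzero but not a unit in Z: the inverse exists over the
    # rationals but not over Z.
    B = np.array([[2, 0],
                  [0, 1]])
    print(np.linalg.inv(B))   # [[0.5 0. ]
                              #  [0.  1. ]]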
The n × n invertible matrices with entries from a ring R, together with the operation of matrix multiplication, form a group, the general linear group of degree n, denoted GLn(R).
Methods of matrix inversion
Gaussian elimination
Gaussian elimination is a useful and easy way to compute the inverse of a matrix. To compute a matrix inverse using this method, an augmented matrix is first created with the left side being the matrix to invert and the right side being the identity matrix. Then, Gaussian elimination is used to convert the left side into the identity matrix, which causes the right side to become the inverse of the input matrix.
For example, take the following matrix:

$$A = \begin{pmatrix} -1 & \tfrac{3}{2} \\ 1 & -1 \end{pmatrix}.$$

The first step to compute its inverse is to create the augmented matrix

$$\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right).$$
Call the first row of this matrix R1 and the second row R2. Then, add row 1 to row 2 (R1 + R2 → R2). This yields

$$\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right).$$
Next, subtract row 2, multiplied by 3, from row 1 (R1 − 3R2 → R1), which yields

$$\left(\begin{array}{cc|cc} -1 & 0 & -2 & -3 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right).$$
Finally, multiply row 1 by −1 (−R1 → R1) and row 2 by 2 (2R2 → R2). This yields the identity matrix on the left side and the inverse matrix on the right:

$$\left(\begin{array}{cc|cc} 1 & 0 & 2 & 3 \\ 0 & 1 & 2 & 2 \end{array}\right).$$
Thus,

$$A^{-1} = \begin{pmatrix} 2 & 3 \\ 2 & 2 \end{pmatrix}.$$
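The procedure generalizes directly to code. The following is a minimal Python sketch, assuming NumPy; the function name gauss_jordan_inverse is illustrative, and partial pivoting is added for numerical stability:

    import numpy as np

    def gauss_jordan_inverse(A):
        """Reduce the augmented matrix [A | I] to [I | A^-1]."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix
        for col in range(n):
            # Partial pivoting: bring the largest entry in this column up.
            pivot = col + np.argmax(np.abs(M[col:, col]))
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("matrix is singular")
            M[[col, pivot]] = M[[pivot, col]]
            M[col] /= M[col, col]                     # leading 1 in the pivot row
            for row in range(n):                      # clear the column elsewhere
                if row != col:
                    M[row] -= M[row, col] * M[col]
        return M[:, n:]                               # right half is now A^-1

    A = np.array([[-1.0, 1.5],
                  [ 1.0, -1.0]])
    print(gauss_jordan_inverse(A))                    # [[2. 3.]
                                                      #  [2. 2.]]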
The reason this works is that the process of Gaussian elimination can be viewed as a sequence of left multiplications by elementary matrices E1, E2, …, Ek, one per elementary row operation, such that

$$E_k E_{k-1} \cdots E_2 E_1 A = I.$$

Right-multiplying both sides by A−1 gives

$$E_k E_{k-1} \cdots E_2 E_1 I = A^{-1},$$

so the same sequence of row operations, applied to the identity matrix, produces the inverse we want.
To obtain A−1, we create the augmented matrix (A | I) and apply Gaussian elimination. The two portions are transformed by the same sequence of elementary row operations, so when the left portion becomes I, the right portion has become A−1.
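For the worked example above, the three row operations correspond to three elementary matrices, and their product is exactly A−1. A small sketch, assuming NumPy:

    import numpy as np

    A = np.array([[-1.0, 1.5],
                  [ 1.0, -1.0]])

    E1 = np.array([[1.0, 0.0],   # add row 1 to row 2
                   [1.0, 1.0]])
    E2 = np.array([[1.0, -3.0],  # subtract 3 * row 2 from row 1
                   [0.0, 1.0]])
    E3 = np.array([[-1.0, 0.0],  # multiply row 1 by -1 ...
                   [ 0.0, 2.0]]) # ... and row 2 by 2

    print(np.allclose(E3 @ E2 @ E1 @ A, np.eye(2)))  # True: the operations reduce A to I
    print(E3 @ E2 @ E1)                              # [[2. 3.]
                                                     #  [2. 2.]]  == A^-1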
Newton's method
A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient if a suitable starting seed is available:

$$X_{k+1} = 2X_k - X_k A X_k.$$
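A minimal sketch of the iteration, assuming NumPy; the seed X0 = AT/(‖A‖1‖A‖∞), due to Pan and Reif, is one common choice, and the function name is illustrative:

    import numpy as np

    def newton_inverse(A, tol=1e-12, max_iter=100):
        """Approximate A^-1 with the iteration X_{k+1} = 2 X_k - X_k A X_k."""
        n = A.shape[0]
        # Starting seed X0 = A^T / (||A||_1 * ||A||_inf).
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(max_iter):
            X_next = X @ (2 * I - A @ X)
            if np.linalg.norm(X_next - X, np.inf) < tol:
                return X_next
            X = X_next
        return X

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    print(newton_inverse(A) @ A)   # approximately the 2-by-2 identity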
Generalized inverse
Some of the properties of inverse matrices are shared by generalized inverses (for example, the Moore–Penrose inverse), which can be defined for any m-by-n matrix.[17]
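In NumPy, the Moore–Penrose inverse is available as numpy.linalg.pinv; a brief sketch:

    import numpy as np

    # A 3-by-2 matrix has no ordinary inverse, but its Moore-Penrose
    # pseudoinverse always exists.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])

    A_pinv = np.linalg.pinv(A)                  # a 2-by-3 matrix
    print(np.allclose(A_pinv @ A, np.eye(2)))   # True: here A has full column
                                                # rank, so pinv is a left inverse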