Matrix multiplication

In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.[1]

Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812,[2] to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering.[3][4] Computing matrix products is a central operation in all computational applications of linear algebra.

Notation

This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. A; vectors in lowercase bold, e.g. a; and entries of vectors and matrices are italic (they are numbers from a field), e.g. A and a. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The entry in row i, column j of matrix A is indicated by (A)ij, Aij or aij. In contrast, a single subscript, e.g. A1, A2, is used to select a matrix (not a matrix entry) from a collection of matrices.

Definitions

Matrix times matrix

If A is an m × n matrix and B is an n × p matrix, the matrix product C = AB (denoted without multiplication signs or dots) is defined to be the m × p matrix[5][6][7][8] such that

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} = \sum_{k=1}^{n} a_{ik}b_{kj},$$

for i = 1, ..., m and j = 1, ..., p.

That is, the entry $c_{ij}$ of the product is obtained by multiplying term-by-term the entries of the ith row of A and the jth column of B, and summing these n products. In other words, $c_{ij}$ is the dot product of the ith row of A and the jth column of B.


Therefore, AB can also be written as

$$\mathbf{AB} = \begin{pmatrix} a_{11}b_{11} + \cdots + a_{1n}b_{n1} & \cdots & a_{11}b_{1p} + \cdots + a_{1n}b_{np} \\ \vdots & \ddots & \vdots \\ a_{m1}b_{11} + \cdots + a_{mn}b_{n1} & \cdots & a_{m1}b_{1p} + \cdots + a_{mn}b_{np} \end{pmatrix}$$
Thus the product AB is defined if and only if the number of columns in A equals the number of rows in B,[1] in this case n.
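
To make the defining formula concrete, here is a minimal sketch in plain Python (an illustrative choice; the article itself prescribes no implementation). It computes each entry $c_{ij}$ as the dot product of the ith row of A and the jth column of B, exactly as defined above, and is meant to show the definition rather than to be efficient:

```python
def mat_mul(A, B):
    """Matrix product C = AB from the defining formula:
    c[i][j] = sum over k of a[i][k] * b[k][j]."""
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("number of columns of A must equal number of rows of B")
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]        # a 2 x 3 matrix
B = [[7, 8],
     [9, 10],
     [11, 12]]         # a 3 x 2 matrix
print(mat_mul(A, B))   # [[58, 64], [139, 154]], a 2 x 2 matrix
```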


In most scenarios, the entries are numbers, but they may be any kind of mathematical objects for which an addition and a multiplication are defined, provided that both operations are associative, the addition is commutative, and the multiplication is distributive with respect to the addition. In particular, the entries may be matrices themselves (see block matrix), as the sketch below illustrates.
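
As an illustration of entries that are themselves matrices, the following sketch (using NumPy, an assumed dependency) checks that multiplying two 4 × 4 matrices blockwise, with 2 × 2 blocks playing the role of entries, agrees with the ordinary product:

```python
import numpy as np

# Split two 4 x 4 matrices into 2 x 2 blocks and multiply blockwise:
# if A = [[A11, A12], [A21, A22]] and B = [[B11, B12], [B21, B22]],
# then the (1,1) block of AB is A11 B11 + A12 B21, and so on.
rng = np.random.default_rng(0)
A = rng.integers(0, 5, (4, 4))
B = rng.integers(0, 5, (4, 4))

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.array_equal(blockwise, A @ B)  # agrees with the ordinary product
```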

Matrix times vector

A vector x of length n can be viewed as a column vector, corresponding to an n × 1 matrix X whose entries are given by X_{i1} = x_i. If A is an m × n matrix, the matrix-times-vector product denoted by Ax is then the vector y that, viewed as a column vector, is equal to the m × 1 matrix AX. In index notation, this amounts to:

$$y_i = \sum_{j=1}^{n} a_{ij} x_j.$$
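
A short sketch of the matrix-times-vector product in plain Python (illustrative only), computing each y_i as the dot product of the ith row of A with x:

```python
def mat_vec(A, x):
    """y = Ax, where y_i = sum over j of a[i][j] * x[j]."""
    if len(x) != len(A[0]):
        raise ValueError("length of x must equal the number of columns of A")
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
print(mat_vec(A, [1, 0, -1]))  # [-2, -2]
```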

Abstract algebra

The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems.[13] Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the n × n matrices over a ring form a ring, which is noncommutative except if n = 1 and the ground ring is commutative.
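
To illustrate the semiring generality, here is a sketch of the matrix "product" over the tropical (min, +) semiring, in which the role of addition is played by min and the role of multiplication by +. Repeated tropical powers of a graph's edge-weight matrix yield shortest-path distances; the small graph below is made up for illustration:

```python
INF = float("inf")

def min_plus(A, B):
    """Matrix 'product' over the tropical (min, +) semiring:
    c[i][j] = min over k of (a[i][k] + b[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-weight matrix of a small directed graph
# (INF means "no edge"; the diagonal is 0).
W = [[0,   4, INF],
     [INF, 0,   1],
     [2, INF,   0]]

# The tropical square of W gives shortest-path distances using
# at most two edges: entry (0, 2) drops from INF to 5 via 0 -> 1 -> 2.
print(min_plus(W, W))
```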


A square matrix may have a multiplicative inverse, called an inverse matrix. In the common case where the entries belong to a commutative ring R, a matrix has an inverse if and only if its determinant has a multiplicative inverse in R. The determinant of a product of square matrices is the product of the determinants of the factors. The n × n matrices that have an inverse form a group under matrix multiplication, the subgroups of which are called matrix groups. Many classical groups (including all finite groups) are isomorphic to matrix groups; this is the starting point of the theory of group representations.
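
The following NumPy snippet (an assumed dependency, for illustration on small concrete matrices) checks two of the facts just stated: the determinant of a product is the product of the determinants, and a matrix with invertible determinant has a matrix inverse:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(A) = 1, so A is invertible
B = np.array([[0.0, 1.0],
              [3.0, 2.0]])

# The determinant of a product is the product of the determinants.
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True

# A times its inverse is the identity matrix.
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))      # True
```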


Matrices are the morphisms of a category, the category of matrices. The objects are the natural numbers that measure the size of matrices, and the composition of morphisms is matrix multiplication. The source of a morphism is the number of columns of the corresponding matrix, and the target is the number of rows.

Other types of products of matrices include:

- Block matrix operations
- Cracovian product, defined as A ∧ B = B^T A
- Frobenius inner product, the dot product of matrices considered as vectors, or, equivalently, the sum of the entries of the Hadamard product
- Hadamard product of two matrices of the same size, resulting in a matrix of the same size, which is the product entry-by-entry
- Kronecker product, or tensor product, the generalization to any size of the preceding
- Khatri–Rao product and Face-splitting product
- Outer product, also called dyadic product or tensor product of two column matrices, which is ab^T
- Scalar multiplication
- Matrix calculus, for the interaction of matrix multiplication with operations from calculus
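
For concreteness, several of the products listed above are available directly in NumPy (an assumed dependency; the function names below are NumPy's, not the article's):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)                         # Hadamard (entry-by-entry) product
print(np.sum(A * B))                 # Frobenius inner product of A and B
print(np.kron(A, B))                 # Kronecker product, a 4 x 4 matrix
print(np.outer([1, 2], [3, 4, 5]))   # outer product ab^T of two vectors
```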