Understanding Matrices in Mathematics
Matrices
History and Applications
The term "matrix" was introduced by J.J. Sylvester around 1850. The initial development of the mathematical theory is attributed to W.R. Hamilton in 1853, and in 1858 A. Cayley introduced matrix notation as a shorthand for a system of m linear equations in n unknowns.
Matrices are used in numerical computation to solve systems of linear equations, ordinary differential equations, and partial differential equations. Beyond linear equations, matrices appear in geometry, statistics, economics, computer science, and physics.
The use of matrices (arrays) is essential in programming languages, as most data is entered into computers as tables organized into rows and columns (spreadsheets, databases, etc.).
Matrix Concept
An array is a set of items (usually numbers) arranged in rows and columns.
A matrix of order m × n is a rectangular array of elements a_ij arranged in m rows and n columns, where m and n are natural numbers. The order is also called the size or dimension of the matrix.
Matrices are denoted by capital letters (A, B, C, …) and their elements by lowercase letters (a, b, c, …) with subscripts indicating position: the element in the i-th row and j-th column is written a_ij. The entire matrix A can be written compactly as A = (a_ij).
Rows and columns are referred to collectively as the lines of the matrix. The total number of elements in an m × n matrix A is m · n. In mathematics, both arrays and tables are generically called matrices.
A numerical list is a set of numbers arranged sequentially.
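To make the notions of order and element position concrete, here is a minimal NumPy sketch; the matrix values are arbitrary choices for illustration only:

```python
import numpy as np

# A 2 x 3 matrix: 2 rows and 3 columns, so its order (size) is 2 x 3.
A = np.array([[1, 4, -2],
              [0, 3,  5]])

m, n = A.shape      # m = 2 rows, n = 3 columns
print(m, n)         # 2 3
print(A.size)       # total number of elements: m * n = 6

# The element a_ij sits in row i, column j.
# NumPy indexes from 0, so a_12 (row 1, column 2) is A[0, 1].
print(A[0, 1])      # 4
```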
Matrix Equality
Two matrices A = (a_ij) of order m × n and B = (b_ij) of order p × q are equal if and only if they have the same dimensions (m = p and n = q) and their corresponding elements are equal (a_ij = b_ij for all i, j).
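A short sketch of this equality test in NumPy; the sample matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 2], [3, 4]])
C = np.array([[1, 2, 0], [3, 4, 0]])   # a different order (2 x 3)

# Equal: same dimensions and every corresponding element matches.
print(np.array_equal(A, B))   # True
# Not equal: the dimensions differ, so equality fails immediately.
print(np.array_equal(A, C))   # False
```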
Types of Matrices
Several matrices appear frequently and are named according to their form and elements:
| Type of Matrix | Definition |
|---|---|
| Row matrix | A matrix with a single row (order 1 × n). |
| Column matrix | A matrix with a single column (order m × 1). |
| Rectangular matrix | A matrix with different numbers of rows and columns (m ≠ n). |
| Transpose matrix | The transpose of a matrix A (denoted A^T or Aᵗ) is obtained by interchanging its rows and columns. |
| Opposite matrix | The opposite of a matrix A, written -A, is obtained by replacing each element with its opposite. |
| Null matrix | A matrix in which every element is zero (denoted 0 or 0_(m×n)). |
| Square matrix | A matrix with an equal number of rows and columns (m = n, order n). Main diagonal: the elements a_11, a_22, …, a_nn. Secondary diagonal: the elements a_ij with i + j = n + 1. Trace: the sum of the main-diagonal elements (tr A). |
| Symmetric matrix | A square matrix equal to its transpose (A = A^T, i.e. a_ij = a_ji). |
| Skew-symmetric matrix | A square matrix equal to the opposite of its transpose (A = -A^T, i.e. a_ij = -a_ji). The diagonal elements are necessarily 0. |
| Diagonal matrix | A square matrix whose off-diagonal elements are all zero. |
| Scalar matrix | A diagonal matrix whose diagonal elements are all equal. |
| Identity matrix | A diagonal matrix whose diagonal elements are all equal to 1. |
| Triangular matrix | A square matrix in which all elements below the main diagonal are zero (upper triangular) or all elements above it are zero (lower triangular). |
| Orthogonal matrix | A square, invertible matrix whose inverse equals its transpose (A^(-1) = A^T). Its determinant is +1 or -1. |
| Normal matrix | A matrix that commutes with its transpose (A·A^T = A^T·A). Symmetric, skew-symmetric, and orthogonal matrices are all normal. |
| Invertible matrix | A square matrix that has an inverse A^(-1), such that A · A^(-1) = A^(-1) · A = I. |
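To make a few of these definitions concrete, here is a small NumPy sketch; the sample matrices are arbitrary choices, not taken from the text:

```python
import numpy as np

S = np.array([[2, 7],
              [7, 5]])
print(np.array_equal(S, S.T))      # symmetric: S equals its transpose -> True
print(np.trace(S))                 # trace: sum of the main diagonal = 7

K = np.array([[0, 3],
              [-3, 0]])
print(np.array_equal(K, -K.T))     # skew-symmetric: K = -K^T, zero diagonal -> True

D = np.diag([4, 4, 4])                    # scalar matrix: diagonal with equal entries
print(np.array_equal(D, 4 * np.eye(3)))   # equals 4 times the identity -> True

# Orthogonal matrix: its inverse equals its transpose (here, a 90-degree rotation).
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(np.linalg.inv(Q), Q.T))  # True
```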
Matrix algebra governs calculations with matrices, operating on entire matrices rather than on individual numbers.
Matrix Operations
Matrix Addition
The sum of two matrices A = (a_ij) of order m × n and B = (b_ij) of order p × q with the same dimensions (m = p and n = q) is another matrix C = A + B = (c_ij) of order m × n, where c_ij = a_ij + b_ij.
Matrix addition is associative and commutative.
Properties:
· Associative: A + (B + C) = (A + B) + C
· Commutative: A + B = B + A
· Identity element: the zero matrix 0 of order m × n, 0 + A = A + 0 = A
· Inverse element: the opposite matrix -A, A + (-A) = (-A) + A = 0
The set of m × n matrices with real entries, M_(m×n), forms an abelian group under addition.
Matrix addition and subtraction are undefined for matrices of different dimensions.
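A minimal NumPy sketch of entrywise addition and the properties listed above; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, -1],
              [0,  2]])

C = A + B                                    # entrywise: c_ij = a_ij + b_ij
print(C)                                     # [[6 1]
                                             #  [3 6]]

print(np.array_equal(A + B, B + A))          # commutative: True
Z = np.zeros_like(A)                         # the 2 x 2 zero matrix
print(np.array_equal(A + Z, A))              # zero matrix is the identity element: True
print(np.array_equal(A + (-A), Z))           # opposite matrix is the additive inverse: True
```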
Scalar Multiplication
To multiply a matrix by a scalar, multiply each element of the matrix by the scalar, resulting in another matrix of the same order.
Scalar multiplication is distributive over matrix addition (α(A + B) = αA + αB) and over scalar addition ((α + β)A = αA + βA), and it is associative with respect to scalars ((αβ)A = α(βA)).
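A short NumPy sketch of these properties; the matrix and scalars are arbitrary examples:

```python
import numpy as np

A = np.array([[1, -2],
              [3,  0]])
alpha, beta = 2, 5

print(alpha * A)    # each element of A multiplied by 2

# Distributive over scalar addition: (alpha + beta)A = alpha*A + beta*A
print(np.array_equal((alpha + beta) * A, alpha * A + beta * A))   # True
# Associative with scalars: (alpha*beta)A = alpha(beta*A)
print(np.array_equal((alpha * beta) * A, alpha * (beta * A)))     # True
```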
Matrix Multiplication
Given matrices A = (a_ij) of order m × n and B = (b_ij) of order p × q, where n = p (the number of columns in A equals the number of rows in B), the product A · B is the matrix C = (c_ij) of order m × q in which each element c_ij is the sum of the products of the elements of row i of A with the corresponding elements of column j of B:
c_ij = a_i1 · b_1j + a_i2 · b_2j + … + a_in · b_nj
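The following NumPy sketch illustrates the row-by-column rule; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [3, 1, 4]])        # order 2 x 3
B = np.array([[2, 1],
              [0, 5],
              [1, 1]])           # order 3 x 2: columns of A (3) match rows of B (3)

C = A @ B                        # the product has order 2 x 2
print(C)

# The same element computed by hand: c_11 = 1*2 + 2*0 + 0*1 = 2
print(sum(A[0, k] * B[k, 0] for k in range(3)))   # 2

# Note: matrix multiplication is not commutative in general; here B @ A
# is a 3 x 3 matrix and cannot equal the 2 x 2 matrix A @ B.
print((B @ A).shape)             # (3, 3)
```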
Matrix Inverse
The inverse of a square matrix A, denoted A^(-1), is a matrix that satisfies A^(-1) · A = A · A^(-1) = I (the identity matrix).
A square matrix is regular if its determinant is nonzero and singular if its determinant is zero.
Properties:
- Only regular square matrices have inverses.
- The inverse of a square matrix, if it exists, is unique.
- There is no division operation for matrices; the inverse performs a similar function.
Methods for finding the inverse matrix:
- Applying the definition
- Using Gaussian elimination
- Using determinants
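As a minimal sketch of these ideas, the NumPy code below checks the determinant and verifies the defining property of the inverse; the matrix is an arbitrary example, and np.linalg.inv relies on an LU-factorization routine, which is essentially the Gaussian-elimination approach listed above:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

print(np.linalg.det(A))           # 10.0 -> nonzero, so A is regular (invertible)

A_inv = np.linalg.inv(A)          # compute the inverse
print(A_inv)

I = np.eye(2)
print(np.allclose(A @ A_inv, I))  # True: A · A^(-1) = I
print(np.allclose(A_inv @ A, I))  # True: A^(-1) · A = I
```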