Linear algebra reference
Linear algebra deals with vector spaces and linear transformations, and with their representation through matrices and systems of linear equations.
The most basic object in linear algebra is the vector. A vector in $\mathbb{R}^n$ is an ordered list of $n$ real numbers, written $\mathbf{v} = (v_1, v_2, \ldots, v_n)$.
Vectors can be added component-wise. For vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$, $\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n)$.
Scalar multiplication scales each component of a vector by a scalar value: $c\mathbf{v} = (cv_1, cv_2, \ldots, cv_n)$.
These operations satisfy the usual algebraic properties: addition is commutative and associative, and scalar multiplication distributes over both vector addition and scalar addition.
The zero vector $\mathbf{0} = (0, 0, \ldots, 0)$ acts as the additive identity: $\mathbf{v} + \mathbf{0} = \mathbf{v}$ for every vector $\mathbf{v}$.
Every vector $\mathbf{v}$ has an additive inverse $-\mathbf{v}$, satisfying $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.
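To make these operations concrete, here is a minimal sketch using NumPy (an assumed dependency, not part of this reference); the vectors are arbitrary example values.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(u + v)            # component-wise addition -> [5. 7. 9.]
print(2.5 * v)          # scalar multiplication   -> [10.  12.5 15. ]
print(u + v == v + u)   # addition is commutative, component by component

zero = np.zeros(3)
print(u + zero)         # the zero vector is the additive identity
print(u + (-u))         # adding the additive inverse gives the zero vector
```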
A vector space is a set of vectors that is closed under addition and scalar multiplication. Real coordinate space $\mathbb{R}^n$ is the standard example.
Vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly independent if the only solution to $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$ is $c_1 = c_2 = \cdots = c_k = 0$.
If there exists a non-trivial solution, the vectors are linearly dependent, implying that at least one vector can be expressed as a linear combination of the others.
The span of a set of vectors is the set of all linear combinations that can be formed from them.
A basis for a vector space is a linearly independent set of vectors that spans the entire space. Every vector in the space can be uniquely expressed as a linear combination of basis vectors.
The dimension of a vector space equals the number of vectors in any basis for that space. For instance, $\mathbb{R}^n$ has dimension $n$.
The standard basis for $\mathbb{R}^n$ consists of the vectors $\mathbf{e}_1, \ldots, \mathbf{e}_n$, where $\mathbf{e}_i$ has a 1 in the $i$-th position and 0s elsewhere.
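One practical way to test linear independence is to stack the vectors as columns and check the rank of the resulting matrix. A small sketch, again assuming NumPy and using made-up vectors:

```python
import numpy as np

# Three vectors stacked as columns; they form a basis of R^3
# exactly when the matrix has rank 3 (full rank).
vectors = np.column_stack([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
])
print(np.linalg.matrix_rank(vectors))    # 3 -> linearly independent, spans R^3

# A dependent set: the third column equals the sum of the first two.
dependent = np.column_stack([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 2.0],
])
print(np.linalg.matrix_rank(dependent))  # 2 -> linearly dependent
```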
The dot product (inner product) of two vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$ is $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$.
The dot product is commutative and distributive over addition. It also scales linearly: $(c\mathbf{u}) \cdot \mathbf{v} = c(\mathbf{u} \cdot \mathbf{v}) = \mathbf{u} \cdot (c\mathbf{v})$.
It lets us define the norm (i.e. length) of a vector. The Euclidean norm of a vector $\mathbf{v}$ is $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$.
Two vectors are orthogonal if their dot product equals zero. A set of vectors is orthogonal if every pair of distinct vectors in the set is orthogonal.
An orthogonal set of nonzero vectors is automatically linearly independent. An orthogonal basis greatly simplifies many calculations in linear algebra.
A vector can be normalized by dividing it by its norm, resulting in a unit vector pointing in the same direction.
The angle $\theta$ between two nonzero vectors $\mathbf{u}$ and $\mathbf{v}$ satisfies $\cos\theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}$.
The Cauchy-Schwarz inequality establishes an upper bound for the dot product: $|\mathbf{u} \cdot \mathbf{v}| \le \|\mathbf{u}\|\,\|\mathbf{v}\|$.
Equality holds if and only if one vector is a scalar multiple of the other.
The triangle inequality follows from Cauchy-Schwarz and states that the norm of a sum of vectors cannot exceed the sum of their norms: $\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$.
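The dot product, norms, angles and the two inequalities above are easy to check numerically. A minimal NumPy sketch with arbitrary example vectors:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

dot = np.dot(u, v)                     # 3.0
norm_u = np.linalg.norm(u)             # 5.0 (Euclidean norm)
unit_u = u / norm_u                    # normalized vector, length 1
print(np.linalg.norm(unit_u))          # 1.0

# Angle via cos(theta) = (u . v) / (|u| |v|)
cos_theta = dot / (norm_u * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_theta)))   # about 53.13 degrees

# Cauchy-Schwarz and the triangle inequality hold numerically
assert abs(dot) <= norm_u * np.linalg.norm(v) + 1e-12
assert np.linalg.norm(u + v) <= norm_u + np.linalg.norm(v) + 1e-12
```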
The cross product is a binary operation defined only for three-dimensional vectors. For vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^3$, $\mathbf{u} \times \mathbf{v} = (u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1)$.
The cross product results in a vector perpendicular to both input vectors, with magnitude equal to the area of the parallelogram they form.
Unlike the dot product, the cross product is anti-commutative: $\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})$. It also distributes over addition: $\mathbf{u} \times (\mathbf{v} + \mathbf{w}) = \mathbf{u} \times \mathbf{v} + \mathbf{u} \times \mathbf{w}$.
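A short NumPy sketch (the example vectors are arbitrary) illustrating perpendicularity, the parallelogram area and anti-commutativity:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

w = np.cross(u, v)                   # [0. 0. 1.], perpendicular to u and v
print(np.dot(w, u), np.dot(w, v))    # both 0.0 -> orthogonality
print(np.linalg.norm(w))             # 1.0 = area of the unit square spanned by u, v
print(np.cross(v, u))                # [0. 0. -1.] -> anti-commutativity
```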
Matrices provide a way to represent linear transformations and systems of linear equations. An $m \times n$ matrix is a rectangular array of numbers with $m$ rows and $n$ columns.
Matrix addition is performed element-wise for matrices of the same dimensions.
Scalar multiplication of a matrix scales each entry by the scalar.
Matrix multiplication is more complicated. For an $m \times n$ matrix $A$ and an $n \times p$ matrix $B$, the product $AB$ is the $m \times p$ matrix with entries $(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$.
Matrix multiplication is associative but generally not commutative.
The identity matrix $I_n$ has 1s on the main diagonal and 0s elsewhere; it satisfies $AI = IA = A$ for any compatible matrix $A$.
Matrix-vector multiplication treats the vector as a column matrix. For an $m \times n$ matrix $A$ and a vector $\mathbf{x} \in \mathbb{R}^n$, the product $A\mathbf{x}$ is the vector in $\mathbb{R}^m$ with entries $(A\mathbf{x})_i = \sum_{k=1}^{n} a_{ik} x_k$.
A system of linear equations can be expressed in matrix form $A\mathbf{x} = \mathbf{b}$, where $A$ holds the coefficients, $\mathbf{x}$ the unknowns and $\mathbf{b}$ the constants.
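A small NumPy sketch of matrix products and of solving $A\mathbf{x} = \mathbf{b}$; the matrices are arbitrary examples and the library solver stands in for the elimination methods discussed later:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A @ B)           # matrix product
print(B @ A)           # generally different: multiplication is not commutative
print(A @ np.eye(2))   # multiplying by the identity leaves A unchanged

# Solve the linear system A x = b
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))   # True
```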
The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$ with entries $(A^T)_{ij} = a_{ji}$.
The transpose operation satisfies $(A^T)^T = A$, $(A + B)^T = A^T + B^T$, and $(AB)^T = B^T A^T$.
A square matrix $A$ is symmetric if $A^T = A$.
The determinant is a scalar value associated with square matrices. For a $2 \times 2$ matrix, $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$.
For a $3 \times 3$ matrix, $\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$.
For larger matrices, the determinant can be calculated recursively using cofactor expansion along any row or column.
The determinant has important properties. It equals zero if and only if the matrix is singular (non-invertible), and it is multiplicative: $\det(AB) = \det(A)\det(B)$.
The inverse of a square matrix $A$, when it exists, is the matrix $A^{-1}$ satisfying $AA^{-1} = A^{-1}A = I$.
A matrix is invertible if and only if its determinant is nonzero. The inverse of a $2 \times 2$ matrix is $\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$.
For larger matrices, the inverse can be found using the adjugate matrix.
Matrix inversion satisfies $(A^{-1})^{-1} = A$, $(AB)^{-1} = B^{-1}A^{-1}$, and $(A^T)^{-1} = (A^{-1})^T$.
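The determinant and inverse properties above can be verified numerically. A minimal NumPy sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = np.linalg.det(A)       # ad - bc = 24 - 14 = 10 -> invertible
A_inv = np.linalg.inv(A)     # equals (1/det) * [[d, -b], [-c, a]] for 2x2

print(np.allclose(A @ A_inv, np.eye(2)))           # A A^{-1} = I
print(np.allclose(np.linalg.inv(A.T), A_inv.T))    # (A^T)^{-1} = (A^{-1})^T

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), det * np.linalg.det(B)))
```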
The rank of a matrix equals the dimension of the vector space spanned by its rows (or columns). It can be determined as the number of linearly independent rows or columns.
A matrix with full rank has as many linearly independent rows or columns as possible. For an $m \times n$ matrix, full rank means the rank equals $\min(m, n)$.
The nullspace (kernel) of a matrix $A$ is the set of all vectors $\mathbf{x}$ with $A\mathbf{x} = \mathbf{0}$.
The dimension of the nullspace is related to the rank through the rank-nullity theorem: for an $m \times n$ matrix $A$, $\operatorname{rank}(A) + \dim(\operatorname{null}(A)) = n$.
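A sketch of computing the rank and a nullspace basis with NumPy; the matrix is a made-up example, and the nullspace basis is read off the SVD, one of several possible approaches:

```python
import numpy as np

# A 3x4 matrix with rank 2: the last two columns are combinations of the first two.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0, 3.0]])

rank = np.linalg.matrix_rank(A)   # 2

# The right singular vectors beyond the rank span the nullspace.
_, s, vt = np.linalg.svd(A)
null_basis = vt[rank:].T          # shape (4, 2)

print(rank + null_basis.shape[1] == A.shape[1])   # rank-nullity: 2 + 2 = 4
print(np.allclose(A @ null_basis, 0.0))           # every basis vector maps to 0
```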
A system of linear equations $A\mathbf{x} = \mathbf{b}$ has either no solution, exactly one solution, or infinitely many solutions.
Elementary row operations can transform a matrix without changing the solution set of the corresponding linear system. These operations include scaling a row, adding a multiple of one row to another and swapping rows.
Gaussian elimination uses elementary row operations to convert a matrix to row echelon form. In this form, all zero rows appear at the bottom and each leading entry of a nonzero row is to the right of the leading entry in the row above.
Row reduction continues to reduced row echelon form, where each leading entry is 1 and each column containing a leading 1 has zeros elsewhere. The resulting matrix is unique and reveals the solution structure of the corresponding linear system.
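As an illustration, here is a small self-contained reduced-row-echelon-form routine; `rref` is a hypothetical helper written for this sketch, not a standard library function, and the example system is made up:

```python
import numpy as np

def rref(matrix, tol=1e-12):
    """Reduce a matrix to reduced row echelon form using partial pivoting."""
    A = np.array(matrix, dtype=float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Swap in the row with the largest entry in this column (partial pivoting).
        pivot = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[pivot, col]) < tol:
            continue                        # no pivot in this column
        A[[pivot_row, pivot]] = A[[pivot, pivot_row]]
        A[pivot_row] /= A[pivot_row, col]   # scale the leading entry to 1
        for r in range(rows):               # clear the column everywhere else
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# Augmented matrix for x + y = 3, 2x - y = 0  ->  x = 1, y = 2
print(rref([[1.0, 1.0, 3.0],
            [2.0, -1.0, 0.0]]))
```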
LU decomposition expresses a matrix $A$ as a product $A = LU$ of a lower triangular matrix $L$ and an upper triangular matrix $U$ (in general with a row permutation: $PA = LU$).
For matrices with linearly independent columns, the QR decomposition expresses $A = QR$, where $Q$ has orthonormal columns and $R$ is upper triangular and invertible.
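A sketch of both decompositions; NumPy is assumed throughout, SciPy is additionally assumed for the LU factorization, and the matrix is an arbitrary example:

```python
import numpy as np
from scipy.linalg import lu   # assumption: SciPy is available

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# scipy.linalg.lu returns P, L, U with A = P @ L @ U
# (P a permutation matrix, L lower triangular, U upper triangular).
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))

# QR decomposition: Q has orthonormal columns, R is upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))
print(np.allclose(Q.T @ Q, np.eye(2)))   # orthonormal columns
```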
Eigenvalues and eigenvectors describe the behavior of linear transformations. An eigenvector of a square matrix $A$ is a nonzero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$, called the corresponding eigenvalue.
The characteristic equation $\det(A - \lambda I) = 0$ helps find eigenvalues.
The eigenvalues are the roots of this polynomial equation. Once the eigenvalues are found, the corresponding eigenvectors can be determined by solving $(A - \lambda I)\mathbf{v} = \mathbf{0}$.
The eigenspace corresponding to an eigenvalue $\lambda$ is the nullspace of $A - \lambda I$: it contains all eigenvectors for $\lambda$ together with the zero vector.
A matrix $A$ is diagonalizable if there exists an invertible matrix $P$ such that $P^{-1}AP = D$ is diagonal; the columns of $P$ are eigenvectors of $A$ and the diagonal entries of $D$ are the corresponding eigenvalues.
An $n \times n$ matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
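A minimal NumPy sketch of computing eigenvalues and eigenvectors and verifying the diagonalization $A = PDP^{-1}$; the matrix is an arbitrary example with eigenvalues 5 and 2:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors; eigvals holds the eigenvalues.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

print(eigvals)                                           # 5 and 2 (order may vary)
print(np.allclose(A @ P[:, 0], eigvals[0] * P[:, 0]))    # A v = lambda v
print(np.allclose(P @ D @ np.linalg.inv(P), A))          # A = P D P^{-1}
```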
For symmetric matrices, all eigenvalues are real and eigenvectors corresponding to distinct eigenvalues are orthogonal. Every symmetric matrix is orthogonally diagonalizable, meaning there exists an orthogonal matrix $Q$ such that $Q^T A Q = D$ is diagonal.
The spectral theorem for symmetric matrices states that any symmetric matrix can be diagonalized by an orthogonal matrix of eigenvectors.
The singular value decomposition (SVD) generalizes the eigendecomposition to any $m \times n$ matrix: $A = U\Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ is diagonal with nonnegative entries, the singular values.
The columns of $U$ are the left singular vectors and the columns of $V$ are the right singular vectors; the singular values are the square roots of the eigenvalues of $A^T A$.
SVD has numerous applications, including least squares problems, image compression and pseudoinverse computation. The pseudoinverse $A^+ = V\Sigma^+ U^T$ (where $\Sigma^+$ inverts the nonzero singular values) yields the least-squares solution $\mathbf{x} = A^+\mathbf{b}$ of $A\mathbf{x} = \mathbf{b}$.
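A sketch of the SVD and of the pseudoinverse as a least-squares solver, assuming NumPy and using a made-up overdetermined system:

```python
import numpy as np

# An overdetermined system: more equations than unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vt, A))   # A = U Sigma V^T

# The pseudoinverse gives the least-squares solution of A x = b.
x = np.linalg.pinv(A) @ b
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```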
Vector spaces can be equipped with an inner product $\langle \mathbf{u}, \mathbf{v} \rangle$, generalizing the dot product in $\mathbb{R}^n$; an inner product is symmetric, linear in each argument and positive definite.
An inner product space is a vector space equipped with an inner product. The norm in an inner product space is defined using the inner product: $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$.
The Gram-Schmidt process orthogonalizes a set of vectors in an inner product space. Given linearly independent vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$, it produces orthogonal vectors $\mathbf{u}_1, \ldots, \mathbf{u}_k$ by subtracting from each $\mathbf{v}_i$ its projections onto the previously constructed vectors: $\mathbf{u}_i = \mathbf{v}_i - \sum_{j < i} \frac{\langle \mathbf{v}_i, \mathbf{u}_j \rangle}{\langle \mathbf{u}_j, \mathbf{u}_j \rangle}\,\mathbf{u}_j$.
The resulting vectors can be normalized to create an orthonormal set.
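A compact sketch of classical Gram-Schmidt in NumPy; `gram_schmidt` is a hypothetical helper written for this example, and the input vectors are arbitrary:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - np.dot(w, u) * u   # subtract the projection onto each earlier vector
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
print(np.allclose(Q @ Q.T, np.eye(3)))   # rows are orthonormal
```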
Linear transformations map vectors from one vector space to another while preserving vector addition and scalar multiplication. For a linear transformation $T$, $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ and $T(c\mathbf{v}) = cT(\mathbf{v})$.
Every linear transformation between finite-dimensional vector spaces can be represented by a matrix. The columns of this matrix are the images of the basis vectors under the transformation.
The kernel (nullspace) of a linear transformation $T$ is the set of vectors that $T$ maps to the zero vector; the image (range) is the set of all outputs of $T$.
The rank-nullity theorem applies to linear transformations as well: for $T: V \to W$ with $V$ finite-dimensional, $\dim(\ker T) + \dim(\operatorname{im} T) = \dim V$.
Change of basis transforms vectors from one coordinate system to another. If $P$ is the matrix whose columns are the new basis vectors written in the old basis, then coordinates transform as $[\mathbf{v}]_{\text{old}} = P\,[\mathbf{v}]_{\text{new}}$.
The matrix of a linear transformation changes under a change of basis by a similarity transformation: $A' = P^{-1}AP$.
Quadratic forms generalize the concept of squares to higher dimensions. For an $n \times n$ symmetric matrix $A$, the associated quadratic form is $Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}$.
A quadratic form is positive definite if $Q(\mathbf{x}) > 0$ for every nonzero $\mathbf{x}$; equivalently, all eigenvalues of $A$ are positive.
The principal axis theorem (spectral theorem) diagonalizes quadratic forms through an orthogonal change of variables. For a symmetric matrix $A$ with orthogonal diagonalization $A = QDQ^T$, the substitution $\mathbf{y} = Q^T\mathbf{x}$ removes the cross terms: $\mathbf{x}^T A \mathbf{x} = \lambda_1 y_1^2 + \cdots + \lambda_n y_n^2$.
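A sketch tying these ideas together with NumPy: checking positive definiteness via the eigenvalues and removing the cross terms with an orthogonal change of variables (the matrix and test vector are arbitrary examples):

```python
import numpy as np

# Symmetric matrix of the quadratic form Q(x) = 2*x1^2 + 2*x1*x2 + 3*x2^2
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is meant for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)
print(np.all(eigvals > 0))                           # all positive -> positive definite
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))    # A = Q D Q^T

# In the rotated coordinates y = Q^T x the cross terms disappear.
x = np.array([1.0, 2.0])
y = Q.T @ x
print(np.isclose(x @ A @ x, np.sum(eigvals * y**2)))
```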