
A singular matrix is a square matrix that is not invertible; by contrast, a non-singular matrix is one that is invertible. Equivalently, an n-by-n matrix A is singular if and only if its determinant is zero, det(A) = 0. In classical linear algebra, a matrix is called ''non-singular'' (or invertible) when it has an inverse; by definition, a matrix that fails this criterion is singular. In more algebraic terms, an n-by-n matrix A is singular exactly when its columns (and rows) are linearly dependent, so that the linear map x ↦ Ax is not one-to-one.
In this case the kernel (null space) of A is non-trivial (it has dimension at least 1), and the homogeneous system Ax = 0 admits non-zero solutions. These characterizations follow from standard rank–nullity and invertibility theorems: for a square matrix A of order n, det(A) ≠ 0 if and only if rank(A) = n, and det(A) = 0 if and only if rank(A) < n, i.e. the null space is non-trivial.
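These equivalent characterizations are straightforward to check numerically. A minimal sketch with NumPy, using an arbitrarily chosen illustrative matrix (not one taken from the text above):

<syntaxhighlight lang="python">
import numpy as np

# An illustrative singular matrix: the second row is 3 times the first.
A = np.array([[1.0, 2.0],
              [3.0, 6.0]])

print(np.linalg.det(A))            # 0.0, so A is singular
print(np.linalg.matrix_rank(A))    # 1 < 2: rows/columns are linearly dependent

# A non-zero vector in the null space, so Ax = 0 has non-zero solutions.
x = np.array([2.0, -1.0])
print(np.allclose(A @ x, 0.0))     # True
</syntaxhighlight>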
Conditions and properties
* Determinant is zero: By definition, a singular matrix has a determinant of zero. Consequently, any cofactor expansion or determinant formula evaluates to zero.
* Non-invertible: Since det(A) = 0, the classical inverse A⁻¹ does not exist for a singular matrix.
* Rank deficiency: Any structural feature that reduces the rank causes singularity. For instance, if in a 3-by-3 matrix the third row is the sum of the first two rows, then the matrix is singular (see the sketch after this list).
* Numerical noise/round-off: In numerical computation, a matrix may be nearly singular when its determinant is extremely small (due to floating-point error or ill-conditioning), effectively causing instability. While not exactly zero in finite precision, such near-singularity can cause algorithms to fail as if the matrix were singular.
''In summary, any condition that forces the determinant to zero or the rank to drop below full automatically yields singularity.''
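A brief sketch of the rank-deficiency and round-off conditions above, with NumPy and matrices chosen purely for illustration:

<syntaxhighlight lang="python">
import numpy as np

# Rank deficiency: the third row is the sum of the first two, so rank < 3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(np.linalg.matrix_rank(A))   # 2
print(np.linalg.det(A))           # ~0 (exactly 0 in exact arithmetic)

# Near-singularity: a tiny perturbation makes the matrix technically
# invertible, but its determinant is dominated by rounding error.
B = A + 1e-13 * np.eye(3)
print(np.linalg.det(B))           # tiny but non-zero
</syntaxhighlight>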
Computational implications
* No direct inversion: Many algorithms rely on computing A⁻¹. If A is singular, no inverse exists, so such algorithms either fail or return meaningless results.
* Gaussian elimination: In algorithms like Gaussian elimination (LU factorization), encountering a zero pivot signals singularity. In practice, with partial pivoting, the algorithm will fail to find a nonzero pivot in some column if and only if A is singular. Indeed, ''if no nonzero pivot can be found, the matrix is singular'' (see the sketch after this list).
* Infinite
condition number: The condition number of a matrix (ratio of largest to smallest singular values) is infinite for a truly singular matrix.
An infinite condition number means any numerical solution is unstable: arbitrarily small perturbations in data can produce large changes in solutions. In fact, a system is "singular" precisely if its condition number is infinite,
and it is "ill-conditioned" if the condition number is very large.
* Information/data loss: Geometrically, a singular matrix compresses some dimension(s) to zero (it maps whole subspaces to a point or line). In data analysis or modeling, this means information is lost in some directions. For example, in graphics or geometric transformations, a singular transformation (e.g. projection onto a line) cannot be reversed.
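A minimal sketch of the zero-pivot and condition-number behaviour described in this list, using NumPy with an arbitrary singular example (NumPy's solver is LU-based with partial pivoting, so it reports the singularity rather than returning an answer):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 2.0]])   # the second row is half the first, so A is singular
b = np.array([1.0, 1.0])

# The LU-based solver detects the zero pivot and raises an error
# instead of silently returning a bogus result.
try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print("solve failed:", err)    # "Singular matrix"

# The condition number (largest / smallest singular value) is infinite.
print(np.linalg.cond(A))           # inf (a divide-by-zero warning may appear)
</syntaxhighlight>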
Applications
* Robotics: In mechanical and robotic systems, singular Jacobian matrices indicate kinematic singularities. For example, the Jacobian of a robotic manipulator (mapping joint velocities to end-effector velocity) loses rank when the robot reaches a configuration with constrained motion. At a singular configuration, the robot cannot move or apply forces in certain directions.
This has practical implications for planning and control (avoiding singular poses). Similarly, in structural engineering (finite-element models), a singular stiffness matrix signals an unrestrained mechanism (insufficient boundary conditions), meaning the structure is unstable and can deform without resisting forces.
* Physics and network theory: In graph theory and network physics, the
Laplacian matrix of a graph is inherently singular (it has a zero eigenvalue) because each row sums to zero. This reflects the fact that the uniform vector is in its nullspace. Such singularity encodes fundamental conservation laws (e.g.
Kirchhoff’s current law in circuits) or reflects the graph's connectivity (the multiplicity of the zero eigenvalue equals the number of connected components). In physics, singular matrices can arise in constrained systems (singular mass or inertia matrices in multibody dynamics, indicating dependent coordinates) or in degenerate Hamiltonians (zero-energy modes). A small numerical sketch of the Laplacian case appears after this list.
* Computer science and data analysis: In machine learning and statistics, singular matrices frequently appear due to multicollinearity. For instance, a data matrix leads to a singular covariance or Gram matrix if its features (columns) are linearly dependent. This occurs in linear regression when predictors are collinear, causing the normal-equations matrix XᵀX to be singular. The remedy is often to drop or combine features, or to use the pseudoinverse (see the second sketch after this list). Dimension-reduction techniques like principal component analysis (PCA) exploit the singular value decomposition (SVD), which yields low-rank approximations of data, effectively treating the data covariance as singular by discarding small singular values. In numerical algorithms (e.g. solving linear systems, optimization), detection of singular or nearly singular matrices signals that specialized methods (pseudoinverse, regularized solvers) are needed.
* Computer graphics: Certain transformations (e.g. projections from 3D to 2D) are modeled by singular matrices, since they collapse a dimension. Handling these requires care (one cannot invert a projection). In cryptography and coding theory, invertible matrices are used for mixing operations; singular ones would be avoided or detected as errors.
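Two short sketches follow. The first illustrates the graph-Laplacian singularity described above, using NumPy and a 4-node path graph chosen purely for illustration:

<syntaxhighlight lang="python">
import numpy as np

# Laplacian L = D - A of the path graph 1-2-3-4; every row sums to zero.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
laplacian = np.diag(adjacency.sum(axis=1)) - adjacency

print(np.allclose(laplacian @ np.ones(4), 0))            # True: uniform vector lies in the null space
print(np.isclose(np.linalg.eigvalsh(laplacian)[0], 0))   # True: smallest eigenvalue is zero
print(np.linalg.matrix_rank(laplacian))                  # 3, not 4: the Laplacian is singular
</syntaxhighlight>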
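The second sketch illustrates the multicollinearity point: with made-up data containing a perfectly collinear feature, the normal-equations (Gram) matrix XᵀX is singular, and the Moore–Penrose pseudoinverse still supplies a usable (minimum-norm) solution:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = 2.0 * x1                        # a perfectly collinear second feature
X = np.column_stack([x1, x2])
y = 3.0 * x1 + rng.normal(scale=0.1, size=50)

gram = X.T @ X
print(np.linalg.matrix_rank(gram))   # 1 < 2: the normal-equations matrix is singular

# Inverting X^T X is impossible; the Moore-Penrose pseudoinverse instead
# returns the minimum-norm least-squares coefficients.
beta = np.linalg.pinv(X) @ y
print(beta)                          # one valid solution among infinitely many
</syntaxhighlight>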
History
The study of singular matrices is rooted in the early history of linear algebra. Determinants were first developed (in Japan by Seki in 1683 and in Europe by Leibniz in the 1690s, with Cramer's systematic treatment following in the 18th century) as tools for solving systems of equations. Leibniz explicitly recognized that a system has a solution precisely when a certain determinant expression equals zero. In that sense, singularity (determinant zero) was understood as the critical condition for solvability. Over the 18th and 19th centuries, mathematicians (
Laplace,
Cauchy, etc.) established many properties of determinants and invertible matrices, formalizing the notion that
det(A) = 0 characterizes non-invertibility.
The term "singular matrix" itself emerged later, but the conceptual importance remained. In the 20th century, generalizations like the
Moore–Penrose pseudoinverse were introduced to systematically handle singular or non-square cases. As recent scholarship notes, the idea of a pseudoinverse was proposed by
E. H. Moore in 1920 and rediscovered by R. Penrose in 1955,
reflecting its longstanding utility. The pseudoinverse and singular value decomposition became fundamental in both theory and applications (e.g. in quantum mechanics, signal processing, and more) for dealing with singularity. Today, singular matrices are a canonical subject in linear algebra: they delineate the boundary between invertible (well-behaved) cases and
degenerate (ill-posed) cases. In abstract terms, singular matrices correspond to non-isomorphisms in linear mappings and are thus central to the theory of vector spaces and linear transformations.
Example
Example 1 (2×2 matrix): Consider, for instance, the matrix A with rows (1, 2) and (2, 4); any 2×2 matrix whose second row is a multiple of the first behaves the same way. Compute its determinant: det(A) = (1)(4) − (2)(2) = 0. Thus A is ''singular''. One sees directly that the second row is twice the first, so the rows are linearly dependent. To illustrate the failure of invertibility, attempt Gaussian elimination:
* Pivot on the first row; use it to eliminate the entry below: replacing row 2 by (row 2) − 2·(row 1) gives the rows (1, 2) and (0, 0).
Now the second pivot would be the (2,2) entry, but it is zero. Since no nonzero pivot exists in column 2, elimination stops. This confirms det(A) = 0
and that A has no inverse.
Solving Ax = b exhibits either infinitely many solutions or none. For example, Ax = 0 gives x₁ + 2x₂ = 0 and 2x₁ + 4x₂ = 0, which are the same equation. Thus the null space is one-dimensional (spanned by (−2, 1)), and Ax = b has no solution unless b lies in the one-dimensional column space of A, in which case it has infinitely many solutions.
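A short numerical check of this worked example (assuming the illustrative matrix with rows (1, 2) and (2, 4) used above), sketched with NumPy:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))              # 0.0: A is singular
print(np.linalg.matrix_rank(A))      # 1

# The null space is spanned by (-2, 1): A @ (-2, 1) = 0.
print(A @ np.array([-2.0, 1.0]))     # [0. 0.]

# For b outside the column space (spanned by (1, 2)), Ax = b has no solution;
# lstsq returns a least-squares fit whose residual is therefore non-zero.
b = np.array([1.0, 0.0])
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_ls - b))  # about 0.894, not 0
</syntaxhighlight>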