In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix, or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
The Laplacian matrix relates to many functional graph properties. Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector (the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian), as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal.
The Laplacian matrix is simplest to define for a simple graph, but it is more common in applications to use an edge-weighted graph, i.e., one with weights on its edges, which are the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization, a column/row scaling of the matrix entries, which results in normalized adjacency and Laplacian matrices.
Definitions for ''simple graphs''
Laplacian matrix
Given a simple graph ''G'' with ''n'' vertices v_1, \ldots, v_n, its Laplacian matrix L_{n \times n} is defined element-wise as

: L_{i,j} := \begin{cases} \deg(v_i) & \text{if } i = j \\ -1 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise,} \end{cases}

or equivalently by the matrix

: L = D - A,

where ''D'' is the degree matrix and ''A'' is the graph's adjacency matrix. Since ''G'' is a simple graph, ''A'' only contains 1s or 0s and its diagonal elements are all 0s.
Here is a simple example of a labelled, undirected graph and its Laplacian matrix.
We observe that for the undirected graph both the adjacency matrix and the Laplacian matrix are symmetric, and that the row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular).
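These two observations can be checked numerically. The sketch below builds L = D − A with NumPy for a hypothetical 4-cycle, not the labelled graph from the figure:

```python
import numpy as np

# Hypothetical example: an undirected 4-cycle (vertices 0-1-2-3-0),
# not the labelled graph from the figure above.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # Laplacian matrix

print((L == L.T).all())     # the Laplacian is symmetric
print(L.sum(axis=0))        # row- and column-sums are all zero
```

Since the all-ones vector is in the nullspace, `np.linalg.matrix_rank(L)` is at most `n - 1`, confirming singularity.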
For
directed graphs, either the
indegree or outdegree might be used, depending on the application, as in the following example:
In the directed graph, the adjacency matrix and the Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or the outdegree has been used.
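The two conventions can be sketched as follows, using a hypothetical directed graph rather than the example above; the out-degree Laplacian has zero row-sums, the in-degree Laplacian zero column-sums:

```python
import numpy as np

# Hypothetical directed graph on 3 vertices with edges 0->1, 0->2, 1->2, 2->0
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])
D_out = np.diag(A.sum(axis=1))  # out-degrees are row-sums of A
D_in  = np.diag(A.sum(axis=0))  # in-degrees are column-sums of A

L_out = D_out - A  # every row sums to zero
L_in  = D_in - A   # every column sums to zero
```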
Laplacian matrix for an undirected graph via the oriented incidence matrix
The oriented incidence matrix ''B'' with element ''B''<sub>''ve''</sub> for the vertex ''v'' and the edge ''e'' (connecting vertices v_i and v_j, with i ≠ j) is defined by

: B_{ve} = \begin{cases} 1 & \text{if } v = v_i \\ -1 & \text{if } v = v_j \\ 0 & \text{otherwise.} \end{cases}

Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian matrix ''L'' defined as

: L = B B^\textsf{T},

where B^\textsf{T} is the matrix transpose of ''B''.
(Example: an undirected graph with its incidence matrix ''B'' and Laplacian matrix ''L''.)
An alternative product B^\textsf{T} B defines the so-called ''edge-based Laplacian'', as opposed to the original, commonly used ''vertex-based Laplacian'' matrix ''L''.
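The two products can be compared directly. This sketch uses a hypothetical 3-vertex path and checks that B B^T reproduces D − A regardless of the arbitrary edge orientations:

```python
import numpy as np

# Hypothetical path graph 0-1-2 with edges e0 = (0,1) and e1 = (1,2),
# given arbitrary orientations.
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]])
L = B @ B.T                      # vertex-based Laplacian

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))
print(np.array_equal(L, D - A))  # the two constructions agree

B_flipped = B.copy()
B_flipped[:, 0] *= -1            # reverse the orientation of edge e0
print(np.array_equal(B_flipped @ B_flipped.T, L))  # L is unchanged

L_edge = B.T @ B                 # edge-based Laplacian (2 x 2)
```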
Symmetric Laplacian for a directed graph
The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter.
In matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix ''A'' of the original directed graph and its matrix transpose A^\textsf{T}, where the zero and one entries of ''A'' are treated as logical, rather than numerical, values, as in the following example:
(Example: a directed graph's adjacency matrix, its symmetrized adjacency matrix, and the resulting symmetric Laplacian matrix.)
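The logical symmetrization can be sketched as follows, with a hypothetical directed graph in place of the table's example; the adjacency matrices are combined with Boolean OR, so a reciprocal pair of directed edges still yields a single undirected edge:

```python
import numpy as np

# Hypothetical directed graph: reciprocal edges 0<->1 plus an edge 1->2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]], dtype=bool)

A_sym = (A | A.T).astype(int)   # Boolean sum: logical 1 v 1 = 1, not 2
D = np.diag(A_sym.sum(axis=1))
L = D - A_sym                   # symmetric Laplacian of the undirected graph
```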
Laplacian matrix normalization
A vertex with a large degree, also called a ''heavy node'', results in a large diagonal entry in the Laplacian matrix that dominates the matrix properties. Normalization aims to make the influence of such vertices more equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the normalization process.
Symmetrically normalized Laplacian
The symmetrically normalized Laplacian matrix is defined as

: L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2},

where D^+ is the Moore–Penrose inverse of the degree matrix ''D''. The elements of L^\text{sym} are thus given by

: L^\text{sym}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{1}{\sqrt{\deg(v_i)\deg(v_j)}} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}

The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric.
(Example: adjacency matrix, degree matrix, and symmetrically normalized Laplacian of an undirected graph.)
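The definition can be sketched with NumPy; the hypothetical graph below includes an isolated vertex to show how the Moore–Penrose inverse excludes it, leaving its row and column of the normalized Laplacian zero:

```python
import numpy as np

# Hypothetical simple graph: a triangle on vertices 0,1,2 plus an isolated vertex 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])
deg = A.sum(axis=1)

# (D^+)^{1/2}: reciprocal square roots of nonzero degrees, 0 for isolated vertices
d = np.where(deg > 0, 1.0 / np.sqrt(np.where(deg > 0, deg, 1)), 0.0)
L = np.diag(deg) - A
L_sym = np.diag(d) @ L @ np.diag(d)

print(np.diag(L_sym))   # 1 on the non-isolated vertices, 0 on the isolated one
```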
For a non-symmetric adjacency matrix of a directed graph, either the indegree or the outdegree can be used for normalization:
(Example: a directed graph's adjacency matrix with its out-degree matrix and out-degree normalized Laplacian, and its in-degree matrix and in-degree normalized Laplacian.)
Left (random-walk) and right normalized Laplacians
The left (random-walk) normalized Laplacian matrix is defined as

: L^\text{rw} := D^+ L,

where D^+ is the Moore–Penrose inverse of the degree matrix ''D''. The elements of L^\text{rw} are given by

: L^\text{rw}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{1}{\deg(v_i)} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}

Similarly, the right normalized Laplacian matrix is defined as

: L D^+.

The left or right normalized Laplacian matrix is not symmetric even if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. For example,
(Example: adjacency matrix, degree matrix, left normalized Laplacian, and right normalized Laplacian of an undirected graph.)
The example also demonstrates that if ''G'' has no isolated vertices, then D^+ A is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian L^\text{rw} = I - D^+ A has each row summing to zero. Thus we sometimes alternatively call L^\text{rw} the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian L D^+ = I - A D^+ each column sums to zero since A D^+ is left stochastic.
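These stochasticity claims are easy to verify numerically; the sketch below uses a hypothetical triangle graph with no isolated vertices:

```python
import numpy as np

# Hypothetical triangle graph (no isolated vertices)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))

P = D_inv @ A                       # right stochastic: each row sums to 1
L_left = np.eye(3) - P              # left normalized Laplacian: rows sum to 0
L_right = (np.diag(A.sum(axis=1)) - A) @ D_inv  # columns sum to 0
```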
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose
indegree or outdegree for normalization:
(Example: a directed graph's adjacency matrix with its out-degree matrix and out-degree left normalized Laplacian, and its in-degree matrix and in-degree right normalized Laplacian.)
The left out-degree normalized Laplacian with row-sums all 0 relates to the right stochastic matrix D^+_\text{out} A, while the right in-degree normalized Laplacian with column-sums all 0 contains the left stochastic matrix A D^+_\text{in}.
Definitions for graphs with weighted edges
Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.
Laplacian matrix
The Laplacian matrix is defined by

: L = D - A,

where ''D'' is the degree matrix and ''A'' is the adjacency matrix of the graph.
For
directed graphs, either the
indegree or outdegree might be used, depending on the application, as in the following example:
(Example: a weighted directed graph's adjacency matrix with its in-degree matrix and in-degree Laplacian, and its out-degree matrix and out-degree Laplacian.)
Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.
Symmetric Laplacian via the incidence matrix
For graphs with weighted edges one can define a weighted incidence matrix ''B'' and use it to construct the corresponding symmetric Laplacian as L = B B^\textsf{T}. An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix holding just the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit lengths, where the values of the stiffnesses play the role of the weights of the graph edges.
We thus reuse the definition of the weightless incidence matrix ''B'' with element ''B''<sub>''ve''</sub> for the vertex ''v'' and the edge ''e'' (connecting vertices v_i and v_j, with i > j) defined by

: B_{ve} = \begin{cases} 1 & \text{if } v = v_i \\ -1 & \text{if } v = v_j \\ 0 & \text{otherwise.} \end{cases}
We now also define a diagonal matrix ''W'' containing the edge weights. Even though the edges in the definition of ''B'' are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian matrix ''L'' defined as

: L = B W B^\textsf{T},

where B^\textsf{T} is the matrix transpose of ''B''.
The construction is illustrated in the following example, where every edge ''e''<sub>''i''</sub> is assigned the weight value ''i'':
(Example: an undirected graph with its incidence matrix, edge weights, and the resulting Laplacian matrix.)
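The construction L = B W B^\textsf{T} can be sketched as follows, with a hypothetical weighted path graph in place of the example from the table:

```python
import numpy as np

# Hypothetical weighted path 0-1-2: edge e1 = (0,1) with weight 1,
# edge e2 = (1,2) with weight 2; orientations are arbitrary.
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)
W = np.diag([1.0, 2.0])       # diagonal matrix of edge weights
L = B @ W @ B.T

# Same result as D - A for the weighted adjacency matrix
A = np.array([[0, 1, 0],
              [1, 0, 2],
              [0, 2, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
print(np.allclose(L, D - A))
```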
Symmetric Laplacian for a directed graph
Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix ''A'' of the original directed graph and its matrix transpose A^\textsf{T}, as in the following example:
(Example: a weighted directed graph's adjacency matrix, its symmetrized adjacency matrix, and the resulting symmetric Laplacian matrix.)
where the zero and one entries of ''A'' are treated as numerical, rather than logical, values (as they were for simple graphs), explaining the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values; e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.
Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the
indegree and outdegree, as in the following example:
(Example: a weighted directed graph's adjacency matrix with its out-degree matrix and out-degree Laplacian, and its in-degree matrix and in-degree Laplacian.)
The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix.
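This identity follows from L_\text{out}^\textsf{T} + L_\text{in} = (D_\text{out} - A)^\textsf{T} + (D_\text{in} - A) = (D_\text{out} + D_\text{in}) - (A^\textsf{T} + A), which is exactly the Laplacian of the symmetrized graph. A sketch with a hypothetical weighted directed graph:

```python
import numpy as np

# Hypothetical weighted directed graph
A = np.array([[0, 2, 0],
              [0, 0, 3],
              [1, 0, 0]], dtype=float)
L_out = np.diag(A.sum(axis=1)) - A   # out-degree Laplacian
L_in  = np.diag(A.sum(axis=0)) - A   # in-degree Laplacian

A_sym = A + A.T                      # numeric symmetrization
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym
print(np.allclose(L_out.T + L_in, L_sym))
```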
Laplacian matrix normalization
The goal of normalization is, as for simple graphs, to make the diagonal entries of the Laplacian matrix all unit, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights just as well as because of a large number of connected edges with unit weights.
Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors.
Symmetrically normalized Laplacian
The symmetrically normalized Laplacian is defined as

: L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = I - (D^+)^{1/2} A (D^+)^{1/2},

where ''L'' is the unnormalized Laplacian, ''A'' is the adjacency matrix, ''D'' is the degree matrix, and D^+ is the Moore–Penrose inverse of ''D''. Since the degree matrix ''D'' is diagonal, its reciprocal square root (D^+)^{1/2} is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of ''D''. If all the edge weights are nonnegative, then all the degree values are automatically also nonnegative, and so every degree value has a unique positive square root. To avoid division by zero, vertices with zero degrees are excluded from the normalization process, as in the following example:
(Example: a weighted directed graph's adjacency matrix with its in-degree matrix and in-degree normalized Laplacian, and its out-degree matrix and out-degree normalized Laplacian.)
The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix ''A'' is symmetric and the diagonal entries of ''D'' are nonnegative, in which case we can use the term the ''symmetric normalized Laplacian''.
The symmetric normalized Laplacian matrix can also be written as

: L^\text{sym} = (D^+)^{1/2} B W B^\textsf{T} (D^+)^{1/2} = S S^\textsf{T},

using the weightless incidence matrix ''B'' and the diagonal matrix ''W'' containing the edge weights, and defining the new weighted incidence matrix S = (D^+)^{1/2} B W^{1/2} whose rows are indexed by the vertices and whose columns are indexed by the edges of ''G'', such that each column corresponding to an edge ''e'' = {''u'', ''v''} with weight w_e has an entry \sqrt{w_e/d_u} in the row corresponding to ''u'', an entry -\sqrt{w_e/d_v} in the row corresponding to ''v'', and 0 entries elsewhere.
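Taking the weighted incidence matrix as S = (D^+)^{1/2} B W^{1/2}, the factorization L^\text{sym} = S S^\textsf{T} can be checked numerically on a hypothetical weighted path graph:

```python
import numpy as np

# Hypothetical weighted path 0-1-2 with edge weights 1 and 2
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)
W = np.diag([1.0, 2.0])

L = B @ W @ B.T
deg = np.diag(L)                        # weighted vertex degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

S = D_inv_sqrt @ B @ np.sqrt(W)         # weighted incidence matrix
L_sym = D_inv_sqrt @ L @ D_inv_sqrt
print(np.allclose(S @ S.T, L_sym))
```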
Random walk normalized Laplacian
The random walk normalized Laplacian is defined as

: L^\text{rw} := D^+ L = I - D^+ A,

where ''D'' is the degree matrix. Since the degree matrix ''D'' is diagonal, its inverse D^{-1} is simply defined as a diagonal matrix having diagonal entries which are the reciprocals of the corresponding diagonal entries of ''D''. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element of D^{-1} to 0, which is what the Moore–Penrose inverse does. The matrix elements of L^\text{rw} are given by

: L^\text{rw}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{a_{i,j}}{\deg(v_i)} & \text{if } i \neq j \text{ and } \deg(v_i) \neq 0 \\ 0 & \text{otherwise,} \end{cases}

where a_{i,j} denotes the entries of the adjacency matrix ''A''.
The name of the random-walk normalized Laplacian comes from the fact that this matrix is L^\text{rw} = I - P, where P = D^+ A is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let e_i denote the ''i''-th standard basis vector. Then x = e_i^\textsf{T} P is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex ''i''. More generally, if the vector ''x'' is a probability distribution of the location of a random walker on the vertices of the graph, then x^\textsf{T} P^t is the probability distribution of the walker after ''t'' steps.
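The walk interpretation can be sketched with a hypothetical triangle graph: starting from vertex 0, repeatedly multiplying by ''P'' on the right gives the distribution after each step:

```python
import numpy as np

# Hypothetical triangle graph; the walker moves to a uniformly random neighbour
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A   # transition matrix of the random walk

x = np.array([1.0, 0.0, 0.0])          # walker starts at vertex 0
for _ in range(3):
    x = x @ P                          # distribution after each step
print(x)                               # distribution after 3 steps
```

Each multiplication preserves the total probability because ''P'' is right stochastic.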
The random walk normalized Laplacian can also be called the left normalized Laplacian since the normalization is performed by multiplying the Laplacian by the normalization matrix D^+ on the left. It has each row summing to zero since P = D^+ A is right stochastic, assuming all the weights are non-negative. In the less commonly used right normalized Laplacian L D^+ each column sums to zero since A D^+ is left stochastic.
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose
indegree or outdegree for normalization:
(Example: a weighted directed graph's adjacency matrix with its out-degree matrix and out-degree left normalized Laplacian, and its in-degree matrix and in-degree right normalized Laplacian.)
The left out-degree normalized Laplacian with row-sums all 0 relates to the right stochastic matrix D^+_\text{out} A, while the right in-degree normalized Laplacian with column-sums all 0 contains the left stochastic matrix A D^+_\text{in}.
Negative weights
Negative weights present several challenges for normalization:
* The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and an equally large row-sum of negative weights, together summing to zero, could be considered a heavy node, and both large values scaled, while the diagonal entry remains zero, as for an isolated vertex.
* Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and a positive square root needed for the symmetric normalization would not exist.
* Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value -1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix.
Properties
For an (undirected) graph ''G'' and its Laplacian matrix ''L'' with eigenvalues \lambda_0 \le \lambda_1 \le \cdots \le \lambda_{n-1}:
* ''L'' is symmetric.
* ''L'' is positive-semidefinite (that is, x^\textsf{T} L x \ge 0 for all real vectors ''x''). This can be seen from the fact that the Laplacian is symmetric and diagonally dominant.
* ''L'' is an
M-matrix (its off-diagonal entries are nonpositive, yet the real parts of its eigenvalues are nonnegative).
* Every row sum and column sum of ''L'' is zero. Indeed, in the sum, the degree of the vertex is summed with a "−1" for each neighbor.
* In consequence, \lambda_0 = 0, because the all-ones vector \mathbf{v}_0 = (1, 1, \ldots, 1) satisfies L \mathbf{v}_0 = \mathbf{0}. This also implies that the Laplacian matrix is singular.
* The number of
connected components in the graph is the dimension of the
nullspace of the Laplacian and the
algebraic multiplicity of the 0 eigenvalue.
* The smallest non-zero eigenvalue of ''L'' is called the spectral gap.
* The second smallest eigenvalue of ''L'' (could be zero) is the
algebraic connectivity (or
Fiedler value) of ''G'' and approximates the
sparsest cut of a graph.
* The Laplacian is an operator on the ''n''-dimensional vector space of functions f : V \to \mathbb{R}, where ''V'' is the vertex set of ''G'' and n = |V|.
* When ''G'' is ''k''-regular, the normalized Laplacian is \mathcal{L} = \tfrac{1}{k} L = I - \tfrac{1}{k} A, where ''A'' is the adjacency matrix and ''I'' is the identity matrix.
* For a graph with multiple
connected components, ''L'' is a
block diagonal matrix, where each block is the respective Laplacian matrix for each component, possibly after reordering the vertices (i.e. ''L'' is permutation-similar to a block diagonal matrix).
* The trace of the Laplacian matrix ''L'' is equal to 2m, where ''m'' is the number of edges of the considered graph.
* Now consider an eigendecomposition of ''L'', with unit-norm eigenvectors \mathbf{v}_i and corresponding eigenvalues \lambda_i:

: \lambda_i = \mathbf{v}_i^\textsf{T} L \mathbf{v}_i = \mathbf{v}_i^\textsf{T} B B^\textsf{T} \mathbf{v}_i = \left(B^\textsf{T} \mathbf{v}_i\right)^\textsf{T} \left(B^\textsf{T} \mathbf{v}_i\right).

: Because \lambda_i can be written as the inner product of the vector B^\textsf{T} \mathbf{v}_i with itself, this shows that \lambda_i \ge 0 and so the eigenvalues of ''L'' are all non-negative.
* All eigenvalues of the normalized symmetric Laplacian satisfy 0 = \mu_0 \le \cdots \le \mu_{n-1} \le 2. These eigenvalues (known as the spectrum of the normalized Laplacian) relate well to other graph invariants for general graphs.
* One can check that

: L^\text{rw} = D^+ L = (D^+)^{1/2} L^\text{sym} D^{1/2},

i.e., L^\text{rw} is similar to the normalized Laplacian L^\text{sym}. For this reason, even if L^\text{rw} is in general not symmetric, it has real eigenvalues, exactly the same as the eigenvalues of the normalized symmetric Laplacian L^\text{sym}.
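The similarity, and hence the equality of spectra, can be verified numerically; the sketch below uses a hypothetical path graph, whose irregular degrees make L^\text{rw} genuinely non-symmetric:

```python
import numpy as np

# Hypothetical path graph 0-1-2 (irregular degrees, so L_rw is not symmetric)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A

L_rw = np.diag(1.0 / deg) @ L                          # random-walk Laplacian
L_sym = np.diag(deg**-0.5) @ L @ np.diag(deg**-0.5)    # symmetric normalized

ev_rw = np.sort(np.linalg.eigvals(L_rw).real)
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
print(np.allclose(ev_rw, ev_sym))   # identical spectra
```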
Interpretation as the discrete Laplace operator approximating the continuous Laplacian
The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method.
(See Discrete Poisson equation.) In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.
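As a sketch of this correspondence (the path graph stands in for a 1-D grid), the Laplacian of the path graph on ''n'' vertices coincides with the standard second-order finite-difference matrix for the negative 1-D Laplacian with grid size one and homogeneous Neumann boundaries:

```python
import numpy as np

n = 5
# Laplacian of the path graph on n vertices (grid size 1 on every edge).
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

# Finite-difference matrix for -d^2/dx^2 with homogeneous Neumann (free)
# boundaries: interior stencil (-1, 2, -1), boundary rows reduced to (1, -1)
# by the zero-derivative condition.
FD = 2 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
FD[0, 0] = FD[-1, -1] = 1

assert np.array_equal(L, FD)
```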
Generalizations and extensions of the Laplacian matrix
Generalized Laplacian
The generalized Laplacian <math>Q</math> is defined as:
:<math>\begin{cases}
Q_{i,j} < 0 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j, \\
Q_{i,j} = 0 & \text{if } i \neq j \text{ and } v_i \text{ is not adjacent to } v_j, \\
\text{any number} & \text{otherwise.}
\end{cases}</math>
Notice that the ordinary Laplacian is a generalized Laplacian.
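As an illustration, a small predicate (the helper name `is_generalized_laplacian` is ours, not a standard API) can check the off-diagonal sign pattern against an adjacency matrix; the ordinary Laplacian passes:

```python
import numpy as np

def is_generalized_laplacian(Q, A):
    """Off-diagonal test: Q[i, j] < 0 exactly where i and j are adjacent,
    Q[i, j] == 0 where distinct i, j are non-adjacent; diagonal unconstrained."""
    n = Q.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if A[i, j] != 0 and not Q[i, j] < 0:
                return False
            if A[i, j] == 0 and Q[i, j] != 0:
                return False
    return True

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # ordinary Laplacian
assert is_generalized_laplacian(L, A)   # ordinary Laplacian qualifies
```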
Admittance matrix of an AC circuit
The Laplacian of a graph was first introduced to model electrical networks.
In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances.
The weight of edge (''i'', ''j'') is, by convention, ''minus'' the reciprocal of the impedance directly between ''i'' and ''j''.
In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian. Such a matrix is usually called an "admittance matrix", denoted <math>Y</math>, rather than a "Laplacian". This is one of the rare applications that give rise to complex symmetric matrices.
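A minimal sketch (node count and impedance values are illustrative) of assembling such a complex symmetric admittance matrix from branch impedances:

```python
import numpy as np

# Hypothetical 3-node AC network: branch impedances between node pairs
# (illustrative values, in ohms).
Z = {(0, 1): 1 + 2j, (1, 2): 3 - 1j}

n = 3
Y = np.zeros((n, n), dtype=complex)
for (i, j), z in Z.items():
    y = 1 / z        # branch admittance: reciprocal of the impedance
    Y[i, j] -= y     # off-diagonal entries carry minus the admittance
    Y[j, i] -= y
    Y[i, i] += y     # diagonal entries sum the incident admittances
    Y[j, j] += y

assert np.allclose(Y, Y.T)             # complex symmetric ...
assert not np.allclose(Y, Y.conj().T)  # ... but not Hermitian
```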
Magnetic Laplacian
There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix. The magnetic Laplacian for a directed graph with real weights <math>w_{ij}</math> is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with the complex entries
:<math>\gamma_q(w_{ij}) = e^{i 2 \pi q (w_{ij} - w_{ji})},</math>
which encode the edge direction into the phase in the complex plane.
In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph, which is subject to the action of a magnetic field, and the parameter <math>q</math> is called the electric charge.
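A numerical sketch of this construction, assuming the common phase convention <math>\gamma_q(w_{ij}) = e^{i 2\pi q (w_{ij} - w_{ji})}</math> (the graph and charge value here are illustrative):

```python
import numpy as np

# Directed 3-cycle with unit weights w[i, j]; values are illustrative.
w = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
q = 0.25  # "electric charge" parameter

W_sym = (w + w.T) / 2                            # symmetrized weights
L_sym = np.diag(W_sym.sum(axis=1)) - W_sym       # symmetrized Laplacian
Gamma = np.exp(1j * 2 * np.pi * q * (w - w.T))   # Hermitian phase matrix
L_q = Gamma * L_sym                              # Hadamard (entrywise) product

assert np.allclose(L_q, L_q.conj().T)            # magnetic Laplacian is Hermitian
assert np.all(np.linalg.eigvalsh(L_q) > -1e-10)  # and positive semi-definite
```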
Deformed Laplacian
The deformed Laplacian is commonly defined as
:<math>\Delta(s) = I - sA + s^2 (D - I),</math>
where ''I'' is the identity matrix, ''A'' is the adjacency matrix, ''D'' is the degree matrix, and ''s'' is a (complex-valued) number.
The standard Laplacian is just <math>\Delta(1)</math>, and <math>\Delta(-1) = D + A</math> is the signless Laplacian.
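A quick check of the two special cases (a sketch with an arbitrary small graph):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
I = np.eye(3)

def deformed_laplacian(s):
    # Delta(s) = I - s*A + s^2*(D - I)
    return I - s * A + s**2 * (D - I)

assert np.allclose(deformed_laplacian(1), D - A)   # s = 1: standard Laplacian
assert np.allclose(deformed_laplacian(-1), D + A)  # s = -1: signless Laplacian
```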
Signless Laplacian
The signless Laplacian is defined as
:<math>Q = D + A,</math>
where <math>D</math> is the degree matrix and <math>A</math> is the adjacency matrix. Like the signed Laplacian <math>L</math>, the signless Laplacian <math>Q</math> is also positive semi-definite, as it can be factored as
:<math>Q = R R^\mathsf{T},</math>
where <math>R</math> is the incidence matrix. <math>Q</math> has a 0-eigenvector if and only if the graph has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as
:<math>\mathbf{x}^\mathsf{T} Q \mathbf{x} = \mathbf{x}^\mathsf{T} R R^\mathsf{T} \mathbf{x} = |R^\mathsf{T} \mathbf{x}|^2.</math>
This has a nonzero solution with <math>R^\mathsf{T} \mathbf{x} = 0</math> if and only if the graph has a bipartite connected component.
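Both claims can be checked numerically on a small bipartite example (a sketch; the path graph is chosen for illustration):

```python
import numpy as np

# Path graph 0-1-2 (bipartite), edges (0, 1) and (1, 2).
edges = [(0, 1), (1, 2)]
n = 3
R = np.zeros((n, len(edges)))      # unoriented incidence matrix
for e, (i, j) in enumerate(edges):
    R[i, e] = R[j, e] = 1

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
Q = np.diag(A.sum(axis=1)) + A     # signless Laplacian D + A

assert np.allclose(Q, R @ R.T)     # factorization through the incidence matrix
eigvals = np.linalg.eigvalsh(Q)
assert abs(eigvals[0]) < 1e-10     # bipartite component => 0 is an eigenvalue
```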
Directed multigraphs
An analogue of the Laplacian matrix can be defined for directed multigraphs.
In this case the Laplacian matrix ''L'' is defined as
:<math>L = D - A,</math>
where ''D'' is a diagonal matrix with ''D''<sub>''i'',''i''</sub> equal to the outdegree of vertex ''i'' and ''A'' is a matrix with ''A''<sub>''i'',''j''</sub> equal to the number of edges from ''i'' to ''j'' (including loops).
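A sketch with an illustrative directed multigraph (parallel edges and a loop), checking that each row of ''L'' sums to zero:

```python
import numpy as np

# A[i, j] = number of edges from i to j: two parallel edges 0->1,
# one edge 1->2, and a loop at vertex 2 (loops are included).
A = np.array([[0, 2, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
D = np.diag(A.sum(axis=1))   # out-degrees on the diagonal
L = D - A

assert np.allclose(L.sum(axis=1), 0)  # row sums vanish, as in the undirected case
```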
Open source software implementations
*
SciPy
*
NetworkX
*
Julia
*
scikit-learn
Spectral Clustering
* PyGSP: Graph Signal Processing in Python
* megaman: Manifold Learning for Millions of Points
* smoothG
* Laplacian Change Point Detection for Dynamic Graphs (KDD 2020)
* LaplacianOpt (A Julia Package for Maximizing Laplacian's Second Eigenvalue of Weighted Graphs)
* LigMG (Large Irregular Graph MultiGrid)
* Laplacians.jl
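For example, SciPy exposes the construction directly (a sketch; `scipy.sparse.csgraph.laplacian` accepts a dense or sparse adjacency matrix):

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

L = laplacian(A)                     # unnormalized Laplacian D - A
L_norm = laplacian(A, normed=True)   # symmetric normalized Laplacian

assert np.allclose(L, np.diag(A.sum(axis=1)) - A)
```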
See also
*
Stiffness matrix
*
Resistance distance
*
Transition rate matrix
*
Calculus on finite weighted graphs
*
Graph Fourier transform
References
{{Matrix classes
Algebraic graph theory
Matrices (mathematics)
Numerical differential equations