In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of non-negative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to the ranking of football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.


Statement

Let ''positive'' and ''non-negative'' respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix ''A'' are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers ''A''''k'' as ''k'' → ∞ is controlled by the eigenvalue of ''A'' with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when ''A'' is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.


Positive matrices

Let A = (a_{ij}) be an n \times n positive matrix: a_{ij} > 0 for 1 \le i,j \le n. Then the following statements hold.
# There is a positive real number ''r'', called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue or dominant eigenvalue), such that ''r'' is an eigenvalue of ''A'' and any other eigenvalue ''λ'' (possibly complex) is strictly smaller than ''r'' in absolute value, |''λ''| < ''r''. Thus, the spectral radius \rho(A) is equal to ''r''. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
# The Perron–Frobenius eigenvalue is simple: ''r'' is a simple root of the characteristic polynomial of ''A''. Consequently, the eigenspace associated to ''r'' is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for ''AT'', the transpose of ''A''.)
# There exists an eigenvector ''v'' = (''v''1,...,''v''''n'')''T'' of ''A'' with eigenvalue ''r'' such that all components of ''v'' are positive: ''Av'' = ''rv'', ''v''''i'' > 0 for 1 ≤ ''i'' ≤ ''n''. (Respectively, there exists a positive left eigenvector ''w'': ''wTA'' = ''rwT'', ''w''''i'' > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, or dominant eigenvector.
# There are no other positive (or even non-negative) eigenvectors except positive multiples of ''v'' (respectively, no left eigenvectors except multiples of ''w''), i.e., all other eigenvectors must have at least one negative or non-real component.
# \lim_{k \to \infty} A^k/r^k = v w^T, where the left and right eigenvectors for ''A'' are normalized so that ''wTv'' = 1. Moreover, the matrix ''vwT'' is the projection onto the eigenspace corresponding to ''r''. This projection is called the Perron projection.
# Collatz–Wielandt formula: for all non-negative non-zero vectors ''x'', let ''f''(''x'') be the minimum value of [''Ax'']''i'' / ''x''''i'' taken over all those ''i'' such that ''x''''i'' ≠ 0. Then ''f'' is a real-valued function whose maximum over all non-negative non-zero vectors ''x'' is the Perron–Frobenius eigenvalue.
# A "min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors ''x'', let ''g''(''x'') be the maximum value of [''Ax'']''i'' / ''x''''i'' taken over ''i''. Then ''g'' is a real-valued function whose minimum over all strictly positive vectors ''x'' is the Perron–Frobenius eigenvalue.
# Birkhoff–Varga formula: Let ''x'' and ''y'' be strictly positive vectors. Then r = \sup_{x>0} \inf_{y>0} \frac{y^T A x}{y^T x} = \inf_{x>0} \sup_{y>0} \frac{y^T A x}{y^T x} = \inf_{x>0} \sup_{y>0} \sum_{i,j=1}^n y_i a_{ij} x_j \Big/ \sum_{i=1}^n y_i x_i.
# Donsker–Varadhan–Friedland formula: Let ''p'' be a probability vector and ''x'' a strictly positive vector. Then r = \sup_p \inf_{x>0} \sum_{i=1}^n p_i [Ax]_i/x_i.
# Fiedler formula: r = \sup_{z>0} \ \inf_{x \circ y = z,\; x>0,\; y>0} \frac{y^T A x}{y^T x} = \sup_{z>0} \ \inf_{x \circ y = z,\; x>0,\; y>0} \sum_{i,j=1}^n y_i a_{ij} x_j \Big/ \sum_{i=1}^n y_i x_i.
# The Perron–Frobenius eigenvalue satisfies the inequalities
::\min_i \sum_j a_{ij} \le r \le \max_i \sum_j a_{ij}.
All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer, chapter 8, claims 8.2.11–8.2.15 (page 667) and exercises 8.2.5, 8.2.7, 8.2.9 (pages 668–669). The left and right eigenvectors ''w'' and ''v'' are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector ''v'' sums to one, while ''wTv'' = 1.
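These statements can be illustrated numerically. The following minimal sketch, assuming NumPy and using an arbitrary 3 × 3 positive matrix, checks the Perron root, the positivity of the right and left eigenvectors, the row-sum bounds, and the limit ''Ak''/''rk'' → ''vwT'':

```python
# Numerical check of the Perron–Frobenius statements for a positive matrix.
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))
r = eigvals[k].real                      # Perron root: real and simple
v = eigvecs[:, k].real
v = v / v.sum()                          # right Perron vector, all positive
assert np.all(v > 0)

# Left Perron vector from A^T, normalized so that w @ v = 1.
wvals, wvecs = np.linalg.eig(A.T)
w = wvecs[:, np.argmax(np.abs(wvals))].real
w = w / (w @ v)

# Row-sum bounds: min_i sum_j a_ij <= r <= max_i sum_j a_ij.
rows = A.sum(axis=1)
assert rows.min() <= r <= rows.max()

# Collatz–Wielandt: f(x) = min_i (Ax)_i / x_i never exceeds r.
x = np.ones(3)
assert np.min(A @ x / x) <= r + 1e-12

# A^k / r^k converges to the Perron projection v w^T.
P = np.linalg.matrix_power(A, 50) / r**50
assert np.allclose(P, np.outer(v, w), atol=1e-8)
print(r, v)
```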


Non-negative matrices

There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than ''or equal'', in absolute value, to all other eigenvalues. However, for the example A = \left(\begin{smallmatrix}0 & 1\\ 1 & 0\end{smallmatrix}\right), the maximum eigenvalue ''r'' = 1 has the same absolute value as the other eigenvalue −1; while for A = \left(\begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\right), the maximum eigenvalue is ''r'' = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices, the ''irreducible'' matrices, for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form \omega r, where ''r'' is a real strictly positive eigenvalue and \omega ranges over the complex ''h''th roots of 1 for some positive integer ''h'' called the period of the matrix. The eigenvector corresponding to ''r'' has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also, all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.
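The periodic structure can be seen in a small computation. As a sketch (assuming NumPy), the 3 × 3 cyclic permutation matrix is irreducible with period ''h'' = 3 and ''r'' = 1, so its spectrum is exactly the set of cube roots of unity:

```python
# Eigenvalues of an irreducible matrix with period h lie at r * (h-th roots
# of unity); here h = 3 and r = 1.
import numpy as np

C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

vals = np.linalg.eigvals(C)
roots = np.exp(2j * np.pi * np.arange(3) / 3)   # 1, ω, ω² with ω = e^{2πi/3}
for z in roots:
    assert np.any(np.isclose(vals, z))          # each root of unity occurs

# The eigenvector for r = 1 is strictly positive: the uniform vector.
assert np.allclose(C @ np.ones(3), np.ones(3))
```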


Classification of matrices

Let ''A'' be an ''n'' × ''n'' square matrix over a field ''F''. The matrix ''A'' is irreducible if any of the following equivalent properties holds.

Definition 1: ''A'' does not have non-trivial invariant ''coordinate'' subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of ''Fn''. More explicitly, for any linear subspace spanned by standard basis vectors ''e''''i''1, ..., ''e''''i''''k'', 0 < ''k'' < ''n'', its image under the action of ''A'' is not contained in the same subspace.

Definition 2: ''A'' cannot be conjugated into block upper triangular form by a permutation matrix ''P'':
: PAP^{-1} \ne \begin{pmatrix} E & F \\ O & G \end{pmatrix},
where ''E'' and ''G'' are non-trivial (i.e. of size greater than zero) square matrices.

Definition 3: One can associate with a matrix ''A'' a certain directed graph ''GA''. It has ''n'' vertices labeled 1,...,''n'', and there is an edge from vertex ''i'' to vertex ''j'' precisely when ''a''''ij'' ≠ 0. Then the matrix ''A'' is irreducible if and only if its associated graph ''GA'' is strongly connected.

If ''F'' is the field of real or complex numbers, then we also have the following condition.

Definition 4: The group representation of (\mathbb R, +) on \mathbb{R}^n or (\mathbb C, +) on \mathbb{C}^n given by t \mapsto \exp(tA) has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)

A matrix is reducible if it is not irreducible.

A real matrix ''A'' is primitive if it is non-negative and its ''m''th power is positive for some natural number ''m'' (i.e. all entries of ''Am'' are positive).

Let ''A'' be real and non-negative. Fix an index ''i'' and define the period of index ''i'' to be the greatest common divisor of all natural numbers ''m'' such that (''Am'')''ii'' > 0. When ''A'' is irreducible, the period of every index is the same and is called the period of ''A''. In fact, when ''A'' is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in ''GA'' (see Kitchens, page 16). The period is also called the index of imprimitivity (Meyer, page 674) or the order of cyclicity. If the period is 1, ''A'' is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices; a sketch of an irreducibility and period check appears below.

All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period. Results for non-negative matrices were first obtained by Frobenius in 1912.
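The definitions above lend themselves to direct computation. The sketch below (assuming NumPy; the search bound and the test matrix are illustrative choices) checks irreducibility via the criterion that (I + ''A'')''n''−1 is positive, and computes the period of one index as a greatest common divisor:

```python
# Irreducibility test via (I + A)^(n-1) > 0, and the period of index 0 as
# the gcd of the powers m with (A^m)_00 > 0, scanned up to a fixed bound.
import math
import numpy as np

def is_irreducible(A):
    n = A.shape[0]
    B = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool(np.all(B > 0))

def period(A, bound=100):
    n = A.shape[0]
    g = 0
    M = (A > 0).astype(float)
    P = np.eye(n)
    for m in range(1, bound + 1):
        P = (P @ M > 0).astype(float)   # keep reachability only, avoid overflow
        if P[0, 0] > 0:
            g = math.gcd(g, m)
    return g

C = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])  # cyclic: irreducible, period 3
print(is_irreducible(C), period(C))              # True 3
```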


Perron–Frobenius theorem for irreducible non-negative matrices

Let A be an irreducible non-negative N \times N matrix with period h and spectral radius \rho(A) = r. Then the following statements hold.
* The number r \in \mathbb{R}^{+} is a positive real number and it is an eigenvalue of the matrix A, called the Perron–Frobenius eigenvalue.
* The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
* A has both a right and a left eigenvector, respectively \mathbf v and \mathbf w, with eigenvalue r and with all components positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue r.
* The matrix A has exactly h (where h is the period) complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an hth root of unity.
* Let \omega = 2\pi/h. Then the matrix A is similar to e^{i\omega}A, and consequently the spectrum of A is invariant under multiplication by e^{i\omega} (i.e. under rotations of the complex plane by the angle \omega).
* If h > 1 then there exists a permutation matrix P such that
::PAP^{-1}= \begin{pmatrix} O & A_1 & O & O & \ldots & O \\ O & O & A_2 & O & \ldots & O \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ O & O & O & O & \ldots & A_{h-1} \\ A_h & O & O & O & \ldots & O \end{pmatrix},
:: where O denotes a zero matrix and the blocks along the main diagonal are square matrices.
* Collatz–Wielandt formula: for all non-negative non-zero vectors \mathbf x, let f(\mathbf x) be the minimum value of [A\mathbf x]_i/x_i taken over all those i such that x_i \neq 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue.
* The Perron–Frobenius eigenvalue satisfies the inequalities
::\min_i \sum_j a_{ij} \le r \le \max_i \sum_j a_{ij}.
The example A = \left(\begin{smallmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{smallmatrix}\right) shows that the (square) zero matrices along the diagonal may be of different sizes, the blocks A_j need not be square, and h need not divide n.


Further properties

Let ''A'' be an irreducible non-negative matrix. Then:
# (I + ''A'')''n''−1 is a positive matrix (Meyer, claim 8.3.5, p. 672). For a non-negative ''A'', this is also a sufficient condition for irreducibility (Minc, Corollary 2.2, p. 6).
# Wielandt's theorem. If |''B''| < ''A'', then ρ(''B'') ≤ ρ(''A''). If equality holds (i.e. if ''μ'' = ρ(''A'')''e''''iφ'' is an eigenvalue of ''B''), then ''B'' = ''e''''iφ''''DAD''−1 for some diagonal unitary matrix ''D'' (i.e. the diagonal elements of ''D'' equal ''e''''iΘ''''l'' and the off-diagonal elements are zero).
# If some power ''Aq'' is reducible, then it is completely reducible, i.e. for some permutation matrix ''P'' it is true that
:: P A^q P^{-1} = \begin{pmatrix} A_1 & O & O & \dots & O \\ O & A_2 & O & \dots & O \\ \vdots & \vdots & \vdots & & \vdots \\ O & O & O & \dots & A_d \end{pmatrix},
:: where the ''Ai'' are irreducible matrices having the same maximal eigenvalue. The number ''d'' of these matrices is the greatest common divisor of ''q'' and ''h'', where ''h'' is the period of ''A''.
# If c(x) = x^n + c_{k_1} x^{n-k_1} + c_{k_2} x^{n-k_2} + \dots + c_{k_s} x^{n-k_s} is the characteristic polynomial of ''A'' in which only the non-zero terms are listed, then the period of ''A'' equals the greatest common divisor of k_1, k_2, \dots, k_s.
# Cesàro averages: \lim_{k \to \infty} \frac{1}{k}\sum_{i=0}^{k} A^i/r^i = v w^T, where the left and right eigenvectors for ''A'' are normalized so that ''wTv'' = 1. Moreover, the matrix ''vwT'' is the spectral projection corresponding to ''r'', the Perron projection.
# Let ''r'' be the Perron–Frobenius eigenvalue; then the adjoint (adjugate) matrix for (''r'' − ''A'') is positive.
# If ''A'' has at least one non-zero diagonal element, then ''A'' is primitive.
# If 0 ≤ ''A'' < ''B'', then ''rA'' ≤ ''rB''. Moreover, if ''B'' is irreducible, then the inequality is strict: ''rA'' < ''rB''.
A matrix ''A'' is primitive provided it is non-negative and ''Am'' is positive for some ''m'', and hence ''Ak'' is positive for all ''k'' ≥ ''m''. To check primitivity, one needs a bound on how large the minimal such ''m'' can be, depending on the size of ''A'':
* If ''A'' is a non-negative primitive matrix of size ''n'', then A^{n^2-2n+2} is positive. Moreover, this is the best possible result, since for the matrix ''M'' below, the power ''Mk'' is not positive for every ''k'' < ''n''2 − 2''n'' + 2, since (M^{n^2-2n+1})_{11} = 0.
:M = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ 1 & 1 & 0 & 0 & \cdots & 0 \end{pmatrix}


Applications

Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain.


Non-negative matrices

The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix ''A'' may be written in upper-triangular block form (known as the normal form of a reducible matrix)
::::''PAP''−1 = \begin{pmatrix} B_1 & * & * & \cdots & * \\ 0 & B_2 & * & \cdots & * \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & * \\ 0 & 0 & 0 & \cdots & B_h \end{pmatrix}
where ''P'' is a permutation matrix and each ''Bi'' is a square matrix that is either irreducible or zero. Now if ''A'' is non-negative then so too is each block of ''PAP''−1; moreover, the spectrum of ''A'' is just the union of the spectra of the ''Bi''. The invertibility of ''A'' can also be studied. The inverse of ''PAP''−1 (if it exists) must have diagonal blocks of the form ''Bi''−1, so if any ''Bi'' isn't invertible then neither is ''PAP''−1 or ''A''. Conversely, let ''D'' be the block-diagonal matrix corresponding to ''PAP''−1, in other words ''PAP''−1 with the asterisks zeroised. If each ''Bi'' is invertible then so is ''D'', and ''D''−1(''PAP''−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if ''Nk'' = 0, the inverse of 1 − ''N'' is 1 + ''N'' + ''N''2 + ... + ''N''''k''−1), so ''PAP''−1 and ''A'' are both invertible. Therefore, many of the spectral properties of ''A'' may be deduced by applying the theorem to the irreducible ''Bi''. For example, the Perron root is the maximum of the ρ(''Bi''). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive.
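As a small illustration (a sketch assuming NumPy; the blocks are arbitrary), the spectrum of a matrix in this normal form is the union of the block spectra, and the Perron root is the maximum of the ρ(''Bi''):

```python
# For a reducible matrix in normal form, the spectrum is the union of the
# spectra of the diagonal blocks.
import numpy as np

B1 = np.array([[2.0]])                      # trivial 1x1 irreducible block
B2 = np.array([[0.0, 1.0], [1.0, 0.0]])     # irreducible block, spectrum ±1
A = np.block([[B1, np.ones((1, 2))],        # upper-triangular block form
              [np.zeros((2, 1)), B2]])

spec = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(spec, [-1.0, 1.0, 2.0])  # union {2} ∪ {1, −1}; Perron root 2
```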


Stochastic matrices

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. If ''A'' is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(''A'') by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If ''A'' is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
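For an irreducible row-stochastic matrix, the left Perron vector normalized to sum 1 is the stationary distribution of the associated Markov chain, and every row of the Perron projection equals it. A minimal sketch, assuming NumPy and an arbitrary 2 × 2 chain:

```python
# Stationary distribution of an irreducible row-stochastic matrix as the
# left Perron vector; the Perron projection has all rows equal to it.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.4, 0.6]])
assert np.allclose(A.sum(axis=1), 1.0)       # row-stochastic

vals, vecs = np.linalg.eig(A.T)              # left eigenvectors of A
w = vecs[:, np.argmax(vals.real)].real
w = w / w.sum()                              # stationary distribution (0.8, 0.2)
assert np.allclose(w @ A, w)

P = np.linalg.matrix_power(A, 100)           # Perron projection as a limit
assert np.allclose(P, np.tile(w, (2, 1)))    # every row equals w
```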


Algebraic graph theory

The theorem has particular use in algebraic graph theory. The "underlying graph" of a non-negative ''n''-square matrix is the graph with vertices numbered 1, ..., ''n'' and arc ''ij'' if and only if ''Aij'' ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.
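As a sketch (assuming NumPy; the digraph is a hypothetical 4-cycle with one extra chord), the adjacency matrix of a strongly connected digraph is irreducible, so its Perron vector is strictly positive; normalized, it can serve as an eigenvector centrality for the graph:

```python
# Positive Perron vector of the adjacency matrix of a strongly connected
# digraph (4-cycle 1→2→3→4→1 plus the chord 4→2).
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)

vals, vecs = np.linalg.eig(A)
k = np.argmax(np.abs(vals))
v = vecs[:, k].real
v = v / v.sum()
assert np.all(v > 0)                      # strictly positive Perron vector
```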


Finite Markov chains

The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).


Compact operators

More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.


Proof methods

A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950), who used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Yet another proof is based on spectral theory, from which part of the arguments are borrowed.


Perron root is strictly maximal eigenvalue for positive (and primitive) matrices

If ''A'' is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue ''r'' (the Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues; hence ''r'' is the spectral radius of ''A''. This statement does not hold for general non-negative irreducible matrices, which have ''h'' eigenvalues with the same absolute value as ''r'', where ''h'' is the period of ''A''.


Proof for positive matrices

Let ''A'' be a positive matrix, and assume that its spectral radius ρ(''A'') = 1 (otherwise consider ''A''/ρ(''A'')). Hence there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer ''m'' such that ''Am'' is a positive matrix and the real part of λ''m'' is negative (since λ = ''e''''i''θ with θ ≠ 0, the powers λ''m'' rotate around the unit circle, and infinitely many of them have negative real part). Let ε be half the smallest diagonal entry of ''Am'' and set ''T'' = ''Am'' − ε''I'', which is yet another positive matrix. Moreover, if ''Ax'' = λ''x'' then ''Amx'' = λ''mx'', thus λ''m'' − ε is an eigenvalue of ''T''. Because of the choice of ''m'' this point lies outside the unit disk, consequently ρ(''T'') > 1. On the other hand, all the entries in ''T'' are positive and less than or equal to those in ''Am'', so by Gelfand's formula ρ(''T'') ≤ ρ(''Am'') ≤ ρ(''A'')''m'' = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle. Exactly the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices.


Lemma

Given a non-negative ''A'', assume there exists ''m'' such that ''Am'' is positive; then ''A''''m''+1, ''A''''m''+2, ''A''''m''+3, ... are all positive. Indeed, ''A''''m''+1 = ''AAm'', so it can have a zero element only if some row of ''A'' is entirely zero, but in that case the same row of ''Am'' would be zero, a contradiction. Applying the same arguments as above to ''Am'' then proves the main claim for primitive matrices.


Power method and the positive eigenpair

For a positive (or more generally irreducible non-negative) matrix ''A'' the dominant eigenvector is real and strictly positive (respectively non-negative for non-negative ''A''). This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix ''A'' the sequence of vectors b_{k+1} = Ab_k/\|Ab_k\| converges to the eigenvector with the maximum eigenvalue. (The initial vector ''b''0 can be chosen arbitrarily except for some measure-zero set.) Starting with a non-negative vector ''b''0 produces a sequence of non-negative vectors ''bk''. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for ''A'', proving the assertion. The corresponding eigenvalue is non-negative.

The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this. Second, one must ensure strict positivity of all of the components of the eigenvector for the case of irreducible matrices. This follows from the following fact, which is of independent interest:

:Lemma: given a positive (or more generally irreducible non-negative) matrix ''A'' and any non-negative eigenvector ''v'' for ''A'', ''v'' is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.

Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices ''i'', ''j'' there exists ''m'' such that (''Am'')''ij'' is strictly positive. Given a non-negative eigenvector ''v'' with eigenvalue ''r'', at least one of its components, say the ''i''-th, is strictly positive. The corresponding eigenvalue is strictly positive: indeed, given ''n'' such that (''An'')''ii'' > 0, one has ''rnvi'' = (''Anv'')''i'' ≥ (''An'')''ii''''vi'' > 0, hence ''r'' is strictly positive. The eigenvector is also strictly positive: given any index ''j'' and ''m'' such that (''Am'')''ji'' > 0, one has ''rmvj'' = (''Amv'')''j'' ≥ (''Am'')''ji''''vi'' > 0, hence ''vj'' is strictly positive, i.e., the eigenvector is strictly positive.
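A minimal sketch of the power method, assuming NumPy; the iteration count and the test matrix are arbitrary choices:

```python
# Power iteration b <- A b / ||A b|| from a non-negative start; for a
# primitive matrix the iterates converge to the positive Perron vector,
# and the Rayleigh quotient converges to the Perron root r.
import numpy as np

def power_method(A, iters=200):
    b = np.ones(A.shape[0])          # non-negative initial vector
    for _ in range(iters):
        b = A @ b
        b = b / np.linalg.norm(b)
    r = b @ A @ b / (b @ b)          # eigenvalue estimate
    return r, b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
r, b = power_method(A)
assert np.all(b > 0)                 # strictly positive limit vector
assert np.allclose(A @ b, r * b)     # (r, b) is an eigenpair
print(r, b)
```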


Multiplicity one

This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to the Perron–Frobenius eigenvalue ''r'' is one-dimensional. The arguments here are close to those in Meyer.

Given a strictly positive eigenvector ''v'' corresponding to ''r'', suppose there is another real eigenvector ''w'' with the same eigenvalue. (The vectors ''v'' and ''w'' can be chosen to be real, because ''A'' and ''r'' are both real, so the null space of ''A'' − ''r'' has a basis consisting of real vectors.) Assume at least one of the components of ''w'' is positive (otherwise multiply ''w'' by −1). Given the maximal possible α such that ''u'' = ''v'' − α''w'' is non-negative, one of the components of ''u'' is zero (otherwise α would not be maximal). The vector ''u'' is an eigenvector. It is non-negative, and by the lemma described in the previous section a non-negative eigenvector must be strictly positive; since at least one component of ''u'' is zero, ''u'' must be the zero vector. Hence ''w'' is proportional to ''v'', and the eigenspace is one-dimensional.

Next, there are no Jordan cells corresponding to the Perron–Frobenius eigenvalue ''r'' or to any other eigenvalue of the same absolute value. If there were a Jordan cell, then the infinity norm of (''A''/''r'')''k'' would tend to infinity for ''k'' → ∞, but that contradicts the existence of the positive eigenvector. Assume ''r'' = 1 (otherwise consider ''A''/''r''). Letting ''v'' be a Perron–Frobenius strictly positive eigenvector, so ''Av'' = ''v'', then:
: \|v\|_\infty = \|A^k v\|_\infty \ge \|A^k\|_\infty \min_i (v_i), \quad\Rightarrow\quad \|A^k\|_\infty \le \|v\|_\infty/\min_i (v_i).
So \|A^k\|_\infty is bounded for all ''k''. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan cell for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because the existence of a Jordan cell implies that \|A^k\|_\infty is unbounded. For a two-by-two Jordan cell:
: J^k = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}^k = \begin{pmatrix} \lambda^k & k\lambda^{k-1} \\ 0 & \lambda^k \end{pmatrix},
hence \|J^k\|_\infty \ge k (for |λ| = 1), so it tends to infinity as ''k'' does. Since ''Jk'' = ''C''−1''AkC'', we have \|A^k\| \ge \|J^k\|/(\|C^{-1}\|\,\|C\|), so \|A^k\| also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues.

Combining the two claims above reveals that the Perron–Frobenius eigenvalue ''r'' is a simple root of the characteristic polynomial. In the case of non-primitive matrices, there exist other eigenvalues which have the same absolute value as ''r''. The same claim is true for them, but requires more work.


No other non-negative eigenvectors

Given a positive (or more generally an irreducible non-negative) matrix ''A'', the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for ''A''. Other eigenvectors must contain negative or non-real components, since eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, and the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional. More precisely: assume there exists an eigenpair (λ, ''y'') for ''A'' such that the vector ''y'' is positive, and let (''r'', ''x'') be given, where ''x'' is the left Perron–Frobenius eigenvector for ''A'' (i.e. an eigenvector for ''AT''). Then ''rxTy'' = (''xTA'')''y'' = ''xT''(''Ay'') = λ''xTy''; also ''xTy'' > 0, so one has ''r'' = λ. Since the eigenspace for the Perron–Frobenius eigenvalue ''r'' is one-dimensional, the non-negative eigenvector ''y'' is a multiple of the Perron–Frobenius one.


Collatz–Wielandt formula

Given a positive (or more generally an irreducible non-negative) matrix ''A'', one defines the function ''f'' on the set of all non-negative non-zero vectors ''x'' such that ''f''(''x'') is the minimum value of [''Ax'']''i'' / ''x''''i'' taken over all those ''i'' such that ''x''''i'' ≠ 0. Then ''f'' is a real-valued function whose maximum is the Perron–Frobenius eigenvalue ''r''.

For the proof we denote the maximum of ''f'' by the value ''R''. The proof requires showing ''R'' = ''r''. Inserting the Perron–Frobenius eigenvector ''v'' into ''f'', we obtain ''f''(''v'') = ''r'' and conclude ''r'' ≤ ''R''. For the opposite inequality, we consider an arbitrary non-negative vector ''x'' and let ξ = ''f''(''x''). The definition of ''f'' gives 0 ≤ ξ''x'' ≤ ''Ax'' (componentwise). Now we use the positive left eigenvector ''w'' for ''A'' corresponding to the Perron–Frobenius eigenvalue ''r''; then ξ''wTx'' = ''wT''ξ''x'' ≤ ''wT''(''Ax'') = (''wTA'')''x'' = ''rwTx''. Hence ''f''(''x'') = ξ ≤ ''r'', which implies ''R'' ≤ ''r''.


Perron projection as a limit: ''A''''k''/''r''''k''

Let ''A'' be a positive (or more generally, primitive) matrix, and let ''r'' be its Perron–Frobenius eigenvalue.
# There exists a limit of ''Ak''/''rk'' for ''k'' → ∞; denote it by ''P''.
# ''P'' is a projection operator: ''P''2 = ''P'', which commutes with ''A'': ''AP'' = ''PA''.
# The image of ''P'' is one-dimensional and spanned by the Perron–Frobenius eigenvector ''v'' (respectively for ''PT'', by the Perron–Frobenius eigenvector ''w'' for ''AT'').
# ''P'' = ''vwT'', where ''v'', ''w'' are normalized such that ''wTv'' = 1.
# Hence ''P'' is a positive operator.
Hence ''P'' is a spectral projection for the Perron–Frobenius eigenvalue ''r'', and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. Actually the claims above (except claim 5) are valid for any matrix ''M'' such that there exists an eigenvalue ''r'' which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices, as above.)

Given that ''M'' is diagonalizable, ''M'' is conjugate to a diagonal matrix with eigenvalues ''r''1, ..., ''rn'' on the diagonal (denote ''r''1 = ''r''). The matrix ''Mk''/''rk'' will be conjugate to the diagonal matrix with entries 1, (''r''2/''r'')''k'', ..., (''rn''/''r'')''k'', which tends to (1, 0, 0, ..., 0) for ''k'' → ∞, so the limit exists. The same method works for general ''M'' (without assuming that ''M'' is diagonalizable).

The projection and commutativity properties are elementary corollaries of the definition: ''MMk''/''rk'' = ''Mk''/''rk''''M''; ''P''2 = lim ''M''2''k''/''r''2''k'' = ''P''. The third fact is also elementary: ''M''(''Pu'') = ''M'' lim ''Mk''/''rk''''u'' = lim ''rM''''k''+1/''r''''k''+1''u'' = ''r''(''Pu''), so the image of ''P'' lies in the ''r''-eigenspace for ''M'', which is one-dimensional by the assumptions. Denote by ''v'' the ''r''-eigenvector for ''M'' (and by ''w'' the one for ''MT''). The columns of ''P'' are multiples of ''v'', because the image of ''P'' is spanned by it; respectively, the rows of ''P'' are multiples of ''wT''. So ''P'' takes the form ''avwT'' for some ''a''. Hence its trace equals ''awTv''. The trace of a projector equals the dimension of its image; it was proved above that this image is at most one-dimensional, and from the definition one sees that ''P'' acts as the identity on the ''r''-eigenvector for ''M'', so the image is exactly one-dimensional. Thus choosing the normalization ''wTv'' = 1 implies ''P'' = ''vwT''.


Inequalities for Perron–Frobenius eigenvalue

For any non-negative matrix ''A'' its Perron–Frobenius eigenvalue ''r'' satisfies the inequality
: r \le \max_i \sum_j a_{ij}.
This is not specific to non-negative matrices: for any matrix ''A'' with an eigenvalue \lambda it is true that |\lambda| \le \max_i \sum_j |a_{ij}|. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: any matrix induced norm satisfies the inequality \|A\| \ge |\lambda| for any eigenvalue \lambda, because, if x is a corresponding eigenvector, \|A\| \ge \|Ax\|/\|x\| = \|\lambda x\|/\|x\| = |\lambda|. The infinity norm of a matrix is the maximum of row sums: \|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|. Hence the desired inequality is exactly \|A\|_\infty \ge |\lambda| applied to the non-negative matrix ''A''.

Another inequality is
: \min_i \sum_j a_{ij} \le r.
This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that ''A'' is positive (not just non-negative), there exists a positive eigenvector ''w'' such that ''Aw'' = ''rw'' and the smallest component of ''w'' (say ''w''''i'') is 1. Then ''r'' = (''Aw'')''i'' ≥ the sum of the numbers in row ''i'' of ''A''. Thus the minimum row sum gives a lower bound for ''r'', and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz–Wielandt formula: one takes the vector ''x'' = (1, 1, ..., 1) and immediately obtains the inequality ''r'' ≥ ''f''(''x'') = \min_i \sum_j a_{ij}.
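Both bounds are easy to test. A minimal sketch, assuming NumPy and a random positive matrix:

```python
# The Perron root is sandwiched between the minimum and maximum row sums;
# the lower bound is the Collatz–Wielandt value f(1, ..., 1).
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 5))                        # random positive matrix
r = np.max(np.abs(np.linalg.eigvals(A)))
rows = A.sum(axis=1)                          # componentwise A @ ones
assert rows.min() - 1e-12 <= r <= rows.max() + 1e-12
```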


Further proofs


Perron projection

The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: The Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if ''A'' is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if ''P'' is its Perron projection then ''AP'' = ''PA'' = ρ(''A'')''P'' so every column of ''P'' is a positive right eigenvector of ''A'' and every row is a positive left eigenvector. Moreover, if ''Ax'' = λ''x'' then ''PAx'' = λ''Px'' = ρ(''A'')''Px'' which means ''Px'' = 0 if λ ≠ ρ(''A''). Thus the only positive eigenvectors are those associated with ρ(''A''). If ''A'' is a primitive matrix with ρ(''A'') = 1 then it can be decomposed as ''P'' ⊕ (1 − ''P'')''A'' so that ''An'' = ''P'' + (1 − ''P'')''A''''n''. As ''n'' increases the second of these terms decays to zero leaving ''P'' as the limit of ''An'' as ''n'' → ∞. The power method is a convenient way to compute the Perron projection of a primitive matrix. If ''v'' and ''w'' are the positive row and column vectors that it generates then the Perron projection is just ''wv''/''vw''. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition.


Peripheral projection

The analysis when ''A'' is irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(''A'') that negate use of the power method and prevent the powers of (1 − ''P'')''A'' decaying as in the primitive case whenever ρ(''A'') = 1. So we consider the peripheral projection, which is the spectral projection of ''A'' corresponding to all the eigenvalues that have modulus ''ρ''(''A''). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.


Cyclicity

Suppose in addition that ρ(''A'') = 1 and ''A'' has ''h'' eigenvalues on the unit circle. If ''P'' is the peripheral projection then the matrix ''R'' = ''AP'' = ''PA'' is non-negative and irreducible, ''Rh'' = ''P'', and the cyclic group ''P'', ''R'', ''R''2, ..., ''R''''h''−1 represents the harmonics of ''A''. The spectral projection of ''A'' at the eigenvalue λ on the unit circle is given by the formula h^{-1}\sum^h_{k=1}\lambda^{-k}R^k. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of ''A'' is given by ''A'' = ''R'' ⊕ (1 − ''P'')''A'', so the difference between ''An'' and ''Rn'' is ''An'' − ''Rn'' = (1 − ''P'')''An'', representing the transients of ''An'', which eventually decay to zero. ''P'' may be computed as the limit of ''Anh'' as ''n'' → ∞.


Counterexamples

The matrices
: L = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad P = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}, \quad T = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \quad M = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}
provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of ''L'' are both equal to ''P''; thus, when the original matrix is reducible, the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix ''T'' is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. ''M'' is an example of a matrix with several missing spectral teeth. If ω = ''e''''i''π/3 then ω6 = 1 and the eigenvalues of ''M'' are {1, ω2, ω3, ω4}, with a dimension-2 eigenspace for 1, so ω and ω5 are both absent.
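The spectral teeth of ''M'' can be exhibited directly. A minimal sketch, assuming NumPy:

```python
# The counterexample M has eigenvalues {1, ω², ω³, ω⁴} for ω = e^{iπ/3},
# with eigenvalue 1 of multiplicity two; ω and ω⁵ are the missing teeth.
import numpy as np

M = np.array([[0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0]], dtype=float)

w = np.exp(1j * np.pi / 3)
vals = np.linalg.eigvals(M)

for z in [1, w**2, w**3, w**4]:              # present eigenvalues
    assert np.any(np.isclose(vals, z))
assert np.sum(np.isclose(vals, 1)) == 2      # eigenvalue 1 has multiplicity 2
for z in [w, w**5]:                          # missing "spectral teeth"
    assert not np.any(np.isclose(vals, z))
```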


Terminology

A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms ''strictly positive'' and ''positive'' to mean > 0 and ≥ 0 respectively. In this article ''positive'' means > 0 and ''non-negative'' means ≥ 0. Another vexed area concerns ''decomposability'' and ''reducibility'': ''irreducible'' is an overloaded term. For avoidance of doubt, a non-zero non-negative square matrix ''A'' such that 1 + ''A'' is primitive is sometimes said to be ''connected''. Then irreducible non-negative square matrices and connected matrices are synonymous. For surveys of results on irreducibility, see Olga Taussky-Todd and Richard A. Brualdi.

The non-negative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a ''stochastic eigenvector''. ''Perron–Frobenius eigenvalue'' and ''dominant eigenvalue'' are alternative names for the Perron root. Spectral projections are also known as ''spectral projectors'' and ''spectral idempotents''. The period is sometimes referred to as the ''index of imprimitivity'' or the ''order of cyclicity''.


See also

* Min-max theorem
* Z-matrix (mathematics)
* M-matrix
* P-matrix
* Hurwitz matrix
* Metzler matrix (quasipositive matrix)
* Positive operator


Notes


References

* Gantmacher, F. R., ''The Theory of Matrices''. (The 1959 edition had a different title: ''Applications of the Theory of Matrices''. The numeration of chapters also differs between the two editions.)


Further reading

* Abraham Berman, Robert J. Plemmons, ''Nonnegative Matrices in the Mathematical Sciences'', 1994, SIAM.
* Chris Godsil and Gordon Royle, ''Algebraic Graph Theory'', Springer, 2001.
* A. Graham, ''Nonnegative Matrices and Applicable Topics in Linear Algebra'', John Wiley & Sons, New York, 1987.
* R. A. Horn and C. R. Johnson, ''Matrix Analysis'', Cambridge University Press, 1990.
* Bas Lemmens and Roger Nussbaum, ''Nonlinear Perron–Frobenius Theory'', Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012.
* S. P. Meyn and R. L. Tweedie, ''Markov Chains and Stochastic Stability'', London: Springer-Verlag, 1993. (2nd edition, Cambridge University Press, 2009.)
* Seneta, E., ''Non-negative Matrices and Markov Chains'', 2nd rev. ed., 1981, XVI, 288 p., Softcover, Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973.)
* (The claim that ''Aj'' has order ''n''/''h'' at the end of the statement of the theorem is incorrect.)