Gershgorin Circle Theorem
In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn. Statement and proof Let A be a complex n\times n matrix, with entries a_{ij}. For i \in \{1,\dots,n\} let R_i be the sum of the absolute values of the non-diagonal entries in the i-th row: : R_i = \sum_{j\neq i} \left|a_{ij}\right|. Let D(a_{ii}, R_i) \subseteq \Complex be a closed disc centered at a_{ii} with radius R_i. Such a disc is called a Gershgorin disc. :Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_{ii},R_i). ''Proof.'' Let \lambda be an eigenvalue of A with corresponding eigenvector x = (x_j). Find ''i'' such that the element of ''x'' with the largest absolute value is x_i. Since Ax=\lambda x, in particular we take the ''i''th component of that equation ...
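The statement is easy to check numerically. The following sketch (the example matrix and the NumPy check are mine, not from the article) computes each disc centre a_{ii} and radius R_i and verifies that every eigenvalue lies in at least one closed disc:

```python
# Sketch: verify the Gershgorin circle theorem on an arbitrary example matrix.
import numpy as np

A = np.array([[10.0, -1.0,  0.0,   1.0],
              [ 0.2,  8.0,  0.2,   0.2],
              [ 1.0,  1.0,  2.0,   1.0],
              [-1.0, -1.0, -1.0, -11.0]])

centers = np.diag(A)                                  # a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)       # R_i = sum_{j != i} |a_ij|

# Every eigenvalue must lie in at least one closed disc D(a_ii, R_i).
# A tiny tolerance guards against floating-point rounding on disc boundaries.
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))
print("all eigenvalues lie in a Gershgorin disc")
```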


Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...


Cambridge University Press
Cambridge University Press is the university press of the University of Cambridge. Granted letters patent by King Henry VIII in 1534, it is the oldest university press in the world. It is also the King's Printer. Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher. It became part of Cambridge University Press & Assessment, following a merger with Cambridge Assessment in 2021. With a global sales presence, publishing hubs, and offices in more than 40 countries, it publishes over 50,000 titles by authors from over 100 countries. Its publishing includes more than 380 academic journals, monographs, reference works, school and uni ...




Muirhead's Inequality
In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead, also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means. Preliminary definitions ''a''-mean For any real vector :a=(a_1,\dots,a_n) define the "''a''-mean" [''a''] of positive real numbers ''x''1, ..., ''x''''n'' by :[a]=\frac{1}{n!}\sum_\sigma x_{\sigma_1}^{a_1}\cdots x_{\sigma_n}^{a_n}, where the sum extends over all permutations σ of \{1,\dots,n\}. When the elements of ''a'' are nonnegative integers, the ''a''-mean can be equivalently defined via the monomial symmetric polynomial m_a(x_1,\dots,x_n) as :[a] = \frac{k_1!\cdots k_\ell!}{n!} m_a(x_1,\dots,x_n), where ℓ is the number of distinct elements in ''a'', and ''k''1, ..., ''k''ℓ are their multiplicities. Notice that the ''a''-mean as defined above only has the usual properties of a mean (e.g., the mean of equal numbers is equal to them) if a_1+\cdots+a_n=1. In the general case, one can consider instead [a]^{1/(a_1+\cdots+a_n)}, which is called a Muirhead mean. ...
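As a sanity check on the definition, the short sketch below (function name and test values are mine) computes the ''a''-mean directly from the permutation sum; with a = (1, 0) it reproduces the arithmetic mean and with a = (1/2, 1/2) the geometric mean, the special case in which Muirhead's inequality reduces to AM–GM:

```python
# Sketch: the a-mean [a] of positive reals x, averaged over all n! permutations.
from itertools import permutations
from math import factorial

def a_mean(a, x):
    n = len(x)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for k in range(n):
            term *= x[sigma[k]] ** a[k]   # x_{sigma(k)} ^ a_k
        total += term
    return total / factorial(n)

print(a_mean([1, 0], [3.0, 5.0]))       # 4.0, the arithmetic mean
print(a_mean([0.5, 0.5], [3.0, 5.0]))   # 3.8729..., the geometric mean sqrt(15)
# Since (1, 0) majorizes (1/2, 1/2), Muirhead gives [1,0] >= [1/2,1/2], i.e. AM >= GM.
```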


Metzler Matrix
In mathematics, a Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero): : \forall_{i\neq j}\, x_{ij} \geq 0. It is named after the American economist Lloyd Metzler. Metzler matrices appear in stability analysis of time delayed differential equations and positive linear dynamical systems. Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form ''M'' + ''aI'', where ''M'' is a Metzler matrix. Definition and terminology In mathematics, especially linear algebra, a matrix is called Metzler, quasipositive (or quasi-positive) or essentially nonnegative if all of its elements are non-negative except for those on the main diagonal, which are unconstrained. That is, a Metzler matrix is any matrix ''A'' which satisfies :A=(a_{ij});\quad a_{ij}\geq 0, \quad i\neq j. Metzler matrices are also sometimes referred to as Z^{(-)}-matrices, as a ''Z''-matrix is equivalent to a negated quasip ...
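A small helper (my own naming, not from the article) makes the definition concrete: only the off-diagonal sign pattern is tested, while the diagonal is left unconstrained.

```python
# Sketch: test whether a square matrix is Metzler (all off-diagonal entries >= 0).
import numpy as np

def is_metzler(A):
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))    # zero out the (unconstrained) diagonal
    return bool(np.all(off_diag >= 0))

print(is_metzler([[-3.0, 1.0], [0.5, -2.0]]))   # True: negative diagonal is allowed
print(is_metzler([[ 1.0, -0.1], [2.0, 4.0]]))   # False: a negative off-diagonal entry
```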


Joel Lee Brenner
Joel Lee Brenner ( – ) was an American mathematician who specialized in matrix theory, linear algebra, and group theory. He is known as the translator of several popular Russian texts. He was a teaching professor at some dozen colleges and universities and was a Senior Mathematician at Stanford Research Institute from 1956 to 1968. He published over one hundred scholarly papers, 35 with coauthors, and wrote book reviews. [LeRoy B. Beasley (1987) "The Mathematical Work of Joel Lee Brenner", ''Linear Algebra and its Applications'' 90:1–13] Academic career In 1930 Brenner earned a B.A. degree with major in chemistry from Harvard University. In graduate study there he was influenced by Hans Brinkmann, Garrett Birkhoff, and Marshall Stone. He was granted the Ph.D. in February 1936. Brenner later described some of his reminiscences of his student days at Harvard and of the state of American mathematics in the 1930s in an article for American Mathematical Monthly. In 1951 Brenner publis ...


Hurwitz Matrix
In mathematics, a Hurwitz matrix, or Routh–Hurwitz matrix, in engineering stability matrix, is a structured real square matrix constructed with coefficients of a real polynomial. Hurwitz matrix and the Hurwitz stability criterion Namely, given a real polynomial :p(z)=a_0 z^n+a_1 z^{n-1}+\cdots+a_{n-1}z+a_n the n\times n square matrix : H= \begin{pmatrix} a_1 & a_3 & a_5 & \dots & \dots & \dots & 0 & 0 & 0 \\ a_0 & a_2 & a_4 & & & & \vdots & \vdots & \vdots \\ 0 & a_1 & a_3 & & & & \vdots & \vdots & \vdots \\ \vdots & a_0 & a_2 & \ddots & & & 0 & \vdots & \vdots \\ \vdots & 0 & a_1 & & \ddots & & a_n & \vdots & \vdots \\ \vdots & \vdots & a_0 & & & \ddots & a_{n-1} & 0 & \vdots \\ \vdots & \vdots & 0 & & & & a_{n-2} & a_n & \vdots \\ \vdots & \vdots & \vdots & & & & a_{n-3} & a_{n-1} & 0 \\ 0 & 0 & 0 & \dots & \dots & \dots & a_{n-4} & a_{n-2} & a_n \end{pmatrix} is called the Hurwitz matrix corresponding to the polynomial p. It was established by Adolf Hurwitz in 1895 that a real polynomial with a_0 > 0 is stable (that is, all its roots have ...
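The displayed pattern can be summarized (my reading of the matrix above, not a formula from the excerpt) by the 1-based rule H_{ij} = a_{2j-i}, with entries whose index falls outside 0..n set to zero. The sketch below (helper name and example polynomial are mine) builds H from a coefficient list:

```python
# Sketch: build the n-by-n Hurwitz matrix of p(z) = a_0 z^n + a_1 z^{n-1} + ... + a_n,
# using the pattern H[i, j] = a_{2j - i} (1-based), with out-of-range indices set to 0.
import numpy as np

def hurwitz_matrix(coeffs):
    a = list(coeffs)              # a[0], ..., a[n]
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * j - i
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

# Example: p(z) = z^3 + 2 z^2 + 3 z + 4, i.e. coefficients (1, 2, 3, 4).
print(hurwitz_matrix([1, 2, 3, 4]))
```

In the Hurwitz criterion referred to above (the excerpt is truncated), stability is read off from the positivity of the leading principal minors of H.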




Doubly Stochastic Matrix
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called bistochastic matrix) is a square matrix X=(x_{ij}) of nonnegative real numbers, each of whose rows and columns sums to 1, i.e., :\sum_i x_{ij}=\sum_j x_{ij}=1. Thus, a doubly stochastic matrix is both left stochastic and right stochastic. Indeed, any matrix that is both left and right stochastic must be square: if every row sums to one then the sum of all entries in the matrix must be equal to the number of rows, and since the same holds for columns, the number of rows and columns must be equal. Birkhoff polytope The class of n\times n doubly stochastic matrices is a convex polytope known as the Birkhoff polytope B_n. Using the matrix entries as Cartesian coordinates, it lies in an (n-1)^2-dimensional affine subspace of n^2-dimensional Euclidean space defined by 2n-1 independent linear constraints specifying that the row and column sums all equal one. (There are 2n-1 constraints rather than ...
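A direct check of the defining property (the helper and examples are mine) also illustrates that permutation matrices, and convex combinations of them, satisfy it, consistent with their membership in the Birkhoff polytope B_n:

```python
# Sketch: check the row/column-sum property of a doubly stochastic matrix.
import numpy as np

def is_doubly_stochastic(X):
    X = np.asarray(X, dtype=float)
    return (X.ndim == 2 and X.shape[0] == X.shape[1]
            and bool(np.all(X >= 0))
            and np.allclose(X.sum(axis=0), 1.0)
            and np.allclose(X.sum(axis=1), 1.0))

P = np.eye(3)[[2, 0, 1]]                                  # a permutation matrix
print(is_doubly_stochastic(P))                            # True
print(is_doubly_stochastic(0.5 * P + 0.5 * np.eye(3)))    # True: convex combinations stay in B_n
print(is_doubly_stochastic(np.ones((3, 3))))              # False: rows sum to 3
```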


Perron–Frobenius Theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors is Edmund Landau. Statement Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices ...
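A quick numerical illustration of the first assertion (the power-iteration helper and the example matrix are mine, not part of the theorem's statement): iterating a positive matrix on a positive starting vector converges to the Perron eigenvalue and a strictly positive eigenvector.

```python
# Sketch: power iteration on a matrix with strictly positive entries.
import numpy as np

def perron(A, iters=500):
    A = np.asarray(A, dtype=float)
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v                      # Rayleigh quotient (v has unit norm)
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # strictly positive entries
lam, v = perron(A)
print(lam)   # ~3.618, the unique largest (Perron) eigenvalue
print(v)     # strictly positive eigenvector
```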


Diagonally Dominant Matrix
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix ''A'' is diagonally dominant if :|a_{ii}| \geq \sum_{j\neq i} |a_{ij}| \quad\text{for all } i, where ''a''''ij'' denotes the entry in the ''i''th row and ''j''th column. Note that this definition uses a weak inequality, and is therefore sometimes called ''weak diagonal dominance''. If a strict inequality (>) is used, this is called ''strict diagonal dominance''. The unqualified term ''diagonal dominance'' can mean both strict and weak diagonal dominance, depending on the context. Variations The definition in the first paragraph sums entries across each row. It is therefore sometimes called ''row diagonal dominance''. If one changes the definition to sum down each column, this is called ''column diagonal dominance''. Any stric ...
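A short helper (naming mine) tests row diagonal dominance, with a flag for the strict variant described above:

```python
# Sketch: test (row) diagonal dominance, weak by default, strict on request.
import numpy as np

def is_diagonally_dominant(A, strict=False):
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off = A.sum(axis=1) - diag            # sum of |a_ij| over j != i
    return bool(np.all(diag > off)) if strict else bool(np.all(diag >= off))

M = [[3, -2, 1], [1, -3, 2], [-1, 2, 4]]
print(is_diagonally_dominant(M))                # True  (weak: first two rows are exactly tied)
print(is_diagonally_dominant(M, strict=True))   # False (strict fails on the tied rows)
```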


Gershgorin Disk Theorem Example




Matrix Inverse
In linear algebra, an ''n''-by-''n'' square matrix ''A'' is called invertible (also nonsingular or nondegenerate), if there exists an ''n''-by-''n'' square matrix ''B'' such that :\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n \ where \mathbf{I}_n denotes the ''n''-by-''n'' identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix ''B'' is uniquely determined by ''A'', and is called the (multiplicative) ''inverse'' of ''A'', denoted by A^{-1}. Matrix inversion is the process of finding the matrix that satisfies the prior equation for a given invertible matrix ''A''. A square matrix that is ''not'' invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (''m''-by-''n'' matrices for which m \neq n) do not hav ...
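A short NumPy sketch (arbitrary example values) makes the definition concrete: a nonzero determinant signals invertibility, and the computed inverse satisfies AB = BA = I_n up to floating-point rounding.

```python
# Sketch: invert a nonsingular 2x2 matrix and verify the defining identity.
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

print(np.linalg.det(A))                  # 10.0 != 0, so A is invertible
B = np.linalg.inv(A)
print(np.allclose(A @ B, np.eye(2)))     # True: AB = I
print(np.allclose(B @ A, np.eye(2)))     # True: BA = I
```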


Preconditioning
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method. Preconditioning for linear systems In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. It is also common to call T=P^{-1} the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T=P^{-1}, i.e., multiplication of a column vector, or a block of column vectors, by T=P^{-1}, is commonly performed in a matrix-free fashion, i.e., where neither P, nor T=P^{-1} (and often not even A) are explicitly available in a matrix form. Preconditioners are useful in iterative methods to solve a linear ...
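As a deliberately simple illustration (a Jacobi/diagonal preconditioner P = diag(A) is assumed here; the excerpt does not single out any particular choice), applying T = P^{-1} can sharply reduce the condition number of an ill-conditioned matrix:

```python
# Sketch: diagonal (Jacobi) preconditioning of an ill-conditioned symmetric matrix.
import numpy as np

A = np.diag([1.0, 10.0, 100.0]) + 0.1 * np.ones((3, 3))   # widely spread diagonal
P_inv = np.diag(1.0 / np.diag(A))                          # T = P^{-1}, with P = diag(A)

print(np.linalg.cond(A))           # large condition number
print(np.linalg.cond(P_inv @ A))   # much closer to 1 after preconditioning
```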