Quadratic Eigenvalue Problem
In mathematics, the quadratic eigenvalue problem (QEP) is to find scalar eigenvalues \lambda, left eigenvectors y and right eigenvectors x such that
: Q(\lambda)x = 0 \quad \text{and} \quad y^\ast Q(\lambda) = 0,
where Q(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0, with matrix coefficients A_2, A_1, A_0 \in \mathbb{C}^{n \times n} and A_2 \neq 0 (so that the leading coefficient is nonzero) (F. Tisseur and K. Meerbergen, "The quadratic eigenvalue problem", SIAM Rev., 43 (2001), pp. 235–286). There are 2n eigenvalues, which may be ''infinite'' or finite, and possibly zero. This is a special case of a nonlinear eigenproblem. Q(\lambda) is also known as a quadratic polynomial matrix.

Applications

A QEP can arise as part of the dynamic analysis of structures discretized by the finite element method. In this case the quadratic Q(\lambda) has the form Q(\lambda) = \lambda^2 M + \lambda C + K, where M is the mass matrix, C is the damping matrix and K is the stiffness matrix. Other applications ...
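A standard way to compute the 2n eigenvalues numerically is to linearize the QEP into a 2n \times 2n generalized eigenvalue problem. The following sketch (not part of the article) does this in Python with NumPy/SciPy via the first companion linearization; the coefficient matrices below are arbitrary illustrative choices.

```python
# A minimal sketch (not from the article): solving a small QEP by the standard
# companion linearization, using NumPy/SciPy. A2, A1, A0 are illustrative choices.
import numpy as np
from scipy.linalg import eig

n = 3
rng = np.random.default_rng(0)
A2 = rng.standard_normal((n, n))   # leading coefficient, assumed nonsingular here
A1 = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n))

# First companion linearization: Q(lambda) x = 0 becomes L0 z = lambda L1 z
# with z = [x, lambda*x].
I = np.eye(n)
Z = np.zeros((n, n))
L0 = np.block([[Z, I], [-A0, -A1]])
L1 = np.block([[I, Z], [Z, A2]])

eigvals, eigvecs = eig(L0, L1)      # 2n eigenvalues (infinite ones appear if A2 is singular)
x = eigvecs[:n, 0]                  # top block = right eigenvector of Q for eigvals[0]

residual = (eigvals[0]**2 * A2 + eigvals[0] * A1 + A0) @ x
print(np.linalg.norm(residual))     # should be close to zero
```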
Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of ...
Scalar (mathematics)
A scalar is an element of a field which is used to define a ''vector space''. In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers). A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a ''vector''. ...
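A small worked illustration (not part of the excerpt): with real scalars and vectors in \mathbb{R}^2, taking \lambda = 2 and \mathbf{v} = (3, -1) gives
: \lambda \mathbf{v} = 2\,(3, -1) = (6, -2),
which is again a vector, while the scalar (dot) product \mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 returns a scalar.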
Eigenvalue
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated.

Formal definition

If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root associated with ...
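As a minimal numerical illustration (not part of the excerpt), the following Python snippet computes the eigenvalues and eigenvectors of a small matrix with NumPy and checks that each eigenvector is only rescaled by the matrix; the matrix itself is an arbitrary example.

```python
# Eigenvalues and eigenvectors of a 2x2 matrix, checked against A @ v = lambda * v.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, so the eigenvalues are real

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors
for k in range(2):
    lam, v = eigvals[k], eigvecs[:, k]
    print(lam, np.allclose(A @ v, lam * v))   # True: v is only rescaled by A
```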
Nonlinear Eigenproblem
In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form
: M(\lambda) x = 0,
where x \neq 0 is a vector, and ''M'' is a matrix-valued function of the number \lambda. The number \lambda is known as the (nonlinear) eigenvalue, the vector x as the (nonlinear) eigenvector, and (\lambda, x) as the eigenpair. The matrix M(\lambda) is singular at an eigenvalue \lambda.

Definition

In the discipline of numerical linear algebra the following definition is typically used. Let \Omega \subseteq \Complex, and let M : \Omega \rightarrow \Complex^{n \times n} be a function that maps scalars to matrices. A scalar \lambda \in \Complex is called an ''eigenvalue'', and a nonzero vector x \in \Complex^n is called a ''right eigenvector'' if M(\lambda) x = 0. Moreover, a nonzero vector y \in \Complex^n is called a ''left eigenvector'' if ...
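As a hedged sketch (not from the excerpt), the following Python snippet locates one real eigenvalue of a delay-type nonlinear eigenproblem M(\lambda) = -\lambda I + A + e^{-\lambda} I by finding a root of \det M(\lambda), then reads the eigenvector off the numerical null space; the matrix A and the bracketing interval are illustrative assumptions.

```python
# Finding one real nonlinear eigenvalue via a determinant root, then the null vector.
import numpy as np
from scipy.optimize import brentq

A = np.array([[0.0, 2.0],
              [3.0, 0.0]])            # illustrative choice
I = np.eye(2)

def M(lam):
    return -lam * I + A + np.exp(-lam) * I

lam = brentq(lambda t: np.linalg.det(M(t)), 0.0, 3.0)   # det changes sign on [0, 3]
_, s, Vt = np.linalg.svd(M(lam))
x = Vt[-1]                                  # null vector ~ right eigenvector
print(lam, np.linalg.norm(M(lam) @ x))      # residual should be near zero
```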
Polynomial Matrix
In mathematics, a polynomial matrix or matrix of polynomials is a matrix whose elements are univariate or multivariate polynomials. Equivalently, a polynomial matrix is a polynomial whose coefficients are matrices. A univariate polynomial matrix ''P'' of degree ''p'' is defined as:
: P = \sum_{n=0}^p A(n)x^n = A(0) + A(1)x + A(2)x^2 + \cdots + A(p)x^p,
where A(i) denotes a matrix of constant coefficients, and A(p) is non-zero. An example 3×3 polynomial matrix, degree 2:
: P = \begin{pmatrix} 1 & x^2 & x \\ 0 & 2x & 2 \\ 3x+2 & x^2-1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 2 & -1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 0 \end{pmatrix} x + \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} x^2.
We can express this by saying that for a ring ''R'', the rings M_n(R[x]) and (M_n(R))[x] are isomorphic.

Properties

* A polynomial matrix over a field with determinant equal to a non-zero element of that field is called unimodular, and has an inverse that is also a polynomial matrix. Note that the only scalar ...
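The example above can be checked numerically. The following Python snippet (not part of the excerpt) builds P(x) = A(0) + A(1)x + A(2)x^2 from the three constant coefficient matrices and verifies that it matches the entrywise form at a sample point.

```python
# Numerical check of the 3x3 degree-2 polynomial matrix example.
import numpy as np

A0 = np.array([[1, 0, 0], [0, 0, 2], [2, -1, 0]])
A1 = np.array([[0, 0, 1], [0, 2, 0], [3, 0, 0]])
A2 = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]])

def P(x):
    return A0 + A1 * x + A2 * x**2

x = 2.0
expected = np.array([[1, x**2, x],
                     [0, 2*x, 2],
                     [3*x + 2, x**2 - 1, 0]])
print(np.allclose(P(x), expected))   # True
```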
Discretization
In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification). Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, ''discretization'' may also refer to modification of variable or category ''granularity'', as when multiple discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered ...
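A minimal illustration (not part of the excerpt): the Python snippet below discretizes a continuous variable into bins with NumPy, and shows dichotomization as the two-class special case; the data and bin edges are arbitrary.

```python
# Binning a continuous variable, and dichotomization as the two-class special case.
import numpy as np

x = np.array([0.03, 0.12, 0.47, 0.51, 0.88, 0.95])   # continuous measurements
bins = np.array([0.25, 0.5, 0.75])                    # bin edges
discrete = np.digitize(x, bins)      # class index 0..3 for each value
binary = (x >= 0.5).astype(int)      # dichotomization: threshold at 0.5
print(discrete, binary)
```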
Finite Element Method
The finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The FEM is a general numerical method for solving partial differential equations in two or three space variables (i.e., some boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts that are called finite elements. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points. The finite element method formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple ...
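As a hedged sketch (not from the excerpt), the following Python snippet carries out this pipeline for the simplest possible case: the one-dimensional Poisson problem -u'' = 1 on (0, 1) with homogeneous boundary conditions, discretized with piecewise-linear elements on a uniform mesh; the mesh size is an arbitrary choice.

```python
# 1D FEM for -u'' = 1 on (0,1), u(0) = u(1) = 0. Exact solution: u(x) = x(1-x)/2.
import numpy as np

n = 20                         # number of interior nodes
h = 1.0 / (n + 1)              # element size
x = np.linspace(h, 1 - h, n)   # interior node coordinates

# Assembled stiffness matrix (tridiagonal) and load vector for linear elements.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
F = np.full(n, h)              # integral of f = 1 against each hat function

u = np.linalg.solve(K, F)      # the algebraic system produced by the discretization
print(np.max(np.abs(u - x * (1 - x) / 2)))   # nodal error (very small here)
```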
Mass Matrix
In analytical mechanics, the mass matrix is a symmetric matrix \mathbf M that expresses the connection between the time derivative \dot{\mathbf q} of the generalized coordinate vector \mathbf q of a system and the kinetic energy T of that system, by the equation
: T = \frac{1}{2} \dot{\mathbf q}^\textsf{T} \mathbf M \dot{\mathbf q},
where \dot{\mathbf q}^\textsf{T} denotes the transpose of the vector \dot{\mathbf q}. This equation is analogous to the formula for the kinetic energy of a particle with mass m and velocity \mathbf v, namely
: T = \frac{1}{2} m |\mathbf v|^2 = \frac{1}{2} \mathbf v \cdot m\mathbf v,
and can be derived from it, by expressing the position of each particle of the system in terms of \mathbf q. In general, the mass matrix depends on the state \mathbf q, and therefore varies with time. Lagrangian mechanics yields an ordinary differential equation (actually, a system of coupled differential equations) that describes the evolution of a system in terms of an arbitrary vector of generalized coordinates that completely defines the position of every particle in the system. ...
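A minimal illustration (not part of the excerpt): for two point masses moving on a line, the mass matrix is diagonal and the quadratic form reproduces the familiar \tfrac{1}{2} m v^2 terms; the masses and velocities below are arbitrary.

```python
# Kinetic energy via the mass matrix, T = (1/2) qdot^T M qdot, for two point masses.
import numpy as np

m1, m2 = 2.0, 3.0
M = np.diag([m1, m2])            # mass matrix for generalized coordinates q = (x1, x2)
qdot = np.array([1.5, -0.5])     # generalized velocities

T = 0.5 * qdot @ M @ qdot
print(T, 0.5 * m1 * qdot[0]**2 + 0.5 * m2 * qdot[1]**2)   # the two values agree
```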
Damping Matrix
In applied mathematics, a damping matrix is a matrix corresponding to any of certain systems of linear ordinary differential equations. A damping matrix is defined as follows. If the system has ''n'' degrees of freedom u_n and is under the application of ''m'' damping forces, each force can be expressed as follows:
: f_{D_i} = c_{i1} \dot{u}_1 + c_{i2} \dot{u}_2 + \cdots + c_{in} \dot{u}_n = \sum_{j=1}^n c_{ij} \dot{u}_j.
In matrix form this yields
: F_D = C \dot{u},
where C is the damping matrix composed of the damping coefficients:
: C = (c_{ij})_{1 \le i \le m,\, 1 \le j \le n}. ...
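A minimal illustration (not part of the excerpt): the Python snippet below evaluates the damping forces F_D = C\dot{u} for a two-degree-of-freedom system; the damping coefficients and velocities are arbitrary illustrative values.

```python
# Damping forces from a damping matrix, F_D = C @ udot.
import numpy as np

C = np.array([[0.4, -0.1],
              [-0.1, 0.3]])       # damping coefficients c_ij (illustrative)
udot = np.array([1.0, -2.0])      # generalized velocities

F_D = C @ udot                    # each entry is sum_j c_ij * udot_j
print(F_D)
```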
Stiffness Matrix
In the finite element method for the numerical solution of elliptic partial differential equations, the stiffness matrix is a matrix that represents the system of linear equations that must be solved in order to ascertain an approximate solution to the differential equation.

The stiffness matrix for the Poisson problem

For simplicity, we will first consider the Poisson problem
: -\nabla^2 u = f
on some domain \Omega, subject to the boundary condition u = 0 on the boundary of \Omega. To discretize this equation by the finite element method, one chooses a set of ''basis functions'' \varphi_1, \ldots, \varphi_n defined on \Omega which also vanish on the boundary. One then approximates
: u \approx u^h = u_1\varphi_1 + \cdots + u_n\varphi_n.
The coefficients u_1, \ldots, u_n are determined so that the error in the approximation is orthogonal to each basis function \varphi_i:
: \int_\Omega \varphi_i f \, dx = -\int_\Omega \varphi_i \nabla^2 u^h \, dx = -\sum_j \left(\int_\Omega \varphi_i \nabla^2 \varphi_j \, dx\right) u_j = \sum_j \left(\int_\Omega \nabla\varphi_i \cdot \nabla\varphi_j \, dx\right) u_j ...
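As a hedged sketch (not from the excerpt), the following Python snippet assembles the one-dimensional analogue of this stiffness matrix, K_{ij} = \int \varphi_i' \varphi_j' \, dx, element by element for piecewise-linear hat functions on a uniform mesh; the number of elements is an arbitrary choice.

```python
# Element-by-element assembly of the 1D stiffness matrix for linear hat functions
# on a uniform mesh of [0, 1], with homogeneous Dirichlet boundary conditions.
import numpy as np

n_el = 8                         # number of elements
h = 1.0 / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))
k_local = (1.0 / h) * np.array([[1.0, -1.0],
                                [-1.0, 1.0]])   # local stiffness of one element
for e in range(n_el):
    idx = [e, e + 1]             # the two nodes of element e
    K[np.ix_(idx, idx)] += k_local

K_interior = K[1:-1, 1:-1]       # drop boundary rows/columns (u = 0 there)
print(K_interior * h)            # tridiagonal: 2 on the diagonal, -1 off it
```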
Generalized Eigenvalue Problem
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

Fundamental theory of matrix eigenvectors and eigenvalues

A (nonzero) vector \mathbf v of dimension N is an eigenvector of a square N \times N matrix \mathbf A if it satisfies a linear equation of the form
: \mathbf A \mathbf v = \lambda \mathbf v
for some scalar \lambda. Then \lambda is called the eigenvalue corresponding to \mathbf v. Geometrically speaking, the eigenvectors of \mathbf A are the vectors that \mathbf A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem. This yields an equation for the eigenvalues
: p\left(\lambda\right) = \det\left(\mathbf A - \lambda \mathbf I\right) ...
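As a minimal numerical illustration (not part of the excerpt), the following Python snippet computes an eigendecomposition with NumPy and reconstructs the matrix from its eigenvalues and eigenvectors; the matrix is an arbitrary diagonalizable example.

```python
# Eigendecomposition A = Q diag(L) Q^{-1} of a diagonalizable matrix, checked numerically.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)                     # columns of Q are eigenvectors
A_rebuilt = Q @ np.diag(eigvals) @ np.linalg.inv(Q)
print(np.allclose(A, A_rebuilt))                  # True: the factorization holds
```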