Eigenvalues And Eigenvectors Of The Second Derivative
Explicit formulas for eigenvalues and eigenvectors of the second derivative with different boundary conditions are provided both for the continuous and discrete cases. In the discrete case, the standard central difference approximation of the second derivative is used on a uniform grid. These formulas are used to derive the expressions for eigenfunctions of the Laplacian in the case of separation of variables, as well as to find the eigenvalues and eigenvectors of the multidimensional discrete Laplacian on a regular grid, which is presented as a Kronecker sum of one-dimensional discrete Laplacians. In the continuous case, the index j represents the jth eigenvalue or eigenvector and runs from 1 to \infty. Assuming the equation is defined on the domain x \in [0,L], the following are the eigenvalues and normalized eigenvectors, with the eigenvalues ordered in descending order. Pure Dirichlet boundary conditions: :\lambda_j = -\frac{j^2 \pi^2}{L^2}, \quad v_j(x) = \sqrt{\frac{2}{L}} \sin\left(\frac{j \pi x}{L}\right) ...
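
The discrete counterpart is easy to check numerically. Below is a minimal sketch, assuming NumPy (the helper name and the grid sizes are illustrative, not from the article), that builds the central-difference second-derivative matrix with homogeneous Dirichlet boundary conditions and compares its eigenvalues against the closed-form expression \lambda_j = -\frac{4}{h^2} \sin^2\left(\frac{j \pi}{2(n+1)}\right):

    import numpy as np

    # n-by-n central-difference second-derivative matrix on the interior
    # points of [0, L] with homogeneous Dirichlet boundary conditions.
    def dirichlet_laplacian(n, L):
        h = L / (n + 1)                           # uniform grid spacing
        D = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
        return D, h

    n, L = 50, 1.0
    D, h = dirichlet_laplacian(n, L)
    w = np.linalg.eigvalsh(D)                     # ascending eigenvalues
    j = np.arange(1, n + 1)
    exact = -4.0 / h**2 * np.sin(j * np.pi / (2 * (n + 1)))**2
    print(np.max(np.abs(w - np.sort(exact))))     # ~ machine precision
    # As n grows, the low modes approach the continuous -(j*pi/L)**2.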

Second Derivative
In calculus, the second derivative, or the second-order derivative, of a function f is the derivative of the derivative of f. Roughly speaking, the second derivative measures how the rate of change of a quantity is itself changing; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation: :a = \frac{dv}{dt} = \frac{d^2x}{dt^2}, where a is acceleration, v is velocity, t is time, x is position, and d denotes the instantaneous "delta" or change. The last expression \tfrac{d^2x}{dt^2} is the second derivative of position (x) with respect to time. On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way. ...
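
As a quick illustration, here is a short sketch, assuming SymPy (the quadratic position function is a made-up example), that differentiates a position function twice to recover acceleration:

    import sympy as sp

    t = sp.symbols('t')
    x = 5 * t**2 + 3 * t + 1    # toy position: constant acceleration
    v = sp.diff(x, t)           # velocity  v = dx/dt       -> 10*t + 3
    a = sp.diff(x, t, 2)        # acceleration a = d2x/dt2  -> 10
    print(v, a)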

Central Difference
A finite difference is a mathematical expression of the form f(x + b) - f(x + a). If a finite difference is divided by b - a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems. The difference operator, commonly denoted \Delta, is the operator that maps a function f to the function \Delta[f] defined by :\Delta[f](x) = f(x+1) - f(x). A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations, especially in the solution methods. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences. In numerical analysis, finite differences are widely used for approximating derivatives, and the term "fini ...
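
A minimal sketch, assuming NumPy (the test point and step sizes are arbitrary choices), showing the second-order central-difference approximation of f'' and its roughly O(h^2) error decay:

    import numpy as np

    def second_diff(f, x, h):
        # standard central-difference approximation of f''(x)
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

    x0 = 0.5
    exact = -np.sin(x0)         # since (sin x)'' = -sin x
    for h in (1e-1, 1e-2, 1e-3):
        print(h, abs(second_diff(np.sin, x0, h) - exact))  # error ~ h**2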

Eigenfunctions
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as :Df = \lambda f for some scalar eigenvalue \lambda. The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions. An eigenfunction is a type of eigenvector. In general, an eigenvector of a linear operator D defined on some vector space is a nonzero vector in the domain of D that, when D acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where D is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function f is an eigenfunction of D if it satisfies the equation :Df = \lambda f, where \lambda is a scalar. The solutions to this equation may also ...
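
A small sketch, assuming SymPy (sin(kx) is chosen as an example relevant to this page; it is not the only eigenfunction), verifying that sin(kx) is an eigenfunction of D = d^2/dx^2 with eigenvalue -k^2:

    import sympy as sp

    x = sp.symbols('x')
    k = sp.symbols('k', nonzero=True)
    f = sp.sin(k * x)                    # candidate eigenfunction of d^2/dx^2
    ratio = sp.simplify(sp.diff(f, x, 2) / f)
    print(ratio)                         # -k**2, the corresponding eigenvalue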

Laplacian
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols \nabla\cdot\nabla, \nabla^2 (where \nabla is the nabla operator), or \Delta. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p). The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that densit ...
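
A minimal sketch, assuming SymPy (the scalar field is a made-up example), computing the Cartesian Laplacian as the sum of second partial derivatives:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2 + y**2 + z**2                          # toy scalar field
    lap = sum(sp.diff(f, s, 2) for s in (x, y, z))  # Cartesian Laplacian
    print(lap)                                      # 6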

Separation Of Variables
In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation. Suppose an ordinary differential equation can be written in the form :\frac{d}{dx} f(x) = g(x)h(f(x)), which we can write more simply by letting y = f(x): :\frac{dy}{dx} = g(x)h(y). As long as h(y) ≠ 0, we can rearrange terms to obtain: :\frac{dy}{h(y)} = g(x) \, dx, so that the two variables x and y have been separated. dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced. Those who dislike Leibniz's notation may prefer to write this as :\frac{1}{h(y)} \frac{dy}{dx} = g(x), but that ...
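
A short sketch, assuming SymPy (the ODE dy/dx = x*y is a made-up separable example), confirming the hand-separated solution:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    # dy/dx = x*y separates into dy/y = x dx, so y = C*exp(x**2/2)
    sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x))
    print(sol)                           # Eq(y(x), C1*exp(x**2/2))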

Eigenvalue And Eigenvector
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated. If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as :T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root ass ...
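
A minimal sketch, assuming NumPy (the 2x2 matrix is a made-up example with eigenvalues 1 and 3), verifying the defining relation A v = \lambda v for each computed pair:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])            # toy symmetric matrix
    w, V = np.linalg.eig(A)
    for lam, v in zip(w, V.T):            # columns of V are the eigenvectors
        print(lam, np.allclose(A @ v, lam * v))   # True: A v = lambda v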

Discrete Laplace Operator
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix. The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and machine learning, for clustering and semi-supervised learning on neighborhood graphs. There are various definitions of the discrete Laplacian for graphs, differing by sign and scale factor (sometime ...
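
A small sketch, assuming NumPy (the path graph on 4 vertices is a made-up example; sign and scale conventions vary, as noted above), building the Laplacian matrix as L = D - A from an adjacency matrix:

    import numpy as np

    # Adjacency matrix of a path graph on 4 vertices: 0 - 1 - 2 - 3
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    Lap = np.diag(A.sum(axis=1)) - A      # Laplacian matrix L = D - A
    print(np.linalg.eigvalsh(Lap))        # eigenvalues >= 0; smallest is 0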

Regular Grid
A regular grid is a tessellation of ''n''-dimensional Euclidean space by congruent parallelotopes (e.g. bricks). Its opposite is an irregular grid. Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences, structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods. Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates (i\cdot dx, j\cdot dy) in 2D or (i\cdot dx, j\cdot dy, k\cdot dz) in 3D for some real numbers ''dx'', ''dy'', and ''dz'' representing the grid spacing. A Cartesian grid is a special case where the elements are uni ...
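
A tiny sketch, assuming NumPy (the spacings and grid sizes are arbitrary), of the indexing convention just described, where vertex (i, j) sits at (i*dx, j*dy):

    import numpy as np

    dx, dy = 0.5, 0.25                    # made-up grid spacings
    i, j = np.meshgrid(np.arange(4), np.arange(3), indexing='ij')
    X, Y = i * dx, j * dy                 # vertex (i, j) at (i*dx, j*dy)
    print(X[2, 1], Y[2, 1])               # 1.0 0.25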

Kronecker Sum Of Discrete Laplacians
In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a rectangular cuboid domain. In a general situation of the separation of variables in the discrete case, the multidimensional discrete Laplacian is a Kronecker sum of 1D discrete Laplacians. For example, the 2D discrete Laplacian on a regular grid with the homogeneous Dirichlet boundary condition can be written, using the Kronecker sum, as :L = \mathbf{D_{xx}} \oplus \mathbf{D_{yy}} = \mathbf{D_{xx}} \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{D_{yy}}, \, where \mathbf{D_{xx}} and \mathbf{D_{yy}} are 1D discrete Laplacians in the ''x''- and ''y''-directions, respectively, and \mathbf{I} are the identities of appropriate sizes. Both \mathbf{D_{xx}} and \mathbf{D_{yy}} must correspond to the case of the homogeneous Dirichlet boundary condition at the end points of the ''x''- and ''y''-intervals, in order to generate the 2D discrete Lap ...
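
A minimal sketch, assuming NumPy (unit grid spacing and the small sizes nx, ny are illustrative choices), assembling the 2D Dirichlet Laplacian as the Kronecker sum above:

    import numpy as np

    def lap1d(n):
        # 1D discrete Dirichlet Laplacian (unit grid spacing assumed)
        return (np.diag(-2.0 * np.ones(n))
                + np.diag(np.ones(n - 1), 1)
                + np.diag(np.ones(n - 1), -1))

    nx, ny = 4, 3                         # made-up grid sizes
    Dxx, Dyy = lap1d(nx), lap1d(ny)
    L2 = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)
    print(L2.shape)                       # (12, 12): the 2D Laplacian
    # Every eigenvalue of L2 is a sum of one eigenvalue of Dxx
    # and one eigenvalue of Dyy, mirroring separation of variables.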

Chebyshev Polynomials
The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as T_n(x) and U_n(x). They can be defined in several equivalent ways, one of which starts with trigonometric functions: The Chebyshev polynomials of the first kind T_n are defined by : T_n(\cos \theta) = \cos(n\theta). Similarly, the Chebyshev polynomials of the second kind U_n are defined by : U_n(\cos \theta) \sin \theta = \sin\big((n + 1)\theta\big). That these expressions define polynomials in \cos\theta may not be obvious at first sight, but follows by rewriting \cos(n\theta) and \sin\big((n+1)\theta\big) using de Moivre's formula or by using the angle sum formulas for \cos and \sin repeatedly. For example, the double angle formulas, which follow directly from the angle sum formulas, may be used to obtain T_2(\cos\theta)=\cos(2\theta)=2\cos^2\theta-1 and U_1(\cos\theta)\sin\theta=\sin(2\theta)=2\cos\theta\sin\theta, which are respectively a polynomial in \cos\th ...
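
A short sketch, assuming NumPy (n = 5 and the sample angles are arbitrary), numerically verifying the defining identity T_n(\cos \theta) = \cos(n\theta):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    n = 5
    theta = np.linspace(0.0, np.pi, 7)    # a few sample angles
    c = np.zeros(n + 1)
    c[n] = 1.0                            # coefficient vector selecting T_n
    print(np.allclose(C.chebval(np.cos(theta), c), np.cos(n * theta)))  # True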

Operator Theory
In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis. If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory. Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category. The spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides cond ...
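
As a finite-dimensional illustration of the spectral theorem mentioned above, a minimal sketch assuming NumPy (the symmetric matrix is a made-up example): a real symmetric matrix admits an orthonormal eigenbasis, so A = Q diag(w) Q^T.

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],        # made-up real symmetric matrix
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    w, Q = np.linalg.eigh(A)              # orthonormal eigenvectors in Q
    print(np.allclose(A, Q @ np.diag(w) @ Q.T))   # True: A = Q diag(w) Q^T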