Crank–Nicolson Method
Crank–Nicolson Method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid-20th century. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step \Delta t times the thermal diffusivity to the square of the space step, \Delta x^2, is large (typically, larger than 1/2 per von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations. The method ...
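As a minimal sketch (not taken from the text above), the following Python snippet applies the Crank–Nicolson scheme to the 1D heat equation u_t = alpha * u_xx with zero Dirichlet boundaries; the grid size, diffusivity, step sizes, and initial condition are illustrative assumptions.

```python
import numpy as np

# Crank-Nicolson sketch for u_t = alpha * u_xx on [0, 1]
# with u(0, t) = u(1, t) = 0.  All parameter values are illustrative.
alpha = 1.0          # thermal diffusivity (assumed)
nx, nt = 51, 200     # number of grid points / time steps (assumed)
dx = 1.0 / (nx - 1)
dt = 1e-3
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # assumed initial condition

# Interior-point system: (I + r/2 * L) u^{n+1} = (I - r/2 * L) u^n,
# where L is the (negated) second-difference operator.
m = nx - 2
L = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.eye(m) + 0.5 * r * L    # implicit (left-hand) matrix
B = np.eye(m) - 0.5 * r * L    # explicit (right-hand) matrix

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])   # boundaries stay at 0

# The profile should decay roughly like exp(-alpha * pi^2 * t).
print(u.max())
```

In practice the left-hand system is tridiagonal, so it would be solved with a banded or Thomas-algorithm solver rather than a dense solve; the dense version is kept here only for brevity.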




Numerical Analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulation) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also in the life and social sciences, medicine, business, and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living ce ...



Midpoint Method
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation
:y'(t) = f(t, y(t)), \quad y(t_0) = y_0.
The explicit midpoint method is given by the formula
:y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2}, \; y_n + \tfrac{h}{2} f(t_n, y_n)\right),
and the implicit midpoint method by
:y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2}, \; \tfrac{1}{2}(y_n + y_{n+1})\right),
for n = 0, 1, 2, \dots Here, h is the step size (a small positive number), t_n = t_0 + n h, and y_n is the computed approximate value of y(t_n). The explicit midpoint method is sometimes also known as the modified Euler method, the implicit method is the simplest collocation method, and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method; for further clarity see List of Runge–Kutta methods. The name of the method comes from the fact that in the formula above, the function f giving the slope of the solution is evaluated at t = t_n + h/2 = \tfrac{t_n + t_{n+1}}{2}, the midpoint between t_n, at which the value of y(t) is known, and t_{n+1}, at which the va ...
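A small sketch (the test problem and step size are assumptions, not from the text) of the explicit midpoint rule for y' = f(t, y):

```python
import numpy as np

def explicit_midpoint(f, t0, y0, h, n_steps):
    """Advance y' = f(t, y) with the explicit midpoint rule."""
    t, y = t0, y0
    for _ in range(n_steps):
        k = f(t + h / 2, y + (h / 2) * f(t, y))  # slope evaluated at the midpoint
        y = y + h * k
        t = t + h
    return y

# Illustrative test problem y' = -y, y(0) = 1; exact solution is exp(-t).
approx = explicit_midpoint(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(approx, np.exp(-1.0))   # agreement is second order in h
```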



Cartesian Grid
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks). Its opposite is an irregular grid. Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences, structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods. Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates (i\cdot dx, j\cdot dy) in 2D or (i\cdot dx, j\cdot dy, k\cdot dz) in 3D for some real numbers dx, dy, and dz representing the grid spacing. Related grids: a Cartesian grid is a special case where the elements are ...
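To make the indexing convention concrete, a tiny sketch (the spacings are assumed values) that maps a 2D grid index (i, j) to its vertex coordinates:

```python
# Map a 2D grid index (i, j) to vertex coordinates (i*dx, j*dy).
# The spacings are illustrative assumptions.
dx, dy = 0.5, 0.25

def vertex(i, j):
    return (i * dx, j * dy)

print(vertex(3, 4))   # (1.5, 1.0)
```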



Advection
In the fields of physics, engineering, and earth science, advection is the transport of a substance or quantity by the bulk motion of a fluid. The properties of that substance are carried with it. Generally, the majority of the advected substance is also a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved, extensive quantity can be advected by a fluid that can hold or contain the quantity or substance. During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution ov ...
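The excerpt does not reproduce the governing equation; for a conserved scalar field \psi transported by a velocity field \mathbf{u} (symbols chosen here for illustration), the advection equation is commonly written in conservation form as
:\frac{\partial \psi}{\partial t} + \nabla \cdot (\psi \, \mathbf{u}) = 0,
which, for an incompressible flow with \nabla \cdot \mathbf{u} = 0, reduces to
:\frac{\partial \psi}{\partial t} + \mathbf{u} \cdot \nabla \psi = 0.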


Matrix Inversion
In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that
:\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n,
where \mathbf{I}_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A^{-1}. Matrix inversion is the process of finding the matrix that satisfies the prior equation for a given invertible matrix A. A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (m-by-n matrices for which m \neq n) do not hav ...
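As an illustrative sketch (the matrix values are arbitrary assumptions), computing an inverse numerically with NumPy and checking that A A^{-1} = A^{-1} A = I:

```python
import numpy as np

# Illustrative 2-by-2 example; det(A) = 10, so A is invertible.
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)                    # raises LinAlgError if A is singular
print(np.allclose(A @ A_inv, np.eye(2)))    # True: A * A^{-1} = I
print(np.allclose(A_inv @ A, np.eye(2)))    # True: A^{-1} * A = I
```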



Finite Difference
A finite difference is a mathematical expression of the form f(x + b) - f(x + a). If a finite difference is divided by b - a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems. The difference operator, commonly denoted \Delta, is the operator that maps a function f to the function \Delta f defined by
:\Delta f(x) = f(x+1) - f(x).
A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations, especially in their methods of solution. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences. In numerical analysis, finite differences are widely used for approximating derivatives, and the term " ...
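A small sketch (the function and step sizes are assumptions) showing the forward difference quotient (f(x + h) - f(x)) / h approximating a derivative:

```python
import numpy as np

def forward_difference(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Approximate d/dx sin(x) at x = 1; the exact value is cos(1).
for h in (1e-1, 1e-2, 1e-3):
    approx = forward_difference(np.sin, 1.0, h)
    print(h, approx, abs(approx - np.cos(1.0)))   # error shrinks roughly like h
```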


Diffusion Equation
The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation, when bulk velocity is zero. It is equivalent to the heat equation under some circumstances. Statement: the equation is usually written as
:\frac{\partial \varphi(\mathbf{r}, t)}{\partial t} = \nabla \cdot \left[ D(\varphi, \mathbf{r}) \, \nabla \varphi(\mathbf{r}, t) \right],
where \varphi(\mathbf{r}, t) is the density of the diffusing material at location \mathbf{r} and time t, D(\varphi, \mathbf{r}) is the collective diffusion coefficient for density \varphi at location \mathbf{r}, and \nabla represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear. The equation above applies wh ...




Tridiagonal Matrix Algorithm
In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as
:a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,
where a_1 = 0 and c_n = 0. In matrix form,
:\begin{pmatrix}
b_1 & c_1 &        &        & 0 \\
a_2 & b_2 & c_2    &        &   \\
    & a_3 & b_3    & \ddots &   \\
    &     & \ddots & \ddots & c_{n-1} \\
0   &     &        & a_n    & b_n
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{pmatrix}.
For such systems, the solution can be obtained in O(n) operations instead of the O(n^3) required by Gaussian elimination. A first sweep eliminates the a_i's, and then an (abbreviated) backward substitution produces the solution. Examples of such matrices commonly arise fr ...
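A compact sketch of the forward-elimination / back-substitution sweep described above (the array layout and example values are assumptions; a, b, c, d follow the notation of the excerpt, with a[0] and c[-1] unused):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve the tridiagonal system a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i.

    a, b, c, d are length-n arrays; a[0] and c[-1] are ignored,
    corresponding to a_1 = 0 and c_n = 0 in the text.
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward sweep eliminates a_i
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # abbreviated backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative 4-by-4 system whose exact solution is [1, 1, 1, 1].
a = np.array([0.0, 1.0, 1.0, 1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([1.0, 1.0, 1.0, 0.0])
d = np.array([5.0, 6.0, 6.0, 5.0])
print(thomas(a, b, c, d))
```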


Tridiagonal
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
:\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}.
The determinant of a tridiagonal matrix is given by the continuant of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm. Properties: a tridiagonal matrix is a matrix that is both an upper and a lower Hessenberg matrix. In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n (the dimension of the tridiagonal matrix). Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one o ...


Temporal Discretization
Temporal discretization is a mathematical technique applied to transient problems that occur in the fields of applied physics and engineering. Transient problems are often solved by conducting simulations using computer-aided engineering (CAE) packages, which require discretizing the governing equations in both space and time. Such problems are unsteady (e.g. flow problems), and therefore require solutions in which position varies as a function of time. Temporal discretization involves the integration of every term in the governing equations over a time step (\Delta t). The spatial domain can be discretized to produce a semi-discrete form:
:\frac{\partial \varphi}{\partial t}(x, t) = F(\varphi).
If the discretization is done using backward differences, the first-order temporal discretization is given as
:\frac{\varphi^{n+1} - \varphi^{n}}{\Delta t} = F(\varphi),
and the second-order discretization is given as
:\frac{3\varphi^{n+1} - 4\varphi^{n} + \varphi^{n-1}}{2\,\Delta t} = F(\varphi),
where
* \varphi is a scalar quantity,
* n + 1 is the value at the next time level, t + \Delta t,
* n is the value ...
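A brief sketch (the test problem, start-up value, and step size are assumptions) applying the second-order backward discretization above to d\varphi/dt = F(\varphi) = -\lambda\varphi, where the implicit update can be solved in closed form:

```python
import numpy as np

# Second-order backward time stepping for d(phi)/dt = -lam * phi.
# Test problem and parameters are illustrative assumptions.
lam, dt, n_steps = 2.0, 0.05, 20
phi_exact = lambda t: np.exp(-lam * t)

phi_prev = phi_exact(0.0)    # phi^{n-1}
phi_curr = phi_exact(dt)     # phi^{n}; start-up value taken from the exact solution
for n in range(1, n_steps):
    # (3*phi^{n+1} - 4*phi^{n} + phi^{n-1}) / (2*dt) = -lam * phi^{n+1}
    phi_next = (4.0 * phi_curr - phi_prev) / (3.0 + 2.0 * lam * dt)
    phi_prev, phi_curr = phi_curr, phi_next

t_final = n_steps * dt
print(phi_curr, phi_exact(t_final))   # close agreement, second order in dt
```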


Forward Euler Method
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis (published 1768–1770). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis for constructing more complex methods, e.g., predictor–corrector methods. Informal geometrical description: consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential e ...
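A minimal sketch (the test problem and step size are assumptions) of the forward Euler update y_{n+1} = y_n + h f(t_n, y_n):

```python
import numpy as np

def euler(f, t0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Illustrative problem y' = y, y(0) = 1; exact solution is exp(t).
print(euler(lambda t, y: y, 0.0, 1.0, 0.01, 100), np.exp(1.0))
```

Halving the step size roughly halves the global error, consistent with the first-order accuracy stated above.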