Crank–Nicolson Method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the 1940s. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step \Delta t times the thermal diffusivity to the square of the space step, \Delta x^2, is large (typically, larger than 1/2 per von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used instead, which is both stable and immune to oscillations.
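As an illustration of how the method is typically applied, the following is a minimal Python sketch (not taken from the text above) of one Crank–Nicolson step for the one-dimensional heat equation u_t = \alpha u_{xx} on a uniform grid with fixed boundary values; the names (crank_nicolson_step, u, alpha, dt, dx) are illustrative choices, and a dense linear solve stands in for the tridiagonal solver that would normally be used.

import numpy as np

def crank_nicolson_step(u, alpha, dt, dx):
    """Advance u by one Crank-Nicolson step for u_t = alpha * u_xx.

    Solves (I - r/2 L) u_new = (I + r/2 L) u_old with r = alpha*dt/dx**2,
    where L is the standard second-difference operator; the first and last
    entries of u are treated as fixed (Dirichlet) boundary values.
    """
    n = len(u)
    r = alpha * dt / dx ** 2
    L = np.zeros((n, n))
    for i in range(1, n - 1):          # interior points only
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
    A = np.eye(n) - 0.5 * r * L        # implicit (unknown-time) side
    B = np.eye(n) + 0.5 * r * L        # explicit (known-time) side
    return np.linalg.solve(A, B @ u)

# Example: diffusion of a narrow Gaussian spike on [0, 1].
x = np.linspace(0.0, 1.0, 51)
u = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(100):
    u = crank_nicolson_step(u, alpha=1.0, dt=1e-3, dx=x[1] - x[0])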




Numerical Analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also in the life and social sciences such as economics, medicine, business, and even the arts. The current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.


Trapezoidal Rule (Differential Equations)
In numerical analysis and scientific computing, the trapezoidal rule is a numerical method for solving ordinary differential equations, derived from the trapezoidal rule for computing integrals. The trapezoidal rule is an implicit second-order method, which can be considered as both a Runge–Kutta method and a linear multistep method.

Method
Suppose that we want to solve the differential equation y' = f(t,y). The trapezoidal rule is given by the formula y_{n+1} = y_n + \tfrac{1}{2} h \big( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \big), where h = t_{n+1} - t_n is the step size. This is an implicit method: the value y_{n+1} appears on both sides of the equation, and to actually calculate it, we have to solve an equation which will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler method to get a fairly good estimate for the solution, which can be used as the initial guess of Newton's method. Cutting short, using only the guess from Euler's method is equivalent to performing Heun's method.
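A minimal Python sketch of this procedure, under illustrative names (trapezoidal_step, f, h) and with a simple fixed-point iteration standing in for Newton's method:

def trapezoidal_step(f, t, y, h, iterations=5):
    """One trapezoidal-rule step for y' = f(t, y).

    The implicit equation y_new = y + h/2 * (f(t, y) + f(t + h, y_new)) is
    solved by fixed-point iteration, started from a forward-Euler guess;
    stopping after the first correction reproduces Heun's method.
    """
    y_new = y + h * f(t, y)                      # Euler predictor
    for _ in range(iterations):
        y_new = y + 0.5 * h * (f(t, y) + f(t + h, y_new))
    return y_new

# Example: y' = -2y with y(0) = 1, stepped to t = 1 (exact value exp(-2)).
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = trapezoidal_step(lambda s, v: -2.0 * v, t, y, h)
    t += h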



Cartesian Grid
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks). Its opposite is an irregular grid. Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences, structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods. Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates (i\cdot dx, j\cdot dy) in 2D or (i\cdot dx, j\cdot dy, k\cdot dz) in 3D for some real numbers dx, dy, and dz representing the grid spacing.

Related grids
A Cartesian grid is a special case where the elements are unit squares or unit cubes, and the vertices are points on the integer lattice.
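A small Python sketch of this index-to-coordinate addressing on a 2D regular grid; the names (vertex_coordinates, nx, ny) are ad hoc:

def vertex_coordinates(i, j, dx, dy):
    """Coordinates of the vertex with index (i, j) on a regular 2D grid."""
    return (i * dx, j * dy)

nx, ny, dx, dy = 4, 3, 0.5, 0.25       # number of cells and grid spacings
vertices = {(i, j): vertex_coordinates(i, j, dx, dy)
            for i in range(nx + 1) for j in range(ny + 1)}
# e.g. vertices[(2, 1)] == (1.0, 0.25)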



Advection
In the fields of physics, engineering, and earth sciences, advection is the transport of a substance or quantity by bulk motion of a fluid. The properties of that substance are carried with it. Generally the majority of the advected substance is also a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved extensive quantity can be advected by a fluid that can hold or contain the quantity or substance. During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution over space.
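For reference, the transport just described is commonly written (in one standard conservative form) as the advection equation for a scalar field \psi carried by a velocity field \mathbf{u}:
\frac{\partial \psi}{\partial t} + \nabla \cdot (\psi\, \mathbf{u}) = 0,
which, for an incompressible (divergence-free) velocity field, reduces to \frac{\partial \psi}{\partial t} + \mathbf{u} \cdot \nabla \psi = 0.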


Matrix Inversion
In linear algebra, an invertible matrix (also called non-singular, non-degenerate or regular) is a square matrix that has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by the inverse to undo the operation. An invertible matrix multiplied by its inverse yields the identity matrix. Invertible matrices are the same size as their inverse.

Definition
An n-by-n square matrix \mathbf{A} is called invertible if there exists an n-by-n square matrix \mathbf{B} such that \mathbf{AB} = \mathbf{BA} = \mathbf{I}_n, where \mathbf{I}_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix \mathbf{B} is uniquely determined by \mathbf{A}, and is called the (multiplicative) inverse of \mathbf{A}, denoted by \mathbf{A}^{-1}. Matrix inversion is the process of finding the matrix which, when multiplied by the original matrix, gives the identity matrix. Over a field, a square matrix that is not invertible is called singular or degenerate.
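A small Python check of the definition, using an arbitrary 2-by-2 matrix (the example itself is not from the text above):

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])            # det(A) = 1, so A is invertible
A_inv = np.linalg.inv(A)              # equals [[3., -1.], [-5., 2.]]
# Multiplying by the inverse on either side recovers the identity matrix.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))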



Finite Difference
A finite difference is a mathematical expression of the form f(x + b) - f(x + a). Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation. The difference operator, commonly denoted \Delta, is the operator that maps a function f to the function \Delta[f] defined by \Delta[f](x) = f(x+1) - f(x). A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences. In numerical analysis, finite differences are widely used for approximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives".
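A short Python illustration of the difference operator and of a difference quotient used as a derivative approximation; the function names and test values are arbitrary:

import math

def forward_difference(f, x, h=1.0):
    """f(x + h) - f(x); with h = 1 this is the operator Delta described above."""
    return f(x + h) - f(x)

# The difference quotient (f(x+h) - f(x)) / h approximates f'(x) for small h.
h = 1e-5
approx = forward_difference(math.sin, 1.0, h) / h
exact = math.cos(1.0)
# approx and exact agree to roughly five decimal places here.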


Diffusion Equation
The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances.

Statement
The equation is usually written as
\frac{\partial \phi(\mathbf{r},t)}{\partial t} = \nabla \cdot \big[ D(\phi, \mathbf{r})\, \nabla \phi(\mathbf{r},t) \big],
where \phi(\mathbf{r},t) is the density of the diffusing material at location \mathbf{r} and time t, D(\phi, \mathbf{r}) is the collective diffusion coefficient for density \phi at location \mathbf{r}, and \nabla represents the vector differential operator del. If the diffusion coefficient depends on the density, then the equation is nonlinear; otherwise it is linear.
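When the diffusion coefficient is a constant D (independent of the density and of position), the divergence form above reduces to the heat-equation form
\frac{\partial \phi(\mathbf{r},t)}{\partial t} = D\, \nabla^2 \phi(\mathbf{r},t),
where \nabla^2 is the Laplace operator.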




Tridiagonal Matrix Algorithm
In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as
a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,
where a_1 = 0 and c_n = 0. In matrix form,
\begin{pmatrix}
b_1 & c_1 &        &        & 0       \\
a_2 & b_2 & c_2    &        &         \\
    & a_3 & b_3    & \ddots &         \\
    &     & \ddots & \ddots & c_{n-1} \\
0   &     &        & a_n    & b_n
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{pmatrix}.
For such systems, the solution can be obtained in O(n) operations instead of the O(n^3) required by Gaussian elimination. A first sweep eliminates the a_i's, and then an (abbreviated) backward substitution produces the solution. Examples of such matrices commonly arise from the discretization of the 1D Poisson equation and from natural cubic spline interpolation.
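A plain Python sketch of the forward sweep and back substitution in the notation above (the function name thomas_solve is an ad hoc choice; no pivoting is performed, so the system is assumed well conditioned, e.g. diagonally dominant):

def thomas_solve(a, b, c, d):
    """Solve a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} = d_i for i = 1..n.

    The lists a, b, c, d use 0-based indexing; a[0] and c[-1] are ignored,
    corresponding to a_1 = 0 and c_n = 0 above.
    """
    n = len(d)
    cp = [0.0] * n                     # modified super-diagonal
    dp = [0.0] * n                     # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):              # forward sweep eliminates the a_i's
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # abbreviated backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a 3-by-3 system whose solution is x = [1, 1, 1].
a = [0.0, 1.0, 1.0]                    # sub-diagonal   (a[0] unused)
b = [2.0, 2.0, 2.0]                    # main diagonal
c = [1.0, 1.0, 0.0]                    # super-diagonal (c[-1] unused)
d = [3.0, 4.0, 3.0]
x = thomas_solve(a, b, c, d)           # -> [1.0, 1.0, 1.0]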


Tridiagonal
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}.
The determinant of a tridiagonal matrix is given by the continuant of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

Properties
A tridiagonal matrix is a matrix that is both upper and lower Hessenberg. In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties.
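Concretely, in a common labelling of the diagonals (sub-diagonal entries a_i, main-diagonal entries b_i, super-diagonal entries c_i), writing f_k for the determinant of the leading k-by-k principal submatrix, the continuant satisfies the three-term recurrence
f_k = b_k f_{k-1} - a_k c_{k-1} f_{k-2}, \qquad f_0 = 1, \quad f_1 = b_1,
so the determinant of an n-by-n tridiagonal matrix can be computed in O(n) operations.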


Temporal Discretization
In applied physics and engineering, temporal discretization is a mathematical technique for solving transient problems, such as flow problems. Transient problems are often solved using computer-aided engineering (CAE) simulations, which require discretizing the governing equations in both space and time. Temporal discretization involves the integration of every term in various equations over a time step (\Delta t). The spatial domain can be discretized to produce a semi-discrete form:
\frac{\partial \varphi}{\partial t}(x, t) = F(\varphi).
The first-order temporal discretization using backward differences is
\frac{\varphi^{n+1} - \varphi^{n}}{\Delta t} = F(\varphi),
and the second-order discretization is
\frac{3\varphi^{n+1} - 4\varphi^{n} + \varphi^{n-1}}{2\,\Delta t} = F(\varphi),
where
* \varphi is a scalar,
* n + 1 denotes the value at the next time, t + \Delta t,
* n denotes the value at the current time, t,
* n - 1 denotes the value at the previous time, t - \Delta t.
The function F(\varphi) is evaluated using implicit and explicit time integration.

Description
Temporal discretization is done by integrating over time on the general discretized equation.
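A minimal Python sketch of both updates for the linear test problem d\varphi/dt = \lambda\varphi, chosen purely as an illustration because the implicit equations can then be solved in closed form; the names lam, dt and steps are ad hoc:

lam, dt, steps = -1.0, 0.1, 10

# First-order (backward-difference) update:
#   (phi_next - phi) / dt = lam * phi_next   =>   phi_next = phi / (1 - lam*dt)
phi = 1.0
for _ in range(steps):
    phi = phi / (1.0 - lam * dt)

# Second-order update:
#   (3*phi_next - 4*phi + phi_prev) / (2*dt) = lam * phi_next
phi_prev, phi_cur = 1.0, 1.0 / (1.0 - lam * dt)   # one first-order step to start
for _ in range(steps - 1):
    phi_next = (4.0 * phi_cur - phi_prev) / (3.0 - 2.0 * lam * dt)
    phi_prev, phi_cur = phi_cur, phi_next
# Both phi and phi_cur approximate exp(lam * steps * dt) = exp(-1).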


Forward Euler Method
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., the predictor–corrector method.

Geometrical description
Purpose and why it works
Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation.
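A minimal Python sketch of the method (the names euler, f and h are illustrative):

def euler(f, t0, y0, h, steps):
    """Forward Euler for y' = f(t, y) with y(t0) = y0; returns (t, y) pairs."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(steps):
        y = y + h * f(t, y)            # follow the tangent line over one step
        t = t + h
        history.append((t, y))
    return history

# Example: y' = y with y(0) = 1, so y(1) should be close to e = 2.71828...
path = euler(lambda t, y: y, 0.0, 1.0, h=0.01, steps=100)
# path[-1][1] is about 2.7048; halving h roughly halves the error (first order).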



Partial Differential Equation
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives. The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x^2 - 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity, and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000.