Bubnov–Galerkin Method

In mathematics, in the area of numerical analysis, Galerkin methods, named after the Russian mathematician Boris Galerkin, convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.

Often, when referring to a Galerkin method, one also gives the name along with the typical assumptions and approximation methods used:
* The Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive-definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy, and the approximate solution is a linear combination of the given set of basis functions (A. Ern, J. L. Guermond, ''Theory and Practice of Finite Elements'', Springer, 2004).
* The Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and replaces the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, the Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator.
* The Petrov–Galerkin method (after Georgii I. Petrov; see "Georgii Ivanovich Petrov (on his 100th birthday)", Fluid Dynamics, May 2012, Volume 47, Issue 3, pp. 289–291, DOI 10.1134/S0015462812030015) allows the basis functions used for the orthogonality constraints (called test basis functions) to differ from the basis functions used to approximate the solution. The Petrov–Galerkin method can be viewed as an extension of the Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation.

Examples of Galerkin methods are:
* the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method (S. Brenner, R. L. Scott, ''The Mathematical Theory of Finite Element Methods'', 2nd edition, Springer, 2005; P. G. Ciarlet, ''The Finite Element Method for Elliptic Problems'', North-Holland, 1978),
* the boundary element method for solving integral equations,
* Krylov subspace methods (Y. Saad, ''Iterative Methods for Sparse Linear Systems'', 2nd edition, SIAM, 2003).


Example: matrix linear system

We first introduce and illustrate the Galerkin method as being applied to a system of linear equations A\mathbf x = \mathbf b with the following symmetric and positive definite matrix
:A = \begin{pmatrix} 2 & 0 & 0\\ 0 & 2 & 1\\ 0 & 1 & 2 \end{pmatrix}
and the solution and right-hand-side vectors
:\mathbf x = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf b = \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}.
Let us take
:V = \begin{pmatrix} 0 & 0\\ 1 & 0\\ 0 & 1 \end{pmatrix},
then the matrix of the Galerkin equation is
:V^* A V = \begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix},
the right-hand-side vector of the Galerkin equation is
:V^* \mathbf b = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
so that we obtain the solution vector
:\mathbf y = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
to the Galerkin equation \left(V^* A V\right) \mathbf y = V^* \mathbf b, which we finally uplift to determine the approximate solution to the original equation as
:V \mathbf y = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
In this example, our original Hilbert space is actually the 3-dimensional Euclidean space \mathbb{R}^3 equipped with the standard scalar product (\mathbf u, \mathbf v) = \mathbf u^T \mathbf v, our 3-by-3 matrix A defines the bilinear form a(\mathbf u, \mathbf v) = \mathbf u^T A \mathbf v, and the right-hand-side vector \mathbf b defines the bounded linear functional f(\mathbf v) = \mathbf b^T \mathbf v. The columns
:\mathbf e_1 = \begin{pmatrix} 0\\ 1\\ 0 \end{pmatrix}, \quad \mathbf e_2 = \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix},
of the matrix V form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix V^* A V are a(\mathbf e_j, \mathbf e_i),\, i, j = 1, 2, while the components of the right-hand-side vector V^* \mathbf b of the Galerkin equation are f(\mathbf e_i),\, i = 1, 2. Finally, the approximate solution V \mathbf y is obtained from the components of the solution vector \mathbf y of the Galerkin equation and the basis as \sum_{j=1}^2 y_j \mathbf e_j.
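The same computation can be carried out numerically. The following sketch is only an illustration, not part of the original presentation; it assumes NumPy is available and reproduces the projection, the solve of the small Galerkin system, and the uplift back to the original space.

```python
import numpy as np

# Original system A x = b with a symmetric positive definite matrix
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 0.0, 0.0])

# Columns of V span the two-dimensional subspace used for the projection
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Galerkin (projected) system (V^T A V) y = V^T b
A_n = V.T @ A @ V          # 2-by-2 Galerkin matrix
b_n = V.T @ b              # projected right-hand side
y = np.linalg.solve(A_n, b_n)

# Uplift the coarse solution back to the original space
x_n = V @ y

print(A_n)   # [[2. 1.] [1. 2.]]
print(b_n)   # [0. 0.]
print(x_n)   # [0. 0. 0.], the Galerkin approximation to x = (1, 0, 0)
```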


Linear equation in a Hilbert space


Weak formulation of a linear equation

Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space V, namely,
: find u\in V such that for all v\in V, \quad a(u,v) = f(v).
Here, a(\cdot,\cdot) is a bilinear form (the exact requirements on a(\cdot,\cdot) will be specified later) and f is a bounded linear functional on V.
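For concreteness, a standard model problem (added here purely as an illustration; it is not specific to this article) fitting this abstract framework is the two-point boundary value problem -u'' = g on (0, 1) with u(0) = u(1) = 0. Multiplying by a test function v vanishing at the endpoints and integrating by parts leads to the weak formulation on the Hilbert space V = H_0^1(0, 1) with
:a(u, v) = \int_0^1 u'(x)\, v'(x)\, dx, \qquad f(v) = \int_0^1 g(x)\, v(x)\, dx,
so the problem reads: find u \in H_0^1(0, 1) such that a(u, v) = f(v) for all v \in H_0^1(0, 1).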


Galerkin dimension reduction

Choose a subspace V_n \subset V of dimension ''n'' and solve the projected problem:
: Find u_n\in V_n such that for all v_n\in V_n, \quad a(u_n,v_n) = f(v_n).
We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute u_n as a finite linear combination of the basis vectors in V_n.


Galerkin orthogonality

The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since V_n \subset V, we can use v_n as a test vector in the original equation. Subtracting the two, we obtain the Galerkin orthogonality relation for the error \epsilon_n = u - u_n, the difference between the solution of the original problem, u, and the solution of the Galerkin equation, u_n:
:a(\epsilon_n, v_n) = a(u,v_n) - a(u_n, v_n) = f(v_n) - f(v_n) = 0.
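Galerkin orthogonality can be observed directly in the matrix example above. The following NumPy sketch (an illustration only) checks that the error is a-orthogonal to every basis vector of the subspace, i.e. that V^T A (\mathbf x - V\mathbf y) vanishes.

```python
import numpy as np

# Data from the matrix example above
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 0.0, 0.0])
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

x = np.linalg.solve(A, b)                  # exact solution u
y = np.linalg.solve(V.T @ A @ V, V.T @ b)  # Galerkin solution in coordinates
x_n = V @ y                                # Galerkin solution in the original space

err = x - x_n                              # error epsilon_n = u - u_n
# Galerkin orthogonality: a(eps_n, e_i) = e_i^T A eps_n = 0 for each basis vector e_i
print(V.T @ (A @ err))                     # [0. 0.] up to rounding
```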


Matrix form of Galerkin's equation

Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically. Let e_1, e_2, \ldots, e_n be a basis for V_n. Then it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find u_n \in V_n such that
:a(u_n, e_i) = f(e_i) \quad i=1,\ldots,n.
We expand u_n with respect to this basis, u_n = \sum_{j=1}^n u_j e_j, and insert it into the equation above, to obtain
:a\left(\sum_{j=1}^n u_j e_j, e_i\right) = \sum_{j=1}^n u_j a(e_j, e_i) = f(e_i) \quad i=1,\ldots,n.
This previous equation is actually a linear system of equations Au=f, where
:A_{ij} = a(e_j, e_i), \quad f_i = f(e_i).
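As a hedged sketch of how such a system is assembled in practice, the following Python code builds the Galerkin matrix A_{ij} = a(e_j, e_i) and load vector f_i = f(e_i) for the model two-point boundary value problem mentioned earlier, -u'' = g on (0, 1) with homogeneous Dirichlet conditions, using piecewise linear "hat" basis functions on a uniform mesh. The function names and the simple quadrature rule are illustrative choices, not taken from the article.

```python
import numpy as np

def assemble(n, rhs):
    """Assemble the Galerkin system for -u'' = rhs on (0, 1), u(0) = u(1) = 0,
    with n piecewise linear hat functions on a uniform mesh."""
    h = 1.0 / (n + 1)                 # mesh width; interior nodes x_1, ..., x_n
    x = np.linspace(h, 1.0 - h, n)    # interior node positions
    # Stiffness matrix: a(e_j, e_i) = \int_0^1 e_j'(t) e_i'(t) dt (tridiagonal)
    A = (np.diag(2.0 * np.ones(n)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1)) / h
    # Load vector: f_i = \int_0^1 rhs(t) e_i(t) dt, approximated here by
    # the simple rule rhs(x_i) * h (adequate for a smooth right-hand side)
    f = rhs(x) * h
    return A, f, x

A, f, x = assemble(n=50, rhs=lambda t: np.pi**2 * np.sin(np.pi * t))
u = np.linalg.solve(A, f)             # coefficients of u_n in the hat basis
# Exact solution is sin(pi x); the nodal error is small and shrinks with the mesh
print(np.max(np.abs(u - np.sin(np.pi * x))))
```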


Symmetry of the matrix

Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form a(\cdot,\cdot) is symmetric.


Analysis of Galerkin methods

Here, we will restrict ourselves to symmetric bilinear forms, that is,
:a(u,v) = a(v,u).
While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case. The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution u_n. The analysis will mostly rest on two properties of the bilinear form, namely
* Boundedness: for all u,v\in V holds
*:a(u,v) \le C \|u\| \, \|v\| for some constant C>0
* Ellipticity: for all u\in V holds
*:a(u,u) \ge c\, \|u\|^2 for some constant c>0.
By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in its weak formulation. All norms in the following sections will be norms for which the above inequalities hold (these norms are often called energy norms).
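In the finite-dimensional matrix example above, with the Euclidean norm, the sharpest boundedness and ellipticity constants of a symmetric positive definite matrix are its extreme eigenvalues. The following NumPy sketch (illustrative only) computes them.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# For a symmetric positive definite matrix and the Euclidean norm:
#   u^T A v <= lambda_max ||u|| ||v||   (boundedness, C = lambda_max)
#   u^T A u >= lambda_min ||u||^2       (ellipticity,  c = lambda_min)
eigvals = np.linalg.eigvalsh(A)   # ascending eigenvalues
c, C = eigvals[0], eigvals[-1]
print(c, C)                       # 1.0 and 3.0 for this matrix
```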


Well-posedness of the Galerkin equation

Since V_n \subset V, boundedness and ellipticity of the bilinear form apply to V_n. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.


Quasi-best approximation (Céa's lemma)

The error u-u_n between the original and the Galerkin solution admits the estimate
:\|u-u_n\| \le \frac{C}{c} \inf_{v_n\in V_n} \|u-v_n\|.
This means that, up to the constant C/c, the Galerkin solution u_n is as close to the original solution u as any other vector in V_n. In particular, it will be sufficient to study approximation by spaces V_n, completely forgetting about the equation being solved.
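The estimate can be checked numerically for the matrix example above. The sketch below (again only an illustration assuming NumPy) compares the Euclidean-norm error of the Galerkin solution with the bound C/c times the best possible error within the subspace.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 0.0, 0.0])
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

x = np.linalg.solve(A, b)                        # exact solution
x_n = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)  # Galerkin solution

# Best possible error within the subspace: Euclidean projection of x onto span(V)
best = np.linalg.norm(x - V @ (V.T @ x))
eigvals = np.linalg.eigvalsh(A)
bound = (eigvals[-1] / eigvals[0]) * best        # (C/c) * inf ||x - v_n||

print(np.linalg.norm(x - x_n), "<=", bound)      # 1.0 <= 3.0 here
```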


Proof

Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (the inequalities) and Galerkin orthogonality (the equals sign in the middle), we have for arbitrary v_n\in V_n:
:c\|u-u_n\|^2 \le a(u-u_n, u-u_n) = a(u-u_n, u-v_n) \le C \|u-u_n\| \, \|u-v_n\|.
Dividing by c\|u-u_n\| and taking the infimum over all possible v_n yields the lemma.


Galerkin's best approximation property in the energy norm

For simplicity of presentation in the section above we have assumed that the bilinear form a(u, v) is symmetric and positive definite, which implies that it is a scalar product and the expression \|u\|_a = \sqrt{a(u,u)} is actually a valid vector norm, called the ''energy norm''. Under these assumptions one can easily prove in addition Galerkin's best approximation property in the energy norm. Using the Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain
:\|u-u_n\|_a^2 = a(u-u_n, u-u_n) = a(u-u_n, u-v_n) \le \|u-u_n\|_a \, \|u-v_n\|_a.
Dividing by \|u-u_n\|_a and taking the infimum over all possible v_n\in V_n proves that the Galerkin approximation u_n\in V_n is the best approximation in the energy norm within the subspace V_n \subset V, i.e. u_n\in V_n is nothing but the orthogonal projection, with respect to the scalar product a(u, v), of the solution u onto the subspace V_n.
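The best approximation property can also be verified numerically for the matrix example: no element of the subspace comes closer to the exact solution in the energy norm than the Galerkin solution. The following sketch (illustrative, assuming NumPy) samples random trial vectors in the subspace and compares energy-norm errors.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 0.0, 0.0])
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

x = np.linalg.solve(A, b)                        # exact solution
x_n = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)  # Galerkin solution

def energy_norm(v):
    return np.sqrt(v @ (A @ v))

galerkin_err = energy_norm(x - x_n)
# No other element of the subspace does better in the energy norm
trial_errs = [energy_norm(x - V @ rng.standard_normal(2)) for _ in range(1000)]
print(galerkin_err <= min(trial_errs) + 1e-12)   # True
```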


Galerkin method for stepped structures

I. Elishakoff, M. Amato, A. Marzani, P. A. Arvan, and J. N. Reddy studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed for obtaining accurate results.


History

The approach is usually credited to Boris Galerkin. The method was explained to the Western reader by Hencky and Duncan, among others. Its convergence was studied by Mikhlin and Leipholz. Its coincidence with the Fourier method was illustrated by Elishakoff et al. Its equivalence to Ritz's method for conservative problems was shown by Singer. Gander and Wanner showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. Elishakoff, Kaplunov and Kaplunov (Elishakoff, I., Julius Kaplunov, Elizabeth Kaplunov, 2020, "Galerkin's method was not developed by Ritz, contrary to the Timoshenko's statement", in Nonlinear Dynamics of Discrete and Continuous Systems (A. Abramyan, I. Andrianov and V. Gaiko, eds.), pp. 63-82, Springer, Berlin) showed that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.


See also

* Ritz method


External links

* Galerkin Method from MathWorld