Poisson Integral Formula
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson. Poisson kernels commonly find applications in control theory and two-dimensional problems in electrostatics. In practice, the definition of the Poisson kernel is often extended to ''n''-dimensional problems.
Two-dimensional Poisson kernels
On the unit disc
In the complex plane, the Poisson kernel for the unit disc is given by
P_r(\theta) = \sum_{n=-\infty}^\infty r^{|n|} e^{in\theta} = \frac{1-r^2}{1 - 2r\cos\theta + r^2} = \operatorname{Re}\left(\frac{1 + re^{i\theta}}{1 - re^{i\theta}}\right), \ \ \ 0 \le r < 1.
This can be thought of in two ways: either as a function of ''r'' and ''θ'', or as a family of functions of ''θ'' indexed by ''r''. …
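The three expressions for P_r(\theta) can be checked against one another numerically. The Python sketch below is purely illustrative (the function names are ad hoc, not from the article): it evaluates the truncated two-sided series, the closed form, and the real-part form at a sample point, and the results agree up to truncation error.

```python
import numpy as np

def poisson_kernel_series(r, theta, n_terms=200):
    """Truncated two-sided series: sum over n of r^|n| * e^(i n theta)."""
    n = np.arange(-n_terms, n_terms + 1)
    return np.real(np.sum(r ** np.abs(n) * np.exp(1j * n * theta)))

def poisson_kernel_closed(r, theta):
    """Closed form (1 - r^2) / (1 - 2 r cos(theta) + r^2)."""
    return (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)

def poisson_kernel_re(r, theta):
    """Real-part form Re((1 + r e^(i theta)) / (1 - r e^(i theta)))."""
    z = r * np.exp(1j * theta)
    return np.real((1 + z) / (1 - z))

r, theta = 0.7, 1.3
print(poisson_kernel_series(r, theta))
print(poisson_kernel_closed(r, theta))
print(poisson_kernel_re(r, theta))   # all three values coincide
```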
Potential Theory
In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that the two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result …
Maximum Principle
In the mathematical fields of differential equations and geometric analysis, the maximum principle is one of the most useful and best known tools of study. Solutions of a differential inequality in a domain ''D'' satisfy the maximum principle if they achieve their maxima at the boundary of ''D''. The maximum principle enables one to obtain information about solutions of differential equations without any explicit knowledge of the solutions themselves. In particular, the maximum principle is a useful tool in the numerical approximation of solutions of ordinary and partial differential equations and in the determination of bounds for the errors in such approximations. In a simple two-dimensional case, consider a function ''u'' of two variables such that
:\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0.
The weak maximum principle, in this setting, says that for any open precompact subset ''M'' of the domain of ''u'', the maximum of ''u'' on the closure of ''M'' is achieved on the boundary of ''M''. The strong maximum principle says that, unless ''u'' is a constant function, the maximum cannot also be achieved anywhere in the interior of ''M''.
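As a quick numerical illustration of the weak maximum principle (a sketch only, not part of the article), one can sample a harmonic function such as u(x, y) = x^2 - y^2 on a closed square and confirm that its maximum over the closure is attained on the boundary.

```python
import numpy as np

# u(x, y) = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0.
def u(x, y):
    return x**2 - y**2

# Sample the closed unit square [0, 1] x [0, 1] on a fine grid.
xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
values = u(X, Y)

# Boundary points are those with x or y equal to 0 or 1.
on_boundary = (X == 0) | (X == 1) | (Y == 0) | (Y == 1)

print("max over closure :", values.max())
print("max over boundary:", values[on_boundary].max())
# The two maxima agree, as the weak maximum principle predicts.
```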
Lp Space
In mathematics, the L^p spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz. L^p spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines.
Preliminaries
The p-norm in finite dimensions
The Euclidean length of a vector x = (x_1, x_2, \dots, x_n) in the n-dimensional real vector space \Reals^n is given by the Euclidean norm:
\|x\|_2 = \left(x_1^2 + x_2^2 + \dotsb + x_n^2\right)^{1/2}.
The Euclidean distance between two points x and y is the length \|x - y\|_2 of the straight line …
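The finite-dimensional p-norm generalizes the Euclidean norm by replacing the exponent 2 with any p >= 1. The short Python sketch below (illustrative only) computes the p-norm directly from the definition and compares it with numpy.linalg.norm.

```python
import numpy as np

def p_norm(x, p):
    """p-norm of a vector: (|x_1|^p + ... + |x_n|^p)^(1/p), for p >= 1."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 12.0])
for p in (1, 2, 3):
    print(p, p_norm(x, p), np.linalg.norm(x, ord=p))  # the two computations agree

# p = 2 recovers the Euclidean length; as p grows, the p-norm tends to max|x_i|.
print("sup norm:", np.linalg.norm(x, ord=np.inf))
```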
Möbius Transformation
In geometry and complex analysis, a Möbius transformation of the complex plane is a rational function of the form
f(z) = \frac{az + b}{cz + d}
of one complex variable z; here the coefficients a, b, c, d are complex numbers satisfying ad - bc \ne 0. Geometrically, a Möbius transformation can be obtained by first applying the inverse stereographic projection from the plane to the unit sphere, moving and rotating the sphere to a new location and orientation in space, and then applying a stereographic projection to map from the sphere back to the plane. These transformations preserve angles, map every straight line to a line or circle, and map every circle to a line or circle. The Möbius transformations are the projective transformations of the complex projective line. They form a group called the Möbius group, which is the projective linear group \mathrm{PGL}(2,\Complex). Together with its subgroups, it has numerous applications in mathematics and physics. Möbius geometries and t …
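A convenient computational fact is that composing Möbius transformations corresponds to multiplying their 2x2 coefficient matrices. The Python sketch below (an illustration with ad hoc names, not from the article) checks this identity at a sample point.

```python
import numpy as np

def mobius(M, z):
    """Apply the Möbius transformation with coefficient matrix M = [[a, b], [c, d]] to z."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

A = np.array([[1, 2j], [0, 1]])   # z -> z + 2i  (a translation)
B = np.array([[0, 1], [1, 0]])    # z -> 1/z     (an inversion)

z = 0.3 + 0.4j
lhs = mobius(A, mobius(B, z))     # apply B first, then A
rhs = mobius(A @ B, z)            # apply the transformation of the matrix product
print(lhs, rhs)                   # identical: composition <-> matrix multiplication
```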
Upper Half-plane
In mathematics, the upper half-plane is the set of points (x, y) in the Cartesian plane with y > 0. The lower half-plane is the set of points with y < 0 instead. Arbitrary oriented half-planes can be obtained via a planar rotation. Half-planes are an example of two-dimensional half-space. A half-plane can be split in two quadrants.
Affine geometry
The affine transformations of the upper half-plane include
# shifts (x,y)\mapsto (x+c,y), c\in\mathbb{R}, and
# dilations (x,y)\mapsto (\lambda x,\lambda y), \lambda > 0.
Proposition: Let A and B be semicircles in the upper half-plane with centers on the boundary. Then there is an affine mapping that takes A to B.
:Proof: First shift the center of A to the origin. Then take \lambda=(\text{diameter of }B)/(\text{diameter of }A) and dilate. Then shift the origin to the center of B. (A numerical check of this construction is sketched below.)
Inversive geometry
Definition: \mathcal{Z} := \left\{ (\cos\theta\cos\theta,\ \cos\theta\sin\theta) : 0 \le \theta \le \pi \right\}. \mathcal{Z} can be recognized as the circle of radius \tfrac{1}{2} centered at \left(\tfrac{1}{2},0\right), and as the polar plot of \rho(\theta) = \cos \theta.
Proposition: … are collinear points. In …
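The proof of the proposition about semicircles is constructive, and the Python sketch below carries it out for two specific semicircles (the centers and radii are made up for illustration): it builds the shift, dilate, shift map and verifies that sampled points of A land on B.

```python
import numpy as np

# Semicircle A: center (2, 0), radius 1.  Semicircle B: center (7, 0), radius 3.
center_a, radius_a = 2.0, 1.0
center_b, radius_b = 7.0, 3.0

def affine_map(x, y):
    """Shift A's center to the origin, dilate by radius_b/radius_a, shift to B's center."""
    lam = radius_b / radius_a          # same as the ratio of diameters
    x, y = x - center_a, y             # shift
    x, y = lam * x, lam * y            # dilate
    return x + center_b, y             # shift again

# Sample points on semicircle A and check that their images lie on semicircle B.
theta = np.linspace(0.0, np.pi, 50)
ax, ay = center_a + radius_a * np.cos(theta), radius_a * np.sin(theta)
bx, by = affine_map(ax, ay)
print(np.allclose((bx - center_b) ** 2 + by ** 2, radius_b ** 2))  # True
```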
Conformal Map
In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths. More formally, let U and V be open subsets of \mathbb{R}^n. A function f:U\to V is called conformal (or angle-preserving) at a point u_0\in U if it preserves angles between directed curves through u_0, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix. For mappings in two dimensions, …
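The Jacobian criterion can be checked numerically for a holomorphic map such as f(z) = z^2, viewed as the planar map (x, y) -> (x^2 - y^2, 2xy). The Python sketch below (illustrative only) estimates the Jacobian at a point and confirms that it factors as a positive scalar times a rotation matrix.

```python
import numpy as np

def f(x, y):
    """Planar form of the holomorphic map f(z) = z^2: (x, y) -> (x^2 - y^2, 2xy)."""
    return np.array([x**2 - y**2, 2 * x * y])

def jacobian(x, y, h=1e-6):
    """Central-difference Jacobian of f at (x, y)."""
    dfx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # column df/dx
    dfy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # column df/dy
    return np.column_stack([dfx, dfy])

J = jacobian(1.0, 2.0)
scale = np.sqrt(np.linalg.det(J))   # positive scalar factor
R = J / scale                       # candidate rotation matrix
print(np.allclose(R @ R.T, np.eye(2)), np.isclose(np.linalg.det(R), 1.0))  # True True
```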
Banach Space
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly. Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space". Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
Definition
A Banach space is a complete nor …
Hardy Space
In complex analysis, the Hardy spaces (or Hardy classes) H^p are spaces of holomorphic functions on the unit disk or upper half plane. They were introduced by Frigyes Riesz, who named them after G. H. Hardy because of an earlier paper of Hardy's. In real analysis Hardy spaces are spaces of distributions on the real n-space \mathbb{R}^n, defined (in the sense of distributions) as boundary values of the holomorphic functions. Hardy spaces are related to the ''Lp'' spaces. For 1 \leq p < \infty these Hardy spaces are subsets of ''Lp'' spaces, while for …
Antiholomorphic
In mathematics, antiholomorphic functions (also called antianalytic functions; see the Encyclopedia of Mathematics, Springer and the European Mathematical Society, https://encyclopediaofmath.org/wiki/Anti-holomorphic_function, adapted from an original article by E. D. Solomentsev) are a family of functions closely related to but distinct from holomorphic functions. A function of the complex variable z defined on an open set in the complex plane is said to be antiholomorphic if its derivative with respect to \bar z exists in the neighbourhood of each and every point in that set, where \bar z is the complex conjugate of z. A definition of antiholomorphic function follows: "A function f(z) = u + i v of one or more complex variables z = \left(z_1, \dots, z_n\right) \in \Complex^n is said to be anti-holomorphic if (and only if) it is the complex conjugate of a holomorphic function \overline{f(z)} …"
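In terms of Wirtinger derivatives, a function is antiholomorphic exactly when \partial f/\partial z = 0 (while \partial f/\partial\bar z is in general nonzero). The Python sketch below (illustrative only) checks this numerically for f(z) = \bar z^2, the complex conjugate of the holomorphic map z \mapsto z^2.

```python
import numpy as np

def f(z):
    """An antiholomorphic function: the conjugate of the holomorphic map z -> z^2."""
    return np.conj(z) ** 2

def wirtinger(fun, z, h=1e-6):
    """Finite-difference Wirtinger derivatives (df/dz, df/dzbar) at z."""
    fx = (fun(z + h) - fun(z - h)) / (2 * h)            # d/dx
    fy = (fun(z + 1j * h) - fun(z - 1j * h)) / (2 * h)  # d/dy
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

dz, dzbar = wirtinger(f, 1.0 + 2.0j)
print(abs(dz))   # ~0: f does not depend on z, so it is antiholomorphic
print(dzbar)     # ~2*conj(z) = 2 - 4j: the derivative with respect to zbar
```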
Holomorphic Function
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space \Complex^n. The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (is ''analytic''). Holomorphic functions are the central objects of study in complex analysis. Though the term ''analytic function'' is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis. Holomorphic functions are also sometimes referred to …
Absolutely Convergent
In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series \textstyle\sum_{n=0}^\infty a_n is said to converge absolutely if \textstyle\sum_{n=0}^\infty \left|a_n\right| = L for some real number \textstyle L. Similarly, an improper integral of a function, \textstyle\int_0^\infty f(x)\,dx, is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if \textstyle\int_0^\infty |f(x)|\,dx = L. A convergent series that is not absolutely convergent is called conditionally convergent. Absolute convergence is important for the study of infinite series, because its definition guarantees that a series will have some "nice" behaviors of finite sums that not all convergent series possess. For instance, rearrangements do not change the value of the sum, which is not necessarily true for conditionally convergent series.
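A standard contrast, sketched numerically in Python below (illustration only): the alternating harmonic series \sum (-1)^{n+1}/n converges but only conditionally, since the series of its absolute values is the divergent harmonic series, whereas \sum (-1)^{n+1}/n^2 converges absolutely.

```python
import numpy as np

N = 10**6
n = np.arange(1, N + 1)
signs = (-1.0) ** (n + 1)

# Alternating harmonic series: converges (to ln 2), but only conditionally,
# because the series of absolute values is the divergent harmonic series.
print(np.sum(signs / n), np.log(2))       # partial sum ~ 0.693..., matches ln 2
print(np.sum(1.0 / n))                    # ~ 14.39, and keeps growing with N

# Alternating series with terms 1/n^2: converges absolutely,
# since the absolute values sum to the finite value pi^2 / 6.
print(np.sum(signs / n**2))               # ~ pi^2 / 12
print(np.sum(1.0 / n**2), np.pi**2 / 6)   # finite limit
```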
Abel's Theorem
In mathematics, Abel's theorem for power series relates a limit of a power series to the sum of its coefficients. It is named after Norwegian mathematician Niels Henrik Abel, who proved it in 1826.
Theorem
Let the Taylor series G(x) = \sum_{k=0}^\infty a_k x^k be a power series with real coefficients a_k with radius of convergence 1. Suppose that the series \sum_{k=0}^\infty a_k converges. Then G(x) is continuous from the left at x = 1, that is, \lim_{x \to 1^-} G(x) = \sum_{k=0}^\infty a_k. The same theorem holds for complex power series G(z) = \sum_{k=0}^\infty a_k z^k, provided that z \to 1 entirely within a single ''Stolz sector'', that is, a region of the open unit disk where |1-z| \leq M(1-|z|) for some fixed finite M > 1. Without this restriction, the limit may fail to exist: for example, there is a power series that converges to 0 at z = 1 but is unbounded near points of the unit circle arbitrarily close to 1, so its value at z = 1 is not the limit as z tends to 1 in the whole open disk. Note that G(z …
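As a numerical illustration (a Python sketch, not part of the article): with coefficients a_k = (-1)^{k+1}/k for k \ge 1, the power series is G(x) = \ln(1+x), and Abel's theorem gives \lim_{x \to 1^-} G(x) = \sum a_k = \ln 2.

```python
import numpy as np

k = np.arange(1, 200001)
a = (-1.0) ** (k + 1) / k            # coefficients of ln(1 + x) = sum a_k x^k

print("sum of coefficients:", np.sum(a), "ln 2:", np.log(2))
for x in (0.9, 0.99, 0.999, 0.9999):
    G = np.sum(a * x**k)             # truncated power series evaluated at x < 1
    print(x, G)                      # approaches ln 2 ~ 0.693147 as x -> 1 from the left
```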