In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function \varphi is called refinable with respect to the mask h if

:\varphi(x)=2\cdot\sum_{k=-\infty}^{\infty} h_k\cdot\varphi(2\cdot x-k).

This condition is called refinement equation, dilation equation or two-scale equation.

Using the convolution (denoted by a star, *) of a function with a discrete mask and the dilation operator D, one can write more concisely:

:\varphi=2\cdot D_{1/2} (h * \varphi)

That is, one obtains the function again if one convolves it with the discrete mask and then scales it back. There is a similarity to iterated function systems and de Rham curves.

The operator \varphi\mapsto 2\cdot D_{1/2} (h * \varphi) is linear, and a refinable function is an eigenfunction of that operator. Its normalization is not uniquely determined: if \varphi is a refinable function, then for every constant c the function c\cdot\varphi is refinable, too.

These functions play a fundamental role in wavelet theory as scaling functions.
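
For a concrete illustration (the hat function and its mask h = (1/4, 1/2, 1/4) are an example choice, compare the examples below), a minimal numerical check of the two-scale equation might look like this:

```python
import numpy as np

# Minimal sketch: verify the two-scale equation numerically for the hat
# function phi(x) = max(0, 1 - |x - 1|) on [0, 2] with mask h = (1/4, 1/2, 1/4).
# The particular function and mask are an illustrative choice.

def phi(x):
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

h = {0: 0.25, 1: 0.5, 2: 0.25}          # mask coefficients h_k, k = 0, 1, 2
x = np.linspace(-1.0, 3.0, 401)
rhs = 2.0 * sum(hk * phi(2.0 * x - k) for k, hk in h.items())
print(np.max(np.abs(phi(x) - rhs)))     # ~0: phi(x) = 2 * sum_k h_k phi(2x - k)
```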


Properties


Values at integral points

A refinable function is defined only implicitly. It may also be that there are several functions which are refinable with respect to the same mask. If \varphi is to have finite support, and the function values at integer arguments are wanted, then the two-scale equation becomes a system of simultaneous linear equations.

Let a be the minimum index and b be the maximum index of the non-zero elements of h. Then one obtains

:\begin{pmatrix} \varphi(a)\\ \varphi(a+1)\\ \vdots\\ \varphi(b) \end{pmatrix} = 2\cdot \begin{pmatrix} h_{a} & & & & & \\ h_{a+2} & h_{a+1} & h_{a} & & & \\ h_{a+4} & h_{a+3} & h_{a+2} & h_{a+1} & h_{a} & \\ \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ & h_{b} & h_{b-1} & h_{b-2} & h_{b-3} & h_{b-4} \\ & & & h_{b} & h_{b-1} & h_{b-2} \\ & & & & & h_{b} \end{pmatrix} \cdot \begin{pmatrix} \varphi(a)\\ \varphi(a+1)\\ \vdots\\ \varphi(b) \end{pmatrix}.

Using the discretization operator, call it Q here, and the transfer matrix of h, named T_h, this can be written concisely as

:Q\varphi = T_h\,Q\varphi.

This is again a fixed-point equation, but it can now be considered as an eigenvector-eigenvalue problem. That is, a finitely supported refinable function can exist only if T_h has the eigenvalue 1, although this condition alone does not guarantee existence.
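
A minimal sketch of how such an eigenvector computation might look; the mask h = (1/4, 1/2, 1/4) of the hat function and the normalization of the values to sum one are choices made for this illustration:

```python
import numpy as np

# Sketch: values of a refinable function at integer points, obtained as an
# eigenvector of the transfer matrix T_h for the eigenvalue 1.  The mask
# h = (1/4, 1/2, 1/4) with indices a=0..b=2 is that of the hat function.

h = np.array([0.25, 0.5, 0.25])   # mask h_a, ..., h_b
a, b = 0, 2                       # minimum and maximum index of the mask

n = b - a + 1
# Transfer matrix: (T_h)[i, j] = 2 * h_{a + 2*i - j}, zero outside the mask.
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        k = a + 2 * i - j
        if a <= k <= b:
            T[i, j] = 2.0 * h[k - a]

# Eigenvector of T_h for the eigenvalue 1 gives (phi(a), ..., phi(b)).
w, V = np.linalg.eig(T)
phi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
phi /= phi.sum()                  # normalize so that the values sum to 1

print(phi)                        # hat function: [0. 1. 0.]
```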


Values at dyadic points

From the values at integral points you can derive the values at dyadic points, i.e. points of the form k\cdot 2^{-j}, with k\in\Z and j\in\N:

:\varphi = D_{1/2} (2\cdot (h * \varphi))
:D_2 \varphi = 2\cdot (h * \varphi)
:Q(D_2 \varphi) = Q(2\cdot (h * \varphi)) = 2\cdot (h * Q\varphi)

The star denotes the convolution of a discrete filter with a function. With this step you can compute the values at points of the form \tfrac{k}{2}. By iteratively replacing \varphi by D_2 \varphi you get the values at all finer scales. Explicitly, the two-scale equation gives

:\varphi\left(\frac{k}{2^{j+1}}\right) = 2\cdot\sum_m h_m\cdot\varphi\left(\frac{k}{2^{j}} - m\right),

and the arguments on the right-hand side, \frac{k}{2^{j}} - m = \frac{k - m\cdot 2^{j}}{2^{j}}, are dyadic points of the coarser scale j, whose values are already known.
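
A possible implementation of such a refinement step, again using the hat-function mask; the helper refine and its index conventions are choices made for this sketch:

```python
import numpy as np

# Sketch: refine the values phi(k/2^j) to the values phi(k/2^(j+1)) using
#   phi(k/2^(j+1)) = 2 * sum_m h_m * phi(k/2^j - m).
# Mask and index conventions are illustrative assumptions (hat function).

h = np.array([0.25, 0.5, 0.25])   # mask with support a=0..b=2
a, b = 0, 2

def refine(values, j):
    """values[k] = phi(k / 2**j) for k = 0 .. b*2**j; returns the next scale."""
    n_new = b * 2 ** (j + 1) + 1
    new = np.zeros(n_new)
    for k in range(n_new):
        for m in range(a, b + 1):
            idx = k - m * 2 ** j          # since k/2^j - m = (k - m*2^j)/2^j
            if 0 <= idx < len(values):
                new[k] += 2 * h[m - a] * values[idx]
    return new

v = np.array([0.0, 1.0, 0.0])             # phi at the integers 0, 1, 2
for j in range(2):                         # two refinement steps
    v = refine(v, j)
print(v)   # phi at k/4, k = 0..8: [0. 0.25 0.5 0.75 1. 0.75 0.5 0.25 0.]
```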


Convolution

If \varphi is refinable with respect to h, and \psi is refinable with respect to g, then \varphi*\psi is refinable with respect to h*g.
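
For instance, the boxcar function on [0,1) has the mask (1/2, 1/2), and its self-convolution is the hat function; consistent with the property above, the hat-function mask used in the sketches here is the convolution of the two boxcar masks:

```python
import numpy as np

# Small illustration (example masks of my own choosing): the mask of the
# convolution of two refinable functions is the convolution of their masks.

h_box = np.array([0.5, 0.5])        # mask of the boxcar function on [0, 1)
h_hat = np.convolve(h_box, h_box)   # mask of boxcar * boxcar = hat function
print(h_hat)                        # [0.25 0.5 0.25]
```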


Differentiation

If \varphi is refinable with respect to h, and the derivative \varphi' exists, then \varphi' is refinable with respect to 2\cdot h. This can be interpreted as a special case of the convolution property, where one of the convolution operands is a derivative of the Dirac impulse.
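
Indeed, differentiating both sides of the two-scale equation and applying the chain rule gives

:\varphi'(x) = 2\cdot\sum_k h_k\cdot 2\cdot\varphi'(2x-k) = 2\cdot\sum_k (2\cdot h_k)\cdot\varphi'(2x-k),

so the mask of \varphi' is 2\cdot h.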


Integration

If \varphi is refinable with respect to h, and there is an antiderivative \Phi with \Phi(t) = \int_0^{t}\varphi(\tau)\,\mathrm{d}\tau, then the antiderivative t \mapsto \Phi(t) + c is refinable with respect to the mask \frac{1}{2}\cdot h, where the constant c must fulfill c\cdot \left(1 - \sum_j h_j\right) = \sum_j h_j \cdot \Phi(-j). If \varphi has bounded support, then we can interpret integration as convolution with the Heaviside function and apply the convolution law.
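
For example, the Heaviside function is an antiderivative of the Dirac impulse, which is refinable with respect to the mask \delta (see the examples below); accordingly, the Heaviside function is refinable with respect to \frac{1}{2}\cdot\delta.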


Scalar products

Computing the scalar products of two refinable functions and their translates can be broken down to the two above properties. Let T be the translation operator. It holds that

:\langle \varphi, T_k \psi\rangle = \langle \varphi * \psi^*, T_k\delta\rangle = (\varphi*\psi^*)(k),

where \psi^* is the adjoint of \psi with respect to convolution, i.e., the flipped and complex-conjugated version of \psi:

:\psi^*(t) = \overline{\psi(-t)}.

Because of the above property, \varphi*\psi^* is refinable with respect to h*g^*, and its values at integral arguments can be computed as eigenvectors of the transfer matrix. This idea can be easily generalized to integrals of products of more than two refinable functions.
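
The following sketch carries this out for \varphi = \psi = the hat function; the mask conventions and the normalization (the computed values are scaled to sum to one, which is correct in this particular example) are assumptions of the illustration:

```python
import numpy as np

# Sketch: the scalar products <phi, T_k psi> are the integer values of
# phi * psi^*, which is refinable with respect to h * g^*, so they can be
# read off an eigenvector of the transfer matrix of h * g^*.
# Example: phi = psi = hat function, h = g = (1/4, 1/2, 1/4) on indices 0..2.

h = np.array([0.25, 0.5, 0.25]);  a_h, b_h = 0, 2
g = np.array([0.25, 0.5, 0.25]);  a_g, b_g = 0, 2

# Adjoint mask g^*: flipped and complex conjugated, supported on -b_g..-a_g.
g_adj = np.conj(g[::-1])
# Mask of phi * psi^*: the convolution h * g^*, supported on a_h-b_g..b_h-a_g.
m = np.convolve(h, g_adj)
a, b = a_h - b_g, b_h - a_g
n = b - a + 1

# Transfer matrix of the product mask, (T_m)[i, j] = 2 * m_{a + 2*i - j}.
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        k = a + 2 * i - j
        if a <= k <= b:
            T[i, j] = 2.0 * np.real(m[k - a])   # real masks assumed

w, V = np.linalg.eig(T)
vals = np.real(V[:, np.argmin(np.abs(w - 1.0))])
vals /= vals.sum()   # scale so the values sum to 1 (valid for this example)

for k, v in zip(range(a, b + 1), vals):
    print(k, round(v, 6))   # <phi, T_k phi>: 2/3 at k = 0, 1/6 at k = +-1
```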


Smoothness

A refinable function usually has a fractal shape. The design of continuous or smooth refinable functions is not obvious. Before dealing with forcing smoothness it is necessary to measure the smoothness of refinable functions. Using the Villemoes machine one can compute the smoothness of refinable functions in terms of Sobolev exponents.

In a first step the refinement mask h is divided into a filter b, which is a power of the smoothness factor (1,1) (this is a binomial mask), and a rest q. Roughly speaking, the binomial mask b provides smoothness, while q represents a fractal component which reduces the smoothness again. The Sobolev exponent is then roughly the order of b minus the base-4 logarithm of the spectral radius of T_{q*q^*}.
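
A rough sketch of this recipe, under the conventions assumed here (binomial factors (1/2, 1/2), the factor-2 transfer matrix from above, and a base-4 logarithm), applied to the Daubechies D4 mask:

```python
import numpy as np

# Rough sketch of the Villemoes-style estimate described above; the exact
# conventions (binomial factor (1/2, 1/2), factor-2 transfer matrix, base-4
# logarithm) are assumptions of this illustration.
# Example mask: the Daubechies D4 scaling mask, normalized to sum 1.

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8.0

def transfer_matrix(m, a, b):
    """Transfer matrix (T_m)[i, j] = 2 * m_{a + 2*i - j} of a mask on a..b."""
    n = b - a + 1
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = a + 2 * i - j
            if a <= k <= b:
                T[i, j] = 2.0 * m[k - a]
    return T

# Divide out binomial factors (1/2, 1/2) as long as the division is exact.
q, N = h.copy(), 0
while len(q) > 1:
    quot, rem = np.polydiv(q, np.array([0.5, 0.5]))
    if np.max(np.abs(rem)) > 1e-10:
        break
    q, N = quot, N + 1

qq = np.convolve(q, q[::-1])          # mask of q * q^* (real q assumed)
a = -(len(q) - 1)                     # qq is supported on a .. -a
rho = np.max(np.abs(np.linalg.eigvals(transfer_matrix(qq, a, -a))))
print(N - np.log(rho) / np.log(4.0))  # approx. 1.0 for the D4 mask
```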


Generalization

The concept of refinable functions can be generalized to functions of more than one variable, that is, functions from \R^d \to \R. The simplest generalization is the tensor product. If \varphi and \psi are refinable with respect to h and g, respectively, then \varphi\otimes\psi is refinable with respect to h\otimes g.

The scheme can be generalized even more to different scaling factors with respect to different dimensions, or even to mixing data between dimensions. Instead of scaling the argument by a scalar factor like 2, the coordinates are transformed by a matrix M of integers. In order to let the scheme work, the absolute values of all eigenvalues of M must be larger than one. (Maybe it also suffices that \left|\det M\right|>1.) Formally the two-scale equation does not change very much:

:\varphi(x)=\left|\det M\right|\cdot\sum_{k\in\Z^d} h_k\cdot\varphi(M\cdot x-k)
:\varphi=\left|\det M\right|\cdot D_{M^{-1}} (h * \varphi)
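
A tiny illustration of the tensor-product case, with both factors chosen to be the hat-function mask:

```python
import numpy as np

# Small illustration (example masks of my own choosing): the tensor-product
# mask h (x) g of two one-dimensional masks.  With h = g = hat-function mask,
# phi (x) psi is the bilinear "tent" function on R^2, refinable with respect
# to this two-dimensional mask.

h = np.array([0.25, 0.5, 0.25])
g = np.array([0.25, 0.5, 0.25])
print(np.outer(h, g))   # 3x3 mask of the two-dimensional tent function
```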


Examples

* If the definition is extended to distributions, then the Dirac impulse is refinable with respect to the unit vector \delta, which is known as the Kronecker delta. The n-th derivative of the Dirac distribution is refinable with respect to 2^n\cdot\delta.
* The Heaviside function is refinable with respect to \frac{1}{2}\cdot\delta.
* The truncated power functions with exponent n are refinable with respect to \frac{1}{2^{n+1}}\cdot\delta.
* The triangular function is a refinable function. B-spline functions with successive integral nodes are refinable, because of the convolution theorem and the refinability of the characteristic function of the interval [0,1) (a boxcar function).
* All polynomial functions are refinable. For every refinement mask there is a polynomial that is uniquely defined up to a constant factor. For every polynomial of degree n there are many refinement masks that all differ by a mask of type v * (1,-1)^{n+1} for any mask v and the convolutional power (1,-1)^{n+1}.
* A rational function \varphi is refinable if and only if it can be represented using partial fractions as \varphi(x) = \sum_{i\in\Z} \frac{s_i}{(x-i)^k}, where k is a positive natural number and s is a real sequence with finitely many non-zero elements (a Laurent polynomial) such that s \mid (s \uparrow 2) (read: \exists h(z)\in\mathbb{C}[z,z^{-1}]\colon h(z)\cdot s(z) = s(z^2)). The Laurent polynomial 2^{k-1}\cdot h is the associated refinement mask.




See also

* Subdivision scheme