The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector is a scale-dependent description of its imaging contrast. Its magnitude is the image contrast of the harmonic intensity pattern, 1 + \cos(2\pi \nu \cdot x), as a function of the spatial frequency, \nu, while its complex argument indicates a phase shift in the periodic pattern. The optical transfer function is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next element in the optical transmission chain. Formally, the optical transfer function is defined as the Fourier transform of the point spread function (PSF), that is, the impulse response of the optics, the image of a point source. As a Fourier transform, the OTF is generally complex-valued; however, it is real-valued in the common case of a PSF that is symmetric about its center. In practice, the imaging contrast, as given by the magnitude or modulus of the optical transfer function, is of primary importance. This derived function is commonly referred to as the modulation transfer function (MTF).

The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter, or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the large spoke densities near the center of the spoke target is relatively sharp.


Definition and related concepts

Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively.

Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic. On the other hand, when the pattern translation is also important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:

:\mathrm{OTF}(\nu) = \mathrm{MTF}(\nu)\, e^{i\, \mathrm{PhTF}(\nu)}

where

:\mathrm{MTF}(\nu) = \left\vert \mathrm{OTF}(\nu) \right\vert,
:\mathrm{PhTF}(\nu) = \mathrm{arg}(\mathrm{OTF}(\nu)),

and \mathrm{arg}(\cdot) represents the complex argument function, while \nu is the spatial frequency of the periodic pattern. In general, \nu is a vector with a spatial frequency for each dimension, i.e. it also indicates the direction of the periodic pattern.

The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional, section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.

True to the definition of transfer function, \mathrm{OTF}(0) = \mathrm{MTF}(0) should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is of most interest. It is thus common practice to normalize the optical transfer function to the detected intensity, hence \mathrm{OTF}(0) \equiv 1.

Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.

Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF).
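These definitions can be evaluated numerically from a sampled point-spread function using a discrete Fourier transform. The following minimal sketch in Python/NumPy illustrates the procedure; the Gaussian test PSF, the grid size and the 1 μm sampling pitch are illustrative assumptions rather than part of the original text.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative PSF: a sampled 2-D Gaussian spot (a stand-in for a measured PSF).
n = 256                    # samples per axis (assumed)
dx = 1e-3                  # sample spacing in mm, i.e. 1 micrometer (assumed)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * (2 * dx)**2))

# OTF = Fourier transform of the PSF, normalized so that OTF(0) = 1.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
otf /= otf[n // 2, n // 2]            # normalization to the detected intensity

mtf = np.abs(otf)                     # modulation transfer function
phtf = np.angle(otf)                  # phase transfer function

# Spatial-frequency axis in cycles/mm corresponding to the sampling above.
nu = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
print(mtf[n // 2, n // 2])            # 1.0 at zero spatial frequency
</syntaxhighlight>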


Examples


Ideal lens system

A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used at the visible wavelength of 500 nm would have the optical transfer function depicted in the right hand figure. It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter; in other words, the optical resolution of the image projection is 1/500 of a millimeter, or 2 micrometers. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved disc.

Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between units is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200.

The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, and more in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity.
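The numbers quoted above can be checked with the standard incoherent diffraction-limited cutoff frequency, 1/(wavelength × f-number); the short sketch below reproduces the 500 cycles per millimeter cutoff and the 1 micrometer pixel pitch for the f/4, 500 nm example. The variable names are illustrative.

<syntaxhighlight lang="python">
# Worked check of the f/4, 500 nm example, assuming the incoherent
# diffraction-limited cutoff nu_c = 1 / (wavelength * f-number).
wavelength_mm = 500e-6      # 500 nm expressed in millimeters
f_number = 4.0

nu_cutoff = 1.0 / (wavelength_mm * f_number)   # cycles per millimeter
period_um = 1e3 / nu_cutoff                    # smallest transmitted period, micrometers
pixel_pitch_um = period_um / 2                 # Nyquist sampling: two pixels per period

print(nu_cutoff)        # 500.0 cycles/mm
print(period_um)        # 2.0 micrometers
print(pixel_pitch_um)   # 1.0 micrometer
</syntaxhighlight>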


Imperfect lens system

An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure. As in the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and ''vice versa''; this is referred to as contrast inversion, directly related to the sign reversal in the real part of the optical transfer function, and it manifests itself as a shift by half a period for some periodic patterns.

While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.


Optical system with a non-rotationally symmetric aberration

Optical systems, and in particular optical aberrations, are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer functions or modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotationally-symmetric aberration.

Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotationally-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates phase. It can be seen that, while for the rotationally symmetric aberrations the phase is either 0 or π and thus the transfer function is real-valued, for the non-rotationally-symmetric aberration the transfer function has an imaginary component and the phase varies continuously.


Practical example – high-definition video system

While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.

Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2 = 960 line pairs per picture width, or 960 cycles per picture width (definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed, the former having full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.


The three-dimensional optical transfer function

Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured; e.g. a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three-dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions.

A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the ''z''-axis, for ''x'' = ''y'' = 0, the transfer function is zero everywhere except at the origin. This ''missing cone'' is a well-known problem that prevents optical sectioning using a wide-field microscope.

The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the ''z''-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the ''z''-axis for ''z'' ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at ''x'' = ''y'' = 0. This is only possible because the 3D optical transfer function diverges at the origin ''x'' = ''y'' = ''z'' = 0. The function values along the ''z''-axis of the 3D optical transfer function correspond to the Dirac delta function.
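The relationship between the 3D and 2D transfer functions can be illustrated numerically: the 3D OTF is the three-dimensional Fourier transform of the 3D PSF, and summing it along the axial frequency axis yields the in-focus 2D OTF. The anisotropic Gaussian stack below is merely a stand-in for a measured PSF; its dimensions and widths are assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative 3-D PSF: an anisotropic Gaussian spot, standing in for a PSF stack
# recorded by stepping the detector (or sample) axially.
n = 64
z, y, x = np.mgrid[-n//2:n//2, -n//2:n//2, -n//2:n//2]
psf3d = np.exp(-(x**2 + y**2) / (2 * 2.0**2) - z**2 / (2 * 6.0**2))

# 3-D OTF: three-dimensional Fourier transform of the 3-D PSF, normalized to 1 at the origin.
otf3d = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(psf3d)))
otf3d /= otf3d[n//2, n//2, n//2]

# 2-D OTF at the focal plane: integration (here a sum) of the 3-D OTF along the z axis.
otf2d = otf3d.sum(axis=0)
otf2d /= otf2d[n//2, n//2]

print(abs(otf2d[n//2, n//2]))   # 1.0: the maximum lies at zero lateral frequency
</syntaxhighlight>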


Calculation

Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems, such as those in the examples here, are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches:
* as the Fourier transform of the incoherent point spread function, or
* as the auto-correlation of the pupil function of the optical system.
Mathematically both approaches are equivalent, as illustrated by the sketch below. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
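A minimal numerical sketch of the two routes for an ideal system with a circular pupil is given below; the grid size and pupil radius are illustrative assumptions, and the agreement of the two results follows from the convolution theorem.

<syntaxhighlight lang="python">
import numpy as np

n = 512
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
pupil = ((x**2 + y**2) <= 64**2).astype(float)   # circular aperture, radius 64 samples (assumed)

# Route 1: PSF = |inverse Fourier transform of the pupil|^2, then OTF = Fourier transform of the PSF.
psf = np.abs(np.fft.ifft2(pupil))**2
otf1 = np.fft.fftshift(np.fft.fft2(psf))
otf1 /= otf1[n//2, n//2]                         # normalize so that OTF(0) = 1

# Route 2: OTF = auto-correlation of the pupil function, computed via the convolution theorem.
otf2 = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2))
otf2 /= otf2[n//2, n//2]

print(np.allclose(otf1, otf2))                   # True: both approaches agree
</syntaxhighlight>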


Example


Ideal lens system with circular aperture


Auto-correlation of the pupil function

Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transformed pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function.

The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of 2\nu, where \nu is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for \nu = 0, so the resulting area should be divided by \pi.

The intersecting area can be calculated as the sum of the areas of two identical circular segments: \theta/2 - \sin(\theta)/2, where \theta is the circle segment angle. By substituting |\nu| = \cos(\theta/2), and using the equalities \sin(\theta)/2 = \sin(\theta/2)\cos(\theta/2) and 1 = \nu^2 + \sin(\arccos(|\nu|))^2, the equation for the area can be rewritten as \arccos(|\nu|) - |\nu|\sqrt{1 - \nu^2}. Hence the normalized optical transfer function is given by:

:\operatorname{OTF}(\nu) = \frac{2}{\pi} \left( \arccos(|\nu|) - |\nu| \sqrt{1 - \nu^2} \right).
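The closed-form expression can be evaluated directly; the conversion to physical spatial frequencies below assumes the incoherent cutoff 1/(wavelength × f-number), so that the f/4, 500 nm example again gives a 500 cycles per millimeter cutoff.

<syntaxhighlight lang="python">
import numpy as np

def otf_circular(nu):
    """Normalized OTF of an aberration-free circular pupil, for |nu| in [0, 1]."""
    nu = np.clip(np.abs(nu), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

nu = np.linspace(0.0, 1.0, 11)                 # normalized spatial frequency
cutoff_cpm = 1.0 / (500e-6 * 4.0)              # 500 cycles/mm for the f/4, 500 nm example
for v, m in zip(nu * cutoff_cpm, otf_circular(nu)):
    print(f"{v:6.0f} cycles/mm   MTF = {m:.3f}")
# The contrast is 1 at zero frequency and falls to zero at the 500 cycles/mm cutoff.
</syntaxhighlight>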


Numerical evaluation

The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is graphed against the spatial frequency data. In this case, a sixth order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The spatial frequency at which the MTF falls to 50% (the 50% cutoff frequency) is then determined. Thus, the approximate position of best focus of the unit under test is determined from this data.

In general, the Fourier transform of the line spread function (LSF) cannot be determined analytically from the following equations:

:\operatorname{MTF} = \mathcal{F}\left[\operatorname{LSF}\right] \qquad \qquad \mathcal{F}[f(x)] = \int f(x)\, e^{-i 2\pi x s}\, dx

Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform \mathcal{D}:

:\operatorname{MTF} = \mathcal{D}[\operatorname{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n\, e^{-i \frac{2\pi k n}{N}} \qquad k \in [0, N-1]

where
* Y_k\, = the k^\text{th} value of the MTF
* N\, = number of data points
* n\, = index
* k\, = index of the k^\text{th} term of the DFT
* y_n\, = the value of the LSF at the n^\text{th} pixel position
* i = \sqrt{-1}

Using Euler's formula, e^{\pm i a} = \cos(a) \pm i \sin(a), this can be written as:

:\operatorname{MTF} = \mathcal{D}[\operatorname{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n \left[ \cos\left(\frac{2\pi k n}{N}\right) - i \sin\left(\frac{2\pi k n}{N}\right) \right] \qquad k \in [0, N-1]

The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
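The procedure above maps directly onto a few lines of Python/NumPy. In the sketch below, the Gaussian line spread function and the 1 μm sampling pitch are synthetic stand-ins for measured data.

<syntaxhighlight lang="python">
import numpy as np

N = 256
pitch_mm = 1e-3                                   # assumed 1 micrometer pixel pitch
x = (np.arange(N) - N // 2) * pitch_mm
lsf = np.exp(-x**2 / (2 * (2 * pitch_mm)**2))     # synthetic line spread function

mtf = np.abs(np.fft.rfft(lsf))                    # magnitude of the DFT of the LSF
mtf /= mtf[0]                                     # normalize so that MTF(0) = 1
freq = np.fft.rfftfreq(N, d=pitch_mm)             # spatial frequency in cycles/mm

# Fit a sixth-order polynomial to show the trend, then locate the 50% cutoff frequency.
nu_norm = freq / freq.max()                       # normalized abscissa for a well-conditioned fit
trend = np.polyval(np.polyfit(nu_norm, mtf, 6), nu_norm)
cutoff_50 = freq[np.argmax(trend < 0.5)]          # first frequency where the fit drops below 0.5
print(f"50% cutoff at roughly {cutoff_50:.0f} cycles/mm")
</syntaxhighlight>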


The vectorial transfer function

At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves into three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a ''vectorial'' point spread function. Similarly, a ''vectorial'' optical transfer function can be determined.


Measurement

The optical transfer function is not only useful for the design of optical systems; it is also valuable for characterizing manufactured systems.


Starting from the point spread function

The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. Such a point source can, for example, be a bright light behind a screen with a pinhole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile, as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.


Using extended test objects for spatially invariant optics

When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function, such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image divided by that of the extended object. Typically either a line or a black-white edge is used.


The line-spread function

The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles. The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target, or it can be derived from the edge spread function, discussed in the next subsection.


Edge-spread function

The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric.

As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function:

:\operatorname{ESF} = \frac{X - \mu}{\sigma} \qquad \qquad \sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \mu)^2}{n}} \qquad \qquad \mu = \frac{\sum_{i=1}^{n} x_i}{n}

where
* \operatorname{ESF}\, = the output array of normalized pixel intensity data
* X\, = the input array of pixel intensity data
* x_i\, = the ''i''th element of X\,
* \mu\, = the average value of the pixel intensity data
* \sigma\, = the standard deviation of the pixel intensity data
* n\, = the number of pixels used in the average

The line spread function is identical to the first derivative of the edge spread function (Mazzetta & Scopatz, 2007), which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows:

:\operatorname{LSF} = \frac{d}{dx} \operatorname{ESF}(x)

Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference:

:\operatorname{LSF} = \frac{d}{dx} \operatorname{ESF}(x) \approx \frac{\Delta \operatorname{ESF}(x)}{\Delta x}

:\operatorname{LSF}_i \approx \frac{\operatorname{ESF}_{i+1} - \operatorname{ESF}_i}{x_{i+1} - x_i}

where:
* i\, = the index i = 1, 2, \dots, n-1
* x_i\, = the position of the i^\text{th} pixel
* \operatorname{ESF}_i\, = the ESF value at the i^\text{th} pixel
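The knife-edge processing chain described above, from normalized ESF to LSF to MTF, is sketched below. The blurred-edge profile, pixel pitch and blur width are synthetic stand-ins for the averaged rows of a measured box area.

<syntaxhighlight lang="python">
import numpy as np

n = 256
pitch_mm = 1e-3                                     # assumed 1 micrometer pixel pitch
x = (np.arange(n) - n // 2) * pitch_mm
edge = 0.5 * (1 + np.tanh(x / (2 * pitch_mm)))      # synthetic blurred black-to-white edge

# Normalize the averaged edge profile (zero mean, unit standard deviation), as in the ESF formula.
esf = (edge - edge.mean()) / edge.std()

# Line spread function by finite differences of the ESF.
lsf = np.diff(esf) / np.diff(x)

# MTF as the DFT magnitude of the LSF, normalized to 1 at zero frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq = np.fft.rfftfreq(lsf.size, d=pitch_mm)        # cycles per millimeter
print(freq[np.argmax(mtf < 0.5)])                   # approximate 50% cutoff frequency
</syntaxhighlight>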


Using a grid of black and white lines

Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines), not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF).
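The factor involved is the standard Fourier-series result that the fundamental of a unit-amplitude square wave has amplitude 4/π ≈ 1.27, which is why square-wave (CTF) readings exceed the corresponding sine-wave (MTF) values. A small numerical check:

<syntaxhighlight lang="python">
import numpy as np

# Numerical check that the fundamental of a +/-1 square wave has amplitude 4/pi.
n = 4096
t = np.arange(n) / n
square = np.sign(np.sin(2 * np.pi * t))             # one period of a unit-amplitude square wave

spectrum = np.fft.rfft(square) / n
fundamental_amplitude = 2 * np.abs(spectrum[1])     # amplitude of the first harmonic
print(fundamental_amplitude, 4 / np.pi)             # both approximately 1.273
</syntaxhighlight>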


Factors affecting MTF in typical camera systems

In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that can be seen appear 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display.


Oversampling and downconversion to maintain the optical transfer function

The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later CCD line scanners, were developed which sampled more pixels than were needed and then downconverted, which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(''x'')/''x'' weighting function, which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop.

Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digital filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in the absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).

Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.

Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f/4 begin to limit resolution. Even professional video cameras mostly use 2/3-inch sensors, prohibiting the use of apertures around f/16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range.
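The effect of aperture diffraction on small sensors can be gauged by comparing the diffraction-limited cutoff, again taken as 1/(wavelength × f-number), with the sensor's Nyquist frequency. In the sketch below, the 3.2 mm active width assumed for a nominal 1/4-inch HD sensor and the 550 nm wavelength are illustrative figures; note that the MTF already falls well below unity before the hard cutoff, so softening sets in at wider apertures than this comparison alone suggests.

<syntaxhighlight lang="python">
# Rough comparison of the diffraction-limited cutoff with the sensor's Nyquist frequency.
wavelength_mm = 550e-6       # assumed mid-visible wavelength
sensor_width_mm = 3.2        # assumed active width of a nominal 1/4-inch HD sensor
pixels_across = 1920

pixel_pitch_mm = sensor_width_mm / pixels_across
nyquist_cpm = 1.0 / (2.0 * pixel_pitch_mm)            # about 300 cycles/mm for this sensor

for f_number in (2.0, 2.8, 4.0, 5.6, 8.0):
    cutoff_cpm = 1.0 / (wavelength_mm * f_number)     # diffraction cutoff in cycles/mm
    verdict = "diffraction-limited" if cutoff_cpm < nyquist_cpm else "sensor-limited"
    print(f"f/{f_number}: cutoff {cutoff_cpm:.0f} vs Nyquist {nyquist_cpm:.0f} cycles/mm -> {verdict}")
</syntaxhighlight>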


Trend to large-format DSLRs and improved MTF potential

There has recently been a shift towards the use of large image format digital single-lens reflex cameras, driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing.


Digital inversion of the OTF

Due to optical effects the contrast may be sub-optimal and approach zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to, e.g., aliasing. It is therefore only effective on good quality recordings with a sufficiently high signal-to-noise ratio.
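A minimal frequency-domain sketch of Wiener deconvolution is given below: the blurred image spectrum is multiplied by conj(OTF) / (|OTF|² + k), where the constant k trades restoration strength against noise amplification. The Gaussian PSF, the synthetic test object, the noise level and the value of k are all illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]

psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))              # OTF normalized so that OTF(0) = 1

scene = (x**2 + y**2 < 64**2).astype(float)           # synthetic sharp-edged test object
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
blurred += 0.01 * rng.standard_normal((n, n))         # additive detection noise

k = 1e-2                                              # regularization (inverse SNR) constant
wiener_filter = np.conj(otf) / (np.abs(otf)**2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener_filter))

# For a sufficiently clean recording the restored error is typically smaller than the
# blurred error; increasing k suppresses noise amplification at the cost of sharpness.
print(np.abs(blurred - scene).mean(), np.abs(restored - scene).mean())
</syntaxhighlight>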


Limitations

In general, the point spread function, the image of a point source, also depends on factors such as the wavelength (color) and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system can be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly, the optical transfer function does not describe the optical system accurately. Inaccuracies can often be mitigated by using a collection of optical transfer functions at well-chosen wavelengths or field positions. However, a more complex characterization may be necessary for some imaging systems such as the light field camera.


See also

* Bokeh
* Gamma correction
* Minimum resolvable contrast
* Minimum resolvable temperature difference
* Optical resolution
* Signal-to-noise ratio
* Signal transfer function
* Strehl ratio
* Transfer function
* Wavefront coding


References

* Mazzetta, J.A.; Scopatz, S.D. (2007). Automated Testing of Ultraviolet, Visible, and Infrared Sensors Using Shared Optics. ''Infrared Imaging Systems: Design Analysis, Modeling, and Testing XVIII, Vol. 6543'', pp. 654313-1 to 654313-14.


External links


"Modulation transfer function"
by Glenn D. Boreman on SPIE Optipedia.
"How to Measure MTF and other Properties of Lenses"
by Optikos Corporation.