How does hyperacuity differ from traditional acuity?
The best example of the distinction between acuity and hyperacuity comes from vision, for example when observing stars in the night sky. The first stage is the optical imaging of the outside world on the retina. Light impinges on the mosaic of receptor sense cells, rods and cones, which covers the retinal surface without gaps or overlap, much as the detecting elements of a digital camera's sensor tile its surface. Acuity, the ability to resolve two nearby points as separate, is limited by the spacing of these receptors, yet observers can localize a feature such as a star's position to a small fraction of that spacing; it is this finer-than-receptor localization that is called hyperacuity.
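To make the distinction concrete, the sketch below (Python; the receptor spacing, blur width and noise level are illustrative assumptions, not measurements) models a one-dimensional row of receptors viewing a point of light through a Gaussian blur. Simply taking the response-weighted centroid of the receptor outputs recovers the point's position to a small fraction of one receptor spacing, which is the kind of feat a hyperacuity task demands; the centroid rule is only a simple stand-in for whatever population readout the visual system actually uses.

<syntaxhighlight lang="python">
import numpy as np

def receptor_responses(source_pos, spacing=1.0, n=15, blur_sd=1.0, noise_sd=0.01, rng=None):
    """Responses of a 1-D row of receptors to a point source (assumed Gaussian blur)."""
    rng = rng or np.random.default_rng(0)
    centers = (np.arange(n) - n // 2) * spacing           # receptor positions
    responses = np.exp(-0.5 * ((centers - source_pos) / blur_sd) ** 2)
    return centers, responses + rng.normal(0.0, noise_sd, n)

def localize(centers, responses):
    """Estimate the source position as the response-weighted centroid."""
    w = np.clip(responses, 0.0, None)
    return float(np.sum(w * centers) / np.sum(w))

# A point source placed 0.2 receptor spacings away from a receptor centre
centers, resp = receptor_responses(source_pos=0.2)
estimate = localize(centers, resp)
print(f"true = 0.200, estimated = {estimate:.3f}")        # error is a fraction of one spacing
</syntaxhighlight>

Resolving two such points as separate, by contrast, remains limited by the blur and the receptor spacing, which is why hyperacuity thresholds can be many times finer than ordinary acuity.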
Analysis of hyperacuity mechanism
Details of the neural apparatus for achieving hyperacuity still await discovery. That the hyperacuity apparatus involves signals from a range of individual receptor cells, usually in more than one location of the stimulus space, has implications for performance in these tasks. Low contrast, close proximity of neighboring stimuli (crowding), and temporal asynchrony of pattern components are examples of factors that reduce performance. Of some conceptual interest are changes with age and susceptibility to perceptual learning, which can help in understanding the underlying neural channeling. Two basic algorithms have been proposed to explain mammalian visual hyperacuity: a spatial one, based on population firing rates, and a temporal one, based on temporal delays in the response to miniature eye movements. Neither has so far gained decisive empirical support, and the plausibility of the former has been questioned on account of the discrete nature of neural firing.<ref>Rucci, M., Ahissar, E., and Burr, D. (2018). Temporal coding of visual space. ''Trends in Cognitive Sciences'' 22, 883–895. doi:10.1016/j.tics.2018.07.009</ref>

The optics of the human eye are extremely simple, the main imaging component being a single-element lens whose strength can be changed under muscular control. There is only limited scope for correcting the many aberrations that are normally corrected in good-quality instrumental optical systems.<ref>''Computer Vision: A Unified, Biologically-Inspired Approach'' by Ian Overington, pages 7–8 and 31–34. Elsevier North Holland, 1992.</ref> Such a simple lens must inevitably have a significant amount of spherical aberration, which produces secondary lobes in the spread function. However, it has been found by experiment that light entering the pupil off-axis is less efficient in creating an image (the Stiles–Crawford effect), which substantially reduces these unwanted side lobes. Also, the effects of diffraction limits can, with care, partially compensate for the aberrations. The retinal receptors are physically situated behind a neural layer carrying the post-retinal processing elements, and light cannot pass through this layer undistorted. In fact, measurements of the Modulation Transfer Function (MTF) suggest that the degradation due to diffusion through that neural layer is of a similar order to that due to the optics. Through the interplay of these different components, the overall optical quality, although poor compared with photographic optics, remains tolerably constant over a considerable range of pupil diameters and light levels.

When presented with colored stimuli the optical imperfections are particularly great. The optics have residual uncorrected chromatic aberration of nearly 2 dioptres from extreme red to extreme blue/violet, mainly in the green to blue/violet region. Ophthalmologists have for many decades used this large change of focus through the spectrum when prescribing corrective spectacles, which allows such corrections to be as simple as the eye's own lens. In addition, this large chromatic aberration has been turned to advantage within the make-up of the eye itself.
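To give a feel for the size of that chromatic defocus, the snippet below applies the standard geometric-optics approximation that the angular diameter of a defocus blur is roughly the pupil diameter multiplied by the defocus in dioptres; the 3 mm pupil is an assumed, typical value rather than a figure from the text.

<syntaxhighlight lang="python">
import math

def defocus_blur_arcmin(pupil_diameter_m: float, defocus_dioptres: float) -> float:
    """Geometric-optics estimate of the angular blur diameter caused by defocus."""
    blur_radians = pupil_diameter_m * defocus_dioptres
    return math.degrees(blur_radians) * 60.0

# ~2 dioptres of chromatic defocus seen through an assumed 3 mm pupil
print(f"{defocus_blur_arcmin(0.003, 2.0):.0f} arcmin")   # prints about 21 arcmin of blur
</syntaxhighlight>

A blur of this order is far coarser than the grain of the receptor mosaic, which is one reason why, as described next, the short-wavelength channel contributes only a low-resolution colour wash.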
Instead of having the three primary colors (red, green and blue) to manipulate, nature has used this gross chromatic shift to provide a cortical visual function based on three sets of color opponency rather than three basic primary colors.<ref>''Computer Vision: A Unified, Biologically-Inspired Approach'' by Ian Overington, pages 8–15 and 286–288. Elsevier North Holland, 1992.</ref> These are red/green, yellow/blue and black/white, the black/white opponency being synonymous with brightness. Then, by using just one very high resolution opponency between the red and green primaries, nature uses a mean of these two colors (i.e. yellow), together with very low resolution blue, to create a background color-wash capability. In turn, by applying the hyperacuity capability to the low-resolution opponency, this can also serve as a source of perception of 3D depth.

The human eye has a roughly hexagonal matrix of photodetectors.<ref>''Computer Vision: A Unified, Biologically-Inspired Approach'' by Ian Overington, pages 34–36. Elsevier North Holland, 1992.</ref> There is now considerable evidence that such a matrix layout provides optimum efficiency of information transfer. A number of other workers have considered using hexagonal matrices, but they then tend to adopt a mathematical approach with axes at 60 degrees differential orientation, which in turn requires the use of complex numbers. Overington and his team sought, and found, a way to approximate a hexagonal matrix while retaining a conventional Cartesian layout for processing.

Although there are many and varied spatial interactions evident in the early neural networks of the human visual system, only a few are of great importance in high-fidelity information sensing; the rest are predominantly associated with processes such as local adaptation. The most important interactions have been found to be of very local extent, but it is the subtleties of how these interactions are used that seem most important. For hexagonal matrices, a single ring of six receptors surrounding an addressed pixel is the simplest symmetrical layout. The general finding from primate receptive-field studies is that any such local group yields no output for a uniform input illumination. This is essentially similar to one of the classical Laplacian receptive fields for square arrays, the one with weightings of -1 on each side and -0.5 on each corner, the only difference being an aspect ratio of 8:7.07 (approximately 8:7 to within 1%). Very useful further evidence of the processes going on in this area comes from the electron-microscopy studies of Kolb.<ref>Kolb, H. (1970). Organization of the outer plexiform layer of the primate retina: electron microscopy of Golgi-impregnated cells. ''Philosophical Transactions of the Royal Society of London B'' 258, 261.</ref> These clearly show the neural structures which lead to difference signals being transmitted onwards. If one combines a point spread function of Gaussian form, with a standard deviation of 1.3 'pixels', with a single-ring Laplacian-type operator, the result is a function with very similar properties to the difference-of-Gaussians (DOG) function discussed by Marr.<ref>''Vision'' by David Marr. Freeman, San Francisco, 1988.</ref> It is normally assumed, both in computer image processing and in visual science, that such a local excitatory/inhibitory process is effectively a second differencing process.
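That last point can be illustrated directly. The sketch below (Python with NumPy and SciPy) takes the square-array Laplacian with weights of -1 on each side and -0.5 on each corner, blurs it with a Gaussian point spread of standard deviation 1.3 pixels as stated above, and compares the result with a difference of Gaussians; the particular DOG standard deviations are illustrative assumptions, not values from Overington or Marr.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter

# Classical square-array Laplacian: -1 on the sides, -0.5 on the corners, centre balances to zero
laplacian = np.array([[-0.5, -1.0, -0.5],
                      [-1.0,  6.0, -1.0],
                      [-0.5, -1.0, -0.5]])

# Embed the operator in a larger field and blur it with the assumed optical point spread
# (Gaussian, SD = 1.3 pixels); the blurred Laplacian is the effective receptive field
field = np.zeros((21, 21))
field[9:12, 9:12] = laplacian
receptive_field = gaussian_filter(field, sigma=1.3)

# Difference of Gaussians for comparison (centre/surround SDs are illustrative choices)
def dog(shape, sd_centre, sd_surround):
    impulse = np.zeros(shape)
    impulse[shape[0] // 2, shape[1] // 2] = 1.0
    return gaussian_filter(impulse, sd_centre) - gaussian_filter(impulse, sd_surround)

d = dog(receptive_field.shape, sd_centre=1.4, sd_surround=2.1)

# Normalised correlation between the two profiles; a value near 1 means "very similar"
a = receptive_field / np.linalg.norm(receptive_field)
b = d / np.linalg.norm(d)
print(f"similarity = {np.sum(a * b):.3f}")
</syntaxhighlight>

The printed similarity is simply the normalised correlation between the blurred Laplacian and the DOG profile; values close to 1 indicate that the combined operator behaves very like a DOG.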
However, there is strong psychophysical evidence that, for human vision, it is first differences which control visual performance. The positive and negative parts of all outputs from Laplacian-like neurones must be separated before being sent onwards to the cortex, since negative signals cannot be transmitted. This means that each neurone of this type must be considered to be a set of six dipoles, such that each surround inhibition can only cancel its own portion of the central stimulation. Such a separation of positive and negative components is entirely compatible with retinal physiology and is one possible function for the known pair of midget bipolar channels for each receptor.<ref>''Computer Vision: A Unified, Biologically-Inspired Approach'' by Ian Overington, pages 45–46. Elsevier North Holland, 1992.</ref>

The basic evidence for orientation sensing in human vision is that it appears to be carried out (in Area 17 of the striate cortex) by banks of neurones tuned to fairly widely spaced orientations.<ref>Hubel, D.H., Wiesel, T.N., and LeVay, S. (1977). Plasticity of ocular dominance columns in monkey striate cortex. ''Philosophical Transactions of the Royal Society B'' 278, 377.</ref> The neurones as measured have characteristically elliptical receptive fields.<ref>''Computer Vision: A Unified, Biologically-Inspired Approach'' by Ian Overington, pages 46–49. Elsevier North Holland, 1992.</ref> Both the actual interval between the orientations and the exact form and aspect ratio of the elliptical fields remain open to question, and the measured fields must in any case have been compounded with the midget receptive fields at the retina: for probe measurements of 'single neurone' performance, the receptive field measured includes the effects of all stages of optical and neural processing that have gone before. For orientation-specific units operating on a hexagonal matrix, it makes most sense for their primary and secondary axes to occur every 30 degrees of orientation, an angular spacing which agrees with what has been deduced to be desirable.
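A minimal sketch of such an arrangement, under assumed parameters, is given below: six elliptical, orientation-selective filters with primary axes spaced 30 degrees apart, each built as a first difference of an elongated Gaussian (in keeping with the emphasis on first differences above). The 2:1 aspect ratio and filter size are illustrative choices only, since, as noted, the true form and aspect ratio of the cortical receptive fields remain open to question.

<syntaxhighlight lang="python">
import numpy as np

def oriented_filter(theta_deg, size=15, sd_long=3.0, sd_short=1.5):
    """Elliptical, orientation-selective filter: first difference of an
    anisotropic Gaussian, taken perpendicular to its long axis (shape assumed)."""
    theta = np.radians(theta_deg)
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    # Rotate coordinates so 'u' runs along the filter's long axis
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-0.5 * ((u / sd_long) ** 2 + (v / sd_short) ** 2))
    d = -(v / sd_short ** 2) * g                 # first difference perpendicular to the long axis
    return d / np.sqrt(np.sum(d ** 2))

# Bank of six units with primary axes every 30 degrees
bank = {theta: oriented_filter(theta) for theta in range(0, 180, 30)}

# Test image: a vertical luminance edge
image = np.zeros((15, 15))
image[:, 8:] = 1.0

# The unit whose long axis lies along the edge (90 degrees here) responds most strongly
responses = {theta: abs(np.sum(f * image)) for theta, f in bank.items()}
best = max(responses, key=responses.get)
print(f"strongest response at {best} degrees")
</syntaxhighlight>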
Hyperacuity in various sense modalities
The distinction between resolving power, or acuity (literally sharpness), which depends on the spacing of the individual receptors through which the outside world is sampled, and the ability to identify individual locations in the sensory space, is universal among modalities. There are many other examples where the organism's performance substantially surpasses the spacing of the relevant receptor cell population. The normal human has only three kinds of color receptors in the retina, yet by comparing the relative outputs of these three receptor types an observer can distinguish far more hues than the number of receptor types alone would suggest.
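The colour case can be sketched in the same spirit. In the toy model below the three receptor classes are given broad, overlapping Gaussian sensitivity curves; the peak wavelengths and bandwidth are rough illustrative assumptions, not measured cone spectra. Even so, comparing the relative outputs of the three classes identifies a test wavelength far more finely than a three-way classification would allow, which is the colour analogue of spatial hyperacuity.

<syntaxhighlight lang="python">
import numpy as np

# Toy spectral sensitivities: three broadly tuned receptor classes (assumed Gaussians)
PEAKS = np.array([440.0, 540.0, 570.0])   # nm, rough short/medium/long-wavelength peaks
BANDWIDTH = 60.0                          # nm, assumed common bandwidth

def receptor_outputs(wavelength_nm):
    """Relative response of the three receptor classes to a monochromatic light."""
    return np.exp(-0.5 * ((wavelength_nm - PEAKS) / BANDWIDTH) ** 2)

def estimate_wavelength(outputs, grid=np.arange(400.0, 700.0, 0.1)):
    """Find the wavelength whose normalised response pattern best matches 'outputs'."""
    target = outputs / np.sum(outputs)
    patterns = np.array([receptor_outputs(w) for w in grid])
    patterns /= patterns.sum(axis=1, keepdims=True)
    errors = np.sum((patterns - target) ** 2, axis=1)
    return float(grid[np.argmin(errors)])

# Two lights only 2 nm apart produce distinguishably different response patterns
for true_wavelength in (550.0, 552.0):
    est = estimate_wavelength(receptor_outputs(true_wavelength))
    print(f"true {true_wavelength:.1f} nm -> estimated {est:.1f} nm")
</syntaxhighlight>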
Clinical applications
In clinical vision tests, hyperacuity has a special place because its processing spans the interfaces of the eye's optics, retinal function, activation of the primary visual cortex and the perceptual apparatus. In particular, the determination of normal hyperacuity performance provides a reference against which deficits arising at any of these stages can be assessed.
References
{{reflist}}