The OPTACON (OPtical to TActile CONverter) is an electromechanical
device that enables blind people to read printed material that has not
been transcribed into Braille.
The main electronics unit contains a "tactile array" onto which the
blind person places his or her index finger. The array consists of 144
small vibrating pins, arranged in 24 rows and 6 columns, that form a
tactile image of the print viewed by a handheld camera.
In 1962, during a sabbatical year in Switzerland, Linvill visited an
IBM laboratory in Germany, where he observed a high-speed printer that
used a set of small pins, like hammers, to print letters onto strips
of paper. He thought, "If you could feel the hammers with your
fingertip, you could surely recognize the image." As Linvill later
recalled: "So on our return to Zurich, I told my wife and son and
daughter, Candy, who was blind: 'Guys, I've got the most magnificent
idea. We'll make something that will let Candy read ordinary printed
material.'" Although his family laughed at the notion ("Oh, that'll
never work!"), the idea for the Optacon was born.
Upon returning to Stanford, Linvill, together with graduate students
G.J. Alonzo and John Hill, developed the concept further with the
support of the Office of Naval Research. This early work identified
piezoelectric bimorphs as the key transducer technology:

* The high power efficiency of the piezoelectric bimorphs made a battery-powered reading machine possible.
* The small size and weight of the bimorphs were also essential for portability.
* Later psychophysical experiments found that vibration near the resonance frequency of conveniently sized bimorphs was optimal for the sense of touch.
In 1964 Linvill applied for a patent, and U.S. Patent 3,229,387 was granted in January 1966.
PREVIOUS HISTORY OF BLIND READING MACHINE DEVELOPMENT
Remarkably, a reading machine for the blind, called the optophone, was
built as early as 1913 by Fournier d'Albe in England. It used selenium
photosensors to detect black print and convert it into an audible
output that could be interpreted by a blind person. Only a few units
were built, and reading was exceedingly slow.
In 1957 the U.S. Veterans Administration's Prosthetic and Sensory Aids Service (PSAS), under Dr. Eugene Murphy, began funding the development of a reading machine for the blind. The principal investigator on this project was Hans Mauch, a German scientist brought to the U.S. after World War II, during which he had worked for the German Air Ministry as part of the V-1 missile development team.
Mauch worked on reading machines having an "optophone-like" output, a "speech-like" sound output, and a synthetic speech output. The only one of these that was competitive with the Optacon development was the Stereotoner, essentially an improved optophone. In the Stereotoner design, the user moved a vertical array of photosensors across a line of text. Each photosensor sent its signal to an audio oscillator set to a different frequency, with the top photosensor driving the highest frequency and the bottom photosensor the lowest. The user heard tones and chords from which the letters could be identified.
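The Stereotoner's column-to-chord mapping can be sketched in a few lines of code. The frequencies, sample rate, and scaling below are illustrative assumptions, not the actual Stereotoner parameters:

```python
import math

# Illustrative oscillator frequencies (assumed, not the Stereotoner's real
# values): 12 tones, geometrically spaced, top sensor -> highest pitch.
FREQS = [800.0 * 2.0 ** (-i / 4.0) for i in range(12)]

def column_to_chord(column, sample_rate=8000, duration=0.01):
    """Mix one sine tone per dark pixel in a 12-sensor vertical column.

    `column` is a top-to-bottom list of 12 values, truthy where the sensor
    sees black print. Returns `duration` seconds of mono audio samples.
    """
    active = [f for f, dark in zip(FREQS, column) if dark]
    n = int(sample_rate * duration)
    if not active:
        return [0.0] * n  # white paper: silence
    return [
        sum(math.sin(2.0 * math.pi * f * t / sample_rate) for f in active) / len(active)
        for t in range(n)
    ]
```

Scanning the camera across a letter would then produce a time-varying chord, one call like this per photosensor column.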
Initially Linvill was unaware of this earlier reading machine work.
FUNDING FOR OPTACON RESEARCH AND DEVELOPMENT
After Linvill and Bliss decided to join forces to work on Linvill's
vision of a reading machine, it became apparent that they needed to
obtain funding for this objective, rather than for the objectives of
their existing Department of Defense and other agency sponsors.
Armed with this result, Bliss and Linvill made an appointment to
visit Dr. Murphy at the Veterans Administration, but came away without
funding.
As it turned out, this rejection was fortunate. The Office of
Education was directed by a colleague of Linvill’s from when he
worked at Bell Laboratories. Development of a reading aid for the
blind was very relevant to their mission since providing instructional
material to blind mainstreamed students was an important problem.
Linvill presented the reading machine project to the Office of Education, which agreed to provide substantial funding.
This higher level of funding was necessary to develop the custom
integrated circuits that enabled the Optacon's small size, which was
critical to its success.
DEVELOPMENT OF THE OPTACON
With funding established, Bliss joined the Stanford faculty half-time, spending the other half at SRI. At SRI, tactile reading experiments were conducted to maximize the reading rates achievable with the Optacon, and the bimorph tactile array and the camera optics were developed. At Stanford, custom integrated circuits were developed, including the silicon retina and the bimorph drivers, which required a higher voltage than was normal for solid-state circuits at that time.
The first technical challenge toward developing the reading machine was how to build a "tactile screen" that could create a dynamic tactile image which was perceivable by the user and that had a refresh rate fast enough for useful reading rates. Linvill's initial work with graduate students Alonzo and Hill indicated that a piezoelectric bimorph could be suitable as the transducer to convert an electrical signal into a mechanical motion. The advantages of bimorphs were efficient transduction of electrical to mechanical energy (important for battery operation), small size, fast response, and relatively low cost.
Alonzo determined that at vibration frequencies around 300 Hz, the amplitude needed for detection was much less than for frequencies around 60 Hz. Moreover, for reading rates of 100 words per minute, vibration rates of at least 200 Hz were needed. Linvill calculated the length, width, and thickness of a bimorph reed necessary for a resonance frequency of 200 Hz that could produce enough mechanical energy to stimulate a fingertip above the threshold of the sense of touch.
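A calculation of this kind can be reproduced with the standard first-bending-mode formula for a rectangular cantilever reed. The dimensions and material constants below are typical PZT-like values assumed for illustration, not the ones Linvill actually used:

```python
import math

def cantilever_resonance_hz(length_m, thickness_m, youngs_modulus_pa, density_kg_m3):
    """First bending-mode resonance of a rectangular cantilever reed.

    f1 = (beta1^2 / (2*pi)) * sqrt(E*I / (rho*A)) / L^2 with beta1 = 1.875,
    I = w*t^3/12 and A = w*t, so the width w cancels and f1 scales as t/L^2.
    """
    beta1_sq = 1.875 ** 2
    return (beta1_sq / (2.0 * math.pi)) * thickness_m * math.sqrt(
        youngs_modulus_pa / (12.0 * density_kg_m3)) / length_m ** 2

# A 30 mm long, 0.5 mm thick reed with assumed PZT-like constants
# (E ~ 60 GPa, rho ~ 7500 kg/m^3) resonates in the 200-300 Hz range.
f1 = cantilever_resonance_hz(0.030, 0.5e-3, 6.0e10, 7500.0)
```

The t/L² scaling is what makes the trade-off workable: a thin reed a few centimeters long lands naturally near the 200-300 Hz band the psychophysics called for.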
Based on these calculations, an array of bimorphs was constructed for
reading rate tests with the computer simulation at SRI. The computer
simulation presented tactile images of perfectly formed and aligned
letters in a stream that moved across the bimorph array. Candy Linvill
and other blind subjects learned to read text presented in this
fashion with encouraging results. However, this simulation differed
from the conditions that the user would encounter with an actual
Optacon, in which the user moves a handheld camera across a printed
page.
In considering the transition from text presented by the computer to
the user moving a camera across a printed page, Bliss realized that
there was a critical flaw in the design of the Veterans Administration
Stereotoner. Since English alphabetic characters can be adequately
displayed with 12 vertical pixels, the Stereotoner designer had
assumed that only 12 photocells would be needed in the camera.
However, this assumes perfect alignment between the camera and the
printed text, which is never the case with a handheld camera. When
the alignment is random, the sampling theorem requires roughly twice
as many pixels.
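A toy model illustrates the alignment problem (all dimensions here are made up for illustration). A horizontal stroke thinner than the sensor pitch can fall entirely between sensor centres and vanish, while halving the pitch, i.e. doubling the pixel count, guarantees that any stroke taller than the finer pitch lands on at least one sensor:

```python
def sensor_hits(stroke_top, stroke_height, pitch, offset, n_sensors):
    """Return the indices of sensor centres that land inside a stroke.

    Sensor k sits at vertical position offset + k * pitch.
    """
    return [
        k for k in range(n_sensors)
        if stroke_top <= offset + k * pitch < stroke_top + stroke_height
    ]

# A stroke 0.6 units tall starting at height 3.05 (made-up numbers).
# 12 sensors at pitch 1.0 miss it entirely at this alignment...
assert sensor_hits(3.05, 0.6, 1.0, 0.0, 12) == []
# ...but 24 sensors at pitch 0.5 always catch it, because the stroke is
# taller than the finer pitch, so some sensor centre must fall inside it.
assert sensor_hits(3.05, 0.6, 0.5, 0.0, 24) != []
```

The same argument applies at every random offset, which is why a handheld camera needs 24 rather than 12 vertical pixels for print drawn on a 12-pixel grid.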
When a single column of 24 pixels is scanned across a line of text,
all of the information is acquired. However, with the sense of touch,
people are capable of perceiving two dimensional images. Bliss
wondered if the reading rate would be higher if more than one column
of 24 pixels were used, and if so, how many columns would be
appropriate? Experiments with the computer simulation determined that
the reading rate increased dramatically up to 6 columns, a window
width of about one letter space, which was also about the maximum
number of columns that could be placed on one finger. Jon Taenzer, one
of Bliss’ Stanford graduate students, ran visual reading experiments
on the same computer simulation and determined that for visual
reading, reading rates continued to increase up to a window width of
about 6 letter spaces. This led to a number of experiments aimed at
increasing the tactile reading rate by increasing the
number of columns in the tactile screen so more than one letter would
be in view at a time. Instead of moving the text across only the index
fingertip, tests were run with a screen wide enough for both the index
finger and the middle finger to be used so two letters could be
simultaneously tactually sensed. In another experiment the moving belt
of text was run down the length of the fingers, rather than across
them. The only approach that showed any promise of increasing the
reading rate was when both index fingers were used, rather than the
index finger and the adjacent middle finger. However, the use of both
index fingers was incompatible with the design concept of using one
hand to control the camera while the other hand sensed the tactile
image.
Other questions had to do with the spacing between the tactile pins in the bimorph array and their frequency of vibration. It was well known from experiments reported in the literature that people could distinguish two points from one with their index finger when the points were about a millimeter apart. However, these earlier experiments had not been done with vibrating pins. What effect would the vibration have, and was there an optimum vibration frequency? These questions were answered by experiments conducted by Charles Rogers, a Stanford graduate student working with Bliss.
While the neurophysiological data suggested that the smallest two-point thresholds would occur at vibration frequencies below 60 Hz, Rogers's experiments showed that the two-point thresholds around 200 Hz were actually smaller. Bliss hosted a conference at SRI, including some leading neurophysiologists and psychophysicists, to try to resolve this discrepancy, but no one had an explanation. From a practical standpoint, Rogers's result was very fortunate, because the higher frequencies were required both for refresh rates fast enough to support reading at up to 100 words per minute and for bimorphs small enough to construct a 24-by-6 array that fit on a fingertip.
The question of whether 144 tactile stimulators on a fingertip could be independently distinguished led to a confrontation at a scientific conference between Bliss and Frank Geldard, a University of Virginia professor. Geldard had written a major book on the human senses and was a leading researcher on using the sense of touch to communicate information. When asked how many tactile stimulators should be used in a tactile display, he maintained that no more than 8 could be independently distinguished, and that these should be on widely separated parts of the body. Bliss's data showing useful reading with 144 stimulators on a fingertip appeared to conflict with Geldard's research. The difference lay in communicating with two-dimensional tactile images versus an 8-point code. Both Bliss and Geldard reported similar reading rates, but in the days before high-accuracy optical character recognition, the Optacon approach was much more practical.
These experiments determined the design parameters for the Optacon's
man-machine interface: a 24-by-6 array of tactile stimulators,
vibrating between 250 and 300 Hz, with the rows spaced at 1 mm and
the columns spaced at 2 mm (see Fig. 2).
In parallel with this human factors research was a pioneering effort to realize this design in a convenient portable unit, which would be critical for its success.
OPTACON INTEGRATED CIRCUIT DEVELOPMENT
In the 1960s, when the Optacon was being developed, integrated circuit technology was in its infancy and no suitable image sensor was commercially available, so a custom phototransistor array, the "silicon retina," had to be designed and fabricated at Stanford.
The successful fabrication of such a silicon retina was a major milestone toward a practical Optacon.
OPTACON ELECTRONICS, OPTICS, AND PACKAGING
As integrated circuit technology progressed, another custom
integrated circuit was developed in the Stanford laboratories. This
integrated circuit contained 12 bimorph drivers and interfaced between
the 5-volt logic circuitry and the 45 volts required to drive the
bimorphs. The incorporation of this circuit and the use of lower-power
components enabled the size to be reduced to 8″ by 6″ by 2″ and
the weight to four pounds. Again, the team of Brugler, Young, Baer,
and Gill was responsible for the design of the electronics, optics,
and packaging.
With a number of operational prototype Optacons available, an effort
was made to get them into daily use by blind people in the community.
The engineers were anxious to know how well the Optacon would perform
in everyday use.
The first issue was how a blind person should be taught to read with an Optacon. Some blind people were unaware of letter shapes, and most were not familiar with the various type fonts. In addition, spelling was usually not a strong point, since the education of blind students had often been in Braille, which has about 180 contractions. Of course, none were familiar with recognizing vibratory tactile images of letters moving across their index finger.
Weil developed lessons to teach recognition of letters presented in
this fashion using both the computer simulation and the Optacon
prototypes. It soon became apparent that while letter recognition
could be taught in a few days, building reading speed was much more
time-consuming. However, there were soon a number of blind people
effectively using an Optacon, including:
* Candy Linvill – John Linvill's daughter, who was a Stanford
undergraduate at this time and one of the first regular Optacon users.
FROM COMMERCIALIZATION TO DISCONTINUANCE
Throughout the 1970s and into the 1980s, the Optacon was manufactured
and sold by Telesensory.
The design decision to reduce the number of image pixels from 144 to
100 to lower cost resulted in the Optacon II.
In the 1990s Telesensory increasingly shifted its emphasis toward the low-vision market and became less devoted to the Optacon. Page scanners with optical character recognition had come to be the tool of choice for blind people wanting access to print. Page scanners were less expensive and had a much shallower learning curve than the Optacon. In addition, blind people could generally read through material more quickly with a page scanner than with an Optacon.
In 1996 Telesensory announced that it would no longer manufacture the
Optacon.
Many blind people continue to use their Optacons to this day.
* L.H. Goldish and H.E. Taylor, "The Optacon: A Valuable Device for Blind Persons," New Outlook for the Blind, American Foundation for the Blind, Feb. 1974, pp. 49-56.
* J.G. Linvill and J.C. Bliss, "A Direct Translation Reading Aid for the Blind," Proceedings of the IEEE, vol. 54, no. 1, Jan. 1966.
* https://www.youtube.com/optaconmovies
* J.C. Bliss, "A Relatively High-Resolution Reading Aid for the Blind," IEEE Transactions on Man-Machine Systems, vol. MMS-10, no. 1, March 1969, pp. 1-9.
* C.H. Rogers, "Choice of Stimulator Frequency for Tactile Arrays," IEEE Transactions on Man-Machine Systems, vol. MMS-11, no. 1, March 1970, pp. 5-11.