Brain Building Kit™
BBK™

Mathematics of the Brain
artificial brain assembly tools
with sample neuron and brain libraries

[Processing-path diagram: Sense → Transduce → Shape → Collect/Coincide → Transport/Project → Discriminate → Predict → Choose → Act]

Introduction to some Paradoxes

Modern cameras make such clear pictures that an average person need not understand how the camera works to make very good photographs. Images made with modern cameras have sharp details, and objects have the expected colors. Designers of cameras work very hard to make cameras that produce excellent everyday pictures. Even the simplest of modern cameras are extremely complicated and subtle. But what happens when you take unusual pictures with one of these cameras?

If the lighting in a room has a reddish or bluish cast, our eyes may note the color, but the colors of objects in the room look right nonetheless. But if we take a photograph in this room, our eyes note that the camera has made a mistake: where we perceive normal colors, the photograph tinges objects reddish or bluish.

If the brightness in a room is very uneven, our eyes may note the unevenness with which objects are lit, but the details of objects look right. But if we take a photograph in this room, our eyes note that the camera has made another mistake. Either the bright areas are washed out, or the dark areas are without detail. High-end cameras have rudimentary balancing of brightness to reduce this error, and specialty cameras have significant on-board processing.

Historical cameras

Why do we think the photograph a camera takes is wrong? What makes our eyes capable of perceiving these mistakes? Why can expert photographers make a bad picture look good? Why don't our eyes make the mistakes that cameras make?

Cameras from over a hundred years ago were more like our eyes than modern cameras. These cameras had technical problems that made photographs appear seriously defective. These defects arose because light bends and forms patterns when it goes through lenses, slits, and other apertures. Our eyes bend light and form it into patterns, yet we perceive the world as if they did not. Many paradoxes are involved.

What a white TV pixel looks like on your retina

If you had perfect human vision and looked at the center of a TV screen from several feet back in a dark room, with the screen dark except for a white dot near its top, the image below left is what would appear on your retina below your center of vision. The image below right is what you would perceive, at the same scale.

[Figures: Chromatic Aberration (left); Perceived (right)]

As you can see, what you perceive is very much smaller than what appears on the retina. How the brain makes this happen has been considered a mystery. My inventions make this happen by imitating the nerves of the eye and visual brain.

A significant portion of the discussion to come will explain features of this retinal image of a white TV pixel, and how it relates to the perceived white pixel in the right hand image.

Look most particularly at the blue disk near the center. This is called an Airy disk. You can also see blue rings centered around the blue disk. These are called Airy rings.

Below the blue Airy disk and ring set are red and green sets. The disks and rings for red and green are so interlaced that it is hard to make them out clearly. They have the same structure as the blue set. Where they overlap, they appear yellow. If you look closely, you will also see that overlaps with the blue rings appear cyan and magenta. Where all three overlap, the image appears white.

If you look closely again, you will also note that the blue Airy disk is smaller than the green one, which is smaller than the red one.

One last thing to note is that the blue disk is very much further away from the green disk than the green is from the red.

How hyperacuity is measured by optometrists

The common measure is the "visual angle" over which photons from the visual scene fall on a particular photoreceptor cone. This angle is 30" (30 seconds of arc). We are able to perceive a pencil at a distance of 1 km (about 5/8 mile). The angle the pencil makes at that distance is about 1" (1 second of arc). So, we are able to perceive something 30 times smaller across than a single cone. Two objects that are 2" to 10" apart can be discriminated from each other. So, we are able to tell things apart that are less than 1/3 of a cone apart.
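
To check these numbers, the small-angle rule θ ≈ width/distance converts an object's width and distance into a visual angle. A minimal Python sketch; the 7 mm pencil width is an assumed value:

    import math

    ARCSEC_PER_RADIAN = 180 / math.pi * 3600   # about 206265

    def visual_angle_arcsec(width_m, distance_m):
        """Small-angle approximation of the angle an object subtends."""
        return width_m / distance_m * ARCSEC_PER_RADIAN

    # A pencil about 7 mm wide (assumed) seen from 1 km away:
    print(visual_angle_arcsec(0.007, 1000.0))   # ~1.4 arcsec, order of 1"
    # A single cone subtends about 30 arcsec, so the pencil is many times
    # narrower than one cone: the hyperacuity described above.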

Given the retinal image of a white TV pixel, where the blue disk is 5' across (5 minutes of arc or 300 seconds of arc), it is not obvious how that image can be converted back to the white TV pixel that we perceive.

We can perceive an object that is smaller than 1/100th the size of its Airy disk, and we can discriminate two objects that are closer than 1/30th the size of their Airy disks.

Astronomy uses two common discrimination measures called the "Rayleigh" limit and the "Sparrow" limit. These limits are measures of how reliably one point source can be discriminated from another equal point source based on the diameters of their Airy disks. Neither the Rayleigh nor Sparrow limits support discrimination better than about the width of the Airy disks.

The "Fourier Transform" improves acuity somewhat achieving discrimination on the order of 0.25 the width of the Airy disk. But this isn't very close to human hyperacuity.

To this list I add my Hyperacuity Limit of approximately 0.01 the width of the Airy disk, and often better.

Here is a scale showing a visual representation of these differences. The black vertical bar to the left is the width of my Hyperacuity Limit in relation to the width of the Airy disk (green) and acuity of three other algorithms (red, yellow, and blue).

There is one notable distinction between my Hyperacuity and other acuity algorithms. The acuity of this algorithm increases with decreasing pupil size while the acuity of all the other algorithms decreases with decreasing pupil size.

The Rayleigh Criterion is given as 1.22λ/d. The Sparrow Criterion is given as 0.95λ/d. The Fourier Transform is able to give between 0.25λ/d and 0.125λ/d.
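
These limits can be put side by side numerically. A minimal Python sketch, assuming 550 nm light and a few representative pupil diameters; the hyperacuity figure is taken as 1/100 of the full Airy disk width (2.44λ/d), per the limit stated above:

    import math

    ARCSEC_PER_RADIAN = 180 / math.pi * 3600

    def angle_arcsec(k, wavelength_m, pupil_m):
        """Angular limit of the form k * lambda / d, in seconds of arc."""
        return k * wavelength_m / pupil_m * ARCSEC_PER_RADIAN

    wavelength = 550e-9                  # mid-spectrum light (assumed value)
    for pupil_mm in (1.0, 3.0, 7.0):
        d = pupil_mm * 1e-3
        airy_width = angle_arcsec(2.44, wavelength, d)   # full disk diameter
        print(f"pupil {pupil_mm:.0f} mm: "
              f"Rayleigh {angle_arcsec(1.22, wavelength, d):.1f} arcsec, "
              f"Sparrow {angle_arcsec(0.95, wavelength, d):.1f}, "
              f"Fourier {angle_arcsec(0.25, wavelength, d):.1f}, "
              f"hyperacuity {0.01 * airy_width:.2f}")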

Until this hyperacuity was developed, no mathematical method for achieving hyperacuity at human limits was available. It is now possible to give this level of hyperacuity to cameras and other sensing devices.

General anatomy of the eye


[Figure: general anatomy of the eye. Image source: Walls, 1942]

The general color perception paradox (including color constancy)

Consider light going through an ordinary magnifying lens. The edge of the lens is thin, and the middle is thicker. This makes lenses act like prisms. Ordinary white light going through a lens comes out separated into a rainbow-like fringe on objects. The ends of this rainbow are usually called Red and Blue, and have longer and shorter wavelengths respectively. This bending of light is called refraction.

It is improper to call parts of the rainbow by color names because color perception can be strongly disconnected from parts of the rainbow. We can perceive the entire range of colors when only two wavelengths of "green" photons illuminate the scene we are viewing. Color perception has always been paradoxical. Many theories have been proposed.

The retinal image of the white TV pixel doesn't show a rainbow smear but three distinct superimposed diffraction patterns, because a white TV pixel is composed of three components that are very narrow wavelength slices of a rainbow, usually called RGB for Red, Green, and Blue.

The general refraction paradox

Your eyes have two major lenses. The first lens is the cornea. The second lens is inside, behind your iris; stretching and relaxing it focuses your vision. Both lenses separate colors, like a magnifying lens does. This separation shows as color splitting on your retina. In fact, if you use a microscope to look at a person's retina while they look at a bright white thread on a black velvet background, its image is tinged red on the side away from the center of sight and blue on the side toward it. Paradoxically, the person looking at the thread doesn't perceive these colors.

In the retinal image of the white TV pixel, the separation of the Airy disks is evidence that color splitting happens for the three wavelengths produced by TVs.

The general diffraction paradox

Now consider light going through a slit. In high school science, you were shown the "two slit" photon experiment. Light going through two slits forms a diffraction pattern on a focal surface behind the slits. Likewise, light going through a circular hole forms a pattern on the focal surface. Behind your circular pupil, your retina is the focal surface. The diffraction pattern looks like concentric light and dark rings on your retina. These rings are wider for more reddish light and narrower for more bluish light. This difference in width is called "longitudinal chromatic aberration" and the rings are called "Airy" rings.

These patterns show most clearly when you use one wavelength of light from a very small and distant "point source" of single-color light, like a red star or, even better, a single red dot on your TV screen seen from a distance. Again, paradoxically, the person looking at the TV perceives no rings.

In the retinal image of the white TV pixel, the fact that the pattern shows up as an Airy disk and rings for each wavelength shows that diffraction through the pupil happens separately for each of the three wavelengths (RGB) used in TV pixels.
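
The pattern these sections describe has a standard closed form: for a circular aperture of diameter d, the intensity at angle θ is proportional to (2·J1(x)/x)², where x = π·d·sin(θ)/λ and J1 is a Bessel function. A minimal sketch using SciPy, with assumed pupil and wavelength values:

    import numpy as np
    from scipy.special import j1   # Bessel function of the first kind, order 1

    def airy_intensity(theta_rad, pupil_m, wavelength_m):
        """Normalized Airy pattern of a circular aperture: (2*J1(x)/x)**2."""
        x = np.pi * pupil_m / wavelength_m * np.sin(theta_rad)
        x = np.where(x == 0, 1e-12, x)   # avoid 0/0 at the exact center
        return (2 * j1(x) / x) ** 2

    theta = np.linspace(1e-6, 2e-3, 5)   # angles in radians
    for wavelength, name in ((450e-9, "blue"), (550e-9, "green"), (650e-9, "red")):
        # With the same pupil, the blue rings sit inside the green rings,
        # and the green inside the red, as in the retinal image above.
        print(name, np.round(airy_intensity(theta, 1e-3, wavelength), 4))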

The trichromatic coaxial splitting paradox

Dots on TV screens, also known as "pixels", are made from exactly three wavelengths called RGB for Red, Green, and Blue. When you perceive a white dot on the TV screen in a dark room, you are actually sensing three separate images of the dot: one Red image, one Green image, and one Blue image. These images on the retina are quite distinct under a microscope. This is another paradox. The distinct microscopic images are not perceived when watching TV at a comfortable distance.

Looking straight at the white TV pixel, the separate sets of diffraction rings are centered and piled one on top of the other. The microscope image shows color variation starting from white at the center and changing as the light and dark diffraction rings mix with each other in different ways at different distances from the center.

The retinal image of the white TV pixel is subject to the rule that the size of a diffraction pattern depends on wavelength.

The trichromatic eccentric splitting paradox

"Eccentricity" is the name for looking slightly or further to the side of a dot. If you look slightly to the side of a dot, something very different happens than when looking straight on. The rings separate sideways from each other, and no longer share the same center. This is called "transverse chromatic aberration". The paradox here is that you perceive that there is a unique center for a white TV pixel, but there is no such center on the retina. But the centers of each line up along a line to the center of vision.

In the retinal image of the white TV pixel, the fact that the Airy disks do not have a common center shows that the eccentric displacement of light through a lens depends on wavelength.

The diffraction pattern diameter invariance paradox

When you are reading, your pupil constricts. When a large animal attacks you, your pupil dilates.

Another paradox is that all diffraction rings change size when your pupil changes size, but you will always perceive the white TV pixel as a point of light. The central disk of the diffraction pattern always covers many cones. But TV screens look the same when you read words on the screen or are surprised by some movie monster, making your pupils constrict and dilate.

In the retinal image of the white TV pixel, were you to constrict your pupil, the Airy disks would grow; were you to dilate your pupil, they would shrink. However, you would perceive no change in the white TV pixel.
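
Quantitatively: the full Airy disk width is about 2.44λ/d, so constricting the pupil from 7 mm to 1 mm multiplies the diameter of every disk and ring by a factor of 7, while the perceived pixel stays the same size.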

The spectral line paradox

The light from televisions has been designed to fool humans into thinking they are seeing the right colors. But consider this. If you burn sodium, it gives off a characteristic spectrum with a yellowish hue. In fact, sodium gives off three distinct spectral lines: two very strong yellow bands and one weaker blue band. If you record this sodium light with a TV camera and show it on a TV screen, it looks right. But if you use a spectrum analyzer on the TV image, the spectrum analyzer will show one bright red band, one bright green band, and one dim blue band of the wrong blue. So the television is, in fact, giving us a false set of wavelengths that we perceive as if they were the true color. The fact that we are fooled into perceiving a wavelength that isn't there is another paradox. Animals with different color vision are not fooled by a TV.
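
The standard name for this effect is metamerism: the eye reports only three cone responses per point, so any two spectra that produce the same three inner products with the cone efficiency curves are indistinguishable. A toy Python sketch, using made-up Gaussian curves and spectra rather than measured data:

    import numpy as np

    wl = np.linspace(400, 700, 301)   # wavelengths in nanometers

    def band(center_nm, width_nm):
        return np.exp(-((wl - center_nm) / width_nm) ** 2)

    # Crude Gaussian stand-ins for the S, M, L cone efficiency curves
    # (assumed shapes for illustration only).
    cones = np.stack([band(445, 30), band(540, 40), band(565, 45)])

    sodium = band(589, 2)                                   # narrow yellow emission
    rgb_mix = 0.52 * band(620, 10) + 0.48 * band(535, 10)   # TV-like primaries

    print("cone responses to sodium: ", cones @ sodium)
    print("cone responses to RGB mix:", cones @ rgb_mix)
    # Tuning the two weights can equate the response triples, which is how
    # a TV can show "sodium yellow" while emitting no 589 nm light at all.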

The full spectrum smear paradox

Sunlight is also different from TV light. Unlike Sodium light, sunlight is mostly from the glow of a hot object, much like the filament in a light bulb. This is called "black body radiation", and is the most common kind of light around us. When we perceive one white pixel from a television image of a sunlit scene, a microscopic look at the retina shows bright red, green, and blue "Airy" rings. But the same one pixel seen through a pinhole in real sunlight makes no such rings on the retina. Instead, an entire rainbow spectrum of rings is smeared over the area where the TV image appeared as sharp rings. As observers, we perceive no qualitative difference between sunlit scenes and TV scenes. Here, the paradox is that either a smear of almost indistinguishable colors, or a set of sharply colored rings, can be perceived as the same sharp dot.

Were you to replace the white TV pixel with a pinhole in a black piece of paper through which light from an ordinary filament light bulb was streaming, in the retinal image you would see a broad smear encompassing all disks and rings. However, you would perceive the same white dot as if you were looking at the white TV pixel.
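
Black body radiation has a standard closed form, Planck's law. A minimal sketch of the visible-band spectrum, assuming a 2850 K filament temperature for an ordinary bulb:

    import numpy as np

    H = 6.626e-34   # Planck constant, J*s
    C = 2.998e8     # speed of light, m/s
    K = 1.381e-23   # Boltzmann constant, J/K

    def planck(wavelength_m, temperature_k):
        """Spectral radiance of a black body (Planck's law)."""
        return (2 * H * C**2 / wavelength_m**5
                / (np.exp(H * C / (wavelength_m * K * temperature_k)) - 1))

    wl = np.linspace(400e-9, 700e-9, 7)   # the visible band
    radiance = planck(wl, 2850.0)         # assumed filament temperature, kelvin
    print(np.round(radiance / radiance.max(), 3))
    # A smooth ramp over the whole band: every wavelength contributes,
    # so the pinhole smears a full rainbow of rings across the retina.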

The hyperacuity paradox

Our eyes have a sensitive surface called a retina. The retina has many different kinds of cells and many layers. The light-sensitive cells have many names, like photoreceptors, sensors, rods, and cones. They are like other cells in many ways. Their size varies a little but not a lot. They have width and length. A receptor's length runs through the retina from the back surface inward. The width is parallel to the retinal surface. People usually think that you ought not to be able to perceive things smaller than the width of a single cone. But this is wrong. For centuries observers have known that you can perceive much smaller details. Modern processing techniques used on images from nearly perfect optical devices discriminate details down to ¼ the size of a pixel.

However, when our eyes look at very small details, we perceive things that are smaller across than even 1/10th the width of a single photoreceptor. So it is actually an error to think that photoreceptor size directly limits our ability to perceive detail. The traditional measure of detail perception is called "acuity". A special word, "hyperacuity", is used to describe our ability to perceive details far smaller across than a single receptor. That we perceive detail as well as we do is a paradox unsolved by modern processing techniques.

Hyperacuity, even for excellent optics, has not been given a clear explanation.

The diameter of the blue Airy disk in the retinal image of a white TV pixel is 100 times larger than the white dot you perceive.
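
For comparison with conventional techniques only: the textbook way to localize a blurred spot to a small fraction of a pixel is an intensity-weighted centroid over the pixels the spot covers. This sketch is that standard method, not the method behind this technology, and it does not survive the distortions described in the sections below:

    import numpy as np

    def centroid(image):
        """Intensity-weighted center of mass, in pixel coordinates."""
        ys, xs = np.indices(image.shape)
        return (xs * image).sum() / image.sum(), (ys * image).sum() / image.sum()

    # A blurred spot several pixels wide, centered at a sub-pixel position.
    true_x, true_y = 7.23, 6.81
    ys, xs = np.indices((16, 16))
    spot = np.exp(-((xs - true_x) ** 2 + (ys - true_y) ** 2) / (2 * 2.0 ** 2))

    est_x, est_y = centroid(spot)
    print(f"true ({true_x}, {true_y})  estimated ({est_x:.2f}, {est_y:.2f})")
    # With clean, symmetric optics the error is a tiny fraction of a pixel;
    # noise and the asymmetric defects described below destroy this accuracy.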

The reliability when defective paradox

The image below shows the diffraction pattern from a point source on the retina of a very good eye. The smallest pattern in this image covers several cones. The largest pattern covers hundreds of cones. The sizes refer to the diameter of the pupil in millimeters when the image was photographed.

But things are even harder to understand when you look through the microscope at images on the retina. Eyes are rarely perfect. The image below is pathological, but the patient still sees fairly well. Images of circular dots are rarely circular: they have odd lumpy and pointy shapes protruding in various directions, caused by astigmatism and injuries. The cornea is rarely round, and densities are uneven in the clear structures of the eye, which can take on many kinds of defects in shape from injuries and local changes in density.

It is entirely paradoxical that we can perceive details less than 1/10th the width of a photoreceptor when the image on the retina is wavelength split, smeared, diffracted, and then grossly distorted by optical imperfections of size, shape, clarity, and density.

So, the image of an off-center distant point of white light on the retina, as seen through a microscope, is a very smeared and distorted set of colored rings. This image is between two and twenty photoreceptors across, depending on the size of your pupil. Yet most people will "perceive" this image as a point source of light smaller across than 1/10th the width of a single photoreceptor, more than 100 times smaller across.

The retinal image of the white TV pixel would have each of the RGB components distorted almost identically. The shape of the diffraction pattern for each component fits the shapes of the other components when scaled in overall size. There are numerous names for "same shape, different size". Here it is named "homologous" (same shape, possibly different size) rather than "congruent" (same shape, same size).

The narrow pupil hyperacuity paradox

The tiny pattern labelled 7mm in the left hand image is over 20 times larger than the size of the perceived dot. This tiny pattern appears when you look at a star at night with a wide open pupil. When you are reading in bright light, your pupil narrows to 1mm. Your acuity is best in bright light with a narrow pupil. Surprisingly, you will perceive a tinier dot for the pattern labelled 1mm than you will for the pattern labelled 7mm.

In other words, you perceive more detail when the width of the disk is over 100 times larger than what you report you can see. This is a major unsolved paradox that is rarely reported in the literature.

In the retinal image of the white TV pixel, when you are reading letters from the TV screen (your pupils are constricted) the blue Airy disk covers hundreds of photoreceptors.

The need for dynamic contrast paradox

When a doctor paralyzes your eye-moving muscles and prevents your head from moving, and presents you with an unmoving scene, your vision fades to gray. Any movement brings much of the scene back. Needing movement to see is a paradox. Careful experiments have shown that eyes only detect moving points, boundaries, and vertices; and general changes in light. Everything else that is perceived has been added by the nervous system.

Although the retinal image of the white TV pixel doesn't show it, a growing number of publications describe how nervous systems appear to anticipate the future positions of points, boundaries, and vertices.

Movement is not, in itself, required. It is also possible to see an image that has been flashed onto the retina in a millionth of a second such that the photon count is about the same as the number that would arrive in about a 20th of a second. Images shown in this way can appear in full color and with hyperacuity.

The red versus green paradox

Photoreceptors do not pick up just one wavelength of photon. At one small span of wavelengths, a photoreceptor is at its most sensitive. In fact, most photoreceptors will absorb and signal for photons that are ideally absorbed and signaled by another photoreceptor. Statistically, the number of signals generated per photon decreases when the photon is not ideally absorbed. This falling off of signal at lower and higher wavelengths is described by the "Efficiency" curve.

Photoreceptors that are called "Blue" absorb far more short-wavelength photons than do those called "Red" and "Green". But each of the latter two picks up photons best absorbed by the other so efficiently that the difference in signal is small until the photons being absorbed are of sharply different wavelengths spanning the efficiency curves.
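
The overlap can be made concrete with crude Gaussian stand-ins for the "Red" and "Green" efficiency curves (assumed shapes, not measured data):

    import numpy as np

    wl = np.linspace(450, 650, 9)          # wavelengths in nanometers
    L = np.exp(-((wl - 565) / 45) ** 2)    # "Red" cone, assumed shape
    M = np.exp(-((wl - 540) / 40) ** 2)    # "Green" cone, assumed shape

    opponent = (L - M) / (L + M)           # normalized difference signal
    for w, o in zip(wl, opponent):
        print(f"{w:5.0f} nm  L-versus-M signal {o:+.3f}")
    # Near the crossover between the two curves, the difference signal is
    # only a few percent of the total signal available.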

It is a paradox that we distinguish red from green so well and with such subtlety that perceptually we consider red, green, and blue to be equally spaced colors.

The contrast invariance paradox

The final paradox to consider here is that our eyes seem to work just fine in a dark cave and in bright sunshine with only a relatively short time of adaptation. This is curious because as few as 20 photons total from a single source can be used to locate the light source in a dark cave, but as many as 1 billion photons per second can be incident on each photoreceptor and still be useful for seeing in bright sunshine.

Adapting between these two lighting scenarios is quick. It is also possible to see details in a dark room when looking from a brightly lit room, and vice versa. This means that our eyes adapt to use both dark and bright scene lighting at the same time.

There is no substantial difference in the appearance of objects regardless of the light coming from their environment. A dark thing stays relatively dark compared to its neighbors. A light thing stays relatively light compared to its neighbors.
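
The span between those two operating points is easy to put a number on, using the figures above and an assumed integration window of about a twentieth of a second for the dim case:

    import math

    dim_photons = 20      # total photons usable to locate a source in the dark
    bright_rate = 1e9     # photons per second per receptor in sunshine
    window_s = 0.05       # assumed integration window, about 1/20 second

    bright_photons = bright_rate * window_s
    ratio = bright_photons / dim_photons
    print(f"ratio {ratio:.1e}, about {math.log10(ratio):.1f} orders of magnitude")
    # Six to seven orders of magnitude, handled with only brief adaptation,
    # and with both regimes usable at once across a single scene.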

Combined paradoxes

Using conventional engineering practices, it is hard to understand how a human eye can work. Almost every effort to improve imaging works on improving each element of the photon path, such as the lenses, and on preparing the image for processing techniques such as the Fast Fourier Transform, the Cosine Transform, and Sinc-function convolution.

But the very best image processing operating on the data an eye receives comes nowhere close to matching the acuity of a person with poor eyesight due to refractive problems, an injured retina, or an astigmatic cornea.

Every technical optical problem found in eyes has been overcome by modern engineering practices, but photographs are no match for the human eye, and no processing method is known that overcomes the problems of the eye's image. What could the eye and brain be doing that makes the technical optical problems irrelevant, and maybe even of positive value?

Back to camera technology

In complete contrast with human eyes, camera makers seek to produce perfect optics and use only the central disk of the round point diffraction pattern.

Modern camera makers have designed lenses, apertures, film, and digital sensors that use special and expensive tricks to prevent awareness of the effects described earlier. Camera lenses are not like ordinary magnifying lenses. By making many layered lenses of different material, the prismatic separation of color can be reversed. Lenses of this sort are called "achromatic". Film and digital sensors are made to have color dots about the same size as the central "Airy" disk of a perfectly circular diffraction pattern. Ring data is rarely used, and almost always discarded or ignored.

Special single color cameras have been made that can take high quality pictures of scenes that have both light and dark objects or bright lighting in one area and dim lighting in another. These are called "logarithmic" cameras. Pictures from normal cameras can partially imitate logarithmic cameras by a special post-processing called "Gamma correction". This correction is limited and loses discrimination.
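
Gamma correction itself is a one-line power law. A minimal sketch of the post-processing mentioned above; the gamma value of 2.2 is a common assumption, and real pipelines vary:

    import numpy as np

    def gamma_correct(image, gamma=2.2):
        """Compress linear intensities with a power law, imitating log response."""
        return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)

    linear = np.array([0.001, 0.01, 0.1, 0.5, 1.0])   # linear sensor values
    print(np.round(gamma_correct(linear), 3))         # [0.043 0.123 0.351 0.73 1.]
    # Dark values are stretched apart and bright values squeezed together,
    # approximating a logarithmic response; discrimination is lost wherever
    # many distinct linear values map to nearly the same output level.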

Underexposed images that have been brightened exhibit quantum noise and noise-floor defects. Saturated images (too bright) exhibit slightly different defects.

But when all the tricks camera makers use are taken together, they seem to be a lot of work to make images that look right to our eyes only in constrained (though common) environments.

Questions to ask

Is a modern camera more complicated than our eyes? Could simpler and better cameras be built by imitating eyes? Can existing images from modern cameras be automatically post-processed to make them better? Are there other devices that can be made better by imitating nerves? Do nervous system paradoxes only arise as a result of rigid notions about nervous system processing?

The working model behind this technology addresses these and other questions. The technology imitates perception, action, and reflex paths connecting perception to action. Imitating portions of human vision illustrates the model well.

I explore how the human eye processes images produced by modern cameras and display devices like computer screens. My image processing produces hyperacute images from diffraction limited and moving images of point sources, and then predicts where moving boundaries and vertices will be.

All the paradoxes described here, and others, are resolved by this technology. My inventions have similar strengths and weaknesses to human and animal perception, because they are close imitations of nerve shapes and actions.


Copyright (c) 2009 Jonathan D. Lettvin, All Rights Reserved. Contact: (617) 600-4499