Reverse-Engineering the Human Visual System

On Monday, just before staffing The MathWorks’ booth, I attended Maria Petrou’s plenary session, “Reverse-Engineering the Human Visual System.” It tried to cover too much ground for one hour,* but it was still quite interesting to ponder how we can use our understanding of the human visual system to do better digital image processing.

Among the more interesting ideas:

  • There’s a difference between vision and perception: vision concerns the stimuli, perception the processing of them. Vision is well modeled; perception was the reason for the session.
  • The rods and cones of the eye are not laid out on a grid. Normalized convolution can produce acceptable images from a random sample of as few as 5% of the pixels on a regular grid, roughly approximating the irregular sampling of the visual field (a minimal sketch follows this list). You can do even better by mimicking the distribution of cones, which are densest at the fovea.
  • The human visual system performs a form of edge detection in the visual cortex, and the principles behind the resulting saliency maps can be applied to digital image processing. (For example: Plinio and Li Zhaoping.) A toy edge-based sketch also appears after this list.
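
For anyone who hasn’t run into normalized convolution, here’s a minimal sketch of the zeroth-order version: a Gaussian applicability function and a binary certainty map. The 5% sampling rate, the filter width, and the random test image are purely illustrative; this is not the specific algorithm from the talk.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(samples, certainty, sigma=2.0):
    """Reconstruct a dense image from sparsely known pixels.

    samples   -- image values (unknown pixels are masked out by the certainty map)
    certainty -- 1.0 where a pixel was sampled, 0.0 elsewhere
    sigma     -- width of the Gaussian applicability (smoothing) function
    """
    numerator = gaussian_filter(samples * certainty, sigma)
    denominator = gaussian_filter(certainty, sigma)
    # Guard against division by zero far from any sample.
    return numerator / np.maximum(denominator, 1e-8)

# Keep a random 5% of pixels on a regular grid, then reconstruct.
rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for a real image
certainty = (rng.random(image.shape) < 0.05).astype(float)
reconstruction = normalized_convolution(image, certainty)
```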
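
And here is a toy edge-based “saliency” map, just gradient magnitude normalized to [0, 1]. It is only a stand-in to make the idea concrete; it is not Li Zhaoping’s V1 saliency model or anything presented in the session.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_saliency(image):
    """Crude saliency map: gradient magnitude scaled to [0, 1]."""
    gx = sobel(image, axis=0)    # derivative along rows
    gy = sobel(image, axis=1)    # derivative along columns
    magnitude = np.hypot(gx, gy)
    return magnitude / (magnitude.max() + 1e-8)
```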

Also of interest: Hiroshi Momiji’s Retinal Vision for Engineers.

* — Perhaps I’ve been out of academia too long. Perhaps medical imaging does less for me than in the past. Perhaps it was just an off-year at SPIE Medical. At any rate, I didn’t attend many paper sessions, but those I did hear were a little disappointing.
