Wednesday, December 21, 2011

Los Alamos Scientists Mimic Neuron Function to Help Computers See More Clearly

The brain has an uncanny ability to detect and identify certain things, even when they are barely visible. Now the challenge is to get computers to do the same thing. And programming computers to process information laterally, as the brain does, might be a step in the right direction.

...“This model is biologically inspired and relies on leveraging lateral connections between neurons in the same layer of a model of the human visual system,” said Vadas Gintautas of Chatham University in Pittsburgh and formerly a researcher at Los Alamos.

Neuroscientists have characterized neurons in the primate visual cortex that appear to underlie object recognition, noted senior author Garrett Kenyon of Los Alamos. “These neurons, located in the inferotemporal cortex, can be strongly activated when particular objects are visible, regardless of how far away the objects are or how the objects are posed, a phenomenon referred to as viewpoint invariance.” _HPCwire
The scientists want to create computer models of human vision that can pick out complex objects from a cluttered visual field as well as humans do -- only faster.
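The lateral connections Gintautas describes link neurons within the same layer, so that each unit's response is shaped by the activity of its neighbors. As a rough illustration only (this is a generic relaxation step, not the researchers' actual model; the function name and parameters are invented here), one pass of lateral facilitation over a row of same-layer units might look like this:

```python
import math

def lateral_step(activations, spread=2.0, strength=0.5):
    """One relaxation step for a 1-D layer of units.

    Each unit receives excitatory input from its same-layer neighbors,
    weighted by a Gaussian falloff with distance, so activity supported
    by nearby active units is reinforced. Purely illustrative.
    """
    n = len(activations)
    out = []
    for i in range(n):
        # Summed lateral input from every other unit in the layer.
        lateral = sum(
            activations[j] * math.exp(-((i - j) ** 2) / (2 * spread ** 2))
            for j in range(n) if j != i
        )
        out.append(activations[i] + strength * lateral / n)
    return out
```

Iterating such a step lets isolated responses fade relative to responses that neighbors "agree" on, which is the intuition behind using lateral connectivity to favor smooth, mutually consistent contours.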
To quantify the temporal dynamics underlying visual processing, we performed speed-of-sight psychophysical experiments that required subjects to detect closed contours (amoebas) spanning a range of shapes, sizes and positions, whose smoothness could be adjusted parametrically by varying the number of radial frequencies (with randomly chosen amplitudes). To better approximate natural viewing conditions, in which target objects usually appear against noisy backgrounds and both foreground and background objects consist of similar low-level visual features, our amoeba/no-amoeba task required amoeba targets to be distinguished from locally indistinguishable open contour fragments (clutter). For amoeba targets consisting of only a few radial frequencies (), human subjects were able to perform at close to accuracy after seeing target/distractor image pairs for less than 200 ms, consistent with a number of studies showing that the recognition of unambiguous targets typically requires 150-250 ms to reach asymptotic performance [22], [23], [35], here likely aided by the high intrinsic saliency of closed shapes relative to open shapes [7].

Because mean inter-saccade intervals are also in the range of 250 ms [34], speed-of-sight studies indicate that unambiguous targets in most natural images can be recognized in a single glance. Similarly, we found that closed contours of low to moderate complexity readily “pop out” against background clutter, implying that such radial frequency patterns are processed in parallel, presumably by intrinsic cortical circuitry optimized for automatically extracting smooth, closed contours. As saccadic eye movements were unlikely to play a significant role for such brief presentations, it is unclear to what extent attentional mechanisms are relevant to the speed-of-sight amoeba/no-amoeba task.

Our results further indicate that subjects perform no better than chance at stimulus onset asynchronies (SOAs) shorter than approximately 20 ms. _PLoS
The PLoS link above allows access to the entire study.
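The "amoeba" targets in the study are radial frequency patterns: closed contours whose radius is modulated by a small number of sinusoids with randomly chosen amplitudes, so that fewer frequencies yield smoother shapes. A minimal sketch of generating such a contour (function name, parameter values, and amplitude range are our own choices, not the study's stimulus code):

```python
import math
import random

def amoeba_contour(num_freqs, n_points=256, base_radius=1.0,
                   max_amp=0.15, seed=0):
    """Generate a closed 'amoeba' contour as a radial frequency pattern.

    The radius at angle theta is base_radius modulated by num_freqs
    sinusoids with random amplitudes and phases; increasing num_freqs
    produces a more complex, less smooth shape.
    """
    rng = random.Random(seed)
    amps = [rng.uniform(0.0, max_amp) for _ in range(num_freqs)]
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(num_freqs)]
    points = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        # Sum the radial frequency components at this angle.
        r = base_radius * (1 + sum(
            a * math.cos((k + 1) * theta + p)
            for k, (a, p) in enumerate(zip(amps, phases))))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Because the contour is defined over the full circle of angles, it is closed by construction; open "clutter" fragments of the kind used as distractors could be cut from such contours by keeping only a sub-range of angles.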

These findings provide additional insight into the unconscious nature of neural processing, touched on in the previous posting here.

Researchers are attempting to take these insights and use them to devise computer models that simulate various unconscious brain functions. It will not be an easy task, but by approaching the problem system by system, limited success is quite possible within a reasonable time frame.

If computers ever learn to "see" and distinguish objects within complex and dynamically changing fields, as well or better than humans, there will be a number of profitable applications waiting.
