CCNBook/Perception/Ventral Path Data


Back to Perception Chapter

Figure 1: Responses of V2 neurons to a wide range of visual stimuli, showing that they can respond to more complex visual features, including intersections of simple oriented line features that are known to be encoded in V1. Reproduced from Hegde & VanEssen (2003).
Figure 2: Responses of V4 neurons to more complex object shape features (shown in white). The colored box below the object shapes summarizes the V4 population response as a function of continuously varied shape parameters (the position and size of various forms of convexity vs. concavity). The left shape is the original input, and the right shape is the one reconstructed from the neural activity pattern shown below; to the extent that these match, the V4 neurons are accurately capturing shape features. Reproduced from Pasupathy & Connor (2002).
Figure 3: Ability to generalize across position, scale, and context variability for V4 and IT neurons, showing that IT neurons are significantly more invariant in their responses than V4 neurons. A classifier was trained on the neural responses of V4 and IT neurons to a set of visual inputs, and then tested on novel images varying along these different dimensions (see the sketch at the end of this section for the logic of this analysis). Reproduced from Rust & DiCarlo (2010).

This section provides a more detailed treatment of the data on neurons in the ventral visual pathway (the "what" pathway).

Figure 1 shows that V2 neurons can respond to more complex visual shapes, suggesting that they might be encoding conjunctions of V1 simple oriented edge features.
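
As a toy illustration of what "conjunctions of V1 features" could mean, here is a minimal sketch in Python: a "V2-like" unit that responds only when two differently oriented "V1-like" Gabor filters are co-active, as at an intersection. The filter parameters and the multiplicative combination rule are illustrative assumptions, not a model of actual V2 physiology.

```python
# Toy sketch: a "V2-like" conjunction unit over two oriented "V1-like" filters.
# All parameters are illustrative assumptions, not fit to physiology.
import numpy as np

def gabor(size=21, theta=0.0, freq=0.2, sigma=4.0):
    """Oriented Gabor filter, a standard stand-in for a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def v1_response(image, theta):
    """Rectified dot product: V1-like response to one orientation."""
    return max(np.sum(image * gabor(theta=theta)), 0.0)

def v2_conjunction(image):
    """Responds only if BOTH orientations are present at the same location,
    e.g. at an intersection -- a multiplicative conjunction of V1 features."""
    return v1_response(image, 0.0) * v1_response(image, np.pi / 2)

# Test on a horizontal bar, a vertical bar, and a plus (their intersection):
# only the plus should drive the conjunction unit strongly.
blank = np.zeros((21, 21))
horiz, vert, plus = blank.copy(), blank.copy(), blank.copy()
horiz[10, :] = 1.0
vert[:, 10] = 1.0
plus[10, :] = 1.0; plus[:, 10] = 1.0

for name, im in [("horizontal", horiz), ("vertical", vert), ("plus", plus)]:
    print(f"{name:10s} V2-like response: {v2_conjunction(im):.2f}")
```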

Figure 2 shows an attempt to characterize the response properties of V4 neurons more rigorously, in terms of their ability to encode a wide variety of parametrically defined shapes. One of the most difficult issues in this kind of work is that you tend to get out of an experiment what you put into it, in terms of the visual stimuli shown. Neurons will generally exhibit some kind of response to a visual input, so how do you know how selective they are to that particular input? Extensive testing of the same neuron with many different visual inputs is required, and even then it is not clear that the appropriate range of stimuli was sampled. The Pasupathy & Connor (2002) study represents one attempt to deal with these issues, by presenting a large range of shapes that are parametrically defined along various continuous dimensions.
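
To give a concrete sense of what "parametrically defined along continuous dimensions" means, here is a minimal sketch that generates a family of closed contours whose convexity and concavity vary continuously with angular position, in the spirit of the Pasupathy & Connor stimulus set. The radial-bump parameterization is an illustrative assumption, not their actual stimulus-construction method.

```python
# Sketch of a parametric shape family: a closed contour whose radius varies
# with angular position, so convexity/concavity can be dialed continuously.
# The parameterization is an illustrative assumption, not the actual
# Pasupathy & Connor (2002) stimulus code.
import numpy as np
import matplotlib.pyplot as plt

def parametric_shape(bump_angle, bump_width, bump_amplitude, n_points=360):
    """Circle plus one Gaussian bump in radius: bump_amplitude > 0 adds a
    convexity at bump_angle; bump_amplitude < 0 carves a concavity."""
    theta = np.linspace(0, 2 * np.pi, n_points)
    # Wrap the angular distance to the bump into [-pi, pi].
    angular_dist = np.angle(np.exp(1j * (theta - bump_angle)))
    radius = 1.0 + bump_amplitude * np.exp(-angular_dist**2 / (2 * bump_width**2))
    return radius * np.cos(theta), radius * np.sin(theta)

# Sweep the bump's angular position and sign to span a small stimulus family.
fig, axes = plt.subplots(2, 4, figsize=(10, 5))
for row, amp in enumerate([0.5, -0.4]):              # convex vs. concave
    for col, ang in enumerate(np.linspace(0, 1.5 * np.pi, 4)):
        x, y = parametric_shape(bump_angle=ang, bump_width=0.4, bump_amplitude=amp)
        ax = axes[row, col]
        ax.plot(x, y)
        ax.set_aspect("equal"); ax.axis("off")
plt.show()
```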

A particularly interesting paper by Hegde & VanEssen (2007) followed up on the V2 data shown above, directly comparing response properties in V2 vs. V4. They did not find nearly as strong a difference as one might expect, for example based on the Kobatake & Tanaka (1994) study shown in the main chapter. Interestingly, Kobatake & Tanaka (1994) used anesthetized monkeys, while Hegde & VanEssen (2007) used awake monkeys. One important difference here is that awake monkeys are likely to have much stronger top-down activation flow, such that the measured V2 response properties may reflect a strong top-down influence from V4 neurons. It is also not clear that the Hegde & VanEssen (2007) stimulus set samples a wide enough range of inputs to really distinguish between these two areas. Further careful research is required to determine what truly distinguishes them, but the basic patterns of connectivity and the results from our computational models all suggest that the hierarchical view is likely to be accurate, as reflected in the more feedforward dynamics of the anesthetized monkeys.
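
The classifier-based invariance analysis summarized in Figure 3 can be made concrete with a small sketch. Below, a linear classifier is trained on simulated population responses to objects at one position and tested at a shifted position; the synthetic response model, the invariance parameter, and all names are illustrative assumptions, not the actual Rust & DiCarlo (2010) pipeline.

```python
# Minimal sketch of the train/generalize decoding logic behind Figure 3.
# Synthetic "neural" responses stand in for recorded V4/IT data; the
# invariance parameter and response model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_neurons, n_objects, n_trials = 100, 8, 50

def population_response(obj, position, invariance, tuning):
    """Object-selective code, corrupted by position in proportion to
    (1 - invariance): IT-like populations (high invariance) are barely
    perturbed by position; V4-like populations (low invariance) are."""
    position_noise = (1.0 - invariance) * position * rng.normal(size=n_neurons)
    return tuning[:, obj] + position_noise + 0.3 * rng.normal(size=n_neurons)

def generalization_accuracy(invariance):
    tuning = rng.normal(size=(n_neurons, n_objects))
    def sample(position):
        X = [population_response(o, position, invariance, tuning)
             for o in range(n_objects) for _ in range(n_trials)]
        y = [o for o in range(n_objects) for _ in range(n_trials)]
        return np.array(X), np.array(y)
    X_train, y_train = sample(position=0.0)   # train at the central position
    X_test, y_test = sample(position=2.0)     # test at a novel, shifted position
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)          # accuracy on the novel position

print("V4-like (low invariance): ", generalization_accuracy(invariance=0.3))
print("IT-like (high invariance):", generalization_accuracy(invariance=0.9))
```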