CECN1 Pattern Completion

From Computational Cognitive Neuroscience Wiki

Bidirectional Pattern Completion

Back to CECN1 Projects

Project Documentation

(note: this is a literal copy from the simulation documentation -- it contains links that will not work within the wiki)

Bidirectional Pattern Completion

Now we will explore pattern completion in a network with bidirectional connections within a single layer of units. Thus, instead of top-down and bottom-up processing, this network exhibits lateral processing. The same mechanisms are at work in both cases. However, laterality implies that the units involved are somehow more like peers, whereas a top-down/bottom-up arrangement implies that they have a hierarchical relationship.

  • To start, it is usually a good idea to do Object/Edit Dialog in the menu just above this text, which will open this documentation in a separate window that you can more easily come back to. Alternatively, you can always return by clicking on the ProjectDocs tab at the top of this middle panel.

We will see in this exploration that by activating a subset of a "known" pattern in the network (i.e., one that is consistent with the network's weights), the bidirectional connectivity will fill in or complete the remaining portion of the pattern. This is the pattern completion phenomenon. A great deal of human cognition can be understood in terms of this relatively simple concept.

You should see a network window displaying a single-layered network. As usual, we begin by examining the weights of the network.

  • To the left of the network view window, select r.wt and click on different units. To do so, you must use the arrow pointer, selected by clicking on its icon in the upper right corner of the network, or by hitting the "Esc" key.

You can see that all the units that belong to the image of the digit 8 are interconnected with weights of value 1, while all other units have weights of 0. Thus, it would appear that presenting any part of the image of the 8 would result in the activation of the remaining parts. We will first test the network's ability to complete the pattern from the initial subset of half of the image of the digit 8.
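The completion dynamic just described can be sketched in a few lines. This is a hypothetical toy model with binary threshold units, not the actual Leabra equations the simulator uses; the pattern indices, threshold, and unit count are all made up for illustration:

```python
import numpy as np

n_units = 10
stored = [0, 1, 2, 3, 4]           # indices standing in for the "8" pattern

# All units in the stored pattern are interconnected with weight 1;
# every other weight is 0, as in the network's weight display.
W = np.zeros((n_units, n_units))
for i in stored:
    for j in stored:
        if i != j:
            W[i, j] = 1.0

act = np.zeros(n_units)
act[[0, 1]] = 1.0                  # clamp a subset of the stored pattern

# Iterative settling: a unit turns on when its net input from active
# neighbors exceeds a threshold (here, at least two active neighbors).
threshold = 1.5
for _ in range(5):
    net = W @ act
    act = np.maximum(act, (net > threshold).astype(float))

# The remaining stored units (2, 3, 4) fill in; non-pattern units stay off.
```

The clamped subset excites the other stored units through the shared weights, and once their net input crosses threshold they activate, completing the pattern.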

  • In the network view window, select the act button to view the unit activities in the network. Now press Init and then Step (in either the .PanelTab.ControlPanel, or the LeabraEpoch program, which you can access through the hierarchy view in the left panel). Keep pressing Step until units start to become active; the first ones to activate are the input pattern. Keep hitting Step to observe how these units activate the others in this network's "memorized" pattern.

What you just did was present an input to some of the units, which then activated the others. Notice that we are viewing the activations being updated during settling, so that you can tell which ones were clamped from the environment and which are being completed as a result -- the ones that activate first are the ones that were part of the input pattern. Step through the above instructions again to observe this process closely.

There is no fundamental difference between this pattern completion phenomenon and any of the other types of excitatory processing examined so far -- it is simply a result of the unclamped units detecting a pattern of activity among their neighbors and becoming activated when this pattern sufficiently matches the pattern encoded in the weights. However, pattern completion as a concept is particularly useful for thinking about cued recall in memory, where some cue (e.g., a distinctive smell, an image, a phrase, a song) triggers one's memory and results in the recall of a related episode or event. This will be discussed further in chapter 9.

Note that unlike previous simulations, this one employs something called soft clamping. With soft clamping, the inputs from the event pattern are presented as additional excitatory input to the neurons instead of directly setting the activations. That form of clamping (which we have been using previously) is called hard clamping, and it results in faster processing than soft clamping. However, soft clamping is necessary in this case because some of the units in the layer need to update their activation values as a function of their weights to produce pattern completion.
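The distinction between the two clamping styles can be sketched with toy rate-code units. This is not the simulator's code; the function name, parameters, and network are assumptions made for illustration:

```python
import numpy as np

def settle(ext_input, W, soft=True, steps=20, gain=0.3):
    """Settle a small rate-code network; `soft` selects the clamping style."""
    act = np.zeros(len(ext_input))
    for _ in range(steps):
        net = W @ act
        if soft:
            # Soft clamp: external input is just extra excitatory drive,
            # so clamped units still update as a function of their weights.
            act += gain * (np.tanh(net + ext_input) - act)
        else:
            act += gain * (np.tanh(net) - act)
            clamped = ext_input > 0
            act[clamped] = ext_input[clamped]  # hard clamp: set directly
    return act

# Three fully interconnected units; only unit 0 receives external input.
W = np.ones((3, 3)) - np.eye(3)
ext = np.array([1.0, 0.0, 0.0])
a_soft = settle(ext, W, soft=True)
a_hard = settle(ext, W, soft=False)
# Both styles complete the pattern, but only hard clamping pins unit 0 at
# exactly 1; under soft clamping its activation remains weight-driven.
```

Hard clamping fixes the input units immediately (hence the faster processing noted above), whereas soft clamping lets the same units participate in settling, which is what pattern completion requires.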

In order to experiment further with this network, you'll need to change the input pattern.

  • New way in version 5.0: just click with the left mouse button on the squares in the .T3Tab.Digit_Images tab to turn units on, and click with the middle mouse button (Ctrl+mouse on Mac) to turn them off.
  • Old way: Access the input pattern in the project structure outline, in the far left window (it starts with "Project_0"), and click on the plus sign next to the green data entry. This will expand that entry so you can view all of the data associated with the project. Now, expand the InputData subgroup entry, then click directly on Digit_Images. In the middle panel, click on the single (Matrix) entry. You should now see a table of zeros and ones, making up the pattern input that was clamped to the hidden layer (the right half of the digit 8 image). Click on individual cells and enter 0s and 1s to change which units are soft clamped.

To view the input pattern graphically, select the .T3Tab.Digit_Images view tab in the right panel -- this is just a grid view of the input data table.

  • Press Init and then repeatedly Step to see the results of your new input.

Question 3.7 (a) Given the pattern of weights, what is the minimal number of units that need to be clamped to produce pattern completion to the full 8? You can determine your answer by toggling off the units in the event pattern one by one until the network no longer produces the complete pattern when it is Run. (Don't forget to hit Init after changing the input pattern.) (b) The g_bar_l parameter can be altered (in the ControlPanel) to lower this minimal number. What value of this parameter allows completion with only 5 inputs active?

Question 3.8 (a) What happens if you activate only inputs which are not part of the 8 pattern? Why? (b) Could the weights in this layer be configured to support the representation of another pattern in addition to the 8 (such that this new pattern could be distinctly activated by a partial input), and do you think it would make a difference how similar this new pattern was to the 8 pattern? Explain your answer.

A phenomenon closely related to pattern completion is mutual support or resonance, which happens when the activity from a set of units that have excitatory interconnections produces extra activation in all of these units. The amount of this extra activation provides a useful indication of how strongly interconnected these units are, and also enables them to better resist interference from noise or inhibition. We next observe this effect in the simulation.

  • Set the entire pattern of the 8 as input to the network, together with one other unit that is not part of the 8. Then change the leak current (g_bar_l) from 3 to 7.5, and Init and repeatedly Step.

Note that the units that are part of the interconnected 8 pattern experience mutual support, and are thus able to overcome the relatively strong leak current, whereas the unit that does not have these weights suffers significantly from it.
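The mutual support effect can be illustrated with a toy model. These are assumed threshold units and an assumed leak value, not the simulator's equations or its actual g_bar_l settings:

```python
import numpy as np

# Units 0-2 are mutually interconnected (standing in for the 8 pattern);
# unit 3 receives the same external input but has no interconnections.
W = np.ones((4, 4)) - np.eye(4)
W[3, :] = 0.0
W[:, 3] = 0.0

ext = np.ones(4)          # all four units are externally activated
g_bar_l = 1.5             # leak current stronger than the external input

act = ext.copy()          # start with all input units active
for _ in range(10):
    # Leak uniformly opposes excitation; a unit stays on only if its
    # external plus recurrent input exceeds the leak.
    net = W @ act + ext - g_bar_l
    act = (net > 0).astype(float)

print(act)  # -> [1. 1. 1. 0.]: supported units survive, isolated unit dies
```

The isolated unit's external input alone cannot overcome the leak, but the interconnected units each receive extra recurrent excitation from their neighbors, so they remain active.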

Finally, note that this simulation, like the preceding ones, is highly simplified to make the basic phenomena and underlying mechanisms clear. As we will see when we start using learning algorithms in chapters 4-6, these networks will become more complex and can deal with a large number of intertwined patterns encoded over the same units. This makes the resulting behavior much more powerful, but also somewhat more difficult to understand in detail. Nevertheless, the same basic principles are at work.