From Computational Cognitive Neuroscience Wiki
Project Name: detector
Filename: File:detector.proj (Open Project in emergent)
Author: Randall C. O'Reilly
Publication: OReillyMunakataFrankEtAl12
First Published: Jul 26 2016
Tags: Weights, Detector, Neuron, Pattern
Description: Detector of patterns using weights -- illustrates key behavior of a unit.
Updated: 28 July 2016, 11 January 2017, 13 January 2017, 11 January 2018
Versions: 8.0.0, 8.0.2, 8.0.3, 8.0.4
Emergent Versions: 8.0.0, 8.0.4, 8.5.0
Other Files: File:DetectorNet.wts

Back to CCNBook/Sims/All or Neuron Chapter.


Having explored the basic equations that govern the point neuron activation function, we can now explore the basic function of the neuron: detecting input patterns. We will see how a particular pattern of weights makes a simulated neuron respond more to some input patterns than others. By adjusting the level of excitability of the neuron, we can make the neuron respond only to the pattern that best fits its weights, or in a more graded manner to other patterns that are close to its weight pattern. This provides some insight into why the point neuron activation function works the way it does.

It is recommended that you click here to undock this document from the main project window. Use the Window menu to find this window if you lose it, and you can always return by browsing to this document from the docs section in the left browser panel of the project's main window.

The Network and Input Patterns

We begin by examining the Network View panel (right-most window). The network has an Input layer that will have patterns of activation in the shape of different digits, and these input neurons are connected to the receiving neuron via a set of weighted synaptic connections. Be sure you are familiar with the operation of the Net View, which is explained in the Emergent wiki here: Network View. We can view the pattern of weights (synaptic strengths) that this receiving unit has from the input, which should give us an idea about what this unit will detect.

Select r.wt as the value you want to display (on the left side of the 3D network view) and then click on the receiving unit to view its receiving weights.

You should now see the input grid lit up in the pattern of an 8. This is the weight pattern for the receiving unit for connections from the input units, with the weight value displayed in the corresponding sending (input) unit. Thus, when the input units have an activation pattern that matches this weight pattern, the receiving unit will be maximally activated. Input patterns that are close to the target 8 input will produce graded activations as a function of how close they are. Thus, this pattern of weights determines what the unit detects, as we will see. First, we will examine the patterns of inputs that will be presented to the network.
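The idea that the weight pattern determines what the unit detects can be sketched in a few lines of code. This is purely illustrative and not the emergent implementation; the match_score helper and the toy patterns below are invented for the example:

```python
# Purely illustrative (not the emergent implementation): the unit's
# response reflects the overlap between the input pattern and its
# weight pattern.

def match_score(input_pattern, weights):
    """Count how many active inputs line up with strong (1) weights."""
    return sum(x * w for x, w in zip(input_pattern, weights))

# Toy 1D patterns standing in for the 5x7 digit grid:
weights = [1, 0, 1, 1, 0]   # what the unit "looks for"
exact   = [1, 0, 1, 1, 0]   # matches the weight pattern perfectly
partial = [1, 0, 0, 1, 0]   # overlaps on only 2 of the 3 strong weights

print(match_score(exact, weights))    # 3 -- maximal response
print(match_score(partial, weights))  # 2 -- graded, weaker response
```

An input identical to the weight pattern yields the highest score, and nearby patterns yield proportionally smaller scores, which is exactly the graded behavior described above.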

Select the digits tab at the top of the right view window.

The display that comes up shows all of the different input patterns (input data) that will be presented to the receiving unit to measure its detection responses. Each row of the display represents a single event that will be presented to the network. As you can see, the input data in this case contains the digits from 0 to 9, represented in a simple font on a 5x7 grid of pixels (picture elements). Each pixel in a given event (digit) will drive the corresponding input unit in the network.
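One way to picture these events is as flat 0/1 activation patterns derived from a 5x7 bitmap. The sketch below is illustrative only; the DIGIT_8 bitmap is an approximation, and the actual patterns stored in detector.proj may differ:

```python
# Illustrative only: encoding a 5x7 pixel digit as a flat 0/1
# activation pattern (the actual bitmaps in detector.proj may differ).

DIGIT_8 = [
    " ### ",
    "#   #",
    "#   #",
    " ### ",
    "#   #",
    "#   #",
    " ### ",
]

def to_pattern(rows):
    """Flatten a 5x7 character bitmap into a list of 35 activations."""
    return [1 if ch != " " else 0 for row in rows for ch in row]

pattern = to_pattern(DIGIT_8)
print(len(pattern))  # 35 -- one activation per input unit
```

Each of the 35 entries drives one input unit, matching the 5x7 input layer in the network.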

Running the Network

To see the receiving neuron respond to these input patterns, we will present them one-by-one, and determine why the neuron responds as it does given its weights. Thus, we need to view the activations again in the network window.

Return to the Network view in the right panel, and select act (activation) as the value to display in the network view (again on the left side of the 3D network view).
Then, select the ControlPanel tab in the middle panel, and press the Init and Step Trial buttons to process the first event in the input data (the digit 0). You will see just the input units activate in the pattern of a 0, while the receiving neuron remains inactive. Press Step Trial one more time, and you will see the next digit (1) presented.

Each Step causes the input units to have their activation values fixed or clamped to the values given in the input pattern of the event, followed by a settling process where the activation of the receiving unit is iteratively updated over a series of cycles according to the point neuron activation function (just as the unit in the previous simulation was updated over time). This settling process continues until the activations in the network approach an equilibrium (i.e., the change in activation from one cycle to the next, shown as variable da in the simulator, is below some tolerance level). The network view is updated during every step of this settling process, but it happens so quickly you likely won't really see it.
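The settling process just described can be sketched as a simple loop. This is a deliberately simplified stand-in for the point neuron activation function: the settle function, the dt step size, and the thresholded excitation-minus-leak drive are all illustrative assumptions, not emergent's actual equations.

```python
# Simplified sketch of settling: iteratively update the activation
# until the change per cycle (da) falls below a tolerance.
# NOT the real point neuron equations -- a toy caricature.

def settle(g_e, g_bar_l, dt=0.3, tol=1e-4, max_cycles=200):
    """Step the activation toward equilibrium; return (act, cycles)."""
    act = 0.0
    for cycle in range(max_cycles):
        # Toy drive: excitation minus leak, floored at zero (subthreshold
        # inputs produce no activation).
        target = max(g_e - g_bar_l, 0.0)
        da = dt * (target - act)   # graded step toward equilibrium
        act += da
        if abs(da) < tol:          # equilibrium reached
            return act, cycle
    return act, max_cycles

print(settle(2.5, 2.0))  # converges near 0.5 over several cycles
print(settle(1.0, 2.0))  # stays at 0.0 -- leak dominates
```

The key point mirrored here is that activation is updated incrementally over cycles, and settling stops when da is negligibly small.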

To see the settling process in detail, use Step Cycle -- you'll need to press it a number of times, once for each cycle of updating.

You should have seen the input pattern of the digits 0 and 1 in the input layer. However, the receiving unit showed an activity value of 0 for both inputs, meaning that it was not activated above threshold by these input patterns. Before getting into the nitty-gritty of why the unit responded this way, let's proceed through the remaining digits and observe how it responds to other inputs.

Continue to press Step Trial until the digit 9 has been presented.

You should have seen the receiving unit activated when the digit 8 was presented, with an activation of zero for all the other digits. Thus, as expected, the receiving unit acts like an 8 detector.

We can use a graph to view the pattern of receiving unit activation across the different input patterns.

Select the TrialOutputData tab in the right window -- you will see a graph and a colorful grid display.

The graph shows the activation for the unit as a function of trial (and digit) number along the X axis. You should see a flat line with a single peak at 8.

Computing Net Input

Now, let's try to understand exactly why the unit responds as it does. The key to doing so is to understand the relationship between the pattern of weights and the input pattern. Thus, we will configure the NetView to display both the weights and the current input pattern.

Select the Network view tab in the right panel to return to the network view. Then, in the middle network view configuration panel, select the Wt Lines button (just above the color scale bar). If you do not see lines from the input to the receiving unit, select the red arrow button at upper right, and click on the receiving unit. This shows the weight values into the receiving unit.
Now go back to the ControlPanel and do Init and Step Trial to present the digit 0 again.

Question 2.8: For each digit, report the number of input units where there is a weight of 1 and the input unit is also active. This should be easily visually perceptible in the display (you may need to rotate it a bit with the V.Rot dial to see the lines more clearly). You should find some variability in these numbers across the digits.

The number of inputs having a weight of 1 that you just calculated should correspond to the total excitatory input or net input to the receiving unit:

  g_e(t) = \frac{1}{n} \sum_i x_i w_i

Let's confirm this.
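As a quick illustration of the formula before confirming it in the simulator (with made-up toy patterns, not the actual digit bitmaps):

```python
# Sketch of the net input formula: for binary inputs and weights,
# g_e is just the overlap count divided by the number of inputs n.
# Toy patterns for illustration only.

def net_input(x, w):
    """g_e = (1/n) * sum_i x_i * w_i"""
    n = len(x)
    return sum(xi * wi for xi, wi in zip(x, w)) / n

x = [1, 1, 0, 1, 0]   # toy input activations
w = [1, 0, 0, 1, 0]   # toy weights
print(net_input(x, w))  # 0.4 -- two overlapping units out of five
```

This is why the counts from Question 2.8 should track the net input values: the normalization by n rescales but does not reorder them.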

Go back to the TrialOutputData tab in the far right panel, and also click the corresponding tab in the middle panel. Locate the two tabs at the bottom of the associated middle panel: one is for the Grid display, and the other is for the Graph. Click on the Graph one, and you'll see a full set of graph configuration options. Locate the row near the top that starts with Y1:, click on the act button, and select net instead. This selects the net input to view in the graph instead of the activation. Run the full set of digits again to get a full graph display.

Compare the values in the graph with the numbers you computed: they should be proportional to each other. (Normalization factors convert the raw counts into the actual net input values shown in the graph.)

As a result of working through the net input calculation, you should now have a detailed understanding of how the net excitatory input to the neuron reflects the degree of match between the input pattern and the weights. You have also observed how the activation value can ignore much of the graded information present in this input signal. Next, we will explore how to change how much information is conveyed by the activation signal, by manipulating the leak current (g_bar.l). Its default value of 2 is sufficient to oppose the strength of the excitatory inputs for all but the strongest (best fitting) input pattern (the 8).
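To build intuition for the role of the leak before manipulating it, here is a toy caricature with a binary threshold. The responds helper and the net input values are invented for illustration; they are not the simulator's values:

```python
# Toy caricature of the leak/threshold tradeoff: a binary-threshold
# unit fires only when excitation exceeds the leak. The net input
# values below are invented, not taken from the simulator.

def responds(g_e, g_bar_l):
    """True if the excitatory input overcomes the leak current."""
    return g_e > g_bar_l

# Hypothetical net inputs for three digits (8 fits the weights best):
net_inputs = {"6": 1.6, "3": 1.9, "8": 2.1}

for leak in (2.3, 2.0, 1.8, 1.5):
    active = [d for d, g in net_inputs.items() if responds(g, leak)]
    print(f"g_bar.l = {leak}: active digits = {active}")
```

Raising the leak makes the unit more selective (eventually silent), while lowering it admits progressively weaker, worse-fitting patterns, which is the qualitative pattern to look for in the exercises below.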

Manipulating Leak

Select the ControlPanel tab in the middle panel. Notice the g_bar.l variable there, with its default value of 2. Reduce g_bar.l to 1.8. (Note that you can just hit the Run button in the control panel to both apply the new value of g_bar.l and run the epoch process for one sweep through the digits.) If the Y1: field in TrialOutputData is still set to net from the last question, change it back to act so we can look at activity again instead of net input.

Question 2.9: What happens to the pattern of receiving unit activity when you change g_bar.l to 1.8, 1.5, and 2.3? Explain the qualitative effects of changing g_bar.l in terms of the point neuron activation function.

Question 2.10: What might the consequences of these different response patterns have for other units that might be listening to the output of this receiving unit? Try to give some possible advantages and disadvantages for both higher and lower values of g_bar.l.

It is clearly important how responsive the neuron is to its inputs, but there are tradeoffs associated with different levels of responsivity. The brain solves this kind of problem by using many neurons to code each input, so that some neurons can be more "high threshold" and others more "low threshold" types, providing their corresponding advantages and disadvantages in specificity and generality of response. The bias weights can be an important parameter in determining this behavior. As we will see in the next chapter, our tinkering with the value of the leak current g_bar.l is also partially replaced by inhibitory input, which plays an important role in providing a dynamically adjusted level of inhibition to counteract the excitatory net input. This ensures that neurons are generally in the right responsivity range for conveying useful information, and it makes each neuron's responsivity dependent on other neurons, which has many important consequences, as you can imagine from the explorations above.

Finally, we can peek under the hood of the simulator to see how events are presented to the network. This is done using something called a Program, which is like a conductor that orchestrates the presentation of the events in the input data to the network. We interact with programs through program control panels (not to be confused with the overall simulation control panels -- see the Wiki page: Program for more info).

To see the program control panel for this simulation, go to the left-most browser panel of the window, and click open the programs item, and then again the LeabraAll_Std subgroup. You will see a LeabraEpoch program there, which you can click on. This will bring up its control panel in the middle panel. You can then do Init, Run, and Step using the buttons at the bottom of this window.

Although the simulation exercises will not typically require you to access these program control panels directly, they are always an option if you want to obtain greater control, and you will have to rely on them when you make your own simulations.

You may now close the project (use the window manager close button on the project window, or the File/Close Project menu item) and then open a new one, or just quit emergent entirely by choosing the Quit emergent menu option or clicking the close button on the root window.