AXTut OutputData

← previous Programs AX Tutorial → next TaskProgram

NOTE: Within emergent, you can return to this document by selecting docs → OutputData in the left Navigator panel.


In this Output Data section we explore various ways to monitor, analyze, and display network output data, in order to better understand how the network is performing.

Graphing Learning Performance over Epochs

To see how your network is learning a given task, the first step is to generate a graph of learning performance (sum squared error on the training patterns) over epochs. Fortunately, the default programs that run the network collect this data automatically, so we can just display the graph after running the network.

The data is recorded in the EpochOutputData datatable object in the data → OutputData group. Assuming you have been doing this tutorial in the standard order, you should see a few rows of data from the previous run, and you should notice that the avg_sse column shows a general decrease in average sum-squared error on the training patterns, ending in a 0, which is what stopped the training.
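Conceptually, each row of this table records the sum-squared error of each training trial, averaged over the epoch. The following minimal Python sketch illustrates the statistic itself (ignoring details such as emergent's error tolerance); it is not emergent's internal code:

    import numpy as np

    def epoch_avg_sse(outputs, targets):
        """Average sum-squared error over all trials in one epoch.

        outputs, targets: lists of equal-length activation vectors,
        one pair per training trial.
        """
        sse_per_trial = [np.sum((np.asarray(o) - np.asarray(t)) ** 2)
                         for o, t in zip(outputs, targets)]
        return float(np.mean(sse_per_trial))

    # One row of an EpochOutputData-style log per epoch, e.g.:
    # epoch_log.append({"epoch": epoch, "avg_sse": epoch_avg_sse(outs, targs)})

Training stops when this value reaches 0, i.e., when every output matches its target on every trial.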

To graph this data, we will need to generate a graph view of this datatable. As a general rule in the software, all view information is generated from a given main object like a datatable or a network -- you don't create a graph and then associate data with it; it goes the other way. The menu selection to create the graph view from the datatable is View/New Graph View at the bottom of the middle Editor window (with the EpochOutputData datatable selected in the left Navigator). This time, instead of creating this graph view in its own separate panel, let's add it to the Network_1 panel.

Note: If you happened to put the graph view in its own panel initially, don't worry -- just bring up the context menu over the tab for that panel in the Visualizer on the right (the tab should be called EpochOutputData), and select Delete Panel. In fact, you don't even need to delete that tab before adding the graph view to the Network_1 panel, as you can create multiple views of the same underlying data.

After selecting Network_1 and clicking Ok you should see a graph object appear somewhere in the upper part of your Network_1 display, showing a decreasing avg_sse line from left to right. By default the line is colored black, but you can control this and many other features of the graph display in the graph control panel.

Where is that panel? You should see a Network_1 tab among the middle Editor tabs. If you click on that tab, you will initially see the display options for Network_1 in the top part of the window. Look at the bottom of this Editor panel and you will see additional tab selectors for the different view objects within this one view frame: in this case, the Net View tab for the Network_1 network view and the EpochOutputData Graph tab for the EpochOutputData graph view. Select the EpochOutputData Graph tab to bring up the parameter options for that object. Mouse over the various controls and play around with them -- there are many different ways of configuring a graph, so feel free to explore. You might want to make note of the Z-axis (Off by default), which is typically used to graph multiple training runs in a batch. Note that there are several other variables that you could plot, including average cycles to settle (avg_cycles) and a count of the number of errors made across trials (cnt_err). Also see the wiki Graph View page for more details.

To see your graph updating in real-time, you can re-initialize and run the LeabraTrain program: Init and Run.

Arranging the 3D View

Although the current Network_1 view is sufficient, it could be configured to look better. We could shrink the graph view object a bit and orient it better. To do this (optional), click on the red arrow button at the top-right of the Visualizer panel, and then grab the upper horizontal bar of the small purple box in the lower-left hand corner of the graph view object. Drag this slowly down -- you'll see the green frame rotating as you do. Do this until the graph is angled more "head on". Similarly, you can grab the left vertical bar and rotate the graph to the left a bit to make it more face-on. Next, grab any corner of the box and shrink the object a bit (maybe to half or so of its original size). Finally, you can move the whole object down and to the left a bit by clicking on the face of the cube (not on any of the purple elements) and dragging it to fit in between the Hidden and Output layers.

When you've got it the way you want, you can press the eye button to reset the display to fit, and then maybe Zoom in a bit. You could perhaps pan using SHIFT-mouse drag to center the view (or use H.Pan and V.Pan). When it looks good, click and hold the Vw_0 button at the bottom of the view to save your new view. Now let's also rename this view by right-/context-clicking the Vw_0 tab and selecting the Name option. This will bring up a little edit window where you can enter "MainView", or whatever you want to call it.

NOTE: The position, scale, and orientation of each object in any Visualizer panel can also be set manually by accessing each panel's object properties, but we will not cover that here.

Improving Learning Performance

To improve the learning performance of the network, we need to adjust one critical parameter, which is set to a value that works better for larger networks -- as opposed to the relatively small network we're using here. This parameter controls the amount of inhibition in the Hidden layer. It needs to be reduced somewhat.

In the left Navigator tree scroll down to Network_1 and double-click on it if it's not already open. Find and select HiddenLayer under the network specs section (NOT the layers section), which is the set of parameters and functions ("specs" = specifications) for the Hidden layer. Near the top, on the lay_inhib line, is a gi parameter that should currently have the value of 1.8. Enter 1.5 instead, and hit the Apply button at the bottom of the dialog. This has just reduced the global inhibitory conductance for this layer somewhat.

To see the effects of this parameter change, re-Init and Run the LeabraTrain program again. You should see a significant improvement in learning performance, manifested as a steeper initial slope and a faster approach to 0.0 avg_sse.
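For intuition about why this helps, here is a toy Python sketch of the role gi plays in a conductance-based membrane update. The constants and the update rule are schematic stand-ins, not emergent's actual Leabra equations: the point is simply that gi multiplies the inhibitory conductance, so lowering it from 1.8 to 1.5 shifts the balance toward excitation and lets hidden units activate (and hence learn) more easily in a small network.

    # Toy conductance-based update (schematic only, not Leabra's code).
    E_E, E_L, E_I = 1.0, 0.3, 0.25  # toy reversal potentials: excite, leak, inhib
    G_L = 0.1                       # toy leak conductance

    def vm_update(v_m, g_e, g_i_raw, gi=1.8, dt=0.3):
        g_i = gi * g_i_raw          # the layer-level gi scales raw inhibition
        i_net = (g_e * (E_E - v_m) + G_L * (E_L - v_m) + g_i * (E_I - v_m))
        return v_m + dt * i_net

    # With identical excitation and raw inhibition, gi=1.5 settles at a
    # higher membrane potential than gi=1.8, so more units become active.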

Recording Network Activations for a Grid View

Another common and very useful tool is to look at the pattern of activations across trials. To do this, we need to record activation values to our trial-level output data table (which was automatically created by the wizard). One way to do this is to select the network object in the Network_1 view by clicking on the thin green frame surrounding the network, then use the right mouse button (or Ctrl+click on a Mac) to get the context menu, and select LeabraNetwork/Monitor Var. For the net_mon field, click and select trial_netmon, which is for monitoring information at the trial level (the other one is for the epoch level). For the variable field, type in act_m to record minus-phase activations (the monitor runs at the end of the trial, after the plus phase). This will record activations for all three layers in the network in one easy step.

NOTE: You could alternatively select Monitor Var on the Network_1 object in the left Navigator tree, on each of the layers individually, or on any other object in the network for that matter, and record any variable. In the left Navigator tree, the Monitor Var operation is available by right-clicking on the target object and selecting Leabra Network → Monitor Var from the context menu. See the wiki Monitor Data page for more information on using the NetMonitor object in emergent.
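What the monitor does each trial is easy to describe in code. Here is a rough Python analogue -- the dict-based rows stand in for the real TrialOutputData datatable, and network.act_m() is a hypothetical accessor used only for illustration, not an emergent API:

    trial_output_data = []  # stand-in for the TrialOutputData datatable

    def monitor_trial(trial_name, network):
        """Append one row of minus-phase activations (act_m) per trial."""
        row = {"trial_name": trial_name}
        for layer in ("Input", "Hidden", "Output"):
            # hypothetical accessor: the layer's activation vector at the
            # end of the minus phase
            row[layer + "_act_m"] = network.act_m(layer)
        trial_output_data.append(row)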

Next, we want to make a new Grid View of the resulting activation data, which will be recorded in the TrialOutputData object. With the TrialOutputData datatable selected in the left Navigator tree, do a View/New Grid View at the bottom of the middle Editor panel, and again let's add this to the Network_1 frame. Follow the same general steps as before (see "Arranging the 3D View" above) to position this new grid view in the bottom right-hand region of the view.

The grid view will not contain new information until the LeabraTrain program is Init and Run again. After doing that, you need to scroll the grid view display all the way over to the right -- there are too many columns to fit within the 5 columns that the standard grid view is configured to display. To do this, select the red arrow Visualizer tool and drag the purple bar at the bottom of the TrialOutputData grid view all the way to the right. You should see some colored squares with the Hidden and Output column headers.

To really make things clean, you can hide the column headers of the columns you don't want to display (e.g., ext_rew, minus cycles). Select the red arrow view tool, left-click to highlight the column header you wish to remove, then right-click to bring up the context menu for the column header. Select GridColView and then Hide. Ideally, you'd just want to see the trial name, sse, and the different layer activation columns.

NOTE: If you make a mistake, the Undo tool in the toolbar along the top of the main project window (Ctrl-Z/Command-Z keyboard shortcut) can be used to undo the last action. If you wish to restore all hidden columns in the grid view object, first select the red arrow view tool. Next, left-click to highlight the grid view object, and then right-click to bring up the context menu. Select GridTableView and then ShowAllCols. This will restore all hidden columns, so you can start over and hide any of the columns you do not wish to display.

As there are only 6 events in this training program, we can set the rows to display to 6 in the grid view panel, to make the display fit just right. To do so, context-click on the grid view object again, then select GridTableView/Object/View_Properties. This will bring up a new editing window where you will find a view rows parameter near the top. Change 10 to 6 and Apply. While you're there you may want to take some time to look over the considerable list of parameters you can edit to customize the look of a grid view object.

Init and Run the LeabraTrain program again to update the display.

Analyzing the Hidden Layer Representations

Now that we have some data from the network, we can perform some powerful analysis techniques on that data.

First, we can make a cluster plot of the Hidden layer activations, to see if we can better understand what is being encoded. To do this, go to the application's top-level Data menu (which contains many operations that can be performed on DataTable objects), select Cluster under the Analyze submenu, and set the following parameters (leave the rest at their defaults):

  • view = on (generate a graph of the cluster data)
  • src_data = TrialOutputData
  • data_col_nm = Hidden_act_m (specifies the column of data that we want to cluster)
  • name_col_name = trial_name (specifies the column with labels for the cluster nodes)

You should see a new graph show up, with the A, B, C, X, Y, Z labels grouped together into different clusters. In a trained network, you will most likely observe that A and X are grouped together, separate from the other items. Can you figure out why this would be the case?
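If you ever want to reproduce this analysis outside emergent, a hierarchical cluster plot of the recorded activations takes only a few lines of Python with SciPy. The data below is a random placeholder (the hidden layer size is an assumption); in practice you would substitute the Hidden_act_m rows exported from TrialOutputData:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram, linkage

    # Placeholder: replace with the exported Hidden_act_m rows (one
    # minus-phase activation vector per trial).
    rng = np.random.default_rng(0)
    hidden_acts = rng.random((6, 49))        # 6 trials x assumed 49 hidden units
    labels = ["A", "B", "C", "X", "Y", "Z"]  # the trial_name column

    # Agglomerative clustering on Euclidean distances between activation
    # patterns, similar in spirit to emergent's Cluster analysis.
    Z = linkage(hidden_acts, method="average", metric="euclidean")
    dendrogram(Z, labels=labels)
    plt.ylabel("distance")
    plt.show()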

Another way to process this high-dimensional activation pattern data is to project it down into two dimensions. Two techniques for this are principal components analysis (PCA) and multidimensional scaling (MDS). To try PCA, select PCA2dPrjn from the same Analyze submenu and fill in the same information you did for the Cluster plot. You should see the labels distributed as points in a two-dimensional space, with the X axis being the dimension (principal component) of the hidden layer activation patterns that captures the greatest amount of variance across patterns, and the Y axis being the second-strongest component. Accordingly, you should see A and X on one side of the graph, and the others on the other side. It is not clear what the vertical axis represents.
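The same exported data can be projected with a few lines of NumPy, which makes the mechanics of PCA concrete. As in the clustering example above, the random array is only a placeholder for the real Hidden_act_m data:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    hidden_acts = rng.random((6, 49))        # placeholder for Hidden_act_m rows
    labels = ["A", "B", "C", "X", "Y", "Z"]

    # PCA via SVD of the mean-centered data: the rows of vt are the
    # principal components, ordered by variance captured.
    centered = hidden_acts - hidden_acts.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    prjn = centered @ vt[:2].T               # coordinates on the first two PCs

    plt.scatter(prjn[:, 0], prjn[:, 1])
    for (x, y), name in zip(prjn, labels):
        plt.annotate(name, (x, y))
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.show()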

There are many more things one could do, but hopefully this gives a flavor. The next step, TaskProgram, is to write a program that automatically generates our input patterns for training the network -- initially we'll start out with the simple task we ran already, but then we'll progressively expand to more complex tasks.

→ next TaskProgram