The network is the main object for a neural network simulation, containing Layers of Units which are connected through Projections. It also contains all the Specs used by these objects, which specify the parameters and equations used.
- See NetworkTips for detailed tips for creating and configuring a network.
- See Network View control panel for details on controlling the 3D graphical display of a network view.
- See InputData for information on how input patterns from a DataTable are applied to the network.
In addition to encapsulating and containing the structure of the network, the Network object also encapsulates all the functions of a network (note this is different from PDP++, where the Process objects did a lot of that). Thus, every form of processing that the network can support should have a corresponding function defined on the Network object. This includes commonly used statistics and other types of processing that are not strictly the network algorithm per se, but are so commonly used that it makes sense to support them directly on the network.
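The containment and design idea described above can be illustrated with a small sketch. This is a hypothetical mini-model written for this documentation, not Emergent's actual C++ API: the class and method names here are invented, and only the structure (a Network containing Layers of Units, Projections, and Specs, with processing functions such as common statistics defined directly on the Network object) reflects the text.

```python
# Hypothetical mini-model of the containment described above -- not
# Emergent's actual API, just an illustration of the design idea that
# processing functions (including common statistics) live on the Network.

class Unit:
    def __init__(self):
        self.act = 0.0   # activation value
        self.targ = 0.0  # target value (for error statistics)

class Layer:
    def __init__(self, name, n_units):
        self.name = name
        self.units = [Unit() for _ in range(n_units)]

class Projection:
    def __init__(self, from_layer, to_layer):
        self.from_layer = from_layer
        self.to_layer = to_layer

class Network:
    def __init__(self):
        self.layers = []       # Layers of Units
        self.projections = []  # Projections connecting Layers
        self.specs = []        # Specs: shared parameters and equations

    def sse(self, output_layer):
        """A commonly used statistic defined directly on the network:
        sum-squared error over an output layer's units."""
        return sum((u.targ - u.act) ** 2 for u in output_layer.units)
```

For example, a network with a 1-unit output layer whose unit has activation 0.5 and target 1.0 would report `sse` of 0.25 via a method call on the network itself, rather than on a separate Process object as in PDP++.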
Specific Network Functionality
There are many different pages detailing specific aspects of overall network computation:
- Lesion -- different ways of reversibly damaging the network at different levels.
- State Separation -- new in Version 8.5, separates the computational State from the overall Network configuration, which now only goes down to the Layer and Projection level, no longer to individual Units.
- Thread Optimization -- released in Version 8.0, optimized memory layout for multi-threaded computation. State separation in 8.5 takes this approach all the way.
- Distributed Memory -- computation distributed across multiple nodes, interconnected via MPI interface over fast network interconnects (InfiniBand).
- Vectorizing -- using vector math operations (SIMD, e.g., MMX) on normal CPUs.
- GPU -- graphics processing unit computation, e.g., NVIDIA CUDA -- dramatically improved with State Separation.
Weight File Format
Emergent uses an XML-style weight file format, which loads much more robustly after the network structure has changed than the PDP++ format did.
How to Save and Load Weights
In the Tree View, click on Networks, then on your network. Its properties appear in the Panel View on a pinkish background. Three menus sit immediately above them: Object, ControlPanels, and Actions. The Object menu contains Load Weights and Save Weights commands near the bottom. Alternatively, you can run the SaveWeights program from the LeabraAll_Std subgroup of the Programs group in the Tree View.
Note: Layers are looked up by name, so if you change a layer's name, existing weight files will no longer load correctly. Because weight files are plain text, you can simply search for the old layer name in the file and change it there as well. Or, better yet, load the weights while the layer still has its old name, then rename the layer and save the weights again.
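Because weight files are plain text, the rename can also be scripted. A minimal sketch in Python follows; the file name `weights.wts` and the layer names are hypothetical placeholders, and the `<Lay ...>` tag spelling should be verified against your own file before editing:

```python
# Sketch: update a renamed layer inside a saved Emergent weight file.
# "weights.wts" and the layer names below are placeholders for illustration.
from pathlib import Path

def rename_layer_in_weight_file(path, old_name, new_name):
    """Replace the layer-name tag so the file matches the renamed layer."""
    text = Path(path).read_text()
    # Layers are looked up by name via the <Lay ...> tag.
    text = text.replace(f"<Lay {old_name}>", f"<Lay {new_name}>")
    Path(path).write_text(text)

# Example: after renaming layer Hidden_0 to HiddenA in the network,
# update the saved weight file to match:
# rename_layer_in_weight_file("weights.wts", "Hidden_0", "HiddenA")
```

As the note above recommends, re-saving the weights after renaming the layer avoids editing the file at all; the script is only needed for weight files saved before the rename.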
Example of an Emergent Weight File
Below is a schematic of the initial weights from a 4-layer feed-forward back-propagation network. The input layer has 4 units, the first hidden layer has 3 units, the second hidden layer has 2 units, and the output layer has 1 unit. The units are numbered in bold blue text. There is a bias weight value for each unit, shown in the box to the left of the unit number, and there is a weight associated with each connection, shown in a list to the left of the units. For example, to the left of unit hidden_0_0 there is a bulleted list of four numbers labeled with their array positions 0-3. The first value in this list, "-0.2027", is the weight for the connection between input unit 0 (input_0) and hidden unit 0_0 (hidden_0_0). The second value, "0.0484", is the weight for the connection between input_1 and hidden_0_0.
Figure 1: Graphic Representation of Information Stored in an Emergent Weight File
- Note: The network weights and bias weight values in emergent are single-precision real numbers; for the sake of space, the values have been rounded to 4 decimal places for display in this section of the documentation.
Figure 2 below shows a section of the actual weight file from which the information in the schematic was taken. The network weight and bias weight entries for layer hidden_0, unit 0 (hidden_0_0) are shown. The data from the weight file appears on the left in bold black text; annotations have been added to label the XML tags.
Figure 2: Weight and Activation data for unit Hidden_0_0
The section begins with the tag for the layer, in this case Hidden_0. The entry for unit hidden_0_0 begins with the "<UgUn 0 >" XML tag, denoting the unit at array position "0". The first entry under <UgUn 0 > gives the bias weight value for unit hidden_0_0 and is labeled with the XML tag <Un>; here the bias weight for hidden_0_0 is "-0.3014". The bias weight is followed by the weights for the connections between the layer "Input" and unit "hidden_0_0". The notation for this is <Cg 0 Fm:Input>, meaning "connections to 'Unit_0' from layer 'Input'". The next tag, <Cn 4>, indicates that there are 4 connections between the layer "Input" and unit "hidden_0_0", one for each unit in the layer "Input". The <Cn> tag has 4 elements, labeled 0-3. The value "-0.2027" next to the label "0" is the weight for the connection between input_0 and hidden_0_0, and the value "0.0484" next to the label "1" is the weight for the connection between input_1 and hidden_0_0. The "unit group/unit" block concludes with the XML stop tags.
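Putting these pieces together, the hidden_0_0 entry looks roughly like the following fragment. This is a reconstruction from the values quoted above, not a verbatim copy of the file: the indentation, closing tags, and bracketed annotations are illustrative, and the weights at array positions 2 and 3 are not quoted in this section, so they are left as placeholders:

```xml
<Lay Hidden_0>          <!-- layer Hidden_0 -->
  <UgUn 0 >             <!-- unit at array position 0: hidden_0_0 -->
    <Un>-0.3014</Un>    <!-- bias weight for hidden_0_0 -->
    <Cg 0 Fm:Input>     <!-- connections to unit 0 from layer Input -->
      <Cn 4>            <!-- 4 connections, one per Input unit -->
        0 -0.2027       <!-- weight from input_0 -->
        1 0.0484        <!-- weight from input_1 -->
        2 ...
        3 ...
      </Cn>
    </Cg>
  </UgUn>
</Lay>
```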
The entire weight file, from which the information in the figures above was extracted, is available by following the link below. The data for unit hidden_0_0, given in Figure 2, begins on the 28th line of the weight file and starts with the XML tag <Lay Hidden_0>.