Documentation for the LeabraFlex Project template, which provides a more flexible and modular way of configuring the Leabra programs. It is very useful for more complex projects that require extensive novel functionality, including interactive behaviors between the network and the programs that generate its inputs, things that happen at different points in the settling process, and support for multiple different ways of running the model in a more efficient, modular fashion.
ParamSet Configuration Mechanism
In version 8.0.6 and above, the previous ConfigTable mechanism has been replaced with a new ParamSet mechanism that is simpler, more easily configured, and more extensible.
The master ControlPanel and ClusterRun now point to the run_params and misc_params variables on the MasterTrain program. When MasterTrain runs (interactively in the gui), it Activates the currently-selected ParamSet configuration -- this copies the saved_value settings into the live, "active" current values of each item in the param set. These items include variables in other key Programs, and parameters in the network that you want to control. It is easy to add any number of different sets of such parameters -- you can either add new parameters to the existing Params groups, or you can make new groups for sets of parameters that vary mostly independently of the existing ones.
The param sets live in the param_sets group in the Project, in sub-groups called RunParams and MiscParams. Within each of these groups, the first ParamSet object serves as the Master ParamSet, while all the remaining ones are Clones. Whenever you add, remove, or modify an item within the Master, those changes are automatically copied to all the Clones. This makes it easy to maintain a consistent set of parameters across all the configurations. To make this more visually apparent, the Clones are shown with a grey background. Nevertheless, the Master and Clone ParamSet objects all function in exactly the same way, and each has its own independent set of saved_value settings. It is these saved values that distinguish each configuration. When you Activate a ParamSet, you are copying the saved_value into the current "live" active value of that given item (whether it is a variable in a Program, or a value in a Spec, or any other value on any object anywhere in the software!).
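The Master / Clone / Activate logic described above can be sketched in plain Python. This is an illustrative analogue only, assuming a much-simplified model: ParamItem, ParamSet, and TrainStart here are hypothetical stand-ins, not emergent's actual classes or API.

```python
# Minimal Python analogue of the ParamSet Activate mechanism -- all names
# here are illustrative stand-ins, not emergent's actual API.

class ParamItem:
    """One controlled value: a pointer to an object's variable plus a saved value."""
    def __init__(self, obj, attr, saved_value):
        self.obj = obj                # the Program, Spec, or other object owning the value
        self.attr = attr              # name of the variable on that object
        self.saved_value = saved_value

    def activate(self):
        # copy the saved value into the live, "active" current value
        setattr(self.obj, self.attr, self.saved_value)

class ParamSet:
    """A named configuration: its saved_values distinguish it from its siblings."""
    def __init__(self, name, items):
        self.name = name
        self.items = items

    def activate(self):
        for it in self.items:
            it.activate()

# usage: two configurations pointing at the same live variable
class TrainStart:                     # stand-in for a Program holding live variables
    epoch_trials = 0

std = ParamSet("Std", [ParamItem(TrainStart, "epoch_trials", 100)])
big = ParamSet("Big", [ParamItem(TrainStart, "epoch_trials", 1000)])

std.activate()
print(TrainStart.epoch_trials)  # 100
big.activate()
print(TrainStart.epoch_trials)  # 1000
```

The key design point this illustrates: the ParamSet stores only pointers plus saved values, so the live variables stay in the objects that use them.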
We have removed the StdGlobalsInit program, which used to serve as the way to load all the values from the ConfigTable into live, active variables that you could then use in Programs. Instead, these variables now live directly in the programs that use them, and the ParamSets just point at those variables. The TrainStart and TrainEnd programs have many of the RunParams variables. To see where a variable is, just use the context menu on a given item label in the ParamSet and select Go To Object, and you'll be taken right there. This is a major advantage over the previous DataTable mechanism, which didn't have this direct connection to the variables it controlled.
Converting Existing Projects
If you want to do it manually, here are the main steps to convert an existing LeabraFlex project over to the new ParamSet mechanism:
- Drag the MasterTrain, MasterRun, MasterStartup, and TrainStart (TODO: others???) programs from a new LeabraFlex project on top of your existing versions of these programs, in that order, selecting Assign To from the drop menu. It is a good idea to first do Compare To and make sure that you didn't have any custom code in any of these programs that you'll want to keep (this is relatively rare, however).
- Do a Find on programs (all programs) and look for config_id -- this variable was passed around in the prior versions, but is absent in the new versions -- please remove it from all existing programs that have it.
- Drag over the RunParams, NetConfig, and LearnParams groups into param_sets from the new LeabraFlex project -- these should be able to find the variables in your project once you've updated the above programs. The default LearnParams just has one token item -- you'll probably want to adapt that to your actual project, and you may have params in your ClusterRun or ControlPanel that you'd like to add. If these params are variables in StdGlobalsInit (likely), move them to TrainStart or another program, and then add each one to the RunMaster or other param_set by selecting the variable and choosing "Add Var to Control Panel" from the context menu.
- Make sure you've got everything you want from your existing ConfigTable into the RunParams and/or MiscParams Masters. Then go to the small menu in the edit panel for the RunParams group (and MiscParams if relevant) in the middle Editor Panel, select the Control Panel/CopyFromDataTable menu item, and select the ConfigTable -- this will copy over each row of the config table as a new ParamSet in that group, with saved_value's copied from the column with the same name as the variable in the ParamSet. Hence, make sure your param set variables are named the same as your ConfigTable column names. You can change the names later in your ParamSet if you want -- the Master will auto-copy to all the Clones!
- You can copy over the ClusterRun and ControlPanel (drop on the existing ones and Assign To) once you've moved all the variables to the MiscParams configs. These contain the pointers to the MasterTrain variables.
- Then you can remove the ConfigTable and the StdGlobalsInit. Try to run your project from the ControlPanel. Fix any errors!
- It is really easy to create a new ParamSet_Group within the objs of a Program (there is a new button to do it). You can then select a set of variables from the program to add to the master param set in the group, and add a call at the top of your prog_code to ActivateParamSet() on the ParamSet_Group object, passing the name from a String variable that contains the name of the ParamSet to use. With that, you've got a program that can be run with different named configurations of its variables, making it easier for other programs to configure things. You can then add this String variable to your MiscParams or other top-level param sets. When you have a lot of complex functionality embedded within a single Program (e.g., a program that generates a complex environment), this really makes it a lot easier to configure and control, while still ensuring that each program is fully self-contained!
- Please update these instructions with any other helpful tips or missing steps!
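The last step above (a self-contained program that activates one of its own named configurations via a String variable) can be sketched in plain Python. Everything here is a hypothetical stand-in: EnviroProgram, its variables, and activate_param_set are illustrative, not emergent's ParamSet_Group API.

```python
# Hedged sketch of the "program with named configurations" pattern --
# all names are illustrative stand-ins, not emergent's actual API.

class EnviroProgram:
    """A self-contained program whose variables can be set by named configs."""
    def __init__(self):
        # live, active variables used by the program's own code
        self.n_objects = 4
        self.noise = 0.0
        # analogue of the program's own ParamSet_Group: name -> saved values
        self.param_sets = {
            "Easy": {"n_objects": 2, "noise": 0.0},
            "Hard": {"n_objects": 8, "noise": 0.2},
        }
        # the String variable that other programs (or top-level param sets) set
        self.config_name = "Easy"

    def activate_param_set(self, name):
        # copy the named config's saved values into the live variables
        for var, val in self.param_sets[name].items():
            setattr(self, var, val)

    def run(self):
        # analogue of calling ActivateParamSet() at the top of prog_code
        self.activate_param_set(self.config_name)
        return (self.n_objects, self.noise)

prog = EnviroProgram()
prog.config_name = "Hard"   # an outside program selects the configuration
print(prog.run())           # (8, 0.2)
```

The program stays fully self-contained: callers only set one String variable, never the individual parameters.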
The call sequence is:
- InitProgs[init_prog] (as spec'd in the RunParams configuration -- previously the ConfigTable)
- loop over MasterRun until stop_train is set to true
- RunProgs[run_prog] (as spec'd in the RunParams configuration -- previously the ConfigTable)
- typically calls one or more TaskProgs, increments counters and does appropriate Epoch and higher-level housekeeping as necessary.
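The call sequence above can be sketched as a plain Python control loop. This is illustrative only, assuming a much-simplified state; the function names are stand-ins, not the actual Programs.

```python
# Illustrative sketch of the MasterTrain call sequence -- function and
# variable names are stand-ins for the actual Programs.

def master_train(init_prog, run_prog):
    state = {"stop_train": False, "trial": 0, "epoch": 0}
    init_prog(state)                  # InitProgs[init_prog]
    while not state["stop_train"]:    # loop over MasterRun until stop_train
        run_prog(state)               # RunProgs[run_prog]
    return state

def my_init(state):
    state["trial"] = 0

def my_run(state):
    # a RunProg typically calls one or more TaskProgs here, then increments
    # counters and does epoch-level (and higher) housekeeping
    state["trial"] += 1
    if state["trial"] % 10 == 0:      # 10 trials per epoch in this sketch
        state["epoch"] += 1
    if state["epoch"] >= 2:           # training criterion sets stop_train
        state["stop_train"] = True

final = master_train(my_init, my_run)
print(final["trial"], final["epoch"])  # 20 2
```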
MasterRun now has a stop_step_grain parameter that allows you to Step this program at different grain sizes (e.g., after an Epoch) and for different conditions -- this is completely extensible if you look at the underlying code, and very powerful.
The UtilProgs contain all the basic functionality for training a network and monitoring, etc -- the task programs should consist of calls to these util programs in the desired order to achieve whatever form of processing is required. The built-in BasicTrain program just does the equivalent of a standard LeabraTrial.
The most efficient way to use multiple distributed memory (dmem) processors (via MPI) is to split different events across processors. The standard Leabra programs do this by interleaving trials according to dmem processor (e.g., for 10 processors, proc 0 gets trials 0, 10, 20, ...; proc 1 gets trials 1, 11, 21, etc.). For these flex progs, which can have interactive behavior and ill-defined sequences of events, this strategy is not always possible (the ChoosePermutedEvent program in EnviroProgs does implement this behavior, however). Here are some tips:
- After version 8.0, you can add taMisc::dmem_proc to all calls to Random or the various Permute etc. functions that use random numbers and take a thr_no (thread number) argument -- as documented in MTRnd, this thr_no arg selects different random number sequences that are initialized from a single common seed, and will produce statistically independent sequences. Thus, you can ensure that each node gets different random inputs and other parameters, as appropriate, without needing an entirely different seed for each node. This also means that results can be fully replicable when starting from the same random seed, and that nodes can also generate the same random numbers when needed (if so, it is a good idea to pass taMisc::dmem_proc + 1 to the Random calls, and reserve 0 for the common shared sequence that should only be called identically from all processors -- otherwise the 0th sequence will diverge across processors).
- After common weight initialization etc., the TrainStart program has the option to generate a new random seed just before starting the Epoch -- see the dmem_new_seed flag. Each processor at this point will be operating with its own random seed, so any dynamic event generation code that uses random numbers (hint: it should!) will produce different things on different nodes. However, as a result, every run of the simulation will be different, even when using common initial starting seeds. See the first note for an alternative that avoids these consequences.
- It is essential that the different nodes still end up calling Compute_Weights the same number of times, after roughly the same amount of overall computation -- this is where the weight changes are synchronized across processors -- the simulation will lock up and hang if these do not align.
- The epoch_trials value gets divided by dmem_nprocs in StdGlobalsInit, such that each processor runs this reduced number of trials, with the network->trial counter going from 0 to this lower epoch_trials value -- it does not do the interleaving that is done in the hierarchical programs. This also means that trial numbers will be duplicated if you merge separate trial log data tables into a single table, which is not necessarily a problem, but you may need to do things a bit differently in analyzing the data.
- If you are using a traditional list of events in an input_data table, use ChoosePermutedEvent and it will work properly with dmem, just like the standard programs. This program maintains its own internal counter, and simply iterates in a permuted order through the events and then repeats -- it does not need to be synchronized with the trial and epoch structure of the programs (to achieve this, simply ensure that the epoch_trials is the same as the number of rows in the input data table).
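The interleaving and per-processor random stream ideas from the tips above can be sketched in plain Python. This is illustrative only: standard_interleave and proc_stream are hypothetical stand-ins (stdlib `random.Random` with a derived string seed plays the role of MTRnd's thr_no streams, which use a stronger independence guarantee).

```python
# Illustrative stand-ins for the dmem strategies described above.
import random

def standard_interleave(n_trials, nprocs, proc):
    # standard Leabra dmem strategy: proc p runs trials p, p+nprocs, p+2*nprocs, ...
    return list(range(proc, n_trials, nprocs))

def proc_stream(common_seed, thr_no):
    # stand-in for MTRnd's thr_no streams: a distinct, reproducible generator
    # per thread number, derived from one common seed; thr_no 0 is reserved
    # for the shared stream that all processors must call identically
    return random.Random(f"{common_seed}:{thr_no}")

dmem_proc = 1                            # this node's rank
rng = proc_stream(42, dmem_proc + 1)     # like passing taMisc::dmem_proc + 1
shared = proc_stream(42, 0)              # identical on every node

print(standard_interleave(20, 10, 0))    # [0, 10]
print(standard_interleave(20, 10, 1))    # [1, 11]
```

Note that with per-processor streams, each node still runs epoch_trials / dmem_nprocs trials, so Compute_Weights is called the same number of times on every node, as required for the weight synchronization to stay aligned.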