Network Algorithm Guide
The system supports the following neural network algorithms -- follow the links for more details. To use a given algorithm, you typically create a project of the corresponding type (e.g., a BpProject for Backprop), which sets all the appropriate defaults (there are no more .def defaults files as in PDP++). However, you can easily mix and match algorithms within the same project -- all of the relevant default information is contained in the type-specific Network object (e.g., BpNetwork).
- Backpropagation (Bp) -- feedforward and recurrent networks that learn from backpropagated error signals.
- Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm) -- combines Bp-like error-driven learning with self-organizing learning, plus inhibitory competition and constraint-satisfaction processing, in a biologically plausible manner. It is used to simulate many different cognitive neuroscience phenomena. See Leabra Params for hints on parameter setting.
- Constraint Satisfaction (Cs) -- symmetric, bidirectionally connected networks that settle into a state that maximally satisfies the various internal and external constraints (e.g., Boltzmann machines, Hopfield networks).
- Self Organizing (So) -- networks that learn without an explicit teacher, based on Hebbian learning; includes Kohonen networks and various flavors of competitive learning.
- ACT-R -- the combined symbolic and subsymbolic cognitive modeling environment, now available directly in emergent, written natively in C++ (it does not require Lisp or loading the original ACT-R code). This is a work in progress.