From Computational Cognitive Neuroscience Wiki


This page summarizes some of the major differences between this book and the previous hard copy version, Computational Explorations in Cognitive Neuroscience (CECN), for those who are familiar with it.


Overall, the new wiki version is much shorter (at least in the main contents of each chapter) and more focused on capturing the central issues, without as much distracting detail. Many of these details are still available in subtopics linked from each chapter.

We have also reorganized the material: Learning Mechanisms are now covered in a single chapter, instead of 3 separate chapters as in the prior edition. As a result, the treatment is much more focused, and considerable savings were achieved through the new form of learning that is described (see below). In general, the Part I Basic Mechanisms chapters can now be covered in roughly 3 weeks at the start of the course, leaving more time to cover the cognitive neuroscience content fully.

There is a new chapter on Motor Control and Reinforcement Learning, which covers the reinforcement learning mechanisms based on the biology of dopamine, along with other motor control systems including the basal ganglia and cerebellum. The Learning and Memory chapter now focuses much more on episodic memory and the hippocampus, leaving the activation-based memory systems in the prefrontal cortex to be covered in detail in the Executive Function chapter. Overall, these changes give each chapter a more coherent flow and a more focused domain of material, making everything much more accessible.


There are two major improvements to the basic algorithms employed in the models, described in the first 3 chapters: one concerns the activation function described in the Neuron chapter, and the other the learning algorithm described in the Learning Mechanisms chapter.

The discrete spiking activation function is now equivalent to the adaptive exponential (AdEx) model developed by Gerstner and colleagues, which provides a very close fit to cortical neuron spiking behavior. The rate code model fixes a major "bug" in the previous model by driving the rate code directly from the excitatory net input, instead of from the membrane potential (Vm): the net input has a monotonic relationship with the spiking rate of AdEx neurons, whereas the asymptotic Vm does not, so actual spiking rates cannot be correctly simulated from that variable.
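To make the monotonic input-to-rate relationship concrete, here is a minimal sketch of the standard AdEx model (Brette & Gerstner style), simulated with Euler integration. The parameter names and values are illustrative assumptions for a generic cortical-like neuron, not the settings used in the book's simulations; the point is simply that firing rate rises monotonically with constant input current.

```python
# Minimal AdEx (adaptive exponential integrate-and-fire) sketch.
# Parameters are assumed, generic cortical-like values for illustration.
import math

def simulate_adex(I, T=1.0, dt=1e-4):
    """Simulate an AdEx neuron for T seconds with constant input current I (A).
    Returns the number of spikes emitted."""
    C = 281e-12      # membrane capacitance (F)
    g_L = 30e-9      # leak conductance (S)
    E_L = -70.6e-3   # resting / leak reversal potential (V)
    V_T = -50.4e-3   # spike-initiation threshold (V)
    D_T = 2e-3       # slope factor of the exponential term (V)
    tau_w = 144e-3   # adaptation time constant (s)
    a = 4e-9         # subthreshold adaptation conductance (S)
    b = 80.5e-12     # spike-triggered adaptation increment (A)
    V_r = -70.6e-3   # reset potential after a spike (V)
    V_peak = 20e-3   # numerical spike cutoff (V)

    V, w = E_L, 0.0
    spikes = 0
    for _ in range(int(T / dt)):
        dV = (-g_L * (V - E_L) + g_L * D_T * math.exp((V - V_T) / D_T)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:   # spike: reset membrane, bump adaptation
            V = V_r
            w += b
            spikes += 1
    return spikes

# Firing rate is a monotonic function of the input current:
rates = [simulate_adex(I) for I in (0.5e-9, 1.0e-9, 1.5e-9)]
```

Driving the rate code from this input-to-rate curve (rather than from the asymptotic Vm, which saturates non-monotonically) is what fixes the "bug" described above.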

The learning mechanism is now based on the XCAL algorithm (temporally eXtended Contrastive Attractor Learning), which was derived directly from a detailed model of spike-timing dependent plasticity, and thus has a much firmer basis in biology at a detailed level. This algorithm retains the essential combination of self-organizing and error-driven learning mechanisms, but achieves it in a much more elegant, unified manner. The self-organizing learning is now very similar to the Bienenstock, Cooper & Munro (1982) (BCM) algorithm, and by itself it learns much more successfully than the prior CPCA Hebbian learning mechanism. For example, it can learn the family trees problem to roughly 50% accuracy, compared to essentially 0% for CPCA. For a direct comparison, see Leabra Details.
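For readers unfamiliar with BCM, the sketch below shows the textbook BCM rule that XCAL's self-organizing component resembles: weight change is proportional to presynaptic activity times postsynaptic activity relative to a floating threshold that tracks the running average of squared postsynaptic activity. This is a generic illustration under assumed parameters, not the actual XCAL implementation (see Leabra Details for that).

```python
# Textbook BCM rule sketch: LTP when y exceeds the floating threshold,
# LTD when it falls below. All parameter values are assumed for illustration.
import random

def bcm_update(w, x, y, theta, lr=0.01):
    """One BCM weight update for a single linear unit."""
    return [wi + lr * xi * y * (y - theta) for wi, xi in zip(w, x)]

random.seed(0)
n = 8
w = [random.random() for _ in range(n)]   # random initial weights
theta = 0.0                               # floating modification threshold
theta_tau = 0.1                           # running-average rate for theta

for step in range(200):
    # random binary input pattern
    x = [1.0 if random.random() < 0.5 else 0.0 for _ in range(n)]
    y = sum(wi * xi for wi, xi in zip(w, x)) / n   # linear unit activity
    w = bcm_update(w, x, y, theta)
    theta += theta_tau * (y * y - theta)           # theta tracks <y^2>
```

The floating threshold is what makes BCM-style learning self-stabilizing: units that fire a lot raise their own threshold, making further potentiation harder, so the rule selects for inputs that reliably drive the unit above its own average.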


We are attempting to make the material much more accessible to people with limited mathematical backgrounds. The math is now more clearly separated from the key concepts, which are presented in an intuitive, graphical manner that should be comprehensible to everyone. We plan to add more elaborate pedagogical treatments of the math in subtopic pages, which would allow people who understand the concepts to really understand the math as well.

Simulation Exercises

The simulation exercises are no longer integrated directly into the main flow of the text, but are just a quick link away, with all documentation provided directly on the wiki in much richer formatting that makes it clearer what you need to do. This also makes it much easier to integrate fixes to the docs. We also simplified the questions, so that students should have a much clearer idea of exactly what is expected of them, and grading should be much easier. There are no longer multi-part questions, so the number of points available per chapter is more evident, as is the workload, which is now much more balanced (and reduced in those chapters that used to be quite long and tedious for students). See Pedagogy for more discussion.