OReillyWyatteRohrlich17

From Computational Cognitive Neuroscience Wiki


* WikiCite / Zotero Entry
  • Title: Deep Predictive Learning: A Comprehensive Model of Three Visual Streams
  • Author(s): O'Reilly, Randall C. and Wyatte, Dean R. and Rohrlich, John
  • Journal: arXiv:1709.04654 [q-bio]
  • Date: 2017-09-14
  • URL: https://arxiv.org/abs/1709.04654



OReillyWyatteRohrlich17 O'Reilly, R.C., Wyatte, D., & Rohrlich, J. (2017/submitted). Deep Predictive Learning: A Comprehensive Model of Three Visual Streams. Preprint available at: https://arxiv.org/abs/1709.04654 (PDF: OReillyWyatteRohrlich17.pdf)

Abstract:

How does the neocortex learn and develop the foundations of all our high-level cognitive abilities? We present a comprehensive framework spanning biological, computational, and cognitive levels, with a clear theoretical continuity between levels, providing a coherent answer directly supported by extensive data at each level. Learning is based on making predictions about what the senses will report at 100 msec (alpha frequency) intervals, and adapting synaptic weights to improve prediction accuracy. The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep-layer 6 corticothalamic inputs from multiple brain areas and levels of abstraction. The sparse driving inputs from layer 5 intrinsic bursting neurons provide the target signal, and the temporal difference between it and the prediction reverberates throughout the cortex, driving synaptic changes that approximate error backpropagation, using only local activation signals in equations derived directly from a detailed biophysical model. In vision, predictive learning requires a carefully-organized developmental progression and anatomical organization of three pathways (What, Where, and What * Where), according to two central principles: top-down input from compact, high-level, abstract representations is essential for accurate prediction of low-level sensory inputs; and the collective, low-level prediction error must be progressively and opportunistically partitioned to enable extraction of separable factors that drive the learning of further high-level abstractions. Our model self-organized systematic invariant object representations of 100 different objects from simple movies, accounts for a wide range of data, and makes many testable predictions.
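To make the core learning mechanism in the abstract concrete, below is a minimal sketch (not the authors' biophysically derived implementation): deep-layer cortical activations generate a prediction on a pulvinar-like layer, the subsequent driving input serves as the target, and the temporal difference between them drives a purely local delta-rule weight change. All names, sizes, and the learning rate are illustrative assumptions; the actual model derives its equations from a detailed biophysical simulation.

```python
# Hedged sketch of pulvinar-style predictive learning (illustrative only).
# One prediction/target cycle here stands in for one ~100 ms alpha-frequency
# interval; the real model's equations are derived from biophysics, not this
# simple delta rule.
import random

random.seed(0)

N_HIDDEN, N_PULVINAR = 4, 3  # hypothetical layer sizes
# Weights from deep-layer cortical units onto the pulvinar "projection screen"
W = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)]
     for _ in range(N_PULVINAR)]
LRATE = 0.2  # assumed learning rate

def predict(hidden):
    """Prediction phase: layer-6 corticothalamic inputs generate a
    prediction over the pulvinar units."""
    return [sum(w * h for w, h in zip(row, hidden)) for row in W]

def learn(hidden, target):
    """Outcome phase: sparse layer-5IB driving input provides the target.
    The temporal difference (target - prediction) is a local error signal;
    the resulting weight change needs only locally available activations,
    approximating error backpropagation without a backprop chain."""
    pred = predict(hidden)
    for i in range(N_PULVINAR):
        err = target[i] - pred[i]              # local prediction error
        for j in range(N_HIDDEN):
            W[i][j] += LRATE * err * hidden[j]  # delta-rule update
    return sum((t - p) ** 2 for t, p in zip(target, pred))

hidden = [1.0, 0.0, 0.5, 0.25]  # cortical activation (held fixed here)
target = [0.8, 0.2, 0.5]        # next sensory driving input (assumed)

errs = [learn(hidden, target) for _ in range(50)]  # one update per cycle
assert errs[-1] < errs[0]  # prediction error shrinks as weights adapt
```

In the full model this error signal reverberates through many cortical areas and levels of abstraction rather than a single weight matrix, but the local character of the update, driven by the difference between prediction and subsequent driving input, is the same principle.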