SunOReillyBhattacharyyaEtAl15
Abstract 

People generally fail to produce random sequences by overusing alternating patterns and avoiding repeating ones—the gambler’s fallacy bias. We can explain the neural basis of this bias in terms of a biologically motivated neural model that learns from errors in predicting what will happen next. Through mere exposure to random sequences over time, the model naturally develops a representation that is biased toward alternation, because of its sensitivity to some surprisingly rich statistical structure that emerges in these random sequences. Furthermore, the model directly produces the best-fitting bias-gain parameter for an existing Bayesian model, by which we obtain an accurate fit to the human data in random sequence production. These results show that our seemingly irrational, biased view of randomness can be understood instead as the perfectly reasonable response of an effective learning mechanism to subtle statistical structure embedded in random sequences.
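One well-known example of the statistical structure latent in random sequences (plausibly related to what the model picks up) is that patterns are not equally "available" in finite samples: in fair coin flips, the expected waiting time until a repetition (HH) first appears is 6 flips, whereas for an alternation (HT) it is only 4. The sketch below is not from the paper; it is a minimal simulation of this standard result:

```python
import random

def waiting_time(pattern):
    """Flip a fair coin until `pattern` first appears; return number of flips."""
    seq = ""
    while not seq.endswith(pattern):
        seq += random.choice("HT")
    return len(seq)

random.seed(0)
n = 100_000
mean_hh = sum(waiting_time("HH") for _ in range(n)) / n  # theory: 6
mean_ht = sum(waiting_time("HT") for _ in range(n)) / n  # theory: 4
print(f"mean wait for HH: {mean_hh:.2f}")
print(f"mean wait for HT: {mean_ht:.2f}")
```

The asymmetry arises because after a failed attempt at HH (i.e., seeing HT), the partial progress is wasted, while a failed attempt at HT (seeing HH) still leaves a usable leading H. A learner tracking such statistics in limited windows of experience would thus encounter alternations sooner than repetitions, even in perfectly random input.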

* WikiCite / Zotero Entry

SunOReillyBhattacharyyaEtAl15 Sun, Y., O'Reilly, R.C., Bhattacharyya, R., Smith, J.W., Liu, X., and Wang, H. (2015). Latent structure in random sequences drives neural learning toward a rational bias. Proceedings of the National Academy of Sciences (USA). SunOReillyBhattacharyyaEtAl15.pdf (Web)