One candidate approach to creating artificial general intelligence (AGI) is to imitate the essential computations of human cognition. This process is sometimes called ‘reverse-engineering the brain,’ and the end product is called ‘neuromorphic.’ We argue that, unlike with other approaches to AGI, anthropomorphic reasoning about behaviour and safety concerns is appropriate and crucial in a neuromorphic context. Using such reasoning, we offer some initial ideas for making neuromorphic AGI safer. In particular, we explore how basic drives that promote social interaction may be essential to the development of cognitive capabilities and may also serve as a focal point for human-friendly outcomes.
WikiCite / Zotero Entry
JilkHerdReadEtAl17: Jilk, D. J., Herd, S. J., Read, S. J., & O’Reilly, R. C. (2017). Anthropomorphic reasoning about neuromorphic AGI safety. Journal of Experimental & Theoretical Artificial Intelligence, 0(0), 1–15. JilkHerdReadEtAl17.pdf (Web)