AI (GPT-3) & the human brain

This article features some quick thoughts and notes only…

The brain has been understood for decades
In 2005, Ray Kurzweil wrote: ‘There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates… The human brain is a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling.’

Along came transformers
In 2019, transformer-based models like GPT-2 were studied and compared with activity in the human brain. These models were found to use similar processing to arrive at the same outputs.
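The shared output in question is the upcoming word. A toy sketch of that objective (my own illustration, not GPT-2's architecture — a real transformer learns this over billions of tokens, while this uses simple bigram counts on an invented corpus):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# (invented) corpus, then predict the most frequent successor.
# GPT-2 optimises the same objective -- predict the upcoming token --
# but with a transformer over a vast corpus.
corpus = "the brain predicts the next word and the model predicts the next word".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "next" ("the next" occurs twice, beating "the brain" / "the model")
```

The gap between this counter and GPT-2 is one of scale and context: the transformer conditions on the whole preceding passage rather than a single word, which is what lets its internal states resemble brain responses to the same text.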

Specific models accurately predict human brain activity… with up to 100% predictivity… transformers such as BERT, predict large portions of the data. The model that predicts the human data best across datasets is GPT2-xl [this paper was written before GPT-3 was released], which predicts [test datasets] at close to 100%… These scores are higher in the language network than other parts of the brain.

[Language model] architecture alone, with random weights, can yield representations that match human brain data well. If we construe model training as analogous to learning in human development, then human cortex might already provide a sufficiently rich structure that allows for the rapid acquisition of language. Perhaps most of development is then a combination of the system wiring up and learning the right decoders on top of largely structurally defined features. In that analogy, community development of new architectures could be akin to evolution, or perhaps, more accurately, selective breeding with genetic modification.

Neural predictivity correlates across datasets spanning recording modalities (fMRI, ECoG, reading times) and diverse materials presented visually and auditorily…

An intriguing possibility is therefore that both the human language system and the ANN models of language are optimized to predict upcoming words in the service of efficient meaning extraction.
— Schrimpf et al. (2020).
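The "predictivity" scores quoted above come from fitting a linear map from a model's internal activations to recorded brain responses, then correlating predictions with held-out recordings. A minimal synthetic sketch of that idea (not the paper's actual pipeline — the data, sizes, and noise level here are invented):

```python
import numpy as np

# Sketch of "neural predictivity": regress (synthetic) brain responses
# onto (synthetic) model activations, then score held-out stimuli by
# Pearson correlation between predicted and observed responses.
rng = np.random.default_rng(0)

n_stimuli, n_features, n_voxels = 200, 50, 10
activations = rng.normal(size=(n_stimuli, n_features))   # stand-in for a model layer's outputs
true_map = rng.normal(size=(n_features, n_voxels))
# Brain responses = linear function of activations plus a little noise
brain = activations @ true_map + 0.1 * rng.normal(size=(n_stimuli, n_voxels))

train, test = slice(0, 150), slice(150, 200)
# Ordinary least squares fit: activations -> brain responses
weights, *_ = np.linalg.lstsq(activations[train], brain[train], rcond=None)
predicted = activations[test] @ weights

# Predictivity: mean Pearson r across voxels on held-out stimuli
r = [np.corrcoef(predicted[:, v], brain[test, v])[0, 1] for v in range(n_voxels)]
print(f"mean predictivity r = {np.mean(r):.2f}")
```

Because the synthetic responses really are a (noisy) linear function of the activations, predictivity here lands near 1.0 — the "close to 100%" figure for GPT-2 xl means real brain recordings behaved almost this lawfully with respect to the model's representations.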

While quantity has a quality all its own, it is time to focus on ensuring that our highest good is being selected and advanced at all times. This begins with ensuring data quality, guided by the summum bonum (our ultimate good), in the datasets used to train AI language models.

Dr Alan D. Thompson is an Australian AI expert and consultant. He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. He is open to major AI projects with intergovernmental organisations and impactful companies. Contact.

This page last updated: 13/Jul/2021.