A neural model of task compositionality with natural language instructions

  • Date: Jun 20, 2022
  • Time: 02:00 PM - 03:00 PM
  • Speaker: Alexandre Pouget
  • University of Geneva, Dept des Neurosciences Fondamentales, Switzerland
  • Location: Zoom
  • Host: Peter Dayan (Philipp Schwartenbeck & Sebastian Bruijns)

We present neural models of one of humans’ most astonishing cognitive feats: the ability to interpret linguistic instructions in order to perform novel tasks after just a few practice trials. We trained recurrent neural networks on a set of commonly studied psychophysical tasks; the networks receive linguistic instructions embedded by transformer architectures pre-trained on natural language processing. Based solely on linguistic instructions (i.e., zero-shot learning), our best-performing models perform an unseen task at 80% correct on average. We found that the resulting neural representations capture the semantic structure of interrelated tasks even for novel tasks, allowing for the composition of practiced skills in unseen settings. Finally, we demonstrate how this model can generate a linguistic description of a task it has identified using motor feedback; when this description is communicated to another network, that network achieves near-perfect performance (95%). To our knowledge, this is the first experimentally testable model of how language can structure sensorimotor representations to allow for task compositionality.
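As a rough illustration of the architecture the abstract describes, the sketch below conditions a recurrent network on a fixed instruction embedding at every time step. This is not the speaker's implementation: the dimensions, the weights, and the `embed_instruction` stand-in (which would be a pre-trained transformer sentence embedding in the actual work) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the talk)
EMB_DIM = 8    # stand-in for a pretrained language model's sentence embedding
IN_DIM = 4     # sensory input per time step
HID_DIM = 16
OUT_DIM = 2    # e.g. two response alternatives

# Random weights stand in for trained parameters.
W_in = rng.normal(0, 0.1, (HID_DIM, IN_DIM + EMB_DIM))
W_rec = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
W_out = rng.normal(0, 0.1, (OUT_DIM, HID_DIM))

def embed_instruction(text: str) -> np.ndarray:
    """Hypothetical stand-in for a pre-trained transformer embedding:
    a deterministic pseudo-random vector per instruction string."""
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.normal(0, 1, EMB_DIM)

def run_trial(instruction: str, stimuli: np.ndarray) -> np.ndarray:
    """Run the RNN over a stimulus sequence (shape (T, IN_DIM)),
    conditioning every step on the fixed instruction embedding."""
    emb = embed_instruction(instruction)
    h = np.zeros(HID_DIM)
    for x in stimuli:
        h = np.tanh(W_in @ np.concatenate([x, emb]) + W_rec @ h)
    return W_out @ h  # readout from the final hidden state

stimuli = rng.normal(0, 1, (10, IN_DIM))
out = run_trial("respond in the direction of the stronger stimulus", stimuli)
print(out.shape)
```

The key design point, as described in the abstract, is that the instruction enters the network as a language-model embedding rather than a task-index one-hot, which is what allows a shared semantic space across tasks and hence zero-shot generalization to unseen instructions.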
