Reinforcement learning (RL) algorithms provide powerful explanations for simple learning and decision-making behaviors and the functions of their underlying neural substrates. Unfortunately, in real-world situations that involve many stimuli and actions, these algorithms learn pitifully slowly, exposing their inferiority in comparison to animal and human learning. Here we suggest that one reason for this discrepancy is that humans and animals take advantage of structure that is inherent in real-world tasks to simplify the learning problem. We survey an emerging literature on 'structure learning' (using experience to infer the structure of a task) and how this can be of service to RL, with an emphasis on structure in perception and action.
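To make the scaling problem concrete, here is a minimal illustrative sketch (our own toy example, not from the paper): a flat tabular learner must estimate one value per state-action pair, so its table grows exponentially with the number of stimulus dimensions, whereas a learner that exploits a (hypothetical) factored task structure, in which dimensions contribute independently, needs only linearly many values.

```python
# Toy illustration of why tabular RL scales poorly with many stimuli.
# Assumes n binary stimulus dimensions and k available actions.

def table_size(n_binary_features: int, n_actions: int) -> int:
    """Q-values a flat tabular learner must estimate: one per (state, action)."""
    return (2 ** n_binary_features) * n_actions

def factored_size(n_binary_features: int, n_actions: int) -> int:
    """If task structure makes dimensions independent, a structured learner
    needs only one value per (feature value, action) pair."""
    return 2 * n_binary_features * n_actions

for n in (4, 10, 20):
    print(f"{n} features: tabular={table_size(n, 4)}, factored={factored_size(n, 4)}")
```

With 20 binary features and 4 actions, the flat table already exceeds four million entries, while the factored representation needs 160; this gap, not any flaw in the RL update rule itself, is what makes unstructured learning slow.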
Starting in 2022, the lab has decided to post here only the archival, open-access version of our publications. This is part of the movement to emphasize quality and content over the impact factor or prestige of the journal in which a paper is published. Full citations (for referencing papers in your own work) can be found on PubMed and/or within the archival version, which will be updated once a paper is accepted for publication after peer review.