We study how people and animals learn from trial and error (and from rewards and punishments) to make decisions, combining computational, neural, and behavioral perspectives. We focus on understanding how subjects cope with computationally demanding decision situations, notably choice under uncertainty and in tasks (such as mazes or chess) requiring many decisions to be made sequentially. In engineering, these are the key problems motivating reinforcement learning and Bayesian decision theory. We are particularly interested in using these computational frameworks as a basis for analyzing and understanding biological decision making. Some ongoing projects include:
Computational models in neuroscientific experiments:
Computational models (such as reinforcement learning algorithms) are more than cartoons: they can provide exquisitely detailed trial-by-trial hypotheses about how subjects might approach tasks such as decision making. By fitting such models to behavioral and neural data, and comparing different candidates, we can understand in detail the processes underlying subjects’ choices. Such models can also quantitatively characterize hitherto subjective phenomena (such as the anticipation of reward or punishment), allowing the principled study of their neural representations. Methodologically, we are interested in developing experimental designs and analytical techniques for such issues as how to use models to pool heterogeneous data sources (such as simultaneously obtained choice behavior, eye tracking, and BOLD signals from multiple brain areas). Practically, we apply these methods in behavioral and functional imaging experiments to study human decision making, including some of the issues discussed below.
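To make the fitting procedure concrete, here is a minimal sketch, assuming a simple two-armed bandit task: a Q-learning model with a softmax choice rule is fit to trial-by-trial choice and reward data by maximum likelihood. The task structure, parameter names (alpha for the learning rate, beta for the inverse temperature), and simulated data are illustrative assumptions, not a particular experiment from the lab.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, choices, rewards, n_options=2):
        """Negative log-likelihood of the choices under Q-learning + softmax."""
        alpha, beta = params
        q = np.zeros(n_options)                           # initial value estimates
        nll = 0.0
        for c, r in zip(choices, rewards):
            logits = beta * q
            log_p = logits - np.logaddexp.reduce(logits)  # log-softmax
            nll -= log_p[c]                               # credit the observed choice
            q[c] += alpha * (r - q[c])                    # prediction-error update
        return nll

    def fit(choices, rewards):
        """Maximum-likelihood estimates of (alpha, beta), with simple bounds."""
        res = minimize(neg_log_likelihood, x0=[0.5, 1.0],
                       args=(choices, rewards),
                       bounds=[(1e-3, 1.0), (1e-3, 20.0)])
        return res.x, res.fun

    # Simulate data from an agent with known parameters, then check recovery.
    rng = np.random.default_rng(0)
    true_alpha, true_beta, p_reward = 0.3, 5.0, (0.8, 0.2)
    q, choices, rewards = np.zeros(2), [], []
    for _ in range(500):
        p = np.exp(true_beta * q - np.logaddexp.reduce(true_beta * q))
        c = int(rng.choice(2, p=p))
        r = float(rng.random() < p_reward[c])
        q[c] += true_alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)

    (alpha_hat, beta_hat), nll = fit(choices, rewards)
    print(f"recovered alpha={alpha_hat:.2f}, beta={beta_hat:.2f} (nll={nll:.1f})")

Competing learning rules can then be compared on the same choice data via held-out likelihood or criteria such as BIC, which is the model-comparison step described above.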
Interactions between multiple decision-making systems:
The idea that the brain contains multiple, separate decision systems is as ubiquitous (in psychology, neuroscience, and even economics) as it is bizarre. For instance, much evidence points to competition between a reflective or cognitive planning system centered in prefrontal cortex, and a more stimulus-bound ‘habitual’ controller associated with dopamine and the basal ganglia. Such competition has often been implicated in self-control issues such as dieting or drug addiction. But (as these examples suggest) having multiple solutions to the problem of making decisions actually compounds the decision problem, by requiring the brain to choose between the systems. The computational underpinnings and neural substrates for this sort of arbitration are poorly understood. I have pursued computational models of multiple decision-making systems and their interactions; armed with such a detailed characterization, we are beginning to search for the fingerprints of these interactions in human functional imaging data.
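As a toy illustration of what arbitration could look like computationally, the sketch below mixes a model-based (planning) value estimate with a model-free (habitual) one, weighting each system by its reliability (inverse variance). Precision weighting is one candidate scheme among several, and the numbers and function names here are illustrative assumptions, not a fitted model.

    def arbitrate(value_mb, var_mb, value_mf, var_mf):
        """Mix model-based and model-free value estimates for one action,
        weighting each system by its reliability (inverse variance)."""
        w_mb = (1.0 / var_mb) / (1.0 / var_mb + 1.0 / var_mf)
        return w_mb * value_mb + (1.0 - w_mb) * value_mf, w_mb

    # Early in training the planner's estimate is more certain, so it dominates...
    value, w_mb = arbitrate(value_mb=1.0, var_mb=0.1, value_mf=0.4, var_mf=1.0)
    print(f"early: value={value:.2f}, weight on planning={w_mb:.2f}")

    # ...while after overtraining the habit system's estimate tightens and takes
    # over, one way to read the behavioral shift from goal-directed to habitual.
    value, w_mb = arbitrate(value_mb=1.0, var_mb=0.1, value_mf=0.9, var_mf=0.02)
    print(f"late:  value={value:.2f}, weight on planning={w_mb:.2f}")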
Learning and neuromodulation:
Much evidence has now amassed for the idea that the neuromodulator dopamine serves as a particular sort of teaching signal for reinforcement learning in appetitive tasks. This relatively detailed characterization provides a foothold for extending our understanding in a number of exciting new directions. These include computational questions (e.g., how can this system balance the need to explore unfamiliar options against exploiting old favorites?), behavioral ones (how is dopaminergically mediated learning manifest, and how is it deficient in pathologies such as drug addiction or Parkinson’s disease?), and neural ones (what is the contribution of systems that interact with dopamine, such as serotonin and the prefrontal cortex?). One recent example that cuts across these categories is the interaction of appetitive and aversive learning. Psychologists have long suggested that the brain contains parallel, opponent motivational systems for reward and punishment; the identification of the former with dopamine allowed us to suggest an account of serotonin as its opponent for aversive learning. We are presently investigating these ideas with imaging and pharmacological studies of decision making under reward and punishment.
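In this picture, the dopaminergic teaching signal corresponds to a temporal-difference (TD) prediction error. The sketch below shows that computation and, purely as an illustration of the opponency hypothesis above, splits the error into an appetitive (dopamine-like) and an aversive (serotonin-like) channel; the simple rectification used for the split is an assumption for exposition, not a committed model.

    GAMMA = 0.95  # temporal discount factor

    def td_error(reward, v_current, v_next):
        """One-step TD prediction error: delta = r + gamma * V(s') - V(s)."""
        return reward + GAMMA * v_next - v_current

    def opponent_channels(delta):
        """Rectify the error into appetitive and aversive components."""
        return max(delta, 0.0), max(-delta, 0.0)

    # An unexpected reward drives the appetitive (dopamine-like) channel...
    print(opponent_channels(td_error(reward=1.0, v_current=0.2, v_next=0.0)))
    # ...while an omitted expected reward drives the aversive opponent.
    print(opponent_channels(td_error(reward=0.0, v_current=0.8, v_next=0.0)))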