Making Sense of Probabilistic Visual Representations

Project Awarded: $30,000

All organisms must respond to properties of their environment that cannot be determined with certainty. The idea that they accomplish this by using probabilistic representations has been irresistible to researchers across neuroscience, psychology, and philosophy. Understanding probabilistic representation promises to provide insights into how humans and other organisms are able to cope with uncertainty, and also how we sometimes fail to behave adaptively. But theories of probabilistic representation are very controversial, and researchers have different intuitions about how to define them. Because of this, it is not even clear what would count as evidence for probabilistic representation, and some have claimed that probabilistic accounts are “just-so” stories with no explanatory or predictive power. It is essential that we develop a testable theory of probabilistic representation: doing so would lead to progress in fields as diverse as perception, psychiatry and addiction studies, philosophy of mind, and economics.

In this proposal, we will describe a theoretical framework that leads to empirical predictions, as well as experiments for testing them. Our framework is based on the notion that probabilistic representations must be representations of uncertainty that are used in a way that is uniquely probabilistic: they should be marginalized over. This leads to two testable characteristics of a probabilistic representation: source invariance and probabilistic task transfer. Probabilistic representations should encode uncertainty in a manner that is invariant to multiple environmental causes, and they should be marginalized over in a way that can transfer to multiple downstream tasks.

Aim 1 will provide a behavioral test of probabilistic representations by measuring behavioral generalization in task settings that require marginalization. Aim 2 will test whether this generalization involves source-invariant representations of uncertainty in the visual system, using cross-decoding of functional magnetic resonance imaging data. Aim 3 will develop neural network models that possess source-invariant representations of uncertainty and exhibit probabilistic task transfer. Taken together, this approach will use converging methods to test whether and how the visual system makes use of probabilistic representations. More broadly, it will provide a general framework for characterizing probabilistic representations that can help resolve longstanding debates that have impeded progress across a wide range of fields.
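The marginalization operation at the heart of the framework can be illustrated with a minimal sketch. The grid, source labels, and probability values below are invented for illustration and are not drawn from the proposal; the point is only that summing a joint posterior over a nuisance source yields a stimulus posterior that no longer depends on which environmental cause produced the uncertainty:

```python
# Illustrative sketch (hypothetical numbers): a discrete joint posterior
# p(stimulus, source) on a coarse grid. "Marginalizing over" the nuisance
# source yields p(stimulus) -- the readout a probabilistic representation
# should support regardless of the environmental cause of the uncertainty
# (e.g., blur vs. dim lighting).
p_joint = {
    # (stimulus, source): probability; all entries sum to 1
    (0, "blur"): 0.10, (0, "dim"): 0.05,
    (1, "blur"): 0.30, (1, "dim"): 0.15,
    (2, "blur"): 0.25, (2, "dim"): 0.15,
}

def marginalize_over_source(joint):
    """Sum the joint posterior over the nuisance source dimension."""
    p_stim = {}
    for (stim, _source), p in joint.items():
        p_stim[stim] = p_stim.get(stim, 0.0) + p
    return p_stim

p_stimulus = marginalize_over_source(p_joint)
print(p_stimulus)
```

Because the same summation applies whatever the source of the uncertainty, a downstream task that reads out `p_stimulus` can transfer across sources, which is the intuition behind the source-invariance and task-transfer predictions above.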

Raphael Gerraty, PhD. Postdoctoral Fellow, Columbia University

Gerardo Viera, PhD. Lecturer, University of Sheffield

Jessica Thompson, PhD. Postdoctoral Researcher, University of Oxford
