Published on September 30, 2021

Performance-optimized neural networks as an explanatory framework for decision confidence

Webb, T. W., Miyoshi, K., So, T. Y., Rajananda, S., Lau, H.

Abstract

Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a principled explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
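
The abstract does not specify the model architecture, so the following is only a minimal sketch of the general idea it describes: a network that takes a high-dimensional input such as an image and is jointly optimized to produce a decision and a confidence estimate, where confidence is trained as a prediction that the decision will be correct. All layer sizes, the single-channel input, the two-head design, and the loss weighting here are assumptions of this sketch, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecisionConfidenceNet(nn.Module):
    """Illustrative sketch: a small CNN encoder with two heads,
    one for the perceptual decision and one for confidence.
    Architecture details are assumptions, not the paper's exact model."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decision_head = nn.Linear(32, n_classes)  # which stimulus class
        self.confidence_head = nn.Linear(32, 1)        # predicted probability of being correct

    def forward(self, x):
        z = self.encoder(x)
        decision_logits = self.decision_head(z)
        confidence = torch.sigmoid(self.confidence_head(z))
        return decision_logits, confidence


def loss_fn(decision_logits, confidence, labels):
    # Decision loss: standard cross-entropy on the stimulus class.
    decision_loss = F.cross_entropy(decision_logits, labels)
    # Confidence loss: binary cross-entropy against whether the decision
    # was actually correct, so confidence is optimized as a prediction
    # of decision accuracy.
    correct = (decision_logits.argmax(dim=1) == labels).float().unsqueeze(1)
    confidence_loss = F.binary_cross_entropy(confidence, correct)
    return decision_loss + confidence_loss
```

The key design choice mirrored here is that confidence is not hand-designed from a low-dimensional decision variable (as in signal detection theory or Bayesian models) but is learned end-to-end from the same high-dimensional input as the decision, with correctness of the decision serving as its training target.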