THE FEEDBACK

Powered by: the Society for Philosophy & Neuroscience

Reasoning Goals and Representational Decisions in Computational Cognitive Neuroscience: Lessons From the Drift Diffusion Model

By: Ari Khoudary (UC Irvine), Megan A.K. Peters (UC Irvine), & Aaron Bornstein (UC Irvine)

Abstract:
Computational cognitive models are powerful tools for enhancing the quantitative and theoretical rigor of cognitive neuroscience. It is thus imperative that model users—researchers who develop models, use existing models, or integrate model-based findings into their own research—understand how these tools work and what factors need to be considered when engaging with them. To this end, we developed a philosophical toolkit that addresses core questions about computational cognitive models in the brain and behavioral sciences. Drawing on recent advances in the philosophy of modeling, we highlight the central role of model users’ reasoning goals in the application and interpretation of formal models. We demonstrate the utility of this perspective by first offering a philosophical introduction to the highly popular drift diffusion model (DDM) and then providing a novel conceptual analysis of a long-standing debate about decision thresholds in the DDM. Contrary to most existing work, we suggest that the two model structures implicated in the debate offer complementary—rather than competing—explanations of speeded choice behavior. Further, we show how the type of explanation provided by each form of the model (parsimonious and normative) reflects the reasoning goals of the communities of users who developed them (cognitive psychometricians and theoretical decision scientists, respectively). We conclude our analysis by offering readers a principled heuristic for deciding which of the models to use, thus concretely demonstrating the conceptual and practical utility of philosophy for resolving meta-scientific challenges in the brain and behavioral sciences.

Now published in the European Journal of Neuroscience

Commentary from Ari Khoudary:
Something I never anticipated when beginning my PhD in computational cognitive neuroscience is just how many decisions I have to make every day in order to make progress on my research. Some of these decisions are relatively trivial, like choosing conventions for how to name and organize different files on my computer. Others are more substantive, like deciding which type of model to use and how I’ll visualize its results for different audiences. My background in experimental psychology gave me some useful prior knowledge about strategies that worked for me in the past. But once my research started becoming more computationally intensive, the space of necessary decisions seemed to grow exponentially. A poor choice about something as simple as naming convention or file organization could make the coding process much more tedious and error-prone, and I have “burned it all down” only to “build it back better” more times than I would like to admit. 

In parallel to learning how to cope with the decision-ladenness of computational modeling, I was also learning that computer simulations alone can be said to explain already-existing empirical data. In fact, the first poster I ever presented as a graduate student contained only simulation results, and, as a trained experimentalist, I could not believe that I was allowed to display work at a conference without ever having “made contact with reality” (i.e., taken my own measurements of human behavior). Learning more about the sequential sampling family to which my simulation model belonged left me with even more questions than answers. Models in this family explain decisions by appealing to a set of latent variables, the values of which are obtained by maximum likelihood estimation on observed data. Each model posits a slightly different set or arrangement of these latent variables, and adjudicating between them often comes down to small differences in the shape of reaction time distributions. Further, most models in this family have successfully linked behavioral data with neural measurements ranging in granularity from single cells to group-averaged fMRI signals. The question plagued me: what exactly am I simulating–and, by extension, explaining–when I run my model?
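For readers unfamiliar with this model family, the accumulation process it posits can be sketched in a few lines of code. The following is a minimal, illustrative simulation of a basic drift diffusion model, not the specific model discussed in the paper; the function name and parameter values are my own choices for exposition:

```python
import numpy as np

def simulate_ddm(drift=0.2, threshold=1.0, noise=1.0, dt=0.001,
                 max_t=5.0, rng=None):
    """Simulate one trial of a basic drift diffusion model.

    Noisy evidence accumulates from 0 toward +threshold (the "upper"
    choice) or -threshold (the "lower" choice). Returns the choice
    and the reaction time (the first time a bound is crossed).
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    # Euler–Maruyama discretization of the diffusion process:
    # each step adds deterministic drift plus Gaussian noise.
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "upper" if x >= threshold else "lower"
    return choice, t

# Simulating many trials yields the choice proportions and
# reaction-time distributions that these models are fit to.
rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
upper_rate = np.mean([c == "upper" for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
```

With a positive drift rate, the "upper" boundary is reached on a majority of trials, and the simulated reaction times show the right-skewed distributions characteristic of speeded-choice data; the latent variables (drift, threshold, noise) are exactly the quantities that maximum likelihood estimation recovers from observed behavior.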

My advisors are quite familiar with the philosophical bent of my thinking, and thus thought that a special issue on the relevance of philosophy for neuroscience would be a great place for me to work out some of these issues and finally publish this model. To our surprise, however, the paper grew into something much larger than that: a general-purpose philosophical “toolkit” for cognitive modeling coupled with a conceptual analysis of a long-standing debate in the sequential sampling literature. The toolkit addresses some of the core questions I wrestled with early in my PhD, like how models represent and explain empirical phenomena. All of our answers are predicated on the claim that mathematical models of empirical targets require idealization, a process that makes all models technically false representations of their targets. We draw on Weisberg’s (2013) notion of a model’s construal–a modeler’s intentions in building and applying a model–and Harvard & Winsberg’s (2022) notion of representational decisions–decisions about what to represent in a model and how to represent it–to argue for the central role of reasoning goals in the development, application, and interpretation of cognitive models. We then demonstrate the utility of these conceptual tools by tracing the history of reasoning goals that shaped the two model forms implicated in the sequential sampling debate, and we use the difference in goals to give researchers a heuristic for “taking a stance” in the debate (i.e., making a principled choice about which form of the model to use). Altogether, the paper demonstrates the utility of philosophy for the practicing scientist, giving newer researchers tools for managing the “existemic” feelings (see p. 14 of the paper) that computational modeling so frequently induces, while also giving more seasoned researchers a different meta-scientific perspective on model selection in the brain and cognitive sciences.


About

THE FEEDBACK is a forum for sharing newly published work in philosophy and neuroscience. Each blog post will be accompanied by a short commentary from the publishing author(s).

Hosted by The Society for Philosophy & Neuroscience (philandneuro.com)