Abstract
There is an ongoing philosophical and scientific debate concerning the nature of computational explanation in the neurosciences. Recently, some have cited modeling work involving so-called canonical neural computations—standard computational modules that apply the same fundamental operations across multiple brain areas—as evidence that computational neuroscientists sometimes employ an explanatory scheme distinct from mechanistic explanation. Because these neural computations can rely on diverse circuits and mechanisms, modeling the underlying mechanisms is supposed to be of limited explanatory value. I argue that these conclusions about computational explanation in neuroscience are mistaken, and that they rest upon a number of confusions about the proper scope of mechanistic explanation and the relevance of multiple realizability considerations. Once these confusions are resolved, the mechanistic character of computational explanations can once again be appreciated.
| Original language | English |
| --- | --- |
| Title of host publication | Explanation and integration in mind and brain science |
| Editors | David M. Kaplan |
| Place of Publication | Oxford, United Kingdom |
| Publisher | Oxford University Press |
| Chapter | 8 |
| Pages | 164-189 |
| Number of pages | 26 |
| ISBN (Electronic) | 9780191508714 |
| ISBN (Print) | 9780199685509 |
| DOIs | |
| Publication status | Published - 2017 |
Keywords
- canonical neural computation
- multiple realizability
- mechanistic explanation
- mechanism
- computation
- computational neuroscience
- computational models