Neural computation, multiple realizability, and the prospects for mechanistic explanation

David M. Kaplan*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    7 Citations (Scopus)


    There is an ongoing philosophical and scientific debate concerning the nature of computational explanation in the neurosciences. Recently, some have cited modeling work involving so-called canonical neural computations—standard computational modules that apply the same fundamental operations across multiple brain areas—as evidence that computational neuroscientists sometimes employ an explanatory scheme distinct from mechanistic explanation. Because these neural computations can rely on diverse circuits and mechanisms, modeling the underlying mechanisms is supposed to be of limited explanatory value. I argue that these conclusions about computational explanation in neuroscience are mistaken, and rest upon a number of confusions about the proper scope of mechanistic explanation and the relevance of multiple realizability considerations. Once these confusions are resolved, the mechanistic character of computational explanations can once again be appreciated.
    Original language: English
    Title of host publication: Explanation and integration in mind and brain science
    Editors: David M. Kaplan
    Place of publication: Oxford, United Kingdom
    Publisher: Oxford University Press
    Number of pages: 26
    ISBN (Electronic): 9780191508714
    ISBN (Print): 9780199685509
    Publication status: Published - 2017


    • canonical neural computation
    • multiple realizability
    • mechanistic explanation
    • mechanism
    • computation
    • computational neuroscience
    • computational models

