Body

Neuroscientists, aided by an increasingly powerful repertoire of experimental techniques, have produced a wealth of information about the dynamics, structure, and function of neurobiological circuitry over the past several decades. Integrating this information into a coherent understanding of the brain remains a significant challenge, however. Because of the scale and complexity of this challenge, the field increasingly relies on computational modeling techniques. The most promising models are those that can concisely explain a large volume and variety of data. Unfortunately, there are few quantitative metrics for demonstrating a model's explanatory power, and most models today are validated against only a narrow cross-section of data. This work aims to lay the foundations for a formal infrastructure within which neurobiological data and models can be described, shared, and validated by the neuroscientific community. This infrastructure is based on collections of "validation functions" associated with shared datasets; these functions are designed to evaluate the fit of candidate models. A validation function's "signature" – a notion taken from type theory – constrains the universe of models that could possibly be considered, implicitly creating a formal ontology for the field. By assembling a large collection of annotated datasets and building models against this common high-level ontology, we may be able to operationalize the penultimate question in the field: What would an adequate explanation of neural computation even look like?
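
To make the idea concrete, the following Python sketch (not part of the original work; all class, function, and parameter names are hypothetical) illustrates one way a validation function's signature could be expressed as an abstract interface that candidate models must implement, with the function itself scoring a model's fit against an annotated dataset. It is a minimal illustration of the concept under these assumptions, not a proposed implementation.

from abc import ABC, abstractmethod


class ProducesFiringRate(ABC):
    """Signature: any model claiming to explain this dataset must be able
    to report a mean firing rate (Hz) for a given input current (nA)."""

    @abstractmethod
    def firing_rate(self, input_current_nA: float) -> float:
        ...


class FiringRateValidation:
    """A validation function bound to a shared, annotated dataset
    (here, an observed firing rate at a given input current)."""

    signature = ProducesFiringRate  # constrains the universe of admissible models

    def __init__(self, observed_rate_hz: float, observed_sd_hz: float,
                 input_current_nA: float):
        self.observed_rate_hz = observed_rate_hz
        self.observed_sd_hz = observed_sd_hz
        self.input_current_nA = input_current_nA

    def validate(self, model: ProducesFiringRate) -> float:
        """Return a z-score quantifying how well the model fits the data."""
        if not isinstance(model, self.signature):
            raise TypeError("model does not satisfy this function's signature")
        predicted = model.firing_rate(self.input_current_nA)
        return (predicted - self.observed_rate_hz) / self.observed_sd_hz


class ToyRateModel(ProducesFiringRate):
    """A toy candidate model that satisfies the signature."""

    def firing_rate(self, input_current_nA: float) -> float:
        return 40.0 * input_current_nA  # placeholder gain of 40 Hz per nA


if __name__ == "__main__":
    test = FiringRateValidation(observed_rate_hz=52.0, observed_sd_hz=6.0,
                                input_current_nA=1.2)
    print(test.validate(ToyRateModel()))  # z-score, here (48 - 52) / 6 ≈ -0.67

In this sketch the shared interface (ProducesFiringRate) plays the role of the high-level ontology: any model built against it can, in principle, be scored by any validation function declaring that signature, regardless of how the model works internally.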

Abstract Author(s)
Cyrus Omar
University
Carnegie Mellon University