# Mathematical Sciences Research Institute


# Discovering implicit computation graphs in nonlinear brain dynamics

## [Moved Online] Hot Topics: Topological Insights in Neuroscience May 04, 2021 - May 11, 2021

May 05, 2021 (08:00 AM PDT - 08:45 AM PDT)
Speaker(s): Xaq Pitkow (Baylor College of Medicine)
Location: MSRI: Online/Virtual
Tags/Keywords: graph neural network, inference, symmetry, brain


#### Abstract

Repeating patterns of microcircuitry in the cerebral cortex suggest that the brain reuses elementary, or "canonical," computations. Neural representations, however, are distributed, so the relevant operations may be related only indirectly to single-neuron transformations. It thus remains an open challenge to define these canonical computations. We present a theory-driven mathematical framework for inferring implicit canonical computations from large-scale neural measurements.

This work is motivated by one important class of cortical computation: probabilistic inference. We posit that the brain has a structured internal model of the world, and that it approximates probabilistic inference on this model using nonlinear message-passing implemented by recurrently connected neural population codes. Our general analysis method simultaneously finds (i) the neural representation of the relevant variables, (ii) the interactions between these latent variables that define the brain's internal model of the world, and (iii) the canonical message functions that specify the implicit computations. With enough data, these properties are statistically distinguishable, up to a global transformation of all interactions, owing to the symmetries inherent in any canonical computation.

As a concrete demonstration of this framework, we analyze artificial neural recordings generated by a model brain that implicitly implements advanced mean-field inference. Given the external inputs and noisy neural activity from the model brain, we successfully estimate the latent dynamics and canonical parameters that explain the simulated measurements. In this first example application, we use a simple polynomial basis to characterize the latent canonical transformations. While this construction matches the true model, it is unlikely to capture a real brain's nonlinearities efficiently.
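To make the notion of nonlinear message-passing concrete, here is a minimal sketch (not the talk's actual model) of naive mean-field inference on an Ising model: each unit's magnetization is repeatedly updated through a fixed nonlinearity applied to its neighbors' states. The function name and damping scheme are illustrative choices, not part of the abstract.

```python
import numpy as np

def mean_field_ising(J, b, n_iters=200, damping=0.5):
    """Naive mean-field fixed-point iteration for an Ising model.

    J: symmetric coupling matrix (zero diagonal) defining the
       interactions between latent variables.
    b: local external fields (inputs).
    Returns approximate magnetizations m_i ~ <s_i>, found by
    iterating the canonical update m <- tanh(b + J m).
    """
    m = np.zeros_like(b, dtype=float)
    for _ in range(n_iters):
        m_new = np.tanh(b + J @ m)       # same update rule at every node
        m = damping * m + (1 - damping) * m_new  # damped for stability
    return m
```

The key structural point is that one and the same update rule is applied at every node; only the graph of couplings `J` differs, which is what makes the computation "canonical" in the sense used above.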
To address this, we develop a general, flexible variant of the framework based on graph neural networks, which infers approximate inference algorithms with a known neural embedding. Finally, analysis of these models reveals certain features of experimental design that are required to successfully extract canonical computations from neural data.
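As a rough illustration of the graph-neural-network variant (an assumed minimal architecture, not the authors' implementation), the sketch below shows one round of learned message passing in which a single shared message function is applied on every edge; that weight sharing is the GNN analogue of a canonical computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    # Two-layer perceptron with a tanh hidden nonlinearity.
    return np.tanh(x @ W1) @ W2

def gnn_layer(h, edges, params):
    """One round of learned message passing.

    h: (n_nodes, d) array of node states.
    edges: list of (src, dst) directed pairs.
    The same message function (shared weights Wm1, Wm2) is applied
    on every edge; messages are summed at each receiving node, then
    a shared update function (Wu1, Wu2) produces the new node states.
    """
    msgs = np.zeros_like(h)
    for s, t in edges:
        pair = np.concatenate([h[s], h[t]])          # sender + receiver state
        msgs[t] += mlp(pair, params["Wm1"], params["Wm2"])
    upd = np.concatenate([h, msgs], axis=1)          # old state + aggregated messages
    return mlp(upd, params["Wu1"], params["Wu2"])
```

Fitting the shared weights to neural data would then amount to estimating the canonical message functions, while the edge list plays the role of the latent interaction graph.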