Mathematical Sciences Research Institute


Nerve theorems for fixed points of neural networks

[Moved Online] Hot Topics: Topological Insights in Neuroscience May 04, 2021 - May 11, 2021

May 07, 2021 (08:00 AM PDT - 08:45 AM PDT)
Speaker(s): Daniela Egas Santander ( École polytechnique fédérale de Lausanne)
Location: MSRI: Online/Virtual
  • computational neuroscience
  • CTLN
  • graph rules
  • nerve theorem




A fundamental question in computational neuroscience is to understand how a network’s connectivity shapes its neural activity. A popular framework for modeling neural activity is a class of recurrent neural networks called threshold-linear networks (TLNs). A special case of these are combinatorial threshold-linear networks (CTLNs), whose dynamics are completely determined by the structure of a directed graph, making them an ideal setting in which to study the relationship between connectivity and activity.
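
To make the setup concrete, here is a minimal sketch of a CTLN simulation in plain Python, using the standard CTLN convention (W_ij = -1 + eps if there is an edge j → i, W_ij = -1 - delta otherwise, W_ii = 0, with a uniform external input theta). The parameter values eps = 0.25, delta = 0.5, theta = 1 and all function names are illustrative choices, not taken from the talk.

```python
def ctln_weights(edges, n, eps=0.25, delta=0.5):
    """Build the CTLN weight matrix for a directed graph on n nodes.

    `edges` is a set of pairs (j, i) meaning an edge from j to i.
    """
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            W[i][j] = -1.0 + eps if (j, i) in edges else -1.0 - delta
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Euler integration of dx_i/dt = -x_i + [sum_j W_ij x_j + theta]_+."""
    n = len(W)
    x = list(x0) if x0 is not None else [0.1 * (i + 1) for i in range(n)]
    traj = [list(x)]
    for _ in range(steps):
        # threshold-linear (ReLU) activation of the total input
        y = [max(0.0, sum(W[i][j] * x[j] for j in range(n)) + theta)
             for i in range(n)]
        x = [(1 - dt) * x[i] + dt * y[i] for i in range(n)]
        traj.append(list(x))
    return traj

# Example: the directed 3-cycle 0 -> 1 -> 2 -> 0
edges = {(0, 1), (1, 2), (2, 0)}
W = ctln_weights(edges, 3)
traj = simulate(W)
print(traj[-1])
```

Because every off-diagonal weight is negative and the activation is nonnegative, trajectories stay nonnegative and bounded; which attractor the network settles into is exactly the kind of question the graph determines.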

Even though nonlinear network dynamics are notoriously difficult to understand, work of Curto, Geneson, and Morrison shows that CTLNs are surprisingly tractable mathematically. In particular, for small networks, the fixed points of the network dynamics can often be completely determined via a series of combinatorial “graph rules” that can be applied directly to the underlying graph. However, for larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties.
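
For small networks, the fixed-point supports that the graph rules predict can also be checked by brute force, using the standard TLN characterization: a subset sigma supports a fixed point iff (I - W_sigma,sigma) x = theta·1 has a strictly positive solution x, and every neuron k outside sigma receives total input sum_j W_kj x_j + theta ≤ 0 at that state. The sketch below is a hedged illustration of that characterization (function names are ours, not from the talk), with a tiny Gaussian-elimination solver so it stays dependency-free.

```python
from itertools import combinations

def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return None  # singular: no unique solution
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def fixed_point_supports(W, theta=1.0, tol=1e-9):
    """Return all subsets sigma that support a fixed point of the TLN (W, theta)."""
    n = len(W)
    supports = []
    for size in range(1, n + 1):
        for sigma in combinations(range(n), size):
            # solve (I - W restricted to sigma) x = theta * 1
            A = [[(1.0 if i == j else 0.0) - W[si][sj]
                  for j, sj in enumerate(sigma)] for i, si in enumerate(sigma)]
            xs = solve_linear(A, [theta] * size)
            if xs is None or any(v <= tol for v in xs):
                continue  # no strictly positive solution on sigma
            # neurons outside sigma must be silenced at this state
            if all(sum(W[k][sj] * xs[j] for j, sj in enumerate(sigma)) + theta <= tol
                   for k in range(n) if k not in sigma):
                supports.append(sigma)
    return supports
```

For example, with the CTLN weights of a 2-clique (mutual edges between neurons 0 and 1, eps = 0.25), the only support found is the full set {0, 1}, in line with the graph rule that a clique supports a single fixed point. This exhaustive search scales exponentially in the number of neurons, which is exactly why graph rules, and the nerve theorems below, matter for larger networks.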

In this talk, we will present a method of covering the graph of a CTLN with a set of smaller “directional graphs” that reflect the local flow of activity. The combinatorial structure of the cover is captured by the “nerve” of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three “nerve theorems” that derive strong constraints on the fixed points of the underlying network from the structure of the nerve, effectively providing a kind of “dimensionality reduction” on the dynamical system of the underlying CTLN. We will illustrate the power of our results with some examples.

This is joint work with F. Burtscher, C. Curto, S. Ebli, K. Morrison, A. Patania, and N. Sanderson.
