Research

Page under construction


Exploiting synergies between Machine Learning and neuroscience to demystify brain functions

Cognitive and computational neuroscience has flourished in recent years with the arrival of innovations that let us record and track thousands of neurons from multiple brain regions over long periods. However, the conceptual insights into brain function that we can extract from such data are only as deep as the analyses we can perform and the models we can understand. In parallel, artificial intelligence (AI) and the study of in silico neural networks have undergone a revolution of their own. Much as with their biological counterparts, our understanding of why and how modern Machine Learning (ML) algorithms work lags behind the technological achievements. These parallel developments in neuroscience and ML afford exciting research directions for understanding how cognition and behavior emerge from circuit dynamics and connectivity.

We aim to exploit synergies between modern ML and neuroscience along several principal directions:

  1. Study the underlying principles of learning and inference by neural networks, with an emphasis on biologically plausible implementations.

  2. Use artificial neural networks as surrogate brain circuits: a hypothesis-generating tool.

  3. Develop algorithms and theories targeted at specific computational neuroscience needs, in particular analyzing large-scale neural data from complex behavioral experiments and designing optical stimulation experiments.

  4. Highlight the differences between modern ML frameworks and their neuronal counterparts to understand the deficiencies and benefits of each.

Dynamics and computation of cortical circuits

Cortical circuits are large, complex networks of interconnected neurons of various types, and they exhibit a rich dynamical repertoire. Modern artificial neural networks (ANNs) often try to mimic these biological structures in the hope of reproducing some of their computational power. Because the connectivity is highly recurrent, the self-generated dynamics of the system are fundamental. To understand what computations recurrent circuits can perform, we first need to understand their dynamical properties. Using statistical physics, we describe the typical behavior of these networks through a small number of order parameters. These parameters are emergent properties of the large network and let us abstract away the complex microscopic behavior. Reducing the high-dimensional dynamics to a few relevant measures helps us characterize the phase space of the activity and study its computational properties. Of particular interest are transition points between two dynamical phases: a system poised at such a critical point exhibits unique behavior that can be shown to be beneficial for computation.
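As a concrete illustration of this order-parameter picture, the sketch below simulates the classic random recurrent rate network (in the spirit of Sompolinsky, Crisanti, and Sommers, 1988), for which mean-field theory predicts a transition from a quiescent to a chaotic phase at coupling gain g = 1. The network size, integration scheme, and the variance-based order parameter are illustrative choices for the generic model, not a description of our specific work.

```python
import numpy as np

def simulate_rate_network(N=1000, g=1.5, T=200.0, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + g * J @ tanh(x) for a random network.

    J has i.i.d. Gaussian entries with variance 1/N; mean-field theory
    predicts a transition from a stable fixed point to chaos at g = 1.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)  # small random initial condition
    for _ in range(int(T / dt)):
        x = x + dt * (-x + g * (J @ np.tanh(x)))
    return x

# Order parameter: population-averaged squared activity after transients.
# Below g = 1 the activity decays to the quiescent fixed point; above it,
# self-generated chaotic fluctuations keep the order parameter finite.
for g in (0.5, 0.9, 1.1, 1.5, 2.0):
    x = simulate_rate_network(g=g)
    print(f"g = {g:3.1f}  <x^2> = {np.mean(x**2):.4f}")
```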

Through the study of in silico ANNs, we test these theories and demonstrate the emergence of computation. Furthermore, modern experimental techniques allow us to test these theories in vivo: by combining calcium imaging and optogenetic stimulation, we can test whether cortical circuits are tuned to critical points in the dynamical phase space.
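One widely used operational test of criticality, sketched below on synthetic data, estimates the branching ratio of binned population activity: a value near one indicates a circuit poised at the critical point. This is a minimal sketch of a generic technique (cf. branching-process analyses of neuronal avalanches), not our experimental pipeline; the toy process and its drive strength are hypothetical.

```python
import numpy as np

def estimate_branching_ratio(counts):
    """Estimate the branching ratio m as the regression slope of A(t+1) on A(t).

    m < 1 is subcritical, m = 1 critical, m > 1 supercritical. This naive
    estimator is biased under spatial subsampling; subsampling-invariant
    variants exist but are beyond this sketch.
    """
    return np.polyfit(counts[:-1], counts[1:], 1)[0]

# Toy surrogate data: a driven branching process with known branching ratio.
rng = np.random.default_rng(1)
m_true, drive, T = 0.98, 5.0, 50_000
A = np.zeros(T)
for t in range(1, T):
    # each active unit spawns Poisson(m_true) descendants, plus external drive
    A[t] = rng.poisson(m_true * A[t - 1] + drive)

print(f"true m = {m_true}, estimated m = {estimate_branching_ratio(A):.3f}")
```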

Deep neuronal circuits in the brain


Statistical mechanics of inference in high dimensions