Neuro-inspired theory, modeling and applications

Modern-day physics’ most vexing mysteries concern structures at the very ends of a scale spanning tens of orders of magnitude. However, midway between the extremely small (quantum particles and fields) and the extremely large (our universe), there remain systems that we do not yet fully understand, not because their scales are inaccessible to experiment, but because of their intrinsic complexity. Examples of such systems abound, some rather exotic, such as high-temperature superconductors, and some intriguingly mundane, such as Earth’s climate or the human brain.

Our theoretical research is concerned with understanding and analytically describing various aspects of neural network dynamics. On the modeling side, we are mainly interested in functional networks, i.e., networks that perform computations we consider useful, for which we take inspiration from both biology and AI research. An essential aspect of model functionality is robustness, since one of our goals is to embed functional networks in neuromorphic substrates and apply them to real-world problems.

Our Research Interests


  • Our group leader speaks his mind!
    Read the interview that Mihai gave to Uniaktuell, the online magazine of the University of Bern, on science and politics.

  • New article at NeurIPS: Learning with slow neurons is no longer a problem!
    "Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons" was selected for an oral presentation at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021).
    Paul Haider's talk (oral session 1) was scheduled for Tuesday, December 7, followed by a poster presentation (session 3) on Wednesday, December 8.
    See Publications - Press Release

  • Now in Nature Machine Intelligence!
  • New article in eLife: Evolving to Learn!