Plasticity and Learning

Understanding the processes of self-organization and learning is key to understanding intelligence, be it biological or artificial. At the core of this line of research lies synaptic plasticity, one of the most important and multi-faceted phenomena in the brain. Because it occurs on a range of very different time scales, it takes on multiple roles, such as mediating homeostasis, facilitating exploration or enabling the formation and consolidation of memories. The orders of magnitude spanned by these temporal scales are a key motivation for developing accelerated emulation platforms, as they promise insight into processes that are otherwise prohibitively expensive for classical simulation.

Contrastive Hebbian learning in hierarchical networks

When learning a probabilistic (generative) model of a given dataset, one implicitly strives to maximize the likelihood of the data under the model. In hierarchical networks, a sequence of 'hidden' layers shapes the state distribution of the 'visible' layer (and vice versa). Maximum-likelihood learning is thus equivalent to shaping the unconstrained activity of the network (sleep phase, or dreaming) to resemble the activity produced when the visible layer is clamped to data samples (wake phase). Under certain conditions, such wake-sleep learning takes the form of Hebbian plasticity, where synaptic updates depend only on pre- and postsynaptic spikes. Find out more: Petrovici 2016 - Leng 2014 - Schroeder 2016 - Fischer 2017 - Petrovici et al. 2016 - Petrovici et al. 2017 - Petrovici et al. 2017 - Leng et al. 2018 - Dold et al. 2019.
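As an illustration of the wake-sleep (contrastive Hebbian) update, the Python sketch below uses an abstract restricted Boltzmann machine as a stand-in for the spiking sampling networks discussed above; the network sizes, learning rate and single-step Gibbs chain are assumptions made for brevity, not parameters from the referenced work.

```python
# Minimal sketch of wake-sleep (contrastive Hebbian) learning in a restricted
# Boltzmann machine. All sizes and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, eta = 6, 4, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sample_hidden(v):
    p = 1.0 / (1.0 + np.exp(-v @ W))          # p(h=1 | v)
    return (rng.random(p.shape) < p).astype(float)

def sample_visible(h):
    p = 1.0 / (1.0 + np.exp(-h @ W.T))        # p(v=1 | h)
    return (rng.random(p.shape) < p).astype(float)

data = (rng.random((100, n_visible)) < 0.5).astype(float)  # toy dataset

for epoch in range(10):
    for v_wake in data:
        # wake phase: visible layer clamped to a data sample
        h_wake = sample_hidden(v_wake[None, :])
        # sleep phase: let the network "dream" by running the Gibbs chain freely
        v_sleep = sample_visible(h_wake)
        h_sleep = sample_hidden(v_sleep)
        # Hebbian update: wake-phase correlations minus sleep-phase correlations
        W += eta * (v_wake[None, :].T @ h_wake - v_sleep.T @ h_sleep)
```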


Short-term plasticity

When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often impair the mixing of the sampling process. Classical algorithms solve this problem with various tempering techniques, which are computationally demanding and require global state updates. However, similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Such networks can even outperform tempering-based approaches when the training data is imbalanced. Find out more: Leng et al. 2018 - Dold et al. 2019.
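For intuition, here is a minimal Python sketch of Tsodyks-Markram-style short-term depression at a single synapse: sustained presynaptic firing transiently lowers the effective weight, which is the kind of local mechanism that can flatten deep attractors during sampling. The time step, time constant, utilization, static weight and Poisson rate are illustrative assumptions.

```python
# Short-term depression of a single synapse driven by a Poisson presynaptic neuron.
import numpy as np

rng = np.random.default_rng(1)
dt, t_max = 0.1e-3, 1.0            # simulation step and duration [s]
tau_rec, U = 200e-3, 0.3           # resource recovery time constant and utilization
w_static, rate = 1.0, 50.0         # static weight and presynaptic firing rate [Hz]

R = 1.0                            # fraction of available synaptic resources
effective_weights = []

for step in range(int(t_max / dt)):
    # passive recovery of synaptic resources towards 1
    R += dt * (1.0 - R) / tau_rec
    # Poisson presynaptic spike in this time step?
    if rng.random() < rate * dt:
        effective_weights.append(w_static * U * R)  # transmitted PSP amplitude
        R -= U * R                                  # deplete resources after the spike
```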


Natural gradient learning

There are many equivalent ways to describe the strength of a synaptic connection in the brain, for example in terms of an EPSP slope, an EPSP amplitude, or the number of receptors at the synaptic cleft. In neuromorphic hardware, on the other hand, two synaptic weights that are intended to be equal are often represented slightly differently due to manufacturing variations. In both cases, the specific choice of synaptic weight parametrization should not influence the learning enabled by the correspondingly transformed synaptic plasticity rule. This suggests an alternative model of synaptic plasticity for supervised learning with spiking neurons, based on a parametrization-invariant natural gradient algorithm.
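For concreteness, the Python sketch below shows a natural-gradient update for a single stochastic (Bernoulli) neuron: the Euclidean gradient is preconditioned with the inverse Fisher information matrix, which is what makes the update invariant under reparametrization of the weights. The toy data, learning rate and ridge term are assumptions; the spiking formulation replaces these quantities with spike-based estimates.

```python
# Natural-gradient learning for one Bernoulli output neuron with p(spike) = sigmoid(w . x).
import numpy as np

rng = np.random.default_rng(2)
n_in, eta = 5, 0.1
w = 0.1 * rng.standard_normal(n_in)
X = rng.standard_normal((200, n_in))               # presynaptic activity patterns
y = (rng.random(200) < 0.5).astype(float)          # toy binary targets

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for step in range(100):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)                  # Euclidean gradient of the neg. log-likelihood
    # Fisher information matrix of the Bernoulli model, F = E[p(1-p) x x^T]
    F = (X * (p * (1 - p))[:, None]).T @ X / len(y)
    # natural gradient: precondition with F^{-1}; small ridge term for numerical stability
    w -= eta * np.linalg.solve(F + 1e-6 * np.eye(n_in), grad)
```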


Sequence learning by shaping hidden connectivity

Behavior can often be described as a temporal sequence of actions, and these sequences are grounded in neural activity. For neural networks to learn complex sequential patterns, memories of past activity are required. These memories need to be stored by 'hidden' neurons in the network, from which the 'visible' neurons can read them out.
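As a rough illustration of this division of labor, the following echo-state-style Python sketch uses a random recurrent 'hidden' population as a fading memory of past inputs and trains a linear readout onto the 'visible' units to predict the next element of a toy sequence. All sizes and constants are assumptions, and no claim is made that this matches the spiking model studied here.

```python
# Hidden recurrent units store a decaying trace of past inputs; a linear readout
# onto the visible units is trained to predict the next symbol of a sequence.
import numpy as np

rng = np.random.default_rng(3)
n_hidden, n_visible, leak = 100, 3, 0.3

W_in = rng.standard_normal((n_hidden, n_visible))
W_rec = rng.standard_normal((n_hidden, n_hidden))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))   # keep the memory trace stable

sequence = np.eye(n_visible)[[0, 1, 2, 1, 0, 2] * 50]      # repeating toy sequence (one-hot)
h = np.zeros(n_hidden)
states, targets = [], []

for t in range(len(sequence) - 1):
    # hidden neurons integrate the current input and their own past activity
    h = (1 - leak) * h + leak * np.tanh(W_in @ sequence[t] + W_rec @ h)
    states.append(h.copy())
    targets.append(sequence[t + 1])                        # target: the next element

# least-squares readout from the hidden memory to the visible prediction
W_out = np.linalg.lstsq(np.array(states), np.array(targets), rcond=None)[0]
```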


Lagrangian model of deep learning

The backpropagation-of-errors algorithm lies at the heart of the current deep-learning boom in AI. Despite the algorithm's elegance, it remains an open question whether the brain has developed a similar strategy for shaping cortical circuits. However, in an appropriately integrated theory of neuro-synaptic mechanics in multilayer networks, dendritic structure and error-correcting synaptic plasticity appear to emerge naturally from first principles.
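One way to make this idea concrete is the hedged, generic energy-based sketch below; the symbols (membrane potentials u_l, weights W_l, activation r, output cost C, nudging strength beta) are illustrative assumptions and not the exact formalism of the underlying work.

```latex
% Energy of layer-wise prediction mismatches plus an output cost:
\[
  E(u, W) \;=\; \sum_{l} \tfrac{1}{2} \bigl\lVert u_l - W_l\, r(u_{l-1}) \bigr\rVert^2
  \;+\; \beta\, C(u_N)
\]
% Stationarity of E with respect to the membrane potentials yields the neuronal
% (dendritic) dynamics, while gradient descent on E with respect to the weights
% yields a local, error-correcting plasticity rule:
\[
  \dot{W}_l \;\propto\; \bigl(u_l - W_l\, r(u_{l-1})\bigr)\, r(u_{l-1})^{\top}
\]
```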


Hardware in the loop

Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. Iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate; importantly, parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies the computation of updates. Find out more: Schmitt et al. 2017 - Petrovici et al. 2017 .
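The Python sketch below illustrates the in-the-loop idea with a mock hardware interface; the emulate_on_hardware function, its weight discretization and its noise model are hypothetical and do not correspond to a real chip API. The point is only that activities are measured on the imperfect substrate, while the host computes approximate gradients and writes updated weights back on every iteration.

```python
# Hardware-in-the-loop training with a mocked analog substrate.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out, eta = 10, 2, 0.05
w_host = 0.1 * rng.standard_normal((n_in, n_out))      # ideal weights kept on the host

def emulate_on_hardware(w, x):
    """Stand-in for an analog emulation: discretized weights plus analog variability."""
    w_chip = np.round(w * 63) / 63                      # limited weight resolution
    w_chip += 0.02 * rng.standard_normal(w.shape)       # trial-to-trial analog noise
    return np.tanh(x @ w_chip)                          # measured network response

X = rng.standard_normal((200, n_in))                    # toy inputs
Y = np.tanh(X @ rng.standard_normal((n_in, n_out)))     # toy targets

for iteration in range(50):
    y_meas = emulate_on_hardware(w_host, X)             # forward pass on the "chip"
    # approximate host-side gradient, computed from the measured (imperfect) activities
    grad = X.T @ ((y_meas - Y) * (1 - y_meas**2)) / len(X)
    w_host -= eta * grad                                # update, written back next iteration
```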
