We aim to advance our understanding of biological computation and harness it for building better artificial brains. We are thus mainly interested in functional physical neuronal networks, i.e., networks that (learn to) do something useful and that can be efficiently embedded in a physical substrate. A key ingredient is the closeness between network and substrate dynamics: emulation trumps simulation. Along with pure performance on their respective tasks, we consider robustness an essential aspect of model functionality, as any physical implementation that is both effective and efficient will ultimately need to deal with real-world imperfections, regardless of whether it has evolved over eons or has been designed by our own hands.