Lindsay M. Smith, Jason Z. Kim, Zhixin Lu, and Dani S. Bassett

Chaos: An Interdisciplinary Journal of Nonlinear Science

DOI: https://doi.org/10.1063/5.0075572


Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) are capable of representing complex information, the exact mechanisms of how dynamical neural systems perform abstraction are still not well understood, thereby hindering the development of more advanced functions. Here, we train a 1000-neuron RNN, a reservoir computer (RC), to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories. Furthermore, we explain the abstraction mechanism with a new theory. When trained on isolated, shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors, as quantified by an extra Lyapunov exponent equal to zero. We propose a theoretical mechanism for this abstraction by combining ideas from differentiable generalized synchronization and feedback dynamics. Our results quantify abstraction in simple neural systems, enabling us to design artificial RNNs for abstraction and leading us toward a neural basis of abstraction.
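To make the training setup concrete, the sketch below shows the kind of pipeline the abstract describes: drive a random reservoir with a few isolated, shifted copies of a Lorenz orbit, fit a linear readout by ridge regression, then close the feedback loop and probe an unseen intermediate shift. This is a minimal illustration using a standard discrete-time echo-state update with output feedback, not the authors' actual code or continuous-time reservoir; every hyperparameter (spectral radius, input scaling, ridge penalty) and the shift offsets are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Lorenz system (standard parameters), integrated with RK4 ----
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def trajectory(s0, n_steps, dt=0.01):
    s, out = np.asarray(s0, float), np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        out[i] = s
    return out

# ---- Random reservoir: r[t+1] = tanh(A r[t] + W_in u[t] + b) ----
N, D = 1000, 3                                   # 1000 neurons, as in the paper
A = rng.normal(size=(N, N)) / np.sqrt(N)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius 0.9 (assumed)
W_in = rng.uniform(-0.1, 0.1, size=(N, D))       # input weights (assumed scale)
b = rng.uniform(-0.5, 0.5, size=N)

def drive(u_seq):
    """Run the reservoir open loop on an input sequence; return all states."""
    r, states = np.zeros(N), np.empty((len(u_seq), N))
    for t, u in enumerate(u_seq):
        r = np.tanh(A @ r + W_in @ u + b)
        states[t] = r
    return states

# ---- Train on a few isolated, x-shifted copies of one Lorenz orbit ----
base = trajectory([1.0, 1.0, 1.0], 7000)[2000:]  # discard the transient
shifts = [-20.0, 0.0, 20.0]                      # illustrative offsets
washout, lam = 500, 1e-6
R_rows, Y_rows = [], []
for dx in shifts:
    u_seq = base + np.array([dx, 0.0, 0.0])
    states = drive(u_seq)
    R_rows.append(states[washout:])              # reservoir states
    Y_rows.append(u_seq[washout:])               # targets: reproduce the input
R, Y = np.vstack(R_rows), np.vstack(Y_rows)
W_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ Y).T  # ridge readout

# ---- Close the loop: feed the readout back in as the next input ----
def closed_loop(warmup_seq, n_steps):
    r = drive(warmup_seq)[-1]                    # warm up on true data
    preds, u = np.empty((n_steps, D)), W_out @ r
    for t in range(n_steps):
        r = np.tanh(A @ r + W_in @ u + b)
        u = W_out @ r
        preds[t] = u
    return preds

# Probe a shift *between* the training examples: if the RC has abstracted a
# continuum of attractors, the orbit should persist at this unseen offset.
probe = base[:1000] + np.array([10.0, 0.0, 0.0])
pred = closed_loop(probe, 5000)
print(pred.mean(axis=0))  # the x-mean should sit near the probed offset
```

Probing an offset between the training shifts is the natural qualitative test for the learned continuum; the paper quantifies it more sharply through the Lyapunov spectrum of the closed-loop dynamics, where an extra exponent equal to zero marks a neutrally stable direction along the continuum of attractors.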
Neural systems learn and store information as memories and can even create abstract representations from these memories, as when the human brain changes the pitch of a song or predicts the possible trajectories of a moving object. Because we do not know how neurons work together to generate abstractions, we are unable to optimize artificial neural networks for abstraction or directly measure abstraction in biological neural networks. Our goal is to provide a theory for how a simple neural network learns to abstract information from its inputs. We demonstrate that abstraction is possible using a simple neural network and that abstraction can be quantified and measured using existing tools. Furthermore, we propose a new mathematical mechanism for abstraction in artificial neural networks, enabling future applications in neuroscience and machine learning.