My research focuses on the design and learning of complex energy landscapes in physical systems, such as flow and elastic networks, self-folding sheets, and other glassy systems. To study these systems my work draws heavily on ideas and methods from machine learning and high-dimensional statistics, and I believe such ideas can flow both ways. An important part of my work involves the study of learning machines, or physical learning algorithms. These systems blur the line between computational and biological systems and help redefine our very concept of learning. I would like to understand how such learning machines can be made, and how they behave when subjected to the constraints of the real world.


Current research – Learning in physical systems

I believe that physical networks, such as mechanical metamaterials, are an interesting avenue for studying different concepts of learning. My current research explores these ideas by identifying "physical learning rules". Such learning rules may turn an ordinary piece of paper into an adaptable classifier that can be trained through the analog of supervised learning, i.e., training by physically applying forces to the material itself. I think that the migration of ideas from computer science to physics should be a two-way street. It is thus interesting to ask whether physically inspired systems can inform novel machine learning algorithms endowed with unique properties.

Some specific questions and ongoing projects I am interested in at the moment:

  • (Supervised learning) How can a physical network learn to emulate desired responses to external inputs? What are the relevant physically plausible learning rules (local rules, no computation allowed, no explicit information about the desired behavior) required to enable supervised learning in a real-world machine?
  • How can physical learning rules be implemented in experimental settings? What is necessary to construct a real-world learning machine?
  • Can a physical network learn out-of-equilibrium? What happens if the time scale of learning is similar to that of the underlying physical processes?
  • (Unsupervised learning) Biological brains make limited use of supervised learning. How can a physical machine learn to classify inputs with no external supervision, where the only "labels" available are the inputs themselves? Can we experimentally construct a rudimentary autonomous brain from simple mechanical elements?
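To illustrate the flavor of such local rules, here is a toy sketch of supervised learning in a resistor (flow) network, in the spirit of coupled-learning schemes: each edge compares the voltage drop across itself in a "free" state (inputs applied) and a "clamped" state (output gently nudged toward the desired value), and updates its own conductance using only that local information. The network topology, parameters, and learning rate below are hypothetical choices for illustration, not values from any specific study.

```python
import numpy as np

# Toy sketch of a local physical learning rule in a resistor network
# (hypothetical network and parameters, for illustration only).

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
k = np.ones(len(edges))          # trainable edge conductances

source, ground, target = 0, 5, 3  # input, ground, and output nodes
V_in, V_desired = 1.0, 0.3        # applied input and desired output


def solve(k, fixed):
    """Solve Kirchhoff's laws with some node voltages held fixed."""
    n = 6
    L = np.zeros((n, n))
    for (i, j), ke in zip(edges, k):
        L[i, i] += ke; L[j, j] += ke
        L[i, j] -= ke; L[j, i] -= ke
    free = [u for u in range(n) if u not in fixed]
    idx_fixed = list(fixed)
    Vf = np.array([fixed[u] for u in idx_fixed])
    V = np.zeros(n)
    for u in idx_fixed:
        V[u] = fixed[u]
    V[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, idx_fixed)] @ Vf)
    return V


alpha, eta = 0.2, 0.1            # learning rate and nudge amplitude
for step in range(200):
    # Free state: only the input boundary conditions are applied.
    VF = solve(k, {source: V_in, ground: 0.0})
    # Clamped state: output nudged slightly toward the desired value.
    V_clamp = VF[target] + eta * (V_desired - VF[target])
    VC = solve(k, {source: V_in, ground: 0.0, target: V_clamp})
    # Local rule: each edge uses only its own two voltage drops.
    for e, (i, j) in enumerate(edges):
        dF, dC = VF[i] - VF[j], VC[i] - VC[j]
        k[e] += (alpha / eta) * 0.5 * (dF**2 - dC**2)
    k = np.clip(k, 1e-3, None)   # conductances must stay positive

out = solve(k, {source: V_in, ground: 0.0})[target]
print(f"trained output: {out:.3f}")  # approaches V_desired
```

The key point is that no edge ever "computes" the global cost or its gradient; each update depends only on physical quantities measurable at that edge, which is what makes such rules candidates for implementation in real materials.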




Curious agents and reinforcement learning

Humans and animals exhibit “curious” behaviors, actions that do not seem to have an immediate benefit for their survival or well-being. Such behaviors are thought to promote learning new things about the environment or allow the actor to adapt to changes in the world. Based on ideas from reinforcement learning and statistical physics, my research attempts to more precisely define curious behaviors and search for the appropriate circumstances in which curiosity is a useful strategy.


Physics of self-folding sheets

In Chicago, I studied self-folding origami. Such structures can be easily actuated to exhibit interesting low-energy motions. Disordered self-folding origami constitutes an interesting "glassy" system where different questions relating to mechanical design and learning can be tackled in experimentally realizable origami sheets. My research showed that actually folding self-folding origami may be harder than expected, and demonstrated how one might rescue self-foldability using stiff joints.


Simple glass-forming models

At Tel-Aviv University, my research focused on the theory of the glass transition in simple models. We studied a square lattice gas model with hard repulsive interactions up to the 3rd nearest neighbor (N3 model) analytically and numerically. The research identified and characterized unexpected hurdles in simulating lattice gases in the supercooled liquid regime, and demonstrated the advantages of some simulation techniques over others.
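For readers unfamiliar with the model, a minimal grand-canonical Monte Carlo sketch of such a hard-core lattice gas is given below. Particles on a periodic square lattice exclude all sites up to their third-nearest neighbors (displacements with squared distance 1, 2, or 4); single-site insertion and deletion moves are accepted with the standard Metropolis probabilities set by a chemical potential. The lattice size, chemical potential, and sweep count are illustrative assumptions, not the parameters of the actual study.

```python
import numpy as np

# Minimal sketch of the N3 hard-core lattice gas: exclusion of 1st,
# 2nd and 3rd nearest neighbors on a periodic square lattice
# (illustrative parameters only).

rng = np.random.default_rng(1)

L = 20                          # periodic L x L lattice
occ = np.zeros((L, L), dtype=bool)
beta_mu = 2.0                   # chemical potential in units of k_B T

# Displacements blocked by the hard core: squared distance 1, 2 or 4.
EXCLUDED = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)
            if 0 < dx * dx + dy * dy <= 4]


def allowed(occ, x, y):
    """True if inserting a particle at (x, y) violates no exclusion."""
    return not any(occ[(x + dx) % L, (y + dy) % L]
                   for dx, dy in EXCLUDED)


for sweep in range(200):
    for _ in range(L * L):
        x, y = rng.integers(L), rng.integers(L)
        if occ[x, y]:
            # Deletion: Metropolis acceptance exp(-beta * mu).
            if rng.random() < np.exp(-beta_mu):
                occ[x, y] = False
        elif allowed(occ, x, y):
            # Insertion: Metropolis acceptance min(1, exp(beta * mu)).
            if rng.random() < min(1.0, np.exp(beta_mu)):
                occ[x, y] = True

rho = occ.mean()
print(f"density = {rho:.3f}")   # bounded above by 1/5 (close packing)
```

Even this toy version hints at the difficulty: at large chemical potential the single-site dynamics becomes very slow, which is one face of the sampling hurdles that motivate comparing simulation techniques in the supercooled regime.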