**Human graph learning and information processing**

Humans are adept at uncovering abstract associations in the world around them, whether between words in language, notes in music, or even concepts in classroom lectures. Most research in cognitive science has focused on people’s capacity to learn the probabilities of transitions between items in a sequence. However, recent results indicate that people are also sensitive to variations in the large-scale network structure of transitions.

To understand how people detect the topological features of transition networks, we propose a simple hypothesis: that rather than being the result of a complex mental process, humans infer network structure through natural errors in learning and memory. Combining ideas from information theory and reinforcement learning, we construct a maximum entropy model of people’s internal representations of the transitions between concepts or stimuli. The model itself derives from the free energy principle in statistical mechanics, which computational neuroscientists have recently come to recognize as a useful tool for formalizing abstract assumptions about the function of the brain. Importantly, our model (i) affords a concise analytic form, thereby aiding intuition; (ii) qualitatively explains the effects of transition network structure on human expectations; and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways and have direct implications for the study and design of optimally learnable sources of information.
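The model's exact analytic form is given in the paper; purely as an illustration of the flavor of such models, a learner whose memory of event timing decays geometrically would effectively average over walks of all lengths, producing expectations proportional to a discounted sum of powers of the true transition matrix. The discount parameter `eta`, the closed form, and the example network below are illustrative assumptions, not the model from the paper:

```python
import numpy as np

def fuzzy_representation(A, eta):
    """Illustrative 'blurred' representation of a transition matrix A:
    A_hat = (1 - eta) * sum_k eta^k A^(k+1),
    which has the closed form (1 - eta) * A @ inv(I - eta * A)."""
    n = A.shape[0]
    return (1 - eta) * A @ np.linalg.inv(np.eye(n) - eta * A)

# A ring of 4 nodes, each hopping to either neighbor with probability 1/2
A = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
A_hat = fuzzy_representation(A, eta=0.3)
print(A_hat.round(3))
print(A_hat.sum(axis=1))   # rows still sum to 1
```

Because each power of a row-stochastic matrix is itself row-stochastic and the weights sum to one, the blurred representation remains a valid transition matrix while mixing in longer-range network structure.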

Given an understanding of how humans learn and represent transition networks, one can begin to study how much information a sequence of items conveys to a human observer. While the amount of information produced by a sequence (its entropy) depends only on individual transition probabilities, we demonstrate that the amount of information a human perceives depends systematically on the network structure of transitions. We develop an analytical framework to study the information generated by a system as perceived by a human observer. Applying our framework to several real networks, we find that they communicate a large amount of information (having high entropy) and do so efficiently (maintaining low divergence from human expectations). Moreover, we show that such efficient communication arises in networks that are simultaneously heterogeneous, with high-degree hubs, and clustered, with tightly-connected modules — the two defining features of hierarchical organization. Together, these results suggest that many real networks are constrained by the pressures of information transmission, and that these pressures select for specific structural features.
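The standard Markov-chain quantities behind this kind of analysis can be sketched as follows (the specific measures and networks studied in the paper may differ; the uniform expectation `Q` here is purely illustrative):

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a row-stochastic matrix (pi P = pi)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def entropy_rate(P):
    """Information produced per step: -sum_i pi_i sum_j P_ij log2 P_ij."""
    pi = stationary(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    return -pi @ terms.sum(axis=1)

def perceived_divergence(P, Q):
    """Extra surprise for an observer expecting Q: KL(P || Q) per step."""
    pi = stationary(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P / Q), 0.0)
    return pi @ terms.sum(axis=1)

# Random walk on a complete graph of 4 nodes, versus a maximally fuzzy observer
n = 4
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
Q = np.full((n, n), 1 / n)
print(entropy_rate(P))               # log2(3) ≈ 1.585 bits per step
print(perceived_divergence(P, Q))    # log2(4/3) ≈ 0.415 bits per step
```

In this language, an efficient network is one whose walk keeps the first quantity high while its structure keeps the second quantity low for a human-like observer.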

References:

**Christopher W. Lynn** and Danielle S. Bassett. Graph learning: How humans infer and represent networks. In revision, *Proceedings of the National Academy of Sciences* (arxiv.org/abs/1909.07186).

**Christopher W. Lynn**, Lia Papadopoulos, Ari E. Kahn, and Danielle S. Bassett. Human information processing in complex networks. In revision, *Nature Physics* (arxiv.org/abs/1906.00926).

**Christopher W. Lynn**, Ari E. Kahn, Nathaniel Nyema, and Danielle S. Bassett. Structure from noise: Mental errors yield abstract representations of events. Accepted, *Nature Communications* (arxiv.org/abs/1805.12491).

**Collective human activity emerges from pairwise correlations**

Modern life is rife with examples of correlated human behavior, from spikes in online activity to traffic jams on the highway. But where do these surges in human activity come from? Existing explanations have focused primarily on context-specific mechanisms, such as daily and weekly routines giving rise to traffic jams and natural disasters sparking increases in demand for emergency services. Here, we propose and test a more general explanation: that surges in human activity emerge organically from fine-scale correlations between pairs of individuals within a population.

To investigate the flow of information from the scale of individual humans to an entire population, we adapt and extend ideas from information theory and statistical mechanics. The result is a maximum entropy model that provides accurate quantitative predictions of large-scale human behaviors based only upon the simple pairwise correlations between individuals. The accuracy of the maximum entropy model, in turn, suggests that patterns of activity across an entire human population can be understood as emerging from an aggregation of simple pairwise correlations, fundamentally shifting the way that we perceive many collective human behaviors.

By harkening back to centuries-old intuitions from statistical physics, these results open the door for new predictive models of human activity on large scales, with implications for resource allocation in communication and transportation networks, understanding social organization, and preventing viral epidemics.

References:

**Christopher W. Lynn**, Lia Papadopoulos, Daniel D. Lee, and Danielle S. Bassett. Surges of collective human activity emerge from simple pairwise correlations. *Physical Review X* 9, 011022 (2019).

**Control of Ising networks**

The last 10 years have witnessed a dramatic increase in the use of maximum entropy models to describe a diverse range of real-world systems, including networks of neurons in the brain, flocks of birds in flight, and interactions between proteins. Broadly speaking, the maximum entropy principle allows researchers to formalize the hypothesis that large-scale patterns in complex systems emerge organically from an aggregation of simple fine-scale interactions between individual elements. Given the wealth of real-world systems that are quantitatively described by maximum entropy models, it is of fundamental practical and scientific interest to understand how external influence affects the dynamics of these systems. Fortunately, all maximum entropy models are similar, if not formally equivalent, to the Ising model, a foundational model in statistical physics.

As a first step toward understanding how to control such complex systems, we study the problem of maximizing the total activity of an Ising network given a budget of external influence. To solve this problem, which we call *Ising influence maximization*, we propose a gradient ascent algorithm. Interestingly, the gradient in our algorithm is equivalent to the susceptibility of the system in statistical mechanics, and, in theory, can be calculated using the foundational fluctuation-dissipation theorem. From a practical perspective, however, calculations in the Ising model require an amount of time that is exponential in the size of the system, making an exact computation of the susceptibility infeasible in most real-world settings. To address this problem, we propose a sequence of approximations to the susceptibility based on the Plefka expansion that achieve progressively more accurate solutions to the Ising influence maximization problem. Moreover, we find that the structure of optimal stimulation undergoes a transition, from focusing influence on central hubs in the Ising network at high temperatures to peripheral nodes at low temperatures. This final result suggests that optimal control strategies for complex systems depend critically on the strength of random fluctuations in the interactions between elements.
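A minimal sketch of this scheme, using the naive mean-field approximation rather than the higher-order Plefka/TAP corrections developed in the papers: solve the mean-field equations for the magnetizations, take column sums of the mean-field susceptibility as the gradient, and project the external fields back onto the budget constraint. All parameters below are illustrative:

```python
import numpy as np

def mean_field_m(J, h, beta, iters=500):
    """Damped fixed-point iteration for m = tanh(beta (J m + h))."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m + h))
    return m

def mf_gradient(J, h, beta):
    """d(sum_i m_i)/dh: column sums of the mean-field susceptibility
    chi = beta (I - beta D J)^{-1} D, with D = diag(1 - m^2)."""
    m = mean_field_m(J, h, beta)
    D = np.diag(1 - m**2)
    chi = beta * np.linalg.solve(np.eye(len(h)) - beta * D @ J, D)
    return chi.sum(axis=0)

def project_simplex(v, H):
    """Euclidean projection onto {h >= 0, sum_i h_i = H}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - H
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def ising_influence_max(J, beta, H, steps=200, lr=0.5):
    """Projected gradient ascent on the total mean-field magnetization."""
    h = np.full(J.shape[0], H / J.shape[0])
    for _ in range(steps):
        h = project_simplex(h + lr * mf_gradient(J, h, beta), H)
    return h

# Star network: node 0 is the hub, nodes 1-4 are peripheral
J = np.zeros((5, 5))
J[0, 1:] = J[1:, 0] = 1.0
h_opt = ising_influence_max(J, beta=0.2, H=1.0)
print(h_opt.round(3))   # at high temperature the budget concentrates on the hub
```

On this toy star network at high temperature (small `beta`), the procedure concentrates the entire budget on the hub, consistent with the transition described above.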

References:

**Christopher W. Lynn** and Daniel D. Lee. Maximizing Activity in Ising Networks via the TAP Approximation. In *Association for the Advancement of Artificial Intelligence* (2018).

**Christopher W. Lynn** and Daniel D. Lee. Statistical Mechanics of Influence Maximization with Thermal Noise. *EPL (Europhysics Letters)* 117.6: 66001 (2017).

**Christopher W. Lynn** and Daniel D. Lee. Maximizing Influence in an Ising Network: A Mean-Field Optimal Solution. In *Advances in Neural Information Processing Systems* (2016).

**Simulating channeling radiation at Fermilab**

I spent the summer of 2013 at Fermilab working with Tanaji Sen. Together, we developed simulation software to predict the results of electron-channeling experiments at the LINAC at Fermilab (previously named ASTA). In the experiments, a tight beam of electrons is fired parallel to the carbon planes of a diamond. Once the electrons enter the diamond, they become trapped in the inter-planar potential and hop between the induced quantum excited states. When electrons hop from high- to low-energy states, they release radiation that is Lorentz-boosted into the X-ray spectrum in the direction of the electron beam. Given the structure of the beam and the diamond crystal, our simulations were able to accurately predict the spectrum of the resulting channeling radiation from first principles.
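As a rough illustration of the Lorentz boost (not the first-principles simulation itself), the textbook forward-boost relation E_γ ≈ 2γ²ΔE/(1 + γ²θ²) maps eV-scale transitions between transverse states into the X-ray band; the beam energy and transition energy below are placeholder values, not numbers from the experiment:

```python
def channeling_photon_energy(e_beam_mev, delta_e_ev, theta_rad=0.0):
    """Photon energy (in eV) for a transition of energy delta_e_ev,
    Lorentz-boosted along the beam at observation angle theta_rad:
    E_gamma ≈ 2 gamma^2 dE / (1 + gamma^2 theta^2)."""
    gamma = e_beam_mev / 0.511   # electron rest energy ≈ 0.511 MeV
    return 2 * gamma**2 * delta_e_ev / (1 + (gamma * theta_rad)**2)

# A ~50 MeV beam and an eV-scale transition land in the hard X-ray band,
# and the photon energy falls off away from the beam axis
print(channeling_photon_energy(50.0, 5.0) / 1e3, "keV on axis")
print(channeling_photon_energy(50.0, 5.0, theta_rad=0.01) / 1e3, "keV off axis")
```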

References:

Tanaji Sen and **Christopher Lynn**. Spectral Brilliance of Channeling Radiation at the ASTA Photoinjector. *International Journal of Modern Physics A* 29.30: 1450179 (2014).

Ben Blomberg, Daniel Mihalcea, Harsha Panuganti, Philippe Piot, Charles Brau, Bo Choi, William Gabella, Borislav Ivanov, Marcus Mendenhall, **Christopher Lynn**, Tanaji Sen, and Wolfgang Wagner. Planned High-brightness Channeling Radiation Experiment at Fermilab’s Advanced Superconducting Test Accelerator. In *International Particle Accelerator Conference* (2014).