I am a fifth-year graduate student in the Department of Psychology at the University of Pennsylvania working with John Trueswell and Russell Epstein.
I am broadly interested in the link between what we see and how we conceptualize it.
More specifically: Even though we receive a continuous stream of sensory information from the world, our minds interpret the input in terms of discrete things happening (i.e., events). Importantly, events are more than discrete atomic entities; they have an inherent internal structure involving the spatial, temporal, and causal relationships between people and objects. For example, recognizing a soccer star biting his opponent is a complex process that quite literally involves many moving parts (see photo on the right).
My goal is to understand event cognition by integrating the study of events in psycholinguistics, vision, and cognitive neuroscience, drawing on the methods of each field.
In several behavioral experiments, I have found evidence that the extraction of event structure (who is doing what to whom) from visual scenes is both rapid and automatic. This is important because it shows that the human visual system is continuously engaged in extracting the structure of what is happening in the world, even when this information is not explicitly attended.
In other work using fMRI, I have identified brain regions that contain neural codes for action categories (e.g., kicking, pushing) that are consistent across formats of visual input, e.g., whether one views kicking in full-motion videos (in which the entire action sequence can be observed) or in visually varied still photographs (from which the action sequence must be inferred). This is important because it shows that, just as in linguistic descriptions of actions, there exist representations of actions elicited by visual input that are invariant to incidental visual features, such as the actors, objects, scene context, and the presence or absence of dynamic motion information.
In ongoing work (stay tuned!):
- Using fMRI, I am investigating the neural encoding of theoretically relevant semantic properties of events (such as causation, motion, and state change) elicited by both linguistic and visual input.
- Along with Julian De Freitas, I am investigating how event information that is rapidly available in visual scenes (such as who did what to whom, and whether harm occurred) contributes to moral judgments, and whether we can gain insight into the processing of this morally relevant information by comparing human performance with that of a deep convolutional neural network model trained only to recognize object categories.
Collaborators:
Anna Papafragou (Professor, Univ. of Delaware)
Julian De Freitas (Graduate Student, Harvard University)
Brent Strickland (Assistant Professor, Ecole Normale Superieure/Institut Jean Nicod)