
How is non-literal meaning encoded into words?
Jeremy Zehr is a linguist who studies pragmatics and language processing. Some of the fundamental questions he is interested in include:
- How do we distinguish between literal and non-literal meanings?
- Are our brains equipped with distinct mechanisms to process different types of meaning?
- Do we make that distinction at the level of individual words?
- How do literal and non-literal meanings interact when we compute the meaning of sentences in context?
- As we learn new words in context, how do we identify what should go into their literal vs. non-literal meanings?
With financial support from the University Research Foundation and the ILST, Dr. Zehr has developed a solution for web-based experiments, PennController for IBEX, providing researchers with the tools to meet the technical requirements of the tasks they need to design. Investigating non-literal meaning often consists in measuring reaction and response times when presenting stimuli that support literal vs. non-literal information differently. In Florian Schwarz's Lab, he has conducted several eye-tracking experiments investigating the online processing of presuppositions.

The project is supported by a URF grant and the funding of the ILST position, and it is now used by over 200 researchers. Jeremy has also given workshops on PennController for IBEX at multiple venues, including a webinar for the Linguistic Society of America and a lecture at the LSA Summer Institute.

Selected Publications
Siegel, M., Zehr, J., Bacovcin, H. A., Schofield Steuerle, L., & Schwarz, F. (2018). "The Verbatim Access Effect: Implicature in Experimental Context." Language and Cognition, 10(4), 595–625.