Many of the people in our lab are interested in understanding how children — and adults — learn the meanings of words, and how they learn the syntax of their language so as to combine those words into sentences that convey more complex thoughts and ideas. Much of our research uses methods designed to study the moment-by-moment mental processes that comprise real-time language understanding. Most notably, our lab has developed a set of child-friendly methods for recording the eye movements of children as they hear spoken descriptions of their surroundings. This ‘visual world’ approach to studying the act of listening (and speaking) allows us to obtain a moment-by-moment record of what a child considers to be the referents of an utterance as the speech unfolds over time. In some of these studies we give children sentences that could have more than one meaning and see which meaning they pick on the fly. In this way we gain insight into what they already know about the language and, importantly, how they process language. Similarly, by presenting children with utterances containing novel words (“Oh, she’s blicking the ball!”), we can explore what information in the sentence the child might be using to learn the meaning of the new word.

Where we do our research

We conduct research at our lab on Penn’s campus and at area preschools that work with us. Parents sign up for the research and give permission for their child to participate. We then visit the school to conduct the study, or the parent visits our lab with their child. If you would like to participate, please click here.

How we do our research

Most of our studies are brief, lasting about 10 to 20 minutes. A member of our research team would first get acquainted with your child, often by playing a simple game. Once your child is relaxed and comfortable, we would begin the actual study. For this, your child would likely watch videos on a computer screen or play with toy objects on a table. We often record children’s eye movements using a special computer screen that has high-speed cameras embedded in the panel. For example, in the picture below, a participant is watching a video on our Tobii eyetracker.

We then analyze the eye movement data collected from many children to learn something about how children typically process speech. We do not test each child’s individual abilities, nor do we provide linguistic assessments. Instead, your child’s participation will help us learn more about how children acquire and use language.

Example from a Study

In the video below, the speaker on the screen uses a made-up verb to describe the scene: “He’s biffing him.” This sentence can be interpreted from different perspectives: either the dog’s perspective (“The dog is chasing the man,” so “biff” means something like “chase”) or the man’s perspective (“The man is running away from the dog,” so “biff” means something like “flee”). This experiment was designed to see how the speaker influences the listener’s interpretation of the meaning of new words. The blue dot and its trail show the path of the watcher’s/listener’s eye gaze. As you can see, her gaze starts at the speaker and moves to examine the different actors in the scene. Although you cannot hear it in the video, she then tells us what she thinks “biffing” means.

Another example

In other studies we record where the child is looking as he or she hears spoken instructions to move objects about on a platform. A videotape is made of the child’s eyes and face as the instructions are carried out, and the video is later coded by hand for gaze direction and word onsets.