Current Projects
Who did it to whom? Argument roles in sentence comprehension & production
Comprehenders sometimes show initial blindness to argument roles, or who did it to whom, when predicting upcoming verbs in real time (e.g., …which waitress the customer *served). However, when asked to say their predictions aloud under time pressure, people generate responses that are overwhelmingly role-sensitive. This apparent contrast between comprehension and production measures has motivated a line of work in which I try to identify the sources of role-(in)sensitivity and to clarify whether the observed variability arises from shared or distinct mechanisms in comprehension and production.
Collaborators: Dr. Masato Nakamura (Saarland University, Germany), Dr. Colin Phillips (University of Oxford, UK; University of Maryland, USA)
Click for related preprint

Context-driven lexical prediction in children and adults
“Race the Robot” @ Planet Word Museum, Washington D.C.
In partnership with the Planet Word Museum, a language museum in Washington, D.C., our “Planet Cloze” team is conducting a series of speeded cloze studies on-site at the museum, where visitors voluntarily play our “Race the Robot” language science game. The museum setting has given us the opportunity to examine understudied populations, including school-age children, who, like adults, appear to rely on a race-like mechanism when predicting upcoming words from context, though their developing vocabulary and world knowledge sometimes lead to different candidate profiles.
Collaborators: Katherine Howitt (University of Maryland, USA), London Dixon (University of Maryland, USA), Dr. Masato Nakamura (Saarland University, Germany), Dr. Tal Ness, Dr. Colin Phillips (University of Oxford, UK; University of Maryland, USA), Language Science Station at Planet Word (a, b)
Click for related preprint
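The race-like mechanism mentioned above can be illustrated with a toy simulation (this is a sketch for intuition, not our experimental code; the candidate words and activation rates below are invented): each candidate word accumulates evidence in parallel, and the first to finish is produced as the speeded-cloze response. With exponentially distributed finishing times, a candidate's share of responses tracks its relative rate.

```python
import random
from collections import Counter

def race_trial(rates, rng):
    """One trial: each candidate's finishing time is an exponential draw
    with mean 1/rate; the fastest candidate is the produced response."""
    times = {word: rng.expovariate(rate) for word, rate in rates.items()}
    winner = min(times, key=times.get)
    return winner, times[winner]

def simulate(rates, n_trials=10_000, seed=0):
    """Proportion of trials each candidate wins the race."""
    rng = random.Random(seed)
    counts = Counter(race_trial(rates, rng)[0] for _ in range(n_trials))
    return {word: counts[word] / n_trials for word in rates}

if __name__ == "__main__":
    # Hypothetical activation rates for "The children went outside to ..."
    rates = {"play": 3.0, "run": 1.0, "swim": 0.5}
    print(simulate(rates))
```

Under these assumptions, strongly supported candidates dominate responses without fully suppressing weaker ones, which is the qualitative pattern a race account predicts for cloze distributions.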

Role-reversal anomalies in large language models
Role-reversal illusions (i.e., swapping who did what to whom) provide fertile ground for comparing large language models’ behavior against humans’. There are various reasons to suspect that language models will diverge from humans to some extent. Our findings, based on surprisal, representational probing, and attention analyses of GPT2 and other pre-trained language models, indicate that the models show human-like processing behavior at the surface level, but, when rigorously tested with psycholinguistic methods, they do not show the same systematic patterns across sentence types as humans. While the parallels between models and humans suggest that some patterns arise naturally from statistical co-occurrences, the significant differences indicate that the systematic patterns in human behavior are driven by additional cognitive mechanisms (e.g., memory retrieval, inhibition of contextually inappropriate words). This can inform psycholinguistic theories that aim to better understand which factors contribute to successful sentence processing in humans.
Collaborators: Sathvik Nair (University of Maryland, USA), Dr. Naomi Feldman (University of Maryland, USA)
EMNLP Findings Paper: http://arxiv.org/abs/2410.16139
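For readers unfamiliar with the surprisal measure: a model's surprisal for a word is the negative log probability it assigns to that word in context, so a continuation the model finds unexpected receives higher surprisal. A minimal sketch (the probabilities here are made-up toy numbers, not actual GPT2 output):

```python
import math

def surprisal(prob, base=2):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log(prob, base)

# Hypothetical next-word probabilities for the verb "served" in two contexts
# (numbers invented for illustration). A role-sensitive processor should
# assign lower probability, hence higher surprisal, to the reversed version.
p_canonical = 0.10   # "...which customer the waitress served"
p_reversed  = 0.02   # "...which waitress the customer served"

print(f"canonical: {surprisal(p_canonical):.2f} bits")
print(f"reversed:  {surprisal(p_reversed):.2f} bits")
```

In practice, per-token probabilities would come from a language model's softmax output, and the size of the surprisal difference between canonical and reversed sentences is what gets compared against human behavioral measures.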

Production of possessive pronouns
Do syntactically irrelevant elements interfere with the processing of dependencies between words during sentence production? Previous studies have shown that the grammatical number or gender of other nouns in a sentence can interfere with the selection of the correct pronoun in real-time production. In ongoing work, we use naturalistic pictures to elicit sentences describing possessive relations (e.g., Susan chased her grandpa/grandma) and examine whether the gender of the pronoun-irrelevant possessee noun interferes with the selection of the possessive pronoun in English. Unlike with other types of dependencies, we find only weak evidence in accuracy and timing measures for interference in the production of possessive pronouns.
Collaborators: Dr. Sol Lago (Goethe University Frankfurt, Germany)

Previous Projects

Agreement Attraction in L2 Korean learners of English
Collaborators: Dr. Colin Phillips (University of Oxford, UK; University of Maryland, USA)

Morpho-syntactic sensitivity in L2 real-time sentence processing
Collaborators: Dr. Michael Long (University of Maryland, USA)

A corpus study on the use of personal pronouns in writing by L2 Korean learners of English
Collaborators: Dr. Sun-Young Oh (Seoul National University, South Korea)

Lyrical and non-lyrical background music and sentence processing
Collaborators: Dr. Sung Eun Lee (Seoul National University, South Korea), Dr. Young Sung Kwon (Dong-A University, South Korea)

NPI illusions in GPT2
Negative polarity illusions (e.g., The journalist that no editors recommended for the assignment has ever…) provide fertile ground for comparing language model behavior against humans’. Our surprisal and attention analyses of GPT2 reveal human-like patterns of vulnerability in several respects. However, the model diverges from humans in ways that are difficult to capture with the mechanisms proposed to explain humans’ systematic susceptibility to NPI illusions.
Collaborators: Seyed Sajjad Nezhadi (University of Maryland, USA)