About the Parrots

Gabe Doyle

I’m an Associate Professor in the Department of Linguistics and Asian/Middle Eastern Languages at San Diego State University. My main research interest is computational psycholinguistics — building computer models of why we say the things we say, and how others interpret them.

I’ve worked on the tech side, with publications in machine learning and electrical engineering conferences, and on the language side, including three years as a postdoc in a child language acquisition lab at Stanford.

AI-generated image, from dream.ai, using the "Throwback" model. Prompt was: chubby white guy, very long wavy brown hair, black-rimmed eyeglasses, sunglasses on hair, synthwave palm tree background, word balloon saying "COMPUTER!!", magazine text, short beard. electronics, computers, text, nerd, glitch.

I’m frustrated by what I see as the core paradox of large language model research. LLMs really are a sea change in non-human representation of language, clearing previously insurmountable barriers in computational models of language, like long-distance dependencies. There’s a ton to be learned from how these models are, and especially aren’t, behaving like human language users. Yet most of the supposedly revolutionary applications of LLMs are things that LLMs aren’t really good for.

LLMs are great at syntactic structure, for instance; they’re almost never grammatically incorrect. They’re terrible at factual recall, though, tending to hallucinate facts and present them as confidently as a professor would. In this blog, I hope to strike a balance between addressing the real, exciting advances that cutting-edge AI models represent and popping the hype bubbles of horribly conceived AI applications.

We’ve never been able to talk to something non-human before, so we must be careful to not import the subconscious assumption “if it quacks like a duck, it’s a duck.” LLMs may sound human, but we need to keep in mind the ways that they are not — for better and worse!

Amanda Simons

I teach ESOL (English to Speakers of Other Languages, often called ESL or English as a Second Language) at San Diego College of Continuing Education, where my students are immigrants and refugees, sometimes learning their first words in English. I’m a part-time instructor there, and I started in January 2023. 

I’ve been teaching since 2012, and have taught in a few different US-based settings. My critical eye toward LLMs really emerged when I was teaching in an EAP (English for Academic Purposes) university setting in 2022. Late spring/early summer 2022 really blew my mind open: Gabe telling me about GPT-3 and Anna Mills’ tweets were my biggest influences.

I’m still navigating my use of LLMs in my setting and beyond, and that brings up all kinds of questions I hope to address in this blog, as I make connections to education policy, language discrimination, labor, and more. Stay tuned!
