
Posts Tagged ‘LLMs’

Image credit: https://www.nature.com/articles/s41586-024-07566-y

In recent discussions about language model construction, particularly with the rise of large language models (LLMs), there has been growing concern over the phenomenon of model collapse—where LLMs trained on synthetic data begin to diverge from real-world language. A recent article in Communications of the ACM, titled “The Collapse of GPT” (Savage, 2025), explores this issue and highlights its relevance to the future of AI systems. This article brought to mind work I co-authored with my colleague Peter Wyard in the late 1990s, titled “An Internet Agent for Language Model Construction.” Peter was a member of the BT Labs NLP research group, and I was working there on a Research Fellowship. At the time, we were grappling with similar challenges—how to build domain-specific language models for speech recognition systems when collecting enough in-domain data was both costly and time-consuming.


I recently had the opportunity to speak at ATLAS 2025: Language Technologies Applied to Society at the Universidad de La Rioja. It was a pleasure to return to this thoughtful and collaborative research community, having attended last year as well.

In my talk, “Reflections on Building and Deploying LLMs in Production”, I explored the complexities and challenges involved in moving large language models (LLMs) from the research phase into real-world production. From deployment strategies to ethical considerations and scalability, the journey is far from straightforward.

