Previously, on Reasonable People, we saw how the absence of reliable signals can prevent useful markets from forming. Even when there are sellers who want to sell, and buyers who would be willing to buy, if those with high-quality goods cannot signal that their goods are high quality, buyers may end up resigning themselves to making bad trades. This follows from a famous theoretical paper by Akerlof (1970), ‘The Market for Lemons’, as given life by Tom Slee’s retelling in his 2006 book on market failure, game theory and individual choice.
Then, last week on Reasonable People, we looked at signals in evolution, using the example of badgers, and sexual selection, to show how honest signals can be a powerful driver of evolution — powerful enough to cause bones to be added to, or lost from, the basic mammal skeleton. We can also borrow from evolutionary theory a definition of what is required for a signal to resist being mimicked by fakers: a signal must cost fakers more to produce than it costs honest signallers if it is to survive as a reliable indicator.
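Stated a little more formally (my notation, a Spence-style sketch rather than anything from last week's post): if b is the benefit of being believed to be high quality, c_honest the cost of producing the signal for a genuinely high-quality signaller, and c_faker the cost for a faker, then honest signalling is stable when

c_honest < b < c_faker

That is, the honest find the signal worth producing, while fakers find it too expensive to be worth faking.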
So, across economics and evolution we have a powerful set of results which show the importance of signalling in coordination games, whether those coordination games are in markets, in individual behaviour, or even in the design of bodies changing over the incomprehensible timespans of evolution. The framework of signalling theory helps us make sense of these results (a framework so important it has separate Wikipedia pages for signalling theory in evolution and in economics).
Today, I wanted to write about a new study which underscores the importance of signalling, and shows what happened in one market when honest signalling broke down. In Making Talk Cheap: Generative AI and Labor Market Signaling, Anaïs Galdin and Jesse Silbert show, month by month, how an online marketplace for freelancers changed when it became trivial for every worker to use AI to produce bespoke pitches for every job.
The market is freelancer.com. Here, people who need work done can post job specifications, and the pool of available workers bid for the job, each stating a price and providing a description of their experience and suitability for the job: their “proposal”.
These proposals are signals from the workers, who we assume vary in actual skill and experience, to the buyers, who we assume want to hire the best workers at a price they can afford. Buyers want every hint they can get, before committing to hire, of exactly how good the different freelancers are.
It’s a busy, competitive market, ideal for analysis of the forces at play which affect behaviour on both sides. Workers in their sample were bidding between $30 and $250 to complete the work. The research looked at 61,000 job postings, and 2.7 million applications from 212,000 unique workers, between 2021 and 2024.
This platform allows Galdin & Silbert to measure both the time it took each worker to submit their proposal, and the proposal text (as well as the job specification). Many jobs receive multiple responses very quickly, and for their analysis Galdin & Silbert restrict their data to proposals which are submitted within 12 minutes.
Some workers will be a good fit for a particular job - some bit of experience or special skill - which they can mention in their proposal, perhaps justifying a higher price but, crucially, signalling their higher quality. Other workers, those without special skills, are tempted to bid for every job, spamming a copy/paste of their CV into as many applications as possible, as quickly as possible. So proposals are a vital signalling mechanism: they allow workers to convey their quality, and allow those hiring to discern it.
Or at least, they were.
In November 2022 ChatGPT was launched, and in April 2023 freelancer.com launched its own AI tool, integrated into the site, to help workers write proposals. The effect was immediate. Proposals got longer — the average proposal pre-LLMs was less than a hundred words. Post-LLMs it was double that.

To advance their analysis, Galdin & Silbert develop a measure of the effort a worker puts into their proposal. This captures things like quality of writing, corrects for copy-pasting behaviour, and weights more highly indicators that the worker has customised their proposal to the specific job they are applying for: that they have tailored the proposal to the job post rather than relying on boilerplate language, that the proposal shows they understand the task described and what it will involve, and that they have answered any specific questions asked in the job post. Attention and effort applied to the job post are qualities by which — pre-LLMs — a worker might signal their ability to actually do the work.
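To give a flavour of what "customised to the job" might mean computationally, here is a toy sketch of my own (not the authors' actual measure, which is far richer): score a proposal by how much of the job post's distinctive vocabulary it echoes back.

```python
# Toy "tailoring" score: fraction of the job post's distinctive words
# that reappear in the proposal. Purely illustrative, not the paper's method.

STOPWORDS = frozenset("a an and the to for of in on with is are i you we have".split())

def tailoring_score(job_post: str, proposal: str) -> float:
    def words(text: str) -> set:
        # Lowercase, strip trailing punctuation, drop common stopwords.
        return {w.strip(".,!?") for w in text.lower().split()} - STOPWORDS - {""}
    job_words = words(job_post)
    if not job_words:
        return 0.0
    return len(job_words & words(proposal)) / len(job_words)

job = "Build a scraper for real-estate listings using Python and Scrapy"
generic = "I am an expert developer with 10 years experience. Hire me!"
tailored = "I have built Scrapy scrapers for real-estate listings before, in Python."

print(round(tailoring_score(job, generic), 2))   # 0.0
print(round(tailoring_score(job, tailored), 2))  # 0.57
```

Even this crude word-overlap version separates a boilerplate pitch from a tailored one; the point of the paper is that LLMs let every worker max out measures like this at near-zero cost.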
When LLMs are introduced, the signal value of proposals changes: the average leaps, and continues to climb as more and more workers adopt LLMs into their process (and as the LLMs themselves improve):

So from the hiring perspective, on the surface everything looks better. The market is now full of workers providing high-quality applications, tailoring their proposals to the job and addressing all the specific requests made in the job post.
But while the quality of the proposals has improved, their reliability as signals has collapsed. Proposals no longer indicate the effort the worker put into applying for the job, nor do they have much connection to the worker’s underlying skill and experience, with deleterious consequences. Pre-LLMs, those workers submitting high-quality proposals were more likely to be hired. Post-LLMs, proposal quality has no relation to the probability of being hired:

Everyone is submitting high-quality proposals, so proposal quality stops helping with hiring decisions. Neither workers nor employers can rely on the proposal as a signal of ability any more.
Using the effort workers put into their proposals, and the rates they charge, the researchers construct a measure of workers’ beliefs about their own ability — the crucial thing they are trying to signal to get hired. Post-LLMs, the highest-ability workers are less likely to get hired, a 19% drop in hiring for those in the top 20% of the worker pool. The reverse happens for workers in the bottom 20% of the pool, who are 14% more likely to be hired.
In the market overall, wages drop 5% and there’s also a dip in the overall number of jobs which successfully find workers. Despite the price drop, the authors conclude that the market overall is less efficient, with both workers and employers losing out (although most of the loss is borne by workers, who now are competing harder without access to one reliable way of indicating their ability). The LLMs push the market to a new equilibrium which favours low-ability workers over high-ability ones.
The authors conclude:
these results imply that many markets that rely on costly written communication may face significant welfare and meritocratic threats from generative AI’s ability to cheaply produce expertly-written text.
Much is made of the potential productivity gains of LLMs. This paper shows how, if they affect our ability to reliably signal underlying effort, ability or intentions in text, LLMs may also lead to losses. Even though each individual is helped by the LLM, and so is notionally more productive, the market as a whole can become less functional.
Disruption does not have to be permanent, of course. Employers or marketplaces can adapt. They can develop new screening mechanisms — perhaps even using LLMs to help design more discriminating application criteria. One mechanism suggested by Galdin & Silbert, which I think is intriguing, is that these changes incentivise exploratory contracts: short-term hires which allow employers to discover worker ability from how they actually perform, avoiding the need to rely on pre-hire signals.
Expect to see such disruption, and counter-measures to try to adapt, in all domains where we’ve become used to relying on text as a signal of effort and ability.
This newsletter is free for everyone to read and always will be. If you can afford it, feel free to chip in to help me keep writing for everyone: upgrade to a paid subscription (more on why here).
Below, references, and other things I’ve been thinking about.
References
The paper: Galdin, A., & Silbert, J. (2025). Making Talk Cheap: Generative AI and Labor Market Signaling. arXiv preprint arXiv:2511.08785.
Thanks to T. who shared this paper with me, after the 3rd Workshop on Funding of Science and Innovation in Como, Italy.
The website freelancer.com
Context (from me): #1 The Surprising Deceptions of Individual Choice (about the Market for Lemons and the importance of signalling to cooperation). #2 Let’s talk about the badger (about the importance of honest signalling in evolution. Honest).
The Professor is in visiting
I am now a Visiting Professor at the Department of Computer Science and Technology, University of Cambridge. I’ll be continuing the work on group decision making, dialogue and language models with Andreas Vlachos and others there.
I’ll also be giving a seminar on the 20th of May, so if you’re in town please come and say hi.
Reasoning in groups, with human and artificial agents
In the right circumstances groups can use deliberation to outperform the ability of each individual group member. This talk will review work done in collaboration with Prof. Vlachos and other colleagues in the Department which looks at when, and how, the benefits of group deliberation can manifest, revealing insights into both human psychology and the power of argument exchange. Ultimately the ambition is to design artificial dialogue agents which can positively contribute to, and so enhance, group discussions.
On a separate topic, I’ll be presenting at UCL on the 8th of June as part of the Behavioural Data Science Seminar series (title ‘Quantifying the benefits of using decision models with response time and accuracy data’).
In general, I put up talks I’m about to give (and slides from talks I have given) here.
… And finally
Comments? Feedback? Honest signals of effort? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online
AI declaration: I write all the words and think all the thoughts myself. I ask Gemini to check for spelling and grammar, and this time I asked it a technical question on the paper about how worker ability was defined (it was helpful). Prompts available on request.