The main reason for visiting the Photographers’ Gallery at the moment is to see the hugely enjoyable exhibition of Dennis Morris’s wonderful photos of 1970s society, reggae and punk rock. But if you’re there it’s worth making the effort to check out this much more challenging installation by contemporary photo-artist Felicity Hammond.
As ‘V3’ suggests, this is the third in a series of four installations, all concerned with contemporary issues around the brave new world of digital imagery and artificial intelligence, and their real-world costs and implications.
The key concept is ‘model collapse’, which has at least two meanings.
AI model collapse
In the digital realm it refers to the progressive deterioration in the quality of AI outputs. First-generation AI is trained on all the content of the internet (which contains plenty that is imperfect or misleading). The AI then generates a new generation of content which contains all the errors it inherited and adds countless ‘hallucinations’ and errors of its own. The next generation is then trained on a body of data which contains a large amount of errors, and in turn generates fresh errors. Thus the introduction of artificial intelligence tools will inevitably and unstoppably lead to the degradation of information on the internet.
With conscious irony, here’s a definition of model collapse generated by Google AI:
Model collapse in AI refers to the phenomenon where generative models, trained on their own or other models’ outputs (synthetic data), degrade in performance over time. This degradation manifests as reduced diversity, increased bias, and ultimately, the model producing nonsensical or repetitive outputs. Essentially, the model ‘learns’ to imitate its own errors, leading to a decline in its ability to accurately represent the original data distribution.
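For the technically minded, the feedback loop described above can be sketched in a few lines of Python. This is a toy illustration, not anyone’s actual training pipeline: we repeatedly fit a simple Gaussian ‘model’ to samples drawn from the previous generation’s model, and watch its diversity (its standard deviation) dwindle across generations. That shrinkage is the statistical essence of model collapse.

```python
import random
import statistics

def simulate_collapse(generations=300, n=50, seed=0):
    """Toy model collapse: each generation's 'model' (a Gaussian) is
    refitted on n samples drawn from the previous generation's model.
    Fitting errors compound, and the fitted spread drifts toward zero."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the 'real' data distribution
    history = [sigma]
    for _ in range(generations):
        # the current model generates synthetic data...
        synthetic = [rng.gauss(mu, sigma) for _ in range(n)]
        # ...and the next model is trained purely on that synthetic data
        mu = statistics.fmean(synthetic)
        sigma = statistics.pstdev(synthetic)
        history.append(sigma)
    return history

history = simulate_collapse()
print(f"spread after 0 generations:   {history[0]:.3f}")
print(f"spread after 300 generations: {history[-1]:.3f}")
```

Run it and the spread of the fitted distribution steadily narrows: each generation ‘learns’ only what the previous generation happened to output, so the tails of the original distribution are progressively lost, which is the ‘reduced diversity’ the definition above describes.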
Environmental collapse
At the same time as the digital world is being irreversibly degraded so, of course, is the real world. Presumably everyone knows that making AI work requires enormous new datacentres, vast air-conditioned warehouses which use up a lot of energy and a lot of water, an increasingly precious resource in our overheating world. But there’s also the well-known mining of the rare and precious metals needed in our shiny digital gadgets, most obviously smartphones.
So ‘collapse’ has a double meaning, referring to both the collapse and degradation of quality in an AI-infested digital world, and also the environmental collapse and degradation required by our digital technologies.
As it happens there’s also a double meaning to the word ‘mining’. In the digital world, data mining refers to the process of extracting information from vast datasets (like the whole internet); but ‘mining’ also has its older meaning of referring to digging up stuff under the ground, namely the rare minerals and metals required for this technology, such as lithium.
Ditto ‘extraction’: data extraction refers, fairly obviously, to AI’s mining of the internet’s data resources, and has obviously been adapted or copied from the older real-world term which describes the extraction of actual mineral wealth…
One wall label explains that one particular form of mining exposes buried sulphides which oxidise a bright orange on contact with air, and are often washed by the water involved in mining operations into streams, rivers and lakes, creating large toxic orange swamps which kill all forms of life. These toxic orange waste dumps dominate the palette of the exhibition.
Model collapse summary
To summarise, then: ‘model collapse’ is a technical term referring to the degradation of AI information, which also echoes the physical degradation of the natural environment caused by the real-world requirements of supporting the digital realm.
Model collapse depicted in art
So what about the art? Well it comes in roughly two forms: there are relatively flat images hung on walls, and then there are a couple of big installations set back from the viewer with space in front covered in mining detritus etc. The biggest one, with huge, digitally fragmented images of orange mudpools at the back and industrial scraps and sacking scattered around in the foreground, kind of speaks for itself.
But something more complex is going on with many of her images. Basically she uses digital feedback to distort, degrade and fragment the original imagery. Grasp this simple principle and you understand most of what’s going on.
Also, this being V3, it refers back to the earlier versions, V1 and V2, so here’s a brief recap.
V1. Content Aware, 24 to 27 October 2024, Brighton
V1 was staged in Brighton in a shipping container, the kind used in their tens of thousands to move goods around the world. These standardised objects bear coded numbers, so are part of a digital system. They criss-cross the oceans above the deep-sea cables which carry all our digital data. And, seen from above and at a distance, they resemble the pixels which digital images are made out of. Hammond’s images play with all these intersections and ambiguities. So it was a kind of investigation of the global infrastructure that supports the digital economy.
‘Content Aware’ is also the name of an image editing tool. Hammond likes these puns or multiple meanings.
The installation included cameras which recorded visitors’ movements and interactions.
V2. Rigged, 13 March to 15 June 2025, Derby
‘Rigged’ is another pun in the sense that a rig can refer to the enormous structures which drill for oil and gas at sea. But it’s also the term for the structures which hold cameras in a studio setting. There’s also a connotation of the game being ‘rigged’, because Hammond used images of visitors to V1 and fed them through AI algorithms to generate ‘mean’ or ‘average’ images of humans. As you might expect, these didn’t come out too well.
V3. Model Collapse gallery
So what does it all look like?
Installations

Installation view of V3 Model Collapse by Felicity Hammond @ the Photographers’ Gallery (photo by the author)
Installation consisting of a massive photo of open-cast mining, surrounded by detailed photos, all presented at the back of a kind of sandbox of industrial detritus.

Installation view of V3 Model Collapse by Felicity Hammond @ the Photographers’ Gallery © Felicity Hammond
On the wall
Shards and fragments, visual representations of the fragmented outputs of AI and the environmental collapse involved in digital technology.

Installation view of V3 Model Collapse by Felicity Hammond @ the Photographers’ Gallery (photo by the author)
On a screen made up of 80 or so imperfect mirrors, further muddied by white smearing, hang four images. These were probably originally fairly straightforward self-portraits taken against Hammond’s emblematic green and orange designs, but they have been distorted to represent AI degradation.

Installation view of V3 Model Collapse by Felicity Hammond @ the Photographers’ Gallery (photo by the author)
Portrait of the artist, through an AI glass, distortedly
Close-up of one of those four self-portraits showing how AI mostly captures the details of the original but with inexplicable ‘hallucinations’ and distortions.
The video
Related links
- Felicity Hammond: V3 Model Collapse continues at the Photographers’ Gallery until 28 September 2025
- List of resources i.e. related articles, books and videos
- Felicity Hammond on the Photoworks website
- Variations on the Format website