On behalf of Hartwig Art Foundation, Holly Herndon let us listen through it: “What goes in, comes out”

Last week, Hartwig Art Foundation brought together artists, philosophers, builders and technologists in the De Thomas church in Amsterdam. Among them were keynote speakers Yuk Hui, Benjamin Bratton, John Gerrard, and Marina Otero, to name just a few. Under the title Digital Cosmos, the two-day forum (Friday November 21 and Saturday November 22) was organized to think through the role of art amid the rapid rise of artificial intelligence.
On Friday night, musician and artist Holly Herndon opened the program with something more expansive and intimate than your average conference presentation: a listening session, for which the church proved the perfect setting. Alone on stage but speaking on behalf of both herself and her long-time partner and collaborator Mat Dryhurst, she unfolded their latest project Starmirror – now on view at KW Institute for Contemporary Art in Berlin.


Holly Herndon and Mat Dryhurst have spent more than a decade working with machine learning inside their musical and artistic practice – long before the current wave of large models. In 2019, Herndon’s album PROTO already featured ‘Spawn’: an early AI vocal model trained through live community sessions, where a choir helped ‘raise’ the system as if it were another ensemble member.
Over the years, the pair have become key voices (pun intended) in the debate around training data, authorship, and consent. They co-founded Spawning, an organization through which they built tools such as HaveIBeenTrained.com. This site allows artists to see whether their work has been scraped into popular image datasets and to request removal. They also launched Source.plus, which lets people ‘curate’ bespoke datasets for training – selecting specific images, styles or creators rather than scraping indiscriminately.
But what might easily be missed when people summarize Herndon and Dryhurst as AI ethics figures is that their approach is first of all aesthetic. Herndon did not come to machine learning through policy, but through composition. In Amsterdam, she described her early experiments with AI as a logical consequence of her practice: as an electronic musician, she found the idea that a computer could sing with her voice and vary it infinitely immediately interesting. It felt like a natural extension of algorithmic and systems-based music, and of her long-standing interest in digital signal processing.
Crucially, those early years forced them to build their own datasets from scratch. Off-the-shelf models did not yet exist. There was nothing to ‘use’ or ‘just plug into’. If they wanted a choral model, they had to record choirs. If they wanted a voice model, they had to sing. So they did, and it fits the artists’ practice perfectly. Herndon has never been much of a sampler as a musician, as she finds sampling neither fair nor particularly creative. She prefers to generate her own material, pushing herself to come up with new things. Now, with advanced models all around, Herndon and Dryhurst remain consistent in training on their own data: “What goes in, comes out,” Herndon says during her talk. She explains that they feel it’s important to make those decisions deliberately – what do I want to put in? – because it determines the quality of the output.

During the Q&A in Amsterdam, moderator Leonardo Dellanoce (also co-host of the event together with Arthur Steiner) asked Herndon to expand on the phrase ‘beautiful data’ – a term she and Dryhurst have used in other contexts. When we talk about AI, she said, we rarely talk about the aesthetics of the dataset itself. We might debate scale, quality, overall usefulness, or gaps, but we hardly ever ask whether the training material is coherent, whether it was gathered intentionally, and whether it might even be pleasurable data.
For Starmirror and its sibling project The Call at Serpentine North, the artists treated dataset construction as a compositional task in its own right. Together with Serpentine’s Arts Technologies team, they travelled through the UK to record fifteen community choirs, from Blackburn to Belfast to Penarth, using a purpose-written songbook and a clear recording protocol. These choirs entered into a Data Trust experiment, in which members collectively decide how the resulting choral models may be used.
For Starmirror, they extended that logic. The work is not simply an installation you visit and listen in on; it is a live training environment. On several Sundays throughout the exhibition, the audience is invited to join a live process of AI training as the KW hall transforms into a recording studio where choirs, an ensemble, and visitors contribute their voices in call-and-response sessions. These recordings will form the basis of a public choral dataset used to train a Berlin AI choir, set to debut at the Kunstsammlung Nordrhein-Westfalen in Düsseldorf in 2026.
This is where ‘beautiful data’ becomes more than high-quality, personal (studio) recordings made by the artists. Here, it is also about the social form in which the data is produced: instead of using anonymous cultural ‘exhaust’ scraped from platforms, Herndon and Dryhurst are building datasets with identifiable communities and individuals, with clear agreements and statements of usage. The material becomes all the more special as a trace of a real gathering.
And so the beauty, I think, lies in two things. It is partly sonic: the range of regional accents, the blend of early church music with contemporary choral traditions, and the individual sound that each person brings to the table. But it is also procedural: the beauty lies in the decision to treat the organizing and governance of data as part of the artwork, not as a hidden technical step in the background. As the artists put it, the way you categorize and annotate recordings – by choir, by place, by piece, and so on – is itself a creative act, because it determines what the model can later ‘remember’ and recombine.
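To make that last point concrete, here is a minimal, hypothetical sketch in Python of what such annotation might look like. The field names, file paths, and consent terms are my own illustrative assumptions, not the artists’ actual schema; the point is simply that provenance travels with the audio and can later be filtered or recombined along those lines.

```python
from dataclasses import dataclass

@dataclass
class ChoralRecording:
    """One take, annotated so its provenance travels with the audio."""
    audio_path: str     # where the recording is stored (illustrative path)
    choir: str          # which community choir sang it
    place: str          # where it was recorded
    piece: str          # which songbook piece was performed
    consent_terms: str  # how the members agreed this take may be used

# A tiny, made-up dataset: every annotation is a curatorial decision.
dataset = [
    ChoralRecording("takes/001.wav", "Community Choir A", "Blackburn",
                    "Songbook piece 1", "model training and public playback"),
    ChoralRecording("takes/002.wav", "Community Choir B", "Penarth",
                    "Songbook piece 3", "model training only"),
]

# Because provenance is part of the data, curators (or the model pipeline)
# can later select and recombine by place, piece, or choir.
penarth_takes = [r for r in dataset if r.place == "Penarth"]
print([r.audio_path for r in penarth_takes])
```

However the artists actually structure their material, the sketch shows why annotation is generative rather than neutral: whatever categories exist in the metadata are the only lines along which the model can later be asked to ‘remember’.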

In Berlin, Starmirror turns KW’s halls into something they have never been before. Working with the architecture studio sub, the artists built a spatial structure in which wooden frameworks, loudspeakers and GPU-powered instruments share the same liturgical logic. One room introduces a large image model, trained exclusively on public-domain material, which continuously generates archetypal landscapes and architectures. Another focuses on the vocal works emerging from the choral datasets. At the conceptual centre of the project sits Hildegard von Bingen’s Ordo Virtutum, a 12th-century morality play in which a soul must choose between good and evil. Herndon and Dryhurst have used this piece to generate a new songbook, which in turn acts as a starting point for the AI models. The artists explicitly draw an analogy between von Bingen’s celestial hierarchies – a structured cosmos of virtues, vices and intermediaries – and today’s protocol stacks and machine-learning architectures. The comparison suggests that our contemporary ‘extended reality’ – the mixture of online feeds, algorithmic recommendations, and sensor-laden environments we move through – is not just a technical infrastructure. It is also a cosmology: a way of arranging who can see whom, who can speak, and which voices are amplified or buried.
The model is not accessed via a prompt window but via the act of singing.
By routing these questions through choral music and religious architecture, Starmirror reframes AI as a technology of coordination (instead of replacement). The installation does not picture a machine dreaming on its own, but it stages a situation in which people and models must continually attune to one another. And that staging seems to matter a lot. Herndon mentioned that visitors often tell her they don’t go to church, but they still crave a place where they can gather, talk, sing, and feel part of something larger. Herndon tells us that Dryhurst likes to say that “scrolling is for bots, and strolling is for humans.” And this makes perfect sense looking at Starmirror: instead of yet another screen-based interface, it insists on bodies in space, singing at the same time.

During the discussion, someone asked Herndon how she and Dryhurst arrived at choirs as a central motif in their work. Part of the answer was biographical for Herndon: she grew up singing in church and school choirs. But the more interesting part was structural. Choirs, she noted, are already small democracies. There is usually a conductor, but also forms of voting, consensus, (dis)agreement over repertoire, and negotiations around who sings what and when. In other words, you could say that choirs already encode a ‘grammar of governance’: different parts, unequal ranges, a need for coordination, moments of tension, and an underlying commitment to making something together. For Herndon and Dryhurst, this makes choirs an ideal microcosm [to stay in the ‘Digital Cosmos’ theme] for thinking about AI governance. Once you have several choirs, each with its own internal rules and culture, you can start asking: what would a meta-structure look like that connects them? Who decides how their voices are mixed in the model? How are credits and revenues distributed?
The artists’ Serpentine Data Trust experiments ask these questions head-on, and the same ideas run through Starmirror. When visitors in Berlin sing into the system on weekends, they are not just ‘feeding’ a neutral machine – and it is important that they fully realize this. They are entering into an arrangement in which consent, attribution and future use have been considered in advance. This is where I feel Herndon and Dryhurst’s practice stands out within AI deployment in the cultural field: many large models treat cultural data as an undifferentiated resource, ripe for extraction, whereas Herndon and Dryhurst treat it as a set of relationships that need to be maintained – almost like a client management system that has to be carefully categorized and comes with its own privacy guidelines.

One anecdote from Herndon’s talk in Amsterdam stayed with me in particular. She recalled how some choir members were hesitant to let their voices be used as training data. They were worried about what AI might ‘do’ with their sound, where it might end up, how it might be misused. Yet the same people were happily uploading their performances to YouTube. To me, this exposes exactly the gap in how we talk about data. When we post to platforms, we tend to imagine an audience – friends, family, or fans – rather than a machine. We think in terms of attention, not ‘ingestion’. But from the point of view of a model, there is no difference between a carefully produced album uploaded by the artist and a pirated upload recorded from the back of a concert hall. Both are just streams of numbers (ready to be scraped).
By using this example, Herndon reframed those anxieties. If we are feeding the internet with images, texts, recordings, and videos, it is important to understand where that material ends up (and where it might drift to). Here, the idea of ‘beautiful data’ merges with ‘accountable data’ – a distinction most uploaders are unaware of, and one that should be made clear to them. Gathering accountable data is not only about the quality of the sound or the historical depth of the references; it is also about designing conditions under which people understand what they are contributing to.
Listening to the generated music fragments in Amsterdam, I kept returning to the thought that AI becomes a problem when there is no dialogue. Most of the current anxiety around large models stems from their scale and opacity. They are trained on ‘everything’ and answer questions about anything, yet they rarely disclose what specific texts or sounds shaped a given output. They present their responses as neutral, even when their behaviour reflects the biases and omissions of those who built and trained them. In that setting, dialogue collapses. You can type into a chat box, but you cannot meaningfully address the model’s underlying assumptions or its training corpus. You are talking to a wall of averages.
Herndon and Dryhurst’s work suggests a different configuration. In Starmirror, the model does not arrive as a finished authority. It is, visibly, a work in progress – a system that is still being trained, in public, by identifiable people whose names and locations you could in principle learn. You can hear the training data in its raw form before listening to the model’s remix. You can compare, and you can decide to participate or not.
For me, this is not some naïve fantasy of “good AI”, but a reminder that AI is not a monolith. There will be gigantic, closed, general-purpose models, and there will be smaller, situated systems like the ones Herndon and Dryhurst build with choirs. I guess the question is not which one will ‘win’, but how we learn to inhabit both without giving up our ability to converse, negotiate, reflect, and then possibly refuse.

So what, then, is the extent of the extended reality we live in?
After Herndon’s talk, my sense is that the answer is twofold. On the one hand, our reality has undeniably expanded. Artists like her can feed decades’ worth of sounds, demos and recordings (of choral sessions) into a model and coax out new works that none of the original contributors could have predicted. A choir recorded in a small town in the UK can resonate, months later, inside a Berlin exhibition space, surprisingly braided together with Hildegard von Bingen’s medieval chants and even GPU fan noise.
On the other hand, this expansion comes with an unwanted but very real narrowing pressure: the tendency to collapse everything into a single, frictionless feed; a ‘large everything model’ that spits out plausible content for any prompt, with no memory of the specific human arrangements that made it possible. That is the version of extended reality that feels most suffocating to me, and I suspect to many others: endless outputs, with very little awareness of or discussion about the actual inputs.
Herndon and Dryhurst’s work pushes in the opposite direction. Their practice insists – and demonstrates – that extended reality should be polyphonic rather than flattened; built from many partial models, grounded in substance. The practical takeaway, for anyone working with AI, would then be: talk to the machine, and listen to what it is echoing back. Feed its outputs into your own critical faculties. Ask where the training material comes from. Consider what kind of ‘songbook’ you are writing for your models, and who is invited to sing from it (metaphorically, though in this case quite literally).
You don’t accept everything a colleague, a friend or a neighbor tells you without question, let alone a stranger. The same should hold for models. The AI you are putting your questions to is, in effect, a stranger, and its answers deserve the same scrutiny. Skepticism is there to guide you, to safeguard quality, and artists like Holly Herndon and Mat Dryhurst offer the handles, guidelines, and ways of thinking to better understand (the scope of) what you are dealing with. In that light, what Starmirror proposes is that AI can be treated as a coordination tool for collective intelligence, rather than a replacement for it. A mirror for many stars with all their different spectra, not a single blinding sun.

In 2023, Hartwig Art Foundation embarked on a five-year research journey exploring how virtual technologies like AI, VR, AR, game engines, blockchain, metaverse tech and more are reshaping the cultural landscape and how artists and cultural institutions can critically engage with these developments. The aim of this journey is to bring together a community of practice – artists, technologists, and cultural institutions – through research, workshops, and events.