Res·Cog

Clarity on building thinking things,
by Gareth Price, CTO @ CorralData.

The Great Flattening

Digital distribution collapsed the temporal hierarchy that gave culture its structure — everything competes with everything, and nothing ever goes out of print. Now AI is producing culture for itself, and what leaks back into the human world arrives as exhaust from an engine that no longer needs passengers.

In 2002, David Bowie told the New York Times that music was about to become “like running water or electricity.”1 He was not being wistful. He was being structural. “The absolute transformation of everything that we ever thought about music will take place within 10 years,” he said, “and nothing is going to be able to stop it.” Three years earlier, on BBC Newsnight, he had put it more bluntly to Jeremy Paxman: the internet was “an alien life form.”2 Paxman thought he was being dramatic. He was being precise. What Bowie saw — before Spotify existed, before streaming, before the algorithm — was that digital distribution would not simply make music cheaper. It would strip music of scarcity, of event, of the quality of being from a particular time. It would flatten the entire catalogue into a single, simultaneous, browsable surface. He was right about music. But the flattening he described was only the first phase. The second — now underway — is that AI has begun producing culture not for human audiences but for other AI systems, accelerating it past human tempo and throwing off strange artifacts into our world like exhaust from an engine that no longer needs passengers.

The medium whose message is the abolition of sequence

Marshall McLuhan argued in 1964 that the content of any medium is irrelevant to its real effect. “The medium is the message,” he wrote in Understanding Media, meaning that the personal and social consequences of any technology result from the new scale it introduces — not from what anyone does with it.3 Print linearised thought: it produced individualism, nationalism, and what McLuhan called “uniformity, continuity, and linearity.”4 Television reversed the process, creating mosaic perception — simultaneous, participatory, tribalising.5 Each medium restructured how humans thought, not what they thought about.

So what is the message of AI?

If print’s message was linearity and television’s was simultaneity, AI’s message is the abolition of sequence itself. A large language model does not know what came first. It does not distinguish the canonical from the derivative, the original from the pastiche, the influential from the influenced. It processes all of culture as equally weighted input. That indifference is not a bug in the system. It is the system’s message — in McLuhan’s sense, the thing that reshapes perception before anyone notices the content.

AI takes his framework one step further: it is a medium that does not merely connect all culture but processes it — ingests it, recombines it, and generates new culture from it — without any sense of what came first, what mattered, or what was said in response to what. McLuhan warned that every new medium produces a protective numbness — a “narcosis” that prevents us from perceiving what the technology is doing to us.6 The flattening is that numbness operating at civilisational scale.

The map that ate the territory

Jean Baudrillard opened Simulacra and Simulation (1981) with a fable borrowed from Borges: cartographers of a dying empire draw a map so detailed it covers the territory entirely.7 In Borges’s version, the empire decays and the map frays. In Baudrillard’s reversal, the map survives and the territory disappears. “It is the map that precedes the territory,” he wrote — the copy no longer refers to an original because there is no original left to refer to.7 He called this condition hyperreality: “the generation by models of a real without origin or reality.”7 And he laid out four stages of how images get there: first reflecting reality faithfully, then distorting it, then concealing that reality has vanished, and finally bearing “no relation to any reality whatever” — a pure simulacrum, a copy that has forgotten it was ever copying something.8

The streaming catalogue is a third-order simulacrum — concealing the absence of the musical culture it claims to represent. Spotify’s library does not reflect musical culture. It replaces it. There is no original context, no chronology, no friction between the new and the canonical. The recommendation algorithm makes it worse: it does not know, and does not care, that one song came first. It has no concept of influence, lineage, or era. It optimises for engagement, which means it treats a 1977 Bowie track and a 2024 ambient loop as interchangeable inputs to the same function. McLuhan’s message and Baudrillard’s simulacrum converge here: the medium’s structural effect (flattening) produces the hyperreal (a catalogue of copies with no original to refer back to).

The data bears this out. Luminate reports that catalogue music — releases more than 18 months old — accounted for 72.6% of US music consumption in 2023, up from 69.8% in 2021.9 The 200 most popular new tracks represent less than 5% of total streams.10 Ted Gioia, writing on The Honest Broker, put it plainly: “All growth in the market is coming from old songs.”11 This is not nostalgia. It is the structural consequence of a system where everything is equally available and nothing ever goes out of print. No IP ever dies. Every film, song, book, and game remains in permanent circulation. The archive is not a library. It is an arena — and new work enters it with no structural advantage over anything that came before.

Running water

Bowie’s metaphor was more precise than he probably intended. Running water is a utility. You do not think about where it comes from. You do not distinguish Tuesday’s water from Thursday’s. It has no provenance, no event-ness, no scarcity. It flows.

Culture used to have a temporal hierarchy — what is current, what is classic, what is forgotten — maintained by the friction of physical media. A record went out of print. A film left theatres. A book’s last copy was shelved in a library. Digital distribution removed every one of those frictions. A teenager’s playlist now holds 2024 and 1977 and 2003 simultaneously, with no signal that these represent different eras. Spotify’s own engineering team confirms that algorithmic playlists draw from a user’s “full arc of taste,” mixing decades by design.12 Time has flattened into a single browsable surface. The hierarchy that separated the current from the classic from the forgotten has collapsed, and nothing has replaced it.

This is Bowie’s running water, arrived. Music is a utility. So is film, so is television, so is most published writing. You turn the tap. It flows. You do not ask where it came from or when.

The fourth order: when AI makes culture for itself

But the flattening was only the first phase.

Baudrillard’s four stages of the image describe a progression: reflection, distortion, concealment, and finally pure simulation — images bearing no relation to any reality at all. We are entering the fourth. Even before generative AI, Spotify had been quietly commissioning functional music — ambient, lo-fi, sleep tracks — attributed to fabricated artist names under its Perfect Fit Content program. Swedish newspaper Dagens Nyheter found that roughly 20 musicians had produced tracks for over 500 fake artist names, with 495 placed on Spotify’s curated playlists and 50 accumulating over 520 million streams.13 These were cheaply produced human compositions, but they established the template: music optimised for algorithmic placement rather than human listening, with no real artist behind the name. AI scaled this template beyond recognition. Deezer, the only streaming platform publicly reporting AI upload rates, found that fully AI-generated tracks rose from 10% of new uploads in January 2025 to 34% by December — 50,000 AI-generated songs per day.14 Spotify removed 75 million spam tracks in the 12 months before September 2025, equivalent to 43% of its legitimate catalogue.15

Music is the canary. Across the wider web, automated traffic now accounts for 51% of all activity, surpassing human traffic for the first time.16 Ahrefs analysed 900,000 newly published web pages and found 74% contained AI-generated content.17 The infrastructure of a culture producing for itself is already in place. AI systems generate content trained on content generated by other AI systems. Ilia Shumailov and colleagues at Oxford demonstrated in Nature that this recursive loop causes what they call “model collapse”: the tails of the original distribution disappear, diversity degrades, and after nine generations, a model trained to describe medieval architecture was producing lists of jackrabbits.18 Jathan Sadowski, a sociologist at Monash University, coined a sharper term in 2023: “Habsburg AI” — “a system so heavily trained on the outputs of other generative AI’s that it becomes an inbred mutant.”19
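The mechanism behind model collapse can be made concrete with a toy simulation. This is a minimal sketch, not the setup from Shumailov et al.: the “model” here is nothing more than empirical token frequencies estimated from a finite sample, and the vocabulary size, Zipf weights, and corpus size are illustrative assumptions. Each generation trains on the previous generation’s output; any token the model never saw gets probability zero and can never reappear, so the tail of the distribution erodes irreversibly.

```python
import random
from collections import Counter

random.seed(42)

VOCAB = list(range(1000))                  # 1,000 token types (assumed)
zipf = [1 / (rank + 1) for rank in VOCAB]  # heavy tail of rare tokens

def sample_corpus(weights, n=5000):
    """Draw a corpus of n tokens from the given frequency weights."""
    return random.choices(VOCAB, weights=weights, k=n)

corpus = sample_corpus(zipf)               # generation 0: "human" data
distinct_per_gen = []

for gen in range(10):
    counts = Counter(corpus)
    distinct_per_gen.append(len(counts))
    # The next "model" is just the empirical frequencies of this corpus.
    # A token with zero observed count has zero probability forever, so
    # the distinct-type count can shrink or hold, but never recover.
    corpus = sample_corpus([counts.get(t, 0) for t in VOCAB])

print(distinct_per_gen)
```

The printed list of distinct token types is non-increasing across generations: rare tokens vanish first, diversity degrades, and the output distribution narrows toward its head — the same qualitative dynamic the Nature paper documents at the scale of language models.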

The loop is already closing. GPTZero found 100 hallucinated citations across 53 papers accepted by NeurIPS 2025 — fabricated references that will enter the training corpus of every model trained on that data, propagating invented sources through the system.20 Ahrefs reported that 91.4% of content cited in Google’s AI Overviews is at least partially AI-generated.21 AI is citing AI. The feedback loop no longer reliably passes through human verification at all.

Trevor Paglen saw this coming in 2017, when he exhibited “A Study of Invisible Images” at Metro Pictures in New York.22 He trained adversarial networks not on standard datasets but on Freudian symbols, omens, and predators — producing images that were never intended for human eyes. “Something dramatic has happened to the world of images,” Paglen wrote. “They have become detached from human eyes. Our machines have learned to see without us.”23 He called this world of machine-to-machine image-making “invisible images” and considered the shift “more significant than the invention of photography.”24

The strongest objection to this framing is that AI does not “make culture” in any real sense. It optimises objective functions. Calling the output “culture” anthropomorphises what is fundamentally pattern-matching — and the anthropomorphisation does rhetorical work the argument has not earned. This is a serious objection, and it is right about the mechanism: nothing in a language model intends to produce culture, experiences an aesthetic, or has an audience in mind. But the objection confuses intent with effect. A river does not intend to erode a canyon. The canyon is there regardless. When AI-generated tracks accumulate millions of streams on Spotify playlists, when AI-generated text enters the training data of other models and propagates fabricated citations through the academic literature, when AI-generated images become the reference set for other image generators — the output functions culturally whether or not anything intended it to. The question is not whether the machine means it. The question is whether the effect is distinguishable from what we would call culture if a human had produced it. Increasingly, it is not.

Paglen was making art to illustrate this point. His thesis is now the default condition of the internet. AI systems increasingly produce not for human consumption but as input for other AI systems — for search indexing, for training data, for algorithmic placement. When AI makes content for AI, the human audience becomes incidental. What leaks back into human culture arrives as exhaust: music that optimises for playlist placement rather than listening, synthetic images that reference other synthetic images, text that answers questions no human asked. This is Baudrillard’s fourth order made literal — simulation that has forgotten it was ever simulating something.

What wins when nothing is new

The question for anyone who makes things is what to do on the other side of the flattening.

If novelty cannot win on novelty alone — because the audience has the entire archive at their fingertips and an AI-generated flood arriving at 50,000 tracks per day — then what does win? The optimistic answer is presence: liveness, community, experience, the things that cannot be flattened because they exist in time and space. Bowie predicted this too. “You’d better be prepared for doing a lot of touring,” he said in 2002, “because that’s really the only unique situation that’s going to be left.”1

The pessimistic answer is that nothing wins. Culture becomes Bowie’s running water — an undifferentiated flow in which no work is more significant than any other, and the concept of significance itself dissolves. The fourth-order question sharpens the problem: makers are not just competing with the archive and with each other. They are competing with an engine that no longer needs passengers — a machine culture that produces at inhuman speed, does not need an audience to justify its output, and whose exhaust drifts into the human world looking, at a glance, like the real thing.

The tap is open. The water flows. The question of where it comes from has stopped feeling like a question at all.


  1. Pareles, J. (2002, June 9). David Bowie, 21st-Century Entrepreneur. The New York Times, Arts & Leisure, p. 30.

  2. Bowie, D. (1999, December 3). Interview with Jeremy Paxman. BBC Newsnight. Video archived by BBC. 

  3. McLuhan, M. (1964). Understanding Media: The Extensions of Man. McGraw-Hill, p. 7. 

  4. McLuhan, 1964, pp. 14–16. 

  5. McLuhan, 1964, Chapter 31 (“Television”). 

  6. McLuhan, 1964, pp. 41–47. Chapter 4: “The Gadget Lover: Narcissus as Narcosis.” 

  7. Baudrillard, J. (1981/1994). Simulacra and Simulation (S. F. Glaser, Trans.). University of Michigan Press, p. 1. The Borges fable, “the map that precedes the territory,” and the definition of hyperreality all appear on the opening page of “The Precession of Simulacra.”

  8. Baudrillard, 1981/1994, p. 6. 

  9. Luminate. (2024). A Look at Trends in Catalog Streaming. Catalog defined as releases more than 18 months old. 

  10. MRC Data, cited in Gioia, T. (2022, January 21). Is Old Music Killing New Music? The Honest Broker. https://www.honest-broker.com/p/is-old-music-killing-new-music 

  11. Gioia, 2022. 

  12. Spotify Engineering. (2023, April). Humans + Machines: A Look Behind Spotify’s Algotorial Playlists. https://engineering.atspotify.com/2023/04/humans-machines-a-look-behind-spotifys-algotorial-playlists 

  13. Dagens Nyheter. (2022). Investigation into Spotify’s Perfect Fit Content program. Also documented in Wikipedia: Controversy over fake artists on Spotify. Figures: ~20 musicians, 500+ fake artist names, 495 placed on curated playlists, 50 artists with cumulative 520 million streams.

  14. Deezer Newsroom. AI upload figures: 10% (January 2025), 18% (April 2025), 28% (September 2025), 34%/50,000 tracks per day (November–December 2025). https://newsroom-deezer.com/2025/04/deezer-reveals-18-of-all-new-music-uploaded-to-streaming-is-fully-ai-generated/ 

  15. Spotify Newsroom. (2025, September 25). Spotify Strengthens AI Protections. https://newsroom.spotify.com/2025-09-25/spotify-strengthens-ai-protections/. Also reported in Music Business Worldwide. 

  16. Imperva/Thales. (2025, April 15). 2025 Bad Bot Report. Data covers calendar year 2024. https://cpl.thalesgroup.com/about-us/newsroom/2025-imperva-bad-bot-report-ai-internet-traffic 

  17. Ahrefs. (2025). 74% of New Webpages Include AI Content (Study of 900k Pages). https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/ 

  18. Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y 

  19. Sadowski, J. [@jathansadowski]. (2023, February 13). Habsburg AI [Tweet]. Twitter/X. https://x.com/jathansadowski/status/1625245803211272194 

  20. GPTZero. (2026, January 21). GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers. https://gptzero.me/news/neurips/ 

  21. Ahrefs. (2025). AI Overviews Cite AI-Generated Content More Than Human Writing. https://ahrefs.com/blog/ai-overviews-cite-ai-generated-content-more-than-human-writing/ 

  22. Paglen, T. (2017). A Study of Invisible Images. Metro Pictures, New York. September 8–October 21, 2017. 

  23. Paglen, T. (2017). Quoted in LensCulture, “An Urgent Look at How Artificial Intelligence Will See the World.” https://www.lensculture.com/articles/trevor-paglen-an-urgent-look-at-how-artificial-intelligence-will-see-the-world 

  24. Paglen, T. (2017). Quoted in Brooklyn Rail, October 2017. https://brooklynrail.org/2017/10/artseen/TREVOR-PAGLEN-A-Study-of-Invisible-Things/