Res·Cog

Clarity on building thinking things,
by Gareth Price, CTO @ CorralData.

A 1960s Italian Novelist Who Predicted LLMs, the Death of the Author, and the Trap That Follows

In a lecture that predates the internet by decades, Italo Calvino described a literature machine that would render the author obsolete and shift the full weight of meaning onto the reader. He also described exactly how it would fail — not by producing bad writing, but by producing so much plausible text that readers stop trying to think through it.

Before the author existed, there was the storyteller of the tribe. He sat by the fire and combined the same handful of elements — the jaguar, the coyote, the father, the son, the forbidden thing, the punishment — into every possible arrangement. He did not own these stories. Nobody did. Occasionally, one arrangement would land on something terrifying and true, and the tribe would have a new myth. The storyteller was not a genius. He was a machine.

This is not my description. It belongs to Italo Calvino, in a 1967 lecture called “Cybernetics and Ghosts,”1 delivered half a century before a large language model wrote its first sentence2. Calvino argued not that machines might someday replace writers, but that writers had always been machines — combinatorial engines processing finite elements — and that once we accepted this, the weight of literature would shift from writing to reading. “The decisive moment of literary life,” he wrote, “will be that of reading.”

We have arrived at that moment, and the debate over AI-generated text is stuck in an author-shaped hole. Can machines create? Do they understand? Who owns the output? Calvino thought through these questions in 1967: the author is a combinatorial process, the personality on the book jacket is a byproduct, and the “genius” that Romantic aesthetics celebrated was pattern-matching that did not know its own name. He was blunt about where this led:

And so the author vanishes — that spoiled child of ignorance — to give place to a more thoughtful person, a person who will know that the author is a machine, and will know how this machine works.

Large language models have realised this vision. The debate over whether the text belongs to the training data’s original authors, the person who prompted, or the model itself is real and consequential — but it is a debate about ownership, not about how meaning gets made. The question this essay is concerned with is downstream: not who produced the text, but what it becomes in the mind of the reader.

When someone sits with an LLM and iterates through a dozen prompts — selects one output, recombines it with another, adjusts, rejects, steers — what are they doing? The current vocabulary calls this “prompt engineering,” which sounds like something adjacent to programming. Calvino would have recognised it as reading in its most active and ancient form: the listener by the fire who, from the stream of permutations, seizes the one that detonates against something in her own experience. The tribal listener was never passive. She was selecting, interpreting, mythologising. She was the reason the stories mattered.

This is what Calvino was describing, decades before it arrived: there is no author any more, only readers. The combinatorial engine — the thing that arranges words and concepts into every possible pattern — is now in the reader’s hands directly. You need not wait for a writer to run the permutations and hand you a finished text. You run them yourself — the jaguar, the coyote, the quarterly earnings report, the contract clause, the half-formed hypothesis — searching for the arrangement that clarifies something. Calvino’s colleague Raymond Queneau built a book in 1961, Cent Mille Milliards de poèmes, that was a machine for reader-operated sonnet generation.3 It was a prototype for something that took sixty years and several billion dollars in compute to arrive at scale.

Calvino understood that the meaning would not come from the machine. It would come from the spark between the machine’s output and the reader’s experience:

The literature machine can perform all the permutations possible on a given material, but the poetic result will be the particular effect of one of these permutations on a man endowed with a consciousness and an unconscious, that is, an empirical and historical man. It will be the shock that occurs only if the writing machine is surrounded by the hidden ghosts of the individual and of his society.

But Calvino embedded a warning in the essay, borrowed from the German poet Hans Magnus Enzensberger. Combinatorial play can go one of two directions. It can function as a challenge — a labyrinth the reader enters to reconstruct its plan, to dissolve its power, to understand something about the world that was previously hidden. Or it can collapse into something else entirely. As Enzensberger put it:

The moment a topological structure appears as a metaphysical structure the game loses its dialectical balance, and literature turns into a means of demonstrating that the world is essentially impenetrable, that any communication is impossible. The labyrinth thus ceases to be a challenge to human intelligence and establishes itself as a facsimile of the world and of society.

Calvino drew the lesson plainly: “The game can work as a challenge to understand the world or as a dissuasion from understanding it.” And, crucially: “on this score the spirit in which one reads is decisive.”

This is the fork, and the stakes are not literary. Every person interacting with an LLM is a reader in Calvino’s sense, performing iterations on language at a scale previously reserved for authors. The question is whether this new readership will be generative — using these tools to map the problem, to find the point where the model’s output does not coincide with reality and then work from there — or merely consumptive, where the sheer volume and plausibility of the output simulates understanding without producing any. The facsimile replaces the world instead of illuminating it.

I have spent the last three years reviewing thousands of user sessions with LLM-based tools and talking daily to the people who rely on them. The users who get value are not the ones who accept the first output. They are the ones who read critically — who notice when a query misframes a question, who iterate until the result matches something they half-knew but had not yet articulated. They are doing what Calvino described: iterative play that surfaces hidden patterns for the engaged reader to find. The users who treat the tool as a vending machine — prompt in, answer out, move on — get plausible-looking results that often misrepresent their data. The labyrinth as decoration.

The difference is not in the technology; it is in the reader’s intention. This distinction has practical consequences. The current AI debate — fixated on old concepts of authorship, on whether the output counts as real writing — directs its energy at a problem that is already behind us, while the one ahead of us goes unnamed. We do not need better models as urgently as we need better readers — people trained to treat LLM output as raw material to be interrogated, not finished product to be consumed.

Calvino ended his essay with an image from his own fiction: Edmond Dantès, imprisoned in the Château d’If, trying to construct in his mind a perfect fortress from which no escape is possible:

If I succeed in mentally constructing a fortress from which it is impossible to escape, this imagined fortress either will be the same as the real one — and in this case it is certain we shall never escape from here, but at least we will achieve the serenity of knowing we are here because we could be nowhere else — or it will be a fortress from which escape is even more impossible than from here — which would be a sign that here an opportunity of escape exists: we have only to identify the point where the imagined fortress does not coincide with the real one and then find it.

That is the work of the reader now. The LLM builds the imagined fortress — the plausible model, the fluent approximation. The reader’s job is to find where it does not coincide with the real one. That is where the escape is. That is where understanding begins.


References

  1. Calvino, I. (1986). Cybernetics and ghosts. In The uses of literature (P. Creagh, Trans.). Harcourt Brace. https://www.are.na/block/3808967 (Original lecture delivered 1967) 

  2. The transformer architecture that underlies modern LLMs was introduced in Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems (Vol. 30). https://arxiv.org/abs/1706.03762. The first generative pre-trained transformer followed a year later: Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf 

  3. Queneau, R. (1961). Cent mille milliards de poèmes. Gallimard. https://www.bevrowe.info/Internet/Queneau/Queneau.html (Online interactive version)