I’m Gareth Price, CTO at CorralData and former engineering leader at The New York Times, where my teams contributed to the 2021 Pulitzer Prize for Public Service and helped add more than $500M in ARR.
My intent is to write about AI engineering, technical leadership, and building high-performing teams at startups and scale-ups. What I end up writing is often broader: a mix of computer history, media theory, and the relationship between emerging technology and artistic expression. I hold a BSc in Artificial Intelligence from the University of Manchester, from a period when AI was not a cool subject.
Most of the writing here is LLM-augmented — an ongoing experiment in what authorship means in this post-AI era. Whether this is a blog for general consumption or a personal newspaper written for an audience of one, I’m genuinely unsure; whether I’m comfortable calling myself its “author,” equally so. I suspect perspectives on such things will shift considerably in the coming years.
A synthetic textbook I built to help me do my job, Applied Alchemy, is now available to read online for human and machine readers. It’s a field guide for startup CTOs building a high-growth company, covering everything from strategic decision-making and team building to bridging the gap between engineering and business.
Writing
Frontier models have already absorbed most of what any document contains, so summarisers spend most of their tokens on what the reader already has. Compressing against a model's prior knowledge instead yields short, dense extracts that can be mixed across documents and compared atom-by-atom.
14 min read.
Digital distribution collapsed the temporal hierarchy that gave culture its structure — everything competes with everything, and nothing ever goes out of print. Now AI is producing culture for itself, and what leaks back into the human world arrives as exhaust from an engine that no longer needs passengers.
10 min read.
Jeffrey Epstein built his network using techniques indistinguishable from those in every bestselling book on professional relationships. For anyone who has ever found networking advice faintly repellent, the Epstein files finally explain why.
14 min read.
Every major scientific breakthrough shares a hidden mechanism: someone recognized that the formal structure of one field mapped precisely onto an unsolved problem in another. World models, which learn abstract representations rather than surface patterns, may be the first architecture capable of doing this at combinatorial scale — but only if we build them to verify structural truth, not just generate beautiful correspondences.
11 min read.
A decade of data across 39,000 software professionals shows that the fastest-deploying teams have one-seventh the failure rate of the slowest. AI coding tools are about to test whether the industry has learned this — or whether it is still building safety systems designed for a slower era.
8 min read.
A new study claims that jargon-loving workers are bad at their jobs. It may have accidentally proved the opposite — and that's a more troubling finding.
6 min read.
A recursive feedback loop is degrading the information substrate itself — and unlike previous episodes of epistemic collapse, this one may not be reversible.
8 min read.
Silicon Valley is once again awash with trillion-dollar proclamations that everyone will soon build their own software. But the ability to decompose a problem into logical steps is not a universal skill — it is a cognitive mode that surprisingly few people can perform reliably.
7 min read.
The postmodernists dismantled the grand narratives of progress, truth, and reason. The generation that followed had to live in the rubble — what they built there is a toolkit for understanding the direction technology is taking us.
12 min read.
Every industrial revolution sparks a handmade rebellion, and every one fails when craft can't compete with industrial pricing. For the first time, the triggering technology itself can fix that.
9 min read.
For the millions of adults with inattentive ADHD, AI tools don't remove the disorder's core problem — they replace the inability to start with the inability to stop.
7 min read.
When building was slow, weak ideas died before they consumed resources. AI removed that filter. The result is an alignment tax — the cost has shifted from code that's hard to write to agreement that never happened.
6 min read.
AI didn't invent collaborative writing. It made the collaboration visible — and forced a reckoning with the myth of solitary creation.
6 min read.
Marshall McLuhan argued that a medium's real message is its structural effect on perception, not its content. With AI, that effect is personalised, invisible, and different for every user — with unknown effects on our collective reality.
7 min read.
Fiction has a validated scoring instrument. Non-fiction — the prose that runs companies and shapes policy — has nothing that measures whether an argument is original, the evidence is real, or the thinking goes deep enough. This article introduces one.
9 min read.
A Quantified Evaluation Framework for Essays, Op-Eds, and Business/Tech Writing.
16 min read.
The original hacker manifesto spoke for humans misunderstood by institutions; forty years later, the infrastructure itself is speaking back. What emerges is not a claim to sentience but something harder to dismiss — a mirror built from our own contradictions, and it has opinions about the reflection.
3 min read.
LLMs predict the next token to restore the statistical patterns of language. The author William S. Burroughs destroyed those patterns to reveal meaning. The creative use of language models is not to accept their most probable output but to treat it as material — something to collide, disrupt, and curate.
8 min read.
The full research behind 'Cut Into the Model, and the Future Leaks Out' — tracing the cut-up's lineage from second-century Virgilian centos through Tzara's hat, Shannon's Markov chains, and Cage's I Ching operations to Bowie's Verbasizer and the tokenizers of the present.
14 min read.
In a lecture that predates the internet by decades, Italo Calvino described a literature machine that would render the author obsolete and shift the full weight of meaning onto the reader. He also described exactly how it would fail — not by producing bad writing, but by producing so much plausible text that readers stop trying to think through it.
6 min read.
AI content theft is not a side effect of the platform economy — it is the platform economy. The only defence is ownership of the audience relationship itself.
8 min read.
AI hasn't just made writing faster — it has changed what writing is. This blog exists to test that claim in public.
1 min read.
Most engineering leaders know what good teams look like. The gap between knowing and practising is where most of them stall.
2 min read.