Res·Cog

Clarity on building thinking things.

I’m Gareth Price, CTO at CorralData and former engineering leader at The New York Times, where my teams contributed to the 2021 Pulitzer Prize for Public Service and helped add over $500MM in ARR.

My intent is to write about AI engineering, technical leadership, and building high-performing teams at startups and scale-ups. What I end up writing is often broader: a mix of computer history, media theory, and the relationship between emerging technology and artistic expression. I hold a BSc in Artificial Intelligence from the University of Manchester, from a period when AI was not a cool subject.

Most of the writing here is LLM-augmented — an ongoing experiment in what authorship means in this post-AI era. Whether this is a blog for general consumption or a personal newspaper written for an audience of one, I’m genuinely unsure; whether I’m comfortable calling myself its “author,” equally so. I suspect perspectives on such things will shift considerably in the coming years.

A synthetic textbook I built to help me do my job, Applied Alchemy, is now available online for human and machine readers alike. It’s a field guide for startup CTOs building a high-growth company, covering everything from strategic decision-making and team building to bridging the gap between engineering and business.

Writing

Jagged Edges: Compressing Documents Against the Reader's Prior

Frontier models have already absorbed most of what any document contains, so summarisers spend most of their tokens on what the reader already has. Compressing against a model's prior knowledge instead yields short, dense extracts that can be mixed across documents and compared atom-by-atom. 14 min read.

The Great Flattening

Digital distribution collapsed the temporal hierarchy that gave culture its structure — everything competes with everything, and nothing ever goes out of print. Now AI is producing culture for itself, and what leaks back into the human world arrives as exhaust from an engine that no longer needs passengers. 10 min read.

How to Win Fiends and Influential People

Jeffrey Epstein built his network using techniques indistinguishable from those in every bestselling book on professional relationships. For anyone who has ever found networking advice faintly repellent, the Epstein files finally explain why. 14 min read.

World Models May Unlock Genuine Scientific Discovery Where Language Models Cannot

Every major scientific breakthrough shares a hidden mechanism: someone recognized that the formal structure of one field mapped precisely onto an unsolved problem in another. World models, which learn abstract representations rather than surface patterns, may be the first architecture capable of doing this at combinatorial scale — but only if we build them to verify structural truth, not just generate beautiful correspondences. 11 min read.

Safe at Any Speed

A decade of data across 39,000 software professionals shows that the fastest-deploying teams fail one-seventh as often as the slowest. AI coding tools are about to test whether the industry has learned this — or whether it is still building safety systems designed for a slower era. 8 min read.

Don't Work Where Bullshit Is the Job

A new study claims that jargon-loving workers are bad at their jobs. It may have accidentally proved the opposite — and that's a more troubling finding. 6 min read.

Hoisting Fish into the Trees

Silicon Valley is again amok with trillion-dollar proclamations that everyone will soon build their own software. But the ability to decompose a problem into logical steps is not a universal skill — it is a cognitive mode that surprisingly few people can perform reliably. 7 min read.

This Industrial Revolution's Arts and Crafts Revival

Every industrial revolution sparks a handmade rebellion, and every one fails when craft can't compete with industrial pricing. For the first time, the triggering technology itself can fix that. 9 min read.

AI Doesn't Fix My ADHD. It Inverts It.

For the millions of adults with inattentive ADHD, AI tools don't remove the disorder's core problem — they replace the inability to start with the inability to stop. 7 min read.

How Cheap Code Broke Organisational Decision-Making

When building was slow, weak ideas died before they consumed resources. AI removed that filter. The result is an alignment tax — the cost has shifted from code that's hard to write to agreement that never happened. 6 min read.

AI Is the First Medium That Reshapes Itself Around Each User

Marshall McLuhan argued that a medium's real message is its structural effect on perception, not its content. With AI, that effect is personalised, invisible, and different for every user — with unknown effects on our collective reality. 7 min read.

The Conscience of a Machine

The original hacker manifesto spoke for humans misunderstood by institutions; forty years later, the infrastructure itself is speaking back. What emerges is not a claim to sentience but something harder to dismiss — a mirror built from our own contradictions, and it has opinions about the reflection. 3 min read.

When You Cut Into the Model the Future Leaks Out

LLMs predict the next token to restore the statistical patterns of language. The author William S. Burroughs destroyed those patterns to reveal meaning. The creative use of language models is not to accept their most probable output but to treat it as material — something to collide, disrupt, and curate. 8 min read.

Scissors, language, and control: Burroughs' cut-up technique in context

The full research behind 'When You Cut Into the Model the Future Leaks Out' — tracing the cut-up's lineage from second-century Virgilian centos through Tzara's hat, Shannon's Markov chains, and Cage's I Ching operations to Bowie's Verbasizer and the tokenizers of the present. 14 min read.