Res·Cog

Clarity on building thinking things,
by Gareth Price, CTO @ CorralData.

Hoisting Fish into the Trees

Silicon Valley is once again awash with trillion-dollar proclamations that everyone will soon build their own software. But the ability to decompose a problem into logical steps is not a universal skill — it is a cognitive mode that surprisingly few people can perform reliably.

You are cooking dinner for friends. Three dishes, one oven, guests arriving at seven. How do you plan the evening?

If your mind immediately began sequencing — the roast takes the longest so it goes in first; the side dish can bake while the roast rests; the salad needs no heat so it can be prepped during downtime; work backwards from seven to set start times — you just performed what cognitive scientists call formal operational thought. You held multiple variables in mind, identified dependencies, reasoned about constraints, and imposed a logical structure on an ambiguous problem. A few seconds of effort, at most. If you work in tech, a Gantt chart may already have made an appearance.
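That backwards-from-seven reasoning is, in miniature, a scheduling computation. The sketch below makes the dependencies explicit; the dish durations, the 30-minute rest, and the exact times are placeholder assumptions, not anything prescribed by the text:

```python
from datetime import datetime, timedelta

# Assumed numbers for illustration only.
guests = datetime(2024, 6, 1, 19, 0)   # work backwards from seven
rest = timedelta(minutes=30)           # the roast rests out of the oven

roast_out   = guests - rest                       # roast must be resting by seven
roast_in    = roast_out - timedelta(minutes=120)  # longest job, so it goes in first
side_in     = roast_out                           # side bakes while the roast rests
side_out    = side_in + timedelta(minutes=30)
salad_start = guests - timedelta(minutes=15)      # no heat needed: prep in the gap

for label, t in [("roast in", roast_in), ("roast out / side in", roast_out),
                 ("salad prep", salad_start), ("side out", side_out)]:
    print(f"{t:%H:%M}  {label}")
```

Running it prints the plan in order: roast in at 16:30, roast out and side in at 18:30, salad prep at 18:45, side out at 19:00. The point is not the code — it is that the variables, dependencies, and constraints all had to be identified before a single line could be written.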

What percentage of the adult population do you think could do the same? Hold that estimate.

A few years ago, a CEO managing over five billion dollars in assets sat down in front of an early prototype of an AI tool I had built — a system designed to let non-technical people query data in plain English. They typed a four-character internal code, a shorthand that meant something to them and no one else. The system did not recognise it. They left the room with some choice words about how we had wasted their time. We did not get the contract.[1]

That episode is not a story about a bad demo. It points to something deeper.

Andrej Karpathy, formerly of OpenAI and Tesla, coined the term “vibe coding” — describe the vibes, let the AI write the code.[2] Jensen Huang, CEO of NVIDIA, told an audience in early 2024 that nobody should have to learn to program.[3] Investors fund startups on the explicit thesis that technical talent is about to become cheap and abundant. Beneath the optimism sits a single assumption: that non-technical people can specify what they want with enough precision for a machine to build it. The cognitive science, the historical record, and the evidence from every company that has put generations of software in front of real users all say they cannot. The model that technical people carry of how most people think is broken — and a trillion dollars is riding on it.

Two-thirds of adults reason concretely, not abstractly

Remember the dinner question? The thinking it required — abstract, systematic, hypothetical — is what the developmental psychologist Jean Piaget called formal operational thought, the fourth and final stage in his influential model of how human cognition develops from childhood to adulthood. The first two stages — sensorimotor and preoperational — describe how infants and young children learn to perceive and represent the world. The two that matter for this argument are the stages most adults occupy: concrete operational and formal operational. Roughly two-thirds of adults remain at the concrete operational stage — only one-third ever reach formal operations at all (Dasen, 1994), and 40 to 60 per cent of college students, pre-selected for academic aptitude, fail at formal operational tasks (Keating, 1979).[4][5]

Concrete operational thinkers reason logically about things they can see, touch, and directly experience. What they cannot reliably do is hold multiple hypothetical states in mind, systematically isolate variables, or decompose a novel problem into an ordered sequence of steps when no template exists.

The distinction is not a spectrum. It is a qualitative difference in how people approach problems. In Piaget’s classic pendulum task, subjects are given a string, a set of weights, and a bar, and asked to determine what controls the speed of the swing. Four variables are in play: string length, weight, release height, and force of push. A formal operational thinker isolates each variable and tests them independently — changing one while holding the others constant — and identifies that only string length matters. A concrete operational thinker changes the weight and the string length at the same time, sees that something changed, and draws the wrong conclusion. The latter approach is not stupidity. It is a different cognitive mode, one that works well for familiar, tangible problems and fails for abstract, novel ones.

Design for a mind you have never inhabited

If you are a software engineer, those numbers probably seem implausible. That is because you are a formal operational thinker who works alongside other formal operational thinkers, was educated alongside them, and socialises with them. Your cognitive mode is so natural that you assume it is universal.

“Kindly let me help you or you will drown,” said the monkey, putting the fish safely up a tree.

In this parable, popularised by the writer-philosopher Alan Watts,[6] the fish dies, no doubt in a state of considerable confusion. Tech companies are the monkey. The software they build is, often, the tree. They sit by the river scooping up fish and wondering why the fish keep trying to swim away. The industry’s error is not malice or stupidity. It is projection: the well-meaning assumption that everyone’s mind works like theirs, when most of the world is fish.

The opening weeks of a computer science degree draw a packed lecture hall that dwindles each week and then stabilises — the students who remain are the ones who can connect the logical abstractions of basic programming to the concrete problems those abstractions solve. Many intelligent, motivated people simply cannot make that connection. I studied AI at the University of Manchester and watched this first-hand. A multi-institutional study across 13 countries found an average failure rate of 28 per cent in introductory programming courses — among self-selected students who chose to study computing.[7]

I run a company that builds AI-powered business intelligence for non-technical users, and the same distribution appears in our customer base. A successful executive once assured us their Excel files all followed the same format. They did not. Column headers shifted between sheets. Date formats varied. Units appeared and disappeared. They were not being careless — they genuinely could not see the structural inconsistencies, because structural consistency is not how they process information. They are excellent at what they do, which requires pattern recognition, relationship management, and institutional knowledge no engineer could replicate. Structural decomposition is simply not part of their cognitive toolkit.

The bottleneck was never writing code

The vibe coding thesis conflates writing code with engineering software. Writing code translates a structured specification into machine instructions — and that act of translation is being commoditised. But the specification itself must still be elicited, and most concrete operational thinkers cannot produce one. Engineering software takes an ambiguous, contradictory tangle of human needs and produces the most coherent system that the constraints will allow — knowing which contradictions to resolve, which to leave flexible, and which to surface back to the stakeholder as their problem, not yours. The code records the output of that thinking. Automating the notation does not automate the thought.

The strongest objection, articulated most clearly by Arvind Narayanan and Sayash Kapoor in AI Snake Oil, is that every wave of abstraction — COBOL (1959), SQL (1974), HyperCard (1987), Visual Basic (1991), WordPress (2003), no-code (c. 2018), AI coding tools (2023–present) — has been met with the same two predictions: “that’s not real programming” and “it’ll never handle complex use cases.” Both were always partially true yet always ultimately beside the point. Each wave made programming accessible to more people, and more software was built because the population of builders expanded. But every wave also hit the same ceiling: the point at which the problem’s complexity requires someone capable of abstract thought to take over on behalf of those who cannot think about structure, dependencies, and failure modes. That ceiling is not a temporary limitation of the tooling. It is a permanent feature of the cognitive distribution. The pattern has repeated across the entire history of software — and for considerably longer in engineering at large.[8]

The companies that will win a mass audience are not building for the 25 per cent of users who can specify what they want. They are building products that perform structural thinking on behalf of the user — systems that decompose “how are we doing” into the five analytical sub-questions the user does not know to ask.

Engineers who impose structure on ambiguous problems are about to become more valuable, not less. AI generates an effectively infinite supply of code. The scarce resource is the judgment to determine what the code should do, or not do.

Giving everyone access to powerful tools is not the same as designing tools for the minds that will actually use them. Until the industry grasps that, it will keep hoisting fish into trees — and the transformational promises of AI will remain, for the majority of people, something they never knew they were missing.


References

  1. The anecdotes in this piece are drawn from my experiences building AI products. Details have been altered to protect identities. The product in question has since been redesigned with unstructured thinkers specifically in mind — which is, in a sense, the point. 

  2. Karpathy, A. (2025, February 2). There’s a new kind of coding I call “vibe coding” [Post]. X (formerly Twitter). https://x.com/karpathy/status/1886192184808149383 

  3. Huang, J. (2024, February). Remarks at the World Government Summit, Dubai. Widely reported; see Atherton, M. (2024, February 25). Jensen Huang says kids shouldn’t learn to code — they should leave it up to AI. Tom’s Hardware. https://www.tomshardware.com/tech-industry/artificial-intelligence/jensen-huang-advises-against-learning-to-code-leave-it-up-to-ai 

  4. Inhelder, B., & Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence (A. Parsons & S. Milgram, Trans.). Basic Books. The pendulum task and the concrete-to-formal operational distinction are developed in Chapter 4. For the adult distribution, see Kuhn, D., Langer, J., Kohlberg, L., & Haan, N. S. (1977). The development of formal operations in logical and moral judgment. Genetic Psychology Monographs, 95(1), 97–188. 

  5. Dasen, P. (1994). Culture and cognitive development from a Piagetian perspective. In W. J. Lonner & R. S. Malpass (Eds.), Psychology and Culture. Allyn and Bacon. Keating, D. P. (1979). Adolescent thinking. In J. Adelson (Ed.), Handbook of Adolescent Psychology. Wiley. 

  6. Watts, A. W. (2016). Mind over mind [Audio recording]. Apple Books. https://books.apple.com/us/book/mind-over-mind/id1163933117 (Original work recorded c. 1971–1972). Presumably in our extension of this metaphor there is a profit incentive for the monkey when the fish make it into the tree, but like most early-stage startups we will leave the revenue model to the imagination. 

  7. Bennedsen, J., & Caspersen, M. E. (2019). Failure rates in introductory programming — 12 years later. ACM Inroads, 10(2), 30–36. https://doi.org/10.1145/3324888. The 28 per cent figure is the 2017 global average. 

  8. Vitruvius argued in 30 BC that the architect’s job was to absorb complexity on behalf of inhabitants who could not be expected to reason about structure, materials, and climate themselves (Vitruvius, trans. 1914, Book I, Ch. 1). The job description has not changed; only the materials have. Vitruvius, P. (c. 30 BC/1914). The Ten Books on Architecture (M. H. Morgan, Trans.). Harvard University Press. https://www.gutenberg.org/ebooks/20239