Res·Cog

Clarity on building thinking things,
by Gareth Price, CTO @ CorralData.

AI Is the First Medium That Reshapes Itself Around Each User

Marshall McLuhan argued that a medium's real message is its structural effect on perception, not its content. With AI, that effect is personalised, invisible, and different for every user — with unknown effects on our collective reality.

AI is the first medium whose structural effects are personalised to each user. Every previous cognitive technology — the book, the television, the search engine — restructured how humans perceive and think, but each one did it uniformly and held still long enough for us to study it. AI does not hold still. Its structural effect is tailored to each user, invisible to each user, and different for each user. No prior medium has done this.

What does personalisation mean in practice? Current large language models retain conversational context, adjust tone to match the user’s register, learn stated preferences, and — in products like ChatGPT with memory enabled — build persistent models of individual users across sessions. The system does not merely respond. It adapts its response pattern to the individual, and the adaptation is opaque: no user sees the same model behaviour that another user sees. The observable mechanism is context-dependent completion; the structural consequence — the reshaping of what each user encounters as “knowledge” — is what this essay is about.
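
To make the mechanism concrete, here is a minimal sketch of the structural shape only, not any vendor's actual API: the per-user memory store, the build_context helper, and the stubbed complete function are all illustrative assumptions. The point is that two users asking the identical question are completed against different assembled inputs.

```python
# Illustrative sketch of memory-enabled personalisation (hypothetical names, no real API).
# Structural point: the model never completes "the question" in isolation; it completes
# the question embedded in a per-user context that accumulates invisibly across sessions.

persisted_memory = {
    "user_a": ["prefers terse answers", "sceptical of nutrition research"],
    "user_b": ["prefers detailed answers", "training for a first marathon"],
}

def build_context(user_id: str, question: str) -> str:
    """Assemble what the model actually sees: silent memory plus the visible question."""
    memory = "\n".join(persisted_memory.get(user_id, []))
    return f"Known about this user:\n{memory}\n\nUser asks: {question}"

def complete(prompt: str) -> str:
    """Stub for a language model call; real systems return text shaped by the full prompt."""
    return f"<completion conditioned on: {prompt!r}>"

question = "Is intermittent fasting worth trying?"
for user in ("user_a", "user_b"):
    # Identical question, different assembled input, therefore different completion.
    print(user, "->", complete(build_context(user, question)))
```

Neither user can inspect the other's assembled context; that is the opacity the paragraph above describes.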

Marshall McLuhan, the Canadian media theorist who coined “the medium is the message,” argued that the message of any medium is not its content but its structural effect on human perception.1 Print’s message was the linear, individualist consciousness it fostered. Television’s was the replacement of argument with image. McLuhan never saw AI, but his framework identifies what separates it from every medium before it: the structural effects are personalised. Your reality is yours alone, your cognition is a private service, your version of knowledge need not be anyone else’s. Individuation at scale, as legible through McLuhan’s lens as print’s message of rationality or television’s message of the image.

The participation illusion has a neural cost

When you use AI to think through a problem, the interface is conversational. The experience feels like collaboration. But the collaborator is a statistical engine that has modelled the most probable completion of your input, calibrated to feel satisfying to you. The medium is optimised to be invisible as a medium. Every previous technology McLuhan analysed had formal properties observable from the outside: the book as object, the television as format. AI’s formal properties shift with each interaction. The water changes temperature for every fish.

McLuhan identified something similar with television, which he called a “cool” medium — one that provides low-definition input and demands the viewer’s participation to complete the image, as opposed to “hot” media like film or radio that deliver high-definition input requiring less effort. But television’s invitation to participate was structural and uniform. Everyone filled in the same gaps. AI’s invitation is adaptive and personal. You feel like a co-author. The result is what I will call a participation illusion — a felt sense of creative agency that may not correspond to actual creative contribution.

The evidence for this gap is specific. In a study of human-AI text production, Draxler et al. found that users who had limited influence over AI-generated output did not feel ownership of it, yet publicly declared themselves as authors; those given more control reported stronger ownership regardless of how much the model contributed.2 The gap between felt authorship and actual authorship is not a bug. It is the medium’s message. A critical thinker can engage AI as an active interlocutor and produce something genuinely new. The question is whether the medium’s adaptive responsiveness lets people believe they are doing that work when they are not.

McLuhan held that every extension is also an amputation.3 The wheel extended the foot and amputated the need to walk. The book extended memory and amputated the oral tradition. With AI, the amputation is harder to see, because it is happening to the very faculty you would use to notice it. When reasoning becomes a service you consume rather than a process you perform, the capacity for independent reasoning does not announce its departure.

A 2025 MIT Media Lab preprint by Kosmyna et al. measured brain activity in 54 university students writing essays over four months, comparing LLM users, search engine users, and those writing unaided.4 The LLM group exhibited up to 55% reduced neural connectivity compared with the unaided group — specifically in alpha and beta bands, the signature of cognitive under-engagement. In the same cohort, 83% of LLM users could not quote a single line from essays they had just written. The atrophy also persisted: when reassigned to write without assistance, the LLM group did not recover baseline connectivity on demand.

The augmentation counterargument and its limits

The strongest objection to this analysis comes from the cognitive augmentation literature: AI does not atrophy cognition; it extends it. A Harvard Business School working paper co-authored by Wharton's Ethan Mollick documented that consultants using GPT-4 completed tasks 25% faster and produced work rated 40% higher in quality than unaided counterparts.5 The effect was largest for below-average performers — suggesting AI lifts the floor rather than lowering the ceiling. If you treat these findings seriously, as I do, the picture is not simply one of cognitive decline. It is one of genuine capability extension, at least in the short term and on structured tasks.

But two things can be true simultaneously. The Mollick results measure performance on bounded tasks with clear evaluation criteria. The Kosmyna results measure what happens to the underlying cognitive infrastructure over months of habitual use. These are not contradictory findings. They describe different time horizons and different phenomena: task performance versus cognitive capacity. A calculator extends arithmetic capability while the ability to do mental arithmetic declines over generations. The extension is real; the amputation is also real. The question that neither study can yet answer is whether the trade-off is acceptable — and for whom.

A caveat on the evidence: the Kosmyna study is the most striking finding in this essay, and it has not been peer-reviewed. It has drawn methodological criticism for its small sample size and EEG analysis approach, and no comparable neuroimaging study of LLM use exists.4 I cite it because it is the best available evidence for the neural cost hypothesis, not because it is settled science. The direction is consistent with the broader cognitive offloading literature; the specific effect sizes require replication.

Seamlessness is the wrong objective

I build AI products, and these findings change how I think about what I am building. The default design pattern in products like ChatGPT, Copilot, and Gemini is to minimise friction: give users the answer, reduce the steps, make it seamless. McLuhan’s framework suggests this has a cost no product metric captures. Every design decision that reduces cognitive friction also reduces cognitive engagement. An AI copilot that presents a finished analysis and one that presents a hypothesis for interrogation produce the same deliverable. They do not produce the same user. The first makes the tool indispensable by making the person less capable without it. The second makes the person more capable whether the tool is present or not.
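
The two stances can be expressed as configuration. These system prompts are illustrative, not drawn from any shipping product; both yield a deliverable, but they assign the interrogation work differently.

```python
# Two illustrative copilot stances (hypothetical prompts, not from any real product).

FINISHED_ANALYSIS = (
    "You are an analyst. Deliver a complete, polished answer. "
    "Resolve ambiguities yourself and present conclusions only."
)

HYPOTHESIS_FOR_INTERROGATION = (
    "You are a thinking partner. Propose a hypothesis, show the reasoning and "
    "evidence behind it, flag your weakest assumption, and end with two "
    "questions the user must answer before the analysis can be trusted."
)
```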

Consider a concrete example. GitHub Copilot’s default mode generates complete code blocks that the developer accepts or rejects. Cursor, a competing tool, introduced a “diff-first” interaction: it shows proposed changes against existing code, requiring the developer to read, evaluate, and approve each modification. Both tools produce working code. But the Cursor pattern demands the developer maintain a mental model of the codebase, while the Copilot pattern allows that mental model to decay. The design choice is a choice about what kind of developer you want using your product in a year.
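
As a sketch of that structural difference, here are the two acceptance gates in simplified form, with hypothetical function names and a console stand-in for the real UI. What matters is where the mandatory reading step sits: the first gate shows only the finished block, the second forces a pass over the delta against existing code.

```python
import difflib

def accept_wholesale(original: str, proposed: str) -> str:
    """Wholesale gate (simplified): the finished block is taken or left as a unit."""
    print(proposed)
    return proposed if input("Accept? [y/n] ") == "y" else original

def accept_diff_first(original: str, proposed: str) -> str:
    """Diff-first gate (simplified): approval requires reading the change in context."""
    delta = difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile="existing", tofile="proposed",
    )
    print("".join(delta))  # the delta against existing code is what gets reviewed
    return proposed if input("Apply? [y/n] ") == "y" else original
```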

The design questions that follow: does your interface show its reasoning, or only its conclusions? When a user accepts an AI-generated output without modification, does the system notice? If your product were withdrawn tomorrow, would your users be more capable than when they started, or less? The Kosmyna study, if its findings hold, shows what the consequences look like at the neural level after four months. The companies that treat cognitive engagement as a design constraint will build products that last. The rest will build dependencies.
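
The middle question, whether the system notices verbatim acceptance, is directly implementable. A minimal sketch with hypothetical names: track the fraction of AI outputs that ship without a single edit, a cheap if imperfect proxy for the disengagement described above.

```python
from dataclasses import dataclass

@dataclass
class EngagementTracker:
    """Counts AI suggestions accepted verbatim versus modified before acceptance."""
    verbatim: int = 0
    modified: int = 0

    def record(self, suggested: str, final: str) -> None:
        if final == suggested:
            self.verbatim += 1
        else:
            self.modified += 1

    @property
    def verbatim_rate(self) -> float:
        total = self.verbatim + self.modified
        return self.verbatim / total if total else 0.0

tracker = EngagementTracker()
tracker.record("print('hello')", "print('hello')")            # accepted untouched
tracker.record("def f(a, b): return a + b",
               "def f(a: int, b: int) -> int: return a + b")  # edited before accepting
print(f"verbatim acceptance rate: {tracker.verbatim_rate:.0%}")  # -> 50%
```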

A different sermon for everyone in the same square

AI is not just more powerful than previous cognitive technologies. It is structurally different. Every prior tool, from the abacus to the search engine, extended a specific faculty while leaving the rest of the mind to operate independently. You could outsource arithmetic without outsourcing judgment. You could outsource memory retrieval without outsourcing reasoning. AI sits across multiple cognitive functions simultaneously — writing, analysis, research, synthesis, evaluation — inside an interface that feels like thinking itself. The difference is not one of degree. It is the difference between a tool that augments a limb and an environment that surrounds the whole body.

McLuhan’s global village offers the final lens. He predicted that electronic media would retribalise humanity into the intense, fractious intimacy of village life at planetary scale — a description that maps closely onto what platforms like Twitter later became. AI adds a personalised oracle at the centre that tells each petitioner something different, shaped by their history and biases. The village still shares a square, but everyone hears a different sermon. McLuhan imagined a shared media environment producing different interpretations. He did not imagine one producing different realities tailored to each inhabitant. The fragmentation is no longer interpretive. It is infrastructural.

And AI does not sit apart where it can be studied. It slips into your email client, your search engine, your word processor, your children’s homework. It is not a new channel. It is a new layer on all existing channels. Anyone building AI products that inform decisions in medicine, finance, policy, or education must reckon with how their system maintains contact with a shared, verifiable reality rather than constructing a comfortable private one. McLuhan built the best toolkit we have for understanding media’s effects on the mind. It is not broken by AI. It works. And what it diagnoses is a medium designed to make every user feel uniquely served while quietly dissolving the common perceptual ground on which collective reasoning depends.


References

  1. McLuhan, M. (1964). Understanding Media: The Extensions of Man. McGraw-Hill. 

  2. Draxler, F., Werner, A., Lehmann, F., Hoppe, M., Schmidt, A., Buschek, D., & Welsch, R. (2024). The AI Ghostwriter Effect: When Users Do Not Perceive Ownership of AI-Generated Text But Self-Declare as Authors. ACM Transactions on Computer-Human Interaction, 31(2), 1–40. https://dl.acm.org/doi/10.1145/3637875 

  3. McLuhan, M., & McLuhan, E. (1988). Laws of Media: The New Science. University of Toronto Press. 

  4. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872. https://arxiv.org/abs/2506.08872. This study has not been peer-reviewed and has drawn methodological criticism for its small sample size (54 participants, 18 in the crossover session) and EEG analysis; see Comment on: Your Brain on ChatGPT, arXiv:2601.00856. No comparable neuroimaging study of LLM use exists. The direction of the finding is consistent with the broader literature, but effect sizes require replication.

  5. Dell’Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Frontier: Large Language Models and the Future of Knowledge Work. Harvard Business School Technology & Operations Management Unit Working Paper No. 24-013. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321