Res·Cog

Clarity on building thinking things,
by Gareth Price, CTO @ CorralData.

Authorship Has Outgrown Its Vocabulary

AI didn't invent collaborative writing. It made the collaboration visible — and forced a reckoning with the myth of solitary creation.

The social contract of authorship has never required that the named author produce the text. It requires that the named author stand behind the claims. A colleague’s question made me notice how rarely we say this plainly.

He had read a technical analysis I had written at CorralData and sent me a message: “Did you write this?” My process was: deep research, scenario planning, adversarial stress-testing of the arguments with a large language model, then an aggressive editorial pass guided by a style framework I built for tight writing. Could I have done it alone? Probably, given a month or two. But the result was better — it had been through an editorial process that I, as a CTO at a tech startup, could never have otherwise accessed.

The institutional response to AI-assisted writing treats it as a spectrum from legitimate to illegitimate, with the writer’s virtue measured by how little help they accepted. The ICMJE’s 2023 guidelines drew a line between acceptable AI use (editing for “readability and style”) and unacceptable use (generating substantive content), while denying AI systems authorship entirely [1]. Nature Portfolio’s policy makes the same distinction: “AI-assisted copy editing” requires no disclosure; “generative editorial work” does [2]. These policies assume that real writing is solitary production. Large language models did not create a new kind of writing. They made visible the editorial infrastructure that has always produced good non-fiction — and exposed the fact that our concept of authorship never described how the work actually gets done.

Solitary production is the exception

J.R. Moehringer, a ghostwriter and Pulitzer Prize-winning memoirist, wrote Andre Agassi’s Open, Phil Knight’s Shoe Dog, and Prince Harry’s Spare. Agassi had offered Moehringer a co-author credit; Moehringer declined [3]. Then, watching Agassi accept praise on a late-night talk show, Moehringer yelled at the television: “Say my name! Say my f—ing name!” [4] Moehringer wrote every word. Agassi’s name is on the cover. Everyone in publishing knows how books like this get made — and no one calls Agassi a fraud.

The New Oxford Shakespeare identifies 17 plays in its expanded 44-play canon as containing writing by someone other than Shakespeare, based on computational stylistic analysis by 23 contributing scholars [5]. Political speeches are written by staffers. Op-eds are drafted by policy aides. The idea that authorship means solitary production is historically recent — it emerged in the 18th century alongside copyright law and the Romantic cult of original genius. Before that, collaborative and anonymous composition was the default. The social contract governing authorship has always required something other than solo production. It requires that the named author stand behind the claims. Large language models made that collaboration cheap, ubiquitous, and impossible to ignore — forcing us to articulate a norm that had operated silently for centuries.

Authorship is accountability for claims

In 2023, a New York lawyer submitted a brief drafted with ChatGPT that cited six cases the model had invented; the court in Mata v. Avianca imposed sanctions [6]. He delegated the writing and the thinking — he never verified the cases, never exercised the judgment his profession required. That same year, a student at the University of North Georgia, a public university in the US state of Georgia, was accused of cheating after an AI-detection tool flagged their essay. The student had used Grammarly. The university cleared them [7].

Our vocabulary cannot distinguish these cases. “AI-generated content” covers both — a term so blunt it obscures the only distinction that matters: whether the human can defend the claims on the page.

The competence objection

The most serious challenge to the accountability framework can be stated precisely: responsibility without expertise is liability without competence. Shannon Vallor, an AI ethicist at the University of Edinburgh, argues in The AI Mirror (2024) that practical wisdom develops through exercise — and that outsourcing intellectual work to AI risks stunting the very capacity it appears to augment [8]. The objection applies directly here. When a lawyer writes a brief from case law they have read and arguments they have tested in court, composing the sentences is reasoning through the argument. When a model generates plausible legal analysis, the author may believe they have verified every claim while missing fabrications a practitioner would never have produced.

A first-year law student cannot take meaningful responsibility for appellate arguments they cannot evaluate, any more than I could for a model-generated analysis of cardiac surgical outcomes.

But Vallor’s objection assumes competence is static — that delegating intellectual labour necessarily atrophies the muscle. A competent person can judge work that exceeds what they could produce alone, and judging it teaches them something. When I stress-test a model’s strategic analysis against what I know about CorralData’s market, I encounter framings I had not considered: second-order effects, structural analogies, blind spots in my own reasoning. Verifying is not just quality control. It is accelerated learning. Each round of critical evaluation makes the author more capable, sharpens the next round, and produces better output to evaluate. The spiral runs upward.

Verification teaches the verifier

The competence threshold is not “could you have produced this yourself?” It is “can you evaluate what was produced?” — and iterative evaluation raises that threshold with each pass.

The objection also understates the historical record. Domain expertise was never the standard for authorship. Journalists write about fields they are not expert in, relying on sources and verification. Ghostwriters produce memoirs about lives they did not live. The safeguard was always the verification process. The principle holds. Large language models change the failure mode — models produce confident fabrications at a volume that overwhelms casual verification. The accountability framework survives, but only if the author does the actual work of verification, not the performance of it.

The author defends the page

I used a large language model to write this essay. I am the author. Every claim is one I hold, every piece of evidence is one I verified against its source, and every judgment is one I made. If something is wrong, it is my fault. The model has no reputation, no career, no professional consequences. I do.

“Did you write this?” is a question about production. By that standard, Agassi did not write Open. Shakespeare did not write all of Henry VI. Every executive who has ever published a ghostwritten op-ed is a fraud. We have been asking the wrong question for a long time. Large language models made it impossible to keep pretending otherwise.

The question that has always underwritten the social contract of publishing — whether we acknowledged it or not — is harder and more honest: will you defend what is on this page?


This article was written using assisted authorship. The author supplied the arguments and judgments; multiple AI models assisted with drafting, revision, and stress-testing.

References

  1. International Committee of Medical Journal Editors. (2023). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (updated May 2023). The ICMJE states that AI tools “cannot be listed as an author” because they “cannot be responsible for the accuracy, integrity, and originality of the work.” https://www.icmje.org/recommendations/ 

  2. Nature Portfolio. (2023). Editorial policies: Artificial Intelligence (AI). Nature defines “AI assisted copy editing” as improvements to “readability and style” that do not include “generative editorial work and autonomous content creation.” https://www.nature.com/nature-portfolio/editorial-policies/ai 

  3. Moehringer’s account of declining co-author credit and the late-night talk show incident both appear in: Moehringer, J.R. (2023, January 16). Notes from Prince Harry’s ghostwriter. The New Yorker. https://www.newyorker.com/magazine/2023/01/23/notes-from-prince-harrys-ghostwriter 

  4. Ibid. 

  5. Taylor, G., Jowett, J., Bourus, T., & Egan, G. (Eds.). (2016). The New Oxford Shakespeare: Critical Reference Edition. Oxford University Press. The edition identifies 17 plays as collaborative, with contributions from 11 co-authors, within an expanded canon of 44 works. Attribution based on computational stylistic analysis by Santiago Segarra, Mark Eisen, Gabriel Egan, and Alejandro Ribeiro. See: Segarra, S., Eisen, M., Egan, G., & Ribeiro, A. (2016). Attributing the authorship of the Henry VI plays by word adjacency. Shakespeare Quarterly, 67(2), 232–256. https://doi.org/10.1093/sq/sqw023 

  6. Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. June 22, 2023). https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/ 

  7. Mobilio, A., Nijjar, B., Parrotta, C., & Burmanmore, J. (2024). The Grammarly girl: A case of “unintentional cheating.” In J. Heng Hartse (Ed.), Unveiling academic integrity: Case studies of real-world academic misconduct. BCcampus. https://pressbooks.bccampus.ca/aicasestudies/chapter/83/ 

  8. Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press. Vallor argues that practical wisdom (phrónēsis) develops through exercise, and that outsourcing intellectual work to AI systems risks eroding the capacity for moral and intellectual self-development.