Core Idea
AI writing tools arrived with a promise that sounded reasonable enough. They would accelerate production, handle the tedious bits, free up cognitive space for the work that matters. They would draft faster and edit smarter so that creators could publish more.
The logic was industrial and familiar. More output, same input, better results.
The problem is that human thinking is a process, not manufacturing. It cannot be sped up indefinitely without changing what it produces. When generation becomes frictionless, when a paragraph materialises in seconds and a full essay in minutes, something shifts. The creator stops creating and starts curating.
You ask an LLM to draft an introduction, and now you are three thousand words deep, scrolling through competent prose you did not write, already tailored to your voice, wondering which bits to keep and whether any of it sounds truly like you.
Here’s the thing.
This acceleration feels like amplification, a bigger, more productive megaphone, but it might not be. There is a fine line between multiplying human capability and replacing it.
Confusing the two is not just conceptually sloppy. It risks outsourcing the very process that made the work worth doing in the first place.
Counterpoint
AI is a creative amplifier, staggeringly so since the second half of 2025.
I liken it to having a classroom full of academic geniuses all willing and able to devote their entire bandwidth to your every whim. They can generate endless options, more content and critique than seems possible, then handle the grunt work of structure and polish.
In theory, the creator remains firmly in control, directing the intelligence, shaping the output, adding the human touch that machines cannot replicate.
It is a comforting story because it promises enhancement without cost. You get the speed without losing the substance, the efficiency without sacrificing the soul.
But in practice, it does not work that way.
Content production is not a neutral process that can be mechanised without consequence. The slow work of writing, the false starts and deletions, the searching for the exact word, the manual assembly of argument, these are not obstacles to thought; they are thought.
Remove the friction and you change what gets produced.
AI-generated text can be grammatically flawless, structurally sound, evidence-rich, and entirely generic. It lacks the anecdote only you could tell, the connection only you would make, the uncomfortable observation you would risk including.
What disappears is not quality in the measurable sense. What disappears is voice, specificity, the trace of a particular mind working through a particular problem.
Worse, the acceleration creates its own momentum.
Once the tool is running, stopping feels wasteful. The machine has already drafted three more sections while you were reading the first. Your role shifts from author to editor, from editor to selector, from selector to passive audience of your own supposed output.
The horse enjoys its gallop more than you do. And by the time you notice, you are several miles past where you meant to stop.
Sure, you can produce ten times more content with AI assistance, hit every surface metric of quality, and still find that the essential thing you were trying to say got smoothed away in the process.
A flood of competent prose is not the same as a single sentence that actually matters.
Thought Challenge
Run the subtraction test. Take three recent pieces you created with AI assistance and three you wrote entirely manually. Strip away surface features like grammar and formatting. Read them cold a week later. Mark the moments that feel unmistakably yours, the sentences you could defend, the insights that required your particular perspective. Then ask yourself honestly which method produced more of those moments. Not more words. More of what made the work worth reading.
Track your velocity. Over the next month, log your output volume and your subjective sense of depth per piece. If you are publishing twice as much but spending half as long thinking about each idea, what have you actually gained? Measure not just productivity but whether individual pieces still contain something you could not have generated six months ago. Speed without evolution is not progress. It is repetition at scale.
Impose deliberate friction. Choose one upcoming project and refuse AI assistance entirely. No drafting, no editing, no suggestions. Write it the slow way, manually, with all the usual frustration. Then compare the result not against speed, but against distinctiveness. Did the forced slowness produce anything the accelerated method would have missed? If the answer is yes, you have learned something important about what gets lost in the gallop. If the answer is no, then at least you tested the assumption.
All three actions sharpen the sceptical instinct. Instead of accepting efficiency as an unalloyed good, you learn to ask what the acceleration cost, what got smoothed away, and whether the trade was worth making.
Closing Reflection
Being a mindful sceptic about AI tools is not about rejecting them or pretending they do not work. It is about discipline. It is about noticing when amplification becomes replacement, when the tool starts thinking for you, and when the abundance it generates becomes a substitute for the contribution only you can make.
The horse will always enjoy its gallop. Your job, and mine, is to know when to pull on the reins.
Evidence Support
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
TL;DR… when people expect information to be stored externally they are less likely to remember the content itself and more likely to remember where to find it. Participants exposed to conditions where digital storage was available demonstrated reduced internalisation of facts and increased reliance on external systems as a form of transactive memory.
Relevance to insight… AI text generators function as an external memory and drafting engine, encouraging creators to remember that the machine can produce it rather than working through the ideas themselves. The paper provides hard evidence that when a system reliably holds information for us, we naturally shift from content memory to location memory, mirroring the move from author to curator in AI‑assisted writing. It supports the claim that frictionless access to content changes not only how fast we work, but what our minds bother to hold and process.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
TL;DR… examines how humans interact with automated systems and documents patterns of over‑trust, complacency, and skill degradation when automation performs reliably. It distinguishes appropriate use from four failure modes (misuse, disuse, abuse, and overuse), showing that high-performing automation can gradually recast the human from active operator into passive monitor.
Relevance to insight… a textbook case of “abuse” and “misuse” of automation, where powerful tools are deployed beyond their appropriate role and humans slide into supervisory passivity rather than active authorship. A drafting tool that starts as a helper can become the de facto writer while the human merely selects and tweaks. The paper supports the insight that the critical skill is not just using AI, but knowing when to stop it, reassert manual control, and resist the subtle drift from creator to overseer.
Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in Psychology, 8, 605.
TL;DR… ubiquitous digital devices can impair attention, working memory, and fluid intelligence, even when not actively in use. It highlights how constant availability of digital assistance encourages cognitive offloading and multitasking, with measurable costs to sustained, deep work.
Relevance to insight… powerful general‑purpose tools crowd the cognitive workspace and tempt us toward shallow, rapid switching instead of slow, deliberate processing. The findings support the idea that an environment saturated with assistive technology is structurally hostile to deep composition and reflection. This backs the insight’s warning that increasing the velocity and convenience of content creation can erode the mental conditions under which genuine thought and original voice typically emerge.
Bawden, D., & Robinson, L. (2009). The dark side of information: Overload, anxiety and other paradoxes and pathologies. Journal of Information Science, 35(2), 180–191.
TL;DR… surveys the phenomena of information overload, infobesity, and related “pathologies” arising from abundant, easily accessible information. It shows how high volumes of available content can reduce comprehension, impair decision‑making, and generate anxiety, even as technical systems appear to deliver ever more efficient access.
Relevance to insight… AI text generation is a new engine for information overload, but now at the level of individual creators flooding the world with plausible text. More output does not automatically mean more value and can, in fact, decrease the signal‑to‑noise ratio of human contribution. A system optimised for volume and surface quality can paradoxically make it harder for distinct, thoughtful work to be noticed or even produced in the first place.
Kellogg, R. T. (2008). Training writing skills: A cognitive developmental perspective. Journal of Writing Research, 1(1), 1–26.
TL;DR… frames writing as a demanding cognitive activity that draws heavily on working memory, long‑term knowledge, and deliberate practice, arguing that writing skill develops through effortful engagement with planning, translating, and revising. Fluency and quality emerge from years of actively managing these processes, not from bypassing them.
Relevance to insight… the slow parts of writing are not obstacles to creativity but the very mechanisms that generate it. If AI systems increasingly perform planning, drafting, and even revision, then the human writer is no longer exercising the cognitive muscles that Kellogg identifies as central to expertise. Over‑reliance on AI drafting will not just change the feel of writing, but progressively hollow out the underlying skill and the authenticity of the resulting voice.



