Is that you?
On using LLMs for creative writing
“Students have always been lazy. Students have always cheated. But now, students know that a machine can do the assignment for them... Nagging at the back of their mind will be the inevitable thought: Why am I doing this when I could just push a button?”
— Stephen Marche (The Atlantic, 2024)
I am against the use of AI in creative writing. Such a form of writing, as opposed to ‘content production’, is intended to be a human endeavour. It is meant to signify what a human thinks, what a human expresses, and what a human conveys, because the purpose of such writing goes far beyond simple communication. It allows the reader to see, hear, think, feel and believe what the writer does. It is an experience, however varied or meaningful, or of whatever quality. A machine that can neither see, hear, think, feel nor believe has no right to demand that anyone else share in its phantasms. A human author similarly has no right to demand that any other human pay attention to what such a machine produced on their behalf.
This is my view on using AI, and I am aware it is a view at least a few others share. There is a reason words like “slop” and phrases such as “ai;dr” have arisen in response to AI-generated content. There may be nothing wrong with the content itself, but that is not what we are there for.
Recently, Chris wrote about his AI policy, in which he explains why he generated nearly the entire content of a previous article through AI prompts, and why he plans to do the same in the future:
Prior to running this AI query, my knowledge in this area was very scant. I wanted to know more, but didn’t really want to spend a few hours browsing websites looking for more information. So I obtained that summary from AI.
So AI was used to speed up research. Very good. Was the information verified? I should hope so. Was anything learnt from it, following which Chris wrote an essay of his own? Not really. As he continues to explain:
I don’t attach any importance or accuracy any higher than if a guy down the pub told me all this. However, as a framework it seems to make sense with what I understand already about the evolution of land use in this country. I chose to store the output on my blog so I can reference it again in the future.
To me this seems like an unnecessary extra step. If saving that AI summary is in fact so important, why not save it privately in a notes app? Why publish it for the world to see, and that too specifically in a context (your personal website) where the normal expectation today remains that the content people view in their browser is original, written by the individual whose work they signed up to read? Why not do some work so readers receive more value than they would have if they had simply looked that up with AI themselves?
Chris’s answer to this is to implement what he calls “guardrails” (as a matter of policy), which warn readers that the content they are about to read is AI-generated. I appreciate this step, if only because it shows Chris’s awareness of his readers’ expectations, and yet I find this awareness inconsistent with his decision to proceed with AI anyway.
However, it is worth noting that not all of an article is going to be AI-generated, because Chris plans to write the introductions himself. The problem with this is that it invites a degree of sunk cost fallacy: an article will draw people in, promise them value, and only then present AI content. How much does a disclaimer matter at that point? In my opinion, any disclaimer should appear before the start of an article, not just before the switch to AI content halfway through. As a reader, even if I forgave halfway decent AI cover images because I was there for the text, I would absolutely want to know before I started reading if I was going to be presented with any content whatsoever, at any point during my reading, that was generated using LLMs.
Finally, the new AI policy has an associated tag, ‘AI-assisted’, which helps readers filter for essays with AI content. I think this should be the other way round: it would be more helpful for readers to have a way to filter out any AI-assisted content. I cannot think of a single instance where someone has expressed to me a desire to view only AI-generated content, and I can recall plenty of instances of the opposite.
Like Chris, I too am excited by the potential AI holds, but perhaps in an entirely different direction. I would love to hand over mundane, repetitive tasks to AI with programmable decision trees while the rest of us humans engage in creative, thoughtful endeavours ourselves. And while I respectfully disagree with Chris on several aspects of his implementation (and decision), I am nevertheless interested in seeing what direction the IndieWeb as a whole takes in using AI in the coming years. I would be the last person to write this off as obvious or final, or to expect the majority to share my distaste for it, even if that is what I myself would like to see in the end. But at least when I visit your website and read your essay, I expect that it really was you behind those words.