
If you want to submit a short story or novel to a publisher today, you will inevitably find a disclaimer on their submission guidelines page forbidding stories written with AI. But what exactly does "using AI" entail? If you dictate a story with an app that uses AI to transcribe it, is that using AI? If you then use Grammarly to proofread and edit, is that using AI? If you upload your manuscript to Perplexity as a fact-checker and research assistant, is that using AI? If you use ChatGPT to generate ideas during the outlining phase but then write the actual story yourself, is that using AI? If you generate the first draft with an LLM but then rewrite every single word, is that still using AI? What if you only change 95% of the words? 75%? 50%?
It is increasingly difficult to draw a line anywhere that says "this is clearly AI writing, and this is not." You could try, but in this age it is foolish not to use AI at all, because there are too many useful applications that can improve your writing. Whether it's a grammar checker like Grammarly, a dictation app like Otter, or an LLM like ChatGPT, Claude, Gemini, or Grok, AI can help make human writing better. It is the LLMs that publishers are most wary of. So why not just ban those? Except LLMs can be enormously useful to writers, even if only for research purposes.
I have been critical of using AI to generate an entire story, as it misses the unique creativity that comes from the human subconscious. However, I was never entirely dismissive of AI chatbots, as they can still be quite useful to fiction writers. LLMs can help you outline your story, generate ideas, or even get through a bit of writer's block. You may even use AI to write portions of certain scenes. If a character speaks another language, comes from a historical time period, or is an AI themselves, then ChatGPT might be able to write their dialogue more convincingly than you ever could.
The wholesale ban on AI writing seems to have gone too far. For example, I recently wrote a 14,100-word story that includes a scene where a character recites a magical spell in the form of a rhyming poem. I used ChatGPT to help generate that poem. Some of ChatGPT's lines I kept, but I rewrote others and added a few of my own. Regardless, that poem accounts for only 100 words of the 14,100-word story. So if 50 of the 100 words of a poem within a 14,100-word story were generated by AI, can that story be submitted to a publisher that bans the use of AI?
If a writer uses ChatGPT for a small portion of a story that they otherwise write almost entirely themselves, should that story be disqualified? What if they only used ChatGPT during the outlining or idea-generation phase? Or only as a fact-checker and research aid? It would be silly to discount such stories just because the writer used AI somewhere in the process. Those examples are still primarily written by humans. But how much AI assistance is too much for a story to still count as human-written? There is no clear line of demarcation.
Of course, you could just write a one-sentence prompt, let ChatGPT generate the entire story, and publish whatever it produces with no further editing. Clearly that is the type of story nobody wants, and it will most likely not be very good. (Though AI is continually getting better at this, at least for short flash fiction with a gimmick in a predefined format.[^1]) Since AI stories can be generated quickly and in large quantities, publishers face the problem of being flooded with submissions of low-quality AI stories that have barely been edited by humans, if at all. That is the real reason publishers are so anti-AI.
But I would argue publishers should not ban the use of AI entirely. They should give writers the creative freedom to use AI to their best abilities and judge the final story as it is. It is either a good story or it is not, no matter how much AI was or was not used in the process. Frankly, I don’t care how much was or wasn’t written by AI. All I care about is if the story is good—and that’s what publishers should care about as well. If a human figures out how to generate a great story with AI, they should be rewarded, not punished. I’ve written before about my disdain for “algorithmic fiction”—that is, human writers who follow strict formulas. I would rather read robots who write like humans than humans who write like robots. The more human touch there is in a story, the better it will likely be—so long as the human is a good writer. And the best human writers can figure out how to use AI to help make their writing even better.
Publishers also oppose AI for ethical reasons, because some LLMs were trained on copyrighted material without the authors' permission. This objection makes more sense, though it is another case of a blurry line. Is what an AI like ChatGPT does really so different from what a human author does? A human reads many books and watches many movies, those influences blend in the subconscious, and out comes a "new" story based on those ideas. Every story ever written by a human was "trained" on data written by other humans, and on observations of other humans. Yet no author attributes every single source that influenced them; it would be impossible, as most of the time an author isn't even conscious of all their influences. It seems unfair to hold AI to a standard that we don't hold ourselves to. Humans do, however, pay for the books and movies they consume, so tech companies should do the same. I would gladly let tech companies train their AI on my writing if I received royalty payments for it.
Publishers should lift the blanket bans on AI and let writers use AI to their best abilities to create the best stories possible. A mediocre mind will only ever create mediocrity with AI, but talented writers can use AI to create work that is truly great. AI is a tool, and only experienced craftsmen know how to get the most out of any tool. In this case, the craft is writing and the tool is an LLM.
As for the problem of being spammed with inferior AI stories, publishers can use AI itself to help identify and filter out the slop. Whether you like it or not, AI is here to stay, so banning it for fiction writing is futile. After the word processor was invented, I'm sure publishers were overwhelmed by an increase in submissions as it became faster, cheaper, and easier to type and print manuscripts. Then it happened again with the invention of email. Banning AI now would be like banning the word processor and demanding that all stories be handwritten, which, ironically, may be the only way to ensure a story was not written by AI (for now).
[^1]: Such as my "Future Fake News" series. I have used AI to help partially write some of those stories, but I would never let AI write the whole thing.
