Against AI
A response to Erin Underwood’s open letter to SFWA and the SFF community
Opening social media this morning, the first thing I saw was Erin Underwood’s open letter on the use of AI, wherein she advocates for both the SFWA in particular and the SFF community in general to adopt a more nuanced response to the use of AI by creatives. A main motivator here was the SFWA’s recent missive on the subject of AI and awards eligibility: on December 19, the organization released updated rules for the Nebula Awards, the initial wording of which suggested that works containing a degree of AI-generated content were eligible for the ballot. After significant community pushback, the wording was swiftly amended to make clear that any use of AI would be disqualifying. It is this change - and, it seems, the angry response which sparked it - with which Underwood takes issue.
In laying out her intentions, Underwood writes that the issue of AI use “needs a broader conversation,” but that she has “put off raising it publicly because deviation from accepted positions (especially related to AI) in our community can be met with hostility rather than debate.” This is, in fairness, true, but with good reason: AI is not a neutral technology. The widespread anger at AI isn’t due to mindless groupthink, a failure to understand the tech itself, misplaced resentment at a fast-changing world, or any of the usual culprits. Community debate over the use of AI isn’t like the back-and-forth we collectively had about e-books and what they mean for the future of publishing, and though it has some key points of overlap with the ongoing debate about the degree to which certain tech is bad for kids, it’s not like that, either.
It’s not even simply, as Underwood puts it, that “the way this technology evolved was deeply flawed, and real harm has already been done to creators,” although this, too, is true (if a massive understatement). The outright theft of creative works to train various AI datasets, whose output has since been leveraged against human artists, is deeply pernicious - not just as an abstract cultural or philosophical point, but legally, financially, institutionally. I would go so far, in fact, as to characterize this creative theft as an original sin: something so foundationally damning as to have tainted every subsequent iteration of the tech, no matter how far removed. Make no mistake: if this were the only crime we could lay at AI’s door, it would be bad enough - vast enough, integral enough - to justify loathing it forever.
But AI is foundationally unethical in ways that go far beyond its creative theft, and which do not only concern the communities from whom that work was stolen. AI is unethical the way billionaires are unethical, and exists for much the same reason and with many of the same risks, because the one is inextricable from the other: a culmination of wildly unregulated capitalist excess which, if left unchecked, legitimately threatens both global democracy and the long-term habitability of our one and only planet.
At the macro level, AI has almost single-handedly negated what little progress we’d made as a species towards lowering global carbon emissions, which is frankly terrifying; at the micro level, data centers pollute their local environments, with rural communities, poor communities and people of colour disproportionately affected. At the same time, the soaring costs of this increased electricity usage are passed on to consumers, while overworked power grids struggle to keep up with corporate demand. Even worse, AI data centers are overwhelmingly cooled with fresh water, which puts them in direct competition with communities for what is arguably the single most precious substance on the planet. Drinking and bathing keep the majority of the water in circulation, allowing for replenishment and reuse within the same location; cooling for data centers, however, results in evaporation, removing it from the local water table entirely - leading to the rapid depletion of extremely finite resources.
And then there’s the human cost: the hidden yet vital work of AI content moderation, often outsourced to workers in the global south, who develop PTSD due to constant exposure to violent, horrific content. Nor is the use of AI itself benign: in addition to deepfakes and AI psychosis - something which has already led to multiple deaths, both by murder and suicide - there’s the catastrophic toll that generative AI in particular has taken on education. High schools and universities alike have been overrun by AI, and while not everyone sees the danger in this, the simple fact is that you cannot learn to think by outsourcing the act of thinking, least of all to something unintelligent. Yet students - impressionable, stressed, looking for a quick solution to the challenges of study - are using ChatGPT in droves; and so, too, are some of their teachers. (If a professor uses AI to set and mark an essay which students use AI to write, has anyone been taught?)
And all of this for a technology which, at base, doesn’t actually fucking work - or at least, not as advertised. AI frequently hallucinates, giving answers which are sometimes catastrophically wrong, with a rate of inaccuracy that at times exceeds 60%, but which more often sits around 45% - a truly staggering degree of failure for something billed as the future of technology. AI has produced fake legal citations, including false names and quotes; it has invented academic papers that don’t exist, wiped out an entire coding database, and given incorrect information to consumers on official websites, leading to at least one successful lawsuit. Even in the field of medicine - one of the places where AI use is most frequently touted as a net gain for things like diagnostics - errors are constant. Like the Demon Cat in Adventure Time, AI has approximate knowledge of many things, but given its ubiquity, all those inaccuracies add up fast.
And then there are the social, psychological and emotional costs. From AI-generated ads to deepfake videos of cute animals to AI covers of popular songs, it’s increasingly difficult to know whether what we’re seeing and hearing is real; if the product we want to buy actually exists; if the incident we’re witnessing actually happened; if the podcast we’re listening to or the text we’re reading or the art we’re viewing represents legitimate human communication or AI slop. All these things are profoundly destabilizing, not only to our individual sense of reality, but to our collective sense of truth. And oh, the scamming: construction workers using AI to make incomplete work seem finished, service workers using AI to fake evidence of food delivery, landlords using AI to falsify the appearance of properties for rent - the list is endless. Nothing is real anymore.
I could go on, but the point is this: AI is harmful in every possible way. It’s a highly inaccurate plagiarism tool built on creative theft, accelerated environmental destruction, human rights abuses, heightened risks of psychosis, sexual blackmail, educational collapse and the erosion of our collective sense of truth, and the only reason it’s ubiquitous despite all this is that a handful of sociopathic billionaires working in a highly unregulated field have doubled and tripled down on putting it in everything, because the minute the AI bubble bursts, they’ll lose a staggering amount of money. (That, of course, and it’s the kind of tech that aids in committing war crimes, which is always of powerful interest to the military-industrial complex.)
So when (to return to the open letter at hand) I see someone argue, as Underwood has done, that “there is no putting the genie back in the bottle” - that we must account for and accept certain types of AI use going forward, because that’s simply the world we inhabit now - it makes me want to scream until my throat ruptures, because no, we fucking don’t. We don’t have to accept AI as a technological innovation any more than we have to accept meth as a chemical innovation, and for similar reasons: AI is bad for us as a species. It harms us - and our planet - far in excess of any benefit it provides; and those benefits are overwhelmingly achievable through other, less toxic means.
“Writers, artists, musicians, publishers, and the industries that support them,” Underwood writes, “must remain viable and competitive in a modern world that is becoming deeply dependent on AI tools and AI-driven infrastructure.” I agree. And the way we do that is by telling AI to roundly fuck off: to compete against it. Our collective power in this situation - our only power, in fact - is our right of refusal: to reject what is being done to us. As regular civilians, we can’t put Sam Altman, Elon Musk and Mark Zuckerberg in jail where they rightly belong (and they do belong in jail, for a staggering variety of crimes), but we can, at the very fucking least, decline to rubber-stamp their version of reality. We can’t stop our peers from choosing to use ChatGPT in their research, but we can say that putting AI-generated words in a given work of fiction disqualifies it from awards eligibility. For fuck’s sake.
Underwood writes:
“Having a yes/no switch that governs the use of AI and generative AI isn’t viable because this technology is now embedded throughout the core infrastructure that supports businesses today… At the same time, there are AI use cases that touch creative work directly and indirectly, often without the creator’s knowledge or consent. Those realities must be acknowledged. Creators should not be penalized for incidental, accidental, or third-party use of AI in business processes surrounding their original work.”
This is, to put it bluntly, a bunch of mealy-mouthed bullshit. Recall that we’re talking here about the SFWA stating rules around eligibility for the Nebulas, not about access to publishing in its entirety. No-one is entitled to be eligible for a specific award; ergo, a rule which sets the parameters for eligibility is not “penalizing” anyone, as all awards are selective by definition. You might as well argue that an author who wrote a 40,000-word novel would be “penalized” if the SFWA changed their minimum word limit for the category to 41,000. You’re not being told you can’t publish at all if you use AI; you’re just being told you can’t be on the ballot for the Nebulas.
More to the point, however, I find it both ironic and wildly hypocritical that, in one breath, Underwood is willing to acknowledge the lack of authorial consent at the heart of LLM tools - “The way this technology evolved was deeply flawed, and real harm has already been done to creators. That history matters and can’t be ignored.” - while in the next, taking it as a given that “AI use cases… touch creative work directly and indirectly, often without the creator’s knowledge or consent.” If you care at all about the fuckedupedness of the former, then your response to the latter should not be to pre-emptively accept that of course authors will continue to have no control over whether AI touches their work on the backend. Rather, you should want to take advantage of the only leverage we have, which is telling publishers firmly: if you do this, you will lose money and industry prestige. Make sure that AI doesn’t touch the works you put out, and they’ll remain award-eligible.
Would it suck, in this hypothetical scenario, for an author to discover that their own work wasn’t award-eligible because of something their publisher did without their knowledge or consent? Of course! But our collective anger in that instance would be rightly directed at the fucking publisher, not at the award. Arguing for the Nebulas to actively permit a certain type of AI involvement doesn’t protect authors; rather, it frames “incidental, accidental, or third-party use of AI” on the part of the publisher as inevitable and neutral, rather than as a continuation of the same disrespect on which AI was founded. The onus should be on publishers to do better, not on awards to condone their failure to try.
Nor - to speak bluntly - do I give any credence to the idea that AI is now so deeply embedded in business that the cost of extracting it presents some insurmountable obstacle that no company could be fairly expected to manage. Even if it weren’t the case that only 5% of companies are deriving any value from AI, this isn’t a question of physical infrastructure, which requires logistics, time and money to dismantle; it’s a matter of which software to use. When it comes to getting rid of their dependence on AI, the primary concerns the vast majority of these firms will face are data protection lawsuits for having used it in the first place, and the hassle of rehiring the personnel they fired before they realised the tech was no substitute for human expertise. Both of these are own goals, to be sure, but it’s not exactly the same as removing a load-bearing wall from the metaphorical building.
“Publishers can’t realistically avoid using these tools if they intend to remain competitive and continue selling books, art, and music created by their authors and artists.”
Yes, they can. Very easily, in fact! And not least because their core audience overwhelmingly hates AI, such that we’re more likely to support a publisher who takes a strong stance against it than one who opts to hit their dick with a hammer. Nor is this phenomenon restricted to the creative sphere; other industries are already noticing that, true to the predictions of some early observers, the use of AI is coming to be seen not as innovative, but as cheap: the hallmark of a scammy, sub-par product and an untrustworthy company. When Coca-Cola released an AI-generated ad for Christmas, for instance, it was roundly derided; but when a French supermarket chain released their own, lovingly animated offering, it went viral internationally, simply because it wasn’t AI. The same was also true of a recent Porsche commercial, which mixed traditional 2D and 3D animation.
“At the same time, these tools are enabling smaller and independent publishers to compete more effectively with large companies such as Tor, Penguin Random House, and Gollancz by improving efficiency, reach, and sustainability.”
Which tools? Which publishers? This is a very easy claim to make off the cuff, but without some names and numbers to back it up, it’s so much hot air. And, again: as Underwood herself has pointed out, creatives are overwhelmingly hostile towards AI. If specific publishers are using it behind the scenes, the rest of us would like to know who to avoid.
“The real challenge is that avoiding AI entirely is becoming increasingly impractical, even for those who are committed to producing fully human-authored work, as AI is now embedded in systems creators can’t control or realistically avoid.”
This is true to the extent that, say, Microsoft Copilot comes with the Microsoft Office suite, but the idea that there’s no meaningful difference between utilizing a browser or program that comes with pre-installed AI options and actively choosing to use those tools is absurd. The point, as laid out in the Nebula rules, is what makes it into the final text. Are all the words your own? Good. That’s not a difficult bar to clear or principle to understand. (And for the record, you can simply disable Copilot.)
“Awards exist to recognize excellence related to original work by human creators and the governing rules for awards should be distinct from regulating every tool involved in the surrounding production, communication, and distribution process. Conflating authorship with standard business processes makes it harder to uphold the values awards are meant to protect.”
Granted, if one were inclined towards pedantry, I can see how the Nebula rule might seem ambiguous, as the current wording reads as follows:
“Works that are written, either wholly or partially, by generative large language model (LLM) tools are not eligible. Works that used LLMs at any point during the writing process must disclose this upon acceptance of the nomination, and those works will be disqualified.”
The clear spirit of the rule - and, indeed, its naked intent, given the surrounding conversation - is to avoid generative content; the wording, however, does leave a door cracked open for nitpicking; for some to worry, as Underwood clearly does, that the phrase “at any point during the writing process” extends to things like spellcheck or autocorrect tools, which might make use of AI without the author’s knowledge. Should specific clarification be sought in this matter - and given how quickly the SFWA rushed out their initial correction, it might well be - the logical solution, to me anyway, would be to add a qualifying rider, defining the terms and their purpose.
For instance: “For these purposes, ‘the writing process’ is defined specifically as the act of crafting the story itself, and does not extend to the collation or creation of any material that doesn’t appear in the final text. ‘[Using] LLMs at any point’ is defined as using LLMs in a generative capacity, to either create, amend or restructure the final text beyond what might be achieved with a non-AI spellcheck, voice-to-text or autocorrect program.”
Do I particularly like this wording? No; or rather, I dislike the idea that such a thing might be required. Firm definitions, as any philosopher will tell you - and I’ve lived with one for twenty years - are difficult things to construct. Plato once defined a human being as a “featherless biped,” only for Diogenes to come rushing in with a plucked chicken: “Behold, a man!” The more we try to nail things down, the more language writhes and tangles, sprouting unintended implications the way a struck hydra sprouts heads. As such, it’s not that I can’t imagine a scenario where such a rider to the SFWA rules might prove lamentably necessary; I’m familiar with both the foibles of the general public and the pedantry of nerds. It’s rather that, just as I don’t want to shape my fiction in response to imagined bad-faith critique, I don’t want to pre-emptively account for some petty idiot arguing that a specific work should be disqualified because the author’s editor admitted to using spellcheck. I know that type of person exists; I’d simply prefer not to acknowledge them if I don’t have to.
Cladistically - according to Stephen Jay Gould, at least - there’s no such thing as a fish; nonetheless, we all understand implicitly what a fish is. The spirit of the law trumps its letter; yet just as nature abhors a vacuum, so does a certain type of person love a loophole, which is why, throughout various forms of human endeavor - most notably sports and customer service - you’ll find highly specific, even comical rules that serve as a form of environmental storytelling: something happened here. The darker flipside of this, of course, is the adage that OSHA rules are written in blood: that dangerous practices are often only outlawed after harm has already been done, and even then only in response to significant lobbying, because the powers that be frequently value profit over human lives.
When Panera Bread released a highly caffeinated but insufficiently labelled lemonade, resulting in the deaths of two people, it was pulled from sale. By contrast, more than ten people have committed suicide due to AI psychosis, yet the same types of chatbots remain freely available. When the grieving parents of a teenager whose suicide was functionally encouraged by an OpenAI chatbot sued the company, the response of the company’s lawyers was simple: that using their AI to assist with suicide or self-harm was against the terms of service. These are not people to whom I want to cede one single fucking inch of ground, least of all in my professional space. Do not obey in advance.
“Small and indie publishers often rely on generative AI for marketing, planning, and analysis because they lack the staffing and budgets of large publishers. Blanket AI restrictions force these presses into an impossible choice of either avoiding modern tools that allow them to publish more work and sell more books or use them and disqualify all their authors from awards…
“Fan organizations and conventions are overwhelmingly volunteer-run and chronically understaffed… AI tools can reduce the burden of these time-consuming tasks and help volunteers work more efficiently. Without such support, many conventions may be forced to scale back or shut down entirely due to burnout and lack of operational capacity. The loss of these community spaces would be a significant blow to the science fiction, fantasy, and horror community as a whole.”
Small and indie publishers - and, indeed, fan conventions - have existed for decades without AI tools. Whatever convenience might be wrung from their use, these are not institutions and businesses that have sprung up overnight purely thanks to a single technological development, to be washed away in its absence like sand in a flood. I am very much in favour of government grants and other such resources to help conventions and indie publishers thrive; I am not in favour of the plagiarism machine as fallback.
When a prompt engineer argues - and they routinely do - that they’re “only able to be an artist” because generative AI allows them to “make” works that they’d otherwise lack the skill to produce, we don’t accept this as a reason to greenlight their use of AI. Why should our treatment of business be different? Anyone is entitled to try their hand at something, be it artistry or running a shop; you are not, however, entitled to succeed, such that whatever you do in service of that end - like grossly underpaying and overworking your employees, or using AI, or stealing everything you can get your hands on to train your LLM because, by your own admission, you couldn’t afford to pay for that much material otherwise - is therefore justified. Yet when it comes to AI, presumably encouraged by the as-yet consequence-free illegality with which the tech was developed in the first place, “but I need to do something wildly unethical, otherwise I can’t succeed!” has somehow become a conventional justification for acting like an asshole. It’s maddening.
AI is unethical on a scale that SFF authors should be uniquely placed to appreciate, its evils mirroring metaphors that are older than our present civilization. AI is the cursed amulet, the magic mirror, the deal with the devil, the doppelganger that learns our secrets and steals our face; it’s a faerie illusion, leprechaun gold, the fox’s trick that gives rot the look of resplendence, the naked emperor parading with his cock out; it’s the disembodied voice that whispers let me in, the zombie virus that transforms the known into the unrecognizable, the corrupting fungi whose tendrils invade and poison. It’s the literal fucking One Ring, telling us that of course we’d use its power for good, compelling us to pick it up so that through us, it might do great evil.
And it is also, at the same time, constitutionally stupid. It lies. It forgets. It hallucinates. It’s Sauron with dementia. It’s like if plastic were also meth - which is to say, it’s like asbestos. It is poisoning us, our planet and our future at an unprecedented rate, and the primary use-case is for sociopathic billionaires to try and Underpants Gnomes their way into No Pay, Only Earn. (Step Three: Profit!) It ought to be legislated out of existence the same way Heelys once were, but because the US is presently governed by a kakistocracy whose worst excesses put Nero to shame, just about the only thing we can do as regular-ass people trapped in this capitalistic hellscape is to say: no, we will not grant award eligibility to works created with AI; no, we will not respect its usage; no, we will not roll over and lovingly suckle whichever technocrat’s boot is nearest; no, we will not treat the future we’re being offered as inevitable. No. No. No.
So, yeah: I am profoundly hostile towards AI. Who wouldn’t be?

