“Creators should not be penalized for incidental…” I absolutely loathe takes like this.
Underwood (and others) is trying to establish a definition of “incidental” for the usage of AI. But they do so with incredible vagueness. Who are they to determine what's incidental? And who are they to determine if an “incidental” use even exists? Is there an “incidental” use of a knife in murder?
It's very reminiscent of the tech industry in general. Now AI is defined as anything. Calculator? AI. Washing machine? AI. UI? Believe it or not… AI.
This expansion of scope allows them to get away with so much BS. We need to define everything narrowly so these clowns can't keep getting away with their statements, imo.
This essay is amazing, thank you! I really have nothing to add to any of your points, and only commenting to say "bravo!"
I will add one sidenote: I find it deeply creepy to see Underwood stalking every real artist and writer who articulates just how completely disingenuous and ethically bankrupt the cesspool of her arguments is, all so she can pretend she agrees with them in some way, or that their opinions are somehow "equal" to hers, because "we're all just talking here." These arguments are NOT equal, and are not coming from equal minds. This isn't the "conversation" she's pretending she wanted. I find the very idea offensive, honestly, and wish she'd just sit with the points people like you and Wendig articulated, and actually do some self-examination as to why these things aren't obvious to her, if she's truly a part of the writing/artistic community. It just makes it so disturbingly clear she doesn't get it at all, and she doesn't see herself clearly at all in relation to others who are pointing out the huge problems not only in what she said, but in the assumptions and worldview behind all of it, and why it's deeply problematic.
There are two typos: mirco instead of micro, loose instead of lose. That hardly matters, but it's interesting to note that these may be worthwhile signs that this was written by an actual human.
One part of this is incorrect: evaporative cooling is not the only way to cool datacenters. It just happens to be the cheapest, and perhaps the point could be made that *of course* people deploying AI will never do the better, more environmentally friendly, more long term, and more expensive thing when they can just finagle water and electricity from municipalities for cheap prices, often so much so that nearby residents end up subsidizing these AI datacenters.
Foz, this is a great response. This is exactly the kind of response I was hoping would come out of this discussion. I know I started this discussion in a rather ham-handed way, but I actually loved your response. It was never my intention to sell authors on using AI but to expose how I have seen it being used in business by giving examples of how people can imagine getting value out of using AI.
The one thing I could have (and should have) made clearer is that the rules that awards organizations present and the response to them by authors do generate a very real wall (not just a line in the sand) that publishers can't pretend doesn't exist. So, as this conversation continues, and I hope it does, I would love to see publishers (and other businesses) have very real conversations with their staff that address what AI is/isn't, what staff can/can't do with AI, and what the impacts will be if they are caught using AI on projects.
I don't have all of the answers, but I will admit to thinking that we won't be able to get rid of AI. So, in my mind at least, we have to create some very firm guardrails for accepted and unaccepted use from an informed perspective because our government seems to have very little interest in creating any type of policy or regulation on this topic that would inform and educate the general population.
My letter was imperfect in many ways, but the collective voices from you, Chuck, Nick, and others are helping to raise awareness, educate people, and push that "no AI" line so that it is no longer just surrounding the author but the industry that the authors support.
This has been painful for me, but it's honestly less painful than the future I see without this conversation taking place. Thank you!
Thank you for this.
This is an excellent breakdown—with receipts. Thanks for doing the work on this.
This is a great piece! It's so important to hear from writers firsthand how AI is affecting them.