This week, I’m continuing my exploration of the five major concerns surrounding Generative AI and its impact on the creative writing community. I’ll cover how writers may choose to be Transparent in their use of AI, or may choose not to, with good reason. We’ll also discuss the largely unsubstantiated fears that GenAI plagiarizes original works.
GenAI has been getting a lot of bad press over the last 18 months, but it’s certainly not all negative. It could be viewed as another tool to help creatives create better content.
Last week, I said I wouldn’t add an AI-generated feature image due to the subject matter of the newsletter. Honestly, it looked terrible. So this week, I’ve asked DALL-E to generate something appropriate again. So, shoot me.
Last week, I covered what it means to claim Authorship when aspects of GenAI are used to produce the final piece. Closely related was the discussion on Authenticity in the age of Generative AI.
Transparency
Should a creative writer always ensure that any use of GenAI is disclosed in their writing process? Perhaps by prominently displaying a disclaimer on every piece of work affected. In fact, should they be required to do so? If yes, then why? For what purpose?
I’m very open about my use of AI for both short-form posts like this newsletter and my developing long-form writing process. I have added a disclaimer to every newsletter in the past, but I stopped doing that mainly because I’m constantly writing about the subject. I think it’s obvious that I use AI extensively and, most importantly, how I use it.
Building an audience relies partly on building trust between the writer and the reader. Of course, the content has to have some value; otherwise, why bother reading it at all? Trust in this scenario roughly equates to honesty: being open about the writing process, amongst other things. But how far does this have to go? Readers are not stupid, and throwing up a disclaimer for what is obvious insults their intelligence.
Writing constantly evolves as new tools and technologies become available in the mainstream. Think about the introduction of typewriters, word processors, and personal computers. Certain Luddites viewed them negatively, but over time, they have become accepted and part and parcel of a writer’s tools of the trade.
There has always been a lot of fear and misconception around every new technology, and most of it has proved unfounded. The loudest naysayers gradually fade away into the background noise. People, being people, are hard-wired to fear the new and automatically assume the worst. I advise keeping an open mind and stepping back before jumping onto whatever bandwagon is currently in vogue. Do some research and think for yourselves.
The ridiculous idea that an AI will replace creatives wholesale is a fantasy. Have you ever tried to ask an AI to generate even a short article like this one? Even the current best for generating creative prose, Claude 3.5 Sonnet, struggles to produce a complete post that looks natural.
I have been, and remain, open and honest about using AI in my creative writing process. I don’t ram disclaimers down my readers’ throats anymore; I prefer to discuss and write entire articles on the subject. That, to me, is as transparent as I need to be. Anything more would be laboring the point and, frankly, boring.
What matters is the final product. Does the finished piece entertain, inform, and resonate with the readers? If yes, then I feel my job is done; if not, it’s probably because I’m a crap writer. But that would be my problem, wouldn’t it? I’ll never hesitate to answer honestly about my writing process if asked. What more can be expected?
Also, I won’t ever tell other writers they need to be more transparent about using Generative AI. It is their choice how open and honest they wish to be. As a creative, it comes down to the satisfaction of producing the best work you can. I’m proud of my progress with the quality of my work over the last few years. None of that progress came from AI; it has all been human me, learning as I go.
Plagiarism
By far, the biggest worry about AI is that it will take existing works upon which it has been trained and reproduce, plagiarize, or steal the original writers’ intellectual property. If true, this really would be something to worry about. Fortunately, that’s not how Generative AI works.
The key word here is ‘Generative’. Every large language model (LLM) in use today generates text by predicting, given the current context and the request, a probability distribution over possible next words, and picking one. It does this one word at a time, repeatedly, until it completes the request. It does not store its training texts, so it cannot simply look them up and reproduce them verbatim.
LLMs are built on a foundation of machine learning, and the modern architecture of most of them is based on transformers, a type of neural network. During training, they are shown partial texts and asked to predict the next word or phrase.
Once the model is trained, it cannot access its original training data. Instead, it uses the linguistic patterns it has learned to generate new and original text. That said, the style and content may resemble the text on which it has been trained. But that’s how language works; consider how a human attempts to express an idea in their own words. Inevitably, they are influenced by the words and ideas they have already been exposed to.
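The generation loop described above can be sketched in a few lines of Python. This is a deliberately tiny illustration, not how a real LLM is implemented: the hand-built table of next-word probabilities below stands in for the billions of learned parameters of a trained transformer. The point it makes is the one in the paragraphs above: generation works from learned probabilities, not from stored copies of the training text.

```python
import random

# Toy stand-in for a trained model: learned next-word probabilities.
# A real LLM computes these with a transformer rather than a lookup
# table, but crucially, neither stores the original training texts.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"quietly": 1.0},
    "ran": {"quickly": 1.0},
}

def generate(prompt_word, max_words=5, seed=None):
    """Autoregressive generation: repeatedly sample the next word
    from a probability distribution until there is nothing to add."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no learned continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", seed=1))
```

Because each step samples from a distribution, two runs with different seeds can produce different sentences from the same prompt, which is why the output resembles the style of the training data without replicating any one source.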
Even when trained on copyrighted material, LLMs are designed to generalize from that material, not replicate it. So it is totally different from plagiarism, which is simply copying someone else’s work. Lastly, here is a summary of how LLMs really work. ChatGPT generated this (See? How to be open and transparent 101):
LLMs like me function as probabilistic generators of text rather than repositories of specific texts. By learning from a wide array of language examples, LLMs are able to synthesize new and relevant outputs, making concerns about plagiarism largely unfounded. However, continuous efforts to ensure responsible use, data handling, and transparency are key to addressing any potential ethical issues.
Final Thoughts
While writing this newsletter, I took a short break to walk the dogs. As is my routine, I listened to the latest Wordslinger podcast with James Blatch and Cara Clare. James said something interesting about trying out Claude to write a scene; here is what he had to say:
“… it wasn't that good. I mean, it was good, but it wasn’t; it was a start for a scene rather than the completed package.”
What is generated is a start, and that’s how I use it: to give me a kick into the creative mode of thought and eliminate the blank page. If you’re struggling and can’t think about what happens next or where the story goes, you can ask AI. You don’t have to use what it generates, and generally you shouldn’t, but it does give you ideas about how to proceed.
To remain on the side of ethical use, the best approach is to use AI to enhance your creativity. Don’t use it as a crutch to replace your writing; always prioritize originality and authenticity in your work.