If you’ve been anywhere on the Internet recently, you’ve probably seen a lot of comments about the use of artificial intelligence (AI) and machine learning when it comes to creative applications like art and writing. (Among other reasons, it’s a big part of the current Writer’s Guild of America strike.) It’s about time for a post from me about what that means for my writing, and then some general thoughts about the larger implications of these new and ever-improving technologies.
One thing that’s making some of these conversations complicated is that people are using the terms “artificial intelligence” and “machine learning” in a variety of ways. Here we’re mostly talking about situations where computers use information they’ve been trained on to give output (basically, making predictions of what makes sense based on what they’ve already seen).
First off, all of my writing comes out of my own head. I do not use any kind of generative technology tool for ideas (I have plenty), creating text, or anything else like that.
I use various computer tools as part of the writing process, of course, but nothing that has any kind of predictive creative aspect. Those tools include Scrivener, 4theWords, AeonTimeline, and Obsidian. I also use some mapping tools and of course internet search tools. Those search tools do have some predictive aspects built in these days. I talk through parts of my thoughts with Kiya (friend and editor) as well as sometimes other friends or folks on the Discord when I’m trying to figure out how a particular aspect works.
Fundamentally, that’s for two reasons. First, there’s a lot of information and worldbuilding detail and ‘how does this connect’ that is basically only in my head. (Though a fair bit of it is also in Kiya’s head, these days.)
Second, of course, I really love writing! Writing is the point of the exercise here, why would I want something else to do it?
Not that I don’t have my nights where I’m staring at a blank screen and going ‘why won’t the computer just understand what I want on the page and make that happen?’ But I do actually want to be the one doing all that writing and making decisions.
I do use a tool with some predictive aspects to it for a particular stage of editing. To be honest, many of the tools I actually use have been part of spelling and grammar checkers for a long time now, just in a more limited way than the most recent tools allow. (If you use autocorrect on your phone, that’s also predictive.)
I start by going through my own work myself, making notes of things that need to be edited throughout the whole book, as well as where I need to make changes in specific chapters. All my own brain!
Then I move it into a tool called ProWritingAid, which offers a number of different specific editing tools. While they have been adding an AI-driven tool (Rephrase), I never use it. That’s both because I rarely want to rephrase things for the reasons it suggests, and because when I do, the words it suggests are usually not helpful to me.
(Among other problems, it doesn’t always deal elegantly with the fact I’m writing in the 1920s. I’ll use a gendered term like ‘waitress’, it will suggest I want something more gender-neutral, and I will stare at it and go ‘nope, still 1920s’. A useful edit for some people, but not stylistically useful for me.)
How I use ProWritingAid
I mostly use it for the following tools:
- Taming my commas.
- Catching typos and missing words. (See the note on this one below.)
- Cutting down long sentences. (No, Alexander, you may not have a 65-word sentence.)
- Checking consistency for British spelling and usage.
- Avoiding some phrases and words I overuse (via their “house rules” settings).
That second point – catching typos and missing words – is an accessibility tool for me. I do sometimes have weird brain glitches (after a bout of extended low-grade migraine about 8.5 years ago now) which mean I will sometimes leave out a word, flip the word I meant for something else, or make other weird typos. That makes a grammar check that flags those particularly helpful to me, since it lets me clean those up before the next editing stage. That does generally involve some degree of predictive tool, but not one that is creating new text or content.
However, no matter what edits ProWritingAid suggests, I’m the one deciding what to change and what to keep. On average, I take maybe half the suggestions it makes (or look at one, agree that I need to change something, but do something other than what it recommends). I usually take all its recommendations for consistency, and almost all of them for sentence length (but both of those are pretty mechanical things).
After I run the chapters through ProWritingAid, it goes off to Kiya for editing. (I promise Kiya is a real human!) The last stage is when it goes to my early readers (also real humans!), who also check and make sure things make sense, tell me if there are details that need a little more exploration, and so on.
My covers are also all human designed (by the most excellent Augusta Scarlett), though there are a couple of pieces where it’s a little harder to say “this was all human input”.
I’m planning a post (or a couple of them) about cover design sometime soon, not least because I want to talk about the designs for the Land Mysteries series in particular. However, my 1920s covers are basically a combination of a vector that provides the colour background pattern, a vector that provides the stars or sparkles, a frame, some sort of inset image, and then of course the people on the cover. The Land Mysteries have a chart, a map, and figures.
All of those – except the people in everything after my first series – come from stock image sites. You can consider a lot of them to be functionally equivalent to paint colours or brushes or other tools, designed to be used by someone else to create more complex images. Augusta combines and edits them (changing the colours, smoothing some lines, etc.) to suit what we need. There are vast libraries of all of those types of images out there. With sites that use licenses, there’s a specific creator associated with each work who gets some income from it.
Cover design practices
Combining a series of stock images or pre-existing vectors has been really common in book publishing for a good while. It’s out of the price range of not only most indie publishers but many trad publishers to have fully original artwork painted for every cover. That’s why you see a lot of covers with similar images, heroes, etc. People are working from the same pool of stock art.
So the common practice has involved taking various existing pieces via stock image licenses, combining them in new ways, and maybe adding some creative details on top of that design work. And to be clear, figuring out how to choose and combine elements to make a cohesive cover is definitely a creative process all of its own! (As you can see from my covers, in fact.)
The figures on my covers at this point are done via 3D design software, to get the position and clothing we want. Sometimes what they’re wearing under the silhouette mask is completely hilarious, for the record, because all we care about is what the outline looks like, not the other details. Augusta then edits to smooth things out as needed. (Or as in the cover for Forged in Combat, “Can you make the bustle a little more elongated? Sorry, 1882 was really specific about bustles.”)
Again, the clothing and figures she’s using come from individual creators who’ve licensed or sold those models for use. (The figures on my first series, Mysterious Charm, all come from stock art images, but everything after that is a unique 3D image that Augusta did for me.)
About those stock images
Now, in the United States, AI generated work (either images or text) can’t be copyrighted, which presents a challenge for licensing and other legal agreements.
On the other hand, logically speaking, there’s a decent chance that at least a little of what’s up on those sites right now might have had AI generation involved. It’s really hard to tell from the end-user point of view, and that’s probably going to continue to be the case for a while. (Some places are identifying these already, some places aren’t. And it largely relies on people self-reporting the source.)
Why all the discussion?
I’m not going to get extensively into all the current discussions about AI art and writing and the concerns people have, but I do want to note a couple of them for people who want to do more reading and research. When it comes to writing, the basic concerns are four-fold.
One big concern, of course, is that AI tools will replace writers. While these tools aren’t great at fiction (here’s a post from author KJ Charles talking about what the actual AI fiction output from Story Engine looks like without editing), they can be a lot better at things like text summaries, simple marketing copy, etc.
Those are all jobs involving writers! A number of industries are also looking at whether they could use AI tools to significantly cut down the number of writers they need to pay, or turn what are currently writing jobs into something more like ‘editing AI content until it makes sense’ jobs (which is a very different kind of work, for one thing).
Trained without consent
For any AI or machine learning tool to work, it has to have material to work from. (This is what’s referred to as ‘training’ the tool, or a dataset.) You may already have seen a lot of commentary about the art AI tools, and how they will do their best to produce art in the style of an artist. How do they do that? They’ve been exposed to that artist’s work. When an artist is long dead and their work is widely available via museum sites, etc., that’s one thing. But when it’s artists who are currently creating and rely on people buying their art to make a living, that’s a very different sort of problem.
It’s become clear (especially in the last week or so as I publish this) that some of the writing AI sets have been trained on data that is visible on the public web, but where the creators did not agree to have it used for those purposes. The biggest example here is fanfiction, which produces some very identifiable quirks…
Training a tool on current writers’ and artists’ work without consent – especially in ways that mean you can then replace those people’s work – is a huge problem.
Overloading the publishing infrastructure
Another big concern is that people will think that they can generate text (using AI) and use that to get into publishing. Are these stories any good? Nope, basically never. Can they swamp and overwhelm both publishers (via submissions) and readers (via self-publishing outlets like ebooks)? Absolutely.
As a reader, I’m concerned about that. (I want to support real creative humans!) And as an author and librarian, I’m concerned about what it means for readers looking to find satisfying stories and books, and what it means when the current discovery tools get swamped.
Neil Clarke, who edits a major SF short story venue, has been writing about what the rise in AI written stories has meant on the editing and submission side. (The most recent post as I put this up talks about the current state of things, but that link will show you earlier posts where he talks about the massive rise in AI submissions that are absolutely not usable.)
Citing things that don’t exist
Finally, and with my librarian hat on, I’ve already seen comments from colleagues in other libraries about people asking for articles that don’t exist. Here’s the thing. Currently, AI tools aren’t actually much good at research. They can look like they are, though, because at least some of them will make things up that sound plausible. The examples I’ve seen most commonly involve an author who is writing in that field, a title of an article that seems plausible – but that doesn’t actually exist. Sometimes they’ll do things like mention two authors who look like they should work on the same topics, but actually don’t.
I work in a (very) niche research library, and so this hasn’t happened to me yet. (And we’re such a niche field that for most current researchers, either I know how to contact them directly or I know someone who does.) But I’m fully expecting that sometime in the not too distant future, I’m going to get asked for research that sounds plausible but absolutely doesn’t exist.
Questions or thoughts?
If you’d like more information, Jason Sanford does a regular newsletter about news related to the science fiction and fantasy community in particular. His recent Genre Grapevine on the Sudowrite Controversy and the Increasing Pushback Against AI (May 21st, 2023) does a fantastic job rounding up more information about many of the points I’ve discussed above with more examples.
If there’s anything else you’d like to know about how I’m handling this in my own writing, drop me a note via the contact form.