Hi!
I’ve been thinking a lot about AI[1], like most other people, it would seem. It’s troublesome and exciting at the same time, and full of tricky situations.
Take image generators like Dall-E, Midjourney, and Stable Diffusion. You give them a text prompt, and they generate an image, with varying degrees of success. Fun tools, but there are many issues here. For starters, the models these services use have been trained on actual artists’ work. That means you can ask these services to generate an image in the style of Vincent van Gogh or Boris Vallejo. Neither artist (nor estate, in the case of van Gogh) will get paid for this, even if the generated artwork is used in a commercial product. This is troublesome because most artists don’t want their work used for AI training[2]. Shocking, that…
Having ChatGPT write things for you is another AI feature that people talk about. It’s even a part of Microsoft’s Edge browser now. This, I think, is worrisome because of the dataset. Let’s say you ask ChatGPT, or any other text-based chat robot built on an AI model that scans the internet, something controversial. How can you be certain that the AI hasn’t fallen for conspiracy theories, rubbish data from content farms, or simply taken a misunderstanding for the truth? When an answer is presented to you as the truth, without the data that led to it, it’s hard to make your own educated guess as to its validity. For any web search where you need to filter the results yourself, actually seeing the sources seems important.
Then there’s the whole rights thing here, too. Just like you can ask Dall-E to generate an image in a certain style, based on its interpretation of the original works it has parsed, you can have ChatGPT write in an author’s style. This is already happening[3], and the better ChatGPT and its ilk get, the more AI-generated books, articles, essays, blog posts, and more will be put out there.
AI models are built by borrowing from original creators. They learn by analyzing, much like a child learns to draw by mimicking. The difference is, the AI is a cheap tool that companies can rent and use to get away with not paying original creators for their work. It’s a growing issue, and not a fair one, since most creators feel that nobody asked them if they wanted to train an AI to take their jobs.
The obvious solution would be for the artists whose work is used in a model to get paid whenever an image is generated. Problem is, it’s unlikely there’s any way to track this, and where do you draw the line? A generated image or a body of text could be the result of thousands of analyzed content creators’ work. It’s not as simple as you might hope, unfortunately.
What it is, however, is utterly unfair to creators. Some are already seeing work dry up. Artwork in particular is expensive, and in many cases you don’t need it to be original, just relevant. Asking an AI to draw something for your article is a simple solution for many publications, and a lot more cost-effective. Problem is, that’s money that used to be paid to an artist. I feel for them, perhaps a bit more than I feel for the people who are paid $3 to crank out empty blog posts for content farms. They, too, will be superseded by AI. Generating 800 words on a topic is cheaper, and the end result might even be comparable. Even if it isn’t, an editor could touch up a lot of such pieces in a day, making it even easier to fill the internet with crap content.
Maybe that’s where it ends, when the AI used to generate both art and copy starts analyzing its own creations, serving them up as answers to people asking a search engine a question. It’s not a particularly happy thought, is it?
It’s been a weird week, with less work than usual, yet I’ve somehow still been swamped. I’m looking forward to the weekend, and taking a break. I hope you’ll be able to, too.
— Thord D. Hedengren ⚡
Did you enjoy this issue of The Bored Horse? Feel free to forward it to a friend, or point them to the subscription page. Thank you! 🙏
Footnotes
- That’s PR speak for something that’s not even remotely close to the true definition of an artificial intelligence, but I’ll allow it for clarity. If you’re interested, read the book Our Final Invention by James Barrat.
- There have been quite a few blunders from artist communities lately, assuming that artists want AI in the mix. Take the backlash that hit Artstation, for example. They really miscalculated what their users wanted.
- There’s an interesting story over at The Verge about self-publishing authors, who need to publish a large body of work regularly to stay relevant, and how they use AI services such as Sudowrite and Jasper.ai (presented in a very funky layout, too).