Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please do not use AI to write for you, published by Richard Kennaway on August 22, 2024 on LessWrong.
I've recently seen several articles here that were clearly generated or heavily assisted by AI. They are all dreadful. They are verbose, they are full of "on the one hand" and "on the other", they never make any assertion without also making room for the opposite, and end with "conclusions" that say nothing. Please do not do this.
I wrote the above, unassisted, and it states my own views. To illustrate the point I then had GPT-4o rewrite it. Here was the prompt:
I have recently seen a substantial increase in the proportion of postings to a certain online forum that were clearly generated by AI, although not marked as such. The writing is verbose and meandering, full of "on the one hand" and "on the other", never making an assertion without also suggesting its opposite, sounding oh-so-measured and even-handed that it ends up saying nothing at all.
Please write an article for me, written in the first person, arguing that AI tools in the current state of the art should never be used for this purpose.
Below the line is the chatbot's extruded meat product. This is the whole of its first output, unedited, and I did not press it to continue.
The first paragraph starts out ok, being merely verbose and repetitive but setting out the basic claim. And then the second paragraph backpedals it past where it began. The third paragraph floats off into the cloud layer, and so on for a total of nine paragraphs and 875 words that would be a waste of anyone's time to read, unless they want to see if their judgement agrees with mine.
It did not notice that I was using it in exactly the way I had asked it to argue against, and that the article displays all the faults that it purports to diagnose.
The splash image is a photograph from Wikipedia, CC BY-SA 3.0. I'll spare you the image that GPT-4o came up with, but y'know, I do find the AI illustrations that everyone heads their blog posts with these days pretty annoying. (ETA: Well, there was supposed to be an image, I filled out the "link preview" part of the entry form, but no image. But the link above will show you what it was.)
Why AI-Generated Content Shouldn't Flood Online Forums
Lately, I've noticed something troubling in the online forums I frequent: a distinct uptick in posts that seem like they've been generated by AI, though they're not labeled as such. The language is eerily similar across many of these posts - verbose, wandering, and conspicuously non-committal. Every argument is hedged with a counterpoint, every statement cushioned by a qualifier, making the writing sound balanced on the surface but ultimately empty.
The posts stretch on endlessly, piling one vague point onto another, yet they fail to really say anything. It's frustrating, to say the least, and I believe this trend reveals a serious problem with the way AI is being misused in online spaces.
I want to be clear: I'm not arguing that AI lacks merit as a tool. It has immense potential in fields like data analysis, automation, and even creativity when it complements human intelligence. However, when it comes to contributing to online discourse - spaces where real people go to express ideas, ask questions, and engage in genuine conversation - I believe AI-generated content, at least in its current form, does more harm than good.
First, let's talk about the nature of conversation itself. Forums, at their best, are places where people share perspectives, debate ideas, and solve problems together. What makes these interactions meaningful is that they are infused with human experience, emotion, and authenticity. Even when disagreements arise, they are rooted in real personal stakes - whether it's a passion for a hobby, a struggle to understand a concept, or a desire to ...