You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT. The answers are invariably verbose, interminably waffly, and insipidly fixated on the bullet-points-with-bold style. The prose rarely surpasses the sixth-grade book report, constantly repeating the prompt, presumably to prove that it stays on topic.
Don’t let a computer write for you! I say this not for reasons of intellectual honesty, or for the spirit of fairness. I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.
It doesn’t matter. I think this belief is most common in classroom settings. A typical attitude among students is that classes are a series of hurdles to be overcome; at the end of this obstacle course, they shall receive a degree as a testament to their completion of these assignments. I think this is also the source of increasing language model use in paper reviews. Many researchers consider reviewing ancillary to their already-burdensome jobs; some feel they cannot spare time to write a good review and so pass the work along to a language model.
The model produces better work. Some of my peers believe that large language models produce strictly better writing than they could manage on their own. Anecdotally, this phenomenon seems more common among English-as-a-second-language speakers. I also see it a lot with first-time programmers, for whom programming is a set of mysterious incantations to be memorized and recited. I think this is also the cause of language model use in some forms of academic writing: it differs from the prior case with paper reviews in that, presumably, the authors believe that their paper matters but don’t believe they can produce writing of sufficient quality.
There’s skin in the game. This last cause is least common among individuals, but probably accounts for the overwhelming majority of language pollution on the Internet. Examples of skin-in-the-game writing include astroturfing, customer service chatbots, and the rambling prologues found in online baking recipes. This writing is never meant to be read by a human and carries no authorial intent at all. For this essay, I’m primarily interested in the motivations of private individuals, so I’ll avoid discussing this much; however, I have included it for the sake of completeness.
I believe that the main reason a human should write is to communicate original thoughts. To be clear, I don’t believe that these thoughts need to be special or academic. Your vacation, your dog, and your favorite color are all fair game. However, these thoughts should be yours: there’s no point in wasting ink to communicate someone else’s thoughts.
In that sense, using a language model to write is worse than plagiarism. When you copy another person’s words, you communicate no original thoughts of your own, but at least you are communicating a human’s thoughts. A language model, by construction, has no original thoughts; publishing its output is a pointless exercise.
I should hope that the purpose of a class writing exercise is not to create an artifact of text but to force the student to think; a language model produces the former, not the latter. For paper reviewers, it’s worse: a half-assed review will produce little more than make-work for the original authors and tell the editor nothing they didn’t already know.
I’ll now cover the opposite case: my peers who see generative models as superior to their own output. I see this most often in professional communication, typically to produce fluff or fix the tone of their original prompts. Every single time, the model obscures the original meaning and adds layers of superfluous nonsense to even the simplest of ideas. If you’re lucky, it at least won’t be wrong, but more often the model will fabricate critical details of the original writing and produce something utterly incomprehensible. No matter how bad a human’s original writing is, I can (hopefully?) trust that they have some kind of internal understanding to share; with a language model, there is no such luck.
I have a little more sympathy for programmers, but the long-term results are more insidious. You might recall Peter Naur’s Programming as Theory Building: writing a sufficiently complex program requires not only the artifact of code (that is, the program source) but also a theory of the program, in which an individual fully understands the logical structure behind the code. Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless. In Naur’s terms, such a program is dead; in our case, it’s stillborn. It should come as no surprise that nearly every vibe-coded app on the Internet struggles with security issues; look no further than the vibe-coded recipe app that leaks its OpenAI keys. Every code-generating prompt yields another stillborn program; vibe coding is the art of stitching the corpses together into Frankenstein’s monster.
I now circle back to my main point: I have never seen any form of creative generative-model output (be it image, text, audio, or video) that I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience: if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.