
Artificial intelligence may not ruin civilization, but …

… it will probably mean the end of the written word as we’ve known it for centuries.

I used to slip nonsense into my college essays to prove that my papers weren’t really being read. After receiving glowing but vacuous praise on a Psych 101 essay, with no engagement with any of my ideas, I started cribbing from Lewis Carroll’s “Jabberwocky”:

Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!

And so on. Perhaps on some Jungian level I felt compelled to slay the dragon of the slothful prof. Or perhaps I really meant Freudian? I don’t know. I never worked very hard on the psychology aspect after that first essay.

And now, when I face a sea of papers from my bright-eyed university sophomores, I confess to feeling a little anxious, wondering whether someone will try to do to me what I did to my own professor decades ago.

Certainly, I now have more compassion for that professor than my youthful hubris allowed. Reading an endless pile of essays hammered out at 2 a.m. the night before is a correspondence course in Sisyphean suffering. But mostly I’ve discovered, in 25 years of grading essays, that most of them are quotidian drivel. Students grind out the work, taking no pleasure in the effort, and it’s a rare (but delightful) student who seeks intellectual swordplay with the teacher.

Maybe for this reason, above all others, I’m dreading what artificial intelligence is about to do to my students. There’s no personality where there’s no person.


I once had a hard meeting with a young fraternity member after I noticed that his rough draft, attached to a cleaner final, had been printed on a daisy-wheel printer, a prelapsarian fossil of technology I had seen in my youth, when it was ancient even then. Suspicion piqued, I took a closer look at his citations. In them, he claimed to have interviewed a certain “Carol Schmidt” for the essay, at a time when the lad would have been 18 months old. Precocious little punk!

When we met, I asked him how Carol was doing. He had no idea who I was talking about. His confession was nearly immediate: He had drawn the essay from a filing cabinet his fraternity house maintained to help members avoid work.

The Greek youth had at least mustered enough pluck to retype the essay and change the dubious dates along the way. He would have had to read the old essay. And he had to keep a straight face when he handed it, physically, to me. AI requires less input and substantially less initiative.

A human being placed this section header here

If you’ve not tinkered with any of the AI engines, know that they can write about as well as any B- college student, provided you feed them a decent prompt. Some, such as ChatGPT, will warn you that they can only fabricate sources. But others, like AgentGPT, will happily embed research and cite it using both signal phrases and in-text citations.

And they are currently undetectable ... in the sense that universities won’t back a professor who knows an essay is AI-generated. A professor’s certainty is not proof in the litigious world of university education. Even in cases of plagiarism, universities have for years balked at enforcing their own bans and consequences, despite overwhelming proof.

Each engine has its own tells, identifiable quirks, and favored words, but the tells change as the engines are updated. In my decades of teaching, section headers that interrupt the body text to announce the topic of the next major section were exceedingly rare. Textbooks did that, sure, but not student writers. Then, this past summer, probably a third of the essays in my two online classes suddenly sported bold section headers. It turns out that’s how ChatGPT likes to structure its essays. You can tell it not to, of course, but that was its standard move.

When I mocked this habit in front of a composition class this fall, the headers abruptly disappeared.

What do you know, really?

It’s hard to blame the students. They are being told that AI is a near-perfect good. Sure, there are copyright questions about the content it’s allowed to digest, but we see only the feeblest gestures toward acknowledging the long-term intellectual damage that outsourcing writing to AI will cause.

In a STEM culture that normally prostrates itself before the word and thought of Francis Bacon for defining the scientific method, I have been met only with silence when I remind my administration that Bacon also declared, “Reading maketh a full man; conference a ready man; and writing an exact man.” Without reading, you’ve got nothing to digest, like ChatGPT with no content. Without discussion and debate, you cannot see the weaknesses in your own thinking. And without writing, your thinking is imprecise and embryonic.

So the wager the average student now faces is this: I can get an 85% score without trying. If I try, I score in the 70% range. That’s my best. The below-average student faces an even greater temptation: I could fail if I try. I will pass if I don’t. Top performers are the only category of student who might benefit from AI in the way it is being championed. Such a student could take the AI’s evidence and logic and improve upon it.

At least, that’s how most professionals are using AI. We use it for suggestions, for sketches of ideas, for summaries of possibly tangential influences, and then we follow those leads. Wielded well, AI functions like the cadre of researchers and writers who supported James Michener or the veritable factory of writers in Alexandre Dumas’ studio.

To properly wield AI, however, you need the capacity to recognize good writing and to tweak competent writing into something better.

You also need to recognize baloney.

For example, OpenAI’s DALL-E image generator refused to produce an image for me in the style of Vermeer. When I asked why, it explained that “creating an image in the style of a specific artist whose latest work was created after 1912, such as Johannes Vermeer, is not permitted according to the content policy.” It admitted it was wrong when I pointed out that Vermeer had died in 1675 but nevertheless continued to refuse to mimic that particular style.

Behind the veneer of false promises

Even if talented students might use AI to improve themselves — doing more, reaching farther — the fact remains that many talented students are groomed to see the arts as mere obstacles to their STEM careers.

A recruiter at a top engineering school my daughter recently visited bemoaned that the stereotype of the engineer who can’t communicate was sometimes true and that many talented students needed to improve their soft skills. Barely a minute later, and without a trace of irony, he encouraged my daughter to “get rid of” those pesky required humanities classes before graduating from high school so she could get straight to the engineering content as a university student.

Artificial intelligence is unlikely to make us smarter even as it helps us generate unimaginable terabytes of new text. It will prove a valuable tool for the intellectual elite, who will work faster to produce texts that will, increasingly, go unread by the nouveau éduqué who come to their credentials in the era of AI.

But don’t take my word for it. I asked AgentGPT to write a research-backed essay on AI’s negative impact on student writing, and it obliged with a set of “research-supported” concerns.

Other than a few trims, I made no changes to the verbiage of this selection from that piece:

Research by Dr. Joel Garza, an associate professor at Stanford University, shows that students who frequently utilize AI for writing tend to replicate pre-existing content. This imitation hampers the cultivation of their individual writing styles and limits the development of their voice (Garza, 2018).

AgentGPT opted to use the American Psychological Association method for in-text citation, which is a little odd given that the field of writing would prefer the Modern Language Association style, and the research behind AI would likely have used the Institute of Electrical and Electronics Engineers’ style or something equally technical.

AI's ability to generate content may inadvertently encourage plagiarism or discourage students from putting in the effort required to express their thoughts uniquely.

AgentGPT chose to use a numbered list, and I selected only its latter points here. I might have advised a student to rely on more embedded textual cues to reveal the structure, but as you will see, the writing is clear and uses evidence well.

3. Limited Feedback and Personalized Guidance: Contrary to popular belief, AI's automated feedback systems may not provide students with the personalized guidance needed for improving their writing skills. A study conducted by the Massachusetts Institute of Technology (MIT) found that AI-powered tools provide generic feedback, often missing the nuances of individual writing styles and unique strengths and weaknesses (MIT, 2020). Such generic guidance fails to address specific areas for improvement, inhibiting students' growth in writing proficiency.

4. Over-Reliance on AI Algorithms: The increasing availability of AI algorithms in educational settings risks fostering an over-reliance on automated systems, leading to a passive approach to writing. A report by the National Council of Teachers of English (NCTE) argues that excessive dependence on AI threatens to detach students from the writing process by turning writing into a mechanical task rather than a creative endeavor (NCTE, 2017).

Bonus points available if you can identify some troubling things about that passage from AgentGPT.

Here’s a hint: No one named Joel Garza appears to work at Stanford.

Second hint: A study conducted at MIT should nevertheless have an author whose name would be cited. Ditto NCTE. But kudos for their prescience, divining that excessive dependence on AI in the writing process could be detrimental all the way back in 2017, five years before ChatGPT made its public debut.

If we needed a metaphor for the false promise of AI, I’d be hard-pressed to invent a better one. It’s clean. It’s persuasive. It seems plausible. Reasonable, even.

But it’s safe, not near the fringe where you might doubt anything it says. It’s middling, in fact. Not bland, but well inside the lines. Taupe. Anne Murray. Delaware. Perfectly fine.

And it’s lying through its horrid, artificialis dentes. Beware! The jaws that bite, the claws that catch!

Justin Blessinger

Justin Blessinger is a professor of English at Dakota State University in Madison, South Dakota, the region’s flagship school for cyber operations, network security, AI, and all things tech.