
A good way to look at generative AI-produced text or art is that it's a low-quality imitation of the real thing, made by machines to fool people. (Like the header here.) (Ditto for AI-generated research, which just consumes low-quality internet content and feeds back what people want it to tell them.) In other words, AI content exists to fool people, and it works only to the extent that people are willing to be fooled by low-quality content.
So what does that say about the problem of AI cheating in school?
Everyone Is Cheating Their Way Through College: ChatGPT has unraveled the entire academic project. – New York Magazine
In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.
Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.
Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (Sarah’s name, like those of other current students in this article, has been changed for privacy.) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website.
There’s plenty more like that, but you get the idea. Some of it is probably exaggerated, but much of it is also true. Students routinely rely on AI; they think less and simply chain a series of AI tools to move through the process. This makes grades increasingly worthless, but let’s face it: much of what they were tasked with doing was worthless.
Assigning AI to take on “Indigenous studies, law, English, and a ‘hippie farming class’” is only so much of a loss. Academia, even before it went fully woke, was based heavily on an automation of knowledge rather than on genuine learning or engagement with the material. Wokeness just finished the job by turning every class into an exercise in Marxism-Leninism where the goal was to repeat dogma in exactly the right terms while adding a dose of ‘personal insight’ into the mix. AI can automate the process better than any human being can.
AI-generated content fools people who expect low-quality repetition of what they want. That’s what academia has become. None of this prepares the students doing this for careers doing anything meaningful or having the skills to solve any problems that AI can’t solve for them, but how many college-level jobs expect that anyway?
Academia dumbed itself down so much that AI can easily replace it.
Just for the fun of it, I sent the letter below to the Columbia University Alumni Magazine, but they did not print it:
LETTER TO THE REGISTRAR OF STUDENTS, COLUMBIA COLLEGE, COLUMBIA UNIVERSITY
Gentleman,
I graduated from Columbia some years ago. Recently I wrote to the Registrar asking for my college transcript. A transcript was sent to me but apparently a mix-up occurred – probably due to a similarity of name. My name is Barry Spinello. The transcript I received was for a Barry Soetero. A different student, from a different year. You will notice the similarity in names. Again this is probably the source of the mix-up. Another possible source of mix-up is that we were both lumped together into the fifth quintile. Wow! Soetero’s grades were even worse than mine.
Registrar – What should I do with the transcript? Do you have an address for Soetero so that I may get this to him directly? Or should I fax it over to our local newspaper and have them try to track him down?
Yours truly,
Barry Spinello, CC62
Your letter may have been disregarded the moment the word “Gentleman” was read. The word would have been labeled taboo, and no such recipient would have been located even if that first offense had been tolerated.
Yes. “education” has completely failed.
We’re at the point where I’m continuously surprised that we’ve reached the chapter in the sci-fi dystopia we live in where Idiocracy (with atheists aborting their offspring substituting for the “breeders” in the film) is reality. Pardon the now-cliché reference, but it’s true.
Daniel,
Not to brag or anything, but I was here first.
Thirty years ago, I started positing that the Turing Test (more or less a game of Twenty Questions for robots) would become easier for computers to pass as humans dumb themselves down with constantly lowering standards.
As Alan Turing had it (circa WWII), computers would grow increasingly sophisticated. One day, as I was trying to navigate a useless phone menu, I got to thinking about just who lived lives such that the phone menu could actually help them. The phone menu was way below a level that could pass a Turing Test, but the fact that such menus were used implied that the businesses employing them must get at least some positive results from their use. Hence the idea of generations of button-pushers being raised like stackable “FlavR SavR” tomatoes, which have little flavor but ARE stackable and easy to ship and stock.
AI can now pass muster with those who were derided two generations ago as “couch potatoes” but who are more like FlavR SavR tomatoes: raised to be dumbed-down button-pushers.
As computers grow increasingly sophisticated, the populace grows decreasingly sophisticated.
Those who chant “from the river to the sea…” and such
Those who dye their hair purple and bellow “Don’t look at me!”
Those who write essays to college administrators
College admins who bought term papers back in the day
And now, robots that chant “TWO❗ FOUR ❗SIX ❗EIGHT❗ WHO WE GONNA IMITATE ⁉️ …”
… Passing the Turing Test.
… . 👾 • 🤖 • 🥸 • 🤓 °
One thing that struck me is that the grading load on teachers, the sheer volume of papers, is insane.
The increasing sophistication of LLMs will pose a challenge for faculty across a wide range of subjects. How about comp sci courses where coding and algo assignments are autograded? How can you really tell if an LLM engine didn’t generate the solutions? Similarly for logic (philosophy), mathematics, physics, chemistry, as well as many areas in the liberal arts. I am old enough to remember when use of calculators on exams was becoming controversial. I think we are fast approaching a singularity in the AI-human dynamic in education.
But how can AI write about its “lived experience,” a staple of student writing in recent years?