The Smile That AI Can't Fake


Take a close look at Wenceel Pérez, outfielder for the Detroit Tigers. Study that beautiful smile while a smile like that is still possible.

The reason that smile is still possible, at least for now, is that Pérez has something to smile about. Twice in one inning he made a great throw to second base, and twice second baseman Javier Báez tagged out an Oakland Athletics baserunner trying to stretch a base hit into a double: first Tyler Soderstrom, then Jacob Wilson. This was on the 24th of June, year of our Lord 2025.

But when we agree, as with our thoughtless capitulation to AI we apparently have, to replace ourselves with any and every labor-saving device available—when, that is, we consent to our own abolition—we evict from our very lives all meaningful accomplishment. And when we do that we evict with it all the joy that comes with meaningful accomplishment, at which point we will be hard-pressed to name one thing worth smiling about. We will be hard-pressed to name one human act worthy of a real human smile.

You might say that, although we are likely to replace ourselves at work, we would never replace ourselves at play. I say, Don’t count on it. Did you foresee the bot boyfriends that some women are courting?

I used to say when I was younger that I would not become that old man shaking his newspaper at the kids on my front lawn with their rock ’n’ roll music. But now I would wholeheartedly be that old man. I would be that old man nonpareil—and not because I’m old. (I’m 61.) The reason is that the situation couldn’t be more serious or more dire. And it has little to do with rock ’n’ roll or lawns. We could be playing with something we can neither understand nor keep in check. I believe we are doing just that.

I taught my first college English class in the fall of 1987 and, like most (or probably all) professors, I have had run-ins with students guilty of plagiarism and other forms of academic dishonesty, sometimes known as “cheating,” which heretofore has been frowned upon. In the 1980s and ’90s plagiarism was not only more difficult for students to perpetrate (for the most part they had to find print matter and retype it); it was also more difficult for teachers to catch (to catch, that is, as opposed to detect, which was very easy). Later, when the internet arrived to distract us from worthy things, plagiarism became easier to perpetrate but, thank God, also easier to catch.

With AI, however, we have entered new terrain. Fabricated work does not exist until it is called into being, and when it appears it leaves no trace of whence it came. So in the case of AI you can’t, as I once did nearly 30 years ago, photocopy the source an essay was stolen from, staple it to the student’s paper, hand both versions back in one tidy package, and congratulate the student on flunking the assignment.

(I hasten to add that, in the age of AI, detection is still as easy as it ever was: think of driving down a bumpy dirt road and then suddenly hitting smooth pavement. That’s what it’s like to be reading a student’s writing and then all of a sudden to be reading his stolen prose. And AI-generated prose is even more distinctive. It produces dull empty sentences and clearly prefers certain sentence structures, not to mention ill-chosen words. I suppose this may change, but for now detection is still easy.)

Also with AI we have entered a moment when cheating really isn’t cheating anymore, because AI is “just a tool,” like a calculator. A student at Columbia University, who was disciplined for writing code to help people cheat their way through remote interviews, told New York Magazine, “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating.”

In the same piece the magazine also reports:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

Or, to put it another way, nearly 90% of students have nothing to be proud of, nothing to smile about. Nothing they have done in college is worthy of a real human smile. Some of them will soon be cheating their way through med school and then performing surgery on someone you know—or are.

I, like many people, am still trying to think my way through the “AI Revolution” and do so without really understanding AI. I mean the “technology” of it. (This is the situation that 99% of people who use a radio are in. No one knows how anything works.) But a few things seem obvious to me.

One is that the unemployment problem is simply going to get worse. If you replace people indefinitely and then stand bewildered and slack-jawed before the long lines at the welfare office, utterly baffled by the problem of the unemployed, you’re an idiot. Basic cause and effect has eluded you, and you’re probably in danger of becoming a politician. Labor-saving devices don’t only save labor; they evict laborers from their labor. The Luddites knew this and rightly took umbrage. John Crowe Ransom stated it plainly in I’ll Take My Stand way back in 1930. Because of this, farmers in the 1970s were committing suicide at rates hitherto unseen. And all the while the technocrats said, “You can’t stand in the way of progress.”

Another thing that occurs to me is that the “AI Revolution” is going to require quantities of energy we neither have nor will be able safely to produce. That is to say, the demand for the requisite energy is going to ramify in ways that will prove to be ecologically disastrous, as are all demands for energy except those that observe Nature’s law of return. Without a doubt we will answer the demand with more nuclear power, and the unlearned lesson from the past—that with fewer than 500 reactors worldwide we can count on a nuclear disaster about every 22 years—will once again be a part of the lecture that Reality stands up to deliver.[1] Reality tends to be a stern preceptor, and when it speaks it will probably indict us for yet another form of idiocy.

A third thing that seems obvious to me, which I suppose concerns me and only me, is that if colleges and universities can’t figure out how to check this clear and present evil, or if students can’t be convinced that there are real reasons to turn resolutely away from it (and these are students who couldn’t be convinced to say No! to the “smart” phone), then I’m out. I’m not old, but I’m old enough to know I want no part of the charade that education is turning into. I don’t want to retire yet, but this is a fight I don’t need. It’s a fight that will send me packing. The young professors can fight it, but bear in mind that they are all cell-phone addicts.

Understand I am not even getting into the data, as if data were needed, that AI is affecting cognition, especially in young people. Understand that I am not even getting into the problem of AI and teen suicide.

But let me add two more things, both of which may be more serious than the foregoing but are clearly implied by it. It may be that AI is, as I mentioned, a clear and present “evil.” If so, it will operate as evil has always operated: as a counterfeit good. The reason evil works is not that it shows up as itself. The reason it works is that it shows up as its opposite. “The prince of darkness,” we learn from King Lear, “is a gentleman.” Counterfeit money works not because it looks like Monopoly money but because it doesn’t. The magnificent cunning of evil is that it somehow manages to go undetected.

And on the matter of evil “showing up” let me add this anecdote: I keep an electronic file, a PDF, of an essay I assign to my sophomores. Their task is to read and write a summary of it. I recently opened it on my office laptop so that I could print it off. For the first time in my experience of opening this file, the program offered to “help” me. It asked if I wanted a list of the keywords in the essay and if I wanted the essay summarized.

I manifestly want neither, and I want neither of these technological “benefits” to be on offer to my sophomores. I want them to read difficult things, struggle to understand, struggle to summarize, and struggle to write well. I know from having been in their shoes that without the struggle there is no growth. But that the offer shows up so readily, so unsolicited, suggests to me something plenty spooky: This is exactly how evil operates. It insinuates itself everywhere, under the guise, of course, of helping me. The prince of darkness is a gentleman. Like Satan in Paradise Lost he operates not by force but by “guile,” “fraud,” and “close design.”

And then, finally, there is the matter of our endangered saving grace, or one of our endangered saving graces, which is our enduring childlike capacity for wonder. Even in our dotage we can be bowled over by it. Think of a sugar maple in October, the sun setting it ablaze against a blue autumnal sky.  I doubt that a new planet swimming into my ken could induce more wonder in me. This is only to say that something of the child survives in us so long as we are able to be overpowered by the beauty and mystery of things, those that we have not made and those that we have, like a terrific piece of music or a great Petrarchan sonnet or an amazing throw to second.

I do not include in this list robot umpires at baseball games: I want humans to adjudicate human acts. Do you want a childless AI judge presiding over a courtroom when you have been wrongfully accused of mistreating your child? If you don’t, don’t use AI. Join the resistance.

The ancients understood something we do not: that philosophy begins in wonder. And wonder is also what a child’s experience of the world begins in. Handing work over to machines, and I mean work of both the manual and cerebral kinds, will put an end not only to the smile on Wenceel Pérez’s face but, in putting an end to wonder, to something even more sacred: childhood in both its original and enduring forms. We shouldn’t need to learn the hard way that evicting wonder from the world is a bad idea. But this is precisely what we will do if we continue trying to escape our condition, which is to work. Humans attempting to escape the human condition are going to cause a lot of trouble on their way out.

What’s the upshot going to be? In casting about for a way to describe the crisis confronting us I have sometimes settled on this: We are so smart and enterprising that we have found yet another way to kill our children—because vacuuming them out of the womb is apparently too messy and inefficient.


[1] Wes Jackson of the Land Institute in Salina, Kansas, has addressed the “one-in-ten-thousand” problem, the general assumption that a reactor accident of the Chernobyl sort will occur once every ten thousand years. “That might seem safe enough,” he writes. “But with a thousand reactors, we should expect an accident every ten years on average. We currently have around 450 reactors worldwide, which means an accident every twenty-two years on average. Are we not already on schedule?”
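For what it is worth, the arithmetic behind Jackson’s figures follows from a simple assumption: one serious accident per 10,000 reactor-years, spread across however many reactors are running at once.

\[
\text{expected interval} \;=\; \frac{10{,}000 \text{ reactor-years}}{N \text{ reactors}},
\qquad
\frac{10{,}000}{1{,}000} = 10 \text{ years},
\qquad
\frac{10{,}000}{450} \approx 22 \text{ years}.
\]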

Jason Peters joined Hillsdale’s faculty in the fall of 2021 after spending 25 years at Augustana College, where he was Dorothy J. Parkander Professor in Literature.

 