As I gear up for a profile of one of the stranger New Religions — Transhumanism, now ubiquitous in cultural criticism, in discussions of Artificial Intelligence — I offer a short story about what AI can’t do.
It can’t hack literary criticism. (Okay, fine, it can, but with mixed results.)
If you’re a philologist, your vocation is safe, for now. At least from the clunky version of Gemini pushed by the Google search engine.
The first rule of Slush Pile is you don’t talk about Slush Pile.
You don’t say it’s a writer’s workshop, organized by a famous local author, that convenes the second Wednesday of every month. You don’t say it meets in the garage of a leafy yard several blocks from your own, in a house curiously described as a “small theater,” the Salon Rouge. You don’t say you got the time or location wrong, you guess, because when you arrive the house is vacant, with an aluminum No Solicitors sign nailed to the siding, a cat tree and baby rocker the sole occupants peering through darkened windows.
You don’t mention that the situation suggests an appropriate response, for which there is a model in literature and film: grab a bedroll (plus two black shirts, two pair of black pants, two pair of black socks, one pair of black boots, one black jacket, and $300 personal burial money), and wait.
Wait on the porch, with the sleeping bag under your arm, moon or sun, rain or shine, for Slush Pile to accept you. Wait for a man with facial scars and peroxide hair to finally emerge, just to tell you to go home: “You’re too old, fat man. Your tits are too big.”
Then wait some more.
Slush Pile is hosted by Chuck Palahniuk. The club’s purpose, one suspects, is symbolically parallel to the one in his 1996 novel and its 1999 film adaptation (sans all the pilfered liposuction fat, soap-bombs, and domestic terrorism): get out there and mix it up with some strangers. People who share this form of catharsis.
“Write Club,” if you will, instead of Fight Club.
An acid test, to see if your punchlines land, or bomb, in a circle of contenders.
Occasionally, the leader assigns homework: Watch Animal House, come back and tell us what the gloved hand-job scene in that movie has in common with the gloved sex scene in Fight Club. “Toxic masculinity,” and such.
The point of this anecdote is that to attend Palahniuk’s monthly meeting out of journalistic curiosity, I was obliged to cobble together some short fiction. My made-up ticket to Slush Pile (which has yet to gain me entry; it seems Write Club was canceled last month). I wanted to see what a writer’s workshop organized by an edgy novelist with a reputation for boldly stating things too awful to name looks like, in 2020s Portland.
My stab at fiction (drafting a Pygmalion tale; a small vignette narrated by AI, the proverbial statue come to life) led me to notice how brittle Google’s Gemini AI overview is, sometimes, when it comes to what it was designed to be: a “Large Language Model.” In my experience with the LLM, at least, Gemini’s representation of language was rather small-minded. More on that, at the end.
While I’m on the subject, I did rewatch Fight Club. With an eye for “toxic masculinity,” or Involuntary Celibacy, or whatever Mr. Palahniuk was driving at with regard to 1978’s Animal House, or masculinity in 2025, for that matter. I must admit Fight Club was prescient, way back in 1996-99.

Support groups frequented by phonies (“tourists”), people who only derive meaning by role-playing the authentic suffering of others. People who derive meaning from violence. People who derive meaning from the vandalism of corporate symbols, sabotage of insurance and credit card companies, and coffee chains. People whose raison d’être is “Project Mayhem” — black bloc nihilism, brought to you by “the middle children of history.” A roomful of men who’ve had their testicles removed, including the “fat man,” Bob, with scandalously large breasts (who dies at the hands of police, after vandalizing said coffee chain). Alienation from the corporeal, sex-inhabiting self. Self-harm, and self-help. A neurodivergent narrator who projects his desires onto a hypermasculine alter ego, and perhaps a feminine one as well. And indeed, toxic masculinity.
What’s more toxic than dumpster diving for biomedical waste?
Fight Club was not the feel-good movie of 1999 (or was it? — sometimes when I look back at the films I enjoyed as an eighteen-year-old, I wonder if I was alright).
Yet somehow a film that climaxes with blowing up trade buildings and burning-it-all-down just to get a girlfriend (to the tune of the Pixies’ “Where Is My Mind?”) feels appropriate to the dawn of the new millennium.
Today, one would think, the villain in Project Mayhem’s revolution, the source of the narrator’s ennui and sleep-deprived alienation, would be an evil tech company. Maybe one with a motto like “Don’t be evil,” that enables our entrapment not as consumerist pigs but as mindless computers, “calm as Hindu cows.”
A year after Fight Club hit theaters, another writer predicted what the new millennium had in store. Jaron Lanier, a computer scientist (technologist, futurist, visual artist, and musician — a polymath who’s best described as a philosopher, one of the few people from this milieu whose thinking makes sense to me), published an online article in Edge magazine, “One Half a Manifesto.”
In 2000, Lanier was already refuting a Silicon Valley religion that’s reached maturation in the last twenty-five years. I’ve referred to it elsewhere as Rationalism or Transhumanism (and I’ll give you a representative profile of one or two transhumanists in the weeks to come, including a real-life Pygmalion).
In 2000, Lanier gave this belief system a rather clunky name (because it didn’t have one yet): “cybernetic totalism.”
The basic assumption of this belief system is that a human being, and indeed evolutionary biology itself, functions like a computer. That humans will invent computer systems and technologies that self-improve until they can create themselves, out-powering us intellectually and biologically, enabling us to live forever — even if our genetically-enhanced bodies perish, our consciousnesses will exist indefinitely on a server somewhere, like my deceased aunt’s Instagram profile. There will be a Revelation — when we reach the Singularity, as Artificial Intelligence and nanotechnology irrevocably merge with human intelligence and biology, if not supplant them entirely.
This, they consider Utopia. As do I — “no place,” for humanity.
In other words (in the recent words of Google founder Larry Page), we are building a “digital god.” In the words of Mistral CEO Arthur Mensch, “The whole AGI [Artificial General Intelligence] rhetoric is about creating God.” According to Silicon Valley’s most prominent transhumanist-Christian, Palantir CEO Peter Thiel, we should “remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth.” (You can find more examples of this sort of rhetoric here.)
Refuting this level of mythical hubris requires, well, mythology: Babel, Narcissus, Daedalus, Golem, Hebe and Ponce de Leon.
Pygmalion.
Myth, and Jaron Lanier’s “Half a Manifesto” from the year 2000. It’s anything but optimistic, but it does skewer Silicon Valley’s true believers, the proselytizers of the Age of Spiritual Machines:
For the last twenty years, I have found myself on the inside of a revolution, but on the outside of its resplendent dogma. Now that the revolution has not only hit the mainstream, but bludgeoned it into submission by taking over the economy, it's probably time for me to cry out my dissent more loudly than I have before.
Even when the field was in its infancy, Lanier could see that “Artificial Intelligence is better understood as a belief system instead of a technology.”
The dogma I object to is composed of a set of interlocking beliefs and doesn't have a generally accepted overarching name as yet, though I sometimes call it "cybernetic totalism". It has the potential to transform human experience more powerfully than any prior ideology, religion, or political system ever has […] because it gets a free ride on the overwhelmingly powerful technologies that happen to be created by people who are, to a large degree, true believers […]
Remember, this is from the year 2000, when the potential downside of these “overwhelmingly powerful technologies” consisted of the dot com bubble, Y2K hysteria, and having to wait ten minutes to download nude photographs of Pamela Anderson.
That was a quarter century before Peter Thiel announced his intention to build heaven on earth, and four years before he became an angel (investor) at Facebook. Lanier continues:
There is a real chance that evolutionary psychology, artificial intelligence, Moore's Law fetishizing, and the rest of the package, will catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives. If that happens, the ideology of cybernetic totalist intellectuals will be amplified from novelty into a force that could cause suffering for millions of people.
The greatest crime of Marxism wasn't simply that much of what it claimed was false, but that it claimed to be the sole and utterly complete path to understanding life and reality. Cybernetic eschatology shares with some of history's worst ideologies a doctrine of historical predestination. There is nothing more gray, stultifying, or dreary than a life lived inside the confines of a theory. Let us hope that the cybernetic totalists learn humility before their day in the sun arrives.
The humility that allows the founder of a company whose motto was once “Don’t be evil” to conclude that Google is building a “digital god.”
Part of transhumanism’s hubris stems from what Lanier calls “Campus Imperialism,” the certainty that an academic’s chosen field is superior to all others.
For most of the twentieth century, the physicists reigned supreme — what with hacking the laws of the universe, I am become death, destroyer of worlds, and all — “though in recent decades ‘postmodern’ humanities thinkers managed to stage something of a comeback, at least in their own minds.”
And in the minds of millions of unwitting postmodernists, who would be fed watered-down versions of these theories over social media, twenty years after Lanier wrote this — more of a comeback than he anticipated.
But technologists are the inevitable winners of this game, as they change the very components of our lives out from under us. It is tempting to many of them, apparently, to leverage this power to suggest that they also possess an ultimate understanding of reality, which is something quite apart from having tremendous influence on it.
Then, in an offhand bit of psychoanalysis, Lanier voices another root cause of this belief system, something I haven’t heard anyone else weigh in on. Something I’ve pondered myself, about the inventor of the ultimate test for Artificial Intelligence, the Turing Test:
Another avenue of explanation might be neo-Freudian, considering that the primary inventor of the idea of machine sentience, Alan Turing, was such a tortured soul. Turing died in an apparent suicide brought on by his having developed breasts as a result of enduring a hormonal regimen intended to reverse his homosexuality. It was during this tragic final period of his life that he argued passionately for machine sentience, and I have wondered whether he was engaging in a highly original new form of psychological escape and denial; running away from sexuality and mortality by becoming a computer.
If the link between mortal-sexual transcendence and technology sounds feverish, perhaps it is — but it’s as old as ancient religion, the desire to transcend the body. I’ll expand on this next time, in a profile of the chief proponent of “Digital Immortality,” the founder of a self-described “‘trans’ religion” called the Terasem Movement, whom New York Magazine called “The Trans-Everything CEO,” Martine Rothblatt. (She’s friends with one of the main targets of Lanier’s critique, the man who popularized the idea of “the Singularity,” Ray Kurzweil.)
About the Turing Test — a computer’s ability to convincingly imitate a human, which was based on an old parlor game called “the Imitation Game,” in which a man tried to convincingly imitate a woman — Lanier has this to say:
Turing's mistake was that he assumed that the only explanation for a successful computer entrant would be that the computer had become elevated in some way; by becoming smarter, more human. There is another, equally valid explanation of a winning computer, however, which is that the human had become less intelligent, less human-like.
As an example, he cites the way credit ratings software incentivizes people to go into debt, encouraging them to borrow money whether they need it or not, to increase their credit scores.
In doing this, we make ourselves stupid in order to make the computer software seem smart.
We make ourselves take summer reading recommendations from the Chicago Sun-Times, for example.
We have caused the Turing test to be passed. There is no epistemological difference between artificial intelligence and the acceptance of badly designed computer software.
Which brings me back to Slush Pile, and Google’s “digital god.”
I realize there is a qualitative difference between the Gemini 2.0 on Google’s search engine and the AI in its DeepMind lab; that there are extraordinary tasks smarter forms of AI can accomplish. But what follows still counts as making ourselves stupid to make machines seem smart, should we choose to accept their literary criticism.
For the sake of short fiction (to attend my first Slush Pile reading with some pages under my arm), I wanted to convey the difference between an AI narrator and a human one, when I was switching between the two in a story.
I toyed with the idea of throwing in a few words from Middle English, to alert readers to a human voice — that using “knewn” instead of “known” might signal a pre-Industrial consciousness.
I’m no Late Medievalist or Early Modernist, so I asked Google if “knewn” is an archaic form of “known” (the most common spelling, it turns out, is “knouen”).
Just once, Gemini came up with the following usage, from the University of Michigan’s Middle English Compendium. (It took me two days to relocate this rare usage of “knewn” using Google and Gemini, leaving me uncertain whether this was an actual quote, or an AI hallucination.)
“That grace schuld schamely from hym schede, and he be knewn as he is.”
Here’s Google’s interpretation of what this quote, from the fourteenth century, means:
The sentence above suggests a situation where Grace might feel some shame or pressure to stick to her planned gym schedule, while also expressing a desire for her to be recognized and understood for who she is as a person.
Somehow, I felt smarter.
Then again, here’s what Gemini had to say a few days later. About the same verse, from The Stanzaic Life of Christ:
“That grace should from him shamefully shed, and he be known as he is.”
The sentence suggests that someone's good standing or grace would be shamefully stripped away, and their true character, likely a negative one, would be exposed for all to see. It implies a public revelation of a person's unworthiness or misdeeds.
That, actually, sounds fairly accurate.
Prophetic, even.
If you believe in that sort of thing.