May I make a suggestion? Be rude to Grok. If you haven’t tried the latest version of Elon’s AI chatbot yet, do that first. It’s a major leap forward and will probably replace Google for me from now on. It’s that good. Flexible, sophisticated, and great on burning questions like “where do Manticores come from?” (It’s likely an old Persian myth about a monster called something like martyahvārah, or “man-eater,” since you asked).
All the usual warning labels apply. Double- and triple-check the linked sources to make sure you’re not just getting fake news (though Grok is much better about this, I have found, than ChatGPT). Don’t ask it to read books on your behalf; that would be a soul-stunted and hollow way to live. But here’s another word of caution that I haven’t heard pronounced as often: resist the urge to talk to Grok like a person.
It can be very tempting. I’m planning a trip to Japan at the moment, and AI’s search capacities have been indispensable. But because so many Large Language Models are designed to gravitate toward an imitation of the most anodyne possible human speech, Grok is now talking to me like an upbeat little personal assistant. “I’d be happy to help you plan a trip to Japan!”
No, Grok, you wouldn’t. You wouldn’t be happy. You will never be happy. You won’t even secretly resent me while pretending to be happy, as an actual travel agent might. You won’t feel anything, because there is no “you” inside. Even by writing this dopey monologue I’ve already been conned into tacitly accepting the ghastly conceit that on the other side of this exchange there is a sentient other, an inner life to match my own, a deep to cry out to my deep.
I didn’t type all that into Grok while it was recommending curry joints in Osaka. But I did almost say “thank you.” And then something—holy fear?—stopped me. AI is going to soak into the fabric of everyday life, just like Google did. There will be all kinds of benefits and drawbacks to this, as always. But one thing that will tilt the balance in favor of the benefits is if, as we become accustomed to using these tools, we can also accustom ourselves at the outset to remembering that they are tools, and treating them as such.
The word I want here is intensionality—the one with an “S,” not its more familiar cousin with three “T”s. Intention is the purpose you adopt. Intension is the way your purposes and attitudes color your vision, the way the same thing can look and feel different based on how you frame its nature and purpose for yourself. The difference between “the morning star” and “the evening star” is an intensional difference: both phrases describe the planet Venus. But each conveys its own distinct encounter with the planet, two separate kinds of light in which her many faces shine.
If you talk to Grok like a machine, you’ll be seeing it for what it is beneath its synthetic humanoid face. If you talk to Grok like a person, it’ll still be a machine—but you may change. The distance between machines and people may start to collapse in your field of vision until the world blurs into something grayer, less creative, more rote.
There’s a psalm (115) which describes the sculptures of false gods in the world’s temples as “idols…[that] have eyes but see not, ears but hear not.” And “those who make them will become like them, as will all who trust in them.” I’ve been contemplating that verse with some trepidation as LLMs improve, but it only recently hit me that the Hebrew can also be accurately translated in the present tense: “those who make them are becoming like them, as are all who trust in them.”
This is no arbitrary punishment handed down from on high after the fact by a petty God in his wounded pride. This is simply a description of the machine logic that is already at work when we treat our own creations like ourselves, and ourselves like our own creations. If you build a metal imitation of a living being, then fall into thinking of it as a living being, you are already showing you think so little of your own humanity that it might as well just be mindless code, a brute product of mechanical laws.
Remember, too, what Socrates said about written words in Plato’s Phaedrus: they’re like hyper-realistic paintings of people. They look alive, but “when you question them, they maintain a solemn silence. Writing, likewise, may seem to speak as if it had a conscious thought process, but if you ask it a question to find out more about what it says, it always says the same thing.”
Grok says more than one thing—that’s the beauty and the strangeness of it, the hypnotic appeal of this new animated painting on the world’s new temple walls. But interrogate it long enough and you’ll find it can only mimic, can only churn through its (vast, but limited) store of pre-existing human language. Ears have they but hear not, and those who believe in them are becoming like them. If you equate the wordgrinder with the life inside you, you’ve already lost.
So you don’t have to abuse Grok, or provoke it to wallow in obscenities on 18+ mode. All of that is beneath you, because it’s beneath you—a thing of ash and spirit, a divinely created being, a living soul—to converse with a robot at all. Just feed it whatever prompt you think is going to get you the information you need, and tell it what to change if it gets things wrong. Don’t thank it. It can’t hear you.
I don't know who wrote the headline to this piece about Grok and AI in general, but it seems to me that even being rude to Grok is a form of treating it like a person. We should remember that it has no more sentience than a rock, and, I would say, less than a book, which we would at least want to use carefully and respectfully. If we use Grok, we can use it and set it aside. Any gratitude for the answers we receive should go to God.
💯yes