The mistake of reading desire into machines

Last night, it rained, and the crisp morning air is filled with a hint of petrichor and the aroma of coffee. I am sitting outside on a coffee shop’s veranda on the beautiful open campus where I work. I got here early, and the sun has only just started to rise. The sky is pale, and I am surrounded by autumnal trees, all fiery reds, oranges, browns, and yellows — and a few stubborn greens, the last pieces of summer. If you ever feel overwhelmed by simulacra, I recommend coming to your senses.

In my hand is a pen slowly being emptied of black ink as I guide it to scratch the smooth tooth of a cream-colored page in a notebook. Lines connect and become letters and words. Later, I will type this into a computer, and my words, these echoes of the thoughts I am thinking now and the feelings I am feeling now, will find a home in the digital realm. And eventually they, like the leaves scattered around me by rain and wind, will be distributed into the world. And then, hopefully, someone — you there, perhaps — will read them and be stirred by them. I have a few books with me, but we’ll get to those later. I honestly can’t imagine a more serene atmosphere in which to contemplate the end of the world.

“We are going to destroy society by automating as many jobs as we can,” said some or other big tech big shot in a recent interview I stumbled upon somewhere on the internet. Well, that’s not exactly what he said or how he said it, but that is how it sounded and how I remember it. I almost expected him to say, “What do you want us to do, not destroy society?!” A lot of people are worried about what the so-called AI revolution will amount to. Will we be forced to give our meaningless jobs to robots? Will guillotines be built by future robot armies, and will human beings be beheaded for being too human? It’s so peaceful where I am right now, but I have in mind an image from a movie in which a machine with a skeletal humanoid foot, in a gesture of disdain, crushes the skull of a person long gone. A voice-over in my remembrance of that film tells me about a war between man and machine. Is that where we’re headed? Is that where we are?

A recent Reuters/Ipsos poll finds that around 60% of Americans believe that AI threatens the future of humanity. That’s Zeitgeist meteorology, a measure of feelings and imaginings, but it is telling in its own way. Although people have long believed, on the whole, that technology and humanity can coexist, now, more than perhaps ever before, many are convinced that technological progress is ruining our lives. If you will, lift up your nose and catch a whiff not just of the coffee — which is really good, by the way — but also of the paradoxes of modern freedom. Despite our astonishing freedoms, the buildup to a total technological takeover seems unstoppable. We are free, apparently, and yet the future has somehow already been determined for us. Man wants to be free and is everywhere in chain reactions.

Of course, when a glorified calculator wanders mindlessly into an uncanny valley and trips over various signs of its own nonsentience, it’s easier to laugh than to be worried. It can be fun, for instance, to see what digital undeath does when it attempts to paint a flabby nine-horned gopher-cow imperturbable in a field of ice cream in the style of Egon Schiele or write a self-deconstructing marketing pitch for a brand of foot remover in the style of H.P. Lovecraft. What emerges is seldom close to what you and I are capable of imagining. But some AI is unnervingly good at all kinds of tasks, including creative ones. Some kind of Luddism has begun to look far more appealing to me than I ever thought it would. Is it time to call, in the style of Mario Savio, for people to put their bodies upon the keyboards and the hardware and the code and the algorithms and all the apparatus and make it stop?

Mimetic tech

This sense of powerlessness over a constantly encroaching cyborg theocracy has something to do with how we tend to feel about technology itself. As Hans Jonas has written, technological acts soon seem to “make themselves independent.” Tools reliant on our intentions soon become machines free from our intentions. Technologies, suggests Jonas, gain an autonomous momentum through which they “overtake the wishes and plans of [their] initiators. The motion once begun takes the law of action out of our hands, and the accomplished facts, created by the beginning, become cumulatively the law of its continuation.”

Martin Heidegger similarly suggests that technology operates well beyond instrumentality, as if it has a mind of its own. In his view, technology doesn’t work for us but apart from us and even against us; it does not serve us but, in constricting revelation, acts as our destiny. In "The Technological Society," Jacques Ellul suggests a similar thing. He views technology as defiantly autonomous. It is governed by what he calls self-augmentation, which means that technique always gives rise to more technical problems and, consequently, the necessity of even more technique and more technology. Technique metastasizes. People start to exist for the sake of their inventions. They are rendered more and more passive. The implication from the Jonases, Heideggers, and Elluls of the world — and they are by no means alone in their hot takes — is this: If you think you can control technology, think again.

Ellul says somewhere, although I can’t find where right now, that the autonomy of technology exists at the expense of human autonomy. It does not take much to feel the sense of what he is saying. At least a few times a week, I sense that I am at the mercy of my emails. I know they’re just tamed little pixels coagulating into the shape of bureaucratese, but they often do seem to possess something of the residual being of Cthulhu. That hellish invention definitely curbs my autonomy. And yet, in all of what I have just noted about the totalitarian drift of technology and the way that it encroaches on our freedom, there is a significant problem. The problem is that autonomy is a myth. Moreover, the suggestion that technology must inevitably become more than itself until it dialectically consumes all that is non-technological betrays a fundamental misunderstanding of technology. It assumes, if only implicitly, that technology has desires of its own. But no, we are the ones who desire, and we do not desire autonomously. We can quite easily, therefore, if only inadvertently, read desire into our inventions.

We should expect this, given René Girard’s discovery that desire is mimetic. Desire, he contends, is mediated. No desire erupts spontaneously within the confines of supposed individuality. Rather, every desire emerges in our interactions with others whom we emulate. We want what others want out of a sense that we lack what they have; we assume that if we desire in the way they do, we will be able to make up for our own deficits and perhaps even transcend them in the way those others seem to have done. We are created — which is to say it is ingrained in us — to interpret and take on the desires of others.

And so, when we use technologies, it is not only probable but almost inevitable that we will find desire in them, even when it is not to be found, and we will, in turn, copy their so-called desires. We’ll often miss that we’re working on rumor and not fact. As Iain McGilchrist writes in "The Master and His Emissary," our imitative capacities have driven us to imitate machines. We have begun to humanize machines even while we dehumanize ourselves. Imagine how much more we will read desire into our technologies as those technologies increasingly simulate human thought. The confusion between what is merely mechanical or algorithmic and what is human is only likely to grow.

For Girard and other mimetic theorists, copied and pasted desire doesn’t just draw us into harmony with others. Desire is also the hinge on which every rivalry turns. Conflict requires interdependence, and interdependence requires shared desire. This is interesting to consider given the recent, apparent resurgence of the rivalry between human beings and technology. No doubt, machines can and do encroach on the territory of people. It is fair to be upset that AI might take your job, just as it is fair to be upset at the prospect that a machine might do all your writing and lovemaking for you. It is one thing to imagine machines taking over things you hate but quite another if they’re doing things you enjoy. But in this rivalry between human beings and machines, we need to be absolutely clear on one thing: Machines do not want anything. If we read desire into machines, we are wrong.

When confronted with simulations of human intelligence, we need to be especially on guard against reading desire into electronic, robotic, digital undeath. If there is a rivalry, it exists primarily among those developing what Ellul mistakenly calls self-augmentation, which is really mimetic escalation. Technological development metastasizes because of the tense rivalries of technocrats, not because tech has a will of its own. There is likely also rivalry between ordinary people and those technocrats and programmers who want to make machines simulate the human. These rivalries are rivalries in search of the manifestation and augmentation of what is increasingly inhuman. And it is our job, as I see it, to refuse to give in to such a picture of the world.

How do we do this? Well, the first step is acknowledging the myth of autonomy and naming the mistake of reading desire into machines. I don’t mean that we mustn’t put up some kind of fight against the possibility of the rule of AI; we should certainly not be naive about what we’re facing. But to see clearly what is going on, we must be rid of the myth that holds that autonomous machines are encroaching on our own autonomy. We must be rid of any tendency to read desire into our inventions. You do not have to do what machines want because they don’t want you to do anything, even if the human agents who have designed them do. Machines are, in fact, wanting and will always be wanting. To recognize this is to recognize that there is no mechanical compulsion we must adhere to. You do not have to switch on a computer. You do not have to use the AI. You do not have to eat the proverbial microchipped bugs.

Given various seeming catastrophes lurking on the horizon, we should be asking and answering the question of who — not what — we want to emulate while we’re fighting for the future. In one of his letters, St. Paul offers himself up as an example for his disciples. “Imitate me as I imitate Christ,” he says. This is not an arrogant claim. He recognizes that his desires are not self-created and so recognizes, too, that it is better to be intentional about who he emulates than to be simply at the mercy of whichever swarm he happens to be living in. He knows how easy it is to fall into the trap of emulating the wrong others and the wrong desires. “Emulate me,” he therefore says, “as I emulate the true human.”

That is perhaps more difficult to do than to emulate our own desires projected onto dead code. But as the old saint reminds us, human beings are transformed by their relation to others. And this is to say, by implication, that we aren’t organized in a merely technological fashion. Human beings are destined to be the manifest obliterators of the procedural algorithms of computers. Granted, that’s easier to believe when you are outside on a brisk and beautiful autumn morning with a pen in your hand and a notebook to write in. Hope is easier to find when you are not sitting at a desk behind a screen. Sadly, it is to that very shrunken reality that I must now go.
