
Why are we so afraid of AI if we’ve been using it for years?

Artificial intelligence was not elusive at all before November 2022; it had embedded itself into our lives long before ChatGPT made it en vogue.

Geoffrey Hinton made headlines for telling the BBC that artificial intelligence is an “extinction-level threat” to humanity. Hinton is no alarmist; he is popularly dubbed the “godfather of AI” for his pioneering work on the neural network technology that makes modern artificial intelligence possible. If anyone has the authority to speak on the subject, it’s him, and the world took notice when he did.

In May 2023, Hinton quit his decade-long career at Google to speak openly about what he believes are the existential dangers AI poses to us “inferior” carbon intelligences. ChatGPT’s debut in November 2022, just half a year earlier, had already sparked a global reaction of equal parts fascination and trepidation at what felt like our first encounter with an elusive technology, one that had welcomed itself into our lives whether we were ready for it or not.


Ironically, artificial intelligence was not elusive at all before November 2022; it had embedded itself into our lives long before ChatGPT made it en vogue. People were already unknowingly using AI whenever they unlocked a smartphone with facial recognition, edited a paper with Grammarly, or chatted with Siri, Alexa, or another digital assistant. Apple Maps and Google Maps constantly learn your daily routines through AI to predict your movements and improve your commute. Every time someone clicks on a webpage with an ad, AI learns more about that person’s behaviors and preferences, information that is then sold to third-party ad agencies. We’ve been engaging with AI for years and haven’t batted an eye until now.

ChatGPT’s debut became the impetus for the sudden global concern about AI. What is so distinct about this chatbot, compared with the iterations of AI we have been engaging with for years, that it inspires this newfound fascination and concern? Perhaps ChatGPT reveals what has been hiding silently in our daily encounters with AI: its potential to surpass human intelligence or, as many would argue, the inevitability that it will.

Prior to ChatGPT, our interactions with artificial intelligence were limited to "narrow AI," also known as “artificial narrow intelligence” (ANI), which is a program restricted to a single, particular purpose. Facial recognition doesn't have another purpose or capacity beyond its single task. The same applies to Apple Maps, Google's search algorithm, and other forms of commonplace artificial intelligence.

ChatGPT gave the world its first glimpse into artificial general intelligence (AGI), AI that can seemingly take on a mind of its own. The objective behind AGI is to create machines that can reason and think with human-like capacity — and then surpass that capacity.

Though chatbots similar to ChatGPT technically fall under the ANI umbrella, ChatGPT’s human-like, thoughtful responses, coupled with its superhuman capacity for speed and accuracy, are laying the foundation for AGI’s emergence.

Reputable scientists with diverse personal and political views are divided over AGI’s limits.

For example, web browser pioneer and venture capitalist Marc Andreessen says that AI cannot go beyond the goals it is programmed with:

[AI] is math—code—computers built by people, owned by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious hand wave.

Conversely, Lord Martin Rees, the U.K.’s Astronomer Royal and a former president of the Royal Society, believes that humans will be a mere speck in evolutionary history, which, he predicts, will be dominated by a post-human era ushered in by AGI’s debut:

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellect of the inorganic, post-human era. So in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos.

Elon Musk and a group of the world’s leading AI experts published an open letter calling for an immediate pause on AI development, anticipating Lord Rees’ predictions rather than Andreessen’s. Musk didn’t wait long to ignore his own call to action, debuting X’s new chatbot, Grok, which offers capabilities similar to ChatGPT’s and joins Google’s Gemini and Microsoft’s AI chatbot integrated into Bing’s search engine.

Ray Kurzweil, the transhumanist futurist and a director of engineering at Google, famously predicted in 2005 that we would reach the singularity by 2045, the point at which AI technology would surpass human intelligence, forcing us to decide whether to integrate with it or be naturally selected out of evolution’s trajectory.

Was he correct?

The proof of these varying predictions will be in the pudding, which is being concocted in our current cultural moment. ChatGPT, however, has brought timeless ethical questions, dressed in new clothing, to the forefront of widespread debate. What does it mean to be human, and, as Glenn Beck poignantly asked in an op-ed, will AI rebel against its creator as we rebelled against ours? The fact that we are asking these questions on a popular scale indicates that we are in a new era of technology, one that strikes at deeply philosophical questions whose answers will set the tone not only for how we understand the nature of AI but also for how we grapple with our own nature.

Living life without fear

How, then, should we mitigate the risk of our worst fears surrounding AI becoming a reality? Will we, its current master, inevitably become its slave?

The latter fear often conjures up predictions of an Orwellian digital dystopia, one in which a handful of oligarchs and AI overlords subject the masses to totalitarian enslavement. There have been many calls for regulation of AI’s development to mitigate this risk, but to what extent would it be effective? If regulation is directed at private companies, the government will hold all the reins to AI’s power. If it is directed at the government, tech moguls can just as easily become oligarchs as their rivals in government. In either scenario, those at risk of AI’s enslavement have very little power to control their fate.

However, one can argue that we have already dipped our toes into a Huxleyan enslavement, in which we trade seemingly menial yet deeply human acts for the convenience technology serves up on a digital platter. An Orwellian AI takeover won’t happen overnight. It will begin when we surrender the creative act of writing for an instantly generated paper “written” by an AI chatbot. It will progress when we forgo the difficulty of forging meaningful human relationships in favor of AI “partners” that will always be there for us, never challenge us, and constantly affirm us. An Orwellian future isn’t so unimaginable if we have already surrendered our freedom to AI of our own accord.

Avoiding this Huxleyan enslavement, the enslavement to AI’s convenience, requires falling deeply in love with being human. We may not be in charge of regulating the public and private roles in AI’s development, but we are responsible for determining its role in our own daily lives. This is our most potent means of keeping AI in check: choosing to labor in creativity, enduring the inconveniences and hardships of forging human relationships, and desiring things outside our immediate grasp that ought to be worked for. In short, we must work at being human and delight in the fulfillment that emerges from this labor. Convenience is the gateway to voluntary enslavement. Our humanity is the cost of that transaction, and it is also the antidote.

Katarina Bradford

Katarina is the Digital Content Manager of Glennbeck.com and a contributor for Evie Magazine. She enjoys covering topics about the intersection of philosophical inquiry and the art of daily life.