When will computers be smarter than humans? Return asked top AI experts: James Poulos

The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model technology. These machines can perform a wide range of tasks once thought solvable only by humans: writing stories, creating art from text descriptions, and solving complex problems they were not trained to handle.

We posed two questions to six AI experts: James Poulos, roon, Max Anton Brewer, Robin Hanson, Niklas Blanchard, and Anton Troynikov. —Eds.

1. What year do you predict, with 50% confidence, that a machine will have artificial general intelligence – that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?

2. What changes to society will this effect within five years of occurring?

James Poulos

It is very easy to imagine or reason about the absolute triumph of computational machines and networks over human capabilities and actions in the realm of so-called “general” intelligence — so easy that even humans restricted in their thought process to reasoning and imagination should be quick to question why it is so easy.

If the hallmark of imagination is novelty and the hallmark of reason probity, there should be among our theorists of AI a robust and active interest in more novel scenarios that require more intellectual effort than the “default” of machine supremacy, whether that default is pegged to arrive in 300 years or 30 or three or at a point now already past.

After all, the simplest or even most tautological logic in favor of predicting the imminent and inevitable surpassing of the human faculties of reason and imagination by artificial “general” intelligence is that the momentous event became imminent and inevitable once enough of us, on the research and production side as well as the consumption side, became “convinced,” consciously or otherwise, that indeed it was more or less imminent and inevitable. At what point might that have happened, and for what reason? And to that question we can add: What constituted then, and now constitutes today, or tomorrow, “enough”? And how do we, or would we, know or sense — well enough to venture a prediction worth making and considering — that what strikes “us” as being or having been enough really is, in the world, “enough”?

Because it is clear even now, in fact more so each day, that most of us humans in the world, despite (or let’s dare to suggest because of) the massive strides in technological sophistication enclosing human and all life on Earth, do not consider the obsolescence of human beings in the realm of intellect and the activities it fosters to be imminent and inevitable.

Secular supremacy

Besides technologists, who are as instinctively inclined to agree that technology will take over the world as financiers are to feel that finance, if it has not already, will do so; and besides the absolute least reflective of ordinary people who are nonetheless reflective enough to enthuse over the self-serving predictions of technologists so unsatisfied with mere technology that they take on a second calling as “futurists,” who among us really believes that nothing will stand in the way, not just of the attainment of AGI but, as is routinely claimed, of AGI’s then-instantaneous triumph over the world and everyone in it, from the molecular to the planetary level and beyond? The common person whose work, identity, purpose, and life these events would obliterate? The anxious and scheming upper-middle-class person whose access to influence and income would remorselessly be taken away? The elite “technocrat” whose slow defeat at the hands of actual technologists would become a sudden rout?

What motive would compel skeptical Christians, American and otherwise, or monotheists worldwide, to swallow such an amazing and final judgment, one so comprehensively at odds with the bedrock of their faith?

Why, to press on, would Chinese, Indian, Indonesian, or Pakistani people, who together make up over 40% of the world’s population, take an attitude toward the destiny of technology and the human race overwhelmingly fostered, developed, and pursued headlong by not just “Westerners” but secular white Westerners adhering to long-standing esoteric beliefs commonly recognized for millennia as specifically Christian heresies? Why would Russians, among whom even those dubious that Moscow is the last protector of the true Christian faith increasingly see the technologized West as an enemy hostile to their civilization, submit with a whimper to the forecasts of that enemy’s technologists that those same technologists cannot and will not be stopped from creating the machines that exceed the capacities and significance of men, women, and children?

It could only be that such disbelievers in the fate laid out for them had been utterly demoralized by the state of technology and technologists that we already have today. Such a result could only be obtained if technologists and researchers skeptical that AGI could ever possibly exist — such as, most recently, Andrea Roli, Johannes Jaeger, and Stuart A. Kauffman, as they establish in their paper “How Organisms Come to Know the World: Fundamental Limits on Artificial General Intelligence” (Front. Ecol. Evol., 28 January 2022) — had already been thoroughly marginalized and discredited; if the ability of ordinary disbelievers to maintain ongoing communication among one another sufficient to keep their understanding alive had already been thoroughly disrupted and destroyed; and if ordinary disbelievers had been stripped of their very memory and intuition about the primal distinction between organism and machine and between human and non-human organism.

Mountains of evidence exist that the marginalization of dissident researchers, the disruption of free association and discourse, and the alienation of peoples from their own fundamental experience and recollection of reality have been growing more severe for decades. There is little denying that this process has accelerated dramatically in the past few years.

But the process is nowhere near completion, and even projecting its advancement still further does not take away the myriad different paths by which the achievement and absolute triumph of AGI would not happen, a non-exhaustive list of which would include: (a) failure of the research program for imaginable reasons such as technological or environmental limits or limits imposed by AIs themselves; (b) failure of the research program for hard-to-imagine reasons such as unforeseeable internal contradictions or inexplicable failures; (c) failure of the research program for unimaginable reasons that manifest only after the apparent consummation of the program; (d) conflict or agreement leading to the limitation or shutdown of the research program in some, most, or all parts of the world, or leading indirectly to the arrest or freeze of the research program in sufficient part to postpone the attainment of AGI indefinitely; or, to help indicate the basis for a radically different forecast than AGI supremacy, (e) abandonment of the research program due to hostility or mere loss of interest on the part of would-be consumers and even producers.

In this last scenario, it must be recognized already that the acceleration and intensification of hype and fate-mongering meant to encourage acceptance of the inevitability of the ever-more-thorough technologization of life is running up against the decelerating and disenchanting effects of aspects of human nature familiar and unchanged throughout our history. The mounting human experience that the greater technologization of life now brings greater harm and, still more radically, greater disinterest offers a glimpse of a future where humans astonishingly more augmented with technology than at present nonetheless still abandon the pursuit of AGI or prevent machines from pursuing it, because the boredom and disenchantment experienced by the demigod-like cyborgs we have become is simply too much to bear, pointless to bear, or not worth bearing by any standard.

The prospect of an onrushing future in which we feel ourselves becoming entities more akin to gods without souls than to humans with them is already visible enough today to foster a vast and profound reorientation of experience, memory, and intuition against the pursuit of AGI and whatever may come after. To believe, to predict, that this reorientation will be simply plowed under, shoved aside, or routed around is to deny that human nature — the same human nature that perennially eggs us on to reach for fantasies we can never make real — any longer operates or exists.

Human forever

Most revealing for our purposes is how so many AGI supremacists seem capable of bearing the weight of their prophecies only because they are exactly that, prophecies, and not at all mere scientific forecasts or predictions. That the religious foundation of this vision is so plain to see yet is almost never expressed in forthrightly religious terms signals strongly to us that the future actually favors not only “religious belief” and devotion in general but in particular the ancient Christian faith — which transcends and surpasses the primal “religious” distinction between sacred and profane that produced in the West a schematic of knowledge where the mechanics of the godless world are invested with meaning by incantation.

The schematic’s result has been a consuming desire to organize all life by sacralizing mechanical causes (results) with spiritual causes (reasons), and in the end to reduce all life to this process. The vision of AGI supremacy is that automation, and only automation, can bring this process, the only true process, to completion. In such a world, God’s absence from reason and the increasing disenchantment and exhaustion of imagined incantations throw us back on the same resources that even now we are returned to by the indifferent colossus of our automated simulation and prediction machines: not the sterile ideas that issue from reason and imagination alone, but the fertile experience that arises from recollection and intuition.

The return of that experience, through a uniquely human silence and stillness no machine can augment, improve, or replace, points to a re-sacralization of body as well as soul, one that no machine or machinist will be welcomed to transgress. What we intuit and recall amidst our shared participation in the sacred given life of the world can furnish more reasons than reason can calculate to stop the chain of causation that culminates in our uncreation and can envision more images than imagination can summon of what will be lost if we do not. But still more, it can move us beyond the realm of argument by reason and dream, to a place where our obsession with building creations to destroy us may heal and disappear.

James Poulos is the founder and editorial director of Return.
