© 2024 Blaze Media LLC. All rights reserved.
The right approach to AI policy

America’s technology sector surpasses that of any other country. Washington should not blow that lead.

As the 2024 Congressional Baseball Game entered its final inning earlier this month, the Republicans led the Democrats by a score of 21-10. With a man on first, House Majority Leader Steve Scalise (R-La.) came to bat. The pitcher, Rep. Chris Deluzio (D-Pa.), checked the runner and fired home. The ball tailed away, off the plate, and Scalise took ball one. The next pitch ran outside, as did the one that followed. With a count of 3-0, Deluzio went up and in with the pitch. But the majority leader held up, taking the four-pitch walk and making his way to first base without unshouldering his bat.

Scalise applies the same prudent and restrained approach that he displayed at bat to the regulation of artificial intelligence. He made as much clear the day after the GOP’s victory at Nationals Park. Scalise said he doesn’t “believe that Congress should pass any AI-related regulations,” establishing a new party position on the most important issue in tech policy, Punchbowl News reported.


Washington is strongly attracted to action for its own sake, often tempted to impose stringent state control on emerging technology. However, Scalise recognizes that America’s technological dominance and the accompanying prosperity largely depend on lawmakers refraining from interfering in the market.

“Ultimately, we just want to make sure we don't have government getting in the way of the innovation that’s happening,” Scalise said. “That’s allowed America to be dominant in the technology industry, and we want to continue to be able to hold that advantage going forward.”

Often, to score runs or simply to maintain the lead, lawmakers must keep their proverbial bat on their shoulder.

The alternative perspective — which favors regulatory action for its own sake — stems from the fallacy that something — anything — must be done. These action-obsessed anti-Scalisers believe that AI’s development must be centrally planned. For example, Sen. Cory Booker (D-N.J.) recently lamented that, should it fail to keep pace with European regulators, America will fall behind Europe technologically. In fact, ample data demonstrates the superiority of America’s light-touch regulatory style.

And, as a rule, European regulators serve as a poor model to follow — in any policy area.

Often, when free marketeers criticize manifestly inapt proposals, pro-regulation lawmakers deride them for supposedly rejecting all regulatory action. Buried in this ridicule lies a disastrous underestimation of the cost associated with faulty regulation.

Also, the charge is false. Even staunch libertarians, who favor minimal regulation of AI, advocate some regulation and, in Scalise’s phrase, filling “gaps in the laws” where necessary. Many ills — even many that implicate legitimate governmental interests — have no discernible public policy solutions. History offers countless cases in which would-be technocrats’ efforts at central planning produced unintended consequences far worse than the status quo they sought to improve.

What’s more, many fears that drive efforts to hyper-regulate AI anticipate future ills with little chance of materializing — e.g., a supercomputer takeover, to take one worry of President Joe Biden. However, the basic laws of economics, psychology, and human association apply as much to the digital world as to the physical one. As Calvin Coolidge once said, “If you see ten troubles coming down the road, you can be sure nine will go in the ditch and you have only one to battle with.” Silent Cal would presumably have opposed hamstringing American innovation in the name of combatting those nine ditch-bound troubles.

Nonetheless, both in Washington and in statehouses nationwide, too many lawmakers have credulously embraced the “something, anything” ethos. Proposals for licensing regimes, new agencies, and speech-crushing regulations have swarmed Congress. Meanwhile, state lawmakers are now considering hundreds of AI-related bills.

Consider Colorado’s Senate Bill 24-205, which Gov. Jared Polis (D) signed in May. “I appreciate the goals of the sponsors to begin an important and overdue conversation to protect consumers from misunderstood and even nefarious practices in a burgeoning industry and the bipartisan efforts to bring this bill to me,” reads Polis’ signing statement. Yet the rest of the statement reads like a veto letter. In it, Polis outlined the bill’s myriad flaws. “Government regulation that is applied to at the state level in a patchwork across the country can have the effect to tamper innovation [sic] and deter competition in an open market,” Polis wrote.

Reading this, one would expect Polis to have vetoed the bill outright, but instead (to get something — anything — enacted) Polis signed it with a plea to ameliorate SB 24-205’s flaws during its two-year implementation period.

A home run may be preferable to a walk, but if the batter sees no pitches near the zone, the choice often becomes one between a walk and a strikeout. Wishing for another viable alternative will not produce one. Wishing for the knowledge problem not to obtain in AI policy-making — or that half-baked AI regulations will not generate unintended consequences — will not make it so.

America’s technology sector surpasses that of any other country. Washington should not blow that lead.

David B. McGarry

David B. McGarry is a policy analyst at the Taxpayers Protection Alliance in Washington, D.C.