
Regulating AI won't protect Americans; it's about Big Tech having a monopoly

The more I read and write about AI, the firmer my conviction becomes that Big Tech incumbents absolutely must strangle decentralized AI in its cradle before it wrecks everything for them.

Take a look at this interview Ben Thompson did with OpenAI’s Sam Altman and Microsoft CTO Kevin Scott. I call your attention to this part in particular:

From Microsoft’s perspective, is this going to be a funnel into new products or do you see it as an end goal in and of itself, winning search?
KS: So I think you hit on a very important point which is even if the ad economics of this system doesn’t have the same economics that “normal search” has, if we gain share, it’s just great for Microsoft. I think we have a lot of ability here, partially because we’ve done so much performance optimization work and we’re really confident around costs, that we can figure out what the business model is. The thing that I know having been a pre-IPO employee at Google is the search business that you have now is very different from the search business that we had twenty years ago, and so I really think we’re going to figure out what the ad units are, we will figure out what the business model is, and we have plenty of ability to do all of that profitably at Microsoft.
SA: There’s so much value here, it’s inconceivable to me that we can’t figure out how to ring the cash register on it. [Emphasis added]

I recently said the same thing to an interviewer who asked me about search and Google. The point I made was that Google has been here before: when it first launched search, there was no business model for it until the company hit on one (by acquisition, no less). Don't assume, I argued, that it won't hit on another profitable model for whatever kind of user experience BingGPT and Bard evolve into.

But I also made another point to the interviewer that’s not at all captured in the above but that’s critically important for everyone thinking about tech policy in the current moment: to make any business model work for them, they will first have to kill decentralized AI.

Centralized vs. decentralized

There’s a set of assumptions implicit in Scott and Altman’s vision of how they might eventually “ring the cash register” on AI-backed chat as the new query interface for most information:

  • Users go to their centralized servers and type text into a box that they host.
  • Advertisers go to those same servers to get in front of all the users.
  • Somehow, the advertisers and the users can be connected to one another, with Microsoft acting as a middleman.
  • Or, maybe the users pay Microsoft directly for the queries via a subscription or micropayment scheme.

In other words, Microsoft’s ability to squeeze profits out of the experience of interacting with an LLM presumes that billions of users will continue to flock to a handful of centralized services to get their queries answered. This is a vision, then, predicated on a world of centralized AI.

But what if we end up in a world of decentralized AI instead? What if I can download an app that answers questions using all of Wikipedia and Reddit as its knowledge base, in some cases going out to those sites and pulling in fresh data?

What if some of the data sources are my favorite news websites and forums, all of which have signed up to provide data to the app and which get a cut of whatever revenue it generates?

Or, what if multiple such apps are powered by open-source language models and kept fresh by access to current data sources via an API? I could certainly see the New York Times publishing such an app all by itself, with the ability to answer any question from its vast archives of past issues.
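To make this concrete, here's a minimal sketch of the pattern I'm describing: a local, open-source model answers questions, pulling fresh context from a publisher's API. The model file, the endpoint, and its response shape are all hypothetical stand-ins, and the llama-cpp-python bindings are just one way to run an open model on-device.

    import requests
    from llama_cpp import Llama  # one real option for running open models locally

    # Hypothetical local model file; any open-source model in GGUF format would do.
    llm = Llama(model_path="./models/open-model-q4.gguf", n_ctx=2048)

    def ask(question: str) -> str:
        # Hypothetical publisher search endpoint; in the scenario above, the
        # publisher gets a revenue cut in exchange for exposing it.
        resp = requests.get(
            "https://api.example-news-site.com/v1/search",
            params={"q": question, "limit": 3},
            timeout=10,
        )
        passages = "\n\n".join(hit["text"] for hit in resp.json()["results"])

        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{passages}\n\n"
            f"Question: {question}\nAnswer:"
        )
        # Inference runs entirely on-device; no centralized server sees the query.
        out = llm(prompt, max_tokens=256)
        return out["choices"][0]["text"].strip()

    print(ask("What's the latest reporting on Ukraine?"))

Nothing in that flow requires a Microsoft or a Google; the only network dependency is the publisher being paid for its data.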

Decentralized AI is a real threat

To give some technical context for why the vision of app-based, decentralized AI I've described above is quite possible, consider that the models needed to do this might be on the order of a few gigabytes each. For instance, the model file that powers Stable Diffusion's image generation runs from 2.5GB to 4.5GB, depending on the version, and it was trained on 240TB of image data. That's an astonishing level of compression.

So it's plausible that a model able to answer, say, 75% of our random questions about the world could weigh in at roughly 3GB, about the size of a large mobile game download.
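Here's the back-of-the-envelope math behind those figures. The 240TB and 2.5GB-4.5GB numbers are from above; the 7B-parameter, 4-bit quantization example is my own assumption, included because it's a common recipe for models of roughly this size:

    # Compression ratio implied by the Stable Diffusion figures cited above
    training_data_gb = 240 * 1024            # 240TB of training data, in GB
    model_gb_low, model_gb_high = 2.5, 4.5   # model file sizes, by version

    print(f"~{training_data_gb / model_gb_high:,.0f}x "
          f"to ~{training_data_gb / model_gb_low:,.0f}x compression")
    # -> ~54,613x to ~98,304x compression

    # Plausibility check on the ~3GB figure: a 7B-parameter language model
    # quantized to 4 bits per weight (an assumed, but common, local-inference
    # setup) comes out to about:
    print(f"~{7e9 * 4 / 8 / 1e9:.1f} GB")    # -> ~3.5 GB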

If I can download models that can reliably answer questions about their training data, why do I need to visit a Microsoft- or Google-hosted website and type queries into their text boxes? If I want recipes from my favorite recipe site, maybe I visit that site instead and talk to its model. If I want the current NYT or WaPo consensus on Ukraine, why wouldn't I just go to those sites and chat with their bots? Why does a Microsoft or a Google need to be involved in any of this?

The answer, of course, is that they don’t need to be involved. Decentralized AI can and will cut them out entirely, assuming it’s allowed to.

But that’s a big assumption because the future of decentralized AI is by no means guaranteed.

Before we go into who's trying to kill decentralized AI and why, some caveats:

  1. Using the models to answer questions requires quite a bit of computing power. But these inference costs can and will be reduced, as this is an active area of research. Also, have you seen mobile phones lately? There's no shortage of computing power, and phone makers are always looking for ways to use it. After a few product cycles of optimizing the hardware for running queries, it's not hard to imagine very fast local performance on many kinds of models (see the timing sketch after this list).
  2. Yes, the models still make up facts. This "hallucination" problem is a big one, but it's also one that everyone is working on. The models will get better at faithfully representing the facts in their data sources.
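On caveat 1, here's a quick way to sanity-check local inference speed on whatever hardware you have. It reuses the hypothetical model file and the llama-cpp-python bindings from the earlier sketch; the thread count is an assumption to tune for your machine:

    import time
    from llama_cpp import Llama  # as in the earlier sketch

    # Hypothetical local model file; n_threads is a knob to match your CPU.
    llm = Llama(model_path="./models/open-model-q4.gguf", n_threads=8)

    start = time.perf_counter()
    out = llm("Q: Why is the sky blue?\nA:", max_tokens=64)
    elapsed = time.perf_counter() - start

    generated = out["usage"]["completion_tokens"]
    print(f"{generated} tokens in {elapsed:.1f}s "
          f"({generated / elapsed:.1f} tokens/sec)")

If numbers like these keep improving with each hardware product cycle, the inference-cost objection mostly evaporates.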

We’ll have to fight for a decentralized future

I’ve written at length on my Substack about the forces arrayed against decentralized AI, so I won’t repeat that here. But to summarize: The aforementioned model files representing the “brains” of an AI like Stable Diffusion or ChatGPT could very easily be treated like digital contraband and wiped from the internet.

Everyone from Googlers to Google-hating former Googlers to indie artists to profiteering lawyers is hard at work constructing rationales for why these model files should be subject to the same censorship as child porn, 3D-printed gun files, pirated movies, spam, and malware.

Here are some of the rationales currently being explored for banning decentralized AI:

  • All the model files are full of copyright violations because they were trained on copyrighted data.
  • Generative text models can cause harm to the marginalized because “hate speech” can be coaxed out of them.
  • Generative text models will catastrophically increase the threat of “disinformation.”
  • Generative image models will be used for non-consensual fake porn of real people, many of them children.

We wouldn’t even have to pass any new laws to have these model files banned. All it would take was an agreement among a handful of large players that these files and any apps or sites based on them pose a threat. I imagine the following platforms can and probably will come together to effect what amounts to an effective ban on decentralized AI:

  • Google Play
  • Apple’s App Stores
  • Amazon Web Services
  • Cloudflare

All of this means a world where everyone gets to host their own models, backed by their own data sources, is by no means guaranteed. Going by the lessons of history, I'd say it's probably unlikely.

It seems increasingly likely to me that the forces of centralization will succeed in getting unauthorized model files treated like contraband, and in five years, we’ll still be running all of our queries on servers hosted by one of the Big Tech platforms.

I hope I’m wrong about this, but I do know that if we’re going to have decentralized AI, then we’re going to have to fight for it.

Jon Stokes

Jon M. Stokes is co-founder of Ars Technica. He has written extensively on microprocessor architecture and the technical aspects of personal computing for a variety of publications. Stokes holds a degree in computer engineering from Louisiana State University and two advanced degrees in the humanities from Harvard University.
@jonst0kes