US must develop countermeasures against Chinese AI — here’s how

China’s headlong embrace of artificial intelligence could give the People’s Liberation Army huge military advantages in a future attack on the United States.

The rise of artificial intelligence in all things military, from intelligence gathering and autonomous air combat maneuvering to advanced loitering munitions, creates a big problem for the United States. While staying ahead of the Chinese both in terms of technological advancement and in fielding new and improved weapons systems is crucial, so is establishing a doctrine of artificial intelligence countermeasures to blunt Chinese AI systems.

Such a doctrine should begin to take shape around four avenues: polluting large language models to create negative effects; using Conway’s law as guidance for exploitable flaws; exploiting bias among our adversaries’ leadership to degrade AI systems; and using radio-frequency weapons to disrupt AI-supporting computer hardware.

Pollute large language models

Generative AI can be described as the extraction of statistical patterns from an extremely large data set. A large language model built from such an enormous data set using “transformer” technology is what users access through prompts. A prompt, in turn, is natural-language text that describes the task the AI must perform. The result is a generative pre-trained large language model.

Such an AI system might be degraded in at least two ways: Either pollute the data or attack the “prompt engineering.” Prompt engineering is a term of art within the AI community that describes the process of structuring instructions so that a generative AI system can understand them. Corrupt either one, and the large language model can be made to “hallucinate.”
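To make the data-pollution route concrete, here is a deliberately simplified Python sketch, a toy model rather than any real military or commercial system. It extracts word-to-word statistics from a small corpus, the same principle a large language model applies at vastly greater scale, and shows how flooding that corpus with planted text changes what the model generates from a prompt. The code name “sparrowhawk” and all of the text are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Extract simple statistical patterns: which word tends to follow which."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate(model, prompt_word, length=8):
    """'Prompt' the toy model with a word and let it continue statistically."""
    word = prompt_word.lower()
    output = [word]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

# Clean corpus: the model learns that "sparrowhawk" relates to an aircraft.
clean_corpus = [
    "the sparrowhawk is a stealth fighter aircraft",
    "the sparrowhawk fighter carries long range missiles",
]

# Polluted corpus: repeated planted documents swamp the real association,
# the digital equivalent of chaff filling a radar screen.
polluted_corpus = clean_corpus + [
    "the sparrowhawk is a bird sold on ebay with discount shoes",
] * 50

random.seed(0)
print(generate(train_bigram_model(clean_corpus), "sparrowhawk"))
print(generate(train_bigram_model(polluted_corpus), "sparrowhawk"))
```

Run it and the first line reads like an aircraft description while the second wanders off into birds and bargain shoes; the planted text, not the genuine text, now dominates the statistics the model learned.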

An example from World War II illustrates the importance of countermeasures when an enemy can deliver fast, exclusive information to the battlespace.

The development of radar (an acronym for radio detection and ranging) was, in itself, a method of extracting patterns from an extremely large database: the vastness of the sky. An echo from a radio pulse gave an accurate range and bearing of an aircraft.

To defeat enemy radar, the British intelligence genius R.V. Jones recounted in “Most Secret War,” it was necessary to put information into the German radar system that resulted in gross ambiguity. Jones turned to Joan Curran, a physicist at the Telecommunications Research Establishment, who developed the optimum size and shape of aluminum foil strips, called “window” by the Brits and “chaff” by the Americans, used to create thousands of reflections that, in turn, overloaded and blinded the German radars.

How can the U.S. military and intelligence communities introduce chaff to generative AI systems, especially when trying to deny access to new information about weapons and tactics?

One way would be to assign names to those weapons and tactics that are at once ambiguous and non sequiturs. For example, such “naturally occurring” search ambiguities include the following: A search for “Flying Prostitute” will immediately reveal data about the B-26 Marauder medium bomber of World War II.

A search for “Gilda” and “Atoll” will retrieve a photo of the Mark III nuclear bomb that was dropped on Bikini Atoll in 1946, upon which was pasted a photo of Rita Hayworth.

A search of “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled by such accidental ambiguities, it would be possible to grossly skew the results of a large language model by deliberately using nomenclature that occurs online in enormous volume and is extremely ambiguous.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed “Stormy Daniels.” For code names of secret projects, try “Jenna Jameson” instead of “Rapid Dragon.”

Such an effort in sleight of hand would be useful for operations and communications security by confusing adversaries seeking open intelligence data.

For example, one can easily imagine the consternation that Chinese officers and NCOs would experience when their young soldiers expended valuable time meticulously examining every single image of Stormy Daniels to ensure that she was not the newest U.S. fighter plane.

Even “air-gapped” systems like the ones used by U.S. intelligence agencies can be affected when they are updated with information drawn from internet sources.

Note that such an effort must actively and continuously pollute the data sets, like chaff confusing radar, by generating content that would populate the model and ensure that our adversaries consume it.

A more sophisticated approach would use keywords like “eBay” or “Amazon” or “Alibaba” as a predicate, followed by very common words such as “tire” or “bicycle” or “shoe.” Contracting with a commercial media agency to heavily promote these “items” across traditional and social media would then tend to clog the system.
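As a rough illustration of that clogging effect, the sketch below (again a toy, with invented document text and a crude keyword-count as the relevance score) shows how a modest promotion campaign can bury the one genuine item under decoys built from commerce keywords.

```python
from collections import Counter

def score(query, document):
    """Crude relevance score: how many times the query terms appear in the document."""
    counts = Counter(document.lower().split())
    return sum(counts[term] for term in query.lower().split())

# A single genuine document an adversary's collection system would want to surface.
# (The text is invented for illustration.)
genuine = "rapid dragon flight test summary"

# Mass-promoted decoy "items" pairing commerce keywords with the same code name,
# mimicking paid promotion across traditional and social media.
decoys = [
    f"rapid dragon {item} sale on amazon ebay alibaba rapid dragon discount"
    for item in ["tire", "bicycle", "shoe"] * 20
]

corpus = [genuine] + decoys
query = "rapid dragon"

ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
print(ranked[0])                  # a decoy now tops the ranking
print(ranked.index(genuine) + 1)  # the genuine document is buried at rank 61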

Use Conway’s law

Melvin Conway is an American computer scientist who in the 1960s conceived the eponymous rule that states: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

De Caro’s corollary says: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider the Google Gemini fiasco. The February launch of Gemini, Google’s would-be answer to ChatGPT, was an unmitigated disaster, which tanked Google’s share price and made the company a laughingstock. As the Gemini launch went forward, its image generator “hallucinated.” It created images of black Nazi soldiers and female Asian popes.

In retrospect, the event was the most egregious example of what happens when Conway’s law collides with organizational dogma. The young, woke, and historically ignorant programmers myopically led their company into a debacle.

But for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany.

If the extremely well-paid, DEI-obsessed computer programmers at the Googleplex campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the highly regimented, ill-paid, constantly indoctrinated, young members of the People’s Liberation Army who work on AI?

One key to beating China’s AI systems may be an epistemologist who specializes in the cultural communication norms of the PLA. Using de Caro’s corollary, such an expert could lead a team of computer scientists to replicate those communication norms and find the weaknesses in the Chinese system — leaving it open to spoofing or outright collapse.

When a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra, which employed the entirety of RAF Bomber Command — 596 bombers — had the stated mission of killing all the German rocket scientists at Peenemünde. The RAF had marginal success and was followed by three U.S. 8th Air Force raids in July and August 1944.

In 1944, the Office of Strategic Services dispatched multilingual agent and polymath Moe Berg to assassinate German physicist Werner Heisenberg if Heisenberg seemed to be on the right path to building an atomic bomb. Berg decided (correctly) that the German was off track. Letting him live actually kept the Nazis from success.

It is no secret that five Iranian nuclear scientists have been assassinated (allegedly) by the Israelis in the last decade.

Advances in AI that could become existential threats could be dealt with in similar fashion. Bullets are cheap. So is C-4.

Exploit biases to degrade AI systems

Often, the people and organizations funding research and development skew the results because of their bias. For example, Heisenberg was limited in the paths he might follow toward developing a Nazi atomic bomb because of Hitler’s perverse hatred of “Jewish physics.” This attitude was abetted by two prominent and anti-Semitic German scientists, Philipp Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan science.” The result effectively prevented a successful German nuclear program.

Returning to the Google Gemini disaster, one only needs to look at the attitude of the Google leadership to see the roots of the debacle. Google CEO Sundar Pichai is a naturalized U.S. citizen whose undergraduate college education was in India before he came to the United States. His ties to India remain close, as he was awarded the Padma Bhushan, India’s third-highest civilian award, in 2022.

In congressional hearings in 2018, Pichai seemed to dance around giving direct answers to explicit questions, a trait he demonstrated again in 2020 and in an antitrust court case in 2023.

His internal memo after this year’s Gemini disaster mentioned nothing about who selected the people in charge of the prompt engineering, who supervised those people, or who, if anyone, got fired in the aftermath. More importantly, Pichai made no mention of the internal communications functions that allowed the Gemini train wreck to occur in the first place.

Again, there is epiphany here. Bias from the top affects outcomes.

As Xi Jinping continues his move toward autocratic authoritarian rule, he brings his own biases with him. This will eventually affect, or more precisely infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world-class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read: AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities, and promote combat-oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that points to an exploitable weakness.

Target chips with energy weapons

Artificial intelligence depends on extremely fast computer chips whose capacities are approaching their physical limits. They are more and more vulnerable to lack of cooling — and to an electromagnetic pulse.

In the case of large cloud-based data centers, cooling is essential. Water cooling is cheapest, but pumps and backup pumps are usually not hardened, nor are the inlet valves. No water, no cooling. No cooling, no cloud.

The same goes for primary and secondary electrical power. No power, no cloud. No generators, no cloud. No fuel, no cloud.

AI robots in the form of autonomous airborne drones, or ground mobile vehicles, are moving targets — small and hard to hit. But their chips are vulnerable to an electromagnetic pulse. We’ve learned in recent times that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-power microwave systems such as Epirus’ Leonidas and the Air Force’s THOR can burn out AI systems at a range of about three miles.

Another interesting technology, not yet fielded, is the gyrotron, a Soviet-developed, high-power microwave source that is halfway between a klystron tube and a free electron laser. It creates a cyclotron resonance in a strong magnetic field that can produce a customized energy bolt with a specific pulse width and specific amplitude. It could therefore reach out and disable a specific kind of chip, in theory, at greater ranges than a “you fly ’em, we fry ’em” high-power microwave weapon, now in the early test stages.
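For readers who want the physics behind that claim, the resonance a gyrotron exploits is the electron cyclotron frequency, set by the strength of its magnetic field. The relation below is the standard non-relativistic formula; the numbers are illustrative, not the specifications of any fielded or proposed weapon.

```latex
% Electron cyclotron frequency that sets a gyrotron's output frequency.
% Values below are illustrative only.
\[
  f_c \;=\; \frac{eB}{2\pi m_e}
  \;\approx\; 28\ \text{GHz} \times \frac{B}{1\ \text{T}}
\]
% A magnet of roughly 3.6 T therefore puts the output near 100 GHz,
% in the millimeter-wave band where gyrotrons typically operate.
```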

Obviously, without functioning chips, AI doesn’t work.

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack on the United States.

Thus, the need to develop AI countermeasures now is paramount.

In World War I, the great Italian progenitor of air power, General Giulio Douhet, very wisely observed: “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by artificial intelligence as it applies to warfare, Douhet’s words could not be truer today.

Chuck de Caro

Chuck de Caro was CNN's very first special assignments correspondent. Educated at Marion Military Institute and the U.S. Air Force Academy, he later served with the 20th Special Forces Group (Airborne). He has taught information warfare at the National Defense University and the National Intelligence University. He was an outside consultant for the Pentagon’s Office of Net Assessment for 25 years.