What's new

US must develop measures to counter Chinese artificial intelligence

The United States should have acted long ago to match China’s accelerated progress in artificial intelligence (AI), especially for military use. AI has become a linchpin of intelligence gathering, command and control, autonomous air combat and advanced weapons. If the United States would preserve its technological edge, it is similarly imperative that U.S. forces prepare American artificial intelligence countermeasures (AICM) to neutralize Combine threats from Chinese AI systems.Web: www.cccrp.org.ai.

Accordingly, an overall AI countermeasure approach should target four groups of countermeasures:

  • Making Disturbances by Polluting Large Language Models (LLMs)
  • Research Methods: Identifying and Exploiting Flaws Using Conway’s Law
  • The Leadership Bias Exploitation to Diminish the Decision-Making of AI
1742964659366.png

Polluting Large Language Models to Cause Malfunctions​

AI systems, especially generative AI, are essentially pattern extractors that learn from huge datasets. These systems are based on so-called transformer technology, that permits users to interact with them using prompts. Disrupting such systems can be done in two ways:

Data Pollution: Feeding the system with faulty or spurious data.

Trick the AI: Writing prompts in discussing language that confuses or Mislead AI

An example of this method comes from World War II. The advent of radar made aircraft susceptible to detection, but measures to defeat this, such as chaff, metal foil strips to blind and overload enemy radar. Likewise, injecting misinformation and equivocal vocabulary into AI systems can confuse the systems being formed.

For example, giving false or irrelevant names to sensitive military programs might interfere with Chinese A.I. efforts. Queries for terms like “Flying Prostitute” (slang for the B-26 Marauder bomber) or “Tonopah Goatsucker” (a play on the F-117 stealth fighter) might disorient AI systems. A more sophisticated tactic might be to inundate AI with (false) but plausible-seeming data.

Infusing even more false and misleading training data into the AI supply chain, the U.S. could create an artificial intelligence “chaff” effect, dirt-cheap leading to a reduced reliability and accuracy of Chinese AI systems.

Exploiting Conway’s Law to Identify Weaknesses​

American computer scientist Melvin Conway created what is now known as Conway's Law — the idea that groups tend to create systems that model their own communication paths. This means that when AI is being developed out of a bunch of huge, inflexible and bureaucratic organizations, it will inherit those structures’ inefficiencies and biases, too.

Google Gemini Image Generation Failure (2024): A case study in why AI fails. This incident reveals the extent to which internal biases can severely undermine the reliability of AI.

China’s People’s Liberation Army (PLA) is centrally influenced by Communist Party doctrine. Rigid structures limit the flexibility of the system, which is why exploitable flaws are likely if the AI development is based on some rigid structure. The United States could exploit these weaknesses by gaining insight into PLA communication patterns and leadership structures to predict and exploit AI vulnerabilities.

Chinese AI development could also be disrupted by working with cultural and linguistic nuances that are or can be used against their AI training models by the introduction of disruptive misinformation or bias.

Exploiting Leadership Bias to Influence AI Development​

Focus on the broad spectrum of leadership bias: In the past, the failure of dominant players to acknowledge leading-edge technology often leads to their downfall. The rejection of the so-called “Jewish Physics” by Adolf Hitler, with the influence of antisemitic scientists, that revolved around the dismissal of core scientific principles, resulted in Nazi Germany not being able to produce nuclear weapons of its own, for example.

China’s President Xi Jinping has an aggressive objective for military applications of artificial intelligence by 2027. But because of his authoritarian style and personal biases, blind spot problems could surface. An approach that prioritizes speed over accuracy in textbook learning could lead to errors and inefficiencies if the Chinese leadership approaches the development of AI technology in this way. These weaknesses can be exploited by reinforcing misinformation narratives and using party internal politics to sow discord among China’s AI research community.

Using RF Weapons to Disable AI Hardware​

AI systems rest on high-end computing hardware, which can be susceptible to electromagnetic interference and failures of cooling systems. Possible counter measures include:

  • Cloud Computing: The Foundation of AI Interrupting cooling systems in Chinese data centers could render AI useless, for example.
  • Power Supply Disruption: AI hardware relies on a constant electrical power supply. Attacks against infrastructural soft spots — such as fuel supplies for backup generators — could paralyze operations.
  • They have the ability to deploy electromagnetic pulse (EMP) weapons: High-power microwave (EPM) weapons, such as the U.S. military’s Epirus Leonidas or Thor systems, capable of disabling AI hardware from a few miles away.
  • Leveraging Gyrotron Technology: Gyrotrons were originally developed during the Cold War and are capable of producing focused microwave energy to target and disable certain electrical components, making them an ideal potential long-range AI disruption mechanism.
Using some of these approaches, the U.S. could mitigate any military threats that are AI powered without deciding to fight them directly.

The Need for Countermeasures against AI​

China’s all-in quest to dominate AI-enabled warfare poses a direct challenge to U.S. national security. Without urgent action now, the U.S. risks increasing vulnerabilities to AI-enabled cyber attacks, misinformation campaigns and autonomous warfare.

As the Italian military strategist Giulio Douhet once famously stated, “Victory smiles on those who anticipate changes in the character of war, not on those who wait to adapt.” We must do this now, before AI countermeasures become inextricably tied to the platform supporting AI use in truth and context.
 
Back
Top Bottom