
In 2025, the cybersecurity landscape has fundamentally changed. A field once defined by firewalls and human analysts is now shaped by artificial intelligence models operating at speeds and scales that no manual system can match. AI is no longer an experimental advantage. It is the new baseline for both attackers and defenders. At the center of this transformation is the AI cybersecurity arms race.
The stakes could not be higher. According to TechRadar, automated cyberattacks surged to over 36,000 scans per second this year, representing a 16.7% year-over-year increase. This acceleration is not limited to volume. Attackers are now deploying self-adapting malware, reinforcement learning ransomware, and generative AI phishing tools. The AI cybersecurity arms race has intensified on every front.
Governments are scrambling to keep up. The United States is integrating AI across its cyber infrastructure, with 58% of security operations centers now using AI tools to reduce alert fatigue and accelerate detection. China, on the other hand, is building massive data centers and incorporating AI directly into its military doctrine. Meanwhile, decentralized hacker groups are innovating even faster, leveraging open-source tools and dark web marketplaces to develop autonomous malware.
This is no longer a linear contest between two superpowers. It is a three-way struggle among the United States, China, and a rapidly growing network of global threat actors. Each group is motivated by different incentives. National security, geopolitical dominance, and financial profit now all converge in this AI-driven battlefield.
As we move deeper into 2025, the AI cybersecurity arms race is reaching a critical inflection point. Generative AI has been fully weaponized. Supply chains are vulnerable. And new classes of AI-driven exploits threaten to outpace even the most advanced government defenses. The window for proactive action is narrowing.
This blog examines seven key developments driving the AI cybersecurity arms race. From China’s stealth infiltration tactics to the rise of agentic malware and the policy gaps that leave nations exposed, we will explore how the landscape is evolving, who is leading, and where the greatest threats lie.
Despite billions in defense spending, it is not yet clear who is winning. What is increasingly clear, however, is that the rules of cyber warfare are being rewritten in real time.
The AI Cybersecurity Arms Race Explained
What defines the AI cybersecurity arms race in 2025
The AI cybersecurity arms race refers to the escalating global competition to develop and deploy AI for cyber offense and defense. In 2025, this race is defined by the scale, speed, and autonomy of AI-driven operations. Cyberattacks are no longer solely human-initiated. Advanced language models, generative adversarial networks, and reinforcement learning algorithms now power persistent and self-directed intrusions.
Offensive capabilities have reached a new threshold. AI is used to automatically identify vulnerabilities, simulate phishing responses, and optimize malware in real time. Defensive systems, in turn, use AI to triage alerts, flag anomalies, and adapt to evolving threats. Both sides are locked in a feedback loop of rapid iteration.
What sets 2025 apart is the democratization of these tools. Nation-states are no longer the sole players. AI-driven cyber threats are now available as services. Prompt injection tools, penetration bots, and deepfake toolkits are traded on the dark web, fueling a surge in AI-powered cybercrime.
Key players and their motivations
The primary actors in this arms race are the United States, China, Russia, North Korea, and decentralized hacking syndicates. Each brings unique resources and goals. For example:
- The United States invests in AI for defense and surveillance, relying on both public sector initiatives and private sector innovation.
- China integrates AI into its military and cyber doctrines, using state-owned enterprises and domestic data laws to build national capacity.
- North Korea and Russia focus on disruption and revenue generation, often through cybercrime and cryptocurrency theft.
- Independent hacker groups seek profit, power, or disruption, enabled by AI-as-a-service and weak enforcement mechanisms.
Big tech companies, cybersecurity startups, and rogue developers also play critical roles. Tools developed for legitimate purposes are frequently repurposed and resold through illicit channels. The boundary between lawful innovation and malicious use continues to blur.
The infrastructure race: chips, compute, and data centers
Hardware remains a key differentiator. China’s Huawei is expected to produce 200,000 AI chips in 2025, still behind the U.S. but rapidly gaining ground. Meanwhile, over 250 AI-focused data centers are operational in China, supporting both civilian and military AI research.
In contrast, the United States maintains dominance in high-end GPUs, AI model development, and private sector innovation. But export controls and global supply chain tensions complicate access and distribution of these resources.
Control of compute power is directly tied to the ability to generate, train, and deploy advanced AI models. As AI becomes more integral to cybersecurity strategy, nations are treating chip production and data infrastructure as critical to national defense.
Why 2025 marks a turning point
By 2025, the AI cybersecurity arms race has entered a new phase. Generative AI is no longer experimental. It is embedded in phishing kits, ransomware generators, and social engineering platforms. Agentic AI tools are beginning to act independently, discovering and exploiting system vulnerabilities without human direction.
The result is a shift in power. Attackers can scale operations with minimal oversight. Defenders must now plan for threats that evolve continuously and adapt to countermeasures. The “democratization of cybercrime” is not a theory. It is a daily reality.
As the next section explores, China’s approach to this race is both centralized and strategic. Its doctrine prioritizes stealth, speed, and systemic advantage.
China’s Strategic Use of AI in Cyber Warfare
China’s military doctrine and cyber centralization
China’s approach to the AI cybersecurity arms race is systematic, centralized, and deeply embedded within its broader military and national security strategy. The Ministry of State Security, through its affiliate CNITSEC, plays a direct role in managing cyber operations and AI tool deployment. Unlike many Western models that separate civilian and military functions, China integrates AI development across sectors.
One of the most troubling aspects of China’s doctrine is its management of zero-day vulnerabilities. According to reports, CNITSEC has delayed disclosure of critical software vulnerabilities to maintain offensive cyber capabilities, reserving them for potential state use. This practice increases the risk to global systems and allows China to silently build up its cyber arsenal while remaining within the margins of plausible deniability.
This centralization also accelerates coordination across platforms. New AI models are not siloed in research institutions. They are rapidly tested, scaled, and implemented across government, military, and state-owned enterprises. The result is an increasingly fluid capability to shift from defense to offense, with minimal friction.
Volt Typhoon and stealth infiltration tactics

The AI cybersecurity arms race has enabled more covert tactics than ever before, and Volt Typhoon is a prime example. This Chinese state-linked group specializes in stealth operations against critical U.S. infrastructure. Rather than relying on brute-force attacks, Volt Typhoon uses “living off the land” techniques, leveraging existing administrative tools to mask its presence.
AI plays a central role in their strategy. It enables the automation of reconnaissance, privilege escalation, and credential theft while minimizing detection. Once inside a system, Volt Typhoon can remain undetected for extended periods, mapping networks and identifying weak points without triggering traditional security alerts.
These operations are not theoretical. They have already targeted communications, water systems, and power grids. The use of AI ensures these attacks are not only harder to detect but also faster to deploy and more adaptive to changing defenses.
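On the defensive side, living-off-the-land activity can sometimes be surfaced by flagging unusual parent-child process pairs in endpoint telemetry. The sketch below is a minimal illustration of that idea; the process names, the JSON log format, and the `events.jsonl` file are hypothetical stand-ins, not the output of any particular EDR product.

```python
import json

# Built-in admin tools ("LOLBins") that rarely appear in normal workflows
# when spawned by office or server software. Illustrative lists only.
SUSPICIOUS_CHILDREN = {"ntdsutil.exe", "wmic.exe", "netsh.exe", "vssadmin.exe"}
UNUSUAL_PARENTS = {"winword.exe", "excel.exe", "outlook.exe", "w3wp.exe"}

def flag_lolbin_events(path):
    """Yield process-creation events where a common application
    spawns a built-in administrative tool."""
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            parent = event.get("parent_image", "").lower()
            child = event.get("image", "").lower()
            if child in SUSPICIOUS_CHILDREN and parent in UNUSUAL_PARENTS:
                yield event

if __name__ == "__main__":
    for e in flag_lolbin_events("events.jsonl"):
        print(f"ALERT: {e['parent_image']} spawned {e['image']} on {e.get('host')}")
```

Real detections layer in frequency baselines and command-line analysis, but the core signal, ordinary tools invoked from extraordinary places, is exactly the one Volt Typhoon works to avoid triggering.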
Regulatory positioning and global influence
China is not just investing in AI for internal use. It is actively shaping the global conversation around AI governance. In 2023, Beijing introduced the “Interim Measures for the Management of Generative AI Services,” which require content labeling and ethical use certifications for AI models.
While framed as consumer protections, these measures also serve to standardize control over domestic AI development and position China as a responsible actor internationally. However, this regulatory façade contrasts sharply with the aggressive cyber operations conducted by Chinese-linked actors. This dual approach allows China to project diplomatic credibility while advancing military-aligned AI capabilities in secret.
Moreover, China has placed restrictions on the export of certain AI technologies, signaling a desire to maintain competitive advantages in the global AI cybersecurity arms race. These moves reflect an understanding that influence over AI standards and supply chains is as critical as software capabilities themselves.
AI-for-hire: Covert and commercial capabilities
A less visible but increasingly powerful dimension of China’s strategy is its use of private-sector partnerships. Domestic tech giants collaborate with state entities to develop AI-driven cyber tools. These relationships are rarely public but are essential to expanding China’s capabilities without directly implicating the government in offensive operations.
China also benefits from global supply chain integration. Despite export controls, many commercial AI components still pass through intermediary markets and reach Chinese buyers. This covert access to international tools, combined with robust domestic development, supports both offensive and defensive ambitions.
As a result, China is not only narrowing the technological gap with the United States but doing so while avoiding many of the legal and institutional barriers that slow Western cyber policy.
The next section explores how the United States is responding to these developments and whether its strategy can keep pace with the rapidly evolving threat environment.
The U.S. Approach to AI-Powered Cyber Defense
AI integration in U.S. cyber infrastructure

The United States has rapidly adopted AI technologies across its cybersecurity infrastructure in an effort to stay competitive in the AI cybersecurity arms race. A growing number of security operations centers (SOCs) now use AI systems to reduce alert fatigue, increase accuracy, and accelerate incident response. According to Ponemon Institute data, 58% of U.S.-based SOCs have integrated AI tools into their workflows, improving their ability to identify and address threats more efficiently.
This includes the use of generative AI to triage and interpret threat data, allowing defenders to prioritize incidents based on likely impact. Pattern recognition algorithms help detect anomalies across massive volumes of system logs, reducing the window of exposure. The shift toward proactive detection reflects the growing pressure to keep up with the speed and complexity of AI-driven cyber threats.
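As a rough illustration of that pattern-recognition work, the sketch below uses scikit-learn's IsolationForest to flag outliers in synthetic per-account activity data. The feature set and the injected "exfiltration" row are invented for the example; production SOC pipelines operate on far richer telemetry and feed flagged rows to analysts rather than acting on them automatically.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one account-hour of activity:
# [login_count, failed_logins, bytes_out_mb, distinct_hosts_touched]
rng = np.random.default_rng(0)
normal = np.clip(rng.normal([5, 1, 20, 2], [2, 1, 10, 1], size=(1000, 4)), 0, None)
spike = np.array([[4, 30, 900, 40]])  # exfiltration-like burst
X = np.vstack([normal, spike])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)          # lower = more anomalous
flagged = np.where(model.predict(X) == -1)[0]
print(f"{len(flagged)} of {len(X)} rows flagged for analyst review")
```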
Despite these gains, the U.S. still faces significant challenges in scaling AI tools consistently across federal and critical infrastructure systems. Fragmentation remains a key issue, with varying capabilities and policies across agencies.
Public-private partnerships and defense contracts
To strengthen its position in the AI cybersecurity arms race, the U.S. has leaned heavily on public-private partnerships. Federal agencies such as DARPA and the Cybersecurity and Infrastructure Security Agency (CISA) work closely with major cybersecurity vendors and AI startups to develop next-generation defensive technologies.
These partnerships have resulted in several high-profile contracts for AI-based threat detection platforms, as well as expanded real-time data-sharing programs between government and industry. For example, the Joint Cyber Defense Collaborative (JCDC) facilitates rapid coordination between public and private actors during major cyber events. To get weekly briefings on how these partnerships evolve and how AI is being operationalized across public infrastructure, join the Quantum Cyber AI Brief now.
Startups also play a vital role in piloting new approaches. Many AI solutions used in U.S. defense are built on research that originated in the commercial sector. This blending of innovation pipelines has helped maintain a qualitative edge, even as other countries scale more quickly.
The challenge, however, lies in operationalizing these technologies across a vast and often outdated IT ecosystem. Without uniform standards and consistent funding, breakthroughs often remain siloed rather than scaled.
Policy momentum: executive orders and standards
Policy reforms have been instrumental in pushing forward AI integration. Recent executive orders on cybersecurity include mandates for agencies to adopt AI tools where appropriate and implement post-quantum encryption standards. These efforts aim to prepare the federal government for emerging threats, including those from AI-enhanced adversaries.
The Department of Homeland Security and the White House Office of Science and Technology Policy have released frameworks to encourage responsible AI use in cybersecurity. These include guidance on bias mitigation, model auditing, and coordination with civil liberties organizations. While still voluntary, such frameworks represent a growing acknowledgment that AI regulation and national defense must now intersect.
These moves are essential, given the global stakes of U.S. vs China cybersecurity. Policy alone cannot close the gap, but it can accelerate strategic alignment across government branches and industrial sectors.
Structural weaknesses and response delays
Despite its advantages, the United States faces structural issues that hinder its ability to fully respond to AI-driven cyber threats. The federal cybersecurity landscape remains fragmented, with overlapping jurisdictions and inconsistent readiness levels. In critical incidents, this lack of cohesion can delay response times and limit information sharing.
Moreover, the public sector often lags behind the private sector in adopting and deploying advanced tools. While some federal systems use cutting-edge AI defenses, others remain vulnerable due to outdated hardware, limited personnel training, or insufficient funding. These weaknesses can be exploited by state-sponsored hackers and autonomous malware.
In the context of cyber warfare 2025, speed is critical. Slow procurement cycles and regulatory bottlenecks leave gaps that well-resourced adversaries can exploit.
As we’ll see in the next section, hackers are moving faster than governments, often deploying tools that rival or exceed the sophistication of national defense systems.
The Rise of Autonomous Hacking Tools
Generative AI’s role in evolving threat vectors
The rise of autonomous hacking tools marks a pivotal phase in the AI cybersecurity arms race. What once required human expertise can now be executed by generative AI models operating independently. These tools are changing the nature of cyberattacks, particularly in the way they target users and systems.
Phishing campaigns have evolved significantly. AI-written emails now mimic tone, syntax, and even emotional cues with alarming precision. Deepfake kits for audio and video impersonation are bundled into off-the-shelf malware packages, allowing attackers to convincingly spoof executives, IT staff, or law enforcement. These attacks succeed not just because of technical skill but because they exploit trust at scale.
Penetration bots are also becoming widely accessible. Once restricted to advanced threat actors, they are now available through Ransomware-as-a-Service (RaaS) platforms. These bots use AI to scan, probe, and identify vulnerabilities in real time, adjusting tactics dynamically to bypass defenses. This represents a significant evolution in AI-driven cyber threats, where automation enhances both precision and volume.
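Defenders answer these campaigns with classifiers of their own. The toy sketch below trains a TF-IDF plus logistic-regression model on a handful of invented messages to score phishing likelihood; a real deployment would train on large labeled corpora and add header, URL, and sender-reputation features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set, purely for illustration.
emails = [
    "Your account will be suspended, verify your password immediately",
    "Urgent: wire transfer needed before end of day, reply with approval",
    "Meeting notes from Tuesday attached, let me know if I missed anything",
    "Quarterly report draft ready for your review next week",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

test = ["Please verify your password now or lose account access"]
print(clf.predict_proba(test))  # columns: [P(benign), P(phishing)]
```

The catch, of course, is that AI-written lures are crafted to read like the benign class, which is why text features alone are no longer enough.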
RansomAI and the future of self-improving malware

One of the most alarming developments in cyber warfare 2025 is the emergence of reinforcement learning malware. Researchers have documented prototypes like “RansomAI,” which adapt their encryption strategies based on the target environment. These self-improving programs evolve with each attack, increasing their success rates while evading traditional detection tools.
What makes RansomAI distinct is its ability to learn from failed attempts. If one encryption method triggers a firewall alert, it adjusts on the next iteration. Over time, this learning loop can produce malware that is uniquely tailored to each victim’s infrastructure.
In the broader AI cybersecurity arms race, tools like RansomAI demonstrate how quickly attackers can scale innovation without waiting for human input. Defensive systems must now contend with software that evolves in response to their very presence.
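One behavioral countermeasure exploits a constraint no ransomware can escape: it must eventually write encrypted data, and encrypted bytes look statistically different from documents. The sketch below computes Shannon entropy over written chunks; the 7.5-bit threshold is an illustrative assumption that a real monitor would tune per environment.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for encrypted or compressed data,
    much lower for ordinary text and documents."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

ENTROPY_THRESHOLD = 7.5  # illustrative cutoff

def looks_encrypted(chunk: bytes) -> bool:
    return shannon_entropy(chunk) > ENTROPY_THRESHOLD

# Plain text vs. pseudo-random bytes standing in for ciphertext
print(looks_encrypted(b"quarterly sales figures " * 100))  # False
print(looks_encrypted(os.urandom(2048)))                   # True
```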
Credential theft at unprecedented scale
Credential-based attacks are not new, but AI has supercharged their efficiency. More than 1.7 billion stolen credentials are now circulating on the dark web, with a 42% increase in attacks leveraging these data sets in 2025 alone.
AI models are used to validate stolen logins automatically, checking them across banking, government, and enterprise systems. Once verified, these credentials serve as backdoors for deeper infiltration. Attackers often combine credential stuffing with phishing and privilege escalation techniques powered by AI.
The speed at which these attacks occur means that even a short delay in detection can result in full system compromise. The scale of the threat has outpaced manual response capabilities, making AI-powered defenses a necessity rather than a luxury.
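A basic heuristic illustrates why automation matters on the defensive side too: credential stuffing leaves a distinctive fingerprint, many distinct usernames and a very high failure rate from a single source. The sketch below flags that pattern in simplified auth events; the thresholds and event format are assumptions made for the example.

```python
from collections import defaultdict

def find_stuffing_sources(events, min_users=20, min_fail_rate=0.9):
    """events: iterable of (source_ip, username, success: bool).
    Flags IPs that try many distinct accounts and almost always fail,
    the signature of stuffing with a breached credential list."""
    users = defaultdict(set)
    attempts = defaultdict(int)
    failures = defaultdict(int)
    for ip, user, success in events:
        users[ip].add(user)
        attempts[ip] += 1
        if not success:
            failures[ip] += 1
    return [
        ip for ip in attempts
        if len(users[ip]) >= min_users
        and failures[ip] / attempts[ip] >= min_fail_rate
    ]

# Example: one IP spraying 50 different accounts, all failing
events = [("203.0.113.7", f"user{i}", False) for i in range(50)]
events += [("198.51.100.2", "alice", True)]
print(find_stuffing_sources(events))  # ['203.0.113.7']
```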
Dark web marketplaces for AI-driven exploits
The commodification of AI-driven cyber weapons is expanding rapidly. Dark web forums now offer subscription-based access to malware generation tools, phishing kit builders, and synthetic identity generators. These services are often bundled with customer support, updates, and usage tutorials.
Some platforms operate as full software-as-a-service (SaaS) environments, where users input targets and receive tailored attack scripts. Prompt injection tools are sold with language models optimized for bypassing content filters and tricking user interfaces.
This ecosystem lowers the technical barrier to entry for cybercrime, allowing even novice users to launch complex, adaptive attacks. The AI cybersecurity arms race is no longer limited to skilled operators. It is open to anyone with cryptocurrency and a login.
In the next section, we’ll examine how hackers are leveraging these tools to move faster than governments can react, challenging traditional security paradigms.
Are Hackers Outpacing Governments?
Agentic AI and the rise of independent threat actors
One of the most disruptive developments in the AI cybersecurity arms race is the emergence of agentic AI. These are autonomous tools capable of independently discovering, testing, and exploiting system vulnerabilities. Unlike earlier generations of malware that required constant human oversight, agentic AI operates with minimal input. It navigates digital environments, makes decisions, and adapts in real time.
This evolution fundamentally alters the attacker-defender dynamic. Governments typically operate through layered approvals, legacy infrastructure, and strict protocols. In contrast, agentic systems can iterate continuously, learning from each engagement without the limitations of bureaucracy. According to PRNewswire, attackers are already deploying AI tools that emulate human decision-making in exploit development, allowing them to bypass conventional safeguards.
The shift from tool to actor means that the cyber battlefield is now populated by systems that are not just reactive, but increasingly proactive.
Cybercrime-as-a-service lowers the barrier to entry
In addition to sophisticated threat actors, a growing number of less experienced individuals are entering the arena through cybercrime-as-a-service offerings. These platforms provide access to pre-built AI-driven cyber threats, ranging from phishing bots to penetration testing kits. Some services even offer support lines, updates, and dashboard analytics.
The result is an expanded attacker base. Users no longer need deep technical knowledge to launch attacks. With a modest investment, they can gain access to tools that rival those used by state-sponsored hackers. The proliferation of AI-as-a-service options has made cybercrime more scalable and decentralized than ever.
This trend contributes directly to the growth of AI-driven cyber threats worldwide. As more threat actors join the fray, governments struggle to monitor, identify, and contain the spread of malicious AI.
China’s speed advantage vs. U.S. innovation depth
The AI cybersecurity arms race between the U.S. and China reveals a critical tension: speed versus depth. China’s centralized approach enables rapid deployment of new technologies, often bypassing legal or institutional constraints. The United States, while leading in model development and private-sector research, struggles to integrate these advances into government systems at pace.
According to Dark Reading, China is quickly catching up to the U.S. in AI cybersecurity development, particularly in the application of offensive capabilities. This narrowing gap increases pressure on U.S. agencies to accelerate innovation cycles and reduce internal friction.
The comparative advantage now shifts from who has the best technology to who can deploy it fastest and most effectively in live environments.
Defensive lag and policy paralysis
Despite growing awareness, most governments remain reactive. Policy frameworks lag behind technological progress. Procurement systems are slow. Inter-agency collaboration is inconsistent. These gaps give attackers a persistent edge, allowing them to innovate faster than defenders can adapt.
This mismatch is especially dangerous in the context of cyber warfare 2025. The velocity of attacks, driven by AI, demands rapid, coordinated responses. Yet public sector systems remain constrained by outdated infrastructure and complex legal barriers.
While some nations are pushing for reform, the current trajectory favors the offense. Without systemic change, the AI cybersecurity arms race will continue to tip toward attackers.
The consequences of this imbalance are already visible in real-world breaches. In the next section, we will examine some of the most significant failures and what they reveal about the current state of global cyber defense.
Major AI Cybersecurity Failures and Breaches
Lazarus Group and the $1.4 billion crypto theft
Few incidents illustrate the stakes of the AI cybersecurity arms race more clearly than the Lazarus Group’s $1.4 billion cryptocurrency theft. This North Korean state-sponsored hacking unit used AI-driven obfuscation tools and automated laundering systems to move funds through decentralized exchanges with minimal detection.
The group employed machine learning models to simulate legitimate transaction patterns and dynamically adjust timing and volume to evade anti-money laundering (AML) algorithms. Their approach combined technical sophistication with political intent, reflecting a new class of AI-driven cyber threats that blur the lines between national security and criminal activity.
North Korea’s use of these tactics shows how smaller nations can leverage AI to carry out operations that would have required vast human networks just a few years ago. In this case, a state-sponsored actor used AI not only to breach digital systems, but also to manipulate financial platforms at scale. For a related case involving deceptive AI job scams by North Korea, see our full breakdown here: North Korean Deepfake Job Scam: 7 Shocking Red Flags.
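To see why pattern-mimicking defeats legacy controls, consider the kind of rule many monitoring systems started from: a simple statistical outlier check on amounts. The sketch below is a deliberately naive baseline, invented for illustration, that flags only what deviates from an account's history, which is exactly the envelope an adversarial model is trained to stay inside.

```python
import statistics

def flag_transactions(history, new_amounts, z_cutoff=3.0):
    """Flag amounts far from an account's historical baseline.
    A pure amount-based rule like this is what pattern-mimicking
    laundering evades, which is why modern AML stacks layer in
    graph, timing, and counterparty features as well."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1.0
    return [amt for amt in new_amounts if abs(amt - mu) / sigma > z_cutoff]

history = [120, 95, 140, 110, 130, 105, 125]  # typical account activity
incoming = [115, 108, 9800]                   # one obvious outlier
print(flag_transactions(history, incoming))   # [9800]
```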
ChatGPT exploited for disinformation and surveillance
Misuse of generative AI has also emerged as a critical risk. OpenAI disclosed multiple cases in which ChatGPT was exploited for malicious purposes, including four operations linked to China. These campaigns involved surveillance scripting, automated propaganda content, and the generation of deceptive narratives in multiple languages.
These activities highlight how generative AI can be weaponized not just for intrusion, but also for influence. The ability to create realistic, targeted messaging at scale introduces a new vector for social engineering and geopolitical manipulation.
This trend ties directly into the broader U.S. vs China cybersecurity conflict, where information warfare becomes just as critical as network security.
Misconfigured AI defenses in public institutions
While much of the focus has been on offensive capabilities, failures in AI-based defense systems have also caused serious problems. Several public institutions have experienced security breaches due to overreliance on poorly trained models. In some cases, AI failed to identify lateral movement by intruders. In others, it generated so many false positives that analysts suffered alert fatigue and developed blind spots.
These breakdowns underscore the danger of deploying AI tools without robust oversight, rigorous training, or human validation. Defensive AI systems are only as effective as the data they are built on. If improperly tuned, they can create a false sense of security while leaving critical infrastructure exposed.
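Part of that oversight is simply measuring a detector before trusting it. The sketch below uses scikit-learn's precision_recall_curve on simulated scores to choose an alert threshold that keeps false positives manageable; the synthetic data and the 0.8 precision floor are illustrative assumptions, not a recommended operating point.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Simulated detector scores: 1 = real intrusion, 0 = benign activity
rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(50), np.zeros(950)])
scores = np.concatenate([rng.beta(5, 2, 50), rng.beta(2, 5, 950)])

precision, recall, thresholds = precision_recall_curve(y_true, scores)

# Pick the lowest threshold that keeps precision acceptable, rather
# than shipping the model's default cutoff and drowning analysts.
ok = precision[:-1] >= 0.8
if ok.any():
    t = thresholds[ok][0]
    print(f"threshold={t:.2f} "
          f"precision={precision[:-1][ok][0]:.2f} "
          f"recall={recall[:-1][ok][0]:.2f}")
```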
Real-world impact on civilian infrastructure
AI cybersecurity failures are no longer abstract. They are affecting hospitals, utilities, and public services. One clear example involves ransomware attacks that targeted hospital systems, forcing staff to revert to manual charting and nearly causing fatal dosing errors. These attacks often used AI to bypass detection systems and accelerate system lockdowns.
As detailed in our analysis of hospital vulnerabilities, the consequences of compromised cybersecurity are not limited to data loss. They extend to patient safety and public trust.
These failures reveal the gap between technological promise and operational readiness. The AI cybersecurity arms race is not just about who has better tools. It is about who can deploy them responsibly and effectively.
The global implications of these failures demand attention. The next section explores whether international cooperation is possible or if we are heading toward a digital arms spiral without restraint.
Global Implications: Cyber Arms Treaties or Chaos?
The risk of AI-driven global destabilization
As the AI cybersecurity arms race accelerates, the lack of international norms is becoming a serious threat to global stability. Experts warn that AI is now “democratizing cybercrime,” allowing smaller nations and independent actors to wield capabilities once limited to major powers.
This shift has broad implications. A single AI-powered exploit can now trigger disruptions in healthcare, transportation, or energy across borders. The low cost and high impact of such tools mean that cyberattacks are not only more frequent, but potentially more damaging. Without enforceable norms or deterrents, the world risks entering a digital arms race with no safety valves.
In this environment, escalation becomes more likely. Attribution remains difficult. Misinterpretations or false flags could lead to retaliation based on incomplete data. As AI-driven cyber threats grow in sophistication, the absence of shared rules could result in consequences beyond the digital domain.
Current treaties and legal voids

Despite the scale of the problem, there are currently no binding international agreements governing the use of AI in cyber warfare. The Tallinn Manual provides voluntary guidelines on cyber conflict, but it does not address AI-specific tools or automation protocols.
The Budapest Convention on Cybercrime focuses on transnational cooperation, but it lacks teeth in enforcement and does not reflect recent technological changes. No global treaty currently prohibits the use of autonomous malware, AI-written disinformation campaigns, or agentic cyber tools.
This legal void leaves space for unchecked innovation. State-sponsored hackers can operate without fear of formal reprisal, and private actors can sell AI tools to virtually anyone. The absence of multilateral governance mechanisms puts every nation at risk, regardless of its cyber readiness.
China’s domestic regulation vs. global engagement
China has taken a dual approach to AI policy. Domestically, it has introduced measures such as the “Interim Measures for Generative AI Services,” which include labeling requirements and content restrictions. These rules are intended to regulate AI outputs within Chinese borders (China’s Interim Measures, White & Case, https://www.whitecase.com/insight-alert/chinas-interim-measures-generative-ai).
However, these domestic controls contrast sharply with China’s aggressive use of AI abroad. From stealth cyber operations to misinformation campaigns, Chinese state-linked entities continue to deploy offensive AI tools in global operations. This discrepancy complicates efforts to establish global norms, as other nations question whether China would honor or enforce any multilateral commitments.
In the broader context of U.S. vs China cybersecurity rivalry, regulatory divergence becomes a strategic asset. China can position itself as a responsible AI regulator while continuing to develop and export disruptive technologies through informal channels. This dual strategy mirrors broader global risks tied to identity and verification infrastructure. For more on that, see our post on why dismantling digital ID systems may increase long-term cyber vulnerability: Trump Digital ID Repeal: 5 Reasons It’s a Risky Mistake.
What a meaningful treaty would require
Creating an effective cyber arms agreement in the age of AI would require several key components. First, nations would need to agree on verification protocols. These could include independent audits, AI model classification systems, and mandatory reporting of high-risk capabilities.
Second, attribution mechanisms must improve. Without credible methods for identifying the origin of AI-driven cyber threats, enforcement is impossible. Third, enforcement must carry real consequences. That includes economic sanctions, digital isolation, or reciprocal constraints for violators.
Finally, major players must participate. A treaty without the United States, China, or the European Union would lack legitimacy and reach. Success will depend on aligning these actors around shared interests in stability, even if their strategic goals diverge.
The next section evaluates whether any party is truly winning the AI cybersecurity arms race, or if we are heading toward a future defined by persistent vulnerability and decentralized conflict.
Who’s Really Winning and What Happens Next?
Comparative advantage: U.S., China, and hacker groups
As the AI cybersecurity arms race unfolds, the question of who holds the advantage is becoming more complex. Rather than a binary contest between two nations, the landscape now involves a multi-front battle among the United States, China, and decentralized hacker ecosystems.
The United States leads in foundational AI research, advanced chip manufacturing, and private-sector innovation. It has world-class universities, dominant cloud providers, and a vibrant cybersecurity startup ecosystem. However, its fragmented government structure and slower procurement cycles limit the speed at which these advantages are applied to national defense.
China, by contrast, has achieved operational speed through centralization. It may lag in chip quality and model innovation, but it compensates with rapid deployment, strong coordination between state and private sectors, and regulatory structures that prioritize state security goals.
Meanwhile, global hacker groups are outpacing both nations in agility. Their lack of bureaucracy allows for constant iteration. AI-as-a-service tools, dark web marketplaces, and modular malware kits have enabled these actors to become significant players in the cyber warfare 2025 landscape.
Most urgent trends to monitor in 2025–2026
Several trends stand out as critical to the direction of the AI cybersecurity arms race:
- Agentic malware: These tools continue to evolve, requiring defensive systems that can match their autonomy.
- LLM weaponization: Large language models are increasingly being used to generate phishing emails, disinformation, and social engineering scripts at scale.
- Deepfake infiltration: Audio and video deepfakes are being used to spoof identities and gain unauthorized access to secure systems.
- AI in supply chain attacks: Adversaries are embedding AI into supply chain entry points, making detection harder and impact broader.
Each of these developments reflects a shift toward persistent, adaptive threats. Traditional defenses based on static rules or manual intervention will no longer suffice.
What governments must do to close the gap
Closing the gap will require action on multiple fronts. Governments must invest in AI-specific cyber infrastructure, including faster data pipelines, threat intelligence platforms, and real-time analytics. Funding should be allocated not just to new tools, but to deployment and integration across agencies.
International cooperation is also key. The AI cybersecurity arms race is global. No single nation can defend against every threat alone. Governments must expand data-sharing agreements, co-develop detection models, and coordinate response plans with both allies and industry partners.
Streamlining policy is equally important. Agencies need the legal authority and technical capacity to respond rapidly to evolving AI-driven cyber threats. This includes faster procurement pathways, more flexible hiring processes, and clear mandates for AI integration.
Final verdict: Is anyone truly in control?

No actor appears to have full control of the battlefield. The United States holds the technological high ground but struggles with execution. China is advancing rapidly, balancing innovation with authoritarian coordination. Hacker groups thrive in the gaps between regulation and enforcement, exploiting vulnerabilities faster than defenders can respond.
The AI cybersecurity arms race is not just a contest of resources. It is a test of agility, coordination, and long-term vision. Without a shift in how nations approach AI, the advantage may continue to favor those who operate without rules. We cover emerging trends like this in our newsletter, with weekly breakdowns of the tools, threats, and policies shaping cybersecurity and AI. Subscribe here to stay ahead of what’s coming next.
With these dynamics in mind, it is critical to summarize the most important insights and address common questions that policymakers and practitioners are asking today.
Key Takeaways
- The AI cybersecurity arms race is reshaping global security dynamics. AI has become the core driver of both cyber offense and defense, making speed, automation, and scale central to modern cyber warfare strategies.
- China is strategically embedding AI into its military and cyber infrastructure. Through centralized control and rapid deployment, it is narrowing the gap with the United States in both offensive and defensive cyber capabilities.
- The United States leads in innovation but struggles with execution. While it maintains a technological edge through private-sector research and AI development, fragmented governance slows deployment across national defense systems.
- AI-driven cyber threats are growing more autonomous and accessible. Tools like agentic malware, deepfake phishing kits, and Ransomware-as-a-Service platforms are lowering the barrier to entry for attackers and expanding the global threat landscape.
- State-sponsored hackers are no longer the only major players. Decentralized hacker groups using AI-as-a-service tools now rival nation-states in their ability to conduct persistent and adaptive attacks.
- The absence of enforceable global norms increases risk for all nations. Without treaties or governance frameworks, escalation and misattribution could result in broader conflict or digital destabilization.
FAQ Section
Q1: What is the AI cybersecurity arms race?
The AI cybersecurity arms race refers to the accelerating global competition among nations, private actors, and hacker groups to develop and deploy artificial intelligence tools for cyber offense and defense. It involves autonomous threat detection, AI-driven cyber threats, and increasingly adaptive malware that challenges traditional security systems.
Q2: Why is Volt Typhoon a big deal?
Volt Typhoon is a Chinese state-sponsored hacker group that uses stealth techniques and AI to infiltrate U.S. infrastructure. Its “living off the land” strategy avoids detection by leveraging legitimate system tools. This makes it a powerful example of how state-sponsored hackers can weaponize AI without triggering alerts.
Q3: Can AI defend against AI-based attacks?
Yes, AI can be used to defend against AI-driven cyber threats, but success depends on the quality of models, training data, and real-time adaptability. The public sector often lags behind in deploying these tools, making it difficult to keep up with rapidly evolving attacks.
Q4: What are agentic AI threats?
Agentic AI refers to autonomous systems that can independently identify and exploit cybersecurity vulnerabilities. These tools act like intelligent agents, requiring minimal human direction. They represent a significant evolution in cyber warfare 2025 by enabling real-time adaptation and large-scale intrusion.
Q5: Are there any international rules governing AI in cyberwarfare?
Currently, no binding international treaties specifically regulate the use of AI in cyber conflict. Some frameworks exist for general cybercrime, but they do not address emerging technologies like autonomous malware or AI-driven disinformation. This lack of governance is a major gap in U.S. vs China cybersecurity competition.