The role of AI in future warfare

By Michael E. O’Hanlon

November 29, 2018


This report is part of “ A Blueprint for the Future of AI ,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

To illustrate how artificial intelligence (AI) could affect the future battlefield, consider the following scenario based on my forthcoming book, The Senkaku Paradox: Risking Great Power War over Limited Stakes. The scenario, imagined to occur sometime between now and 2040, begins with a hypothesized Russian “green men” attack against a small farming village in eastern Estonia or Latvia. Russia’s presumed motive would be to sow discord and dissent within NATO, weakening the alliance. Estonia and Latvia are NATO member states, and thus the United States is sworn to defend them. But in the event of such Russian aggression, a huge, direct NATO response may or may not be wise. Furthermore, the robotics and AI dimensions of this scenario, and of a number of others like it, will likely grow more significant as the years go by.

A hypothetical scenario in which Russia creates a pretext to slice off a piece of an eastern Baltic state, occupying it in purported “defense” of native Russian speakers there, could cause enormous problems if NATO chose to reverse the aggression. In that event, it could require a massive deployment of Operation Desert Storm-like proportions to liberate the territory while facing down any Russian reinforcements that might be sent. In a less successful case, Russia could interdict major elements of that attempted NATO deployment through some combination of cyberattacks, high-altitude nuclear bursts causing electromagnetic pulse, targeted missile or aerial strikes on ports and major ships, and perhaps even an “escalate to de-escalate” series of carefully chosen nuclear detonations against very specific targets on land or sea. 1  While the latter concept of nuclear preemption is not formally part of Russian military doctrine, it could influence actual Russian military options today. 2 Alternatively, the NATO deployment could succeed, only to face subsequent Russian nuclear strikes once evidence of NATO’s conventional superiority on the Baltic battlefields had presented Moscow with the Hobson’s choice of either escalating or losing. 3

By 2040, some aspects of this kind of scenario could improve for American and NATO interests. The clarity and perhaps the scale of NATO’s security commitments to the Baltic states might have strengthened, reducing the chances of deterrence failure in the first place and improving the initial capacity for resistance to any Russian aggression. 4 But on balance, technological innovation, including advancements in robotics and AI, makes it quite possible that things could also get worse.


Most aspects of the nuclear situation are unlikely to change. Missile defenses may improve, and may include lasers for point defense in some places. These laser defenses could help protect ships or ports or airfields against various types of attack. But because such laser weapons inevitably fall off rapidly in power (as the square of the distance between the weapon and its target), it will be challenging for missile defenses to provide area protection. Thus, while it is at least conceivable that ports and airfields could become much better protected, it is hard to escape the prediction that rail lines, road networks involving large numbers of bridges, tunnels, or elevated routes, and large concentrations of supplies in depots or warehouses will be at least as vulnerable in 2040 as they are today. To be sure, missile defenses will improve. But so will the missiles they have to counter, in terms of their speed and ability to maneuver warheads, along with the use of multispectral sensors or seekers.
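A minimal numerical sketch of that falloff, under the simplifying assumption that delivered energy density scales inversely with the square of range (real beam propagation also depends on divergence, atmospheric absorption, and optics):

```python
# Simplified inverse-square model of directed-energy falloff. This only
# illustrates why point defense is easier to provide than area defense;
# real beam propagation also depends on divergence and atmospherics.

def relative_intensity(distance_km: float, reference_km: float = 1.0) -> float:
    """Energy density relative to its value at the reference distance."""
    return (reference_km / distance_km) ** 2

for d in [1, 2, 5, 10]:
    print(f"{d:>2} km: {relative_intensity(d):7.2%} of the 1-km intensity")
# A target 10 km out receives ~1% of the energy density delivered at 1 km,
# so a laser that can protect a ship or runway cannot blanket a region.
```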

Satellites in space are likely to remain highly vulnerable to nuclear attack. That is especially true of satellites in low Earth orbit (LEO), as they are located at altitudes similar to those traversed by ballistic missiles on typical flight trajectories, so they can be attacked by ballistic missile defense technologies. Such objects are also vulnerable, over a period of months, to the residual effects of nuclear detonations in the Van Allen belts—areas of the Earth’s magnetic field where protons and electrons generated by nuclear explosions can “get stuck,” damaging satellites on each orbital pass. Shielding can, in theory, protect against more distant explosions and against such radiation-pumped Van Allen belts, typically adding perhaps 10 percent to a satellite’s overall cost. However, it is unlikely that most commercial satellites will be shielded unless the government subsidizes such endeavors. Even with shielding, advanced imaging satellites and other high-value assets in LEO will remain vulnerable since they may be individually and directly attacked by an adversary. 5

By 2040, many cyber systems controlling NATO weaponry and other platforms should be more resilient to attack, because NATO will have had two decades to address problems that are now widely understood. Twenty years ago, by contrast, even though the Y2K episode and other scares should have sobered people to the risks of inadequate computer security, a general sense of complacency about great-power relations discouraged meaningful action against threats to electronics from hacking, high-altitude nuclear bursts, malicious supply-chain actors who might compromise the integrity of semiconductor chips, and so on. Admittedly, this conclusion assumes greater vigilance on the part of NATO states than may prove to be the case. Progress in this arena will probably not be uniform, moreover; it seems relatively unlikely to result in meaningful hardening of the critical civilian infrastructure on which militaries depend.

Even if classic computer hacking, spoofing, advanced persistent threats, and related measures gradually lose some of their effectiveness, a new set of challenges is appearing on the horizon. One challenge could be a more efficient form of advanced persistent threat in which efforts to penetrate an adversary’s computer systems employ automated capabilities with massive raw computational power that continually adjust tactics to the defenses encountered.


Another major complicating development could be the advent of constellations or swarms of smart robotic devices. For example, by 2040, large numbers of smart sea mines could pose enormous threats to shipping; in the scenario of Russian aggression, NATO would need to mount a response to these threats. 6 The devices might in effect be miniature submarines, with sensors and explosives as payload. Russia is already strong in submarine technology 7 and could probably master this type of technology in the years to come. Such unmanned underwater vehicles (UUVs) could be widely deployed in places like the Baltic Sea in times of crisis. Rather than having to hunt for a couple dozen Russian submarines, as might be the case today—already a daunting proposition—NATO forces seeking to reach Baltic ports might need to search for hundreds or even thousands of potent threats. It seems implausible that arms control agreements would prevent the development and deployment of such autonomous systems, not only because of the verification challenges but also because the United States itself will feel powerful incentives to create more autonomous systems, including those with the ability to employ lethal force under certain types of conditions, as Paul Scharre has convincingly argued. 8

In another scenario, swarms of quadcopters (unmanned helicopters with four rotors), each packing several kilograms of explosives—thus able to destroy a modern jet if detonated at the right location—might attack NATO air bases and the aircraft on them. Terminal defenses using lasers could possibly destroy some of the incoming threat devices or weapons, but the swarm could then choose a different attack route or seek to overwhelm a defense with a saturation attack. Swarms could also deploy in the airspace surrounding an airfield, staying out of range of any such directed-energy defenses and attempting to strike aircraft as they left or approached a runway.


There are some situations that pose particular challenges. Imagine loitering aerial devices akin to the sensor fuzed weapon (SFW) that has been part of the U.S. armamentarium for years. This weapon is somewhat controversial: it is categorized as a “cluster munition,” a type of weapon banned by international convention, though the United States is not a party to the accord. 9 However, it is better thought of as a type of robotic weapon. The benefit of such technologies in combat was discussed extensively as far back as 1998, when a RAND study envisioned their use in situations such as an Iraqi armored vehicle attack against Saudi Arabia on major highways. In that model, which considered technologies available at the time, some 10,000 weapons, each carrying 40 Skeet submunitions, or perhaps the Brilliant Anti-Tank (BAT) weapon, would suffice to destroy several thousand armored vehicles and effectively halt an enemy assault. The total cost of the ordnance was estimated at several billion dollars. 10 Such munitions could be used in a similar way against NATO movements on major roads in Europe, advancing from western points toward Poland and the Baltic states, with the munitions delivered in the future by small robotic devices. Swarms of robotic devices carrying munitions payloads could also be used to attack trains or road convoys in transit, perhaps after being positioned by special forces that had penetrated into NATO territory.

Another type of robotic swarm might be used to create an interconnected network of unmanned aquatic systems functioning, in effect, as mobile mines or torpedoes. This is not presently a technology concept that the U.S. Navy has come close to operationalizing; a 2013 RAND study lists the technology maturity of such systems as between 1 and 3 on a Technology Readiness Level scale that goes from 1 to 9. 11 However, the constituent technologies, such as automated sensors, are already largely available. 12 As AI improves, a constellation of such devices could be made largely autonomous.

Much of the relevant technology is already available. Drug-trafficking organizations have been using semisubmersibles to transport drugs to the United States for years; the latest craft feature very slender hull designs that cut efficiently through waves (though they remain slower than most warships). 13 A decade ago, it was already possible to build such boats with a payload of 10 tons at a cost of less than $1 million per vessel; they were often manned then, but making them fully autonomous would not be a major leap. 14


Such capabilities create the specter of not just “smart mines” (able to distinguish one type of ship from another before detonating) but mobile, re-deployable, and agile mines operating as autonomous networks. Since mines have been responsible for most U.S. Navy ship losses since World War II, this is a particularly unsettling prospect. 15 In modern times, the U.S. Navy has primarily avoided mines by staying clear of waters where they might be deployed, as opposed to having any particularly effective counter to them. The main alternative, as outlined by Caitlin Talmadge, would be to conduct extensive clearing operations to create relatively narrow channels for movement, if enough time is available for such purposes. (Talmadge estimated a month or more in a scenario in which Iran mined the Persian Gulf and the U.S. Navy and allies then sought to clear the waterways.) 16 Used against America’s enemies of recent decades, this might have been a doable proposition. But when a U.S. Navy vessel has to approach a Baltic port against a Russian foe of 2025 or 2030 or 2035 or 2040, the situation could be very different. Clearing operations against what would in effect be mobile and self-healing minefields populated by devices that can communicate with each other and reposition themselves to create dense, lethal networks will be much more difficult than clearing current threats.

If NATO figured out how to jam the communications between smart, unmanned, mobile mines, the adversary’s robotic systems might simply be deployed in redundant patterns to be sure there were no gaps in coverage. They could also be programmed to change their positions every so often to elude neutralization and to repair any potential gaps in their coverage—even if there were no central data processor that actually knew where the gaps were located and even if space-based navigation systems were disabled (since the UUVs could have various types of inertial or bottom-following guidance). 17 The network could be set up simply to play the odds, in an environment of little communication and poor information exchange.
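A toy model of that odds-playing behavior, assuming each device repositions independently on its own timer with no communications and no central picture (all numbers invented for illustration):

```python
# Toy model of decentralized "play the odds" repositioning: each mine
# drifts to a random nearby station on its own schedule, with no comms
# and no central controller, so gaps opened by attrition tend to refill
# statistically. All numbers are invented for illustration.
import random

random.seed(1)
STATIONS = 40                          # one-mile slots along a picket line
mines = [random.randrange(STATIONS) for _ in range(60)]  # over-provisioned

def reposition(positions, step=3):
    """Each mine independently shifts up to `step` slots, clamped to the line."""
    return [min(STATIONS - 1, max(0, p + random.randint(-step, step)))
            for p in positions]

for cycle in range(5):
    mines = reposition(mines)
    print(f"cycle {cycle}: {len(set(mines))}/{STATIONS} slots covered")
# With 60 mines for 40 slots, most slots stay covered on every cycle even
# though no individual mine knows where the gaps are.
```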

How many such UUVs might be needed to achieve the desired effect of rendering transport ships highly vulnerable as they approached a port such as Tallinn or Riga? As one possibility, the devices might be released from Kaliningrad with instructions to move eastward toward the littoral waters of those port cities. Even existing battery technology puts a “swim” of such distance within reach. 18 Progress in nanomaterials and other constituent elements of batteries may further improve performance in the years ahead.

One way to estimate the quantitative requirements for such a UUV network is to compute how long a picket line might be needed near those ports to cover all possible lines of approach, and then estimate the needed density of separate armed devices along that line. Whatever estimate followed from this simple calculation might then be multiplied by two or three or four to account for attrition of some devices as a result of NATO anti-mining efforts or malfunction.

The approach to Riga, Latvia, is through a body of water about 40 miles wide at points near the port. The picket line might be set up roughly three-to-five miles offshore, where water depths are 100 feet or more—making it hard to detect any submersible object visually. 19 The math might go something like this:

  • If the range of each UUV’s lethal mechanism is similar to that of a modern torpedo such as the U.S. Mark 48, then they might be spaced every one-to-five miles—based on the fact that these torpedoes can typically lock on to targets from a distance of 4,000 yards. 20
  • To improve the density of the picket line and allow multiple shots to be taken at a given transport, the spacing might be kept at perhaps one mile, meaning that 40 UUVs would be needed to populate a given picket line.
  • With multiple picket lines, perhaps 200-to-500 UUVs in all, at a cost of no more than several hundred million dollars, it would be very difficult to approach the wharves at Riga.
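A minimal sketch of the arithmetic above; the per-unit cost is an assumption chosen to be consistent with the semisubmersible figures cited earlier and with the “several hundred million dollars” total:

```python
# Back-of-the-envelope sizing of the UUV picket network, using the figures
# above. The $1M unit cost is an assumption consistent with the
# semisubmersible costs cited earlier and the total given in the text.

CHANNEL_WIDTH_MI = 40     # approach to Riga near the port
SPACING_MI = 1.0          # one UUV per mile, allowing overlapping shots
UUVS_PER_LINE = int(CHANNEL_WIDTH_MI / SPACING_MI)   # 40

UNIT_COST_USD = 1_000_000  # assumed; cf. ~$1M drug-smuggling semisubmersibles

for lines, attrition in [(2, 2.5), (4, 3.0)]:        # low and high cases
    total = round(UUVS_PER_LINE * lines * attrition)
    print(f"{lines} picket lines, x{attrition} attrition margin: "
          f"{total} UUVs, ~${total * UNIT_COST_USD / 1e6:.0f}M")
# -> roughly 200 UUVs on the low end and ~480 on the high end, matching
#    the 200-to-500 range and "several hundred million dollars" above.
```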

Of course, the United States and other NATO countries could attempt to thwart the operations of these UUVs. They could try to destroy them en masse at their source before the UUVs could be released. They could also create their own robotic swarms designed to find, identify, and neutralize the attacking weapons.

But there would be a fundamental difference from today’s situation. The kind of impunity that U.S. forces have enjoyed for decades during intercontinental movement would be threatened to some degree and could no longer be assumed. And even Russia’s relatively modest military resources would still be ample for the kinds of investments needed in these domains, in purely financial terms, as the above calculations underscore.

If necessary, NATO could avoid some of these problems by staying out of the Baltic Sea. U.S., Canadian, and U.K. forces could deploy to France or the Netherlands or Germany and then move eastward toward Russia, picking up allied help along the way. This strategy might eventually work—but with considerable time delays and with vulnerabilities during movement along road and rail networks. Moreover, Russia might doubt that NATO would have the will to mount such a response. Thus, the key goal of upholding deterrence might be lost, even if, in theory, a war could eventually be won.


Because of NATO’s strategic depth and its enormous resource disparity when measured against Russia’s—two advantages the United States and its Pacific allies would likely not have in the Pacific theater against China—NATO would still be favored to win a conventional-only conflict in eastern Europe 20 years from now. But the degree of difficulty would be quite considerable and the degree of escalatory risk highly unsettling. In my book, I attempt to offer Washington and other NATO capitals some policy options. For the purposes of this essay, the simple point is this: robotics and AI could take on a central, and very important, role in warfare by 2040—even without anything resembling a terminator or a large killer robot.

1. See, for example, Fabrice Pothier, “An Area-Access Strategy for NATO,” Survival 59, no. 3 (June–July 2017), pp. 73–79.
2. Alexander Velez-Green, “Russian Strategists Debate Preemption as Defense against NATO Surprise Attack” (Washington: Center for a New American Security, March 2018).
3. In their discussion of the new U.S. Army and Air Force concept of Multi-Domain Battle, General David Perkins and General James Holmes convincingly argue that an enemy will contest U.S. and allied operations across many different realms simultaneously. See General David G. Perkins and General James M. Holmes, “Multi-Domain Battle: Converging Concepts toward a Joint Solution,” Joint Forces Quarterly 88 (1st quarter, 2018).
4. Paul B. Stares, Preventive Engagement: How America Can Avoid War, Stay Strong, and Keep the Peace (Columbia University Press, 2018), pp. 154–55.
5. Barry D. Watts, The Military Uses of Space: A Diagnostic Assessment (Washington: Center for Strategic and Budgetary Assessments, 2001), p. 99; Peter L. Hays, United States Military Space: Into the Twenty-First Century (Montgomery, Ala.: Air University Press, 2002), pp. 121–24; Michael E. O’Hanlon, Neither Star Wars Nor Sanctuary: Constraining the Military Uses of Space (Brookings, 2004), pp. 67–70, 126–27; and Bruce G. Blair, Strategic Command and Control: Redefining the Nuclear Threat (Brookings, 1985), pp. 201–07.
6. Such mines could be developed in fairly short order today; see Ochmanek and others, U.S. Military Capabilities and Forces for a Dangerous World, pp. 16–19, 28.
7. Dave Majumdar, “The Rise of Russia’s Military,” National Interest 156 (July/August 2018), pp. 36–46.
8. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018).
9. Thomas Gibbons-Neff, “Why the Last U.S. Company Making Cluster Bombs Won’t Produce Them Anymore,” Washington Post, September 2, 2016.
10. David A. Ochmanek and others, To Find, and Not to Yield: How Advances in Information and Firepower Can Transform Theater Warfare (Santa Monica, Calif.: RAND, 1998), pp. 1–40, 83–100.
11. See, for example, Scott Savitz and others, U.S. Navy Employment Options for Unmanned Surface Vehicles (USVs) (Santa Monica, Calif.: RAND, 2013), pp. 7, 18, 33.
12. Robert W. Button and others, A Survey of Missions for Unmanned Undersea Vehicles (Santa Monica, Calif.: RAND, 2009), p. 57.
13. Kyle Mizokami, “Colombian Drug Smugglers Built This Stealthy, Special Forces–Inspired Boat,” Popular Mechanics, June 13, 2017.
14. David Kushner, “Drug-Sub Culture,” New York Times Magazine, April 23, 2009.
15. Captain Wayne P. Hughes Jr. (ret.), Fleet Tactics and Coastal Combat, 2nd ed. (Annapolis, Md.: Naval Institute Press, 2000), p. 153.
16. Caitlin Talmadge, “Closing Time: Assessing the Iranian Threat to the Strait of Hormuz,” International Security 33, no. 1 (Summer 2008), pp. 82–117. An alternative analysis argued that one month might be an exaggeration for simple mine technology, but my main concern here is with more advanced mines. See William D. O’Neil, “Correspondence: Costs and Difficulties of Blocking the Strait of Hormuz,” International Security 33, no. 3 (Winter 2008–09), pp. 190–95.
17. Button and others, A Survey of Missions for Unmanned Undersea Vehicles, pp. 51–52.
18. Ibid., pp. 21, 63.
19. See “Approaches to Port of Riga,” GPS Nautical Charts, Bist LLC, 2014 (http://fishing-app.gpsnauticalcharts.com/i-boating-fishing-web-app/fishing-marine-charts-navigation.html#9/57.0200/24.0100).
20. Kyle Mizokami, “The U.S. Navy Is Getting a More Lethal Torpedo,” Popular Mechanics, December 27, 2016.


AI is Shaping the Future of War

By Amir Husain, PRISM Vol. 9, No. 3



Several years ago, before many were talking about artificial intelligence (AI) and its practical applications to the field of battle, retired United States Marine Corps General John Allen and I began a journey to not only investigate the art of the possible with AI, but also to identify its likely implications for the character and conduct of war. We wrote about how developments in AI could lead to what we referred to as “Hyperwar”—a type of conflict and competition so automated that it would collapse the decision-action loop, eventually minimizing human control over most decisions. Since then, my goal has been to encourage the organizational transformation necessary to adopt safer, more explainable AI systems to maintain our competitive edge, now that the technical transformation is at our doorstep.

Through hundreds of interactions with defense professionals, policymakers, national leaders, and defense industry executives, General Allen and I have taken this message to our defense community—that a great change is coming, and one that might see us lose our pole position. During the course of these exchanges, one fact became increasingly clear: artificial intelligence and the effects it is capable of unleashing have been gravely misunderstood. On one hand, there are simplistic caricatures that go too far: the Terminator running amok, an instantiation of artificial intelligence as a single computer system with a personality and a self-appointed goal, much like the fictionalized Skynet; or an intelligent robot so powerful and skilled that it would render us humans useless. On the other hand, there are simplifications of AI as a feature: trivializations in the name of practicality by those who cannot see beyond today and misconstrue AI’s holistic potential as the specific capabilities of one or two products they have used or, most likely, merely seen. I would hear from some that fully autonomous systems should (and more amusingly, could) be banned and that this would somehow take care of the “problem.” Others thought the proponents of artificial intelligence had overstated the case and that there would never be synthetic intelligence superior to humans in the conduct of war.

But artificial intelligence is not like a nuclear weapon: a great big tangible thing that can be easily detected, monitored, or banned. It is a science, much like physics or mathematics. Its applications will lead not merely to incremental enhancements in weapon systems capability but will require a fundamental recalculation of what constitutes deterrence and military strength. For example, the combination of AI elements—visual recognition, language analysis, the automated extraction of topical hierarchies (or ontologies), control of systems with reinforcement learning, simulation-based prediction, and advanced forms of search—with existing technologies and platforms can rapidly yield entirely new and unforeseen capabilities. The integration of new AI into an existing platform represents a surprise in its own right. But the complex interactions of such platforms with others like them can create exponential, insurmountable surprise. Which current conventional system deters such an AI creation?

These reactions were all telling. Rather than seeing artificial intelligence as a science, people were reacting to caricatures or linear projections based on the past: specifically, the contention that since no AI has been built thus far that can exhibit long-term autonomy in battle, such an AI could never be built; or that if it were, it would take over the world of its own volition. These reactions would not be as problematic if they were coming from ordinary people playing the role of observers. But seeing people in positions of power and authority—participants—espouse such thinking was worrisome. Why? Simply because artificial intelligence will lead to the most important capabilities and technologies yet built by humankind, and a failure to understand the nature of artificial intelligence will cause us to fall behind in terms of taking advantage of all it has to offer in the near, medium, and long term. The stakes are high beyond description.

Earlier in this piece, I described hyperwar as a type of automated—potentially autonomous—conflict. But a deeper understanding of the concepts underpinning hyperwar requires exposure to the idea of the Observe-Orient-Decide-Act (OODA) loop: a cyclical process governing action both in the realm of war and, as many have recently pointed out, in commerce, 1 engineering, 2 and other peacetime pursuits.

Where did the idea of the OODA loop come from? While researchers in various fields throughout history have articulated the idea of a cognitive decision/action loop, the modern-day conception of the OODA loop in a military context came from USAF Colonel John Boyd. Col. Boyd is famous both for the OODA loop and for his key role in developing the F-16 program. He is also remembered as the famed military strategist whose conceptual and doctrinal contributions, some would argue, quite directly led to the overwhelming U.S. victory in the first Gulf War. Acknowledging the impact of Boyd’s work, then-Commandant of the Marine Corps General Charles Krulak said these words in Boyd’s eulogy: “John Boyd was an architect of [the Gulf War] victory as surely as if he’d commanded a fighter wing or a maneuver division in the desert. His thinking, his theories, his larger than life influence were there with us in Desert Storm.”

Of all Boyd’s considerable contributions, perhaps the idea of the OODA loop is the most potent and long-lasting. OODA governs how a combatant directs energy to defeat an opposing force. Each phase of the OODA loop is itself a cycle; small OODA loops curled up within larger ones. As the OODA loop progresses, information processes feed decision processes that filter out irrelevant data and boil outputs down to those that are necessary and of the highest quality. In turn, these outputs become inputs to another mini OODA loop. Seen in this way, the macro OODA loop of war is a massively parallel collection of perception, decision, and action processes; exactly the types of tasks AI is so well suited to, running at a scale at which machines possess an inherent advantage.
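A purely conceptual sketch of that nesting, in which each phase of the macro loop is itself a small cycle that filters data and passes distilled outputs upward (illustrative structure only, not a model of any fielded system):

```python
# Toy illustration of nested OODA loops: each phase of the macro loop is
# itself a mini observe-orient-decide-act cycle whose filtered output
# becomes input to the enclosing loop. Purely conceptual.
from dataclasses import dataclass, field

@dataclass
class OODALoop:
    name: str
    subloops: list["OODALoop"] = field(default_factory=list)

    def cycle(self, observations: list[str]) -> list[str]:
        # Orient: each subloop runs its own cycle, filtering the data.
        for sub in self.subloops:
            observations = sub.cycle(observations)
        # Decide: keep only high-value items (a crude stand-in for fusion).
        decided = [o for o in observations if "noise" not in o]
        # Act: pass distilled outputs up to the enclosing loop.
        return decided

macro = OODALoop("theater", subloops=[
    OODALoop("sensor-fusion"), OODALoop("targeting")])
print(macro.cycle(["contact at grid 12", "noise: seagull", "launch detected"]))
# -> ['contact at grid 12', 'launch detected']
```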

AI in Perception, Decision, and Action

Just how good has AI become at these perception, decision, and action tasks? Take perception, an area where machines and the algorithms they host have made great strides over the past few years. AI systems can now beat Stanford radiologists in reading chest X-rays, 3 discern and read human handwriting faster than any human, 4 and detect extrasolar planets at scale, from murky data that would be a challenge for human astronomers to interpret. 5 The AI perception game is hard to beat, and operates at a scale and speed unfathomable to a human being.

The combined effect of millions of sensors deployed in space, in the air, on land, on the surface of the sea and under it, all being routed to a scalable AI perception system will be transformative. We are beginning to see shades of what this will feel like to military commanders. When the Russian military conducted a test of 80 UAVs simultaneously flying over Syrian battlefields 6 with unified visualization, Russian Defense Minister Sergei Shoigu commented that the experience was like a “semi-fantastic film” and that “they saw all the targets, saw the launches and tracked the trajectory.” This, of course, is just the beginning.

What about decisionmaking? How would AI fare in that domain? Today, planners use tools such as “Correlation of Forces” (COF) calculators 7 to determine the outcome of a confrontation based on the calculated capability of a blue force versus a red force. They use these calculations and projections to make logistical and strategic decisions. If you divide the battlespace into a grid that constrains both space and time, in some sense the only COF calculation that matters inside each cell is the COF calculation for the cell itself, not for the entire grid. Taking this idea further, given the presence of assets in each cell, one could calculate their area of impact, under the constraint of a time bound. Obviously, a hypersonic missile will have a larger area of impact with a smaller time bound in comparison to a tank. An AI trying to solve this problem would use sensors to identify assets present in each grid cell, calculate COF coefficients for each cell for a given time bound, and then seek to generate and optimize a plan of action that results in the smallest own force maneuvering most efficiently to inflict maximum attrition on the enemy, all while suffering the least damage itself. A proxy for determining how much damage you could inflict while minimizing own losses is the COF coefficient itself. The larger your advantage over the enemy, the greater the chances of a swift victory. An AI could also play this per-cell COF optimization game with itself millions of times to learn better ways of calculating COF coefficients.
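A highly simplified sketch of the per-cell COF idea: assets project combat power into the cells they can reach within a time bound, and each cell receives a blue-versus-red ratio. All reaches and weights are invented for illustration; real COF calculators are far more elaborate:

```python
# Highly simplified per-cell correlation-of-forces (COF) sketch. Assets
# project combat power into every grid cell they can reach within a time
# bound; each cell then gets a blue-vs-red power comparison. All asset
# weights, reaches, and positions are invented.

BLUE = [("tank", (0, 0)), ("helo", (0, 2))]
RED = [("tank", (2, 2)), ("missile", (3, 3))]

# asset type -> (reach in cells within the time bound, combat-power weight)
ASSETS = {"tank": (1, 1.0), "helo": (3, 2.5), "missile": (6, 4.0)}

def power_at(cell, force):
    """Combat power a force can bring to bear on `cell` in the time bound."""
    total = 0.0
    for kind, (r, c) in force:
        reach, weight = ASSETS[kind]
        if max(abs(cell[0] - r), abs(cell[1] - c)) <= reach:
            total += weight
    return total

for cell in [(0, 0), (1, 1), (2, 2)]:
    blue, red = power_at(cell, BLUE), power_at(cell, RED)
    print(cell, f"COF = {blue:.1f} : {red:.1f}")
# An optimizer (or a self-play learner, as the text suggests) would then
# search for maneuvers that maximize this ratio cell by cell.
```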

This is one simple example of how a strategic hyperwar AI could seek advantage. There are others. The key point is that no human commander could even properly process thousands of fast-changing, per-cell COF calculations, much less act on them with the speed of a purpose-built machine running a rapidly improving algorithm.

Finally, let us come to action. In 2020, the Defense Advanced Research Projects Agency (DARPA) organized “AlphaDogfight,” a dogfight competition 8 that pitted AI algorithms against one another and, in the final round, against a human F-16 pilot. The result was a landslide: the AI won 5-0. There are many points of view about this competition, and questions have been raised as to whether the rules of engagement were determined fairly. From my own personal experience applying AI to autonomous piloting applications, I know this: AI eventually wins. In 2017, SparkCognition, the AI company I founded, worked to develop technology to identify the conditions for an automated takeoff rejection. Using reinforcement learning, the AI we developed exceeded human performance in both the timeliness and the accuracy of decisions made. The following year we worked on multi-ship defensive counter air (DCA) scenarios and found that, once again, AI performed amazingly well. In time, AI will win. Is someone making bets to the contrary? And if not, why aren’t we moving faster to embrace the inevitable?

The fusion of distributed artificial intelligence with highly autonomous military systems has the potential to usher in a type of lightning-quick conflict that has never been seen before. The essential findings of my work in collaboration with General Allen discussed above revealed that if artificial intelligence was aggressively applied to every element of the OODA loop, in essence, the OODA loop could collapse on itself. Artificially intelligent systems would enable massive concurrent coordination of forces and enable the application of force in optimized ways. As a result, a small, highly mobile force (e.g. drones) under the control of AI could always outmaneuver and outmass a much larger conventional force at critical points. Consequently, the effect of platforms under AI control would be multiplied many fold, ultimately making it impossible for an enemy executing a much slower OODA loop to contend or respond.

What, then, are the larger implications of AI’s dominance in perception, decision, and action tasks? What happens when the OODA loop collapses? Let us examine a few implications.

Regional Powers and the “AI-Enabled Skirmish”

Previous work indicates that AI would provide a significant increase in the latitude of action available to both nation-states and non-state actors. Smaller-scale autonomous operations have an inherent quality of deniability, in that there are no humans to capture or interrogate. And it is not just conventional, kinetic actions that AI can control, but also cyber operations. The applications of AI to cyber are tremendous, ranging from the automated development of cyber weapons, to the continuous, intelligent scanning of enemy targets to identify pathways for exploitation, to the autonomous conduct of large-scale, distributed cyber operations.

The onset of hyperwar-type conflicts will have a great effect on almost all of our current military planning and the calculations on which these plans are based. The most potent teeth-to-tail ratios sustainable by a human force will seem trivial when autonomous systems are widely deployed. The idea that training will always enable dominance will have to be questioned. And the already outdated notion of platform-versus-platform comparisons will become completely extinct.

Most of the scenarios described in “Hyperwar: Conflict and Competition in the AI Century” have already come to pass. In one conceptual vignette, we outlined how autonomous drones could be used to attack oil installations. Two years later, this actually happened, against a Saudi oil facility in Abqaiq. We also highlighted how existing conventional aircraft would be reused as autonomous drones. The Chinese did exactly that with their J-6 and J-7 aircraft. Integrating AI into current systems presents the opportunity to build a potent capability at low cost and create significant complications for planners looking to counter these threats.

When kinetic or cyber effects can be employed over great distances, with great precision and with no human involvement, the likelihood that countries and groups will use these capabilities increases. And when autonomous systems begin to blunt the training-enabled human edge, the potency of such actions is amplified.

The Rest of the World is in on the Secret: the Future is Autonomous

Every day brings with it new announcements in military technology developments. And most of these are not taking place in the United States. Consider just the following recent news from around the world:

  • Russia announced that it had deployed 80 drones simultaneously in Syria for ISR (intelligence, surveillance, and reconnaissance) coverage and was able to see “everywhere all at once.”
  • The Russians have also tested the Mi-28N attack helicopter with a new drone launcher 9 that can be used to deploy ISR systems and intelligent loitering munitions. In January 2021, Iranian media showed images of a similar system mounted on a helicopter.
  • During the Azerbaijan-Armenia conflict, Turkish TB2 drones were used to devastating effect in contested airspace. Mass deployment of these systems in combination with loitering munitions took out S-300 surface-to-air missile sites, armor, and infantry. TB2s are being produced at the rate of at least one per week, at a cost that is a tenth, possibly a twentieth, of that of U.S. MALE (medium-altitude, long-endurance) drones.
  • Israeli Harop drones delivered to Azerbaijan are also being used—both kinetically and for propaganda. A recent Azerbaijani martial music video shows a convoy of Harop trucks, each equipped with nine launchers. One can literally see the Azerbaijani military showcase—in a music video, no less—the lethal capability to concurrently deploy a swarm of at least 36 drones.
  • Azerbaijan converted old Soviet-era biplanes into DEAD (destruction of enemy air defenses) drones by using them both to identify SAM sites and to destroy them via kamikaze attacks.
  • Baykar Makina, the Turkish company that manufactures the Bayraktar TB2, has test-flown the Akinci, a drone with a broader mission profile, greater capabilities, and lower cost in comparison to deployed U.S. drones. The company has also announced an air-to-air mission capability for the same platform, potentially integrating the Turkish Gokdogan 10 and Bozdogan air-to-air missiles.
  • The Chinese, in the last few months of 2020, announced and tested two drones: a twin-rotor aircraft with a 100-kilogram payload that can resupply troops at high altitude, 11 and a high-speed drone designed for ISR, electronic warfare, and ground strike.
  • Iranian drone production, by all accounts, has ramped up tremendously, and a huge range of designs is being produced, 12 including a MALE system. Iran recently demonstrated a combination of small, high-speed boats with an autonomous drone, raising the possibility of unmanned combat aerial vehicles (UCAVs) being deployed from unmanned surface vessels (USVs). 13
  • Ukraine has formed a joint venture company with Turkey to manufacture a modified version of the TB2. The initial plan is to produce at least 48 aircraft. 14
  • The variety and scope of Chinese drone developments is incredibly impressive, and unmanned systems now address every application, from low-end tactical to high-end strategic.

There is also a considerable amount of work going on in Pakistan, India, Israel, South Korea, Brazil, and elsewhere. The list truly goes on and on. In a world where strategic competition between near-peers is once again at the fore, the pace of military innovation is skyrocketing.

While the volume and pace of these developments are impressive, nothing in the list above should be truly surprising. For years, General John Allen, former Deputy Secretary of Defense Robert O. Work, and others have been pointing to the potential of autonomous technologies, inexpensive sensors, and fast-spreading technical knowledge combining to yield potent and inexpensive capabilities.

Cost is a Competitive Advantage

Countries across the globe are leveraging low-cost frameworks for innovation, combining open-source software and systems with inexpensive, commercial-grade electronics, domestic software prowess, and a willingness to experiment and rapidly iterate using methodologies often referred to as “agile.” Not only does this result in lower development costs, it also speeds innovation.

In contrast, in the United States we spend large sums of money on incredibly expensive platforms that work well when they are maintained at great cost, and that perform when they are piloted or controlled by humans in whom we have invested millions of additional dollars of training time. Is this the best strategy? Or are we doing to ourselves what we did to the Soviet Union in the 1960s and 1970s… encouraging military spending into broader economic oblivion?

Our opponents will increasingly use inexpensive technologies that are easily produced, employable in large quantities, and that continue to deliver results even when they are left to their own devices without any need for a highly trained human operator.

While the United States is the richest nation on earth, too great a disparity in cost-per-capability cannot be sustained even by the world’s apex military power. We are walking a dangerous path if we continue to provide lip service to emerging, disruptive technologies while making the real, significant investments in legacy platforms. It is not enough to talk about technological disruption, we must actually disrupt our funding and spending patterns.

Let us apply the cost-per-capability lens to just a few of our high-end platforms that have traditionally been force multipliers and differentiators for our forces. U.S. attack helicopters are the most potent in the world. But recent export orders show that they now cost between $100 million and $125 million per aircraft. 15 While capabilities vary by platform, in general these helicopters carry between 8 and 16 anti-tank guided missiles (ATGMs), enjoy a loiter time of about 2.5 hours, and carry two pilots on board. In contrast, the Bayraktar TB2 currently being used in Libya and Nagorno-Karabakh has a loiter time of 24 hours, carries 2 ATGMs, requires zero on-board pilots, and costs about $2 million. 16 It is quite apparent that armor is vulnerable to these drones, much as it is to attack helicopters. But have we considered how these drones can be employed in swarms as an alternative to the expensive attack helicopter? How many TB2s can be delivered via a single transport aircraft? How many conventional attack helicopters? How much training is required for on-board pilots versus for an autonomous system complemented by a remote operator? A new, distributed-lethality alternative to attack helicopters has advantages beyond the obvious lower cost.
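One way to make the comparison concrete is a crude “ATGM-hours on station per dollar” metric built from the figures above; this is an illustrative calculation only, ignoring survivability, sensors, speed, and many other factors:

```python
# Cost-per-capability comparison using the figures in the text.
# "ATGM-hours on station per $M" is one crude illustrative metric; it
# ignores survivability, sensors, speed, and many other differences.

platforms = {
    # name: (unit cost $M, ATGMs carried, loiter hours, on-board crew)
    "attack helicopter": (110, 16, 2.5, 2),
    "Bayraktar TB2":     (2, 2, 24, 0),
}

for name, (cost_m, atgms, loiter_hr, crew) in platforms.items():
    metric = atgms * loiter_hr / cost_m
    print(f"{name:>18}: {metric:5.1f} ATGM-hours on station per $M "
          f"({crew} on-board crew)")
# helicopter: 16*2.5/110 ~ 0.4; TB2: 2*24/2 = 24 -- a ~60x gap on this
# one metric, before training and maintenance costs are even counted.
```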

It might be tempting to look at tactical drones and dismiss them as relatively simple systems that were bound to proliferate. Of course, I agree with both those points; many are simple systems and they have indeed proliferated. However, the drones now being developed in a number of countries are not necessarily just tactical or low-end. Complex high-end capabilities are proliferating, too. AI is being applied to other complementary areas, such as jamming, to create cognitive EW (Electronic Warfare) pods that can be flown into action by a UAV.

And it is not just about the drones alone; their employment in real theaters of conflict also entails a significant shift in the entire concept of operations. For example, it has been theorized that TB2 drones over Azerbaijan were controlled from Turkey, with larger Akinci drones acting as relays. ATGMs delivered at scale against a peer force by attritable, long-endurance platforms controlled by pilots hundreds of miles away: never before was this concept of operations employed. But even newer methods of employment are coming.

Turkish Aerospace and Bayraktar are collaborating with Aselsan to incorporate the Koral EW system onto their drones. Russia’s Uran-9 UGVs have been improved after their performance in Syria was studied and gaps were identified. Chinese UAV developments are progressing at such a significant rate that it is difficult to capture them in a work that falls short of book-length. Sensors, control systems, vehicles, and conops are all evolving fast on the global scene and this means complex, multi-system threats employed in surprising ways.

Michael Peck, writing in the National Interest, suggests that “Turkey may have won the laser weapons race” when it deployed a laser weapon system in Libya that was able to shoot down a Chinese Wing Loong drone. He goes on to quote Alexander Timokhin of Army Recognition: “the interesting thing in this whole story is how essentially newcomers to the laser theme occupy that niche in which the ‘grandees’ of laser business, such as Russia and the USA, do not even think to climb.” Indeed, space that is ceded will be occupied. Technological gaps between several leading nations of the world are no longer so insurmountable as to allow complacency. And cost matters! How is it that Turkey, with a $22 billion defense budget, is able to drive so much innovation in air-to-air missiles, lasers, EW, drones, and many other areas, whereas our dollars do not quite seem to go as far in the United States?

Cost is a critical feature, too! Big, expensive, slow-to-evolve, slow-to-build, and complex-to-maintain platforms need to be rethought in an age in which software is the most lethal weapon, one that is growing exponentially in capability over months, not years. You cannot bend new metal fast enough to keep up. It is the relationship between the software and the metal that truly matters. In this context, how does the $35 billion carrier strike group evolve in the age of inexpensive DF-21D missiles and next-generation AI-powered cruise missiles? What about the tank? General Tony “T2” Thomas, the former commander of United States Special Operations Command (USSOCOM), recently discussed this point with me and wondered whether Nagorno-Karabakh pointed us to the end of the tank-as-platform. General Thomas has also publicly tweeted his views on this topic: “The real debate is the role of massed armor in future warfare (there is a reason the Marines just gave up their tanks).”

There are signs of progress and improvement. Certainly, the United States has not been sitting entirely still. The Air Force’s announcement of the first test of a sixth-generation platform is encouraging, in particular because it was developed so quickly. Also encouraging are the three Boeing, General Atomics, and Kratos “Skyborg” prototype development efforts for loyal-wingman drones. But given history, one wonders how expensive new systems will be by the time they are deployed. Will future programs be able to avoid the types of issues the F-35 program encountered? There we have a $120 million, fifth-generation stealth platform intended for use against near-peer threats, yet only used in anger with non-stealthy, externally mounted munitions to conduct missions in uncontested airspace. Are these missions not better suited to a 40-year-old F-16 or A-10? Consider further the case of our B-1s, which are exquisitely complex aircraft designed for low-altitude, high-speed penetration of highly defended airspace. To find some use, they were eventually employed to drop conventional bombs in Afghanistan. Mundane, low-end work for a high-end platform.

It is high time we got over the platform and focused on the mission. If we keep buying $120 million jets with $44,000/hr flight costs to use them on missions better suited to $2 million drones that could cost us $2,000/hr, we will eventually find that financial oblivion we seem to be looking for. We do not need all high-end, all the time. And there are more imaginative ways of employing our existing high-end platforms than as frontline bomb trucks.
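The operating-cost side of the argument, using the flight-hour figures above (a sketch; published cost-per-flight-hour numbers vary widely by accounting method):

```python
# Sortie-cost sketch using the article's figures: what 1,000 flight hours
# cost on a $44,000/hr fighter versus a $2,000/hr drone. Illustrative only;
# actual cost-per-flight-hour figures depend heavily on accounting method.

HOURS = 1_000
for name, rate in [("high-end jet", 44_000), ("low-cost drone", 2_000)]:
    print(f"{name}: ${HOURS * rate / 1e6:.0f}M per {HOURS:,} flight hours")
# -> $44M vs $2M: for uncontested missions, each jet-hour buys 22 drone-hours.
```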

AI for Sense-Making, Cyber, and Space

While AI will play a huge role in augmenting conventional platforms, it will also play four additional roles. First, it has the potential to automate planning and strategy. Second, it can revolutionize sensor technology by fusing and interpreting signals more efficiently than ever before. Third, it has a massive role to play in space-based systems, particularly in information fusion to counter hypersonics. Fourth, it can enable next-generation cyber and information warfare capabilities.

Imagine an ocean in which submarines cannot hide effectively, negating one leg of the triad. Imagine middle powers fielding far more competent forces because, while they lack the resources to train human pilots to the level of the United States Air Force, they possess the design expertise required to field AI-powered platforms. Imagine cyber attacks engineered by AI and executed by AI at scale. Imagine long-running, fully automated information warfare and espionage programs run by AI systems. If AI is applied creatively in nation-state competitions, it has the potential to create significant, lasting impact and deliver a game-changing edge.

Software: The Ultimate Weapon

Software, AI, autonomy—these are the ultimate weapons. These technologies are the difference between hundreds of old MiG-19 and MiG-21 fighter jets lying in scrap yards and their transformation into autonomous, maneuverable, so-called “attritable,” or expendable, supersonic drones built from abundant airframes, equipped with swarm coordination and the ability to operate in contested airspace. Gone are the days when effectiveness and capability could be ascribed to individual systems and platforms. Now, it is all about the network of assets: how they communicate, how they decide to act, and how efficiently they counter the system that is working in opposition to them. An individual aircraft carrier or a squadron of strategic bombers is no longer as independently meaningful as it once was.

In the emerging environment, network-connected, cognitive systems of war will engage each other. They will be made up principally of software, but also of legacy weapons platforms, humans, and newer assets capable of autonomous decision and action. The picture of the environment in which they operate across time and space will only be made clear by intelligent systems capable of fusing massive amounts of data and automatically interpreting them to identify and simulate forward the complex web of probabilities that result. Which actions are likely to be successful? With what degree of confidence? What are the adversary’s most likely counter-moves? The large scale, joint application of autonomously coordinated assets by a cognitive system will be unlike anything that has come before. It is this fast-evolving new paradigm, powered by artificial intelligence at every level, from the tactical to the strategic, that demands our attention. We must no longer focus on individual platforms or stand-alone assets, but on the cognitive system that runs an autonomous “Internet of War.”

Integrating the “LEGO bricks” of intelligence and autonomy into conventional platforms results in unconventional upgrades. A Chinese-built Shenyang J-6 Farmer fighter jet with autonomy is not just a 1950s-era write-off. It becomes a system with new potential, diminished logistics dependencies, and an enhanced efficacy that goes far beyond an engine or radar upgrade. Broadly, the consequences of the use of AI to revitalize and reinvent conventional platforms will be hard to ignore.

Preparing for an Autonomous, Software-Fueled Future

Despite the global shift in value from the physical to the digital, and the tremendous latent potential of AI, the U.S. Department of Defense has not traditionally been at its best when it comes to understanding, acquiring, or deploying software capabilities. Hardware platforms come far more naturally to our acquisition professionals. We can hope for a change of heart and perspective, but absent that, for AI to be meaningful to the department in the near term, we must reinvent, enhance, and reimagine existing platforms even as we build new ones. Only then will we cost-effectively fulfill needs and create significant new capabilities that open the door to even greater future potential. Briefing after briefing on the potential of AI, or distributing primers on machine learning inside the confines of the Pentagon, will not lead to critical adoption; the performance gains that result when AI is integrated into platforms will be the proverbial proof that lies in the eating of the pudding.

We have made the mistake of being too slow to adapt, and not predicting the next conflict well enough to be prepared. Perhaps some of our allies have made the same mistake. In fact, a report from the European Council on Foreign Relations (ECFR) concluded that “the advanced European militaries would perform badly against Azerbaijan’s current UAS-led strategy.” 17 The truth is that we have developed an inflated opinion of the quality of our readiness because over the past 40 years we have not had to face opponents that were able to turn our omissions into unforgivable sins. The future may not be so kind.

To compete in this new era of exponential technologies, the U.S. military and our intelligence agencies need to go all-in on digital and physical systems powered by artificial intelligence. Imbued with synthetic cognition, such systems can make a meaningful difference to every branch of our armed services and our government organizations. A serious effort to fuel the development of such systems will lay the groundwork for true, full-spectrum AI adoption across government. But for any of this to become reality, long held views and processes in the Defense Department must change. In order to turn the tide, at a minimum, we need to:

  • Take a “let a thousand flowers bloom” approach to ideation and experimentation. Financially incentivize startups to contribute to innovation and encourage them to rethink platforms (note: $50,000 is not an incentive, especially in the context of the massive hurdles companies need to overcome to become a government supplier). Red tape—from clearances to past-performance requirements—often makes it impossible for young companies to participate and should be rethought. The focus should be on delivering capability, not how the capability is delivered.
  • Use existing platform upgrade opportunities to source autonomy and AI technology—particularly from younger, innovative companies—and incorporate it into systems that already exist. Rather than turning platform upgrades into a vendor annuity, DOD can use upgrade roadmaps to accelerate a broad-based AI transformation and build subsystems that will find use across many areas.
  • Connect successful experiments with “end users” in our services early and quickly, capturing feedback and allowing rapid iteration.
  • Make fast funding mechanisms available directly to smaller, innovative companies to convert successful experiments to deployable systems. We must reduce bureaucratic burdens on smaller companies so that they can directly deliver to government customers. Presently, many smaller companies have no choice but to deliver their capabilities through a handful of primes. This can be both monetarily inefficient and unhealthy for the growth of the defense ecosystem.

If we are to remain competitive, an aggressive, fast-track effort to incorporate AI into existing and new platforms must be adopted. In the age of hyperwar, our willingness to embrace commercial innovation, our decisiveness in acknowledging that we live in a post-platform era, and most importantly, the speed with which we operationalize new investments, will be the attributes that lead to victory. PRISM

1 “What Do AI And Fighter Pilots Have To Do With E-Commerce? Sentient’s Antoine Blondeau Explains,” GE News.

2 “How Great Engineering Managers Identify and Respond to Challenges – the OODA Loop Model,” Waydev.

3 https://hitconsultant.net/2019/08/22/ai-tech-beats-radiologists-in-stanford-chest-x-ray-diagnostic-competition/.

4 https://www.labroots.com/trending/technology/8347/ai-reads-handwriting.

5 https://news.sky.com/story/ai-algorithm-identifies-50-new-planets-from-old-nasa-data-12057528.

6 http://newsreadonline.com/russia-in-syria-simultaneously-launched-up-to-80-drones/.

7 “Demystifying the Correlation of Forces Calculator,” army.mil.

8 “AlphaDogfight Trials Go Virtual for Final Event,” darpa.mil.

9 “Russia bets big on Mini Drones for Attack Helicopter, Combat Troops,” defenseworld.net.

10 Military Watch Magazine.

11 “China’s Autoflight puts a canard twist on its latest long-range eVTOL,” newatlas.com.

12 “Iran showcases Shahed 181 and 191 drones during ‘Great Prophet 14’ Exercise,” The Aviationist.

13 “Iranian press review: Revolutionary Guard equips speed boats with suicide drones,” Middle East Eye.

14 “Ukraine Forming Venture with Turkey to Produce 48 Bayraktar TB2 Drones,” thedefensepost.com.

15 “Apache attack helicopters and weapons: $930 million price tag is unreal,” nationalheraldindia.com.

16 “UK eyes cheaper armed drones after Turkey’s successful UAV program,” IRIA News (ir-ia.com).

17 Air Forces Monthly, January 2021.


Artificial Intelligence and Future Warfare

Koichiro Takagi

This article originally appeared in Japanese in Foresight.

Artificial intelligence (AI) is one of the military technologies to which major powers have paid the most attention in recent years. The United States announced its Third Offset Strategy in 2014, which sought to maintain US military advantage through the use of advanced military technologies such as AI and unmanned weapons. The US National Security Strategy released on 12 October 2022 listed AI as one of the technologies in which the United States and its allies should promote investment and utilization.

Meanwhile, in 2019, China announced a new military strategy, Intelligentized Warfare, which utilizes AI. Officials of the Chinese People's Liberation Army (PLA) have stated that the PLA can overtake the US military by using AI. In a televised speech in September 2017, Russian President Putin said that the first country to develop true AI would rule the world. Major countries have thus shown great interest in the military use of AI in recent years, and a race to develop it is underway.

The Russo-Ukrainian war is the first conflict in history in which both sides have used AI. 1 Russia has used AI to conduct cyberattacks and to create deep-fake videos showing President Zelensky surrendering. Meanwhile, Ukraine has used facial recognition technology to identify Russian agents and soldiers, as well as to analyze intelligence and plan operations. However, AI is not a technology that should simply be used without limit in the military sphere. The most famous warning comes from physicist Stephen Hawking: AI may bring about the end of humankind. As symbolized by the science fiction film Terminator, a future in which weaponized AI loses control and revolts against humans has been invoked repeatedly. This article examines the advantages of the military use of AI, how it is likely to be used in future wars, and what dangers have been identified. It distinguishes four very different aspects of the military use of AI: faster information processing, faster decision-making, autonomous unmanned weapons, and use in cognitive warfare. Discussions of military AI often focus on only some of these aspects; this article treats them comprehensively.

Why the F-86 Outperformed the MiG-15 in the Korean War

AI is an enhancement of, or replacement for, the human brain. The weapons developed so far in human history have enhanced human muscles, eyes, and ears. Compared to primitive humans fighting with clubs, modern humans have far greater killing power, can see their enemies from thousands of miles away, and can communicate with allies across the same distances.

However, this is the first time in the long history of human warfare that the brain has been enhanced. The changes brought about by AI could therefore be unprecedented and distinctive.

Even before the practical application of AI, the speed at which the human brain processes information and makes decisions was the most important factor in determining who wins or loses a war. It was John Boyd of the US Air Force who first theorized this. Based on his own experience of air combat in the Korean War, Boyd believed that the speed of a pilot's decision-making, rather than the performance of the fighter aircraft itself, could make the difference between winning and losing a battle.

In September 1951, about a year after the start of the Korean War, the Chinese Air Force began participating in the conflict as part of the People's Volunteer Army. The new Soviet-made MiG-15 fighters flown by the Chinese Air Force were overwhelmingly superior to the US F-86 Sabre in performance characteristics such as ceiling, thrust, and turning ability, and in the power of their guns.

However, it was the less capable US F-86 that won the air battle, with a kill ratio of 3.7 to 1. 2 The reasons were the F-86's superior radar and its good cockpit visibility, which allowed the pilot to see in all directions. Visibility from the cockpit of the MiG-15, by contrast, was extremely poor.

Boyd's experience as a young pilot in the Korean War led him to develop a new military theory in the 1960s and 1970s. His OODA loop was the first theory to argue for the importance of speed of decision-making in war.

According to his theory, combat is won or lost not on the performance of the weapon but on the speed of the OODA (Observe, Orient, Decide, Act) loop. In other words, the F-86, with its good pilot visibility, had superior decision-making speed: its pilot could assess the situation and take action more quickly. The theory has since been applied in business and other fields.
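To make the cycle concrete, the following minimal Python sketch renders the OODA loop as a software decision cycle. Everything in it (the sensor, the threshold, the two possible actions) is a hypothetical stand-in invented for illustration; the timing printout simply dramatizes Boyd's point that the speed of the whole cycle, not any single step, is what matters.

```python
import time

def observe(sensor):
    """Collect a raw reading from the environment."""
    return sensor()

def orient(reading, context):
    """Interpret the reading against prior context."""
    return {"threat": reading > context["baseline"]}

def decide(assessment):
    """Choose an action based on the oriented picture."""
    return "evade" if assessment["threat"] else "continue"

def act(action):
    print(f"executing: {action}")

def ooda_loop(sensor, context, cycles=3):
    for _ in range(cycles):
        start = time.perf_counter()
        act(decide(orient(observe(sensor), context)))
        # Boyd's claim: completing this cycle faster than the opponent wins.
        print(f"cycle time: {time.perf_counter() - start:.6f} s")

ooda_loop(sensor=lambda: 0.7, context={"baseline": 0.5})
```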

When Boyd developed this theory, AI had not yet been developed. However, Boyd showed that if humans could process information and make decisions quickly, they could win battles, even with inferior weapon performance.

However, Boyd's theory was derived from a relatively simple action: piloting a fighter aircraft. In subsequent wars, it became clear that there were limits to human information-processing capacity when the theory was applied to an entire military operation involving tens or hundreds of thousands of soldiers, weapons, and sensors operating organically.

Failures of Network-Centric Warfare

Many people have described the 1991 Gulf War as the first space war. The US military used reconnaissance satellites, communications satellites, and Global Positioning System (GPS) satellites to gather accurate information on the Iraqi army and shared the collected information over communications networks. The US military also used precision-guided weapons to carry out precise attacks against Iraqi forces.

New theories on decision-making speed developed in the 1990s when this method of warfare became possible. The leading theory was Network Centric Warfare (NCW), proposed by Arthur Cebrowski of the US Navy in 1998. He argued that networking military organizations using rapidly developing information and communications technology would enable rapid decision-making and that overwhelming victory could be expected due to superior decision-making speed. 3

Cebrowski's theory was inspired by the business model of Wal-Mart, the fast-growing retailer of the 1990s. Wal-Mart collected payment information from its cash registers, shared it in real time with producers and distributors, and used it to optimize production and delivery, which was revolutionary at the time. Applying this to military organizations, Cebrowski proposed that networking them would enable optimal and rapid decision-making.

President George W. Bush began putting Cebrowski's theory into practice immediately after taking office in 2001. NCW was the theoretical core of the transformation of the US military at the time. In the course of this reorganization, the United States entered the wars in Afghanistan and Iraq.

However, in these two wars, NCW was not always successful. One factor was information overload combined with inadequate information-processing capability.

During the wars, US military command centers in Kuwait and Qatar collected large amounts of information from satellites, manned and unmanned aircraft, a wide variety of radars and sensors, and many field units. But the information collected at a wartime headquarters is diverse and complex, full of duplication, ambiguity, and inconsistency; it is not a clean data set like Wal-Mart's payment records. The information-processing technology of the time could not automatically process such complex data, and the US military command was likewise unable to properly extract and exploit the large amount of information gathered.

Cebrowski's NCW theory came to an end with the departure of President Bush, but the development of theories to speed up decision-making has continued since then. 

Improvement of Information Processing

The US Department of Defense (DoD) is currently developing a theory of Mosaic Warfare. At its core is the concept of Decision-Centric Warfare (DCW), 4 which, like the theories of Boyd and Cebrowski, aims at relative superiority in decision-making speed. DCW differs significantly from the earlier concepts in that it makes use of AI and unmanned weapons. DoD is also developing Joint All-Domain Command and Control (JADC2), which uses AI to process data collected by a large number of sensors in order to support the commander's decision-making.

Information-gathering systems used in modern warfare include satellites, manned and unmanned aircraft, ground and undersea radars and sensors, the number and variety of which are increasing. In addition, open-source data can be collected from the internet and other sources. Furthermore, the volume of data sent from each of these information-gathering devices is exploding, for example, the resolution of images taken by satellites has improved dramatically. AI has made it possible to process the vast amounts of data coming from these information-gathering systems. In addition, AI can detect correlations between many different data sets, enabling the detection of changes that would otherwise go unnoticed by humans. 
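As a rough illustration of the kind of cross-sensor correlation described above, the sketch below trains an off-the-shelf anomaly detector on synthetic "normal" windows built from three hypothetical feeds and flags a window in which all three spike together. The feeds, numbers, and model choice are assumptions for illustration, not a description of any fielded JADC2 component.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row is one time window; the columns are hypothetical features from
# different feeds (imagery change score, emitter count, radar track count).
normal = rng.normal(loc=[0.2, 50.0, 5.0], scale=[0.05, 5.0, 1.0], size=(500, 3))
unusual = np.array([[0.9, 120.0, 25.0]])  # a correlated spike across all feeds

detector = IsolationForest(random_state=0).fit(normal)
print(detector.predict(unusual))  # -1 flags the window as anomalous
```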

The PLA is also focusing on the improvement of its information processing capabilities using AI. For example, they are building a network of unmanned weapons and undersea sensors in the waters around China and are using AI to process information from these networks. 5 Furthermore, they are considering a new form of electronic warfare that uses AI to analyze received radio signals and optimize jamming. 6  

As discussed above, the speed of decision-making is an extremely important factor in deciding who wins or loses a war, and various theories have been developed on this subject. However, the Iraq and Afghanistan wars revealed a lack of capacity to process large amounts of information. And now that AI can process large amounts of data, rapid decision-making is becoming a reality. 

The Dangers of Flash War

Given that speed of decision-making is the most important factor in deciding who wins or loses a war, if AI not only processes information but also makes decisions itself, it can be expected to decide incomparably faster than humans. Furthermore, AI is free of the biases, heuristics (intuition, preconceptions), fear and other emotions, and fatigue inherent in the human brain, and so can be expected to make objective and accurate decisions even in the extreme conditions of war.

However, one of the problems with delegating decision-making in war to AI is the risk of flash war, where critical decisions, such as starting a war or launching nuclear missiles, are made at a moment's notice. 7  

Throughout history, national leaders have invariably made restrained decisions when there was a risk of nuclear war. US President Harry Truman dismissed UN Commander Douglas MacArthur, who advocated the use of nuclear weapons in the Korean War. President Dwight Eisenhower did not send US troops to the Hungarian uprising in 1956. President John F. Kennedy did not stop Soviet troops from building the Berlin Wall. In such situations, an AI performing calculations mechanically risks concluding that an early preemptive strike would deter the other side and create a strategic advantage.

Since the 2010s, a number of studies in the United States have identified problems with AI making strategic decisions, such as launching nuclear missiles. 8 Many of these studies have argued that the use of AI increases the risk of nuclear war.

Invariably, these studies cite the anecdote of the Soviet officer who saved the world: in 1983, a Soviet automatic warning system detected nuclear missiles flying from the United States. 9 According to the manual, the Soviet military had to launch its nuclear missiles in counterattack before the US missiles landed. However, the duty officer, a lieutenant colonel in the Soviet Air Defense Forces, suspected a false alarm and voided the launch order. In fact, the warning system had mistaken sunlight reflecting off clouds for missile launches, and the officer prevented the destruction of the world by nuclear weapons.

Given this, the prevailing view in Western countries is that the role of AI should be only to support human decision-making and that humans should make the final decision. The Decision-Centric Warfare concept currently being developed by DoD likewise states that the role of AI will be to support human decision-making; for example, AI will create operational plans and propose them to the commander.

AI may be able to create better operational plans than humans. For example, AlphaGo surprised professional Go player Lee Sedol by playing a move that no human would play. Similarly, AI may devise unconventional schemes of maneuver that humans would never come up with. It should be humans, however, who approve and implement them.

JADC2, being developed by the US military, likewise aims to reduce engagement times: information collected from a large number of sensors is used to identify targets, and AI recommends to the commander the weapons to attack those targets. 10 AI could also predict weapon failures in advance and recommend that maintenance units carry out maintenance at the optimal time. Furthermore, AI could predict the supplies that field units will need and recommend optimal quantities and transport plans to supply and transport units. 11
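The failure-prediction idea can be sketched in a few lines: train a classifier on historical component data and ask it which items are likely to fail soon. The data here is synthetic and the features (engine hours, vibration, oil temperature) are invented for illustration; a real system would learn from actual maintenance logs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Invented features: hours since overhaul, vibration level, oil temperature.
X = np.column_stack([
    rng.uniform(0, 2000, n),
    rng.normal(1.0, 0.3, n),
    rng.normal(90.0, 10.0, n),
])
# Hypothetical ground truth: failures cluster at high hours and high vibration.
y = ((X[:, 0] > 1500) & (X[:, 1] > 1.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```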

AI Dominates Human Decision-Making

However, even if the role of AI is limited to supporting human decision-making and humans make the final decision, there is still a risk of human judgment being dominated by AI. Studies often cite the downing of a US Navy F/A-18 by a US surface-to-air missile unit in the 2003 Iraq War. In this incident, the automated system of a US Patriot surface-to-air missile battery misidentified a friendly aircraft as an enemy aircraft. The human operators, who had only a few seconds to decide, fired the missile in accordance with the automated system's indication, killing the pilot of the friendly F/A-18. 12

What this case shows is that in the stressful conditions of combat, with only a short time to decide, humans are forced to rely on the judgment of machines. Psychologists have also demonstrated that as trust in machines increases, humans continue to trust them even when evidence emerges that the machine's judgment is incorrect. 13

As discussed above, delegating decision-making itself to AI is expected to reduce the time required for decision-making and provide objective judgments that are not influenced by biases, heuristics, and fatigue inherent in humans. However, a number of studies point to the dangers of flash wars of instantaneous escalation and the risk of human decision-making being dominated by AI.

Unmanned Weapons Outperform Humans in Decision-Making Speed

Unmanned weapons have a long history, with Israel putting them to practical use in the 1970s. During the 1991 Gulf War, the US military used unmanned reconnaissance aircraft in combat. The use of unmanned weapons then increased dramatically during the Iraq War, with the US inventory growing from 150 systems in 2004 to 12,000 in 2008.

Human operators remotely operated these early drones, which had limited AI autonomy. Nevertheless, unmanned weapons had numerous advantages and their use spread rapidly.

Unmanned weapons can operate continuously, at high altitudes or in deep water, and make rapid turns at high speed that would be impossible with a human crew on board; consequently, they can achieve unparalleled kinetic performance. They can carry out missions in hazardous locations without risking the lives of a pilot or crew. The absence of a crew also means that living space, life support, and safety equipment are not required, allowing cheaper manufacturing and smaller airframes. Smaller aircraft are less visible on radar and can operate undetected.

However, remote control by human operators requires a communication link between the unmanned weapon and the operator, and the weapon becomes unmaneuverable if that link is disrupted. In the Russo-Ukrainian war, Ukraine's Bayraktar TB2 unmanned aircraft were effective in the early stages of the fighting, but Russian electronic warfare units gradually jammed the remotely piloted TB2s. 14

In contrast, AI-equipped unmanned weapons that operate autonomously can continue to operate in the face of jamming. In addition, the US military is currently facing an overwhelming shortage of manned aircraft pilots and unmanned aircraft operators, and autonomous weapons will help to resolve the personnel shortage. 15

Above all, if an AI pilots an unmanned weapon, it will operate significantly faster. As the match-up between the US F-86 and the Soviet MiG-15 in the Korean War showed, decision-making speed is the most important factor in warfare. For this reason, when remotely piloted weapons are pitted against AI-autonomous ones, human operators cannot compete with the overwhelmingly faster decision-making of autonomous unmanned weapons.

Further development of autonomous technology could lead to swarms consisting of numerous unmanned weapons. A swarm of ants, for example, self-organizes as if it were a single living organism, carrying food in a procession and building a complexly shaped nest even though no individual ant directs it. Similarly, if a large number of autonomous unmanned weapons were to self-organize, the result would be a powerful weapon swarm that continues to carry out its mission as a whole even if communications are disrupted or some individual units are lost to attack.
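The self-organization idea is easy to demonstrate in miniature. In the sketch below, twenty agents follow only two local rules (steer toward the group's center, push away from close neighbors) and nevertheless hold together as a coherent swarm with no central controller. The agents, rules, and coefficients are toy assumptions in the spirit of the ant analogy, not a model of any real weapon system.

```python
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0, 10, size=(20, 2))  # 20 agents scattered on a plane
vel = rng.normal(0, 0.1, size=(20, 2))

for _ in range(100):
    # Rule 1: cohesion -- steer gently toward the group's center of mass.
    cohesion = (pos.mean(axis=0) - pos) * 0.01
    # Rule 2: separation -- push away from neighbors that get too close.
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + np.eye(20)  # avoid divide-by-zero
    separation = (diff / dist[:, :, None] ** 2).sum(axis=1) * 0.05
    vel = 0.9 * vel + cohesion + separation
    pos += vel

spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
print(f"mean distance from swarm center: {spread:.2f}")
```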

Advanced AI technologies are already making unmanned weapons more autonomous. In recent years, even remotely piloted unmanned aerial vehicles have increasingly used AI for autonomous take-off, landing, and routine flight, leaving human operators to concentrate on tactical decisions such as selecting and attacking targets. For example, the US military's X-47B unmanned stealth combat aerial vehicle auto-landed on an aircraft carrier in 2013 and refueled in the air under autonomous control in 2015. As such autonomy advances, unmanned weapons will be able to carry out an overarching mission, such as protecting an aircraft carrier from enemy attack, on their own.

Regulations That Only Tie the Hands of the Good People

However, the prevailing view in Western countries is that unmanned weapons that attack autonomously without human intervention are unethical and should be regulated. 16 The US DoD established guidelines on the autonomy of weapon systems and set certain standards. 17

The danger of autonomous unmanned weapons running amok, as in the future depicted in the science fiction film Terminator, has also been raised. The Swedish philosopher Nick Bostrom's thought experiment is often cited in this connection. Suppose a fully autonomous robot is tasked with making paper clips. It will find the most effective means of achieving its goal and carry them out; gradually it begins to destroy human civilization, using up all the resources on the planet to make paper clips, and humanity perishes. Eventually it expands into space and overruns the cosmos.

International efforts are also underway to regulate Lethal Autonomous Weapons Systems (LAWS). However, concrete progress has been scant: with China opposing regulation and Russia opposing the discussion itself, consensus has not been reached even on what should be regulated.

A dilemma exists in the regulation of AI-equipped autonomous weapons. As discussed above, speed of decision-making is of paramount importance in winning or losing a war. Regulating autonomous weapons therefore sacrifices this most important capability, and countries that do not comply with the regulation may unfairly benefit. In other words, there is a danger that the regulations will only tie the hands of the good people.

Controlling Human Cognition with Deep Fakes

China and Russia are attempting to use AI from a perspective completely different from the three discussed above: faster information processing, AI decision-making, and autonomous unmanned weapons. This is the use of AI in cognitive warfare, which targets the cognition of the human brain and seeks to influence the opponent's will, creating a strategically advantageous environment or bringing the opponent to its knees without a fight.

Qi Jianguo, former deputy chief of staff of the PLA, has stated that if the PLA gains an advantage in the development of AI technology, the PLA will be able to control human cognition, which is the lifeline of national security. 18 However, it is not clear how exactly the PLA intends to control human cognition.

A common possible method is the use of deep fakes, that is, videos, images, and audio falsified or generated using AI. One example is Russia's attempt, at the beginning of the Russo-Ukrainian war, to spread on social media a fake video of President Zelensky calling on his people to stop fighting and surrender.

Similarly, there are concerns that China may use AI-based language generation and other methods to create social media content, which could be used to manipulate public opinion in Taiwan or to discredit US military operations supporting Taiwan. 19

Indeed, when US Speaker of the House Nancy Pelosi visited Taiwan in August 2022, China spread fake news that PLA Su-35 fighter jets had violated Taiwan's airspace. And in 2020, the Chinese government systematically spread the theory that the virus causing COVID-19 was a US biological weapon, using state media and social media. 20

Attempts to spread disinformation and create false perceptions among people have existed for many years. A frequently cited past success is the disinformation campaign launched by the Soviet Union in 1983, which claimed that the human immunodeficiency virus (HIV) was a man-made virus created by the US military. This disinformation spread around the world and reshaped many people's perceptions. One survey shows that even today, 48 percent of African Americans believe that HIV is a man-made virus.

However, it took the Soviet Union ten years to spread its HIV disinformation around the world. By comparison, AI-based deep fakes can create large amounts of disinformation in a short time, and bots operating automatically on the internet can spread that disinformation to a large number of people.

Another method is to confuse the opponent's decision-making through disinformation. Decision-Centric Warfare, currently being developed by DoD, uses electronic warfare weapons, unmanned aerial vehicles, and other means to confuse the enemy and impose a decision load on the enemy command, thereby gaining relative superiority in decision-making speed.

Will AI or Human Intelligence Determine Future Warfare?

Whereas conventional weapons have enhanced human muscles, eyes, and ears, AI enhances the human brain for the first time in human history, and could deliver the most important capability in warfare: rapid decision-making.

Many strategists have argued that the characteristics of warfare change with technological progress but that the nature of warfare remains constant. For example, Carl von Clausewitz's observations that war is a contest of wills and that a fog of war always exists have been regarded as part of war's immutable nature.

However, some theorists have argued that this supposedly immutable nature of warfare would be changed by science and technology. In the 1990s, for example, when numerous satellites and sensors came into use in warfare, some argued that the fog of war would be lifted: the traditional battlefield, shrouded in fog with no information on the enemy, would be cleared by the accurate information provided by satellites and other technologies.

However, the wars in Iraq and Afghanistan in the 2000s revealed these claims to be false. Many service members who fought in those wars testified that the battlefield was still shrouded in fog. The information available on the battlefield was incomplete and sometimes mutually contradictory, and more information came in than could be processed.

In the first place, Clausewitz's fog of war does not refer to a lack of information but to uncertainty and chance in war. In the extreme conditions of war, people are often unable to make accurate decisions because of fear or fatigue. Since ancient times there have been many instances in which a single soldier fleeing a battle out of fear led to the collapse of hundreds or more troops without a fight. Clausewitz's fog of war includes the uncertainties and coincidences that accompany human nature, such as fear.

Thus, those who claimed in the 1990s that the fog of war would lift were criticized for not understanding Clausewitz's theory. And it became clear once again that the nature of war as pointed out by Clausewitz is unchanging as long as the actors in war are human beings.

In the 2010s, with the practical application of AI, there were renewed indications that the nature of war could change. In May 2017, US Deputy Secretary of Defense Robert Work said that artificial intelligence may change the nature of war. 21 AI is not associated with the fear and fatigue inherent in humans and can process large amounts of information accurately in short periods of time. AI information processing, decision-making, and autonomous unmanned weapons, therefore, appear to be unrelated to the fog of war.

However, AI itself could be the source of a new fog of war. The dangers of flash wars, of human decision-making dominated by AI, and of autonomous unmanned weapons running amok bring new uncertainties and coincidences to war. Thus, many argue that even with the development of AI, as long as humans conduct warfare, the elements arising from human nature will persist and the nature of warfare will remain unchanged. 22

Underlying this debate is the question of whether science and technology or human intelligence will determine the future. In 1940, the Germans defeated the French army with the innovative concept of blitzkrieg using tanks, yet the defeated French were superior in both the number and the performance of their tanks. In 1870, the Prussian army defeated the French by exploiting railways, yet the defeated French were superior in both the extent and the quality of their railways.

Thus, throughout history, it has not been the superiority of science and technology itself, but the human intelligence that uses it, that has won or lost wars. Future warfare may be determined not by the science and technology of AI itself, but by the innovativeness of the concepts that utilize it, and by human intelligence and creativity. 

Lauren Kahn, "How Ukraine Is Remaking War: Technological Advancements Are Helping Kyiv Succeed," Foreign Affairs, August 29, 2022, https://www.foreignaffairs.com/ukraine/how-ukraineremaking-war

Martin van Creveld, The Age of Airpower (Public Affairs, 2011).

Arthur K. Cebrowski and John H. Garstka, "Network-Centric Warfare: Its Origin and Future," US Naval Institute Proceedings, January 1998.

Bryan Clark, Dan Patt, and Timothy A. Walton, "Advancing Decision-Centric Warfare: Gaining Advantage through Force Design and Mission Integration," Hudson Institute, July 2021, https://www.hudson.org/national-security-defense/advancing-decision-cen…

Alex Stephenson and Ryan Fedasiuk, "How AI Would—and Wouldn't—Factor into a US-Chinese War," War on the Rocks, May 3, 2022, https://warontherocks.com/2022/05/how-ai-would-and-wouldnt-factor-into-…

James Johnson, "AI, Autonomy, and the Risk of Nuclear War," War on the Rocks, July 29, 2022, https://warontherocks.com/2022/07/ai-autonomy-and-the-risk-of-nuclear-w…

James Johnson, "Dr. Strangelove Redux?" Journal of Strategic Studies, April 30, 2020, DOI 10.1080/01402390.2020.1759038.

Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin, "A High-Reward, Low-Risk Approach to AI Military Innovation," Foreign Affairs, May/June 2022, https://www.foreignaffairs.com/articles/united-states/2022-04-19/force-…

Benjamin Jensen, Scott Cuomo, and Chris Whyte, "Wargaming with Athena: How to Make Militaries Smarter, Faster, and More Efficient with Artificial Intelligence," War on the Rocks, June 27, 2018, https://warontherocks.com/2018/06/wargaming-with-athena-how-to-make-mil…

Bryan Clark, "The Fall and Rise of Russian Electronic Warfare," IEEE Spectrum, July 20, 2022, https://spectrum.ieee.org/the-fall-and-rise-of-russian-electronic-warfa…

Tyler Jackson, "Keep MQ-9 Pilots Flying," War on the Rocks, September 26, 2022, https://warontherocks.com/2022/09/keep-mq-9-pilots-flying/

Robert O. Work, James Winnefeld, and Stephanie O'Sullivan, "Steering in the Right Direction in the Military-Technical Revolution," War on the Rocks, March 23, 2021, https://warontherocks.com/2021/03/steering-in-the-right-direction-in-th…

Robert F. Trager and Laura M. Luca, "Killer Robots Are Here—and We Need to Regulate Them," Foreign Policy, May 11, 2022, https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-we…

Department of Defense, "Department of Defense Directive 3000.09: Autonomy in Weapon Systems," November 21, 2012, https://www.hsdl.org/?abstract&did=726163

Qi Jianguo (戚建国), "抢占人工智能技术发展制高点" ["Seize the Commanding Heights of Artificial Intelligence Technology Development"], China Military Online (中国军网国防部网), July 25, 2019, http://www.81.cn/jfjbmap/content/2019-07/25/content_239260.htm

Anthony J. Eastin and Patrick G. Franck, "Restructuring Information Warfare in the United States: Shaping the Narrative of the Future," Air and Space Power Journal, Winter 2020.

Sydney J. Freedberg, Jr., "War without Fear: DepSecDef Work on How AI Changes Conflict," Breaking Defense, May 31, 2017, https://breakingdefense.com/2017/05/killer-robots-arent-the-problem-its…

Peter L. Hickman, "The Future of Warfare Will Continue to Be Human," War on the Rocks, May 12, 2020, https://warontherocks.com/2020/05/the-future-of-warfare-will-continue-t…


Artificial Intelligence is the Future of Warfare (Just Not in the Way You Think)

Paul Maxwell | 04.20.20


Artificial intelligence is among the many hot technologies that promise to change the face of warfare for years to come. Articles abound that describe its possibilities and warn those who fall behind in the AI race. The Department of Defense has duly created the Joint Artificial Intelligence Center in the hopes of winning the AI battle. Visions exist of AI enabling autonomous systems to conduct missions, achieving sensor fusion, automating tasks, and making better, quicker decisions than humans. AI is improving rapidly and some day in the future those goals may be achieved. In the meantime, AI’s impact will be in the more mundane, dull, and monotonous tasks performed by our military in uncontested environments.

Artificial intelligence is a rapidly developing capability. Extensive research by academia and industry is shortening training times and steadily improving results. AI is effective at certain tasks, such as image recognition, recommendation systems, and language translation, and many systems designed for these tasks are fielded today and producing very good results. In other areas, AI falls well short of human-level achievement. These include handling scenarios the AI has not seen before; understanding the context of text (understanding sarcasm, for example) and objects; and multi-tasking (i.e., being able to solve problems of multiple types). Most AI systems today are trained to do one task, and to do so only under very specific circumstances. Unlike humans, they do not adapt well to new environments and new tasks.

Artificial-intelligence models are improving daily and have shown their value in many applications. The performance of these systems can make them very useful for tasks such as identifying a T-90 main battle tank in a satellite image, identifying high-value targets in a crowd using facial recognition, translating text for open-source intelligence, and generating text for use in information operations. The application areas where AI has been most successful are those with large quantities of labeled data, like ImageNet, Google Translate, and text generation. AI is also very capable in areas like recommendation systems, anomaly detection, prediction systems, and competitive games. An AI system in these domains could assist the military with fraud detection in its contracting services, predicting when weapons systems will fail due to maintenance issues, or developing winning strategies in conflict simulations. All of these applications and more can be force multipliers in day-to-day operations and in the next conflict.
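As a concrete example of the image-recognition use case, the sketch below runs an off-the-shelf pretrained classifier over a single image. The image path is a hypothetical placeholder; the point is how little code the basic workflow requires, not that this would be a deployable targeting tool.

```python
import torch
from PIL import Image
from torchvision import models

# Off-the-shelf pretrained classifier and its matching preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, and normalize as trained

img = Image.open("satellite_crop.jpg").convert("RGB")  # hypothetical image file
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

# Map the highest-scoring output back to a human-readable class name.
print("predicted class:", weights.meta["categories"][logits.argmax().item()])
```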

AI’s Shortfalls for Military Applications

As the military looks to incorporate AI's success at these tasks into its systems, some challenges must be acknowledged. The first is that developers need access to data. Many AI systems are trained on data that has been labeled by some expert, usually a human (e.g., labeling scenes that include an air defense battery), and large datasets are often labeled by companies employing manual methods. Obtaining and sharing this data is a challenge, especially for an organization that prefers to classify data and restrict access to it. An example military dataset might contain images produced by thermal-imaging systems, labeled by experts to describe the weapon systems found in each image, if any. Without sharing such a set with preprocessors and developers, an AI that uses it effectively cannot be created. AI systems are also vulnerable to becoming very large (and thus slow) and consequently susceptible to dimensionality issues. For example, training a system to recognize images of every possible weapon system in existence would involve thousands of categories, requiring enormous computing power and long dedicated runs on those resources. And because we are training a model, perfect accuracy would require an effectively unlimited supply of such images, which we cannot achieve. Furthermore, as we train these AI systems, we often attempt to force them to follow "human" rules such as the rules of grammar. Humans, however, often ignore these rules, which makes developing successful AI systems for things like sentiment analysis and speech recognition challenging. Finally, AI systems can work well in uncontested, controlled domains, but research is demonstrating that under adversarial conditions AI systems can easily be fooled, resulting in errors. Many DoD AI applications will operate in contested spaces, like the cyber domain, and thus we should be wary of their results.

Even setting aside the enemy's efforts to defeat the AI systems we may employ, these seemingly super-human models have limitations. An AI's image-processing capability is not very robust when given images that differ from its training set—for example, images where lighting conditions are poor, that are taken at an obtuse angle, or that are partially obscured. Unless such images were in the training set, the model may struggle (or fail) to accurately identify the content. Chat bots that might aid our information-operations missions are limited to hundreds of words and thus cannot completely replace a human who can write pages at a time. Prediction systems, such as IBM's Watson weather-prediction tool, struggle with dimensionality issues and the availability of input data due to the complexity of the systems they are trying to model. Research may solve some of these problems, but few will be solved as quickly as predicted or desired.

Another simple weakness of AI systems is their inability to multi-task. A human is capable of identifying an enemy vehicle, deciding which weapon system to employ against it, predicting its path, and then engaging the target. This fairly simple set of tasks is currently impossible for a single AI system to accomplish. At best, a combination of AIs could be constructed in which individual tasks are given to separate models. That type of solution, even if feasible, would entail a huge cost in sensing and computing power, not to mention the training and testing of the system. Many AI systems are not even capable of transferring their learning within the same domain. For example, a system trained to identify a T-90 tank would most likely be unable to identify a Chinese Type 99 tank, despite the fact that both are tanks and both tasks involve image recognition. Many researchers are working to enable systems to transfer their learning, but such systems are years away from production.
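The transfer-learning research mentioned above can be illustrated with a common fine-tuning pattern: freeze a pretrained backbone and retrain only a small new output layer for a related task. The two-class setup and the random stand-in batch below are assumptions for illustration; real use would require labeled imagery of the new vehicle types.

```python
import torch
from torch import nn
from torchvision import models

# Start from a backbone pretrained on generic imagery.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the learned visual features

# Replace the output layer with a new head for two hypothetical classes.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 random "images" with labels in {0, 1}.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```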

Artificial-intelligence systems are also very poor at understanding inputs and the context within them. AI recognition systems do not understand what an image is; they simply learn the textures and gradients of its pixels. Given scenes with those same gradients, AIs readily misidentify portions of the picture. This lack of understanding can result in misclassifications that humans would not make, such as identifying a boat on a lake as a BMP.

This leads to another weakness of these systems: the inability to explain how they made their decisions. Most of what occurs inside an AI system is a black box, and there is very little a human can do to understand how the system reaches its decisions. This is a critical problem for high-risk systems such as those that make engagement decisions or whose output feeds critical decision-making processes. The ability to audit a system and learn why it made a mistake is legally and morally important. How we assess liability in cases where AI is involved likewise remains an open research question. There have been many recent examples of AI systems making poor decisions based on hidden biases, in areas such as loan approvals and parole determinations. Unfortunately, work on explainable AI is many years from bearing fruit.

AI systems also struggle to distinguish between correlation and causation. The infamous example often used to illustrate the difference is the correlation between drowning deaths and ice cream sales. An AI system fed statistics about these two items would not know that the patterns correlate only because both are a function of warmer weather, and it might conclude that to prevent drowning deaths we should restrict ice cream sales. This type of problem could appear in a military fraud-prevention system fed data on purchases by month. Such a system could errantly conclude that fraud increases in September as spending increases, when really that is just a function of end-of-year spending habits.
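A few lines of code make the confounder problem concrete: two synthetic series that each depend on temperature correlate strongly with each other even though neither causes the other. All values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
temperature = rng.uniform(10, 35, 365)  # the hidden common cause
ice_cream_sales = 20 * temperature + rng.normal(0, 40, 365)
drownings = 0.3 * temperature + rng.normal(0, 1.5, 365)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")  # high, although neither causes the other
```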

Even without these AI weaknesses, the main area the military should be concerned with at the moment is adversarial attacks. We must assume that potential adversaries will attempt to fool or break any accessible AI systems that we use. Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to evade intrusion-detection systems; and logistical systems will be fed altered data to clog the supply lines with false requirements.

Adversarial attacks can be separated into four categories: evasion, inference, poisoning, and extraction. It has been shown that these types of attacks are easy to accomplish and often do not require computing skills. Evasion attacks attempt to fool an AI engine, often in the hopes of avoiding detection—hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus. The primary survival skill of the future may be the ability to hide from AI sensors, and the military may need to develop a new type of camouflage to defeat AI systems, because it has been shown that simple obfuscation techniques such as strategic tape placement can fool AI. Evasion attacks are often preceded by inference attacks, which gain information about the AI system that is then used to enable the evasion. Poisoning attacks target AI systems during training. Here the threat is enemy access to the datasets used to train our tools: mislabeled images of vehicles could be inserted to fool targeting systems, or maintenance data could be manipulated so that imminent system failure is classified as normal operation. Given the vulnerabilities of our supply chains, this would not be unimaginable and would be difficult to detect. Extraction attacks exploit access to the AI's interface to learn enough about its operation to create a parallel model of the system. If our AIs are not secure from unauthorized users, those users could predict the decisions our systems will make and use those predictions to their advantage. One could envision an opponent predicting how an AI-controlled unmanned system will respond to certain visual and electromagnetic stimuli and thereby influencing its route and behavior.
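As a sketch of the evasion category, the snippet below applies the fast gradient sign method (FGSM), a standard technique for this kind of attack: perturb the input in the direction that most increases the model's loss. The tiny untrained model and random "image" are stand-ins, so the labels are arbitrary; the point is the mechanics of the perturbation.

```python
import torch
from torch import nn

# Toy stand-ins: an untrained linear classifier and a random 28x28 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)
true_label = torch.tensor([3])  # arbitrary assumed ground-truth class

# Compute the loss and its gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# FGSM step: nudge every pixel by epsilon in the direction that raises the loss.
epsilon = 0.1  # perturbation budget; small enough to look unchanged to a human
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```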

The Path Forward for Military AI Usage

Artificial intelligence will certainly have a role in future military applications. It has many application areas where it will enhance productivity, reduce user workload, and operate more quickly than humans. Ongoing research will continue to improve its capability, explainability, and resilience. The military cannot ignore this technology. Even if we do not embrace it, certainly our opponents will, and we must be able to attack and defeat their AIs. However, we must resist the allure of this resurgent technology. Placing vulnerable AI systems in contested domains and making them responsible for critical decisions opens the opportunity for disastrous results. At this time, humans must remain responsible for key decisions.

Given the high probability that our exposed AI systems will be attacked and the current lack of resilience in AI technology, the best areas to invest in military AI are those that operate in uncontested domains. Artificial-intelligence tools that are closely supervised by human experts or that have secure inputs and outputs can provide value to the military while alleviating concerns about vulnerabilities. Examples of such systems are medical-imaging diagnostic tools, maintenance-failure prediction applications, and fraud-detection programs. All of these can provide value to the military while limiting the risk from adversarial attacks, biased data, context misunderstanding, and more. These are not the super tools sponsored by the AI salesmen of the world but are the ones most likely to have success in the near term.

Lt. Col. (Ret.) Paul Maxwell is the Cyber Fellow of Computer Engineering at the Army Cyber Institute at the United States Military Academy. He was a cyber and armor branch officer during his twenty-four years of service. He holds a PhD in electrical engineering from Colorado State University.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Staff Sgt. Jacob Osborne, US Marine Corps

Tom Rozman


This is a thoughtful and worthwhile discussion relative to this ever developing technology. The consideration of AI's current array of possible military applications and the technology's vulnerabilities is clearly an important one. What is an open question is how effective our development and application of the technology and its capabilities will be?

James B

AI as the author describes it sounds wildly overcomplicated, with a focus on perfect complete solutions rather than good enough partial solutions. Not to say that the present DOD bureaucracy isn't trying to be perfect and complicated–that would fit past patterns–but it's stupid.

Train AI like we train humans: one task at a time, basic to detailed. "A human is capable of identifying an enemy vehicle [1], deciding a weapon system to employ against it [2], predicting its path [3], and then engaging the target [4]," because these are separate tasks that have been broken down to a PFC-simple level of decisionmaking. If "This fairly simple set of tasks is currently impossible for an AI system to accomplish," then the AI designers need to find a new line of work.

[1] We don't hand soldiers a deck of photos of T-90 tanks and tell them to learn ID'ing the T-90 and nothing else, we show them the basic components of tanks and teach them the combinations of different parts that lead to a T-90. An AI that can ID a T-90 may not know what type of tank a Type 99 is, but it should know that both images are tanks or tank-like vehicles. In combat and similar situations, this leads to a lot of "UI tank" classifications rather than certainty in typing, but it might not matter.

[2] Weapon selection is usually multiple-choice, not short-answer. Typical combat vehicles carry 2-3 types of ammunition ready for immediate use. An AI can use basic rules, the same ones human gunners do, to select ammo based on target type.

[3] and [4] We already use computers for this. Not the specific decision to fire, but all the mechanics of targeting and fire control are computerized on most of our weapon systems. The only difference between non-learning computers and learning AI is that AI-based FCS would optimize for real-world conditions faster, which is still a pretty basic subroutine.

If you train AI in part-tasks, you also get (probably not all, but still) insight into the decision-making process. Dictating the decision-tree would slow down the AI and make it less flexible, but it would make it useful outside that specific AI black box, which is an absolute requirement for large military organizations.

Andrew Koluch

Thanks for keeping the discussion goal-oriented. AI is better used to find those individuals and teams capable of performing faster and more consistently than a computer program than it is for replacing those people with electronics. When the focus shifts to the Warfighter, then the machine will excel.

P

Like James, I have dubious questions about military AI because I believe that the military doesn't have the best AI because private corporations DO NOT WANT TO WORK for the US Military due to ethical, moral, political, Religious, legal, and other reasons.

Take IBM's Watson supercomputer AI for example…the JEOPARDY champion. Watson might not be able to distinguish a Russian T-90 from a Chinese Type 99 MBT (but it should and I bet it can if tested properly), but I do think Watson can tell that the tank IS NOT NATO and thus is an enemy tank. Then that is good enough AI.

AI is important because if there is any Lunar Moon War, then robots, drones, and UGVs will most likely be sent first than SpaceMarines. Space Force cannot muster soldiers into rockets fast enough compared to launching remote AI drones, probes, and robots. Thus, future military AI has a place in space and it had better work. Does AI need to know the difference between a Russian Moon T-90 compared to a Chinese Moon Type-99? Does it need to? USA AI needs to at least know that MBTs of enemy nations shouldn't be there and even the camouflage pattern should be enough to tell the two tanks apart. Humans in the Loop will always be needed.

The quest for Quantum Tech might develop unbreakable codes for AI. If the enemy develops better and faster AI for their military, then the USA needs to compete as well. The author makes solid points, just that I don't believe the best AI is in the military…the best AI is in corporate and I doubt most corporate wants to share their AI…and that could be a future problem for the DoD. Or the best AI is in government and classified so deep that it's unknown to the public as Black Ops Top Secret Programs.





Artificial intelligence and the future of warfare: The USA, China, and strategic stability

Augusto C. Dall'Agnol

2022, Journal of Strategic Studies

Artificial Intelligence and the Future of Warfare presents the reader with a clear and elegant understanding of artificial intelligence, providing a robust technical foundation on the key technological advances in the evolution of AI for a non-technical audience. The author emphasizes that artificial intelligence is a force multiplier for both offensive and defensive capabilities, despite the paradoxically underdeveloped state of counter-AI capabilities.

Related Papers

Defense & Security Analysis

James Johnson

Recent developments in artificial intelligence (AI) suggest that this emerging technology will have a deterministic and potentially transformative influence on military power, strategic competition, and world politics more broadly. After the initial surge of broad speculation in the literature related to AI, this article provides some much-needed specificity to the debate. It argues that, left unchecked, the uncertainties and vulnerabilities created by the rapid proliferation and diffusion of AI could become a major source of instability and great power strategic rivalry. The article identifies several AI-related innovations and technological developments that will likely have genuine consequences for military applications, from the tactical battlefield to the strategic level.


The Washington Quarterly

This article demystifies the hype surrounding AI in the context of nuclear weapons and, more broadly, future warfare. Specifically, it highlights the potential, multifaceted intersections of this disruptive technology with nuclear stability. The inherently destabilizing effects of military AI may exacerbate tension between nuclear-armed great powers, especially China and the United States, but not for the reasons you may think.

Chathumal Chandrasiri

Gloria Shkurti

Considered the fourth industrial revolution, artificial intelligence (AI) has become a reality in today's world, especially in the military. Experts and academics have long emphasized the importance of AI. Furthermore, world leaders including Obama, Trump, Xi, and Putin have all made statements underscoring its significance, which can be summarized by what Putin stated in September 2017: whoever becomes the leader in AI will rule the world. This analysis provides a short introduction to what AI is, how it has evolved, and how it will change the nature of warfare. It then assesses why states invest in AI before turning to the cases of the US and China. For both states, the main official documents and statements are analyzed, the bureaucratic structures that work on AI are presented, and examples of how the US and China are applying AI in the military are provided. The conclusion briefly comments on how the strategies of China and the US differ, followed by some recommendations on what states like Turkey should do in the near future.

The use of artificial intelligence systems is ready to transition from basic science research and a booming commercial industry to strategic implementation in the Defense Acquisition system. The purpose of this research is to determine the problems awaiting artificial intelligence (AI) systems inherent to defense acquisition. AI is a field of scientific study focused on the construction of systems that can act rationally, behave humanly, and adapt. Achieving AI behavior requires attention to AI essentials: mobility, system perspective, and algorithms. Unfortunately, these essentials are underaddressed in the concept of operations that fuels the Joint Capabilities Integration and Development System. Influences on the concept of operations analyzed in this research include strategic documentation, joint technology demonstrations, and exercises that aim to capture technology-based lessons learned. Failure to address AI essentials causes problems in defense acquisition: system requiremen...


Stephan De Spiegeleire , Matthijs Maas

Artificial intelligence (AI for short) is on everybody's minds these days. Most of the world's leading companies are making massive investments in it. Governments are scrambling to catch up. Every one of us who uses Google Search or any of the new digital assistants on our smartphones has witnessed first-hand how quickly these developments are now moving. Many analysts foresee truly disruptive changes in education, employment, health, knowledge generation, mobility, etc. But what will AI mean for defense and security? In a new study, HCSS offers a unique perspective on this question. Most studies to date jump quickly from AI to autonomous (mostly weapon) systems. They anticipate future armed forces that mostly resemble today's, engaging in fairly similar types of activities with a still primarily industrial-kinetic capability bundle that would increasingly be AI-augmented. The authors of this study argue that AI may have a far more transformational impact on defense and security, whereby new incarnations of 'armed force' start doing different things in novel ways. The report sketches a much broader option space within which defense and security organizations (DSOs) may wish to invest in successive generations of AI technologies. It suggests that some of the most promising investment opportunities for generating the sustainable security effects that our polities, societies, and economies expect may lie in the realms of prevention and resilience. In those areas, too, any large-scale application of AI will have to result from a preliminary, open-minded (on all sides) public debate on its legal, ethical, and privacy implications. The authors submit, however, that such a debate would be more fruitful than the current heated discussions about 'killer drones' or robots. Finally, the study suggests that the advent of artificial superintelligence (i.e., AI that is superior across the board to human intelligence), which many experts now put firmly within the longer-term planning horizons of our DSOs, presents us with unprecedented risks but also opportunities that we have to start exploring. The report contains an overview of the role that 'intelligence' (the computational part of the ability to achieve goals in the world) has played in defense and security throughout human history; a primer on AI (what it is, where it comes from, and where it stands today, in both civilian and military contexts); a discussion of the broad option space it opens up for DSOs; 12 illustrative use cases across that option space; and a set of recommendations for, especially, small- and medium-sized defense and security organizations.

Austral: Brazilian Journal of Strategy & International Relations

Daniel Barreiros , Italo Poty

This article analyzes the US Department of Defense initiative formalized in the Summary of the 2018 Department of Defense Artificial Intelligence Strategy. The conclusion is that the US emphasis on the use of artificial intelligence to expand C4ISR capabilities (command, control, communications, computers, intelligence, surveillance, and reconnaissance) and the denunciation of "ethical risks" involving Lethal Autonomous Weapon Systems (LAWS) are narrative strategies aimed at dealing, in the short term, with the inability of US technology agencies to master autonomous military platform technologies and with the Russian resolve to develop these lethal autonomous military platforms.

Strategic Studies Quarterly

AI-augmented conventional capabilities might affect strategic stability between great military powers. The nuanced, multifaceted ways in which this emerging technology intersects with a range of advanced conventional weapons can compromise nuclear capabilities, amplifying the potentially destabilizing effects of those weapons. This article argues that a new generation of artificial intelligence-enhanced conventional capabilities will exacerbate the risk of inadvertent escalation caused by the commingling of nuclear and nonnuclear weapons. The increasing speed of warfare will also undermine strategic stability and increase the risk of nuclear confrontation.

AI-Enabled Cyber Warfare: The Future of Cyber Conflicts

As the world becomes increasingly dependent on technology, cyber warfare is emerging as a critical issue. Artificial intelligence (AI) is poised to play a significant role in the cyber conflicts of the future. AI-enabled cyber warfare is becoming a reality, with governments and other organizations developing and deploying AI-based tools for both defensive and offensive purposes. This paper examines the potential impact of AI on cyber warfare, including the ways in which AI can be used to enhance cyberattacks and defenses, the ethical and legal considerations surrounding the use of AI in warfare, and the potential implications for international relations and national security. The paper also discusses the need for greater international cooperation and coordination to prevent AI-enabled cyber warfare from escalating into full-scale conflicts. Ultimately, the paper argues that AI will play an increasingly important role in cyber warfare and that policymakers and military leaders must be prepared to address the unique challenges posed by this new form of warfare.
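To make the defensive half of this dynamic concrete, here is a minimal sketch, assuming a tabular anomaly-detection setup that is purely illustrative and not drawn from the paper above: an unsupervised learner is fit on synthetic benign network-flow statistics and then flags outlying flows. The feature choices, traffic distributions, and contamination rate below are all hypothetical.

```python
# Illustrative sketch of "defensive AI": unsupervised anomaly detection
# over synthetic network-flow features. All numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic benign traffic: (bytes transferred, session duration in seconds).
benign = rng.normal(loc=[500.0, 2.0], scale=[100.0, 0.5], size=(1000, 2))

# A handful of exfiltration-like flows: much larger, much longer sessions.
suspicious = rng.normal(loc=[5000.0, 30.0], scale=[500.0, 5.0], size=(10, 2))

# Fit the detector on benign traffic only; it learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

# predict() returns +1 for inliers and -1 for flagged anomalies.
flags = detector.predict(np.vstack([benign[:5], suspicious]))
print(flags)  # benign rows should mostly be +1, suspicious rows mostly -1
```

The same pattern, inverted, hints at the offensive side the abstract alludes to: an attacker with access to a comparable model could shape traffic to stay inside the learned "normal" envelope.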



COMMENTS

  1. The role of AI in future warfare

    For the purposes of this essay, the simple point is this: robotics and AI could take on a central, and very important, role in warfare by 2040—even without anything resembling a terminator or a ...

  2. Artificial intelligence & future warfare: implications for

Cummings, Artificial intelligence and the future of warfare. 82. The pace of military-use AI diffusion to other states and non-state entities will likely be constrained, however, by three major aspects of this phenomenon: (1) hardware constraints (i.e., physical processors); (2) the algorithmic complexity inherent to deep learning; and (3 ...

  3. PDF Artificial Intelligence and the Future of Warfare

While there are many parallels between human intelligence and AI, there are stark differences too. Every autonomous system that interacts in a dynamic environment must construct a world model and continually update that model (as shown in Figure 1 of the Chatham House paper). (A minimal illustrative sketch of such a world-model update loop appears after this list.)

  4. AI is Shaping the Future of War

Artificial Intelligence and Machine Learning are changing the future of war. Several years ago, before many were talking about artificial intelligence (AI) and its practical applications to the field of battle, retired United States Marine Corps General John Allen and I began a journey to not only investigate the art of the possible with AI, but ...

  5. Artificial Intelligence and the Future of War

In his post-apocalyptic novel Toward the End of Time, John Updike imagined a war between the US and China in 2020. Looking back, the novel's chief protagonist, one of Updike's many white ...

  6. Artificial intelligence & future warfare: implications for

The research aim of this paper is to review the existing research connecting artificial intelligence and international security, in order to confirm the hypothesis that the development and implementation of artificial intelligence herald an upcoming revolution in military affairs, which will transform the manner of states' entering strategic ...

  7. AI weapon systems in future war operations; strategy, operations and

The future of war will be fought by machines, but will humans still be in charge? 1 In recent years, the introduction and use of Artificial Intelligence Weapon Systems (AIWS) in war operations is adv...

  8. Artificial intelligence & future warfare: implications for

    To cite this article: James Johnson (2019): Artificial intelligence & future warfare: implications for international security, Defense & Security Analysis, DOI: 10.1080/14751798.2019.1600800

  9. Artificial Intelligence and Warfare

    New developments in Artificial Intelligence may enhance real-time surveillance and reduce the survivability of launch platforms. This is because AI provides an enhanced ability to adjust real-time warfare strategies better and faster than humans. Therefore, humans will interact with AI devices in ways that differ from interactions with today ...

  10. Johnson, J. (2021). Artificial Intelligence and The Future of Warfare

    According to James Johnson, PhD, Lecturer in Strategic Studies in the Department of Politics & International Relations at the University of Aberdeen and author of the book Artificial Intelligence and the Future of Warfare, the hype around this has made it easy to overstate the opportunities and challenges posed by the development and deployment ...

  11. Artificial Intelligence in Military Application

    Abstract. Artificial Intelligence (AI) is playing an increasing role in planning and supporting military operations and becoming a key tool in intelligence and analysis of the enemy's ...

  12. Artificial Intelligence and Future Warfare

The Decision-Centric Warfare concept currently being developed by DoD also envisions that the role of AI will be to support human decision-making; for example, AI will create operational plans and propose them to the commander. ... Will AI or Human Intelligence Determine Future Warfare? Whereas conventional weapons have enhanced human muscles, eyes, and ...

  13. Artificial intelligence and the future of warfare

    'Artificial intelligence and the future of warfare present the reader with a clear and elegant understanding of artificial intelligence as it provides a robust technical foundation concerning key technological advances in the evolution of AI for a non-technical audience.' Augusto C. Dall'Agnol, Journal of Strategic Studies

  14. Artificial Intelligence and the Future of Conflict

    Table of Contents. Introduction. It is hard to predict the exact impact and trajectory of technologies enabled by artificial intelligence (AI). 1 Yet these technologies might stimulate a civilizational transformation comparable with the invention of electricity. 2 AI applications will change many aspects of the global economy, security, communications, and transportation by altering how humans ...

  15. The Future of Military Applications of Artificial Intelligence: A Role

The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures? Michael C. Horowitz, ... to the possibility that an enemy could sever telegraph cables to prevent London from communicating with its forces in a future theater of war. ... this essay argues, ...

  16. Artificial intelligence and the future of warfare

    This volume offers an innovative and counter-intuitive study of how and why artificial intelligence-infused weapon systems will affect the strategic stability between nuclear-armed states. Johnson demystifies the hype surrounding artificial intelligence (AI) in the context of nuclear weapons and, more broadly, future warfare. The book highlights the potential, multifaceted intersections of ...

  17. Artificial Intelligence, Drone Swarming and Escalation Risks in Future

    Abstract. The rapid proliferation of a new generation of artificial intelligence (AI)-augmented and -enabled autonomous weapon systems (AWS), most notably drones used in swarming tactics, could have a significant impact on deterrence, nuclear security, escalation and strategic stability in future warfare.

  18. Applied Artificial Intelligence in Modern Warfare and National ...

    Artificial Intelligence (AI) applications in modern warfare have revolutionized national security power dynamics between the United States, China, Russia, and private industry. The United States has fallen behind in military technologies and is now at the mercy of big technology companies to maintain peace.

  19. Artificial Intelligence is the Future of Warfare (Just Not in the Way

    Artificial intelligence will certainly have a role in future military applications. It has many application areas where it will enhance productivity, reduce user workload, and operate more quickly than humans. Ongoing research will continue to improve its capability, explainability, and resilience. The military cannot ignore this technology.

  20. The Role of AI in Modern Warfare: A Revolution on the Battlefield

    IV. Predictive Analysis. Another significant role of AI in modern warfare is predictive analysis. AI algorithms can process vast amounts of data, including historical battle records, weather ...

  21. (PDF) Artificial intelligence and the future of warfare: The USA, China

    Artificial intelligence and the future of warfare present the reader with a clear and elegant understanding of artificial intelligence as it ... the paper argues that AI will play an increasingly important role in cyber warfare and that policymakers and military leaders must be prepared to address the unique challenges posed by this new form of ...

  22. Warfare in the Age of AI: A Critical Evaluation of Arkin's Case for

    These arguments focus on possible violations of human rights, IHL and human dignity. The final section offers a way forward in terms of how a ban on LAWS might be implemented and the role of governments, academics, and the public. Keywords. Lethal Autonomous Weapon Systems; Artificial Intelligence; Human Rights; International Humanitarian Law

  23. Drone Swarms Are About to Change the Balance of Military Power

    Essay; Drone Swarms Are About to Change the Balance of Military Power On today's battlefields, drones are a manageable threat. When hundreds of them can be harnessed to AI technology, they will ...

  24. PDF Artificial Intelligence and Future Warfare

evaluation and distribution, realistic war gaming, prediction, training simulations, communications, logistics, movements, etc. An artificial intelligence-based coordination framework called DART was first fielded in 1991 during the first Gulf War; DARPA claims it more than paid back the agency's investment in AI research.

  25. Ukraine's Battlefield Demands Fuel an AI Race

    Without question, artificial intelligence (AI) holds great promise as a weapon in warfare, but there are also serious ethical and strategic issues that raise questions about the morality of using ...
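Item 3 above makes the most concrete technical claim in this list: an autonomous system operating in a dynamic environment must build a world model and continually update it against noisy observations. As a minimal, purely illustrative sketch of that idea (the one-dimensional dynamics, noise levels, and numbers below are hypothetical, not taken from any of the sources above), a Kalman-style predict/update loop looks like this:

```python
# Minimal illustrative world-model loop: a 1-D Kalman-style filter.
# All dynamics, noise levels, and measurements are hypothetical.
import random

def predict(state, variance, process_noise=0.5):
    # Propagate the belief forward one step (constant-position model).
    return state, variance + process_noise

def update(state, variance, measurement, sensor_noise=2.0):
    # Fuse a new, noisy observation into the world model.
    gain = variance / (variance + sensor_noise)
    new_state = state + gain * (measurement - state)
    new_variance = (1.0 - gain) * variance
    return new_state, new_variance

if __name__ == "__main__":
    true_position = 10.0          # hidden ground truth the system estimates
    state, variance = 0.0, 100.0  # deliberately uninformed initial belief
    for step in range(10):
        state, variance = predict(state, variance)
        measurement = true_position + random.gauss(0.0, 1.5)
        state, variance = update(state, variance, measurement)
        print(f"step {step}: belief={state:.2f}, uncertainty={variance:.2f}")
```

The belief converges toward the true value while the tracked uncertainty shrinks; vastly scaled up and made multidimensional, this is the loop that sits under the "world model" language in the Chatham House excerpt.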