Artificial Intelligence in Military Decision Making – A Case for the Human to Stay in The OODA Loop

By Ameya Kelkar

Abstract

In today’s technologically saturated world, people rely on a range of technologies for their daily needs. All sectors today rely, to a certain degree, on technology to increase their output and efficiency. The militaries of the world are no different, often adopting emerging technologies before the civilian market. Incorporating the latest technologies into the military architecture is a core concern of any competent military force intent on remaining an efficient and effective fighting force. With the incorporation of Artificial Intelligence and Machine Learning models, however, the automation of weapon systems has been gathering pace, giving rise to the potential use of unmanned weapons.

This article will therefore explore both the benefits and the possible dangers of incorporating Artificial Intelligence and Machine Learning models into general warfare strategies. While examining the advantages, it will also stress the need to keep the human element in all essential parts of the OODA (Observe, Orient, Decide and Act) Loop, underscoring some of the potential pros and cons of making our warfighting machines more autonomous.

Introduction

The nations of the 21st century are witnessing transformational changes in their warfighting strategies. From the adoption of precision munitions to the introduction of the cyber and space domains into a nation’s battle calculus, there are many developments which modern militaries must acknowledge if they are to remain capable of answering the challenges they face. One of these is the growing prevalence of automation in a nation’s arsenal, and the technological developments that have made machines faster, more lethal, and more autonomous, allowing nations that can invest in such developments to meet present and future threats to their security.

In today’s world, technology is becoming an increasingly central aspect of life for any country. From the simple adoption of smartphones to the development and deployment of next-generation offensive and defensive weaponry, no country can afford to turn a blind eye to the importance of technology in its development. Nation-states, no matter how large their defence budgets or how capable their existing militaries, are always developing new technologies to enhance their overall arsenals. Automated weapon systems play a special role in determining a nation’s warfighting capabilities. Given their wide-ranging capabilities and their applicability across all sectors of the economy, the adoption of Artificial Intelligence and Machine Learning models is seen as an integral part of a nation’s overall development. Their efficacy has been demonstrated not only in simulated battlefields but also in the increased efficiency they have brought to other sectors of a nation’s economy (OECD, 2021, pg 22).

Technology and Autonomy of Military Hardware

In today’s technologically saturated world, many nations are working towards the adoption of systems which will enable them to make faster and better decisions on the ground. Relying on advanced computing power to gain any form of advantage on the battlefield, nations are looking to incorporate big data into their decision making, in the form of advanced algorithms providing faster computation and data analytics. Commonly referred to as Artificial Intelligence and Machine Learning (Artificial Intelligence (AI) Vs. Machine Learning, n.d.), these algorithms and models were created for the express purpose of big-data analysis. Such models have now been adopted by countries in their military infrastructure, particularly in their military hardware, to increase its overall autonomy when deployed (Kerbusch et al., 2018, pgs 7 – 8).

Machine autonomy, as commonly defined (Erskine and Mowbray, 2023), is the capability of a machine to make and execute decisions without a human decision-maker in the process, based on the analysis of data too large and varied for humans to parse in a limited time frame. From targeting to engaging an adversary, nations are investing more in military hardware autonomous enough to carry out missions with minimal human interference, the goal being to automate the analysis of gathered data and the calculation of probable variables and vectors to determine the most effective course of action. This concept of autonomy has already seen real-life application, as noted by international bodies (Wadhwa and Salkever, 2021) concerning the use of autonomous drones to conduct even targeted reconnaissance.

This becomes a major concern for a central aspect of the cycle of decision making known as the Observe, Orient, Decide, Act Loop (OODA Loop). In basic terms, the OODA Loop characterizes the general decision-making process leaders use to understand their environment and decide upon a course of action accordingly (Johnson, 2023, pg 6). In today’s strategic environment, military leaders must orient their decisions to the political and cultural environment through careful observation, and decide and act in a manner which meets their strategic objectives while keeping the larger picture in view. The OODA Loop is now seen as one which demands speed in gathering and analysing data if efficient and comprehensive decisions are to be made (Anderson et al., n.d.). For this purpose, militaries across the world are looking to incorporate Artificial Intelligence and Machine Learning models into their decision-making (Turek, n.d.), seeing it as an inevitable necessity if their defence structures are not to fall behind their adversaries. Nations see this as the next logical step, as it will enable them to collect and analyse data far faster than any human can. However, these ideas miss one key point of the OODA Loop: the loop exists for decision making.

Humanity as a feature of the Loop

One of the main problems with introducing Artificial Intelligence and Machine Learning models into the OODA Loop is that it leads to an overemphasis on the speed of data delivery rather than on the efficacy of that data in achieving one’s objectives (Daniels, 2021). With this emphasis on speed, the OODA Loop is shortened relative to the time needed for better and more comprehensive decision-making, with the focus falling on how much data can be analysed rather than on the implications that data has for society and for a nation’s tactical advantage at large. The inherent complexity of the data required, and of the calculations needed to reach a decision, is compounded by the fact that algorithms cannot understand human nature and the intricacies of human society (Menthe et al., 2024, pg 18). Being, in the base sense, a compilation of algorithms, Artificial Intelligence in the present and near future does not seem capable of understanding the broad societal contexts in which actions at a geopolitical level occur, military or otherwise.

These are some of the current problems associated with integrating Artificial Intelligence into the decision-making structures of the OODA Loop. They become all the more pressing when we recognize that the use of such weapon systems for combat falls within the domain of irregular or hybrid warfare, where the introduction of elements such as Artificial Intelligence can further complicate one’s understanding of the intricacies involved in the decision-making processes underlying any action undertaken in the military realm.

Possible solutions

It is clear, however, that militaries across the world will look to any advantage which can aid their decision-making processes, even if that entails using machines to harvest large amounts of data. As this development proceeds, military decision makers need to weigh the advantages of giving machines and algorithms control over decision-making processes against the disadvantages. Some of the more developed nations are already creating frameworks (Defence Artificial Intelligence Strategy, 2022, pgs 52 – 53) to guide current and future integration of Artificial Intelligence and Machine Learning models into their overall defence architecture. Taking a long-term perspective, militaries can also implement safeguards within their own decision-making structures to ensure the human element is never removed from the Decide portion of the OODA Loop. While it would be ideal to have the human element in all stages of the OODA Loop, humans can only observe and analyse a limited amount of information at any given time, a weakness not present in machines and algorithms. This advantage of Artificial Intelligence should therefore be harnessed by a human who remains at the centre, working with the machine to draw from the data it analyses conclusions which meet the requirements of sound decision-making (Scharre, 2015).
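The safeguard described above, machine speed in Observe and Orient, but a human gate on Decide, can be illustrated with a minimal sketch. All names, data, and thresholds here are hypothetical, invented purely to show the structure; this is not a description of any real system:

```python
# A toy human-in-the-loop OODA cycle: the machine automates Observe and
# Orient, but no Act step occurs without a human decision function.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Assessment:
    target_id: str
    machine_recommendation: str  # "engage" or "hold" (illustrative labels)
    confidence: float

def observe() -> List[dict]:
    # Stand-in for high-volume sensor ingestion (hypothetical data).
    return [{"id": "T1", "signature": 0.92}, {"id": "T2", "signature": 0.41}]

def orient(raw: List[dict]) -> List[Assessment]:
    # Stand-in for algorithmic analysis: fast, but blind to societal context.
    return [
        Assessment(r["id"],
                   "engage" if r["signature"] > 0.8 else "hold",
                   r["signature"])
        for r in raw
    ]

def ooda_cycle(human_decide: Callable[[Assessment], str]) -> List[str]:
    """Run one loop; every action is gated by the human decision function."""
    actions = []
    for assessment in orient(observe()):
        decision = human_decide(assessment)  # human stays in Decide
        actions.append(f"{assessment.target_id}:{decision}")  # Act
    return actions

# The human reviewer may override the machine's recommendation entirely,
# e.g. withholding engagement pending broader political context.
cautious_human = lambda a: "hold"
print(ooda_cycle(cautious_human))  # ['T1:hold', 'T2:hold']
```

The design point is that the machine never calls the Act step on its own: removing the `human_decide` parameter would require rewriting `ooda_cycle`, which is precisely the kind of structural safeguard the frameworks cited above aim to institutionalize.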

This will be all the more important when considering the use of military weapon systems in any future scenario. Since such systems demand precise attention to detail, not to mention split-second changes in decisions as new information is relayed, humans who can make these decisions under such constraints, weighing the many factors at play, are an asset no military can ignore. At any stage of development, trusting machines to make decisions for humans will prove a detriment to the operational usefulness of any military doctrine: it will not only raise serious doubts about the efficacy and ethicality of such systems, but will also place questions on the human counterpart should anything go wrong. A human who understands the intricate nature of human society and the human psyche will be a tool of increasing importance as Artificial Intelligence becomes an ever more prevalent facet of a nation’s technological development.

Conclusion

It is clear from the current speed of development that tools such as Artificial Intelligence and Machine Learning models, and other technologies aimed at making combat more efficient, will play an ever-increasing role in the overall decision-making capabilities of a nation’s national security infrastructure. However, while these may be very useful tools, it is important that humans at all levels of decision making see them as mere tools which cannot and should not replace the human role in the overall decision-making process. While Artificial Intelligence and Machine Learning models can help the human decision-maker gain a better understanding of the on-ground situation, it will still be up to the human to combine that data with the geopolitical, economic, social, and political situation of the target group to reach a more rounded and holistic decision than one arrived at by algorithms alone.

References

Anderson, Wendy R. et al. (n.d.). The OODA Loop: Why Timing Is Everything. European Parliament. Retrieved: 27/06/2023. https://www.europarl.europa.eu/cmsdata/155280/WendyRAnderson_CognitiveTimes_OODA%20LoopArticle.pdf.

Artificial Intelligence (AI) Vs. Machine Learning (n.d.). The FU Foundation of School of Engineering and Applied Science – Columbia University. Retrieved: 15/07/2023. https://ai.engineering.columbia.edu/ai-vs-machine-learning/.

Daniels, Owen (2021, May). Speeding Up the OODA Loop with AI: A Helpful or Limiting Framework? Joint Air & Space Power Conference 2021. Joint Air Power Competence Centre. Retrieved: 24/06/2023. https://www.japcc.org/essays/speeding-up-the-ooda-loop-with-ai/.

Defence Artificial Intelligence Strategy (2022, 15 July). Ministry of Defence – Government of the United Kingdom. Retrieved: 29/06/2023. https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy/defence-artificial-intelligence-strategy. Pgs 52 – 53.

Turek, Matt (n.d.). In The Moment (ITM). Defense Advanced Research Projects Agency. Retrieved: 13/07/2023. https://www.darpa.mil/program/in-the-moment.

Erskine, Jonathan and Mowbray, Miranda (2023, 10 January). What Killer Robots Mean For The Future of War. The Conversation. Retrieved: 29/06/2023. https://theconversation.com/what-killer-robots-mean-for-the-future-of-war-185243.

Johnson, James (2023). Automating the OODA Loop in The Age of Intelligent Machines: Reaffirming The Role of Humans in Command-and-Control Decision-Making in The Digital Age. Defence Studies. Vol 23 No 1. https://doi.org/10.1080/14702436.2022.2102486. Pg 46.

Kerbusch, Philip et al. (2018). Roles Of AI And Simulation For Military Decision Making, Science And Technology Organization. North Atlantic Treaty Organization. Retrieved: 13/07/2023. https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-160/MP-IST-160-PT-4.pdf. Pgs 7 – 8.

Menthe, Lance et al. (2024). Understanding the Limits of Artificial Intelligence for Warfighters: Volume 1, Summary. RAND Corporation. Retrieved: 05/01/2024. https://www.rand.org/content/dam/rand/pubs/research_reports/RRA1700/RRA1722-1/RAND_RRA1722-1.pdf.

OECD (2021). Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges, and Implications for Policy Makers. https://www.oecd.org/finance/artificial-intelligence-machine-learning-big-data-in-finance.htm.

Scharre, Paul (2015, 11 March). The Human Element in Robotic Warfare. War On The Rocks. Retrieved: 22/06/2023. https://warontherocks.com/2015/03/the-human-element-in-robotic-warfare/.

Wadhwa, Vivek and Salkever, Alex (2021, 05 July). Killer Flying Robots Are Here. What Do We Do Now? Foreign Policy. Retrieved: 19/06/2023. https://foreignpolicy.com/2021/07/05/killer-flying-robots-drones-autonomous-ai-artificial-intelligence-facial-recognition-targets-turkey-libya/.