AI, Automation, and the Ethics of Modern Warfare

Editor’s Note: In a previous post, Palantir’s Privacy and Civil Liberties (PCL) team articulated the case for why AI ethics and efficacy discussions need to move beyond the performative toward operational realities. We suggested that the contextual domains of AI application provide a tangible foothold for delving into the ways that Palantir, and the AI-enabled and -assisted applications we build for our clients, can move this conversation forward.

In this blog post, Palantir’s Global Director of Privacy & Civil Liberties Engineering, Courtney Bowman, and Privacy & Civil Liberties Government and Military Ethics Lead, Peter Austin, explore the ethical role of technology providers in the defense domain. Future posts will explore Palantir’s work supporting defense workflows in the most consequential settings.

Introduction

The past year has reminded us that we live in a world of conflict. To pick the most salient example, the Ukrainian people are heroically and justly defending their homeland. How such a defense has been mounted against what were, by most initial accounts, implausible odds points to a set of questions about the nature of contemporary conflict, the role of technology in both deterring and fighting wars [1], and whether the ethics of the lethal use of force has changed from previous eras and previous wars or remained much the same.

One critical facet of this discussion is the place of Artificial Intelligence (AI) and automation as a decisive differentiator for both deterring and, where necessary, engaging in armed conflict involving adversaries of the United States and its allies and partners around the world.

In this series, we plan to explore the applicability and evolution of the Just War tradition and the Law of War in the domain of modern defense applications. We also assess the corresponding methods that the builders of AI and automation technologies should employ, and the responsibilities and roles they should take on, in incorporating these principles into advanced technologies. Our discussion, in contrast with more speculative conjectures, is derived from Palantir’s direct experience in support of defense forces. We endeavor to provide a straightforward perspective on the art of the possible in using software and other integrated technology components to supplement and enable the expertise and responsibilities of the modern warfighter.

We believe that providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products. This responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes.

To that end and where possible, we intentionally use lay vocabulary that emphasizes — sometimes rather starkly — the more uncomfortable elements of war-fighting. While there is an established jargon that euphemizes elements of this topic, we believe it is important to speak clearly and avoid language that glosses over some of the most central questions of the morality of armed conflict. Where the military literature may refer to “kinetic action,” for example, we refer directly to live warfare with lethal force (recognizing that there may be some actions with lethal implications that are not kinetic, and some uses of kinetic action that are not lethal). [2]

Mission and Morality

As a mission-driven technology company, Palantir was founded on the belief that software can provide the decisive difference in enabling our most consequential institutions to address the challenges of the modern world. We believe that when we can make a difference in the service of a just cause, such as in the defense of Ukraine, we carry a moral responsibility to do so. And so, we are proud to provide our technical experience and technology to Ukrainian forces defending their homeland, national sovereignty, and personal freedoms.

Doing the job well, however, entails more than enabling our defense partners to better pursue their military ends. Insofar as wars must be fought, they must be fought in ways that are ethically defensible. Similarly, in times of peace, the development of ethically defensible technologies remains equally, if not more, important to preserving a competitive advantage through which armed conflict is deterred and peace is preserved.

Palantir’s mission to defend liberal democracies and their allies and partners [3] includes respecting those values, laws, and traditions that make liberal democracies and open societies worth defending. In the military context, this means supporting our defense partners in their commitments to operate within legal and ethical norms. These norms are established in the tradition and theory of Just War as principles for initiating warfare (jus ad bellum) and principles for just conduct among warring parties (jus in bello) [4]. The norms are further codified in the Law of War as the set of international rules that lay out what can and cannot be done during an armed conflict [5].

New Technologies Meet Old Traditions

The introduction of novel technologies such as AI and automation into military decision-making generally, and into strategic engagement modeling and tactical targeting decisions specifically, carries serious implications for how the Just War tradition is to be adapted to and interpreted within the complexities of the modern battlefield.

In an environment in which digital information is arguably as important as artillery and materiel, a host of new challenges emerge. These include questions about the military’s capacity for processing reams of sensor data; how information sources are to be protected against misinformation and other adversarial attacks; how these inputs are to be validated in a timely and complete manner; how qualitatively useful and reliable information is analyzed; how analytical work products are fed into the decisioning workflows of commanders who must determine a course of action; and how well the military can understand the impact of its actions on combatants and non-combatants alike.

Military decision-makers, therefore, will commensurately require tools that enable them to more effectively achieve their military objectives — inclusive of their legal and moral obligations — and to quickly make decisions between alternative courses of action with as much context as possible, all while keeping their personnel safe.

To be sure, the Just War tradition and the Law of War will (and should) continue to provide critical guiding considerations and principles, such as the requirement to distinguish between combatant and non-combatant, but may also need to be further interpreted to deal with the novel aspects of decision-making for modern deterrence and warfighting that AI and automation introduce. For example, AI that is used to assist in the process of identifying military targets (human or otherwise) will raise a host of tactical questions about the capacity to discern legitimate from illegitimate targets; the accuracy and reliability of assessments derived from that information; and the culpability of information systems’ designers, users, and supervisors in relying on the systems’ outputs to inform decisions involving the use of force. At the more strategic level, AI and automation present new challenges in relation to the coordination of manned and unmanned ground, air, and space assets; monitoring and responding to large scale misinformation campaigns through mainstream and alternative media; complex cyber operations executing sophisticated financing schemes; supply chain disruptions and trade blockades; and the impact of natural disasters and climate change.

In some cases, this increased role of AI and automation technologies will plainly translate into commanders having increased capability to understand the impact of their actions on potential non-combatants. However, in cases where the question of accountability for decision-making becomes more complicated, technology providers will need to further explore the ways in which new technologies will inform or displace traditionally human-driven aspects of those decisions and subsequent actions. For example, within the context of targeting activities, tools that facilitate the use of digital information and augment decision-making capacities may also become critical components for assessing collateral damage, understanding patterns of civilian harm, and overseeing the lawfulness of command-level decisions.

These questions, and considerably more like them, point to the ways that Just War theory and the Law of War bind the current predicament to a tradition and set of obligations worth preserving, while also introducing strains in the application of those principles. Updated ethical constructs are therefore needed to address these novel considerations.

Progress Toward the Military Ethics of AI

In recent years, the US Department of Defense (DoD) has made efforts to develop a robust ethical framework for addressing attendant issues related to AI and automation, including:

  • October 2019, Defense Innovation Board releases AI Ethics Principles Whitepaper — The culmination of a 12-month study conducted by the Defense Innovation Board — an independent advisory body to the Secretary of Defense, Deputy Secretary of Defense, and other senior leaders across the DoD — the publication addressed the question of why a military AI ethics articulation is needed. The Board proposed five principles and twelve concrete, though non-binding, recommendations for enshrining those principles in DoD practice, covering the major areas of Responsible, Equitable, Traceable, Reliable, and Governable AI capabilities.
  • February 2020, DoD adopts Ethical Principles for AI — Following the release of the Defense Innovation Board whitepaper, the DoD officially adopted a set of ethical principles, based on the Board’s recommendations, to guide the testing, fielding, and scaling of AI-enabled capabilities across the DoD.
  • March 2020, DoD integrates Ethical Principles for AI into commercial prototyping and acquisition programs — Initiated through the Defense Innovation Unit (DIU) — a DoD organization focused on fielding and scaling commercial technology across the U.S. military — the strategic initiative reflects the DoD’s efforts to operationalize the proposed AI principles within commercial acquisition programs.
  • May 2021, Deputy Secretary of Defense issues memorandum calling for “Implementing Responsible AI in the Department of Defense” — The memorandum established and directed the DoD’s “holistic, integrated, and disciplined approach for Responsible AI (RAI),” calling for implementation of RAI in accordance with six foundational tenets: RAI Governance, Warfighter Trust, AI Product and Acquisition Lifecycle, Requirements Validation, Responsible AI Ecosystem, and AI Workforce.
  • November 2021, DoD publishes a set of Responsible AI Guidelines — Efforts to integrate AI ethical principles are further developed with the release of the DIU’s “Responsible AI Guidelines in Practice.” The publication consists of specific questions that should be addressed at each phase in the AI lifecycle (planning, development, and deployment) to help ensure AI programs align with the DoD’s Ethical Principles for AI. The document is framed not as formal requirements, but rather guidance to serve as a “starting point for defining a process by which the DoD’s Ethical Principles for AI can be operationalized on acquisition programs.”
  • June 2022, DoD releases its Responsible Artificial Intelligence Strategy and Implementation Pathway — Prepared by the DoD Responsible AI Working Council, the document outlines a functional path forward for operationalizing the DoD’s AI ethical principles and advancing Responsible AI (RAI) more broadly through six lines of effort, including modernizing governance structures, achieving standard levels of technology proficiency, and cultivating appropriate levels of care and risk mitigation in AI product and acquisition lifecycles.
  • January 2023, DoD updates Directive 3000.09 ‘Autonomy in Weapon Systems’ — This update to the earlier November 2012 version of the Directive provides additional specification of the process that would need to be followed were the DoD ever to field autonomous weapon systems outside of the very limited contexts of non-lethal or static, onboard, and/or networked defense of installations, platforms, and unmanned vehicles or vessels, including ensuring consistency with the DoD AI Ethical Principles and the Law of War.

With each of these measures, the DoD signaled a clear set of expectations around the need for ethics to be deeply embedded within future uses of AI technologies and in the developing culture of military personnel, programs, and private sector contractors. While we provide a detailed overview of this evolution in the US, importantly, the US is not alone in these efforts. For instance, in June 2022 the UK Ministry of Defence published its “Defence Artificial Intelligence Strategy,” which highlights the importance of norms, values, and best practices when applying AI tools to defense. The recent REAIM conference, hosted by the government of the Netherlands and in which we took part, brought together stakeholders from around the world to focus on these critical issues on a global stage.

However, as we’ve previously elaborated, despite these efforts in recent years to develop a robust ethical framework for AI and automation, the biggest challenge remains moving beyond the declarative to specific actions that operationalize and implement ethical considerations in the real world.

Ethical Responsibilities of Software Providers

Fielding tools that have been developed in accordance with ethically responsible AI and automation best practices is not at odds with what is most effective — to the contrary, ethical implementation practices are often indispensable to the tools’ effectiveness and trustworthiness.

Palantir is aligned here with our partners in the defense and national security space: core to the mission of any military decision-maker is fulfilling their obligations under the Law of War, while also effectively achieving their objective. If we only focus on one or the other — either a reductive and simplistic approach to military effectiveness, or the consideration of ethical goals divorced from vital security and defense objectives — we will fail at both.

As a company, so much of Palantir’s effort is predicated on both building the technology that powers institutions critical to society and building it in such a way that reinforces responsible, defensible applications. Just as we believe that the best way to support critical intelligence missions is through technology built to protect privacy and civil liberties, we accept and endorse the dual commitments of protecting and preserving ethical imperatives while also giving warfighters the best tools to execute on their mission objectives. These two theses should not be treated as mutually exclusive, but rather as dialectically composite: essential to upholding our moral standing, supporting our customers’ legal and moral obligations, and preserving the world we want to live in.

In the next post in this series, we will delve into more specific details about how Palantir’s work has sought to concretely develop and implement critical ethical considerations into capabilities and defense workflows in the most consequential settings.

Authors

Peter Austin, Privacy and Civil Liberties Government and Military Ethics Lead, Palantir Technologies
Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies

References

[1] Ignatius, David. “How the algorithm tipped the balance in Ukraine.” Washington Post, 19 December 2022, https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/.
[2] In a similar vein, some of our public statements use the rather jarring phrase “kill chain” because the phrase lays bare the stakes involved and thereby engenders a more forthright discussion. “Kill chain” is a military concept that refers to the structure of an attack. Though the term is deployed to describe military actions with both lethal and non-lethal effects (and increasingly has been repurposed in the cyber context to describe the procedure of cyber operators in carrying out attacks on information technology infrastructure), we suggest that its use in relation to the use of force with potentially lethal consequences may help to provide a more plain and direct expression of the realities of warfare.
[3] “We generally do not enter into business with customers or governments whose positions or actions we consider inconsistent with our mission to support Western liberal democracy and its strategic allies.” See, for example, Palantir’s United States Securities and Exchange Commission Form S-1 Registration at https://www.sec.gov/Archives/edgar/data/1321655/000119312520230013/d904406ds1.htm and most recent 10-K filing at https://www.sec.gov/ix?doc=/Archives/edgar/data/1321655/000132165523000011/pltr-20221231.htm.
[4] The Just War tradition dates back at least to the 4th century and the work of St. Augustine, who emphasized war as a last resort after all peaceful options have been considered, but also articulated a set of principles for the just waging of armed conflict, including offensive wars. Contemporary Just War Theory articulates principles and moral justifications for entering into war (jus ad bellum) such as Last Resort, Legitimate Authority, Just Cause, Probability of Success, and Right Intention. It also provides a set of principled considerations for the moral conduct of the war (jus in bello) once belligerents have been engaged, including the principles of Distinction, Proportionality, Military Necessity, and Minimizing Unnecessary Suffering. For an accessible treatment of Just War theory, see Walzer, Michael (1977). Just and Unjust Wars. Basic Books.
[5] Sometimes called “Law of Armed Conflict” or “International Humanitarian Law,” the Law of War is specifically intended to address the circumstances of armed conflict, and comprises both customary international law and treaties. The International Committee of the Red Cross (ICRC) maintains introductory material at https://www.icrc.org/en/war-and-law for readers interested in the underlying treaties and customs, whose concepts are then implemented by state actors. See, for example, the United States Department of Defense’s Law of War Manual at https://dod.defense.gov/Portals/1/Documents/pubs/DoD%20Law%20of%20War%20Manual%20-%20June%202015%20Updated%20Dec%202016.pdf?ver=2016-12-13-172036-190.
