Experience-based insights to guide federal agency AI oversight
Introduction
On October 30, 2023, U.S. President Biden signed an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order lays out a comprehensive plan for the Administration’s artificial intelligence (AI) governance agenda, including a series of directives requiring executive agencies to take specific actions to advance these AI policy initiatives. As an early step in this unfolding process, on November 1, 2023, the Office of Management and Budget (OMB) released for public comment a draft policy memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” The primary thrust of this draft is to lay out guidelines and expectations for AI governance structures across federal government agencies.
As part of Palantir’s ongoing effort to contribute to technology policy dialogues of critical importance to public sector institutions and the communities they serve, we submitted a response compiling a set of experience-based insights that we believe will help strengthen OMB’s policy and better steer agencies’ approaches to AI governance and oversight.
Response to the Office of Management and Budget Draft Memo RFC
Palantir’s response synthesizes and extends a body of prior contributions, including our responses to OSTP’s National Priorities for Artificial Intelligence, NTIA’s Privacy, Equity, and Civil Rights RFC, NTIA’s AI Accountability Policy RFC, and FTC’s Advanced Notice of Proposed Rulemaking on Commercial Surveillance and Data Security.
Our comments addressed several important questions raised by the draft memo, questions we believe federal government agencies must competently contend with in building responsible AI programs. In brief summary, we touched on the following:
- We offered practical guidance on effective ways to institute and empower Chief AI Officer (CAIO) roles within agencies, including by exposing CAIOs to field operations.
- We recommended that AI governance be incorporated into existing agency oversight bodies rather than housed in newly constructed governance entities.
- We argued that responsible AI innovation should be pursued through an ensemble of efforts, including: institutionalizing and expanding “field-to-learn” programs; emphasizing the importance of trustworthy data foundations as core AI infrastructure; and encouraging collaboration with stakeholders spanning government, the private sector, academia, non-profits, and civil society.
- With respect to responsibly leveraging generative AI, including Large Language Models (LLMs), we advised implementing strong Testing & Evaluation (T&E) frameworks to help identify the contexts in which LLMs can make the most impact while minimizing attendant risks and supporting operational efficacy, safety, security, and ethics. Specifically, we pointed to a set of demonstrably compelling and defensible LLM use case archetypes, while outlining the failure modes that institutions aspiring to use LLMs commonly fall into.
- We advocated for collapsing the safety-impacting and rights-impacting evaluations of AI into a single “high-risk” assessment, given the frequent overlap between the two categories, the value of minimizing administrative complexity in assessing the merits of AI use cases, and the stronger rights-protective outcomes a combined standard would ensure.
- To refine the practices agencies implement when assessing the risks of AI applications, our response underscored the value of responsible AI guidance that is tangible, operationally focused, and that gives primacy to the protection of relevant rights. At the same time, we encouraged a set of improvements to the current minimum practices, including clarifying the metrics used to concretely assess AI risk, instituting continuous evaluation (as opposed to single-point-in-time assessments), and incorporating data privacy impact into the assessment frameworks. Lastly, we suggested the inclusion of two new practices. The first focused on the need for agencies to plan for releases and rollouts of AI systems to limited environments, as well as for handling upgrades, downgrades, replacements, or fallbacks to non-AI options in the event of AI failure. The second advised that agencies formalize plans for safely discontinuing the use of an AI system if and when such a step becomes necessary.
- Finally, we contributed thoughts on public reporting of agencies’ use of AI in their annual use case inventories. To increase transparency and demonstrate accountability, we suggested that special focus be placed on descriptions of AI use cases and data governance (including critical aspects of data privacy), risk assessment documentation, reviews of audit results, and statements addressing security incidents and corresponding mitigations. Moreover, these public reporting efforts should be treated as responsibilities that extend through the full AI system lifecycle, not just its start.
Conclusion
Readers interested in learning more about how Palantir is contributing to these landmark AI governance discussions are encouraged to read our full response and explore our collection of AI policy contributions.
Authors
Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead