Palantir’s Response to the OSTP National Priorities for Artificial Intelligence RFI

Introduction

The regulation of Artificial Intelligence has become one of the liveliest and most expansive areas of public policy discussion today. In recent days alone, the UK government hosted an AI Safety Summit, in which Palantir's CEO, Dr. Alex Karp, participated.

Across the pond, the White House released an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which builds on a series of overlapping legislative and regulatory initiatives, including the White House Office of Science and Technology Policy (OSTP) Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and the bipartisan Hill gatherings of prominent figures convened to discuss the contours of Senator Schumer's SAFE Innovation Framework.

Earlier in the year, OSTP took a significant step in its AI workstream with the announcement of the Biden-Harris Administration's series of initiatives to advance "Responsible Artificial Intelligence Research, Development, and Deployment." As one component of that series, OSTP issued a Request for Information (RFI) on national priorities for "mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives." On October 30, in parallel with the announcement of the new Executive Order, OSTP made these RFI responses available to the public.

As part of Palantir's continued engagement with issues at the critical intersection of law, regulation, technology, and societal impact, we submitted a response that channels more than 20 years of experience "building technology to uphold and enforce ethical and accountable practices in the use of our software products, including Artificial Intelligence ('AI') enablement tools and platforms."

Response to the Office of Science and Technology Policy RFI

Palantir's response synthesizes and extends a body of prior contributions, including our responses to NTIA's Privacy, Equity, and Civil Rights RFC, NTIA's AI Accountability Policy RFC, and the FTC's Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security. Our comments focus on a few central themes of the AI regulatory discussion.

Protecting rights, safety, and national security

We reiterated our prior recommendation that AI standards aimed at protecting people's rights and safety are likely to be most effective and actionable when they are crafted to be context- and domain-specific, "since the nature and the scale of risks attached to AI technologies will vary profoundly between domains and use cases." We further recommended that risks to people's rights and safety can begin to be mitigated through contextual implementation of a few key measures: mandating model documentation, encouraging the adoption of strong security controls to constrain model misuse, and requiring basic tools for model monitoring and observability, especially in consequential domains of use. We also highlighted the need for responsible AI frameworks to address the structural concerns of reliability, traceability, accountability, human-centric orientation, and scope of AI use.

Expanding on concepts articulated in both the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, we highlighted that mitigating risk in the application of AI technologies is about more than just AI models. It requires opening the aperture of evaluation to consider the fully integrated AI system (not just component AI tools) and the complete, end-to-end model lifecycle (from problem definition through post-production model maintenance). Moreover, we advocated that robust governance of AI systems benefits from multi-disciplinary, multi-stakeholder engagement. We also emphasized the importance of AI development platforms providing critical tools, including version control, branching, and security controls, among other data protections.

Given Palantir's extensive experience working with intelligence and defense institutions, we highlighted several benefits that the responsible, strategic deployment of AI-enabled capabilities can offer US national security, including the potential to strengthen adherence to Just War and International Humanitarian Law (IHL) principles by improving the speed, clarity, and accuracy of battlefield situational awareness. At the same time, these systems may pose risks, such as obfuscating decision-making responsibility, that need to be thoughtfully characterized and mitigated through investment in tools for data governance, transparency, and security. To use AI operationally is to contend with the full operating context and lifecycle of AI systems.

Advancing equity and strengthening civil rights

Our response advocated that ethically defensible AI programs must directly address questions of normative fairness, while acknowledging that such concepts may not fully translate into mathematical measures against which AI outputs can be cleanly evaluated. The process of optimizing AI algorithms involves trade-offs that should be explicitly and intentionally chosen. Similarly, the concept of AI accountability is not one-size-fits-all. Instead, it depends on the specific context in which AI is used, which can often implicate other intrinsic areas of concern or risk, such as privacy, civil liberties, fundamental rights, sustainability, equity, inclusion, and diversity.

We encouraged OSTP to consider pre-existing domestic and international laws, such as the European Union's General Data Protection Regulation (GDPR), whose articulation of individual data subject rights helps establish a clearer expectation of what algorithmic accountability might mean to individuals impacted by these systems.

Promoting economic growth and good jobs

We reaffirmed the principle that AI is generally most effective and responsible when employed to assist and enhance human decision-making rather than replace it. In this vein, our response highlighted the need to prepare the workforce for future applications of AI and to ensure that the rise of AI technology does not disproportionately harm certain vulnerable populations and economic sectors. Part of this approach should include administering safety nets and retraining programs to guard against potential near-term job losses and setbacks as industries adopt AI and the roles of this new labor landscape take shape.

To ensure competition in the AI marketplace, we encouraged government programs to consider commercial sector offerings as a viable alternative to building in-house AI capabilities, especially at the most advanced frontiers of development. However, acquiring commercially available AI capabilities should not expose the government to the risk of the private sector failing to deliver on marketed promises. Instead, the successful integration of AI capabilities should entail access to application environments in which offerings must demonstrate their utility and worth.

Innovating in public services

The use of AI opens numerous avenues for the public sector at large, and the Federal Government in particular, to improve public services. Our response promoted an operational "field-to-learn" approach to AI innovation: a framework that we believe enables technical innovation, as well as accountability, by better exposing technologists, ethicists, policy-makers, and AI users to the most realistic deployment challenges that AI systems will need to face and overcome. Furthermore, because most AI applications depend significantly on data, we emphasized the role of robust data infrastructure (i.e., the systems that help manage data) as an essential but often underappreciated foundation for responsible and accountable AI deployment in support of public service missions.

Conclusion

Readers interested in learning more about how Palantir is contributing to these pivotal discussions are encouraged to read our full response and to explore our prior AI policy contributions, including the specific statements that fed into this RFI response.

Authors

Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead

