Palantir’s Response to NIST RFI on Artificial Intelligence


As part of the process laid out in the Biden-Harris Administration’s Executive Order on Safe, Secure, and Trustworthy AI, the National Institute of Standards and Technology (NIST) released a Request for Information (RFI) to assist the agency in carrying out its obligations under the Executive Order. Palantir is proud to continue our ongoing contributions to these AI policy discussions and share our recommendations on how NIST can advance the safe, secure, and trustworthy use of AI.

Summary of Response to NIST

In our response to NIST, we highlighted four areas of focus aligned to two of the themes in the RFI: “Developing Guidelines, Standards, and Best Practices for AI Safety and Security” and “Advance Responsible Global Technical Standards for AI Development.” Our response draws from both a body of prior contributions to AI policy and our first-hand experience building foundational software for AI/ML. We encourage interested readers and policymakers to explore our full response, but here we provide a brief summary of the key themes addressed therein.

Foundational Investments in Digital Infrastructure for AI

Based on our 20-plus years of experience, we have observed the importance of grounding AI systems — for both traditional machine learning (ML) applications and for Generative AI, including Large Language Models (LLMs) — with foundational investments in a broader digital infrastructure. As such, we encouraged NIST to incorporate guidance for such digital infrastructures as part of its best practices for safe, secure, and trustworthy AI. We specifically highlighted requirements for building effective and responsible digital infrastructures for AI systems in the areas of access controls, data protection, data governance, multi-stakeholder collaboration, and iterative development for decision support. This recommendation to NIST builds specifically upon our prior recommendations to the Office of Management and Budget as it works to carry out its own obligations under the Executive Order.

Red-teaming and Beyond: T&E strategies for assessing Generative AI use cases

Our response emphasized that a robust testing and evaluation (T&E) process is essential for ensuring that AI systems and models are safe for deployment and operate as intended. From Palantir’s experience building industry-leading capabilities for AI T&E, we recommended that NIST consider a broad spectrum of T&E processes and approaches. While red-teaming has garnered much attention in recent AI policy discourse, it is not the only available methodology, nor is it always the most appropriate or effective way to evaluate and address Generative AI risks. For red-teaming to be most impactful, we advocate considering it in conjunction with a broader risk management framework that draws methods and strategies from a diverse range of T&E techniques, each with its own context-specific strengths and weaknesses. We highlighted four types of T&E strategies — Basic-level T&E Strategies, Advanced-level T&E Strategies, Operational Testing Strategies, and Scenario and Simulation Testing Strategies — that NIST should consider when drafting guidance on T&E of AI systems, especially Generative AI and LLMs.

Privacy-Enhancing Technologies for AI

We commended the Executive Order’s emphasis on using privacy-enhancing technologies to protect privacy and civil liberties. It is for this same reason that Palantir established one of the world’s first Privacy & Civil Liberties Engineering teams more than 10 years ago, specifically to focus on the development and deployment of privacy-protective technologies. Based on our experience building technology to uphold privacy and civil liberties, we advised NIST to adopt a broad definition of privacy-enhancing technologies (PETs) in order to encompass core data protection functions such as data minimization, purpose/use limitation, storage limitation, and management of sensitive data, all of which are highly effective at protecting privacy in practical applications of AI. We have long held that data protection technologies, as part of a basic category of first-order PETs, can be more effective in practice than their more exotic, less proven counterparts. We highlighted several of our own privacy-protective capabilities to demonstrate the effectiveness of these tools in practice, including systems for granular data deletion and purpose-based access controls.
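To make the idea of purpose-based access control concrete, the following is a minimal sketch of the general pattern: each dataset records the purposes for which its use has been authorized, and every access request must declare a purpose that matches. All names and policies here are hypothetical illustrations, not Palantir’s implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dataset:
    """A dataset tagged with the purposes for which its use is authorized."""
    name: str
    authorized_purposes: frozenset

def check_access(dataset: Dataset, requested_purpose: str) -> bool:
    """Allow access only when the declared purpose is on the dataset's
    authorized list — enforcing purpose/use limitation at request time."""
    return requested_purpose in dataset.authorized_purposes

# Hypothetical example: a claims dataset authorized for two purposes only.
claims = Dataset("insurance-claims", frozenset({"fraud-detection", "audit"}))

print(check_access(claims, "fraud-detection"))  # True: purpose is authorized
print(check_access(claims, "marketing"))        # False: use limitation applies
```

Real systems layer this check with user roles, auditing, and data minimization, but the core discipline is the same: access is granted to a declared purpose, not merely to a user.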

How to Create Beneficial and Responsible Global Technical Standards

Lastly, we discussed the need to balance country- and domain-specific regulatory standards with global technical standards that can address shared opportunities and risks transcending national borders. We emphasized the importance of the U.S. — through NIST, the U.S. Artificial Intelligence Safety Institute, and other efforts — acting as a first-mover to lay the groundwork for global technical standards. We believe that the U.S. is well-positioned to take a leadership role in coordinating across international AI regulatory efforts, and can help overcome the challenges of multiple competing — and potentially incompatible — standards. In addition, we included specific methodological suggestions for developing such a global standard, including coordinating on the use of common definitions and technical references, as well as encouraging NIST to create avenues for dialogue with the commercial sector, academia, and civil society.


Our response to NIST builds on our prior responses on critical matters of technology policy to the FTC, NTIA, OSTP, and OMB. We encourage readers interested in learning more about Palantir’s position and perspective in these AI governance discussions to explore our comprehensive collection of AI policy contributions.


Anthony Bak, Head of AI Implementation, Palantir Technologies
Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead, Palantir Technologies
Carmen Jenkins, Solutions Lead, Commerce, Palantir Technologies
Morgan Kaplan, Senior Policy & Communications Lead, Palantir Technologies

Palantir’s Response to NIST RFI on Artificial Intelligence was originally published in Palantir Blog on Medium.