
Palantir’s Response to OMB on Privacy Impact Assessments

Editor’s Note: This blog post highlights Palantir’s response to a Request for Information pursuant to the 2023 Executive Order on Safe, Secure, and Trustworthy AI. For more information about Palantir’s contributions to AI Policy, visit our website here.

Introduction

At Palantir, we believe that protecting privacy and civil liberties is essential for the responsible and effective use of technology. This commitment to privacy and civil liberties is demonstrated not only in the capabilities of our software platform, but also in our company culture. In 2010, we established the world’s first Privacy and Civil Liberties (PCL) Engineering team, specifically to focus on building privacy-protective technology and fostering a culture of responsibility around its development and use. Product development is an essential component of the PCL mission, but we recognize that protecting privacy cannot be solved with technology alone.

The way in which an organization uses technology is just as important as the privacy-mitigation tools the technology might offer. For this reason, we were proud to respond to the Office of Management and Budget’s (OMB) Request for Information (RFI) on how Privacy Impact Assessments (PIAs), a tool frequently used by US Government agencies to evaluate privacy risks and mitigations, can be made more effective, especially in relation to the increased adoption of AI technologies. The RFI was issued pursuant to the 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI), and our response builds upon ideas and suggestions we have shared with NIST, OMB, and OSTP as they carry out their own assignments under the Executive Order, as well as on preceding efforts by NTIA and FTC to understand similar privacy risks. A full list of Palantir’s AI Policy contributions can be found here.

Our response to this RFI on PIAs is based on more than two decades of experience helping our customers carry out their most critical missions while upholding principles of privacy and civil liberties. Specifically, we share our perspective on where PIAs can serve as a key instrument of privacy protection, how they can be improved, and how technical approaches for privacy can — and should — be coupled with PIAs, especially as organizations seek to adopt more advanced technology, including AI. In this blog post, we share more about the perspective we offered OMB. You can find our full response to the RFI here.

Improvements to OMB Guidance on PIAs

Our response included several focused suggestions on how to improve PIAs:

  • We recommended that OMB provide guidance on what resources an information technology provider can or should provide to a government agency conducting a PIA when the agency procures or develops a new information technology system. As we mention in our response, commercial technology providers are often best positioned to articulate the technical details of privacy-protective tools and of the configurable capabilities that support risk-mitigation measures. Clearer guidance from OMB can help commercial entities provide more direct information and support to client agencies seeking clarifying details or technical assistance in conducting PIAs.
  • We recommended that OMB provide baseline guidance or requirements for digital infrastructure used to collect, store, or process new sources of personally identifiable information (PII). Best practices, to which agencies can demonstrate adherence in a PIA, can help agencies maintain a high standard of digital security and data protection rather than falling back on openly available collection tools, such as free-to-use survey tools or internet forms, that may not meet high security and privacy standards.
  • We recommended OMB consider additional triggering criteria for PIAs, including when an organization engages in cross-agency sharing of PII.

In addition to these recommendations, we underscored that PIAs are a useful source of information for third parties working with government agencies, offering guidance on how agencies implement privacy protections in software and other supporting technology applications. From our own experience, we know that PIAs can help commercial providers understand how agencies have sought to deal with privacy risks in the past and how such precedents might inform an improved understanding of institutional data and systems governance best practices going forward. To strengthen the crucial role PIAs play in promoting transparency, we recommended that PIAs contain accessible metadata and be indexed in a manner that allows structured searching through this metadata. Furthermore, we encouraged OMB to promote version-control standards for PIAs, so individuals and organizations can better assess how agencies’ PIAs change over time.
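
To make this concrete, here is a minimal sketch, in Python, of what a searchable, versioned PIA metadata record could look like. The schema and field names (agency, pii_categories, supersedes, and so on) are illustrative assumptions on our part, not an existing OMB standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PIARecord:
    """One published version of a Privacy Impact Assessment."""
    agency: str                    # publishing agency
    system_name: str               # information system the PIA covers
    version: int                   # increments with each revision
    published: date                # publication date of this version
    pii_categories: list[str]      # kinds of PII collected
    supersedes: int | None = None  # prior version number, if any

def diff_pii_scope(old: PIARecord, new: PIARecord) -> dict[str, set[str]]:
    """Summarize how the collected PII changed between two PIA versions."""
    before, after = set(old.pii_categories), set(new.pii_categories)
    return {"added": after - before, "removed": before - after}
```

Indexing records like this would let a reader query, for example, every PIA whose PII scope expanded between two versions, directly supporting the transparency goals described above.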

Privacy Risks Associated with AI and other Advanced Technologies

We recognize that there is a growing set of privacy risks specific to the training, evaluation, and use of AI and AI-enabled systems, including misuse of personal information, inference of personal characteristics, and model leakage of sensitive data. We proposed several recommendations for how OMB can improve its guidance to mitigate these privacy risks. For example, we suggested that OMB recommend agencies carefully evaluate not only the source of the data used for AI systems, but also whether that data is actually necessary for the AI system.

Building on our previous responses to NIST’s RFI on AI and to OMB’s draft memorandum on AI governance, we also re-emphasized the importance of Testing and Evaluation (T&E) of AI systems. Moreover, as developers of privacy-protective technology, we understand the importance of capabilities for deletion, access controls, and audit logging, among other data protection and oversight features, and we recommended that OMB consider sharing guidance on technical requirements to assist agencies in mitigating privacy harms in digital software platforms.
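
As a rough illustration of the kind of capabilities we mean, the following Python sketch gates a deletion behind an access-control check and records the attempt in an audit log. The role name, log format, and storage model are hypothetical assumptions made for illustration, not a description of our platform or of any OMB requirement:

```python
import logging
from datetime import datetime, timezone

# Hypothetical sketch: the "data_steward" role and the log format are
# illustrative assumptions, not a description of any real system.
audit_log = logging.getLogger("audit")

def delete_record(store: dict, record_id: str, user: str, roles: set[str]) -> bool:
    """Delete a record only if the user holds the required role; audit either way."""
    allowed = "data_steward" in roles
    audit_log.info(
        "action=delete record=%s user=%s allowed=%s at=%s",
        record_id, user, allowed, datetime.now(timezone.utc).isoformat(),
    )
    if allowed:
        store.pop(record_id, None)  # deletion is a no-op if already gone
    return allowed
```

The point of the sketch is that the audit entry is written before the permission decision takes effect, so denied attempts leave a trace for oversight just as successful deletions do.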

We also emphasized that these privacy concerns are not unique to AI. Rather, rights-protective capabilities apply to a broad set of data management and analytics applications, including more advanced emerging technologies.

Beyond PIAs

We also urged OMB not to consider privacy in isolation. Rather, as a possible alternative to solely privacy-focused impact assessments, we proposed treating privacy as one risk category among a multitude of other risk considerations (e.g., security, data protection, human rights, and AI accountability). As such, we recommended that OMB consider a more comprehensive risk impact assessment (CRIA) framework, not only to consolidate various impact assessment efforts, but also to better target areas of risk that lie at the intersection of various fundamental rights concerns.
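
To sketch what a consolidated assessment record might look like in practice, here is a minimal Python illustration in which each finding is tagged with every risk category it touches, so cross-cutting risks surface naturally. The structure is our own assumption, offered only to make the idea concrete:

```python
from dataclasses import dataclass

# Risk categories mirroring the examples above; the record structure
# itself is a hypothetical illustration, not a proposed OMB standard.
CATEGORIES = {"privacy", "security", "data protection",
              "human rights", "AI accountability"}

@dataclass
class RiskFinding:
    description: str
    categories: set[str]  # every risk area the finding touches
    severity: int         # e.g., 1 (low) through 5 (high)

def intersectional(findings: list[RiskFinding]) -> list[RiskFinding]:
    """Surface findings that cut across two or more risk categories."""
    return [f for f in findings if len(f.categories & CATEGORIES) >= 2]
```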

Conclusion

Our response to OMB on PIAs underscores our long-standing commitment to privacy and civil liberties, not only in the capabilities we offer in our software platforms for data protection, privacy, and security, but also in our support of our customers as they seek both to protect privacy and to tackle their most pressing problems. We are proud to share our insights and perspective with OMB, and we look forward to continued engagement with OMB and other US Government agencies as they carry out obligations under the AI Executive Order. We encourage interested readers to check out our full response to OMB on PIAs here and our full list of AI Policy contributions here.

Authors

Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead, Palantir Technologies
Morgan Kaplan, Senior Policy & Communications Lead, Palantir Technologies


