Palantir’s Response to NTIA on AI Accountability Policy

Recommendations for Context-Driven, Human-Centered Artificial Intelligence Policy

Introduction

As AI capabilities continue to capture the public imagination and become increasingly embedded in our lives, we face a growing need for technology accountability and policies that reinforce responsible development and use. Palantir is focused on providing software platforms that help to responsibly operationalize AI in its many forms — including generative AI and large language models (LLMs) — and we continually seek opportunities to reinforce our long-held position that technology should both enable critical institutional outcomes and protect the liberties and rights of individuals.

To that end, we welcome opportunities to engage in public discussion around technology policy and to offer AI policy recommendations. As our latest contribution, members of Palantir’s Privacy and Civil Liberties Engineering team and Data Protection Office are pleased to share a summary of our response to the National Telecommunications and Information Administration (“NTIA”) regarding the NTIA’s AI Accountability Policy Request for Comment (“RFC”). Our response provides a number of detailed policy considerations and suggestions that reinforce our belief that AI is most effective and most responsible when employed to assist and enhance human execution and decision-making, rather than to replace them.

“As should be the case with all technologies, the impact of AI should be in elevating humanity, not in undermining, endangering, or replacing it.”

Executive Summary

Our response to the AI Accountability Policy RFC presents our position, perspective, and recommendations for how the NTIA can encourage artificial intelligence (“AI”) accountability.

To briefly summarize the full response, which we encourage readers to explore, we have offered the following suggestions for the NTIA’s further consideration:

AI Accountability Objectives

What is the purpose of AI accountability mechanisms such as certifications, audits, and assessments?

  • AI accountability should always relate to the context of its application and context-specific concerns (historical, social, cultural, ethical, etc.).

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there trade-offs among these goals? To what extent can these inquiries be conducted by a single team or instrument?

  • Accountability measures are most practicable when established and evaluated in relation to the specific contexts of use. When certain goals and risks need to be traded against others, trade-off evaluations should be factored in as part of the accountability mechanism.

Can AI accountability mechanisms effectively deal with systemic and/or collective risks of harm, for example, with respect to worker and workplace health and safety, the health and safety of marginalized communities, the democratic process, human autonomy, or emergent risks?

  • To use AI responsibly, it is crucial to understand how AI accountability approaches can meaningfully mitigate certain AI harms, as well as the risks that remain unaddressed through accountability mechanisms.

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?

  • It may be useful to create separate accountability mechanisms for generative AI as workflow tools and generative AI as decision analysis or decision-making tools. Accountable deployment of AI must address broader systems-level considerations, and not just accountability for particular components of AI systems.

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?

  • Accountability mechanisms for AI can be ineffective if they are built on a misunderstanding of the AI system technology, capabilities, and limitations; do not clearly specify who bears responsibility; overly rely on ex-post measures; or are overly focused on technical considerations at the expense of social and ethical implications of AI.

Existing Resources and Models

Which non-U.S. or U.S. (federal, state, or local) laws and regulations that already require an AI audit, assessment, or other accountability mechanism are most useful and why? Which are least useful and why?

  • The General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the forthcoming EU AI Act, and frameworks from public agencies like the National Institute of Standards and Technology (NIST) contain useful mechanisms for ensuring accountability in AI systems and can serve as models for further AI accountability measures.

Accountability Subjects

Where in the [AI] value chain should accountability efforts focus?

  • AI accountability is important throughout the entire lifecycle of an AI model, and different approaches can be taken to promote AI accountability at each step of the lifecycle. The design of a model lifecycle shapes the way that users interact with the model, which in turn informs how users apply AI accountability measures in practice.

How can accountability efforts at different points in the [AI] value chain best be coordinated and communicated?

  • Organizations can coordinate and communicate AI accountability efforts throughout the value chain by organizing the application of AI around a specific problem and collaboratively engaging in standardized testing and evaluation procedures.

Since the effects and performance of an AI system will depend on the context in which it is deployed, how can accountability measures accommodate unknowns about ultimate downstream implementation?

  • Sharing and maintaining context throughout the entire model lifecycle is one of the key challenges of deploying AI systems in practice. Any solution should center around improving context sharing and reducing information asymmetry throughout the entire model lifecycle.

Should AI accountability mechanisms focus narrowly on the technical characteristics of a defined model and relevant data? Or should they feature other aspects of the socio-technical system, including the system in which the AI is embedded? When is the narrower scope better and when is the broader better? How can the scope and limitations of the accountability mechanism be effectively communicated to outside stakeholders?

  • As described in our approach to AI Ethics, AI accountability mechanisms should focus on the entire socio-technical system.

How should AI audits or assessments be timed? At what stage of design, development, and deployment should they take place to provide meaningful accountability?

  • In order for AI audits and assessments to be most helpful, they should be included throughout the model lifecycle — before model deployment, during deployment, and after a model is recalled.

Barriers to Effective Accountability

Is the lack of a general federal data protection or privacy law a barrier to effective AI accountability? Is the lack of a federal law focused on AI systems a barrier to effective AI accountability?

  • The lack of federal AI regulation may be a barrier to effective AI accountability since there are few state-level mechanisms that could act as a substitute. Harmonizing existing AI regulations and frameworks at the federal level could improve interoperability and reduce the cost of compliance.

What is the role of intellectual property rights, terms of service, contractual obligations, or other legal entitlements in fostering or impeding a robust AI accountability ecosystem? For example, do nondisclosure agreements or trade secret protections impede the assessment or audit of AI systems and processes? If so, what legal or policy developments are needed to ensure an effective accountability framework?

  • Regulations around terms of service, copyright, and trade secrets will have a significant impact on AI accountability. It is crucial to understand the implications of such regulations on incentivizing innovation, promoting AI safety, and encouraging transparency, as well as how they intersect with other regulatory and self-regulatory accountability mechanisms.

AI Accountability Policies

What role should government policy have, if any, in the AI accountability ecosystem? For example: Should AI accountability policies and/or regulation be sectoral or horizontal, or some combination of the two? Should AI accountability regulation, if any, focus on inputs to audits or assessments (e.g., documentation, data management, testing and validation), on increasing access to AI systems for auditors and researchers, on mandating accountability measures, and/or on some other aspect of the accountability ecosystem?

  • While general principles could reasonably be established to cut across sectors and span most applications of AI, meaningful AI accountability will likely require standards and regulation that are established closer to the context of use (i.e., on a per-industry basis).

What specific activities should government fund to advance a strong AI accountability ecosystem?

  • Funding responsibly constructed, “field-to-learn” experiments is an effective way to expose technologists, ethicists, policymakers, and AI users to the specific challenges of AI deployment and use; that exposure is necessary to create and test accountability mechanisms that address the operational, real-world challenges of AI technologies.

What kinds of incentives should government explore to promote the use of AI accountability measures?

  • Incentives should encourage producers of AI systems and tools to make clear and grounded representations of their AI products, and to be held accountable to those representations.

Is it important that there be uniformity of AI accountability requirements and/or practices across the United States? Across global jurisdictions? If so, is it important only within a sector or across sectors? What is the best way to achieve it? Alternatively, is harmonization or interoperability sufficient and what is the best way to achieve that?

  • Uniformity of AI accountability requirements across jurisdictions could contribute to a regulatory environment that is less costly and more effective. On the other hand, it may be most effective to tailor accountability frameworks to context-specific sectoral requirements, as opposed to creating general-purpose cross-sectoral requirements.

Conclusion

Our response to this request for comment is based on Palantir’s 20 years of experience building technology to uphold and enforce ethical and accountable practices in the use of our software products, including tools and platforms that enable the responsible use of AI. We are grateful to the NTIA for the opportunity to contribute to this important policy discussion and welcome further engagement with government, industry, civil society, and individuals whose lives stand to be most impacted by AI technologies.

Authors

Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead, Palantir Technologies


Palantir’s Response to NTIA on AI Accountability Policy was originally published on the Palantir Blog on Medium.