Recommendations for Context-Driven, Human-Centered Artificial Intelligence Policy
As AI capabilities continue to capture the public imagination and become increasingly embedded in our lives, we face a growing need for technology accountability and policies that reinforce responsible development and use. Palantir is focused on providing software platforms that help to responsibly operationalize AI in its many forms — including generative AI and large language models (LLMs) — and we continually seek opportunities to reinforce our long-held position that technology should both enable critical institutional outcomes and protect the liberties and rights of individuals.
To that end, we welcome opportunities to engage in public discussion around technology policy and to offer AI policy recommendations. As our latest contribution, members of Palantir’s Privacy and Civil Liberties Engineering team and Data Protection Office are pleased to share a summary of our response to the National Telecommunications and Information Administration (“NTIA”) regarding the NTIA’s AI Accountability Policy Request for Comment (“RFC”). Our response provides a number of detailed policy considerations and suggestions that reinforce our belief that AI is most effective and most responsible when employed to assist and enhance human execution and decision-making rather than to replace it.
“As should be the case with all technologies, the impact of AI should be in elevating humanity, not in undermining, endangering, or replacing it.”
Our response to the AI Accountability Policy RFC presents our position, perspective, and recommendations for how the NTIA can encourage artificial intelligence (“AI”) accountability.
To briefly summarize the full response, which we encourage readers to explore, we have offered suggestions for the NTIA’s further consideration in response to the following questions from the RFC:
AI Accountability Objectives
What is the purpose of AI accountability mechanisms such as certifications, audits, and assessments?
AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there trade-offs among these goals? To what extent can these inquiries be conducted by a single team or instrument?
Can AI accountability mechanisms effectively deal with systemic and/or collective risks of harm, for example, with respect to worker and workplace health and safety, the health and safety of marginalized communities, the democratic process, human autonomy, or emergent risks?
Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?
Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?
Existing Resources and Models
Which non-U.S. or U.S. (federal, state, or local) laws and regulations already requiring an AI audit, assessment, or other accountability mechanism are most useful and why? Which are least useful and why?
Accountability Subjects
Where in the [AI] value chain should accountability efforts focus?
How can accountability efforts at different points in the [AI] value chain best be coordinated and communicated?
Since the effects and performance of an AI system will depend on the context in which it is deployed, how can accountability measures accommodate unknowns about ultimate downstream implementation?
Should AI accountability mechanisms focus narrowly on the technical characteristics of a defined model and relevant data? Or should they feature other aspects of the socio-technical system, including the system in which the AI is embedded? When is the narrower scope better and when is the broader better? How can the scope and limitations of the accountability mechanism be effectively communicated to outside stakeholders?
How should AI audits or assessments be timed? At what stage of design, development, and deployment should they take place to provide meaningful accountability?
Barriers to Effective Accountability
Is the lack of a general federal data protection or privacy law a barrier to effective AI accountability? Is the lack of a federal law focused on AI systems a barrier to effective AI accountability?
What is the role of intellectual property rights, terms of service, contractual obligations, or other legal entitlements in fostering or impeding a robust AI accountability ecosystem? For example, do nondisclosure agreements or trade secret protections impede the assessment or audit of AI systems and processes? If so, what legal or policy developments are needed to ensure an effective accountability framework?
AI Accountability Policies
What role should government policy have, if any, in the AI accountability ecosystem? For example: Should AI accountability policies and/or regulation be sectoral or horizontal, or some combination of the two? Should AI accountability regulation, if any, focus on inputs to audits or assessments (e.g., documentation, data management, testing and validation), on increasing access to AI systems for auditors and researchers, on mandating accountability measures, and/or on some other aspect of the accountability ecosystem?
What specific activities should government fund to advance a strong AI accountability ecosystem?
What kinds of incentives should government explore to promote the use of AI accountability measures?
Is it important that there be uniformity of AI accountability requirements and/or practices across the United States? Across global jurisdictions? If so, is it important only within a sector or across sectors? What is the best way to achieve it? Alternatively, is harmonization or interoperability sufficient and what is the best way to achieve that?
Our response to this request for comment draws on Palantir’s 20 years of experience building technology that upholds and enforces ethical and accountable practices in the use of our software products, including tools and platforms that enable the responsible use of AI. We are grateful to the NTIA for the opportunity to contribute to this important policy discussion, and we welcome further engagement with government, industry, civil society, and the individuals whose lives stand to be most impacted by AI technologies.
Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead, Palantir Technologies