Part of building and deploying privacy-protective technology is also fostering a community of responsibility around its development and use. This idea is central to the mission of Palantir’s Privacy & Civil Liberties Engineering team. It is also a guiding principle that drives us to share our insights and experiences with broader communities of stakeholders, including industry professionals, civil society groups, academics, policymakers, and regulators.
It is for this reason that we have historically sought to engage with public policy issues for which we can draw upon our experiences building world-class privacy-protective enterprise software and partnering with public, private, and non-profit organizations the world over. To point to a handful of examples from the last several years, we have provided comments on the NIST AI Standards Draft Plan for Federal Engagement in Developing Technical Standards and Related Tools, to the U.S. National Security Commission on Artificial Intelligence (NSCAI), on the NIST Proposal for Identifying and Managing Bias in Artificial Intelligence, to the National AI Research Resource (NAIRR) Task Force, and to the OSTP Advancing Privacy-Enhancing Technology (PETs) RFI*, alongside a number of other contributions to open forums, conferences, think tank dialogues, and other public discussions.
In October 2022, Palantir’s Privacy & Civil Liberties Engineering team and Data Protection Office jointly submitted comments to the Federal Trade Commission (FTC) regarding the Proposed Rulemaking on Commercial Surveillance and Data Security (“Commercial Surveillance ANPR, R111004”). Our response reflects Palantir’s long-held position in support of regulation and rulemaking aimed at protecting the privacy interests of consumers. We believe we can most meaningfully contribute to such efforts both through our work and by conveying practical suggestions — based on our direct product and business experiences — for how rulemaking by the FTC can more effectively promote consumer outcomes, while also ensuring sensible and defensible boundaries for private sector technology innovation.
Our response focuses on a subset of questions presented by the FTC. In it, we outline our perspective on a handful of themes and articulate several key recommendations that we encourage the FTC to consider as this process unfolds.
First, we suggest that cost-benefit analysis should weight, as a benefit, the long-term goal of preempting entrenched surveillance practices. Our comments offer supporting arguments for framing the balancing of costs and benefits over a longer time horizon, one that favors lasting societal outcomes over near-term gains. In a related vein, we argue that new trade regulation rules on data security and commercial surveillance, if thoughtfully constructed, can both enable and even enhance innovation by directing it towards the most socially valuable outcomes, including the protection of informational privacy.
Next, we argue for enhanced data security and further suggest avenues which can be used to incentivize security-friendly business practices. We express support for strict enforcement of security standards, particularly those technical and organizational measures that we have seen successfully implemented in practice. We also call for the FTC to consider rules enforced by foreign jurisdictions to both avoid unnecessary regulatory burden and open up international markets.
Our response expresses encouragement for the FTC to pursue data minimization and purpose justification requirements as part of this data security rulemaking. As practical proof points for this class of rulemaking, we explain — again from seasoned experience — that it is indeed technologically possible to implement highly effective approaches to purpose specification and purpose-based access rules in managing and using commercial datasets. We further argue that such requirements can significantly improve consumer data security and will neither hamper innovation nor pose undue administrative burdens. Moreover, we advocate for a tiered regulatory approach: the FTC should enforce general baseline data security requirements in this rulemaking and delegate any more specific requirements to more domain-specific regulations that can better assess the contextual needs of each domain.
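To make the idea of purpose-based access rules concrete, the sketch below shows a deny-by-default access check keyed to declared purposes. It is illustrative only: the `Dataset` class, purpose names, and `can_access` helper are hypothetical, not Palantir product APIs; real platforms attach purposes as dataset metadata and evaluate them in a policy engine.

```python
# Minimal sketch of purpose-based access control (hypothetical names throughout).
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    # Purposes declared when the dataset is collected or integrated.
    allowed_purposes: set = field(default_factory=set)

def can_access(dataset: Dataset, requested_purpose: str) -> bool:
    """Deny by default: access is granted only when the requested purpose
    was explicitly specified for this dataset."""
    return requested_purpose in dataset.allowed_purposes

claims = Dataset("insurance_claims", {"fraud-detection", "claims-processing"})
print(can_access(claims, "fraud-detection"))  # True: declared purpose
print(can_access(claims, "ad-targeting"))     # False: never specified
```

The key design point is that the default answer is "no": a use not specified at collection time is blocked until a purpose is deliberately added, which is what makes purpose specification auditable.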
In favor of data minimization rule-setting, we contend that such rules could substantially improve user privacy without requiring action on consumers’ part. We detail several reasons why we think minimization is among the most effective consumer privacy mechanisms, including how minimization contributes to more effective oversight, how minimization supplements privacy by design, and how minimization prevents data spills. We further explain how rules such as data minimization are simpler to implement and are less reliant on ex post privacy-enhancing technologies that may be promising in experimental environments but remain unproven in live business environments. We also lay out considerations for shaping rules to focus on specifying goals rather than the technical means to achieve those goals, as technology specificity risks mandating the wrong tools and can overlook the layering of different approaches tailored to a firm’s specific privacy threat models.
We offer our encouragement for the FTC to start a Section 18 process to establish baseline security rules, given how widespread security incidents are, how difficult it is for consumers to negotiate security protections, and how substantial the consumer injury such incidents cause. More specifically, we suggest that the FTC could mandate data deletion schedules, the minimization of all sensitive data, the retention of data provenance, and data security governance.
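A data deletion schedule of the kind mentioned above can be reduced to a simple, enforceable check. The sketch below is illustrative only: the categories and retention periods are invented examples, not recommendations or regulatory figures.

```python
# Minimal sketch of a per-category retention schedule (example values only).
from datetime import date, timedelta

RETENTION = {
    "sensitive": timedelta(days=90),      # hypothetical period
    "transactional": timedelta(days=365), # hypothetical period
}

def is_due_for_deletion(category: str, collected_on: date, today: date) -> bool:
    """A record must be deleted once its category's retention period elapses."""
    return today - collected_on > RETENTION[category]

print(is_due_for_deletion("sensitive", date(2022, 1, 1), date(2022, 6, 1)))      # True
print(is_due_for_deletion("transactional", date(2022, 1, 1), date(2022, 6, 1)))  # False
```

Encoding the schedule as data rather than ad hoc logic is what makes it governable: an auditor can inspect one table of periods instead of tracing deletion behavior through application code.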
In response to a commonly referenced concern, we argue that data minimization rulemaking need not adversely affect machine learning (“artificial intelligence”) projects and innovations. We suggest that such complaints are often unfounded because, in reality, sensible data minimization measures tend to promote better-quality data, whose importance to ML efficacy is often understated. In fact, we argue, effective and responsible artificial intelligence frameworks converge upon values that are harmonious with privacy and product safety.
Finally, our response to the FTC offers some explicit suggestions on the implementation of Data Use and Data Retention Certifications frameworks and how they can be tailored and implemented most effectively. We further point to existing models that such standards may be able to draw upon.
We’re proud to be able to share these insights based on an extensive body of practical experiences we’ve built up over nearly two decades of a privacy engineering practice that is central to Palantir’s products, business, and culture.
Immediately below, we provide in full our cover letter to the Commission accompanying our comments. We also encourage interested readers to check out our full response posted here.
RE: Comments to the Federal Trade Commission Regarding Proposed Rulemaking on Commercial Surveillance and Data Security (“Commercial Surveillance ANPR, R111004”)
Dear Commissioners,
Palantir Technologies Inc. (“Palantir”) is a US-based software company with a global presence. We build data platforms that enable public, private, and non-governmental organizations to integrate, analyze, and collaborate on their data in a secure and privacy-protective way. Our vision is a future in which public institutions, commercial enterprises, and non-profit organizations are fully equipped to more effectively and responsibly use their data to carry out their mandates, to deliver value to their customers and constituencies, and to provide critical services to those most in need.
Palantir operates as a data processor with respect to our clients’ data. Unlike many other commercial and technology companies, our business model is not based on the collection, storage, dissemination, or monetization of consumer or citizen data. Rather, our business involves building and deploying software to help some of the most critical organizations around the world make better use of the data they already lawfully possess or access.
As we build and implement technology, we believe that protecting privacy and other civil liberties is essential to that mission. We therefore welcome efforts by the Federal Trade Commission (“FTC” or “Commission”) to establish new trade regulation directed at reducing harmful commercial surveillance and lax data security practices. We seek to contribute to these efforts by sharing some of the lessons that we have learned in our nearly 20-year history about effective Privacy by Design and Default (PbDD) technology practices and about the ways that regulation, whether self- or externally imposed, can help to establish institutional governance and cultures of responsibility around data security, data protection, and informational privacy.
Specifically, we aim to describe some of the principles that we have developed internally as our guidance for building technology that promotes the responsible and value-enhancing use of information assets, including and especially as they relate to advanced data science and analytics techniques. Our experience of working with a broad range of private and public organizations has allowed us to gather unique, pragmatically oriented insights into the challenges of responsible data use and analytics.
We believe that some of these lessons can be a helpful resource for the Commission as it works towards a Trade Regulation Rule on Commercial Surveillance and Data Security. We also believe that our insights may help point the direction to ways that technological innovation can proceed responsibly within the bounds of well-constructed ethical and regulatory constraints. In fact, establishing additional well-constructed rules directing commercial organizations to optimize their technology and data practices for both business outcomes and consumer privacy interests may be a pathway to freeing industry from zero-sum proclivities (i.e., the notion that data utility and commercial value creation must necessarily come at the expense of consumer rights, fair practices, and data privacy).
Given Palantir’s strategic and operational focus on responsible data integration and analytics, our following response to the Advance Notice of Proposed Rulemaking is limited to Questions 24, 26, 31, 32, 35, 45, 46, 47, 48, 49, and 51. The Executive Summary immediately following provides a brief overview of the insights we aim to share throughout our detailed responses to these questions.
We are thankful to the Commission for this opportunity to contribute to the ANPR and we welcome any requests for clarification, as well as further occasions to contribute as the rulemaking process continues.
Sincerely,
Courtney Bowman, Global Director of Privacy and Civil Liberties Engineering, Palantir Technologies
Arnav Jagasia, Privacy and Civil Liberties Engineering Lead, Palantir Technologies
Helena Vrabec, Data Protection and Privacy Lead, Palantir Technologies
*These documents speak as of the date they were written and submitted to the applicable agency, and we undertake no obligation to update such documents.
Palantir’s Response to the FTC on ‘Commercial Surveillance and Data Security’ Rulemaking was originally published in Palantir Blog on Medium.