
How Palantir’s Strategic Privacy Investments Enable Future Customer Success

Building for Tomorrow

Introduction

Palantir’s customers use our software platforms for their most critical challenges — from delivering vaccines to enabling force readiness to building resilient supply chains — and these challenges often require bringing together data with unique sensitivities from a variety of source systems. This is why we have spent the past 20-plus years building tools for security, privacy, and oversight directly into all of our software platforms, and why we established one of the industry’s first Privacy and Civil Liberties Engineering teams to carry out this mission.

For example, granular access controls are a key part of our software infrastructure, enabling administrators to limit access to data within an organization on a strictly need-to-know basis. Data Lineage maintains the provenance of data through the platform by default, ensuring customers can quickly understand where a piece of data came from and how it was transformed.

In the same way, we have been building tools for data protection into Palantir Foundry and Palantir’s Artificial Intelligence Platform (AIP) since the earliest days of these platforms, recognizing that our customers require best-in-class capabilities to process sensitive data. These early investments, in addition to helping our customers enable workflows on sensitive data with confidence, have also strengthened the ability of our Palantir platforms to address emerging challenges, including new regulatory obligations or novel issues arising from the use of generative AI.

In this blog post, we focus on one of those data protection tools: Checkpoints. Checkpoints is just one of many data protection tools that work across Palantir platforms, but it serves as a case study to demonstrate how strategic investments in privacy capabilities produce a platform well-suited to support customers in their current and future challenges. We share why we initially built Checkpoints and how it has been adapted to solve novel problems, enabling our customers to better anticipate and stay in front of challenges to their mission.

Checkpoints: Purpose Justification at Scale

Beyond Binary Access Controls

Foundry and AIP’s access control capabilities enable administrators to granularly specify which sensitive actions each user on the platform is permitted to take. Access controls are a necessary, first-line approach to addressing security, privacy, and use limitation challenges. In some cases, however, controlling the binary decision of whether a user can or cannot take an action doesn’t fully address the nuance of working with sensitive data. Specifically, access controls don’t address the case where a user does have a legitimate reason to take a certain action but needs to exercise caution or provide a justification when doing so.

We recognized that there is a whole class of these workflows where risk could be mitigated by the introduction of configurable purpose-justification prompts. For example, while users may be able to download sensitive data, they might need to ensure their actions are consistent with organizational policies for handling data on their local machines. Or, certain users may be authorized to manage other users’ access controls through Groups or Markings, but they need to record a justification for these changes that can be reviewed later for compliance audits. The platform already creates audit logs, but the ability for users to record context-specific justifications provides a richer understanding of not just what happens on the platform, but why it happens. This added fidelity gives governance and compliance teams the ability to better understand and mitigate privacy risks within their organization, without disabling or restricting legitimate workflows. In addition, these contextual purpose justifications can also help organizations’ privacy, compliance, and governance teams investigate data handling missteps — or even abuses — to better ensure that users are accountable to data protection policies and practices.

Checkpoints in Action

The intent when we started building Checkpoints in 2018 was to develop a framework for collecting and reviewing these purpose justifications. Initially, the primary goal of Checkpoints was to provide a system for purpose justification to help customers meet requirements under data protection regulations, such as GDPR’s “Purpose Limitation” principle or the FIPPs’ “Purpose Specification” principle.

To build a purpose justification system for operational users, we looked to an already common element of user interface design: confirmation dialogs. Consider the humble “empty trash” dialog we see when deleting files from our computers or messages from our email servers.

Recognizing that humans are prone to “accidentally” clicking the wrong button, or might have an “oh no” moment immediately after deleting a file, software designers introduce these dialogs as speed bumps of sorts. They don’t aim to prevent a user from accomplishing their goal; rather, they introduce a small amount of friction that gives the user a chance to reconsider the command they are issuing to the system. While the platform shows these types of opinionated confirmation dialogs for interactions that are intrinsically destructive (like deleting data) or sensitive (such as configuring new network egress policies), Checkpoints gives governance teams the ability to inject customizable dialogs into operational workflows that reflect their organizational policies.

When a user clicks a download button in the platform and that download is covered by a Checkpoint, they can be presented with an organization-configured prompt and justification requirements. Organizations can link to relevant data handling policies, require the user to select a justification from a dropdown or enter a free-text description of their intent, and more. Upon completing these requirements, the user’s action continues, and the user’s justification is saved and made available for administrators to review.
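
To make the shape of such a configuration concrete, here is a minimal, hypothetical sketch of how an organization-configured purpose-justification prompt for a download action might be modeled. The class names, fields, action identifiers, and URL below are illustrative assumptions and do not reflect Palantir’s actual Checkpoints configuration format.

```python
# Illustrative sketch only: hypothetical names, not Palantir's actual Checkpoints API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class JustificationRequirement:
    kind: str                             # e.g. "dropdown", "free_text", "checkbox"
    prompt: str                           # text shown to the user
    options: Optional[List[str]] = None   # dropdown choices, if applicable


@dataclass
class CheckpointConfig:
    action_type: str                      # platform action being gated
    policy_link: str                      # link to the relevant data handling policy
    requirements: List[JustificationRequirement] = field(default_factory=list)


# A governance team might gate dataset downloads along these lines:
download_checkpoint = CheckpointConfig(
    action_type="dataset.download",
    policy_link="https://intranet.example.com/data-handling-policy",
    requirements=[
        JustificationRequirement(
            kind="dropdown",
            prompt="Why are you downloading this data?",
            options=["Regulatory reporting", "Approved analysis", "Other"],
        ),
        JustificationRequirement(
            kind="free_text",
            prompt="Briefly describe your intended use.",
        ),
    ],
)
```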

Checkpoints works as a customizable framework for purpose justification, bringing a user’s experience throughout the platform in line with their organization’s best practices. Today, Checkpoints supports integrations with more than 60 types of sensitive actions across the platform, and new integrations are being added all the time.

While developing Checkpoints, we realized that confirmation dialogs offer a secondary benefit: deliberate friction.[1] Software is normally designed to minimize friction wherever possible, but confirmation dialogs reflect a necessary tension: humans should have ultimate control over the systems they use, yet those systems should also account for the fact that humans can make mistakes. Checkpoints extends that same principle from intrinsically destructive actions to the sensitive, policy-governed actions that organizations care about.

Governance at Scale

While purpose justification prompts and nudges can be helpful on their own, Checkpoints also provides real-time review for administrators on the platform. This lets compliance teams, data owners, governance officers, and other administrators understand which sensitive actions are being taken on the platform and why. Checkpoints captures the rich contextual history around each sensitive action, helping administrators understand what sensitive actions are happening, who is taking them, when and where in the platform they are happening, and, most importantly, why the user took the action, as recorded in their justification response.

Checkpoints scales the impact of data protection and data governance policies in the platform, allowing administrators to gate more than 60 unique types of actions with purpose justification prompts and to start collecting justifications for review within just minutes.
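
As a rough illustration of what such a review surface consumes, the following hypothetical Python sketch models collected justification records and filters them by action type and review window. The record fields and helper function are assumptions for illustration, not the platform’s actual review tooling.

```python
# Illustrative sketch only: a hypothetical in-memory model of justification review.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List


@dataclass
class JustificationRecord:
    user: str
    action_type: str
    justification: str
    submitted_at: datetime


def recent_justifications(records: List[JustificationRecord],
                          action_type: str,
                          window: timedelta) -> List[JustificationRecord]:
    """Return justifications for one sensitive action type within a review window."""
    cutoff = datetime.now(timezone.utc) - window
    return [r for r in records
            if r.action_type == action_type and r.submitted_at >= cutoff]


records = [
    JustificationRecord("a.analyst", "dataset.download",
                        "Quarterly regulatory report", datetime.now(timezone.utc)),
    JustificationRecord("b.engineer", "marking.grant",
                        "Onboarding a new team member",
                        datetime.now(timezone.utc) - timedelta(days=3)),
]

# A governance reviewer might pull the last day of download justifications:
for record in recent_justifications(records, "dataset.download", timedelta(days=1)):
    print(record.user, record.justification)
```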

Beyond Data Protection

Our early investments in building Checkpoints as an extensible framework for purpose justification have enabled us to solve novel challenges beyond the context of data protection. Let’s take a look at two case studies where our customers use Checkpoints to address other kinds of emerging challenges:

Case Study: Change Management for GxP

Organizations working in Life Sciences often must adhere to “GxP” regulations (Good “x” Practices, where the “x” stands for the relevant domain, such as Manufacturing, Laboratory, or Clinical practice) to ensure they are following best practices for quality and safety. One such set of requirements comes from 21 CFR Part 11, whose e-signature and e-record provisions require employees to provide an e-signature whenever they create data, modify data, or perform highly controlled activities.

Building a platform that can meet 21 CFR Part 11 requirements poses novel technical challenges for a few reasons. First, users need to provide an e-signature through specific identity-based controls (like re-authentication) within their software platform, which requires integration with a user’s authentication provider (typically, a Single Sign On (SSO) flow). Second, this authentication or re-authentication needs to be triggered whenever a user creates or modifies data in the platform. This includes version control workflows when writing code (merging pull requests), executing logic to build datasets (data transforms), and creating a schedule for running those builds, among other consequential actions.

In AIP and Foundry, Checkpoints is used by our customers to satisfy their Part 11 e-signature and e-record requirements. Checkpoints already integrates with over 60 unique actions in the platform, including those for creating and modifying data, and we rolled out new Checkpoints integrations for Schedules and Pipelines to make sure we covered the actions relevant to these GxP e-signature requirements. To support the e-signature workflow, we also extended Checkpoints justification types beyond check-box, free-text, and dropdown to include re-authentication as a form of purpose justification. Today, a governance team can configure checkpoints to prompt users for re-authentication for any of the actions in the platform that might require an e-signature, and we continue to extend Checkpoints to cover new actions as the platform evolves.

A Checkpoint asking for re-authentication before merging new code into the main branch.
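
To sketch what this might look like, the hypothetical Python snippet below models a re-authentication requirement sitting alongside other justification types for actions a GxP team might treat as requiring an e-signature. The class names, action identifiers, and fields are illustrative assumptions, not Palantir’s actual configuration format.

```python
# Illustrative sketch only: hypothetical e-signature checkpoint configuration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReAuthenticationRequirement:
    prompt: str                      # shown before the SSO re-authentication flow
    identity_provider: str = "sso"   # assumed single sign-on integration


@dataclass
class ESignatureCheckpoint:
    action_type: str
    requirements: List[ReAuthenticationRequirement] = field(default_factory=list)


# Gate merging code into the main branch and creating build schedules,
# two of the actions a GxP team might treat as requiring an e-signature:
gxp_checkpoints = [
    ESignatureCheckpoint(
        action_type="code.merge_pull_request",
        requirements=[ReAuthenticationRequirement(
            prompt="Re-authenticate to sign this change (21 CFR Part 11).")],
    ),
    ESignatureCheckpoint(
        action_type="schedule.create",
        requirements=[ReAuthenticationRequirement(
            prompt="Re-authenticate to sign this new build schedule.")],
    ),
]
```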

Reusing the Checkpoints framework also supports data governance workflows beyond the e-signature requirement. For example, Checkpoints satisfies the e-record requirements in 21 CFR Part 11 to better ensure oversight and compliance, as it was purpose-built to support compliance, privacy, and governance teams in reviewing purpose justifications, and the sensitive actions they correspond to, in real time.

To learn more about how Checkpoints satisfies these GxP requirements, including further information about 21 CFR Part 11 and ALCOA+ principles, please see our full GxP addendum.

Case Study: Human-in-the-Loop for Responsible Human/AI Teaming

Organizations are increasingly using generative AI-based logic and AI agents to suggest, recommend, and take actions in their operational workflows. To enable effective Human + AI teaming, domain experts need to be able to hand work off to and from AI models easily. In practice, this often means using generative AI or AI agents to stage edits to data, make recommendations, and propose suggestions. But it’s often too easy for users to fall into the trap of automation bias and accept an AI suggestion without understanding its full impact. AI governance teams are also faced with the related challenge of overseeing all AI-enabled decision-making across their organizations. Solving these twin challenges — keeping a human “in the loop” and enabling AI governance at scale — is critical for organizations as they look to automate more business processes with AI.

In these AI-enabled settings, Checkpoints can help address automation bias and provide governance teams with a scalable way to oversee Human + AI teaming. Operational users carry out decisions, including those suggested by AI, through Actions in the Ontology. Checkpoints natively integrates with Actions through the “Submit Action” checkpoint type, allowing governance teams to configure a Checkpoint for a specific Action. This embeds the Checkpoint prompt within the user interface for submitting an action, wherever that action might be invoked across applications. This can inject reminders, nudges, and requests for justification or explanation directly into operational workflows in a centralized manner, without requiring application developers to configure new prompts for each application where an Action might be used. Importantly, unlike other parameters collected in Action submissions, Checkpoint justifications are always expected to be manually specified by the user and cannot be autofilled with the output of an AI system. Moreover, governance teams can review the submissions of the Checkpoint and the related information about the Action invoked in real-time.
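
The hypothetical Python sketch below illustrates the property described above: a Checkpoint justification attached to an Action submission must come from the human user and cannot simply be an AI-generated value passed through. The types, checks, and names are illustrative assumptions, not the Ontology’s actual Action API.

```python
# Illustrative sketch only: hypothetical enforcement of human-entered justifications.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ActionSubmission:
    action_type: str
    parameters: Dict[str, str]                       # may be pre-filled by an AI suggestion
    checkpoint_justification: Optional[str] = None   # must come from the human user


def submit_action(submission: ActionSubmission,
                  ai_suggested_parameters: Dict[str, str]) -> None:
    """Reject submissions whose justification was not manually provided by a person."""
    if not submission.checkpoint_justification:
        raise ValueError("A human-entered justification is required by the checkpoint.")
    if submission.checkpoint_justification in ai_suggested_parameters.values():
        # Guard against an AI output being pasted through as the justification.
        raise ValueError("Checkpoint justifications cannot be autofilled by the AI system.")
    print(f"Submitted {submission.action_type} with justification: "
          f"{submission.checkpoint_justification!r}")


# An AI agent stages the edit; the quality manager reviews and explains the approval:
suggested = {"batch_id": "B-1042", "disposition": "release"}
submit_action(
    ActionSubmission(
        action_type="approve_batch_release",
        parameters=suggested,
        checkpoint_justification="Temperature excursion reviewed; within validated range.",
    ),
    ai_suggested_parameters=suggested,
)
```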

This kind of in-workflow prompt for human sign-off or justification — especially when applied to AI-suggested decisions — can be helpful in a variety of contexts. For example, consider a pharmaceutical company that uses AI to monitor real-time sensor data during drug manufacturing and flag temperature anomalies in certain batches to alert a quality manager. The quality manager can then review the information about that drug batch before approving or rejecting its release, recording their justification and findings in a Checkpoint. This ensures human verification of critical AI-enabled decisions while maintaining auditability and compliance with Good Manufacturing Practices (GMP). Or consider an automotive manufacturer that uses AI to analyze sensor data and quality control metrics to clear a batch of engines for shipment from its assembly line. The manufacturer might add a Checkpoint on the action to approve the batch of engines for shipment, reminding a quality supervisor to review the quality metrics and confirm the AI-suggested action should proceed. As organizations across industries integrate AI into their operations, Checkpoints can serve as a centrally managed framework to ensure both human review and auditability of critical AI-enabled decisions.

A Checkpoint integrated with an approval flow for executing an AI-suggested action.

Conclusion

While Checkpoints started as a tool intended solely for purpose justification, customers have used it for more advanced workflows, such as completing e-signatures and requiring in-workflow justifications for AI-suggested decisions. Checkpoints is just one example of how early investments in products for data protection and privacy pay dividends down the line, enabling our customers to stay ahead of upcoming standards and regulations, even in other domains. We’re committed to building software platforms that not only support our customers’ workflows today but also prepare them for the challenges of tomorrow.

[1] For further reading about this kind of friction, we recommend Paul Ohm and Jonathan Frankle’s Desirable Inefficiency (https://scholarship.law.ufl.edu/flr/vol70/iss4/2/).

