
Evaluating Software (Palantir RFx Blog Series, #0)

This series tackles the rarely simple and often messy solicitation process. We explore how organizations can better evaluate digital transformation software.

Welcome to the RFx Blog Series, which explores the question: how should commercial organizations evaluate digital transformation software?

In this series, we use language commonly found in formal solicitations, including specific questions and functional requirements that can be inserted into a Request for Proposal (RFP). This isn’t because we love RFPs (though we do) or because RFPs are a great way to evaluate software (they’re not) — but because, for all their faults, RFxs are a fixture in the software community. We want to help companies do them better.

The RFx reigns supreme

Though there are many mechanisms commercial companies use to evaluate and acquire software (e.g., pilot, proof of concept, and bake-off), perhaps the most common is the RFx, a formal solicitation process in which an organization defines requirements (in an RFI, RFQ, RFP, etc.) and vendors submit proposals in response. RFxs are often combined with other evaluation mechanisms as part of a broader evaluation process (e.g., RFI → RFP → Pilot).

In theory, the RFx process is simple: the customer lists their requirements in a solicitation for vendors to meet as they are able. In practice, however, the RFx process can be very messy. For starters, it can be difficult to compile all the necessary requirements of a digital transformation project. There are many critical components of a software ecosystem, any one of which could lead to failure if improperly scoped or poorly delivered. Also, most RFxs are structured in ways that make it difficult for evaluators to gain an accurate and nuanced understanding of a vendor’s capabilities. Complex requirements are difficult to articulate in checklist format, and respondents are often asked to squeeze their answers into an arbitrary grading rubric (e.g., yes/no/needs customization), giving them an opportunity to declare themselves fully compliant when the reality is much more complicated.

Common RFx pitfalls

Not all RFxs are created equal. Some common mistakes include:

  • Requirements are vaguely scoped. Overly broad requirements invite unhelpfully broad responses. Instead of asking “How is your solution scalable?”, consider describing specific project needs in terms of number of users, data scale, performance requirements, etc., and ask vendors how their solution can meet those needs.
  • Requirements invite unprovable responses. Many requirements lack accountability mechanisms, posing questions that invite affirmative responses no evaluator can verify. For example, “Does the solution provide enterprise-wide visibility for planning operations?” is unhelpful for evaluation purposes, since every vendor can simply answer “yes.” Questions should instead demand responses that can be demonstrated in concrete ways, such as through past performance, a live software demo, or a pilot.
  • Complex requirements are reduced to a numerical scale. RFxs often grade responses on a numerical scale (e.g., 1 = out of the box; 2 = customization; 3 = planned for future development), with 1 being the preferred response. This rubric can be unhelpful or even detrimental to software evaluation, given that data schemas requiring customization are generally more effective and scalable than out-of-the-box or fixed schemas. (See Blog Post #1 for a deep dive into this specific issue.)
  • Requirements are scoped too narrowly around business needs. A data ecosystem has many critical technical components, and the officials drafting an RFx often have limited insight into them, so requirements might overlook or improperly scope crucial elements related to data modeling, governance, management, security, and other technical aspects, risking project failure.

What makes a good RFx?

Through this RFx blog series, we explore how organizations can better evaluate digital transformation software and how acquisition officials can ask the right questions in the right way to actually solve their identified problems.

This series is informed by Palantir’s experience deploying technology into customer environments around the world over the past two decades. During this time, we have also responded to hundreds of RFIs and RFPs and observed the various tactics vendors employ to gain an upper hand. In subsequent posts, we examine how companies can frame requirements that yield the most useful information for evaluators to assess the merits of a solution, and how they can avoid requirements that may yield unhelpful or misleading responses. Each post focuses on one aspect of the technology stack required to create an effective data ecosystem, starting with the Ontology.

This blog series should be seen as a living document that attempts to address these thorny questions. We hope it results in more effective RFxs across the universe of customers, regardless of their specific mission or place in the digital journey.

Let’s begin

The first post in the Palantir RFx Blog Series is called Ontology. In it, we explore how ontologies unify, standardize, and enhance the data ecosystem: the collection of tools and technologies that enables an organization to achieve more efficient data-driven operations in ways that are secure, transparent, auditable, and defensible.

