By Akshay Krishnaswamy & Ted Mabrey
The AI revolution is emerging at a crucial time for the Western world. The geopolitical chessboard is more complex than at any time since the fall of the Berlin Wall; institutions across the public and private sectors face a deepening legitimacy crisis; and as every enterprise fights to execute its essential mission, the pervasive feeling is that the last generation of technology investments has failed to meet its moment.
Whether protecting the nation, producing food, or delivering medical care, the diagnosis is similar: daily operations have exploded in complexity, and technology has not kept pace. Core business systems have ossified into straitjackets that constrain frontline workers. The endless SaaS solutions produced by the Software Industrial Complex have added glitter to the desks of managers while avoiding the friction of operational reality. The last decade of marketing and steak dinners has promised digital transformation and instead delivered fragmentation and parasitic cost models.
With AI, enterprises have a generational opportunity to break these chains. The question is: what must be done differently to produce a worthy revolution?
Given that the goal is to transform the way the enterprise operates, the AI system must revolve around process — not simply data. Every consequential decision-making process involves interconnected stages, hand-off points, and feedback-driven learning. Think of how staff scheduling works in a hospital, how flight routing is performed by an airline, or how production is optimized on a manufacturing line. In these contexts, operational teams are dealing with time-sensitive tasks that require them to reason through different courses of action and execute decisions — often in dynamic situations with multiple stakeholders.
Software that is simply ‘data-centric’ can assist with aggregating information, creating dashboards, and bubbling up the occasional insight. This is managerial technology, in the original parlance of Silicon Valley; it is intended for offline analysis and keeping an eye on the metaphorical scoreboard. It is not designed to power inventory rebalancing, fleet positioning, or any critical frontline activity.
In the age of AI, the prospective value is not incremental; it is about fundamental transformation and the goal of driving automation into every core function. The data-centric architectures that enable ‘chat’ or ‘RAG’ workflows are streamlining information retrieval but fundamentally remain a one-way street of producing visualizations and insights. Infusing core workflows with automation requires an operational architecture where the process is primary and where progressively more AI-driven approaches can be continuously tested, calibrated, and scaled based on human feedback.
A process can only be automated if it is encoded in software.
For a given process, there are phases that involve identifying which tasks require attention — often in situations where there are competing priorities. These feed into more exploratory phases, where different possible courses of action are assessed. Sometimes the ‘decision space’ being explored is governed by a rigid, rules-based model, while in more complex cases, it might involve a blend of multi-step simulations and human judgment. The chosen decision then typically needs to be ‘compiled’ in some manner; i.e., the course of action needs to be verified, given the constraints of the business and the approvals and checkpoints that might be in place.
The final phase is the actual execution of the decision; this can involve updating transactional systems, publishing events to edge systems, or orchestrating a piece of operational logic that drives a production controller or machine on a factory floor. This execution updates the state of the world, which then feeds into the next loop through the process.
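A minimal sketch of how such a loop might be encoded, with the phase names taken from the description above; the dataclasses, stub evaluators, and constraints are illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    priority: int

@dataclass
class CourseOfAction:
    description: str
    score: float
    approved: bool = False

def identify_tasks(world_state: dict) -> list[Task]:
    """Triage phase: surface the work items that need attention, ranked by priority."""
    items = world_state.get("open_items", [])
    return sorted((Task(i["name"], i["priority"]) for i in items),
                  key=lambda t: t.priority, reverse=True)

def explore(task: Task, evaluators: list[Callable[[Task], CourseOfAction]]) -> CourseOfAction:
    """Exploratory phase: score candidate courses of action (rules, simulations, human judgment)."""
    candidates = [evaluate(task) for evaluate in evaluators]
    return max(candidates, key=lambda c: c.score)

def compile_decision(decision: CourseOfAction,
                     constraints: list[Callable[[CourseOfAction], bool]]) -> CourseOfAction:
    """'Compile' phase: verify the chosen course of action against business constraints and approvals."""
    decision.approved = all(check(decision) for check in constraints)
    return decision

def execute(decision: CourseOfAction, world_state: dict) -> dict:
    """Execution phase: write back to transactional or edge systems; the updated state feeds the next loop."""
    if decision.approved:
        world_state.setdefault("executed", []).append(decision.description)
    return world_state

# One pass through the loop; in practice this runs continuously as conditions change.
state = {"open_items": [{"name": "rebalance-inventory", "priority": 2}]}
for task in identify_tasks(state):
    choice = explore(task, [lambda t: CourseOfAction(f"expedite {t.name}", score=0.8)])
    state = execute(compile_decision(choice, [lambda c: c.score > 0.5]), state)
print(state["executed"])  # ['expedite rebalance-inventory']
```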
Many of the elements that make up these process phases are found across disconnected systems.
Disparate data systems contain structured data, streaming data, media files, documents, and other pieces of information that — when properly integrated — provide the context needed by those who are carrying out critical workflows. This means that the AI system needs a full-spectrum approach to data integration, which can fuse all types of data, from any source, into a common model of the operational world. Tooling for virtualizing existing data, monitoring the health of data pipelines, and enforcing granular security policies must work across all types of data, at any scale.
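As a hedged illustration of what fusing disparate sources into a common model can look like, the sketch below joins a structured ERP extract with fields pulled from shift-report documents into one shared "work order" object; the schema and values are invented for the example:

```python
import pandas as pd

# Structured source: e.g., an ERP extract of work orders (hypothetical schema).
work_orders = pd.DataFrame([
    {"order_id": "WO-17", "line": "paint-shop", "status": "blocked"},
    {"order_id": "WO-18", "line": "assembly", "status": "in_progress"},
])

# Unstructured source: fields pulled out of shift-report documents, e.g., by an
# extraction model; hard-coded here to keep the sketch self-contained.
shift_reports = pd.DataFrame([
    {"order_id": "WO-17", "reported_issue": "robot cell 4 fault", "severity": "high"},
])

# Fuse both sources into one shared "work order" object that downstream
# workflows (and AI agents) reason over, instead of two disconnected systems.
fused = work_orders.merge(shift_reports, on="order_id", how="left")
print(fused[["order_id", "line", "status", "reported_issue"]])
```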
Critically, a process is defined by more than data. Logic sources, such as systems containing business rules, machine learning workbenches containing forecasting models, and solvers that run optimizations, provide the elements for powering exploratory reasoning and scenario analysis. Encoding the relevant logic assets into a shared model of the process can mean importing existing code as containers; shifting existing codebases into secure, elastic infrastructure; or dynamically calling these pieces of logic through APIs.
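One way to encode such logic assets, sketched below under assumed names, is to register each piece of existing logic behind a uniform "tool" interface so that it can be invoked the same way whether it runs in a container, in a shared codebase, or behind an API:

```python
from typing import Callable

TOOL_REGISTRY: dict[str, Callable[..., object]] = {}

def register_tool(name: str):
    """Register a piece of existing business logic so workflows and agents can call it by name."""
    def decorator(fn: Callable[..., object]):
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@register_tool("reorder_quantity")
def reorder_quantity(on_hand: int, forecast_demand: int, safety_stock: int) -> int:
    # Stand-in for a real forecasting model or optimizer imported from an
    # existing codebase or called over an API.
    return max(0, forecast_demand + safety_stock - on_hand)

# A workflow (or an agent) resolves the tool by name and calls it with typed inputs.
qty = TOOL_REGISTRY["reorder_quantity"](on_hand=120, forecast_demand=200, safety_stock=40)
print(qty)  # 120
```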
Last but not least, the actions — every ‘verb’ that is paired with every ‘noun’ — must be encoded. This means tracking the inputs into every action that is taken, the different pathways that are evaluated during the reasoning phases, and the consequences of executing the action, including feedback and learning. Concretely, this means the AI system must integrate with the enterprise’s fragmented systems of action (e.g., ERP systems, MES systems, PLMs, edge controllers), and must maintain a durable record of the actions orchestrated across environments.
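A minimal sketch of what that durable record might contain: each action captures its inputs, the alternatives considered, the downstream system that executed it, and (later) its outcome. The field names here are assumptions rather than a specific schema:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class ActionRecord:
    verb: str                      # e.g., "release_work_order", "reassign_shift"
    target: str                    # the 'noun' the action applies to
    inputs: dict                   # data and parameters that fed the decision
    alternatives_considered: list  # other courses of action evaluated
    executed_in: str               # downstream system of action (ERP, MES, edge controller)
    outcome: Optional[str] = None  # filled in later, enabling feedback and learning
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

ACTION_LOG: list = []

def record_action(action: ActionRecord) -> None:
    """Append a queryable entry; in practice this would be a durable, append-only store."""
    ACTION_LOG.append(asdict(action))

record_action(ActionRecord(
    verb="release_work_order",
    target="WO-17",
    inputs={"line": "paint-shop", "blocking_issue": "robot cell 4 fault"},
    alternatives_considered=["hold for maintenance window", "reroute to line 2"],
    executed_in="MES",
))
print(json.dumps(ACTION_LOG[-1], indent=2))
```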
While most operational processes have a blend of elements that can be sourced from different digital systems, they often also have critical elements that are not encoded anywhere but in the minds of operational users.
Hospital staff do not schedule patients by looking at rigid tables in the EMR systems or abiding by a rules engine; those things are inputs of course — but ultimately the decision maker draws from experience, after weighing several options that are partially represented across systems. In automotive manufacturing, timely and integrated data can help quality engineers properly categorize issues and identify which investigations to conduct; but ultimately, the data and algorithmic techniques feed a broader process, which is anchored in the engineer’s operational experience.
The AI system must enable the best-effort, continuous encoding of these human-driven parts of the process. This means providing flexible application-building tools that let human users be woven into workflows that start with minimal or partial automation and organically evolve over time. It can also mean leveraging Generative AI to capture the context and reasoning involved in human-driven tasks, which have historically been sequestered within documents, images, videos, and audio files.
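A rough sketch of this best-effort encoding, assuming a hypothetical scheduling step: the step starts fully manual, but every human decision and its rationale are captured so the step can later be assisted or automated:

```python
from dataclasses import dataclass

@dataclass
class HumanDecision:
    step: str
    options_presented: list
    chosen: str
    rationale: str  # reasoning that would otherwise live only in the operator's head

DECISION_TRACE: list = []

def record_manual_step(step: str, options: list, chosen: str, rationale: str) -> str:
    """A step with no automation yet: the human decides, and the choice plus the 'why' are captured."""
    DECISION_TRACE.append(HumanDecision(step, options, chosen, rationale))
    return chosen

# Scheduling starts as a purely human call, but every call leaves a trace that later
# phases of automation can learn from.
record_manual_step(
    step="assign_on_call_nurse",
    options=["Nurse A", "Nurse B", "float pool"],
    chosen="Nurse B",
    rationale="Nurse A already covered two night shifts this week",
)
print(DECISION_TRACE[-1])
```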
The continuous encoding of operational processes allows AI-powered automation to steadily expand in scope and learn from frontline feedback. Every process where humans and AI are working together becomes a rich canvas for learning. Where did the operator hand off to the AI? Where did the AI make a sensible recommendation? Where did it not?
Human preferences — captured through scenarios being evaluated and actions being taken — form a stream of ‘tribal knowledge’ that can be fed into every AI-powered component of the decision-making process. Over time, this distills the business acumen that’s been limited to certain individuals, combines it with the vast amount of operational context that AI can leverage, and scales it through the resulting automation.
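A sketch of how those captured decisions might become a feedback stream for the AI-powered components, for instance as preference records usable as few-shot examples or evaluation cases; this is an assumed pattern, not a prescribed pipeline:

```python
# Each recorded decision from a human-in-the-loop step (mirroring the trace captured above).
decision_trace = [
    {"step": "assign_on_call_nurse",
     "options": ["Nurse A", "Nurse B", "float pool"],
     "chosen": "Nurse B",
     "rationale": "Nurse A already covered two night shifts this week"},
]

def to_preference_examples(trace: list) -> list:
    """Turn recorded human decisions into (context, preferred, rejected) records
    that downstream AI components can consume as examples or evaluation cases."""
    examples = []
    for d in trace:
        examples.append({
            "context": {"step": d["step"], "rationale": d["rationale"]},
            "preferred": d["chosen"],
            "rejected": [o for o in d["options"] if o != d["chosen"]],
        })
    return examples

print(to_preference_examples(decision_trace))
```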
Palantir uses the AI Levels framework to assess the depth of automation within a given operational process; a brief code sketch of the taxonomy follows the levels below. In short:
Level 0 involves basic, secure access to LLMs, where the models help with summarization or data extraction, but ultimately so that human users can continue to reason and act themselves.
Level 1 takes this one step further, with secure data integration and the ability to conduct both search and visualization workflows. Level 1 tasks often involve AI reacting to prompts to aid with open-ended exploration.
Level 2 is where AI begins to interact with both data and logic to provide decision guidance. This can involve agents using tools to run simulations and other calculations, guiding human users as they work in real time.
Level 3 is reached when Generative AI is integrated with the data, logic, and action elements of the process. Agents are deployed across entire workflows, and they are absorbing the tribal knowledge from humans who remain in the loop.
Level 4 is defined by agents taking the primary role in a process; humans may still provide oversight and feedback, but the end-to-end execution, learning, and improvement loops are now AI-driven.
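Purely as an illustrative sketch, the taxonomy could be encoded as follows; the level names and comments paraphrase the descriptions above and are not an official schema:

```python
from enum import IntEnum

class AILevel(IntEnum):
    """Depth of automation within a single operational process."""
    L0_SECURE_ACCESS = 0      # secure LLM access for summarization / extraction; humans reason and act
    L1_SEARCH_AND_VIZ = 1     # secure data integration; prompt-driven search and visualization
    L2_DECISION_GUIDANCE = 2  # agents use data and logic tools (simulations, calculations) to guide humans
    L3_WORKFLOW_AGENTS = 3    # agents span data, logic, and actions; humans in the loop supply tribal knowledge
    L4_AGENT_PRIMARY = 4      # agents own end-to-end execution and learning; humans provide oversight

def depth(level: AILevel) -> str:
    return f"{level.name}: depth {int(level)} of 4"

print(depth(AILevel.L2_DECISION_GUIDANCE))
```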
These levels of depth are complemented by a holistic measure of breadth; i.e., how many enterprise processes are being infused with AI — and what is the compound effect? One dimension of breadth is simply coverage within the operational space. How many of the processes that constitute the fulfillment workflow in the supply chain are automated? How many of the workflows across customer service are automated? To what degree?
The other dimension is connectivity: where is the infusion of AI eliminating the artificial walls between processes? To what degree has AI-driven automation redrawn the rigid lines and enabled the enterprise to better connect strategy with operations?
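A hedged sketch of measuring breadth along these two dimensions; the process inventory, depth scores, and connections below are invented for illustration:

```python
# Map each operational process to its current AI level (0-4), per the depth scale above.
process_levels = {
    "supply_chain.fulfillment.order_promising": 3,
    "supply_chain.fulfillment.inventory_rebalancing": 2,
    "customer_service.returns": 1,
    "customer_service.claims": 0,
}

# Cross-process connections enabled by shared data, logic, and actions.
connected_pairs = {
    ("supply_chain.fulfillment.inventory_rebalancing", "customer_service.returns"),
}

coverage = sum(1 for lvl in process_levels.values() if lvl >= 2) / len(process_levels)
avg_depth = sum(process_levels.values()) / len(process_levels)
max_pairs = len(process_levels) * (len(process_levels) - 1) / 2
connectivity = len(connected_pairs) / max_pairs

print(f"coverage (>= level 2): {coverage:.0%}, average depth: {avg_depth:.1f}, connectivity: {connectivity:.0%}")
```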
At the limit, AI should allow longstanding processes to be re-engineered from first principles. Workflows that have traditionally been spread across disjoint teams can be integrated into continuous single processes, and can be measured against KPIs that more authentically reflect operational outcomes. As exogenous conditions change, and internal goals continue to evolve, the enterprise should be able to encode new strategic objectives and have them seamlessly update all nested operational goals.
The role of the operational user evolves as well, as the breadth and depth of AI expand. As agents manage steadily more end-to-end processes, human users gain increasing operational leverage. Their roles shift from being purely execution-centric to instead managing fleets of agents — and overseeing the integration of the feedback-based learning that is now being processed at machine speed.
This is emblematic of true operational steering — the “cybernetic enterprise,” which was the original goal of Silicon Valley and the computing revolution.
For the AI revolution to be a worthy one, it must be driven through open, extensible architectures that break from the shackles of the Software Industrial Complex.
This means that every component of a process that is integrated from digital sources (data systems, logic sources, systems of action) must be represented as building blocks that can drive AI-enabled application building both inside and outside of the platform, through robust APIs and SDKs. It also means that the system must abide by open standards and enable flexibility: every data element should be stored in non-proprietary formats; a range of open compute engines should be made available; and models should be able to be developed, stored, and orchestrated in adherence with open standards.
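As one concrete example of the non-proprietary-format principle, data written as Parquet can be read back by any engine that speaks the format. The sketch below uses pyarrow, but the same file is equally readable from Spark, DuckDB, or pandas; the file name is illustrative:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A data element stored in an open, columnar format rather than a proprietary store.
table = pa.table({
    "order_id": ["WO-17", "WO-18"],
    "status": ["blocked", "in_progress"],
})
pq.write_table(table, "work_orders.parquet")

# Any compliant engine or SDK can read it back; there is no lock-in on the bytes.
round_tripped = pq.read_table("work_orders.parquet")
print(round_tripped.to_pydict())
```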
The platform must provide compounding leverage for AI developers as each new application is delivered. Every data integration must produce a reusable artifact which can be automated; every piece of business logic must become a scalable tool for both human and AI usage; and every user interface component must allow for human-driven action to smoothly transition to human-AI teaming, and then to end-to-end automation. Developers should also not be constrained by a particular environment or framework; the platform should provide a secure, modular development environment which can be seamlessly integrated with common coding environments and DevOps toolchains.
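A sketch of how a single action component might support that transition from human-driven execution to human-AI teaming to end-to-end automation; the mode names and handlers are assumptions for illustration:

```python
from enum import Enum
from typing import Callable

class Mode(Enum):
    MANUAL = "manual"          # human decides, system records
    ASSISTED = "assisted"      # AI recommends, human approves
    AUTOMATED = "automated"    # AI decides and executes, human audits

def run_action(mode: Mode,
               ai_recommend: Callable[[], str],
               human_approve: Callable[[str], bool]) -> str:
    """One action surface whose behavior shifts as trust in the automation grows."""
    if mode is Mode.MANUAL:
        return "await human decision"
    recommendation = ai_recommend()
    if mode is Mode.ASSISTED and not human_approve(recommendation):
        return "await human decision"
    return f"execute: {recommendation}"

# The same component, promoted from assisted to automated over time.
print(run_action(Mode.ASSISTED, lambda: "expedite WO-17", lambda rec: True))
print(run_action(Mode.AUTOMATED, lambda: "expedite WO-17", lambda rec: True))
```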
Taken together, these requirements describe the need for a very different system architecture. Trying to extend traditional data or analytics architectures might work for simple retrieval-oriented workflows like RAG, but building AI-driven automations requires a technical foundation that is fundamentally process-centric, not data-centric. Loops, rather than linear tabulations and BI reports.
Everything from the multimodal data architecture to the interoperable model architecture, the real-time workflow architecture, and the underlying security architecture must be geared towards infusing AI throughout core operational processes, removing the latency between strategic shifts and operational action, and enabling the enterprise to optimally execute its essential mission.