The Open Standard that Ensures Your Drone is a Major Player on the Battlefield

Editor’s Note: The following blog post is intended for drone manufacturers aligned to U.S. interests who are seeking industry standard implementation guidance to help make their products more competitive. It is widely recognized that Chinese drone manufacturers have a dominant market position. At Palantir, we believe it is critical to cultivate innovation and competition in the commercial drone market and increase buying choices for the United States and its allies.

Your small drone could have a big part to play in defending against the Russian invasion of Ukraine, but you are wasting time on features that don’t matter. From Palantir’s 10-plus years integrating large drone platforms into intelligence, surveillance, target acquisition, and reconnaissance workflows, we know that the number one feature drones need to maximize accuracy and appropriate impact is implementation of telemetry standards, specifically the MISB standard. This open specification ensures a consistent method for embedding telemetry data and camera-pointing information into the video stream.

Everything your customer cares about is downstream of the MISB standard: effective targeting, reduced ammunition waste, threat avoidance, and more. All of it requires accurate telemetry data. Operators need to be able to plot points of interest from drone footage on a live map, where they can be fused with every other piece of critical information for effective wartime decision-making. Next-generation full motion video (FMV) players can also perform the reverse: plotting data from a static 2D map live over the drone video with high geolocation accuracy to enable Augmented Reality (AR). This standard is the difference between your drone as a kid’s toy and your drone as an extension of the warfighting force.

Large drone companies have successfully implemented the MISB standard and reaped the benefits. Small drone companies are equally capable of making this investment, and Palantir wants to help you do so as quickly as possible. Read on for more information about the standard, how to implement it, and the powerful software capabilities waiting on the other side.

The Standard

Practically all drones generate some telemetry data (e.g., roll/pitch/yaw, GPS position, camera-pointing information) associated with their FMV feeds. This telemetry data is used to calculate an accurate geolocation for both the drone itself and the field of view within the video, so operators can plot points of interest from drone videos on a map. It is required for just about any operational workflow.

Because of the importance of telemetry data and the need to provide it reliably and consistently to downstream systems, the National Geospatial-Intelligence Agency (NGA) stood up the Motion Imagery Standards Board (MISB) and published various specifications defining how to encode this kind of information into video streams. For streaming telemetry data from Unmanned Aircraft Systems (UAS), the standard is MISB Standard 0601, which details the UAS Datalink Local Set (LS). The major drone manufacturers have all implemented the MISB standard for large government drones, but the small unmanned aircraft systems (sUAS) market is lagging behind.
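Concretely, 0601 telemetry travels as KLV (Key-Length-Value) triplets in a metadata stream multiplexed alongside the video, typically in an MPEG-TS container. As a rough illustration of the wire format (a minimal sketch under our own simplifying assumptions: single-byte tags, no Universal Label or checksum verification), here is how a downstream system might walk the triplets inside a Local Set payload:

```python
def parse_klv_local_set(payload: bytes) -> dict[int, bytes]:
    """Walk the tag/length/value triplets inside a 0601 Local Set payload."""
    items, i = {}, 0
    while i < len(payload):
        tag = payload[i]          # single-byte tag (true for Tags 1-127)
        i += 1
        length = payload[i]       # BER length: short form if < 128
        i += 1
        if length & 0x80:         # long form: low 7 bits = size of length field
            n = length & 0x7F
            length = int.from_bytes(payload[i:i + n], "big")
            i += n
        items[tag] = payload[i:i + length]
        i += length
    return items

# e.g., a lone Platform Heading Angle item (Tag 5, two bytes):
print(parse_klv_local_set(bytes.fromhex("05023C10")))  # {5: b'<\x10'}
```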

The Necessary Fields

There are a lot of fields in the 0601 spec, and not all of them will be relevant for most commercial sUAS platforms (e.g., you likely don’t have an “icing detected” status!).

The fields we rely on most to georegister the video are below:

  • Tag 2: Precision Time Stamp
  • Tag 5: Platform Heading Angle
  • Tag 6: Platform Pitch Angle
  • Tag 7: Platform Roll Angle
  • Tag 13: Sensor Latitude
  • Tag 14: Sensor Longitude
  • Tag 15: Sensor True Altitude
  • Tag 16: Sensor Horizontal Field of View (HFOV)
  • Tag 17: Sensor Vertical Field of View (VFOV)
  • Tag 18: Sensor Relative Azimuth Angle
  • Tag 19: Sensor Relative Elevation Angle
  • Tag 20: Sensor Relative Roll Angle

Note 1: Generally, the platform angles (Tags 5–7) and the sensor angles (Tags 18–20) combine into single absolute roll, pitch, and yaw values. Both sets aren’t strictly necessary, but the absolute angles are: if you can’t provide that level of granularity, you can zero out one set and report the absolute orientation in the other (see the sketch after these notes).
Note 2: Tags 23–25 (Frame Center Latitude, Longitude, and Elevation) are not necessary, but they are very valuable for platforms capable of providing this data.
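To make Note 1 concrete, here is a brief sketch (our own example using SciPy; the variable names and sample values are hypothetical) of how the two angle sets compose into a single absolute orientation:

```python
from scipy.spatial.transform import Rotation as R

# Example values: platform attitude (Tags 5-7) and the sensor's
# orientation relative to the platform (Tags 18-20), in degrees.
heading, plat_pitch, plat_roll = 90.0, 2.0, -1.0
rel_az, rel_el, rel_roll = 10.0, -30.0, 0.0

# Treat both sets as intrinsic yaw-pitch-roll (Z-Y'-X'') rotations.
platform = R.from_euler("ZYX", [heading, plat_pitch, plat_roll], degrees=True)
sensor = R.from_euler("ZYX", [rel_az, rel_el, rel_roll], degrees=True)

# The sensor rotates within the platform's body frame, so compose
# platform * sensor (SciPy applies the right-hand factor first).
abs_yaw, abs_pitch, abs_roll = (platform * sensor).as_euler("ZYX", degrees=True)
print(abs_yaw, abs_pitch, abs_roll)  # the single absolute orientation
```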

And that’s it. Basically, we need the time (to link telemetry to the proper video frame), the aircraft’s positional information, and the sensor-pointing information.
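For a sense of what emitting the fields above involves, below is a minimal sketch in Python of building a 0601 Local Set packet (our own function names and a reduced field set; consult ST 0601 itself for the authoritative encodings, including the mandatory trailing checksum item included here):

```python
import struct
import time

# 16-byte Universal Label key for the MISB ST 0601 UAS Datalink Local Set
UAS_LS_KEY = bytes.fromhex("060E2B34020B01010E01030101000000")

def ber_length(n: int) -> bytes:
    """Encode a KLV length using BER short/long form."""
    if n < 128:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_item(tag: int, value: bytes) -> bytes:
    return bytes([tag]) + ber_length(len(value)) + value

def map_signed(value: float, half_range: float, length: int) -> bytes:
    """Map a value in [-half_range, +half_range] onto a signed integer."""
    max_int = 2 ** (8 * length - 1) - 1
    return int(round(value / half_range * max_int)).to_bytes(length, "big", signed=True)

def checksum(data: bytes) -> int:
    """Running 16-bit sum over the packet, high byte on even offsets."""
    bcc = 0
    for i, b in enumerate(data):
        bcc = (bcc + (b << 8 if i % 2 == 0 else b)) & 0xFFFF
    return bcc

def build_0601_packet(lat: float, lon: float, heading: float) -> bytes:
    items = b""
    # Tag 2: Precision Time Stamp, microseconds since the Unix epoch (8 bytes)
    items += klv_item(2, struct.pack(">Q", int(time.time() * 1e6)))
    # Tag 5: Platform Heading Angle, 0..360 degrees mapped onto a uint16
    items += klv_item(5, struct.pack(">H", int(heading / 360 * 0xFFFF)))
    # Tags 13/14: Sensor Latitude/Longitude, +/-90 and +/-180 degrees on int32
    items += klv_item(13, map_signed(lat, 90.0, 4))
    items += klv_item(14, map_signed(lon, 180.0, 4))
    # Tag 1: Checksum, a placeholder for now; it must be the final item
    items += klv_item(1, b"\x00\x00")
    packet = UAS_LS_KEY + ber_length(len(items)) + items
    # The checksum covers everything up to (not including) its own value bytes
    return packet[:-2] + struct.pack(">H", checksum(packet[:-2]))

print(build_0601_packet(lat=48.45, lon=35.05, heading=90.0).hex())
```

A real implementation would add the remaining fields from the list above (altitude, fields of view, and the pointing angles) using the bit mappings the spec assigns to each tag.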

Below, we describe the key technology components that enable the magical combination of Palantir software + drones to achieve the use cases described above.

The Algorithm

Standardized telemetry data is a necessary input for georeferencing the video feed, which lets users map video pixels to real-world coordinates. In some cases, this is sufficient. However, due to GPS error, sensor error, or elevation-data discrepancies, a further step is typically needed to accurately align the video to the earth. This step, called georegistration, provides precise geolocation of the drone’s camera bounds and is itself a prerequisite for overlaying AR annotations on the FMV feed.
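To illustrate the georeferencing step, here is a toy flat-terrain sketch (the names and the level-ground assumption are ours): it casts the camera boresight ray from the sensor position (Tags 13–15) along the absolute pointing angles and intersects it with the ground plane. The gap between this estimate and reality is precisely what georegistration corrects.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # spherical-Earth approximation

def boresight_ground_point(sensor_lat: float, sensor_lon: float,
                           alt_agl_m: float, azimuth_deg: float,
                           depression_deg: float) -> tuple[float, float]:
    """Estimate where the camera boresight hits level ground."""
    if depression_deg <= 0:
        raise ValueError("boresight must point below the horizon")
    # Horizontal distance from the point directly below the sensor
    ground_range_m = alt_agl_m / math.tan(math.radians(depression_deg))
    d_north = ground_range_m * math.cos(math.radians(azimuth_deg))
    d_east = ground_range_m * math.sin(math.radians(azimuth_deg))
    # Convert metre offsets to degrees (equirectangular approximation)
    lat = sensor_lat + math.degrees(d_north / EARTH_RADIUS_M)
    lon = sensor_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(sensor_lat))))
    return lat, lon

# e.g., a sensor at 500 m AGL looking 45 degrees down, due east:
print(boresight_ground_point(48.45, 35.05, 500.0, 90.0, 45.0))
```

Production georeferencing replaces the flat plane with real terrain elevation data and projects all four frame corners, not just the center.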

Palantir Georegistration uses computer vision to dynamically compare video feeds to known reference imagery (e.g., Mapbox, USGS, NGA, MapTiler) and reconcile discrepancies, correcting the geospatial information of the video frames and aligning them to more precise coordinates. Our matching techniques are designed for resilience to multimodal and low-structure imagery, which allows us to establish strong matches with reference imagery across sensor modalities and in environments that defeat traditional matching methodologies. Capabilities include:

  • Application across sensor modalities and collection parameters: The computer-vision approach adapts to different sensors (e.g., EO, IR, and SAR) as well as seasonal, temporal, and lighting changes.
  • Frame-by-frame georegistration: Our algorithm considers every video frame and identifies key frames to align on and anchor to, reducing jitter and noise on each frame for increased accuracy and usability. This allows users to take advantage of rich AR overlays, with asset locations and their respective fields of view accurate to real-world positions. Additionally, we clearly display precise model information to the user to convey the current georegistration precision and the strength of the “lock” on the viewable region.
  • Available offline: Reliable information overlays are provided even in denied, disconnected, intermittent, or limited bandwidth (DDIL) environments. Georegistration on full motion video along with AI insights can deploy fully self-contained with no network needed.

While Palantir offers the advanced georegistration solution detailed above, our software is algorithm-agnostic and supports any georegistration algorithm that accepts 0601 telemetry data and produces corrected 0601 data or earth intersection points.
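One way to picture that contract (a hypothetical sketch; these names are not Palantir’s API) is as a narrow interface that any algorithm can implement:

```python
from typing import Protocol, Sequence, Tuple

class GeoregistrationAlgorithm(Protocol):
    """Plug-in contract implied by the text: consume a video frame plus
    its raw ST 0601 telemetry, and produce either corrected telemetry
    or earth intersection points for the frame corners."""

    def register(self, frame: bytes, telemetry_0601: bytes) -> bytes:
        """Return a corrected ST 0601 packet for this frame."""
        ...

    def corner_points(self, frame: bytes,
                      telemetry_0601: bytes) -> Sequence[Tuple[float, float]]:
        """Return earth intersection (lat, lon) points for the frame corners."""
        ...
```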

Transforming Drone Footage with Software

The sheer volume of FMV data continues to increase, rapidly outpacing users’ ability to analyze and act on insights. Moreover, teams reviewing video must often collaborate manually across siloed databases, slowing speed-to-decision and increasing the risk of error. The Palantir platform solves these problems with a decentralized and distributed video architecture that syncs data and insights back to a multi-INT map with up-to-date information on all targets, allowing for an accelerated and more accurate Military Decision Making Process (MDMP).

Once the deployed drone sends the necessary telemetry data to the Palantir platform, along with the video, users unlock the ability to build mission-critical intelligence and execute live operations in contested environments, all in the much-discussed but rarely achieved “single pane of glass.” Applications include:

  • Common Operating Picture for intel/ops fusion
  • Improved accuracy for identification and targeting workflows
  • Reduced cognitive load on human analysts
  • Processing at the edge for real-time insights and hot-swapping of models

Augmented Reality (AR) Overlays

The value of drone footage increases exponentially when it is fused with other complementary data sources for real-time intel/ops fusion. The Palantir platform enables users to overlay a variety of data sources to contextualize feeds and fix known locations onto each video frame. These data sources include an analyst’s Common Operating Picture (COP) map, Blue Force Trackers (BFT/ATAK), ELINT, OSINT, prohibited or sensitive locations such as No Strike Lists (NSLs), artificial intelligence (AI) detections, and more.

Palantir supports Augmented Reality overlays that display intelligence from canonical sources, live data streams, and map layers in real time. Georegistration ensures that every pixel in every video frame is mapped to a real-world geocoordinate.
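As a minimal illustration of what that mapping enables (our own sketch, not Palantir’s implementation): if georegistration yields a per-frame transform between ground and image coordinates, represented here as a 3×3 homography, then overlaying any known location is a single projection:

```python
import numpy as np

def geo_to_pixel(h_geo_to_image: np.ndarray,
                 lat: float, lon: float) -> tuple[float, float]:
    """Project a ground coordinate into the video frame, assuming a
    3x3 homography mapping homogeneous (lon, lat, 1) ground coordinates
    to homogeneous pixel coordinates (a hypothetical representation,
    reasonable for roughly planar scenes)."""
    u, v, w = h_geo_to_image @ np.array([lon, lat, 1.0])
    return u / w, v / w

# Every overlay source (BFT tracks, NSL polygons, AI detections) reduces
# to projecting its coordinates through the current frame's transform.
H = np.eye(3)  # placeholder; a real H comes from georegistration
print(geo_to_pixel(H, lat=48.45, lon=35.05))
```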

Contact Us

If you are a drone manufacturer looking to make your platform more useful by integrating with capabilities like those described here, we would love to hear from you. Reach out to us at drones@palantir.com.

Authors

Rob Imig, Head of USG Research & Development, Palantir
Madeline Zimmerman, Deployment Strategist, R&D Federal
Dan Zangri, Product Manager

