Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps
Optimal transport (OT) theory focuses, among all maps $T:\mathbb{R}^d\rightarrow\mathbb{R}^d$ that can morph a probability measure onto another, on those that are the “thriftiest”, i.e. such that the averaged cost $c(x, T(x))$ between $x$ and its image $T(x)$ be as small as possible. Many computational approaches have been proposed to estimate such Monge maps when $c$ is the $\ell_2^2$ distance, e.g., using entropic maps (Pooladian and Niles-Weed, 2021), or neural networks (Makkuva et al., 2020; Korotin et al., 2020). We propose a new model for transport maps, built on a family of translation invariant costs $c(x,y):=h(x-y)$, where $h:=\frac{1}{2}\|\cdot\|_2^2+\tau$ and $\tau$ is a regularizer. We propose a…
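Since the abstract is truncated, a brief illustration of the mechanism it describes may help: for costs $h(x-y)$ with $h=\frac{1}{2}\|\cdot\|_2^2+\tau$, the Gangbo–McCann form of the Monge map gives $T(x)=x-\nabla h^*(\nabla f(x))$, and $\nabla h^*$ reduces to the proximal operator of $\tau$; with $\tau=\lambda\|\cdot\|_1$ that proximal operator is soft-thresholding, so displacements $x-T(x)$ are feature-sparse. The NumPy sketch below shows this under those assumptions; the names `sparse_monge_map`, `grad_f`, and `lam` are illustrative, not the paper's implementation (which builds on entropic maps).

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_monge_map(x, grad_f, lam):
    """Map for the translation-invariant cost h(x - y), h = 0.5||.||_2^2 + lam||.||_1:
    T(x) = x - prox_{lam||.||_1}(grad_f(x)), so each displacement x - T(x)
    is soft-thresholded and hence touches only a few features."""
    return x - soft_threshold(grad_f(x), lam)

# Toy usage: a potential whose gradient is large only along feature 0.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 10))
grad_f = lambda pts: 0.1 * pts + 2.0 * np.eye(10)[0]
T_x = sparse_monge_map(x, grad_f, lam=0.5)
print(np.count_nonzero(x - T_x, axis=1))  # ~1 nonzero feature per displacement
```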
Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets. In this context, given two large point clouds of sizes $n$ and $m$ in $\mathbb{R}^d$, entropic OT (EOT) solvers have emerged as the most reliable tool to either solve the Kantorovich problem and…
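For context on what an EOT solver computes: the Sinkhorn algorithm alternates simple matrix–vector scalings to solve the entropy-regularized Kantorovich problem between the two point clouds. A minimal NumPy sketch, with the regularization strength `eps`, iteration count, and uniform marginals as illustrative assumptions:

```python
import numpy as np

def sinkhorn(x, y, eps=0.1, iters=200):
    """Entropic OT between uniform measures on point clouds x (n,d) and y (m,d).
    Returns the (n, m) coupling solving the eps-regularized Kantorovich problem."""
    n, m = len(x), len(y)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean costs
    K = np.exp(-C / eps)                                # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)     # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                              # alternating scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                  # P = diag(u) K diag(v)

P = sinkhorn(np.random.randn(50, 3), np.random.randn(40, 3))
print(P.sum(axis=1)[:3])  # each row sums to ~1/50, matching the source marginal
```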
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. Although remarkably challenging to solve in theory, disentanglement is often achieved in practice through prior matching. Furthermore, recent works have shown that prior matching approaches…
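As a concrete instance of what “prior matching” typically means in this setting (an assumption on my part, since the abstract is truncated): penalizing the divergence between the encoder's latent distribution and a factorized prior, as in the VAE's KL term against a standard normal. A minimal NumPy sketch of that closed-form penalty:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    Driving this to zero matches the latent distribution to a factorized
    prior -- the 'prior matching' pressure used in practice."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy check: latent codes already matching the prior incur zero penalty.
mu, log_var = np.zeros((4, 8)), np.zeros((4, 8))
print(kl_to_standard_normal(mu, log_var))  # -> [0. 0. 0. 0.]
```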
AI policy sets boundaries on acceptable behavior for AI models, but this is challenging in the context of large language models (LLMs): how do you ensure coverage over a vast behavior space? We introduce policy maps, an approach to AI policy design inspired by the practice of physical mapmaking. Instead…