Depth Anything 3: Recovering the Visual Space from Any Views (code and model available). Lots of examples on the project page.
Project page: https://depth-anything-3.github.io/
Paper: https://arxiv.org/pdf/2511.10647
Demo: https://huggingface.co/spaces/depth-anything/depth-anything-3
GitHub: https://github.com/ByteDance-Seed/depth-anything-3

Depth Anything 3 is a single transformer model trained exclusively for joint any-view depth and pose estimation via a specially chosen ray representation. It reconstructs the visual space, producing consistent depth and ray maps that can be fused into accurate point clouds, resulting in high-fidelity …
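The post doesn't show the released inference API, so as a rough illustration of the fusion step only, here is a minimal sketch of back-projecting a single predicted depth map into a camera-frame point cloud with an assumed pinhole model; the intrinsics (fx, fy, cx, cy) and the synthetic depth map are hypothetical stand-ins, not values from the paper or repo.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) metric depth map into an (N, 3) point cloud
    in the camera frame, assuming a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid / zero-depth pixels

# Hypothetical usage: a constant 2 m depth map standing in for a model prediction.
depth = np.full((480, 640), 2.0, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```

For multiple views, each per-view cloud would additionally be transformed by its estimated camera pose before merging, which is where the consistent depth and pose predictions matter.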