Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model

École Polytechnique Fédérale de Lausanne (EPFL), Technical University of Darmstadt, hessian.AI

TL;DR: Given a pair of equirectangular images captured by two vertically stacked omnidirectional cameras, DFI-OmniStereo integrates a large-scale pre-trained monocular relative depth foundation model into an iterative stereo matching approach. This method improves depth estimation accuracy, significantly outperforming the previous state-of-the-art method on the Helvipad dataset.

Abstract

Omnidirectional depth perception is essential for mobile robotics applications that require scene understanding across a full 360° field of view. Camera-based setups offer a cost-effective option by using stereo depth estimation to generate dense, high-resolution depth maps without relying on expensive active sensing. However, existing omnidirectional stereo matching approaches achieve only limited depth accuracy across diverse environments, depth ranges, and lighting conditions, due to the scarcity of real-world data.

We present DFI-OmniStereo, a novel omnidirectional stereo matching method that leverages a large-scale pre-trained foundation model for relative monocular depth estimation within an iterative optimization-based stereo matching architecture. We introduce a dedicated two-stage training strategy: the foundation model's relative depth features are first adapted for omnidirectional stereo matching, followed by scale-invariant fine-tuning.

DFI-OmniStereo achieves state-of-the-art results on the real-world Helvipad dataset, reducing disparity MAE by approximately 16% compared to the previous best omnidirectional stereo method.

Method

Architecture visualisation

A shared depth foundation model (purple) extracts representations from the top and bottom images. An omnidirectional stereo matching head (pink) then predicts disparity from these image features: the intermediate representations and relative depth maps of both images are adapted into multi-scale feature maps for the iterative matching head, which constructs its cost volume via vertical warping and predicts a disparity map.
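For intuition, the snippet below sketches how a correlation cost volume over vertical (polar-angle) shifts can be built from the top- and bottom-view feature maps. It is a simplified stand-in for the matching head's actual cost-volume construction; the function name, tensor layout, and shift direction are illustrative assumptions, not the exact implementation.

import torch

def vertical_correlation_volume(feat_top, feat_bottom, max_disp):
    # feat_top, feat_bottom: [B, C, H, W] features extracted by the shared encoder
    # max_disp: number of candidate vertical disparities, in feature-map pixels
    b, c, h, w = feat_top.shape
    volume = feat_top.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, 0] = (feat_top * feat_bottom).mean(dim=1)
        else:
            # Warp along the vertical (polar-angle) axis: compare each top-view pixel
            # with the bottom-view pixel d rows away; the shift direction depends on
            # which view is taken as the reference.
            volume[:, d, d:, :] = (feat_top[:, :, d:, :] * feat_bottom[:, :, :-d, :]).mean(dim=1)
    return volume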

The training consists of two stages. In training stage A (yellow), we adapt the stereo matching head to the omnidirectional data and the foundation model features (foundation model frozen) using a conventional stereo matching loss \( \mathcal{L}_{A} \). In stage B (orange), we fine-tune the foundation model decoder and the stereo matching head, utilizing a scale-invariant logarithmic loss \( \mathcal{L}_{B} \). Frozen and trainable modules are denoted with a snowflake and fire symbol, respectively.
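For reference, a common formulation of a scale-invariant logarithmic loss (in the spirit of Eigen et al.) is sketched below. The exact variant, weighting, and masking used for \( \mathcal{L}_{B} \) are defined in the paper; the parameter values here are illustrative assumptions.

import torch

def silog_loss(pred, target, mask=None, lam=0.85, eps=1e-6):
    # pred, target: positive depth (or scaled disparity) maps of identical shape
    # mask: optional boolean mask of pixels with valid ground truth
    # lam = 1.0 gives a fully scale-invariant loss; smaller values also penalize scale errors
    if mask is None:
        mask = target > 0
    g = torch.log(pred[mask] + eps) - torch.log(target[mask] + eps)
    return torch.sqrt((g ** 2).mean() - lam * g.mean() ** 2 + eps)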

Results

The table below shows comparative results on the Helvipad test split. Our method (DFI-OmniStereo) achieves the best results in 6 of the 8 metrics; only the LRCE metrics are slightly worse than those of the previous state-of-the-art method (360-IGEV-Stereo).

Method           Stereo setting    Disparity (°)               Depth (m)
                                   MAE    RMSE   MARE   LRCE   MAE    RMSE   MARE   LRCE
PSMNet           conventional      0.286  0.496  0.248  -      2.509  5.673  0.176  1.809
360SD-Net        omnidirectional   0.224  0.419  0.191  -      2.122  5.077  0.152  0.904
IGEV-Stereo      conventional      0.225  0.423  0.172  -      1.860  4.474  0.146  1.203
360-IGEV-Stereo  omnidirectional   0.188  0.404  0.146  0.054  1.720  4.297  0.130  0.388
DFI-OmniStereo   omnidirectional   0.158  0.338  0.120  0.058  1.463  3.767  0.108  0.397
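To put the headline numbers in context: relative to 360-IGEV-Stereo, the disparity MAE drops from 0.188° to 0.158°, a relative reduction of \( (0.188 - 0.158)/0.188 \approx 16\% \), and the depth MAE drops from 1.720 m to 1.463 m, roughly 15%. Disparity is reported as an angular offset (in degrees) along the vertical image direction; assuming the bottom view as the reference, a vertical baseline \( B \), and polar angle \( \theta \) measured from the vertical axis, the law of sines relates angular disparity \( d \) to depth as \( r = B \sin(\theta + d) / \sin d \) (the exact reference-view convention follows the Helvipad setup).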

The videos below show the depth maps predicted by our method (DFI-OmniStereo) and the previous state-of-the-art method (360-IGEV-Stereo) on an outdoor daytime and an outdoor nighttime scene from the Helvipad dataset. A video of an indoor scene is provided at the top of this page. DFI-OmniStereo yields predictions with sharper edges, finer details, and smoother surfaces.

Outdoor Day Scene

Outdoor Night Scene

BibTeX

@article{endres2025dfiomnistereo,
  author    = {Endres, Jannik and Hahn, Oliver and Corbière, Charles and Schaub-Meyer, Simone and Roth, Stefan and Alahi, Alexandre},
  title     = {Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model},
  journal   = {arXiv:2503.23502 [cs.CV]},
  year      = {2025},
}