Self-supervised Monocular Depth Estimation: Let’s Talk About the Weather

Kieran Saunders*, George Vogiatzis, Luis J. Manso

*Corresponding author for this work

Research output: Chapter in Book / Conference publication


Current self-supervised depth estimation architectures rely on clear, sunny weather scenes to train deep neural networks. However, in many locations this assumption is too strong. For example, in the UK in 2021 it rained on 149 days. For these architectures to be effective in real-world applications, we must create models that can generalise to all weather conditions, times of day and image qualities. Using a combination of computer graphics and generative models, one can augment existing sunny-weather data in a variety of ways that simulate adverse weather effects. While it is tempting to use such data augmentations for self-supervised depth, in the past this was shown to degrade performance instead of improving it. In this paper, we put forward a method that uses augmentations to remedy this problem. By exploiting the correspondence between unaugmented and augmented data, we introduce a pseudo-supervised loss for both depth and pose estimation. This brings back some of the benefits of supervised learning while still not requiring any labels. We also make a series of practical recommendations which collectively offer a reliable, efficient framework for weather-related augmentation of self-supervised depth from monocular video. We present extensive testing to show that our method, Robust-Depth, achieves SotA performance on the KITTI dataset while significantly surpassing SotA on challenging, adverse-condition data such as DrivingStereo, Foggy CityScape and NuScenes-Night.
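The pseudo-supervised idea in the abstract can be sketched as follows: the depth predicted from the clean (sunny) image serves as a pseudo label for the depth predicted from its weather-augmented counterpart of the same scene. This is a minimal illustrative sketch using a simple L1 consistency term on NumPy arrays; the function name and the exact form of the loss are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def pseudo_supervised_depth_loss(depth_clean, depth_aug):
    """L1 consistency loss between depth maps of the same scene.

    depth_clean: depth predicted from the unaugmented image,
                 treated as a fixed pseudo label (in a real training
                 loop its gradient would be detached).
    depth_aug:   depth predicted from the weather-augmented image.
    """
    pseudo_label = depth_clean
    return float(np.mean(np.abs(depth_aug - pseudo_label)))

# Toy example: two 2x2 "depth maps" of the same scene.
clean = np.array([[1.0, 2.0], [3.0, 4.0]])
aug = np.array([[1.5, 2.0], [3.0, 3.5]])
loss = pseudo_supervised_depth_loss(clean, aug)  # mean of |differences| = 0.25
```

Because both images depict the same scene, the pseudo label requires no ground-truth depth, which is how the method retains a supervised-style signal while remaining label-free.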
Original language: English
Title of host publication: Proceedings of the 2023 International Conference on Computer Vision
Number of pages: 11
Publication status: E-pub ahead of print - 6 Oct 2023
Event: The 2023 International Conference on Computer Vision - Paris, France
Duration: 2 Oct 2023 – 6 Oct 2023


Conference: The 2023 International Conference on Computer Vision
Abbreviated title: ICCV 2023

Bibliographical note

This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.

Funding & Acknowledgements: This research was funded and supported by the EPSRC’s DTP, Grant EP/W524566/1. Most experiments were run on Aston EPS Machine Learning Server, funded by the EPSRC Core Equipment Fund, Grant EP/V036106/1.


