Seeing Through Fog Without Seeing Fog:
Deep Multimodal Sensor Fusion in Unseen Adverse Weather

Mario Bijelic
Tobias Gruber
Fahim Mannan
Florian Kraus
Werner Ritter
Klaus Dietmayer
Felix Heide

CVPR 2020




We demonstrate that it is possible to learn multimodal fusion for extreme adverse weather conditions from clean data only. Multimodal sensor data, including camera, lidar, and gated camera measurements, can be asymmetrically degraded in harsh weather, i.e., only a subset of the sensory streams is degraded, which makes it challenging to learn redundancies for fusion methods.

The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information in good environmental conditions, they fail in adverse weather where the sensory streams can be asymmetrically distorted. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge, we present a novel multimodal dataset acquired in over 10,000 km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training as extreme weather is rare. To this end, we present a deep fusion network for robust fusion without a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy.



Paper

Mario Bijelic, Tobias Gruber, Fahim Mannan, Florian Kraus, Werner Ritter, Klaus Dietmayer, Felix Heide

Seeing Through Fog Without Seeing Fog:
Deep Multimodal Sensor Fusion in Unseen Adverse Weather

CVPR 2020

Please address correspondence to Felix Heide and Mario Bijelic.

[Paper]
[Supplement]
[Bibtex]
[Code]
[Dataset]


Dataset




Major sites of recording


Vehicle setup


Dataset Brief

We introduce an object detection dataset for challenging adverse weather conditions, covering both real-world driving scenes and controlled conditions in a fog chamber. The dataset covers diverse weather conditions, such as fog, snow, and rain, and was acquired over 10,000 km of driving in northern Europe. The capture routes and sensor setup are shown above. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. Below are sample videos in severe adverse weather.


Dense Fog

Here, a typical driving scene in dense fog at an intersection is shown. We show measurements from a conventional RGB camera, a FIR camera, a lidar, and a gated camera. Note the short visible range and the point cloud wobbling effects in the lidar measurements caused by fog movement and inhomogeneities.


Snowfall

Two examples of dense snowfall. The drop in contrast and the thick snowflakes are clearly visible and cause artifacts in all sensors. In lidar point clouds, we observe uniform clutter as a disturbance. In the gated camera view, we show the middle gated slice, which efficiently gates out backscatter.


Fog Chamber


Furthermore, our dataset provides examples recorded under controlled fog chamber conditions. This enables an exact analysis of sensor degradation in different weather conditions. Here is a scenario in light fog with an oncoming vehicle.


Video Summary


Video summary introducing the severe bias of existing datasets towards good weather conditions, the proposed method to overcome this issue, and the proposed adverse weather dataset for assessment in harsh conditions.


Multimodal Fusion




Entropy-Steered Multimodal Fusion

The proposed dataset, although large, is not large enough to cover enough combinations of scene semantics and asymmetric sensor degradation to allow for supervised fusion. Instead, we learn from clear data only and rely on the proposed dataset for validation. To achieve this, we depart from proposal-level fusion and propose an adaptive fusion driven by measurement entropy. This entropy-steered fusion enables detection even under unknown adverse weather effects.
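As a rough illustration of this idea, the PyTorch sketch below scales each sensor's feature map by a normalized entropy map computed from the corresponding raw measurement before concatenation. The patch-based entropy estimate and all function names are our own placeholders under stated assumptions, not the released implementation or exact architecture.

```python
# Minimal sketch of entropy-steered feature fusion (PyTorch).
# Illustration only: the patch-entropy computation, the per-sensor
# normalization, and all names below are assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def patch_entropy(x, patch=16, bins=32):
    """Shannon entropy of intensities per spatial patch (soft histogram).

    x: (B, 1, H, W) sensor measurement normalized to [0, 1]; H and W are
    assumed divisible by `patch`. Returns (B, 1, H // patch, W // patch).
    """
    B, _, H, W = x.shape
    patches = F.unfold(x, kernel_size=patch, stride=patch)   # (B, patch*patch, L)
    patches = patches.transpose(1, 2)                         # (B, L, patch*patch)
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    dist = (patches.unsqueeze(-1) - centers).abs()            # (B, L, P, bins)
    hist = (dist < (1.0 / bins)).float().sum(dim=2) + 1e-6    # (B, L, bins)
    p = hist / hist.sum(dim=-1, keepdim=True)
    ent = -(p * p.log()).sum(dim=-1)                          # (B, L)
    return ent.view(B, 1, H // patch, W // patch)


def entropy_steered_fusion(features, measurements):
    """Scale each sensor's feature map by its measurement entropy, then concatenate.

    features:     list of (B, C, h, w) feature maps, one per sensor stream.
    measurements: list of (B, 1, H, W) raw measurements aligned to the features.
    """
    fused = []
    for feat, meas in zip(features, measurements):
        ent = patch_entropy(meas)
        ent = F.interpolate(ent, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        ent = ent / ent.amax(dim=(2, 3), keepdim=True).clamp_min(1e-6)
        fused.append(feat * ent)  # degraded (low-entropy) streams are attenuated
    return torch.cat(fused, dim=1)
```

The intent of such a gating is that a fog- or snow-degraded stream, whose measurement carries little information (low entropy), contributes less to the fused representation, without the network ever having seen that particular distortion during training.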


Qualitative Results



The proposed method outperforms existing fusion methods, including recent lidar-camera, lidar-only, and camera-only detectors, and generalizes to challenging unseen weather conditions. Qualitative results are shown above.




Additional Applications of the Proposed Dataset


Validating Simulation Models in Adverse Weather




Fog Forward Model for Lidar Point Clouds

The proposed dataset allows us to validate existing simulation models and to test their ability to generalize. Based on calibrated fog chamber measurements, we provide parameters for both Velodyne HDL-64 S3D and HDL-64 S2 sensors. Here, the calibrated fog forward model has been applied to the KITTI dataset and its Velodyne HDL-64 sensor. Please see [here].
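To give a sense of what such a forward model does, the sketch below simulates homogeneous fog on a lidar point cloud with a simple two-way Beer-Lambert attenuation. It is not the calibrated model from the paper: the Koschmieder visibility relation, the detection threshold, and all parameter values are simplified assumptions for illustration only.

```python
# Minimal, illustrative fog forward model for lidar point clouds (NumPy).
# NOT the paper's calibrated model; all constants below are assumptions.
import numpy as np


def simulate_fog(points, intensity, visibility_m=50.0, detection_threshold=0.05):
    """Attenuate lidar returns in homogeneous fog.

    points:       (N, 3) xyz coordinates in the sensor frame [m].
    intensity:    (N,) clear-weather return intensities in [0, 1].
    visibility_m: meteorological visibility; smaller means denser fog.
    Returns the surviving points and their attenuated intensities.
    """
    # Extinction coefficient from meteorological visibility (Koschmieder relation,
    # 5% contrast threshold): alpha = ln(20) / V.
    alpha = np.log(20.0) / visibility_m
    r = np.linalg.norm(points, axis=1)
    # Two-way Beer-Lambert attenuation of the returned power.
    attenuated = intensity * np.exp(-2.0 * alpha * r)
    # Returns below the detector threshold are lost in fog.
    keep = attenuated > detection_threshold
    return points[keep], attenuated[keep]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-80.0, 80.0, size=(10000, 3))
    inten = rng.uniform(0.1, 1.0, size=10000)
    fog_pts, fog_inten = simulate_fog(pts, inten, visibility_m=40.0)
    print(f"{len(fog_pts)} of {len(pts)} points survive the simulated fog")
```

The calibrated model in the paper additionally accounts for sensor-specific parameters and clutter from backscatter, which this sketch omits.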


Domain Adaptation



Input

Adapted

Input

Adapted

Input

Adapted

Adverse Weather Style Transfer

Examples of domain adaptation from clear winter captures to adverse weather scenes. The first two rows show a mapping from clear images to clear winter captures using CyCADA style transfer.


Fog and Snow Removal



Input

AOD-Net

Dehaze-Net

Pix2Pix-AOD

Pix2PixHD

Pix2Pix-CJ

Image Reconstruction in Winter Conditions

Additional image-to-image reconstruction results (top to bottom): measured input image, AOD-Net, DehazeNet, Pix2PixHD AOD, Pix2PixHD, and Pix2PixHD CJ in real adverse weather. The proposed dataset enables learning and assessing image-to-image mapping methods in adverse weather.
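For background, the single-image dehazing baselines above (AOD-Net, DehazeNet) are built around the standard atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). The NumPy sketch below shows that model in its idealized form, with a known depth map and hand-picked constants; it is an illustration of the image formation these methods invert, not an implementation of any of the compared networks.

```python
# Idealized atmospheric scattering model used by single-image dehazing methods.
# Constants and the assumption of a known depth map are illustrative only.
import numpy as np


def add_haze(clear_image, depth_m, beta=0.05, airlight=0.9):
    """Synthesize a hazy image: I = J * t + A * (1 - t), t = exp(-beta * d).

    clear_image: (H, W, 3) float array in [0, 1] (scene radiance J).
    depth_m:     (H, W) scene depth in meters.
    beta:        scattering coefficient; larger means denser haze/fog.
    airlight:    global atmospheric light A.
    """
    t = np.exp(-beta * depth_m)[..., None]          # transmission map
    return clear_image * t + airlight * (1.0 - t)   # hazy observation I


def dehaze(hazy_image, depth_m, beta=0.05, airlight=0.9):
    """Invert the scattering model when transmission is known (idealized case)."""
    t = np.clip(np.exp(-beta * depth_m)[..., None], 0.1, 1.0)
    return np.clip((hazy_image - airlight * (1.0 - t)) / t, 0.0, 1.0)
```

Learned methods estimate the transmission (or the clean image directly) from the hazy input alone, which is precisely what the adverse weather captures in the proposed dataset allow to be trained and evaluated.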