Seeing Around Street Corners: Non-Line-of-Sight
Detection and Tracking In-the-Wild Using Doppler Radar

Nicolas Scheiner
Florian Kraus
Fangyin Wei
Buu Phan
Fahim Mannan

Nils Appenrodt
Werner Ritter
Jürgen Dickmann
Klaus Dietmayer
Bernhard Sick
Felix Heide

CVPR 2020



We demonstrate that it is possible to recover moving objects outside the direct line of sight in large automotive environments from Doppler radar measurements. Using static building facades or parked vehicles as relay walls, we jointly classify, reconstruct, and track occluded objects. Left: Illustration of the non-line-of-sight Doppler image formation. Right: Joint detection and tracking results of the proposed model.
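
As a side note on this image formation (our notation, up to sign convention; not taken verbatim from the paper): for a static radar and relay wall, only the wall-to-object leg of the three-bounce path changes over time, so the indirect Doppler shift can be written as

\[
f_D \;=\; \frac{2}{\lambda}\,\frac{d}{dt}\,\lVert \mathbf{w}-\mathbf{x}(t)\rVert \;=\; \frac{2}{\lambda}\,\mathbf{v}\cdot\hat{\mathbf{n}}, \qquad \hat{\mathbf{n}} \;=\; \frac{\mathbf{x}-\mathbf{w}}{\lVert \mathbf{x}-\mathbf{w}\rVert},
\]

where \(\mathbf{w}\) is the reflection point on the relay wall, \(\mathbf{x}(t)\) and \(\mathbf{v}\) are the hidden object's position and velocity, and \(\lambda\) is the radar wavelength. The measured Doppler velocity is thus the hidden object's velocity projected onto the wall-to-object direction.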

Conventional sensor systems record information about directly visible objects, whereas occluded scene components are considered lost in the measurement process. Non-line-of-sight (NLOS) methods try to recover such hidden objects from their indirect reflections -- faint signal components, traditionally treated as measurement noise. Existing NLOS approaches struggle to record these low-signal components outside the lab, and do not scale to large-scale outdoor scenes and high-speed motion typical in automotive scenarios. In particular, optical NLOS capture is fundamentally limited by the quartic intensity falloff of diffuse indirect reflections. In this work, we depart from visible-wavelength approaches and demonstrate detection, classification, and tracking of hidden objects in large-scale dynamic environments using Doppler radars that can be manufactured at low cost in series production. To untangle noisy indirect and direct reflections, we learn from temporal sequences of Doppler velocity and position measurements, which we fuse in a joint NLOS detection and tracking network over time. We validate the approach on in-the-wild automotive scenes, including sequences with parked cars or house facades as relay surfaces, and demonstrate low-cost, real-time NLOS in dynamic automotive environments.
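
To unpack the quartic falloff with a back-of-the-envelope argument (our simplification): each diffuse leg between relay wall and hidden object contributes an inverse-square factor, so for an object at standoff distance \(r\) from the wall the returned optical intensity scales as

\[
I \;\propto\; \frac{1}{r^{2}}\cdot\frac{1}{r^{2}} \;=\; \frac{1}{r^{4}},
\]

i.e., doubling the standoff distance costs a factor of 16 in signal, which confines optical NLOS capture to room-scale scenes.

The temporal fusion idea can also be sketched as a toy model. The snippet below is a minimal PyTorch sketch assuming per-frame radar detections with 2D position and Doppler velocity as input; it illustrates the general idea only and is not the paper's architecture:

    import torch
    import torch.nn as nn

    class NLOSTrackNetSketch(nn.Module):
        """Toy model: encode per-frame radar points (x, y, doppler),
        max-pool to a frame feature, fuse frames over time with a GRU,
        and decode class logits plus a box/velocity regression."""
        def __init__(self, num_classes=3, hidden=128):
            super().__init__()
            self.point_enc = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, hidden))
            self.temporal = nn.GRU(hidden, hidden, batch_first=True)
            self.cls_head = nn.Linear(hidden, num_classes)
            self.box_head = nn.Linear(hidden, 7)  # x, y, w, l, yaw, vx, vy

        def forward(self, points):
            # points: (batch, time, num_points, 3)
            feat = self.point_enc(points).max(dim=2).values  # (B, T, H)
            seq, _ = self.temporal(feat)                     # (B, T, H)
            return self.cls_head(seq), self.box_head(seq)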



Paper


Nicolas Scheiner, Florian Kraus, Fangyin Wei, Buu Phan, Fahim Mannan, Nils Appenrodt,
Werner Ritter, Jürgen Dickmann, Klaus Dietmayer, Bernhard Sick, Felix Heide

Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar

CVPR 2020

[Paper]
[Supplement]
[Bibtex]
[Code]

Please address correspondence to Felix Heide.



Data Set


Validation and Training Dataset Acquisition and Statistics.

(a) Prototype vehicle with measurement setup. To acquire training data in an automated fashion, we use GNSS and IMU for full pose estimation of the ego-vehicle and the hidden vulnerable road users.
(b) Hidden-object and observer distances to the relay wall span a wide range.
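
The automated labeling reduces to expressing GNSS-referenced poses in the ego-vehicle frame. A minimal planar (SE(2)) sketch with hypothetical names, ignoring timing and sensor extrinsics:

    import numpy as np

    def world_to_ego(p_world, ego_xy, ego_yaw):
        """Express a world-frame 2D point in the ego-vehicle frame."""
        c, s = np.cos(ego_yaw), np.sin(ego_yaw)
        R = np.array([[c, -s], [s, c]])  # rotation: ego frame -> world frame
        return R.T @ (np.asarray(p_world) - np.asarray(ego_xy))

    # Example: a hidden pedestrian at (12, 5) in world coordinates, with the
    # ego-vehicle at (10, 0) heading 90 degrees, lands at (5, -2) in ego frame.
    print(world_to_ego([12.0, 5.0], [10.0, 0.0], np.pi / 2))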



The following sample camera images include the (later) hidden object. They show the wide range of relay wall types appearing in this dataset.






[Fifteen sample scenes, labeled by relay wall type: Single Car, Van, Three Cars (two scenes), Guard Rail, Mobile Office, Utility Access, Garage Doors, Curbstone, Marble Wall, House Corner, Garden Wall, House Facade (two scenes), Building Exit.]

NLOS training and evaluation data set for large outdoor scenarios.

We capture a total of 100 sequences of in-the-wild automotive scenes across 21 different scenarios. We split the dataset into non-overlapping training and validation sets; the validation set consists of four scenes with 20 sequences and 3063 frames.
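
Such a non-overlapping split is typically enforced at the scene level, so that no validation scene contributes sequences to training; a hypothetical helper:

    def split_by_scene(sequences, val_scenes):
        """Scene-level split: every sequence of a held-out scene goes to
        validation, avoiding leakage between training and validation."""
        train = [s for s in sequences if s["scene"] not in val_scenes]
        val = [s for s in sequences if s["scene"] in val_scenes]
        return train, val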



Selected Results




Joint detection and tracking results.

Joint detection and tracking results for automotive scenes, with a different relay wall type and object class in each row. The first column shows the observer vehicle's front-facing camera view. The next three columns plot bird's-eye-view (BEV) radar and lidar point clouds together with ground-truth and predicted bounding boxes. The NLOS velocity is plotted as a line segment from the predicted box center: red and green correspond to motion towards and away from the vehicle, respectively.
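
This velocity visualization convention can be reproduced in a few lines of matplotlib; a minimal sketch (our implementation, assuming the ego-vehicle sits at the BEV origin):

    import numpy as np

    def draw_velocity_segment(ax, center, vel, scale=1.0):
        """Draw the predicted velocity as a segment from the box center:
        red if the object approaches the ego-vehicle (negative radial
        velocity), green if it moves away."""
        radial = np.dot(vel, center) / (np.linalg.norm(center) + 1e-9)
        color = "red" if radial < 0 else "green"
        ax.plot([center[0], center[0] + scale * vel[0]],
                [center[1], center[1] + scale * vel[1]], color=color)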





Tracking trajectories for both training and testing data sets.

Here we show nine scenes in total; the top-middle scene and the last three scenes are from the testing data set. For each scene, the first row shows the trajectory and the second row shows the front-facing vehicle camera view. The scenes cover a variety of wall types, trajectories, and viewpoints of the observing vehicle. The predicted trajectories consist of segments, each corresponding to a different tracking ID and visualized in a different color.
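
The per-ID coloring can be sketched similarly (hypothetical data layout: a dict mapping tracking ID to an Nx2 trajectory array):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_tracks(ax, tracks):
        """Plot each track's (x, y) trajectory in its own color, keyed
        by tracking ID so segments of the same track match in color."""
        cmap = plt.get_cmap("tab10")
        for track_id, xy in tracks.items():
            xy = np.asarray(xy)
            ax.plot(xy[:, 0], xy[:, 1], color=cmap(track_id % 10),
                    label=f"ID {track_id}")
        ax.legend()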