DeepSignals: Predicting Intent of Drivers Through Visual Attributes

    Abstract

    Detecting the intention of drivers is an essential task in self-driving, necessary to anticipate sudden events like lane changes and stops. Turn signals and emergency flashers communicate such intentions, providing seconds of potentially critical reaction time. In this paper, we propose to detect these signals in video sequences by using a deep neural network that reasons about both spatial and temporal information. Our experiments on more than a million frames show high per-frame accuracy in very challenging scenarios.
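
    The abstract describes a network that reasons about spatial and temporal information jointly, which is the natural fit for blinking turn signals: a single frame cannot distinguish a signal that is off from one caught mid-blink. Below is a minimal sketch of such a spatio-temporal classifier over a sequence of per-vehicle image crops (an assumed input format). The ResNet-18 backbone, LSTM, layer sizes, and four-state class set (off, left, right, flashers) are illustrative assumptions, not the paper's exact architecture.

    # Minimal sketch: per-frame CNN features fed to an LSTM so the model can
    # reason about blinking over time. Backbone, sizes, and class set are
    # assumptions for illustration, not the authors' published architecture.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class SignalClassifier(nn.Module):
        def __init__(self, num_classes=4, hidden_size=256):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()              # keep the 512-d features
            self.backbone = backbone                 # spatial reasoning
            self.lstm = nn.LSTM(512, hidden_size, batch_first=True)  # temporal
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, clips):                    # clips: (B, T, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.backbone(clips.flatten(0, 1))   # (B*T, 512)
            hidden, _ = self.lstm(feats.view(b, t, -1))  # (B, T, hidden)
            return self.head(hidden)                     # per-frame logits

    # Usage: per-frame predictions over a 10-frame sequence of vehicle crops.
    model = SignalClassifier()
    logits = model(torch.randn(2, 10, 3, 224, 224))      # -> (2, 10, 4)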

    Authors

    Davi Frossard, Eric Kee, Raquel Urtasun

    Conference

    ICRA 2019

    Full Paper

    ‘DeepSignals: Predicting Intent of Drivers Through Visual Attributes’ (PDF)

    Uber ATG

    Davi Frossard
    I'm a research scientist at Uber ATG Toronto, developing cutting-edge computer vision technology for self-driving vehicles. Concurrently, I'm a PhD student at the University of Toronto, doing research on computer vision under the supervision of Prof. Raquel Urtasun. My research interests lie at the intersection of classical computational geometry, computer vision, and machine learning. Previously, I completed my Master's at the University of Toronto (also with Prof. Urtasun) with a thesis on extracting visual vehicle attributes from a self-driving platform, and my Bachelor's degree in Computer Engineering at the Federal University of Espírito Santo (UFES), writing my thesis on end-to-end learning of multiple object tracking.
    Eric Kee
    Eric received his Ph.D. with advisor Hany Farid at Dartmouth College, where he studied physics-based vision and its applications to image forensics. As a postdoc, Eric studied computational imaging under advisor Shree Nayar at Columbia University. Following Columbia, Eric joined the Facebook computational imaging group, and then Avametric, a graphics and simulation startup with Prof. James O'Brien of U.C. Berkeley, where he developed methods for fitting deformable models of human bodies for virtual clothing try-on. Eric joined the Uber ATG perception group in 2016, then led by Prof. Drew Bagnell of CMU, before the first public launch of self-driving vehicles in Pittsburgh. In his work at ATG, Eric (and coauthors) developed and deployed the first deep neural network architecture for object detection to run on Uber's self-driving fleet. Eric is currently a member of the Uber ATG R&D group, led by Raquel Urtasun. His research interests include self-driving, machine learning, and biological vision.
    Raquel Urtasun
    Raquel Urtasun is the Chief Scientist for Uber ATG and the Head of Uber ATG Toronto. She is also a Professor at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award, and two Best Paper Runner-Up Prizes, awarded at CVPR in 2013 and 2017. She was also named Chatelaine's 2018 Woman of the Year and one of Toronto's top influencers of 2018 by Adweek magazine.