aUToLights Dataset Collection Setup at the University of Toronto Institute for Aerospace Studies.

aUToLights is a traffic light dataset for training, validating, and evaluating traffic light detection and tracking on a multi-camera setup. The data was collected by members of aUToronto using the multi-modal perception system deployed on our new self-driving car, Artemis.

The primary sensors used to collect this dataset are a Novatel PwrPak7 GPS/IMU with TerraStar corrections (< 10 cm reported position error) and two front-facing 7.1 MP cameras from Lucid Vision Labs (one with a 16 mm f/1.8 lens for long range, the other with an 8 mm f/1.8 lens for wide angle). The ground-truth annotations are human-labelled 2D bounding boxes, created with the aid of the traffic lights' state sequences from the light controller.

This dataset contains three types of traffic lights that conform with Michigan Department of Transportation standards: 3-light, 4-light with a protected left, and 5-light doghouse with a protected left.
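
One hedged illustration of how these three configurations might be encoded when working with the annotations; the class names and bulb layouts below are our own assumptions, not a published aUToLights schema:

```python
from enum import Enum

class LightConfig(Enum):
    """Hypothetical encoding of the three MDOT-style signal types in
    aUToLights. Bulb names and ordering (top to bottom) are illustrative
    assumptions, not the dataset's actual label schema."""
    THREE_LIGHT = ("red", "yellow", "green")
    FOUR_LIGHT_PROTECTED_LEFT = ("red", "yellow", "green", "green_left_arrow")
    FIVE_LIGHT_DOGHOUSE = ("red", "yellow", "green",
                           "yellow_left_arrow", "green_left_arrow")

    @property
    def bulbs(self):
        return self.value

# Example: enumerate the bulbs a detector could report for a doghouse light.
print(LightConfig.FIVE_LIGHT_DOGHOUSE.bulbs)
```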

Zeus is aUToronto's self-driving car, depicted here at the University of Toronto Institute for Aerospace Studies. This dataset contains GPS/IMU, 3D LIDAR, and monocular camera data.

UofTPed50 is a dataset for benchmarking the positional accuracy of 3D pedestrian detection. We provide accurate positioning information by attaching a GPS system to the pedestrian. The dataset consists of 50 sequences with varying distances, pedestrian trajectories, and ego-vehicle trajectories; each sequence contains one pedestrian. The scenarios are broken into four groups:

  1. 34 sequences of straight-line pedestrian trajectories at seven distances from a stationary ego-vehicle. (Seq. 1-34)

  2. 3 sequences tracking straight lateral pedestrian trajectories with respect to a dynamic ego-vehicle. (Seq. 35-37)

  3. 8 sequences tracking straight longitudinal pedestrian trajectories, 4 with a static ego-vehicle and 4 with a dynamic ego-vehicle.

  4. 6 sequences tracking complex trajectories (curves, zig-zag motion) with respect to a stationary ego-vehicle.

Data was collected with our self-driving car, Zeus, illustrated above. Sensor data includes a Velodyne HDL-64 3D LIDAR, a 5 MP monocular camera, and a Novatel PwrPak7 GPS/IMU with TerraStar corrections (< 10 cm reported position error). Position data for the pedestrian was collected by attaching the tethered antenna of a separate Novatel PwrPak7 GPS receiver, also with TerraStar corrections, to the pedestrian.
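
As a minimal sketch of how the pedestrian's GPS track could be used to score a detector's positional accuracy: project both the detection and the ground-truth fix into a local metric frame and measure the planar distance. The function names and the equirectangular approximation below are our assumptions; the dataset's official evaluation may differ.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def gps_to_local_xy(lat, lon, ref_lat, ref_lon):
    """Equirectangular projection of a GPS fix (degrees) into a metric
    frame centred at (ref_lat, ref_lon). Adequate over the short ranges
    in UofTPed50; a full ENU conversion could be substituted."""
    x = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return x, y

def position_error_m(det_lat, det_lon, gt_lat, gt_lon):
    """Planar distance (metres) between a detected pedestrian position
    and the tethered-GPS ground truth."""
    dx, dy = gps_to_local_xy(det_lat, det_lon, gt_lat, gt_lon)
    return math.hypot(dx, dy)
```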

To synchronize data between the ego-vehicle and the pedestrian, we use UTC timestamps.
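
A minimal sketch of that association step, assuming each stream is a sorted list of UTC timestamps in seconds; the nearest-neighbour matching and the 50 ms tolerance are our own illustrative choices:

```python
import bisect

def match_nearest(ego_stamps, ped_stamps, tol_s=0.05):
    """For each ego-vehicle frame timestamp, return the index of the
    pedestrian-GPS sample closest in UTC time, or None if the gap
    exceeds tol_s. Both lists must be sorted in ascending order."""
    if not ped_stamps:
        return [None] * len(ego_stamps)
    matches = []
    for t in ego_stamps:
        i = bisect.bisect_left(ped_stamps, t)
        # Candidates: the samples immediately before and after t.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(ped_stamps)),
            key=lambda j: abs(ped_stamps[j] - t),
        )
        matches.append(best if abs(ped_stamps[best] - t) <= tol_s else None)
    return matches
```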

You can download the current version of this dataset here.

If this dataset was helpful in your work, please consider citing our CRV paper (listed under Publications below).

Any questions or comments regarding the dataset can be sent to keenan (dot) burnett (at) autodrive.utoronto.ca.

Publications

Abstract

The University of Toronto is one of eight teams competing in the SAE AutoDrive Challenge - a competition to develop a self-driving car by 2020. After placing first at the Year 1 challenge, we are headed to MCity in June 2019 for the second challenge. There, we will interact with pedestrians, cyclists, and cars. For safe operation, it is critical to have an accurate estimate of the position of all objects surrounding the vehicle. The contributions of this work are twofold: First, we present a new object detection and tracking dataset (UofTPed50), which uses GPS to ground truth the position and velocity of a pedestrian. To our knowledge, a dataset of this type for pedestrians has not been shown in the literature before. Second, we present a lightweight object detection and tracking system (aUToTrack) that uses vision, LIDAR, and GPS/IMU positioning to achieve state-of-the-art performance on the KITTI Object Tracking benchmark. We show that aUToTrack accurately estimates the position and velocity of pedestrians, in real-time, using CPUs only. aUToTrack has been tested in closed-loop experiments on a real self-driving car, and we demonstrate its performance on our dataset.

Authors

Keenan Burnett; Sepehr Samavi; Steven Waslander; Timothy Barfoot; Angela Schoellig

K. Burnett, S. Samavi, S. Waslander, T. Barfoot and A. Schoellig, "aUToTrack: A Lightweight Object Detection and Tracking System for the SAE AutoDrive Challenge," 2019 16th Conference on Computer and Robot Vision (CRV), 2019, pp. 209-216, doi: 10.1109/CRV.2019.00036.


Abstract

The SAE AutoDrive Challenge is a three-year competition to develop a Level 4 autonomous vehicle by 2020. The first set of challenges were held in April of 2018 in Yuma, Arizona. Our team (aUToronto/Zeus) placed first. In this paper, we describe Zeus' complete system architecture and specialized algorithms that enabled us to win. We show that it is possible to develop a vehicle with basic autonomy features in just six months relying on simple, robust algorithms. We do not make use of a prior map. Instead, we have developed a multi-sensor visual localization solution. All the algorithms in the paper run in real-time using CPUs only. We also highlight the closed-loop performance of the system in detail in several experiments.

Authors

Keenan Burnett; Andreas Schimpe; Sepehr Samavi; Mona Gridseth; Chengzhi Winston Liu; Qiyang Li; Zachary Kroeze; Angela P. Schoellig

K. Burnett et al., "Building a Winning Self-Driving Car in Six Months," 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 9583-9589, doi: 10.1109/ICRA.2019.8794029.


Abstract

The SAE AutoDrive Challenge is a 3-year collegiate competition to develop a self-driving car by 2020. The second year of the competition was held in June 2019 at MCity, a mock town built for self-driving car testing at the University of Michigan. Teams were required to autonomously navigate a series of intersections while handling pedestrians, traffic lights, and traffic signs. Zeus is aUToronto's winning entry in the AutoDrive Challenge. This article describes the system design and development of Zeus as well as many of the lessons learned along the way. This includes details on the team's organizational structure, sensor suite, software components, and performance at the Year 2 competition. With a team of mostly undergraduates and minimal resources, aUToronto has made progress toward a functioning self-driving vehicle, in just 2 years. This article may prove valuable to researchers looking to develop their own self-driving platform.

Authors

Keenan Burnett; Jingxing Qian; Xintong Du; Linqiao Liu; David J. Yoon; Tianchang Shen; Susan Sun; Sepehr Samavi; Michael J. Sorocky; Mollie Bianchi; Kaicheng Zhang; Arkady Arkhangorodsky; Quinlan Sykora; Shichen Lu; Yizhou Huang; Angela P. Schoellig; Timothy D. Barfoot

K. Burnett, J. Qian, X. Du, et al., "Zeus: A system description of the two-time winner of the collegiate SAE autodrive competition," Journal of Field Robotics, vol. 38, pp. 139–166, 2021, doi: 10.1002/rob.21958.