Robotics research has a reproducibility problem, owing in part to robots’ myriad interacting components. These components tend to be complex, only partially observable, and trained with AI techniques whose performance varies greatly across environments. To address some of the challenges specific to autonomous driving, researchers at ETH Zurich, the Toyota Technological Institute, Mila in Montreal, and NuTonomy developed what they call the Decentralized Urban Collaborative Benchmarking Network (DuckieNet), a setup built on the open source Duckietown platform. DuckieNet provides a framework for developing, testing, and deploying both perception and navigation algorithms, and the researchers claim it is highly scalable yet inexpensive to construct.
The Duckietown project, which was conceived by a 2016 graduate class at MIT, employs cheap wheeled robots called Duckiebots built almost entirely from off-the-shelf parts. The only onboard sensor is a forward-facing camera; a Raspberry Pi handles computation, and a pair of DC motors powers the wheels. Duckietowns consist of roads, constructed from exercise mats and tape, and the signage the robots use to navigate. Traffic lights share the Duckiebots’ hardware (minus the wheels) and are capable of sensing, computing, and actuating through their LEDs.
DuckieNet builds on Duckietown by adding specialized components to the platform. A challenges server stores machine learning algorithms, benchmarks, and results; it computes leaderboards and dispatches jobs to a set of evaluation machines for execution. The evaluation machines, which can be local or cloud-based, run autonomous driving simulations. Physical labs with DuckieNet installations carry out real-world experiments, and a localization network of “watchtowers” (low-cost structures that use the same sensing and computation as the Duckiebots) tracks tags affixed to the Duckiebots’ bodies.
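The flow described above — submissions queued on a central server, claimed by evaluators, and rolled up into leaderboards — can be sketched as follows. This is purely illustrative; the class and method names are ours, not DuckieNet’s actual implementation.

```python
# Illustrative sketch of the dispatch pattern described above; all names
# are hypothetical and do not reflect DuckieNet's real codebase.
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class Job:
    priority: int
    submission_id: str = field(compare=False)
    benchmark: str = field(compare=False)


class ChallengesServer:
    """Queues submitted jobs and hands them out to evaluation machines."""

    def __init__(self):
        self.queue = []    # priority queue of pending jobs
        self.results = {}  # submission_id -> dict of metric scores

    def submit(self, submission_id, benchmark, priority=0):
        heapq.heappush(self.queue, Job(priority, submission_id, benchmark))

    def dispatch(self):
        """Called by an evaluator (local or cloud) to claim the next job."""
        return heapq.heappop(self.queue) if self.queue else None

    def report(self, submission_id, scores):
        """An evaluator posts its metric scores back to the server."""
        self.results[submission_id] = scores

    def leaderboard(self, metric):
        """Rank submissions by a metric where lower is better."""
        return sorted(self.results.items(), key=lambda kv: kv[1][metric])


server = ChallengesServer()
server.submit("team-a", "lane-following")
job = server.dispatch()  # an evaluation machine claims the job
server.report(job.submission_id, {"mean_position_deviation": 0.03})
print(server.leaderboard("mean_position_deviation"))
```

The priority queue lets the server favor, say, competition-deadline jobs over routine re-evaluations; the real system additionally routes jobs to simulators or to physical labs.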
DuckieNet is in some ways akin to AWS DeepRacer, Amazon’s service that supplies developers with a cloud-based simulator for developing autonomous driving models and deploying them to a model car. But DuckieNet users can define benchmarks like mean position deviation (the mean lateral displacement of a Duckiebot from the center of a lane) and mean orientation deviation (the mean angular deviation of the robot’s heading from the lane direction) in Docker containers submitted to the challenges server. (Algorithms are likewise submitted as Docker containers for evaluation.) Moreover, except for tasks like resetting experiments and recharging the Duckiebots, the platform is entirely autonomous.
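Both metrics reduce to simple averages over a logged pose trace. A minimal sketch, assuming a straight lane and a trace of lateral offsets and headings sampled at regular intervals (the function names and sampling assumptions are ours, not the platform’s API):

```python
import math


def mean_position_deviation(lateral_offsets):
    """Mean absolute lateral displacement (meters) from the lane center."""
    return sum(abs(d) for d in lateral_offsets) / len(lateral_offsets)


def mean_orientation_deviation(headings, lane_heading=0.0):
    """Mean absolute angular deviation (radians) from the lane direction."""
    def wrap(a):
        # Wrap an angle difference into (-pi, pi] so that, e.g.,
        # a heading of 350 degrees counts as -10, not +350.
        return math.atan2(math.sin(a), math.cos(a))
    return sum(abs(wrap(h - lane_heading)) for h in headings) / len(headings)


# Hypothetical pose trace sampled as a robot drives down a straight lane
offsets = [0.02, -0.05, 0.01, 0.03]   # meters from the centerline
headings = [0.10, -0.05, 0.00, 0.15]  # radians relative to the lane

print(mean_position_deviation(offsets))     # -> 0.0275
print(mean_orientation_deviation(headings)) # -> 0.075
```

Lower is better on both metrics, which is what makes them suitable for the platform’s leaderboards.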
The researchers say one of DuckieNet’s key applications is hosting research competitions. Indeed, DuckieNet has been used since early 2019 in the AI Driving Olympics, a biannual competition that benchmarks the state of the art in autonomous driving. DuckieNet generates visualizations of performance metrics and leaderboards while providing access to the underlying raw data, including open source baselines and documentation.
“Our contention is that there is a need for stronger efforts towards reproducible research for robotics, and that to achieve this we need to consider the evaluation in equal terms as the algorithms themselves,” the researchers wrote in a paper describing their work. “In this fashion, we can obtain reproducibility by design through the research and development processes. Achieving this on a large scale will contribute to a more systemic evaluation of robotics research and, in turn, increase the progress of development.”