AutoCast: Scalable Infrastructure-less Cooperative Perception for Distributed Collaborative Driving

Hang Qiu    Pohan Huang    Namo Asavisanu    Xiaochen Liu
    Konstantinos Psounis    Ramesh Govindan   

University of Southern California   

ACM MobiSys 2022

Paper | Code | Demo | Bibtex

AutoCast is an end-to-end autonomous driving system that enables scalable, infrastructure-less cooperative perception using direct vehicle-to-vehicle (V2V) communication. Within limited V2V bandwidth, AutoCast mitigates safety hazards by coordinating autonomous vehicles to cast objects in occluded or invisible areas into their peer recipients' perspective. It carefully determines which objects to share based on the positional relationships between traffic participants and the time evolution of their trajectories, and it coordinates vehicles and schedules transmissions in a scalable, distributed fashion.
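The sharing decision above can be sketched as a small prioritization routine. This is a hypothetical illustration, not the paper's exact algorithm: it ranks objects a sender sees but a receiver cannot, by proximity to the receiver's planned trajectory, and greedily packs the most safety-critical ones into a V2V byte budget. All names (`relevance`, `schedule_shares`) and the scoring heuristic are assumptions for illustration.

```python
import math

def relevance(obj_xy, receiver_traj):
    """Hypothetical relevance score: objects closer to any waypoint of the
    receiver's planned trajectory are more safety-critical to share."""
    d = min(math.dist(obj_xy, wp) for wp in receiver_traj)
    return 1.0 / (1.0 + d)

def schedule_shares(occluded_objs, receiver_traj, budget_bytes):
    """Greedily pick the most relevant occluded objects that fit the
    limited V2V bandwidth budget."""
    ranked = sorted(occluded_objs,
                    key=lambda o: relevance(o["xy"], receiver_traj),
                    reverse=True)
    chosen, used = [], 0
    for obj in ranked:
        if used + obj["bytes"] <= budget_bytes:
            chosen.append(obj["id"])
            used += obj["bytes"]
    return chosen
```

Under this sketch, an occluded vehicle sitting on the receiver's future path would be shared before a distant object, even if both fit in the budget.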


AutoCast Overview

AutoCast’s end-to-end architecture contains a control plane that exchanges beacons and makes transmission scheduling decisions, and a data plane that processes, transmits, and uses point clouds to make trajectory planning decisions. This decoupling of data and control ensures that bandwidth-intensive point cloud data is transmitted directly between vehicles with minimal delay for real-time decisions, while the control plane is able to make near-optimal scheduling decisions.
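The control/data decoupling can be sketched as follows, under assumed interfaces: the control plane sees only small beacons (object metadata and a utility value) and emits a slot schedule; the data plane then moves the bulky point-cloud segments in the assigned slots. The function names, the beacon fields, and the greedy slot policy are all illustrative, not the paper's scheduler.

```python
def make_schedule(beacons, num_slots):
    """Control plane (sketch): from lightweight beacons, greedily assign
    the highest-utility object transmissions to the available slots."""
    ranked = sorted(beacons, key=lambda b: b["utility"], reverse=True)
    return {slot: b["obj_id"] for slot, b in enumerate(ranked[:num_slots])}

def data_plane_send(schedule, point_clouds):
    """Data plane (sketch): transmit each scheduled object's point-cloud
    segment in its assigned slot, in slot order."""
    return [(slot, point_clouds[obj_id])
            for slot, obj_id in sorted(schedule.items())]
```

The point is the division of labor: scheduling operates on metadata that is orders of magnitude smaller than the point clouds themselves, so coordination stays cheap even as the shared data grows.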

Evaluation Scenarios

End-to-end evaluation scenarios: overtaking, unprotected left turn, and red-light violation. A planner on the ego vehicle (gray, at the bottom of the bird's-eye view) finds a trajectory to navigate through each scenario without collision. The gradient trajectory color (green to blue) indicates the temporal horizon (closer to farther). The LiDAR views show the perception results using either the non-sharing baseline (Single) or AutoCast. The red points in the LiDAR view are shared points, while the white ones are from the ego vehicle itself. In each scenario, a passive collider vehicle (red, without communication capability), occluded by a truck (orange) and thus invisible from the ego's Single view, may cause a hazardous situation. AutoCast makes the ego vehicle aware of the collider so the ego can react early and avoid a collision.

Qualitative Results

For each evaluation scenario, we show the results of AutoCast at high traffic density, along with side-by-side comparisons against single-vehicle perception. Without any changes to the planner, AutoCast enables it to avoid crashes and make better-informed decisions, simply by providing greater visibility through cooperative perception. Even under high traffic density, AutoCast operates at over 10 Hz, prioritizing the most safety-critical objects for sharing.

Citation

@inproceedings{autocast,
  title     = {AutoCast: Scalable Infrastructure-less Cooperative Perception for Distributed Collaborative Driving},
  author    = {Hang Qiu and Pohan Huang and Namo Asavisanu and Xiaochen Liu and Konstantinos Psounis and Ramesh Govindan},
  booktitle = {Proceedings of the 20th Annual International Conference on Mobile Systems, Applications, and Services},
  series    = {MobiSys '22},
  year      = {2022},
}