Low-Latency Edge Video Analytics for On-Road Perception of Autonomous Ground Vehicles

Jie Lin, Peng Yang, Ning Zhang, Feng Lyu, Xianfu Chen, Li Yu

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

To improve the transportation efficiency of advanced manufacturing, cameras have been extensively deployed to enhance the on-road perception of autonomous ground vehicles in smart industrial parks. Considering the informative yet voluminous content generated by these cameras, we employ vehicle-to-everything links to deliver the captured video frames to neighboring vehicles, road-side units, or base stations, in order to serve vehicle-control-related video queries. To help vehicles obtain low-latency and high-accuracy on-road information for autonomous driving, an optimization problem is formulated that accounts for the impact of vehicle mobility and the diverse resource demands of different video queries. A two-stage algorithm is then proposed that determines the frame rate of the video cameras and, based on matching theory, the destination of the associated video frames. Extensive simulation results show that this approach can improve perception accuracy by up to 18% and reduce response delay by 13.2%.
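The abstract describes a matching-theory-based assignment of cameras' video frames to destinations (neighboring vehicles, road-side units, or base stations). The paper's actual algorithm is not given here; the following is only a minimal sketch of one standard matching approach, capacitated deferred acceptance (hospitals/residents style), under assumed preference lists. All names, preferences, and capacities are illustrative, not taken from the paper.

```python
def match_cameras_to_destinations(cam_prefs, dest_prefs, capacity):
    """Capacitated deferred-acceptance matching (illustrative sketch).

    cam_prefs:  {camera: [destinations, most preferred first]}
                (e.g., ordered by expected delivery delay)
    dest_prefs: {destination: [cameras, most preferred first]}
                (e.g., ordered by utility of the camera's frames)
    capacity:   {destination: max number of cameras it can serve}
    Returns a {camera: destination} dict for the matched cameras.
    """
    # rank[d][c] = position of camera c in destination d's preference list
    rank = {d: {c: i for i, c in enumerate(p)} for d, p in dest_prefs.items()}
    next_choice = {c: 0 for c in cam_prefs}   # next destination index to try
    accepted = {d: [] for d in dest_prefs}    # tentative acceptances per dest
    free = list(cam_prefs)                    # cameras still proposing

    while free:
        c = free.pop()
        if next_choice[c] >= len(cam_prefs[c]):
            continue                          # camera exhausted its list
        d = cam_prefs[c][next_choice[c]]
        next_choice[c] += 1
        accepted[d].append(c)
        accepted[d].sort(key=lambda x: rank[d][x])  # best proposers first
        if len(accepted[d]) > capacity[d]:
            free.append(accepted[d].pop())    # reject the least preferred

    return {c: d for d, cams in accepted.items() for c in cams}


# Hypothetical example: three cameras compete for a road-side unit (RSU)
# with capacity one; the overflow falls back to a base station (BS).
cam_prefs = {"cam1": ["rsu", "bs"], "cam2": ["rsu", "bs"], "cam3": ["rsu"]}
dest_prefs = {"rsu": ["cam3", "cam1", "cam2"], "bs": ["cam2", "cam1"]}
capacity = {"rsu": 1, "bs": 2}
matching = match_cameras_to_destinations(cam_prefs, dest_prefs, capacity)
# The RSU keeps its most-preferred camera; the others settle for the BS.
```

Deferred acceptance yields a stable matching, a natural fit here: no camera-destination pair would both prefer to deviate from the assignment, which is why matching theory is a common tool for such distributed resource-allocation problems.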

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Industrial Informatics
Publication status: Accepted/In press - 2022
MoE publication type: A1 Journal article-refereed

Keywords

  • Autonomous ground vehicle
  • Cameras
  • Collaboration
  • edge computing
  • Low latency communication
  • Roads
  • Throughput
  • Vehicle-to-everything
  • video analytics
  • Visual analytics
