Low-Latency Edge Video Analytics for On-Road Perception of Autonomous Ground Vehicles

Jie Lin, Peng Yang, Ning Zhang, Feng Lyu, Xianfu Chen, Li Yu

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)


Cameras have been extensively deployed in smart industrial parks to enhance the on-road perception of autonomous ground vehicles and thereby improve the transportation efficiency of advanced manufacturing. Given the informative yet voluminous video content these cameras generate, we employ vehicle-to-everything links to deliver the captured video frames to neighboring vehicles, road-side units, or base stations in response to vehicle-control-related video queries. To help vehicles obtain low-latency, high-accuracy on-road information for autonomous driving, an optimization problem is formulated that accounts for vehicle mobility and the diverse resource demands of different video queries. A two-stage algorithm is then proposed that determines the frame rate of each video camera and, based on matching theory, the destination of the associated video frames. Extensive simulation results show that this approach improves accuracy by up to 18% and reduces response delay by 13.2%.
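The second stage described above assigns video frames to edge destinations via matching theory. As a minimal illustrative sketch (not the paper's actual formulation), a one-to-many deferred-acceptance matching can pair queries with the destination offering the lowest estimated response delay; all names, the toy delay model, and the capacity limit below are assumptions:

```python
def estimate_delay(query, node):
    """Toy response-delay model: transmission time plus processing time."""
    tx = query["frame_bits"] / node["link_rate_bps"]
    proc = query["compute_cycles"] / node["cpu_hz"]
    return tx + proc

def match_queries_to_nodes(queries, nodes, capacity):
    """One-to-many deferred-acceptance matching: each query proposes to its
    lowest-delay node; a node keeps at most `capacity` queries, evicting the
    one with the largest estimated delay when oversubscribed."""
    prefs = {q["id"]: sorted(nodes, key=lambda n: estimate_delay(q, n))
             for q in queries}
    next_choice = {q["id"]: 0 for q in queries}   # next node each query tries
    assigned = {n["id"]: [] for n in nodes}
    by_id = {q["id"]: q for q in queries}
    unmatched = list(queries)

    while unmatched:
        q = unmatched.pop()
        idx = next_choice[q["id"]]
        if idx >= len(prefs[q["id"]]):
            continue  # query has been rejected by every node
        node = prefs[q["id"]][idx]
        next_choice[q["id"]] += 1
        assigned[node["id"]].append(q["id"])
        if len(assigned[node["id"]]) > capacity:
            # Evict the query this node serves worst (largest delay).
            worst = max(assigned[node["id"]],
                        key=lambda qid: estimate_delay(by_id[qid], node))
            assigned[node["id"]].remove(worst)
            unmatched.append(by_id[worst])
    return assigned

queries = [
    {"id": "q1", "frame_bits": 8e6, "compute_cycles": 1e9},
    {"id": "q2", "frame_bits": 4e6, "compute_cycles": 2e9},
    {"id": "q3", "frame_bits": 2e6, "compute_cycles": 5e8},
]
nodes = [
    {"id": "rsu", "link_rate_bps": 100e6, "cpu_hz": 3e9},
    {"id": "bs",  "link_rate_bps": 50e6,  "cpu_hz": 10e9},
]
result = match_queries_to_nodes(queries, nodes, capacity=2)
```

The deferred-acceptance structure guarantees termination (each query proposes to each node at most once) and yields a stable assignment under the chosen delay preferences.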

Original language: English
Pages (from-to): 1512-1523
Number of pages: 12
Journal: IEEE Transactions on Industrial Informatics
Issue number: 2
Publication status: Published - 1 Feb 2023
MoE publication type: A1 Journal article-refereed


Keywords

  • Autonomous ground vehicle
  • Cameras
  • Collaboration
  • Edge computing
  • Low latency communication
  • Roads
  • Throughput
  • Vehicle-to-everything
  • Video analytics
  • Visual analytics

