TY - JOUR
T1 - DRLE: Decentralized Reinforcement Learning at the Edge for Traffic Light Control in the IoV
AU - Zhou, Pengyuan
AU - Chen, Xianfu
AU - Liu, Zhi
AU - Braud, Tristan
AU - Hui, Pan
AU - Kangasharju, Jussi
N1 - Funding Information:
Manuscript received April 15, 2020; revised August 31, 2020; accepted October 30, 2020. Date of publication December 1, 2020; date of current version March 31, 2021. This work was supported in part by the Academy of Finland in the 5GEAR Project, FIT Project, and AIDA Project, in part by the Hong Kong Research Grants Council under Project 16214817, in part by the FL4IoT Project from Huawei, in part by the JSPS KAKENHI under Grant 19H04092 and Grant 20H04174, and in part by the ROIS NII Open Collaborative Research under Grant 2020(20FA02). The Associate Editor for this article was C. Wu. (Corresponding author: Zhi Liu.) Pengyuan Zhou and Jussi Kangasharju are with the Department of Computer Science, University of Helsinki, 00560 Helsinki, Finland (e-mail: pengyuan.zhou@helsinki.fi; jussi.kangasharju@helsinki.fi).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/4
Y1 - 2021/4
AB - The Internet of Vehicles (IoV) enables real-time data exchange among vehicles and roadside units and thus provides a promising solution to alleviate traffic jams in urban areas. Meanwhile, better traffic management via efficient traffic light control can in turn benefit the IoV by enabling a better communication environment and decreasing the network load. As such, the IoV and efficient traffic light control can form a virtuous cycle. Edge computing, an emerging technology that provides low-latency computation capabilities at the edge of the network, can further improve the performance of this cycle. However, while the collected information is valuable, an efficient solution for better utilizing it and providing faster feedback has yet to be developed for the edge-empowered IoV. To this end, we propose Decentralized Reinforcement Learning at the Edge for traffic light control in the IoV (DRLE). DRLE exploits the ubiquity of the IoV to accelerate traffic data collection and interpretation towards better traffic light control and congestion alleviation. Operating within the coverage of the edge servers, DRLE aggregates data from neighboring edge servers for city-scale traffic light control. DRLE decomposes the highly complex problem of large-area control into a decentralized multi-agent problem. We prove its global optimality with concrete mathematical reasoning and demonstrate its superiority over several state-of-the-art algorithms via extensive evaluations.
KW - Edge computing
KW - Internet of Vehicles
KW - multi-agent deep reinforcement learning
KW - traffic light control
UR - http://www.scopus.com/inward/record.url?scp=85097443800&partnerID=8YFLogxK
U2 - 10.1109/TITS.2020.3035841
DO - 10.1109/TITS.2020.3035841
M3 - Article
AN - SCOPUS:85097443800
SN - 1524-9050
VL - 22
SP - 2262
EP - 2273
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 4
ER -