Performance Optimization in Mobile-Edge Computing via Deep Reinforcement Learning

Xianfu Chen, Honggang Zhang, Celimuge Wu, Shiwen Mao, Yusheng Ji, Mehdi Bennis

    Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

    7 Citations (Scopus)

    Abstract

    To improve the quality of the computation experience for mobile devices, mobile-edge computing (MEC) is emerging as a promising paradigm that provides computing capabilities within radio access networks, in close proximity to the users. Nevertheless, designing computation offloading policies for an MEC system remains challenging: the decision of whether to execute an arriving computation task at the local mobile device or to offload it for cloud execution should adapt to the environmental dynamics. In this paper, we consider MEC for a representative mobile user in an ultra-dense network, where one of multiple base stations (BSs) can be selected for computation offloading. We model the problem of finding an optimal computation offloading policy as a Markov decision process, in which the objective is to minimize the long-term cost and each offloading decision is based on the channel qualities between the mobile user and the BSs, the energy queue state, and the task queue state. To break the curse of high dimensionality of the state space, we propose a deep Q-network-based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of the statistics of the environmental dynamics. Numerical experiments show that the proposed algorithm achieves a significant improvement in average cost over baseline policies.
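
    As a rough sketch of how such a deep Q-network (DQN) based offloading agent fits together (not the paper's exact formulation): in the code below, the number of base stations M, the state layout (per-BS channel qualities plus the energy- and task-queue lengths), the reward defined as the negative immediate cost, the network sizes and hyperparameters, and the placeholder env_step dynamics are all illustrative assumptions.

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    M = 4                   # number of candidate base stations (assumed)
    STATE_DIM = M + 2       # per-BS channel quality + energy queue + task queue
    N_ACTIONS = M + 1       # action 0: execute locally; action m: offload via BS m
    GAMMA, EPS = 0.95, 0.1  # discount factor and exploration rate (illustrative)

    class QNet(nn.Module):
        """MLP approximating Q(s, a); sidesteps tabulating the large state space."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS))

        def forward(self, s):
            return self.net(s)

    q_net, target_net = QNet(), QNet()
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)   # experience replay buffer

    def act(state):
        """Epsilon-greedy offloading decision from the current observation."""
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(q_net(state.unsqueeze(0)).argmax())

    def learn(batch_size=64):
        """One DQN step: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
        if len(replay) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(replay, batch_size))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        q_sa = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + GAMMA * target_net(s2).max(dim=1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    def env_step(state, action):
        """Placeholder dynamics; the paper's channel/queue model would go here.
        Reward is the negative immediate cost, so maximizing return minimizes
        the long-term cost."""
        return torch.rand(STATE_DIM), -float(torch.rand(()))

    state = torch.rand(STATE_DIM)
    for step in range(1000):
        action = act(state)
        next_state, reward = env_step(state, action)
        replay.append((state, action, reward, next_state))
        learn()
        if step % 100 == 0:    # periodic target-network synchronization
            target_net.load_state_dict(q_net.state_dict())
        state = next_state

    In the paper's setting, the transitions stored in the replay buffer would come from the actual channel-fading and task/energy-arrival processes; the loop above only illustrates the learning mechanics that let the agent cope with the high-dimensional state space.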

    Original language: English
    Title of host publication: 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall)
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    Number of pages: 6
    ISBN (Electronic): 978-1-5386-6358-5, 978-1-5386-6357-8
    ISBN (Print): 978-1-5386-6359-2
    DOI: 10.1109/VTCFall.2018.8690980
    Publication status: Published - 2018
    MoE publication type: A4 Article in a conference publication
    Event: 88th IEEE Vehicular Technology Conference, VTC-Fall 2018 - Chicago, United States
    Duration: 27 Aug 2018 - 30 Aug 2018
    Conference number: 88

    Publication series

    Series: IEEE Vehicular Technology Conference Papers
    Volume: 2018-August
    ISSN: 1090-3038

    Conference

    Conference: 88th IEEE Vehicular Technology Conference, VTC-Fall 2018
    Abbreviated title: VTC-Fall 2018
    Country: United States
    City: Chicago
    Period: 27/08/18 - 30/08/18

    Keywords

    • cellular radio
    • cloud computing
    • optimisation
    • radio access networks
    • Markov process
    • mobile computing

    Cite this

    Chen, X., Zhang, H., Wu, C., Mao, S., Ji, Y., & Bennis, M. (2018). Performance Optimization in Mobile-Edge Computing via Deep Reinforcement Learning. In 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall) [8690980]. Institute of Electrical and Electronics Engineers (IEEE). (IEEE Vehicular Technology Conference Papers, Vol. 2018-August). https://doi.org/10.1109/VTCFall.2018.8690980