Abstract
In this paper, we investigate computation offloading in a multi-access wireless network, which supports both cellular and WiFi connectivity between a mobile user (MU) and the edge server. The MU decides whether to process an arriving computation task locally at the device or to offload it to the edge server for remote execution. The technical challenges of designing a computation offloading policy lie in the network uncertainties due to MU mobility, sporadic task arrivals, spatially distributed WiFi connectivity, and intermittent wireless charging opportunities. Accordingly, we apply a Markov decision process framework to formulate the problem of computation offloading over an infinite discrete time horizon. The objective of the MU is to find a policy that minimizes the expected long-term cost. Without knowledge of the network uncertainty statistics, this paper makes the first attempt to exploit the model-free DQNReg, which is built upon a deep Q-network by adding a weighted Q-value to the squared Bellman error, to learn an optimal computation offloading policy. Experiments validate the superior performance of our approach compared to the baselines in terms of average computation offloading cost.
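The abstract describes the DQNReg objective as the squared Bellman error plus a weighted Q-value term. A minimal sketch of that per-sample loss, assuming the standard TD target r + γ·max Q(s′, ·) and a hypothetical regularization weight k (the function name and default values here are illustrative, not from the paper):

```python
def dqnreg_loss(q_sa, q_next_max, reward, gamma=0.99, k=0.1):
    """DQNReg-style per-sample loss: squared Bellman (TD) error plus a
    weighted Q-value regularization term k * Q(s, a).

    q_sa:       Q-value of the taken action, Q(s, a)
    q_next_max: max over actions of the target network, max_a' Q_target(s', a')
    reward:     immediate reward r
    gamma:      discount factor
    k:          Q-value regularization weight (illustrative default)
    """
    td_target = reward + gamma * q_next_max   # Bellman target
    bellman_err = q_sa - td_target            # TD error
    return k * q_sa + bellman_err ** 2        # weighted Q + squared error
```

In practice the same expression would be computed over mini-batches of replayed transitions and minimized by gradient descent on the online Q-network, with the target term held fixed.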
| Original language | English |
|---|---|
| Title of host publication | AIIOT 2022 - Proceedings of the 2022 1st Workshop on Digital Twin and Edge AI for Industrial IoT, Part of MobiCom 2022 |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 19-24 |
| Number of pages | 6 |
| ISBN (Electronic) | 978-1-4503-9784-1 |
| DOIs | |
| Publication status | Published - 17 Oct 2022 |
| MoE publication type | A4 Article in a conference publication |
| Event | 2022 1st Workshop on Digital Twin and Edge AI for Industrial IoT, AIIOT 2022 - Part of MobiCom 2022 - Sydney, Australia Duration: 21 Oct 2022 → … |
Conference
| Conference | 2022 1st Workshop on Digital Twin and Edge AI for Industrial IoT, AIIOT 2022 - Part of MobiCom 2022 |
|---|---|
| Country/Territory | Australia |
| City | Sydney |
| Period | 21/10/22 → … |
Funding
This work was supported in part by the Collaborative Innovation Major Project of Zhengzhou (20XTZX06013), in part by the National Natural Science Foundation of China (61972092), in part by the Zhejiang Provincial Natural Science Foundation (LGG22F010008), in part by the National Key Research and Development Program of China (2021YFB2900200), in part by the Key Research and Development Program of Zhejiang Province (2021C01197), and in part by the Zhejiang Lab Open Program (2021LC0AB06).
Keywords
- deep reinforcement learning
- heterogeneous wireless networks
- Markov decision process
- mobile edge computing
- wireless charging