Abstract

This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. To address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.
| Original language | English |
| --- | --- |
| Pages (from-to) | 627-640 |
| Journal | IEEE Journal on Selected Areas in Communications |
| Volume | 33 |
| Issue number | 4 |
| DOIs | 10.1109/JSAC.2015.2393496 |
| Publication status | Published - 2015 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- wireless networks
- compact state representation
- discrete-time Markov decision process
- energy saving
- heterogeneous cellular networks
- reinforcement learning
- team Markov game
- traffic load balancing
- traffic offloading
Cite this
Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks. / Chen, Xianfu; Wu, Jinsong; Cai, Yueming; Zhang, Honggang; Chen, Tao.
In: IEEE Journal on Selected Areas in Communications, Vol. 33, No. 4, 2015, p. 627-640.

Research output: Contribution to journal › Article › Scientific › peer-review
TY - JOUR
T1 - Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks
AU - Chen, Xianfu
AU - Wu, Jinsong
AU - Cai, Yueming
AU - Zhang, Honggang
AU - Chen, Tao
N1 - Project code: 101914
PY - 2015
Y1 - 2015
N2 - This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. To address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.
AB - This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. To address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.
KW - wireless networks
KW - compact state representation
KW - discrete-time Markov decision process
KW - energy saving
KW - heterogeneous cellular networks
KW - reinforcement learning
KW - team Markov game
KW - traffic load balancing
KW - traffic offloading
U2 - 10.1109/JSAC.2015.2393496
DO - 10.1109/JSAC.2015.2393496
M3 - Article
VL - 33
SP - 627
EP - 640
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
SN - 0733-8716
IS - 4
ER -
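
For readers who want a feel for the core technique the abstract names, the sketch below shows plain Q-learning with a compact (linear) state representation. Everything here is an illustrative assumption rather than the authors' QC-learning implementation: the random-projection feature map, the toy per-cell load dynamics, the negative-total-load reward standing in for energy consumption, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- all illustrative assumptions, not taken from the paper.
n_cells = 20      # raw network state: per-cell traffic loads in [0, 1]
n_features = 8    # size of the compact state representation
n_actions = 4     # candidate offloading decisions
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Compact representation: a fixed random projection of the raw state.
# (A stand-in; the paper designs its own compact state representation.)
proj = rng.standard_normal((n_features, n_cells)) / np.sqrt(n_cells)
phi = lambda s: proj @ s

# Linear value model: Q(s, a) ~= w[a] . phi(s), one weight row per action.
w = np.zeros((n_actions, n_features))

def toy_step(state, action):
    """Hypothetical HCN dynamics: an offloading action nudges the loads;
    the reward is the negative total load, a proxy for energy cost."""
    nxt = np.clip(
        state + 0.05 * (action - (n_actions - 1) / 2)
        + 0.01 * rng.standard_normal(n_cells), 0.0, 1.0)
    return nxt, -nxt.sum()

state = rng.uniform(0.0, 1.0, n_cells)
for t in range(2000):
    f = phi(state)
    # Epsilon-greedy action selection over the approximated Q-values.
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(w @ f))
    nxt, r = toy_step(state, a)
    # Standard Q-learning temporal-difference update on the linear weights.
    td = r + gamma * np.max(w @ phi(nxt)) - w[a] @ f
    w[a] += alpha * td * f
    state = nxt

print("learned weight rows (one per action):")
print(w.round(3))
```

The design point this illustrates is why a compact representation tames the curse of dimensionality: Q-values live in a small weight matrix rather than a table indexed by the full network state, so memory and per-step update cost scale with the feature dimension instead of the raw state space.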