Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks

Xianfu Chen, Jinsong Wu, Yueming Cai, Honggang Zhang, Tao Chen

Research output: Contribution to journal › Article › Scientific › peer-review

129 Citations (Scopus)

Abstract

This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. In order to address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, which is named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.
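
Because the abstract only names the technique, the following minimal Python sketch illustrates one standard way to realize Q-learning with a compact state representation: linear value-function approximation over state features, with epsilon-greedy action selection and cost (rather than reward) minimization. This is a hedged illustration, not the authors' QC-learning: the class name CompactQLearner, the feature vector phi, and all parameters are hypothetical stand-ins for the paper's network state, offloading actions, and energy/QoS cost signal.

```python
# Minimal, illustrative sketch of Q-learning with a compact (feature-based)
# state representation and cost minimization. NOT the paper's QC-learning
# implementation; the feature map and parameters are hypothetical.
import numpy as np


class CompactQLearner:
    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        # One linear weight vector per action: Q(s, a) ~= theta[a] . phi(s)
        self.theta = np.zeros((n_actions, n_features))
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor for future energy costs
        self.epsilon = epsilon  # exploration probability
        self.n_actions = n_actions

    def q(self, phi, a):
        # Approximate state-action value from the compact feature vector phi
        return self.theta[a] @ phi

    def act(self, phi):
        # Epsilon-greedy; argmin because the objective is a discounted *cost*
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmin([self.q(phi, a) for a in range(self.n_actions)]))

    def update(self, phi, a, cost, phi_next):
        # One-step temporal-difference update toward the minimal future cost
        target = cost + self.gamma * min(
            self.q(phi_next, b) for b in range(self.n_actions)
        )
        self.theta[a] += self.alpha * (target - self.q(phi, a)) * phi
```

At each decision epoch, a controller in this sketch would map the observed network state (e.g., per-cell traffic loads) to a feature vector phi, pick an offloading action, observe the resulting energy/QoS cost, and call update; the paper's decentralized variant instead lets each macro BS run such decision making locally using global state information from the network controller.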
Original language: English
Pages (from-to): 627-640
Journal: IEEE Journal on Selected Areas in Communications
Volume: 33
Issue number: 4
DOIs: 10.1109/JSAC.2015.2393496
Publication status: Published - 2015
MoE publication type: A1 Journal article-refereed

Fingerprint

  • Base stations
  • Energy efficiency
  • Macros
  • Wireless networks
  • Quality of service
  • Energy utilization
  • Controllers
  • Reinforcement learning
  • Learning algorithms
  • Frequency bands
  • Energy conservation
  • Statistics

Keywords

  • wireless networks
  • compact state representation
  • discrete-time Markov decision process
  • energy saving
  • heterogeneous cellular networks
  • reinforcement learning
  • team Markov game
  • traffic load balancing
  • traffic offloading

Cite this

@article{a5caa4219c454fe6b9d995f918a088da,
title = "Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks",
abstract = "This paper first provides a brief survey on existing traffic offloading techniques in wireless networks. Particularly as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with system loads in other cells due to the sharing over a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. In order to solve the curse of dimensionality, we design a centralized Q -learning with compact state representation algorithm, which is named QC -learning. Moreover, a decentralized version of the QC -learning is developed based on the fact the macro base stations (BSs) can independently manage the operations of local small-cell BSs through making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC -learning algorithms in balancing the tradeo- f between energy saving and QoS satisfaction.",
keywords = "wireless networks, compact state representation, discrete-time Markov decision process, energy saving, heterogeneous cellular networks, reinforcement learning, team Markov game, traffic load balancing, traffic offloading",
author = "Xianfu Chen and Jinsong Wu and Yueming Cai and Honggang Zhang and Tao Chen",
note = "Project code: 101914",
year = "2015",
doi = "10.1109/JSAC.2015.2393496",
language = "English",
volume = "33",
pages = "627--640",
journal = "IEEE Journal on Selected Areas in Communications",
issn = "0733-8716",
publisher = "IEEE Institute of Electrical and Electronic Engineers",
number = "4",

}

Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks. / Chen, Xianfu; Wu, Jinsong; Cai, Yueming; Zhang, Honggang; Chen, Tao.

In: IEEE Journal on Selected Areas in Communications, Vol. 33, No. 4, 2015, p. 627-640.

Research output: Contribution to journal › Article › Scientific › peer-review

TY - JOUR

T1 - Energy-efficiency oriented traffic offloading in wireless networks: A brief survey and a learning approach for heterogeneous cellular networks

AU - Chen, Xianfu

AU - Wu, Jinsong

AU - Cai, Yueming

AU - Zhang, Honggang

AU - Chen, Tao

N1 - Project code: 101914

PY - 2015

Y1 - 2015

N2 - This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. In order to address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, which is named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.

AB - This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. In particular, as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to the sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. In order to address the curse of dimensionality, we design a centralized Q-learning with compact state representation algorithm, which is named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that the macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.

KW - wireless networks

KW - compact state representation

KW - discrete-time Markov decision process

KW - energy saving

KW - heterogeneous cellular networks

KW - reinforcement learning

KW - team Markov game

KW - traffic load balancing

KW - traffic offloading

U2 - 10.1109/JSAC.2015.2393496

DO - 10.1109/JSAC.2015.2393496

M3 - Article

VL - 33

SP - 627

EP - 640

JO - IEEE Journal on Selected Areas in Communications

JF - IEEE Journal on Selected Areas in Communications

SN - 0733-8716

IS - 4

ER -