Intelligent handover decision scheme using double deep reinforcement learning
Date
2020-10
Author
Mollel, Michael
Abubakar, Attai Ibrahim
Ozturk, Metin
Kaijage, Shubi
Michael, Kisangiri
Zoha, Ahmed
Imran, Muhammad Ali
Abbasi, Qammer
Abstract
Handovers (HOs) are expected to be more challenging in 5G networks due to the inclusion of millimetre-wave (mm-wave) frequencies, which results in denser base station (BS) deployments. This, in turn, increases the number of HOs performed because of the smaller footprints of mm-wave BSs, making HO management a more crucial task, since reduced quality of service (QoS) and quality of experience (QoE), along with higher signalling overhead, become more likely as the number of HOs grows. In this paper, we propose an offline scheme based on double deep reinforcement learning (DDRL) to minimize the frequency of HOs in mm-wave networks, which in turn mitigates the adverse effects on QoS. Owing to the continuous and large state space arising from the inherent characteristics of the considered 5G environment, DDRL is preferred over the conventional Q-learning algorithm. Furthermore, to alleviate the computational cost of online learning policies, an offline learning framework is adopted in this study: a known trajectory is considered in a simulation environment, while ray-tracing is used to estimate channel characteristics. The number of HO occurrences along the trajectory and the system throughput are taken as performance metrics. The results obtained reveal that the proposed method largely outperforms conventional and other artificial intelligence (AI)-based models.
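For illustration only, the following is a minimal sketch (not the authors' implementation) of the double deep Q-learning update that underpins a DDRL handover agent: the online network selects the next action while a separate target network evaluates it, decoupling action selection from action evaluation. The state and action encoding, network sizes, discount factor, and use of PyTorch here are assumptions made purely for this example.

# Minimal double-DQN update sketch for a handover agent (illustrative only).
# Assumed encoding:
#   state  = vector of signal-quality features, one per candidate mm-wave BS
#   action = index of the BS to associate with (switching BS counts as one HO)
import torch
import torch.nn as nn

N_BS = 8          # assumed number of candidate base stations
STATE_DIM = N_BS  # one feature per BS
GAMMA = 0.95      # assumed discount factor

def make_q_net():
    # Small fully connected Q-network: state -> Q-value for each candidate BS
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_BS),
    )

online_net = make_q_net()   # selects actions (argmax over Q-values)
target_net = make_q_net()   # evaluates the selected actions
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(states, actions, rewards, next_states, dones):
    """One double-DQN gradient step on a batch of offline transitions."""
    # Q(s, a) from the online network for the actions actually taken
    q_sa = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        # Double DQN: the online network picks the next action ...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ... and the target network evaluates that action
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q

    loss = nn.functional.mse_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In a handover setting such as the one described above, the reward would typically combine achieved throughput with a penalty for each HO event, so that the learned policy trades instantaneous rate against HO frequency; the exact reward shaping used in the paper is not reproduced here.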
URI
https://doi.org/10.1016/j.phycom.2020.101133
https://dspace.nm-aist.ac.tz/handle/20.500.12479/778