Abstract
In order to utilize renewable energy and reduce the consumption of traditional grid energy, much recent literature has drawn attention to content caching in wireless communications. In this article, we focus on content push and caching to increase green-energy utilization and save traditional energy. Because the state transition probabilities and future rewards in a mobile environment are unknown, we use reinforcement learning to solve the joint problem of green-energy allocation and content push. Q-learning is a model-free reinforcement learning technique that can find an optimal action-selection policy for a Markov decision process (MDP). The policy is updated using the Boltzmann distribution, so that the desired action can be selected from the current state and the learned policy. The small base station (SBS) selects actions according to the Boltzmann policy and iteratively updates its Q-table to obtain the best action in each state. Numerical simulations verify the validity of the model and reveal the regularity of the SBS's decisions.
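The learning loop described above, Boltzmann (softmax) action selection followed by an iterative Q-table update, can be sketched as follows. This is a minimal illustration on a hypothetical 5-state toy environment (`toy_step`), not the paper's actual HetNet model; the state space, rewards, and hyperparameters (`alpha`, `gamma`, `tau`) are all assumptions chosen for the sketch.

```python
import numpy as np

def boltzmann_policy(q_row, tau):
    """Action probabilities via a softmax (Boltzmann distribution) over Q-values."""
    prefs = q_row / tau
    prefs = prefs - prefs.max()          # shift for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def toy_step(s, a):
    """Hypothetical 5-state chain standing in for the SBS environment:
    action 1 keeps pushing content toward a cache hit (reward 1 at state 4);
    action 0 idles and ends the episode with a small reward."""
    if a == 1:
        s_next = s + 1
        if s_next == 4:
            return s_next, 1.0, True     # cache hit: terminal reward
        return s_next, 0.0, False
    return s, 0.1, True                  # idle: small immediate reward, episode ends

def q_learning(step, n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, tau=1.0, seed=0):
    """Model-free Q-learning with Boltzmann exploration."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(n_actions, p=boltzmann_policy(Q[s], tau))
            s_next, r, done = step(s, a)
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])   # temporal-difference update
            s = s_next
    return Q
```

After training, the greedy action `np.argmax(Q[s])` gives the learned policy in each state; the temperature `tau` controls how strongly exploration is biased toward high-value actions.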
Acknowledgment
This work was supported by the National Natural Science Foundation of China (61871058).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Zhang, Y., Wei, Y., Guo, D., Song, M. (2019). Reinforcement Learning Based Content Push Policy for HetNets with Energy Harvesting Small Cells. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science(), vol 11633. Springer, Cham. https://doi.org/10.1007/978-3-030-24265-7_13
DOI: https://doi.org/10.1007/978-3-030-24265-7_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-24264-0
Online ISBN: 978-3-030-24265-7