
Reinforcement Learning Based Content Push Policy for HetNets with Energy Harvesting Small Cells

  • Conference paper
Artificial Intelligence and Security (ICAIS 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11633)


Abstract

To make better use of renewable energy and reduce the consumption of conventional energy, much of the recent literature has turned its attention to content caching in wireless communications. In this article, we focus on content push and caching to increase green-energy utilization and save conventional energy. Because the state transition probabilities and future rewards in the mobile environment are unknown, we use reinforcement learning to solve the joint problem of green-energy allocation and content push. Q-learning is a model-free reinforcement learning technique that can find an optimal action-selection policy for an MDP. The policy is updated with the Boltzmann distribution method, so that the desired action can be chosen from the current state and the optimal policy. The small base station (SBS) selects actions according to the Boltzmann strategy and then iteratively updates its Q-table to obtain the best action in each state. Numerical simulations confirm the validity of the model and reveal regularities in the SBS's decisions.
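The paper itself does not include code; a minimal sketch of the two mechanisms named in the abstract, tabular Q-learning with Boltzmann (softmax) exploration, might look like the following. The state/action names, temperature, learning rate, and discount factor here are illustrative assumptions, not values taken from the paper.

```python
import math
import random
from collections import defaultdict

def boltzmann_action(q_row, actions, temperature=1.0):
    """Sample an action with probability proportional to exp(Q(s,a)/T).

    Higher temperature -> more exploration; lower -> more greedy.
    """
    prefs = [math.exp(q_row[a] / temperature) for a in actions]
    total = sum(prefs)
    r = random.random() * total
    acc = 0.0
    for a, p in zip(actions, prefs):
        acc += p
        if acc >= r:
            return a
    return actions[-1]

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    """
    best_next = max(q[next_state][a] for a in actions)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Q-table: state -> {action: value}, initialized lazily to 0.
q = defaultdict(lambda: defaultdict(float))
```

In an episode loop, the SBS would call `boltzmann_action` to pick (for example) a push/idle action from the current energy-and-cache state, observe the reward, and call `q_update` until the Q-table converges.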

Supported by the National Natural Science Foundation of China (61871058).




Author information

Correspondence to Yi Zhang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y., Wei, Y., Guo, D., Song, M. (2019). Reinforcement Learning Based Content Push Policy for HetNets with Energy Harvesting Small Cells. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science, vol 11633. Springer, Cham. https://doi.org/10.1007/978-3-030-24265-7_13


  • DOI: https://doi.org/10.1007/978-3-030-24265-7_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-24264-0

  • Online ISBN: 978-3-030-24265-7

  • eBook Packages: Computer Science (R0)
