
E-book: Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games

  • Format: PDF+DRM
  • Series: Advances in Industrial Control
  • Publication date: 05-Mar-2024
  • Publisher: Springer International Publishing AG
  • Language: English
  • ISBN-13: 9783031452529
  • Price: 142.75 €*
  • * This is the final price, i.e., no additional discounts are applied.
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you must install free software to unlock and read it. To read this e-book, you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which may already be on your computer).

    You cannot read this e-book on an Amazon Kindle.

Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games develops its learning techniques, motivated by applications to autonomous driving and microgrid systems, with breadth and depth. Integral reinforcement learning (RL) achieves model-free control without the system estimation, and the inevitable estimation errors, of system-identification methods. The novel inverse RL methods fill a gap in the literature and will attract readers seeking data-driven, model-free solutions for inverse optimization and optimal control, imitation learning, and autonomous driving, among other areas.
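The integral RL idea mentioned above can be sketched on the simplest possible case. This is a hypothetical illustration, not code from the book: a scalar continuous-time LQR problem dx/dt = a·x + b·u with cost ∫(q·x² + r·u²)dt, where the learner evaluates each policy from trajectory data via the integral Bellman relation P·(x(t)² − x(t+T)²) = ∫ₜᵗ⁺ᵀ(q·x² + r·u²)dτ, never using the drift term `a` directly (only the input gain `b`, as in integral RL, which is model-free in the drift dynamics). All plant and tuning values here are arbitrary.

```python
import numpy as np

# True plant (the value of `a` is used only to generate data; the learner never reads it).
a, b, q, r = 1.0, 1.0, 1.0, 1.0
dt, T = 1e-3, 0.05          # simulation step and RL evaluation interval
n_steps = int(T / dt)

def run_interval(x, k):
    """Simulate one interval under u = -k*x; return end state and accumulated cost."""
    cost = 0.0
    for _ in range(n_steps):
        u = -k * x
        cost += (q * x**2 + r * u**2) * dt   # left Riemann sum of the running cost
        x += (a * x + b * u) * dt            # Euler step of the (hidden) plant
    return x, cost

k = 2.0                      # initial stabilizing gain (a - b*k < 0)
for _ in range(8):           # policy iteration
    d, c = [], []
    x = 1.0
    for _ in range(10):      # collect data pairs along one trajectory
        x_next, cost = run_interval(x, k)
        d.append(x**2 - x_next**2)           # P * d_j = c_j  (integral Bellman eq.)
        c.append(cost)
        x = x_next
    d, c = np.array(d), np.array(c)
    P = float(d @ c / (d @ d))               # least-squares policy evaluation
    k = b * P / r                            # policy improvement: k = R^{-1} B' P

# Analytic check: the scalar Riccati equation 2aP - b^2 P^2 / r + q = 0
P_star = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2
print(P, P_star)             # P approaches 1 + sqrt(2) for these values
```

The least-squares step is trivial in the scalar case; in the book's setting the same relation is solved for the elements of a matrix P (or neural-network weights) over a batch of intervals.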

Graduate students will find that this book offers a thorough introduction to integral and inverse RL for feedback control related to optimal regulation and tracking, disturbance rejection, and multiplayer and multiagent systems. For researchers, it provides a combination of theoretical analysis, rigorous algorithms, and a wide-ranging selection of examples. The book equips practitioners working in various domains (aircraft, robotics, power systems, and communication networks among them) with theoretical insights valuable in tackling the real-world challenges they face.
1. Introduction.-
2. Background on Integral and Inverse Reinforcement Learning for Dynamic System Feedback.-
3. Integral Reinforcement Learning for Optimal Regulation.-
4. Integral Reinforcement Learning for Optimal Tracking.-
5. Integral Reinforcement Learning for Nonlinear Tracker.-
6. Integral Reinforcement Learning for H-infinity Control.-
7. Inverse Reinforcement Learning for Linear and Nonlinear Systems.-
8. Inverse Reinforcement Learning for Two-Player Zero-Sum Games.-
9. Inverse Reinforcement Learning for Multi-player Nonzero-sum Games.
Bosen Lian obtained his B.S. degree from the North China University of Water Resources and Electric Power, Zhengzhou, China, in 2015, the M.S. degree from Northeastern University, Shenyang, China, in 2018, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2021. He is currently an Assistant Professor in the Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA. Prior to that, he was an Adjunct Professor in the Department of Electrical Engineering, University of Texas at Arlington, and a Postdoctoral Research Associate at the University of Texas at Arlington Research Institute. His research interests focus on reinforcement learning, inverse reinforcement learning, distributed estimation, distributed control, and robotics.

Wenqian Xue received the B.Eng. degree from Qingdao University, Qingdao, China, in 2015, and the M.S. degree from Northeastern University, Shenyang, China, in 2018, where she is currently pursuing the Ph.D. degree. She was a Research Assistant (Visiting Scholar) with the University of Texas at Arlington from 2019 to 2021. Her current research interests include learning-based data-driven control, reinforcement learning and inverse reinforcement learning, game theory, and distributed control of multi-agent systems. She is a reviewer for Automatica, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, etc.

Frank L. Lewis obtained the Bachelor's degree in Physics/EE and the M.S.E.E. at Rice University, the M.S. in Aeronautical Engineering from the University of West Florida, and the Ph.D. at Georgia Tech. He is a Fellow of the National Academy of Inventors, IEEE, IFAC, AAAS, the European Union Academy of Science, and the U.K. Institute of Measurement & Control; a PE in Texas; and a U.K. Chartered Engineer. He is UTA Charter Distinguished Scholar Professor, UTA Distinguished Teaching Professor, and Moncrief-O'Donnell Chair at the University of Texas at Arlington Research Institute. Lewis is ranked number 19 in the world among all scientists in Electronics and Electrical Engineering by Research.com, and number 5 in the world in the subfield of Industrial Engineering and Automation according to a 2021 Stanford University research study, with 80,000 Google Scholar citations and an H-index of 123. He works in feedback control, intelligent systems, reinforcement learning, cooperative control systems, and nonlinear systems. He is the author of 8 U.S. patents, numerous journal special issues, 445 journal papers, and 20 books, including the textbooks Optimal Control, Aircraft Control, Optimal Estimation, and Robot Manipulator Control. He received the Fulbright Research Award, the NSF Research Initiation Grant, the ASEE Terman Award, the International Neural Network Society Gabor Award, the U.K. Institute of Measurement & Control Honeywell Field Engineering Medal, the IEEE Computational Intelligence Society Neural Networks Pioneer Award, the AIAA Intelligent Systems Award, and the AACC Ragazzini Award. He has received over $12M in 100 research grants from NSF, ARO, ONR, AFOSR, DARPA, and U.S. industry contracts, and helped win the U.S. SBA Tibbetts Award in 1996 as Director of the UTA Research Institute SBIR Program.

Hamidreza Modares received the B.S. degree from the University of Tehran, Tehran, Iran, in 2004, the M.S. degree from the Shahrood University of Technology, Shahrood, Iran, in 2006, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2015. He is currently an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering, Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems.

Bahare Kiumarsi received the B.S. degree in electrical engineering from the Shahrood University of Technology, Iran, in 2009, the M.S. degree in electrical engineering from the Ferdowsi University of Mashhad, Iran, in 2013, and the Ph.D. degree in electrical engineering from the University of Texas at Arlington, Arlington, TX, USA, in 2017. In 2018, she was a Post-Doctoral Research Associate with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA. She is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA. Her current research interests include machine learning in control, security of cyber-physical systems, game theory, and distributed control.