
Robot Learning Human Skills and Intelligent Control Design [Hardback]

Chenguang Yang (University of the West of England, Bristol), Chao Zeng (University of Hamburg, Germany), Jianwei Zhang (University of Hamburg, Germany)
  • Format: Hardback, 184 pages, height x width: 234x156 mm, weight: 453 g, 9 Tables, black and white; 86 Line drawings, black and white; 45 Halftones, black and white; 131 Illustrations, black and white
  • Publication date: 22-Jun-2021
  • Publisher: CRC Press
  • ISBN-10: 0367634368
  • ISBN-13: 9780367634360
  • Hardback
  • Price: 171,76 €

In recent decades, robots have been expected to show increasing intelligence in order to deal with a wide range of tasks. In particular, robots are expected to be able to learn manipulation skills from humans. To this end, a number of learning algorithms and techniques have been developed and successfully applied to various robotic tasks. Among these methods, learning from demonstration (LfD) enables robots to acquire skills effectively and efficiently from human demonstrators, so that a robot can be quickly programmed to perform a new task.

This book introduces recent results on the development of advanced LfD-based learning and control approaches that improve robot dexterous manipulation. It first introduces the simulation tools and robot platforms used in the authors' research. To enable a robot to learn human-like adaptive skills, the book explains how to transfer a human user's variable arm stiffness to the robot, based on online estimation from muscle electromyography (EMG) signals. Next, the motion and impedance profiles are both modelled by dynamical movement primitives, so that both can be planned and generalized for new tasks. The book then shows how to learn the correlations between the signals collected during demonstration, i.e., the motion trajectory, the stiffness profile estimated from EMG, and the interaction force, using statistical models such as the hidden semi-Markov model and Gaussian mixture regression. Several widely used human-robot interaction interfaces (such as motion-capture-based teleoperation) are presented, which allow a human user to interact with a robot and transfer movements to it in both simulation and real-world environments. Finally, improved robot manipulation performance achieved with neural-network-enhanced control strategies is presented. A large number of simulation and experimental examples of daily-life tasks are included throughout the book to help readers gain a better understanding.
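To give a concrete flavour of one of the building blocks mentioned above, the following is a minimal sketch of a one-dimensional discrete dynamical movement primitive in the standard Ijspeert-style formulation: a demonstrated profile (a motion trajectory or, equally, a stiffness profile) is encoded as forcing-term weights and can then be reproduced with a new goal or duration. The gains, the number of basis functions and the synthetic demonstration below are illustrative assumptions, not values or code taken from the book.

import numpy as np

class DMP1D:
    """One-DoF discrete dynamical movement primitive (illustrative sketch)."""

    def __init__(self, n_basis=30, alpha_z=25.0, beta_z=6.25, alpha_x=4.0):
        self.n_basis, self.alpha_z, self.beta_z, self.alpha_x = n_basis, alpha_z, beta_z, alpha_x
        # Basis-function centres placed along the exponentially decaying phase x.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = n_basis ** 1.5 / self.c / alpha_x   # heuristic widths
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y, dt):
        # Learn forcing-term weights from a single demonstrated trajectory y(t).
        self.y0, self.g = y[0], y[-1]
        self.tau = (len(y) - 1) * dt
        t = np.arange(len(y)) * dt
        x = np.exp(-self.alpha_x * t / self.tau)      # canonical (phase) system
        dy = np.gradient(y, dt)
        ddy = np.gradient(dy, dt)
        # Invert the transformation system to obtain the target forcing term.
        f_target = self.tau ** 2 * ddy - self.alpha_z * (self.beta_z * (self.g - y) - self.tau * dy)
        s = x * (self.g - self.y0)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        # Locally weighted regression: one weight per basis function.
        self.w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s ** 2)[:, None]).sum(0) + 1e-10)
        return self

    def rollout(self, g=None, tau=None, dt=0.01):
        # Reproduce the skill, optionally with a new goal g or duration tau.
        g = self.g if g is None else g
        tau = self.tau if tau is None else tau
        x, y, z = 1.0, self.y0, 0.0
        out = []
        for _ in range(int(round(tau / dt))):
            psi = self._psi(x)
            f = x * (g - self.y0) * (psi @ self.w) / (psi.sum() + 1e-10)
            z += dt * (self.alpha_z * (self.beta_z * (g - y) - z) + f) / tau
            y += dt * z / tau
            x += dt * (-self.alpha_x * x) / tau
            out.append(y)
        return np.array(out)

# Usage: learn from a smooth synthetic demonstration, then replay towards a new goal.
t = np.linspace(0.0, 1.0, 200)
demo = 0.5 * (1.0 - np.cos(np.pi * t))                # 0 -> 1 reaching profile
dmp = DMP1D().fit(demo, dt=t[1] - t[0])
reproduction = dmp.rollout(g=1.5)                     # generalized to a new goal
print(reproduction[-1])                               # ends close to the new goal 1.5

The same learned weights generalize because the forcing term is rescaled by the new goal amplitude and vanishes as the phase variable decays, so the reproduction still converges to the goal.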

Preface ix
Author Biography xiii
Acknowledgements xv
Chapter 1 Introduction 1(16)
1.1 Overview of sEMG-based stiffness transfer 1(2)
1.2 Overview of robot learning motion skills from humans 3(3)
1.3 Overview of robot intelligent control design 6(2)
References 8(9)
Chapter 2 Robot Platforms and Software Systems 17(12)
2.1 Baxter robot 17(1)
2.2 Nao robot 18(1)
2.3 KUKA LBR iiwa robot 19(1)
2.4 Kinect camera 20(1)
2.5 MYO Armband 20(1)
2.6 Leap Motion 21(1)
2.7 Oculus Rift DK 2 22(1)
2.8 MATLAB Robotics Toolbox 23(2)
2.9 CoppeliaSim 25(1)
2.10 Gazebo 26(1)
References 27(2)
Chapter 3 Human-Robot Stiffness Transfer Based on sEMG Signals 29(32)
3.1 Introduction 29(1)
3.2 Brief introduction of sEMG signals 30(1)
3.3 Calculation of human arm Jacobian matrix 30(2)
3.4 Stiffness estimation 32(3)
3.4.1 Incremental stiffness estimation method 32(1)
3.4.2 Stochastic perturbation method 33(2)
3.5 Interface design for stiffness transfer 35(2)
3.6 Human-robot stiffness mapping 37(2)
3.7 Stiffness transfer for various tasks 39(18)
3.7.1 Comparative tests for lifting tasks 39(4)
3.7.2 Writing tasks 43(2)
3.7.3 Human-robot-human writing skill transfer 45(7)
3.7.4 Plugging-in task 52(5)
3.8 Conclusion 57(1)
References 57(4)
Chapter 4 Learning and Generalization of Variable Impedance Skills 61(30)
4.1 Introduction 61(1)
4.2 Overview of the framework 62(1)
4.3 Trajectory segmentation 63(4)
4.3.1 Data segmentation using difference method 63(1)
4.3.2 Beta process autoregressive hidden Markov model 64(3)
4.4 Trajectory alignment methods 67(1)
4.5 Dynamical movement primitives 67(1)
4.6 Modeling of impedance skills 68(2)
4.7 Experimental study 70(17)
4.7.1 Learning writing tasks 70(1)
4.7.2 Pushing tasks 71(3)
4.7.3 Cutting and lift-place tasks 74(8)
4.7.4 Water-lifting tasks 82(5)
4.8 Conclusion 87(1)
References 88(3)
Chapter 5 Learning Human Skills from Multimodal Demonstration 91(22)
5.1 Introduction 91(1)
5.2 System Description 92(2)
5.3 HSMM-GMR Model Description 94(2)
5.3.1 Data Modeling with HSMM 94(1)
5.3.2 Task Reproduction with GMR 95(1)
5.4 Impedance Controller in Task Space 96(2)
5.5 Experimental Study 98(11)
5.5.1 Button-pressing Task 98(2)
5.5.2 Box-pushing Task 100(1)
5.5.3 Pushing Task 101(7)
5.5.4 Experimental Analysis 108(1)
5.6 Conclusion 109(1)
References 110(3)
Chapter 6 Skill Modeling Based on Extreme Learning Machine 113(30)
6.1 Introduction 113(1)
6.2 System of teleoperation-based robotic learning 114(7)
6.2.1 Overview of teleoperation demonstration system 114(1)
6.2.2 Motion Capture Approach based on Kinect 115(5)
6.2.3 Measurement of angular velocity by MYO armband 120(1)
6.2.4 Communication between Kinect and V-REP 120(1)
6.3 Human/robot joint angle calculation using Kinect camera 121(2)
6.4 Processing of demonstration data 123(3)
6.4.1 Dynamic time warping 123(2)
6.4.2 Kalman Filter 125(1)
6.4.3 Dragon naturally speaking system for verbal command 126(1)
6.5 Skill modeling using extreme learning machine 126(3)
6.6 Experimental study 129(11)
6.6.1 Motion Capture for Tracking of Human Arm Pose 129(1)
6.6.2 Teleoperation-Based demonstration in VREP 130(2)
6.6.3 VR-based teleoperation for task demonstration 132(2)
6.6.4 Writing Task 134(6)
6.7 Conclusion 140(1)
References 140(3)
Chapter 7 Neural Network-Enhanced Robot Manipulator Control 143(22)
7.1 Introduction 143(1)
7.2 Problem description 144(1)
7.3 Learning from multiple demonstrations 145(3)
7.3.1 Gaussian mixture model 146(1)
7.3.2 Fuzzy Gaussian mixture model 146(2)
7.4 Neural networks techniques 148(1)
7.4.1 Radial basis function neural network 148(1)
7.4.2 Cerebellar model articulation neural networks 149(1)
7.5 Robot manipulator controller design 149(6)
7.5.1 NN-based controller for robotic manipulator 149(5)
7.5.2 Adaptive admittance controller 154(1)
7.6 Experimental study 155(7)
7.6.1 Test of the adaptive admittance controller 155(3)
7.6.2 Test of the NN-based controller 158(1)
7.6.3 Pouring task 158(4)
7.7 Conclusion 162(1)
References 162(3)
Index 165
Chenguang Yang is Co-Chair of the Technical Committee on Collaborative Automation for Flexible Manufacturing (CAFM), IEEE Robotics and Automation Society, and Co-Chair of the Technical Committee on Bio-mechatronics and Bio-robotics Systems (B2S), IEEE Systems, Man, and Cybernetics Society.

Chao Zeng is currently a Research Associate at the Institute of Technical Aspects of Multimodal Systems, Universität Hamburg.

Jianwei Zhang is the director of TAMS, Department of Informatics, Universität Hamburg, Germany.