Human Action Recognition with Depth Cameras, 2014 ed. [Paperback]

  • Format: Paperback / softback, VIII + 59 pages, 235 x 155 mm, weight: 1182 g, 32 illustrations (9 in color, 23 black and white)
  • Series: SpringerBriefs in Computer Science
  • Publication date: 04-Feb-2014
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319045601
  • ISBN-13: 9783319045603
  • Paperback
  • Price: 46,91 €*
  • * This is the final price, i.e., no additional discounts apply
  • List price: 55,19 €
  • Save 15%
  • Delivery takes 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may be delayed.
Action recognition technology has many real-world applications in human-computer interaction, surveillance, video retrieval, retirement home monitoring, and robotics. The commoditization of depth sensors has also opened up further applications that were not feasible before. This text focuses on feature representation and machine learning algorithms for action recognition from depth sensors. After presenting a comprehensive overview of the state of the art, the authors then provide in-depth descriptions of their recently developed feature representations and machine learning techniques, including lower-level depth and skeleton features, higher-level representations to model the temporal structure and human-object interactions, and feature selection techniques for occlusion handling. This work enables the reader to quickly familiarize themselves with the latest research, and to gain a deeper understanding of recently developed techniques. It will be of great use for both researchers and practitioners.
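
To give a concrete flavor of the representations the book covers, below is a minimal Python/NumPy sketch of two ingredients named in its table of contents: pairwise relative joint positions as a skeleton feature, and a Fourier Temporal Pyramid to encode their temporal structure. This is an illustration only, not the authors' reference implementation; the data layout (`joints` of shape `(T, J, 3)`) and the parameters `levels` and `n_coeffs` are assumptions made for the sketch.

```python
import numpy as np

def relative_joint_features(joints):
    """Pairwise relative 3D joint positions, one feature vector per frame.

    joints: array of shape (T, J, 3) -- T frames, J skeleton joints
    (this layout is an assumption of the sketch).
    Returns an array of shape (T, J*(J-1)//2 * 3).
    """
    _, J, _ = joints.shape
    i, j = np.triu_indices(J, k=1)                    # all joint pairs
    rel = joints[:, i, :] - joints[:, j, :]           # (T, pairs, 3)
    return rel.reshape(joints.shape[0], -1)

def fourier_temporal_pyramid(series, levels=3, n_coeffs=4):
    """Low-frequency Fourier magnitudes over a dyadic temporal pyramid.

    At pyramid level l the sequence is split into 2**l equal segments;
    keeping only the first n_coeffs Fourier magnitudes per segment makes
    the descriptor robust to small temporal shifts and noise.
    """
    out = []
    for level in range(levels):
        for seg in np.array_split(series, 2 ** level, axis=0):
            mags = np.abs(np.fft.rfft(seg, axis=0))[:n_coeffs]
            if mags.shape[0] < n_coeffs:              # pad very short segments
                mags = np.pad(mags, ((0, n_coeffs - mags.shape[0]), (0, 0)))
            out.append(mags.ravel())
    return np.concatenate(out)

# Toy usage: 40 frames of a 20-joint skeleton (random stand-in data).
joints = np.random.rand(40, 20, 3)
descriptor = fourier_temporal_pyramid(relative_joint_features(joints))
print(descriptor.shape)                               # fixed-length action descriptor
```

The fixed-length descriptor produced this way could then be fed to any standard classifier; the book builds richer models (actionlet ensembles, occupancy patterns) on top of features of this kind.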

Reviews

It is a relatively short but self-contained volume that presents recent advances in the popular research area of human action recognition. I was quite pleased when the student, to whom I passed the book for a thorough read, told me at the end that he found it very useful and a good start for his research. ... book is a good read for someone with an existing background in depth camera technology and research about human action recognition. (Nicola Bellotto, IAPR Newsletter, Vol. 37 (2), 2015)

Contents

1 Introduction
  1.1 Introduction
  1.2 Skeleton-Based Features
  1.3 Depthmap-Based Features
  1.4 Recognition Paradigms
  1.5 Datasets
  References
2 Learning Actionlet Ensemble for 3D Human Action Recognition
  2.1 Introduction
  2.2 Related Work
  2.3 Spatio-Temporal Features
    2.3.1 Invariant Features for 3D Joint Positions
    2.3.2 Local Occupancy Patterns
    2.3.3 Fourier Temporal Pyramid
    2.3.4 Orientation Normalization
  2.4 Actionlet Ensemble
    2.4.1 Mining Discriminative Actionlets
    2.4.2 Learning Actionlet Ensemble
  2.5 Experimental Results
    2.5.1 MSR-Action3D Dataset
    2.5.2 DailyActivity3D Dataset
    2.5.3 Multiview 3D Event Dataset
    2.5.4 Cornell Activity Dataset
    2.5.5 CMU MoCap Dataset
  2.6 Conclusion
  References
3 Random Occupancy Patterns
  3.1 Introduction
  3.2 Related Work
  3.3 Random Occupancy Patterns
  3.4 Weighted Sampling Approach
    3.4.1 Dense Sampling Space
    3.4.2 Weighted Sampling
  3.5 Learning Classification Functions
  3.6 Experimental Results
    3.6.1 MSR-Action3D
    3.6.2 Gesture3D Dataset
  3.7 Conclusion
  References
4 Conclusion
  4.1 Conclusion
  Reference
Index