
E-book: New Development in Robot Vision

Edited by Yu Sun, Aman Behal, Chi-Kit Ronald Chung
  • Format: PDF+DRM
  • Series: Cognitive Systems Monographs 23
  • Publication date: 26-Sep-2014
  • Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
  • Language: English
  • ISBN-13: 9783662438596
  • Price: 53,52 €*
  • * This is the final price; no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you must install free software to unlock and read it. To read this e-book, you need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    This e-book cannot be read on an Amazon Kindle.

The field of robotic vision has advanced rapidly in recent years with the development of new range sensors, and this progress has had a significant impact on areas such as robotic navigation, scene and environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most important recent advances in the field. It opens with chapters describing new techniques for understanding scenes from 2D/3D data, including the estimation of planar structures, recognition of multiple objects in a scene using different kinds of features together with their spatial and semantic relationships, generation of 3D object models, and an approach to recognizing partially occluded objects. Novel techniques are introduced to improve 3D perception accuracy with additional sensors such as a gyroscope, to improve positioning accuracy in microassembly through a visual-servoing-based alignment strategy, and to increase object recognition reliability using related manipulation motion models. For autonomous robot navigation, different vision-based localization and tracking strategies and algorithms are discussed, and new approaches are described that use probabilistic analysis for robot navigation, online learning of vision-based robot control, and 3D motion estimation via intensity differences from a monocular camera. This collection will be beneficial to graduate students, researchers, and professionals working in the area of robotic vision.

1 Multi-modal Manhattan World Structure Estimation for Domestic Robots 1(18)
Kai Zhou
Karthik Mahesh Varadarajan
Michael Zillich
Markus Vincze
1.1 Introduction 2(3)
1.2 Related Work 5(2)
1.2.1 Multi-modal Plane Estimation 5(2)
1.2.2 Multi-modal Planar Modeling for Robotics 7(1)
1.3 Relationship between Pairwise Data 7(3)
1.3.1 Generalized Distance Matrix 8(1)
1.3.2 Jensen-Shannon Divergence (JSD) 9(1)
1.4 Modeling and Selection of Inliers 10(1)
1.5 Experiments 11(4)
1.6 Conclusion 15(4)
References 16(3)
2 RMSD: A 3D Real-Time Mid-level Scene Description System 19(14)
Kristiyan Georgiev
Rolf Lakaemper
2.1 Introduction 19(4)
2.2 Related Work 23(1)
2.3 Method Overview 24(2)
2.3.1 Line Segment Extraction 25(1)
2.3.2 Ellipse Extraction 26(1)
2.3.3 System Limitations 26(1)
2.4 3D Object Extraction 26(2)
2.5 Experiments 28(2)
2.5.1 3D Kinect Experiments 28(1)
2.5.2 Object Tracking 29(1)
2.5.3 Mobile Robot Experiment 29(1)
2.6 Conclusion and Future Work 30(3)
References 30(3)
3 Semantic and Spatial Content Fusion for Scene Recognition 33(22)
Elahe Farahzadeh
Tat-Jen Cham
Wanqing Li
3.1 Introduction 33(2)
3.2 Related Work 35(1)
3.3 Overview of the Proposed Framework 36(1)
3.4 Feature Extraction and Representation 37(2)
3.4.1 Capturing Semantic Information 37(1)
3.4.2 Capturing Contextual Information 38(1)
3.4.3 Capturing Spatial Location Information 38(1)
3.5 Spatial Semantic Feature Fusion (SSFF) 39(6)
3.5.1 Exemplar-Set Selection 39(1)
3.5.2 Learning Phase for SSFF Method 40(5)
3.5.3 Scene Type Recognition for SSFF Method 45(1)
3.6 Experimental Results 45(7)
3.6.1 Results on 15-Scene Dataset 47(2)
3.6.2 Results on MIT 67-Indoor Scenes Dataset 49(3)
3.7 Conclusion 52(3)
References 52(3)
4 Improving RGB-D Scene Reconstruction Using Rolling Shutter Rectification 55(18)
Hannes Ovrén
Per-Erik Forssén
David Törnqvist
4.1 Introduction 55(3)
4.1.1 Related Work 57(1)
4.1.2 Structure 57(1)
4.1.3 Notation 58(1)
4.2 Sensor Calibration 58(5)
4.2.1 Synchronizing the Timestamps 58(3)
4.2.2 Relation of Coordinate Frames 61(2)
4.3 Depth Map Rectification 63(2)
4.3.1 Gyro Integration 63(1)
4.3.2 Rectification 64(1)
4.4 Experiments 65(5)
4.4.1 Experiment Setup 65(1)
4.4.2 Pan and Tilt Distortions 66(3)
4.4.3 Wobble Distortions 69(1)
4.5 Concluding Remarks 70(3)
References 70(3)
5 Modeling Paired Objects and Their Interaction 73(16)
Yu Sun
Yun Lin
5.1 Introduction 73(3)
5.2 Human-Object-Object-Interaction Modeling 76(7)
5.2.1 Bayesian Network Model for HOO Interaction 77(1)
5.2.2 Object Detection 78(1)
5.2.3 Motion Analysis 79(3)
5.2.4 Object Reaction 82(1)
5.2.5 Bayesian Network Inference 82(1)
5.3 Experiments and Results 83(2)
5.4 Conclusions 85(4)
References 85(4)
6 Probabilistic Active Recognition of Multiple Objects Using Hough-Based Geometric Matching Features 89(22)
Natasha Govender
Philip Torr
Mogomotsi Keaikitse
Fred Nicolls
Jonathan Warrell
6.1 Introduction 89(2)
6.2 Related Work 91(2)
6.3 Active Recognition of a Single Object 93(4)
6.4 Active Recognition of Multiple Objects 97(4)
6.5 Relationship to Mutual Information 101(1)
6.6 Experimentation 102(6)
6.7 Discussion 108(3)
References 108(3)
7 Incremental Light Bundle Adjustment: Probabilistic Analysis and Application to Robotic Navigation 111(26)
Vadim Indelman
Frank Dellaert
7.1 Introduction 111(3)
7.2 Related Work 114(2)
7.2.1 Computationally Efficient Bundle Adjustment 114(1)
7.2.2 SLAM and Vision-Aided Navigation 115(1)
7.3 Incremental Light Bundle Adjustment 116(4)
7.3.1 Bundle Adjustment 116(1)
7.3.2 Algebraic Elimination of 3D Points Using Three-View Constraints 117(1)
7.3.3 Incremental Smoothing 118(2)
7.4 Probabilistic Analysis of Light Bundle Adjustment 120(7)
7.4.1 Datasets for Evaluation and Implementation 122(1)
7.4.2 Evaluation 123(4)
7.5 Application of iLBA to Robotic Navigation 127(7)
7.5.1 Formulation 128(1)
7.5.2 Equivalent IMU Factor 129(2)
7.5.3 Evaluation in a Simulated Aerial Scenario 131(3)
7.6 Conclusions and Future Work 134(3)
References 135(2)
8 Online Learning of Vision-Based Robot Control during Autonomous Operation 137(20)
Kristoffer Öfjäll
Michael Felsberg
8.1 Introduction 137(2)
8.2 Previous Work 139(3)
8.2.1 Inverse Kinematics 139(1)
8.2.2 Active Learning and Exploration 140(1)
8.2.3 Visual Autonomous Navigation 140(1)
8.2.4 Locally Weighted Projection Regression 141(1)
8.2.5 Numerical Optimization 142(1)
8.3 Proposed Method 142(3)
8.3.1 Learning Inverse Kinematics by Exploration 143(1)
8.3.2 Learning Autonomous Driving from Demonstration 144(1)
8.4 Evaluation 145(8)
8.4.1 Learning from Exploration 145(4)
8.4.2 Learning from Demonstration 149(4)
8.5 Conclusions 153(4)
References 154(3)
9 3D Space Automated Aligning Task Performed by a Microassembly System Based on Multi-channel Microscope Vision Systems 157(24)
Zhengtao Zhang
De Xu
Juan Zhang
9.1 Introduction 157(1)
9.2 System Structure 158(2)
9.3 Feature Selection and Relative Pose Calculation 160(1)
9.4 Coarse-to-Fine Alignment Strategy with Active Zooming Algorithm 161(5)
9.4.1 Coarse Alignment 161(1)
9.4.2 Active Zooming 162(3)
9.4.3 Fine Alignment 165(1)
9.5 Vision Servo Control Based on Jacobian 166(5)
9.5.1 Image Jacobian Matrix Derivation 167(1)
9.5.2 Feature Selection for the Image Jacobian Control 168(1)
9.5.3 Online Self-calibration for Jacobian 169(1)
9.5.4 Controller Design for Image Servo Based on Jacobian 170(1)
9.6 Experiments and Results 171(8)
9.6.1 Hardware Setup 171(1)
9.6.2 Error Analysis for the Position-Based Method 171(6)
9.6.3 Image Servo Control Based on Jacobian Matrix 177(2)
9.7 Conclusion 179(2)
References 179(2)
10 Intensity-Difference Based Monocular Visual Odometry for Planetary Rovers 181(18)
Geovanni Martinez
10.1 Introduction 181(3)
10.2 Monocular Visual Odometry Algorithm 184(7)
10.2.1 Planet's Ground Surface Model 185(1)
10.2.2 Observation Points 185(1)
10.2.3 Conditional Probability of the Intensity Differences 186(4)
10.2.4 Maximizing the Conditional Probability 190(1)
10.2.5 Planet's Ground Surface Model Initialization 190(1)
10.3 Experimental Results 191(4)
10.4 Summary and Conclusions 195(1)
10.5 Future Work 195(4)
References 197(2)
Author Index 199