
Error Estimation for Pattern Recognition [Hardback]

Ulisses M. Braga-Neto, Edward R. Dougherty (Center for Imaging Science, Rochester Institute of Technology)
  • Format: Hardback, 336 pages, height x width x depth: 239x160x31 mm, weight: 703 g
  • Publication date: 29-Jun-2015
  • Publisher: Wiley-IEEE Press
  • ISBN-10: 1118999738
  • ISBN-13: 9781118999738
  • Hardback
  • Price: 156,08 €
This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to distributional and Bayesian theory, it covers important topics and essential issues pertaining to the scientific validity of pattern classification.

Error Estimation for Pattern Recognition focuses on error estimation, which is a broad and poorly understood topic that reaches all research areas using pattern classification. It includes model-based approaches and discussions of newer error estimators such as bolstered and Bayesian estimators. This book was motivated by the application of pattern recognition to high-throughput data with limited replicates, which is a basic problem now appearing in many areas. The first two chapters cover basic issues in classification error estimation, such as definitions, test-set error estimation, and training-set error estimation. The remaining chapters in this book cover results on the performance and representation of training-set error estimators for various pattern classifiers.
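The contrast between training-set estimators mentioned above can be illustrated with a minimal sketch (not taken from the book): a toy nearest-mean classifier whose resubstitution error, computed on its own training data, tends to be optimistically biased relative to leave-one-out cross-validation. The classifier, the sample, and all function names here are illustrative assumptions, not the book's notation.

```python
# Illustrative sketch: resubstitution vs. leave-one-out error estimation
# for a nearest-mean classifier on a tiny one-dimensional sample.

def nearest_mean_classifier(train):
    """Fit a nearest-mean classifier on (x, label) pairs with labels 0/1.

    Returns a function mapping a scalar feature x to a predicted label.
    """
    means = {}
    for lbl in (0, 1):
        pts = [x for x, y in train if y == lbl]
        means[lbl] = sum(pts) / len(pts)
    return lambda x: min((0, 1), key=lambda lbl: abs(x - means[lbl]))

def resubstitution_error(sample):
    """Error of the classifier evaluated on its own training sample."""
    clf = nearest_mean_classifier(sample)
    return sum(clf(x) != y for x, y in sample) / len(sample)

def loo_error(sample):
    """Leave-one-out cross-validation error estimate."""
    errors = 0
    for i, (x, y) in enumerate(sample):
        clf = nearest_mean_classifier(sample[:i] + sample[i + 1:])
        errors += clf(x) != y
    return errors / len(sample)

sample = [(0.1, 0), (0.4, 0), (0.9, 0), (1.1, 1), (1.6, 1), (2.0, 1)]
print(resubstitution_error(sample))  # 0.0 -- optimistically biased
print(loo_error(sample))             # about 0.167 -- one held-out point misclassified
```

On this toy sample the resubstitution estimate is zero while leave-one-out reports a nonzero error, which is the small-sample bias phenomenon the book analyzes in depth.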

Additional features of the book include:

  • The latest results on the accuracy of error estimation
  • Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches
  • Highly interactive computer-based exercises and end-of-chapter problems

This is the first book exclusively about error estimation for pattern recognition.

Ulisses M. Braga Neto is an Associate Professor in the Department of Electrical and Computer Engineering at Texas A&M University, USA. He received his PhD in Electrical and Computer Engineering from The Johns Hopkins University. Dr. Braga Neto received an NSF CAREER Award for his work on error estimation for pattern recognition with applications in genomic signal processing. He is an IEEE Senior Member.

Edward R. Dougherty is a Distinguished Professor, Robert F. Kennedy '26 Chair, and Scientific Director at the Center for Bioinformatics and Genomic Systems Engineering at Texas A&M University, USA. He is a fellow of both the IEEE and SPIE, and he has received the SPIE President's Award. Dr. Dougherty has authored several books including Epistemology of the Cell: A Systems Perspective on Biological Knowledge and Random Processes for Image and Signal Processing (Wiley-IEEE Press).
Table of Contents

Preface
Acknowledgments
List of Symbols
1 Classification
  1.1 Classifiers
  1.2 Population-Based Discriminants
  1.3 Classification Rules
  1.4 Sample-Based Discriminants
    1.4.1 Quadratic Discriminants
    1.4.2 Linear Discriminants
    1.4.3 Kernel Discriminants
  1.5 Histogram Rule
  1.6 Other Classification Rules
    1.6.1 k-Nearest-Neighbor Rules
    1.6.2 Support Vector Machines
    1.6.3 Neural Networks
    1.6.4 Classification Trees
    1.6.5 Rank-Based Rules
  1.7 Feature Selection
  Exercises
2 Error Estimation
  2.1 Error Estimation Rules
  2.2 Performance Metrics
    2.2.1 Deviation Distribution
    2.2.2 Consistency
    2.2.3 Conditional Expectation
    2.2.4 Linear Regression
    2.2.5 Confidence Intervals
  2.3 Test-Set Error Estimation
  2.4 Resubstitution
  2.5 Cross-Validation
  2.6 Bootstrap
  2.7 Convex Error Estimation
  2.8 Smoothed Error Estimation
  2.9 Bolstered Error Estimation
    2.9.1 Gaussian-Bolstered Error Estimation
    2.9.2 Choosing the Amount of Bolstering
    2.9.3 Calibrating the Amount of Bolstering
  Exercises
3 Performance Analysis
  3.1 Empirical Deviation Distribution
  3.2 Regression
  3.3 Impact on Feature Selection
  3.4 Multiple-Data-Set Reporting Bias
  3.5 Multiple-Rule Bias
  3.6 Performance Reproducibility
  Exercises
4 Error Estimation for Discrete Classification
  4.1 Error Estimators
    4.1.1 Resubstitution Error
    4.1.2 Leave-One-Out Error
    4.1.3 Cross-Validation Error
    4.1.4 Bootstrap Error
  4.2 Small-Sample Performance
    4.2.1 Bias
    4.2.2 Variance
    4.2.3 Deviation Variance, RMS, and Correlation
    4.2.4 Numerical Example
    4.2.5 Complete Enumeration Approach
  4.3 Large-Sample Performance
  Exercises
5 Distribution Theory
  5.1 Mixture Sampling Versus Separate Sampling
  5.2 Sample-Based Discriminants Revisited
  5.3 True Error
  5.4 Error Estimators
    5.4.1 Resubstitution Error
    5.4.2 Leave-One-Out Error
    5.4.3 Cross-Validation Error
    5.4.4 Bootstrap Error
  5.5 Expected Error Rates
    5.5.1 True Error
    5.5.2 Resubstitution Error
    5.5.3 Leave-One-Out Error
    5.5.4 Cross-Validation Error
    5.5.5 Bootstrap Error
  5.6 Higher-Order Moments of Error Rates
    5.6.1 True Error
    5.6.2 Resubstitution Error
    5.6.3 Leave-One-Out Error
  5.7 Sampling Distribution of Error Rates
    5.7.1 Resubstitution Error
    5.7.2 Leave-One-Out Error
  Exercises
6 Gaussian Distribution Theory: Univariate Case
  6.1 Historical Remarks
  6.2 Univariate Discriminant
  6.3 Expected Error Rates
    6.3.1 True Error
    6.3.2 Resubstitution Error
    6.3.3 Leave-One-Out Error
    6.3.4 Bootstrap Error
  6.4 Higher-Order Moments of Error Rates
    6.4.1 True Error
    6.4.2 Resubstitution Error
    6.4.3 Leave-One-Out Error
    6.4.4 Numerical Example
  6.5 Sampling Distributions of Error Rates
    6.5.1 Marginal Distribution of Resubstitution Error
    6.5.2 Marginal Distribution of Leave-One-Out Error
    6.5.3 Joint Distribution of Estimated and True Errors
  Exercises
7 Gaussian Distribution Theory: Multivariate Case
  7.1 Multivariate Discriminants
  7.2 Small-Sample Methods
    7.2.1 Statistical Representations
    7.2.2 Computational Methods
  7.3 Large-Sample Methods
    7.3.1 Expected Error Rates
    7.3.2 Second-Order Moments of Error Rates
  Exercises
8 Bayesian MMSE Error Estimation
  8.1 The Bayesian MMSE Error Estimator
  8.2 Sample-Conditioned MSE
  8.3 Discrete Classification
  8.4 Linear Classification of Gaussian Distributions
  8.5 Consistency
  8.6 Calibration
  8.7 Concluding Remarks
  Exercises
A Basic Probability Review
  A.1 Sample Spaces and Events
  A.2 Definition of Probability
  A.3 Borel-Cantelli Lemmas
  A.4 Conditional Probability
  A.5 Random Variables
  A.6 Discrete Random Variables
  A.7 Expectation
  A.8 Conditional Expectation
  A.9 Variance
  A.10 Vector Random Variables
  A.11 The Multivariate Gaussian
  A.12 Convergence of Random Sequences
  A.13 Limiting Theorems
B Vapnik-Chervonenkis Theory
  B.1 Shatter Coefficients
  B.2 The VC Dimension
  B.3 VC Theory of Classification
    B.3.1 Linear Classification Rules
    B.3.2 kNN Classification Rule
    B.3.3 Classification Trees
    B.3.4 Nonlinear SVMs
    B.3.5 Neural Networks
    B.3.6 Histogram Rules
  B.4 Vapnik-Chervonenkis Theorem
C Double Asymptotics
Bibliography
Author Index
Subject Index