|
|
1 Introduction


1 | (22)
|
1.1 Relevance of Nonlinear System Identification |
|
|
1 | (5) |
|
1.1.1 Linear or Nonlinear? |
|
|
2 | (1) |
|
|
2 | (2) |
|
|
4 | (1) |
|
|
4 | (1) |
|
|
5 | (1) |
|
|
5 | (1) |
|
|
6 | (1) |
|
1.2 Views on Nonlinear System Identification |
|
|
6 | (1) |
|
1.3 Tasks in Nonlinear System Identification |
|
|
7 | (10) |
|
1.3.1 Choice of the Model Inputs |
|
|
9 | (2) |
|
1.3.2 Choice of the Excitation Signals |
|
|
11 | (1) |
|
1.3.3 Choice of the Model Architecture |
|
|
11 | (1) |
|
1.3.4 Choice of the Dynamics Representation |
|
|
12 | (1) |
|
1.3.5 Choice of the Model Order |
|
|
13 | (1) |
|
1.3.6 Choice of the Model Structure and Complexity |
|
|
13 | (1) |
|
1.3.7 Choice of the Model Parameters |
|
|
14 | (1) |
|
|
14 | (1) |
|
1.3.9 The Role of Fiddle Parameters |
|
|
15 | (2) |
|
1.4 White Box, Black Box, and Gray Box Models |
|
|
17 | (1) |
|
1.5 Outline of the Book and Some Reading Suggestions |
|
|
18 | (2) |
|
|
20 | (3) |
|
|
|
2 Introduction to Optimization |
|
|
23 | (12) |
|
2.1 Overview of Optimization Techniques |
|
|
25 | (1) |
|
|
25 | (3) |
|
2.3 Loss Functions for Supervised Methods |
|
|
28 | (6) |
|
2.3.1 Maximum Likelihood Method |
|
|
30 | (2) |
|
2.3.2 Maximum A Posteriori and Bayes Method |
|
|
32 | (2) |
|
2.4 Loss Functions for Unsupervised Methods |
|
|
34 | (1) |
|
|
35 | (58) |
|
|
37 | (37) |
|
3.1.1 Covariance Matrix of the Parameter Estimate |
|
|
45 | (2) |
|
|
47 | (3) |
|
3.1.3 Orthogonal Regressors |
|
|
50 | (1) |
|
3.1.4 Regularization/Ridge Regression |
|
|
51 | (6) |
|
3.1.5 Ridge Regression: Alternative Formulation |
|
|
57 | (3) |
|
|
60 | (1) |
|
|
61 | (2) |
|
3.1.8 Weighted Least Squares (WLS) |
|
|
63 | (2) |
|
|
65 | (1) |
|
3.1.10 Least Squares with Equality Constraints |
|
|
66 | (1) |
|
|
67 | (3) |
|
3.1.12 Effective Number of Parameters |
|
|
70 | (1) |
|
|
71 | (3) |
|
3.2 Recursive Least Squares (RLS) |
|
|
74 | (5) |
|
3.2.1 Reducing the Computational Complexity |
|
|
75 | (2) |
|
3.2.2 Tracking Time-Variant Processes |
|
|
77 | (1) |
|
3.2.3 Relationship Between the RLS and the Kalman Filter |
|
|
78 | (1) |
|
3.3 Linear Optimization with Inequality Constraints |
|
|
79 | (1) |
|
|
80 | (10) |
|
3.4.1 Methods for Subset Selection |
|
|
81 | (4) |
|
3.4.2 Orthogonal Least Squares (OLS) for Forward Selection |
|
|
85 | (4) |
|
3.4.3 Ridge Regression or Subset Selection? |
|
|
89 | (1) |
|
|
90 | (1) |
|
|
91 | (2) |
|
4 Nonlinear Local Optimization |
|
|
93 | (36) |
|
4.1 Batch and Sample Adaptation |
|
|
95 | (4) |
|
4.1.1 Mini-Batch Adaptation |
|
|
97 | (1) |
|
|
97 | (2) |
|
|
99 | (2) |
|
4.3 Direct Search Algorithms |
|
|
101 | (4) |
|
4.3.1 Simplex Search Method |
|
|
101 | (3) |
|
4.3.2 Hooke-Jeeves Method |
|
|
104 | (1) |
|
4.4 General Gradient-Based Algorithms |
|
|
105 | (12) |
|
|
106 | (2) |
|
4.4.2 Finite Difference Techniques |
|
|
108 | (1) |
|
|
109 | (2) |
|
|
111 | (2) |
|
4.4.5 Quasi-Newton Methods |
|
|
113 | (2) |
|
4.4.6 Conjugate Gradient Methods |
|
|
115 | (2) |
|
4.5 Nonlinear Least Squares Problems |
|
|
117 | (5) |
|
4.5.1 Gauss-Newton Method |
|
|
119 | (2) |
|
4.5.2 Levenberg-Marquardt Method |
|
|
121 | (1) |
|
4.6 Constrained Nonlinear Optimization |
|
|
122 | (4) |
|
|
126 | (2) |
|
|
128 | (1) |
|
5 Nonlinear Global Optimization |
|
|
129 | (24) |
|
5.1 Simulated Annealing (SA) |
|
|
132 | (4) |
|
5.2 Evolutionary Algorithms (EA) |
|
|
136 | (13) |
|
5.2.1 Evolution Strategies (ES) |
|
|
139 | (4) |
|
5.2.2 Genetic Algorithms (GA) |
|
|
143 | (4) |
|
5.2.3 Genetic Programming (GP) |
|
|
147 | (2) |
|
5.3 Branch and Bound (B&B) |
|
|
149 | (2) |
|
|
151 | (1) |
|
|
151 | (1) |
|
|
152 | (1) |
|
6 Unsupervised Learning Techniques |
|
|
153 | (22) |
|
6.1 Principal Component Analysis (PCA) |
|
|
155 | (3) |
|
6.2 Clustering Techniques |
|
|
158 | (14) |
|
|
160 | (3) |
|
6.2.2 Fuzzy C-Means (FCM) Algorithm |
|
|
163 | (2) |
|
6.2.3 Gustafson-Kessel Algorithm |
|
|
165 | (1) |
|
6.2.4 Kohonen's Self-Organizing Map (SOM) |
|
|
166 | (3) |
|
|
169 | (1) |
|
6.2.6 Adaptive Resonance Theory (ART) Network |
|
|
170 | (1) |
|
6.2.7 Incorporating Information About the Output |
|
|
171 | (1) |
|
|
172 | (1) |
|
|
173 | (2) |
|
7 Model Complexity Optimization |
|
|
175 | (58) |
|
|
176 | (1) |
|
7.2 Bias/Variance Tradeoff |
|
|
177 | (13) |
|
|
178 | (2) |
|
|
180 | (3) |
|
|
183 | (7) |
|
7.3 Evaluating the Test Error and Alternatives |
|
|
190 | (15) |
|
7.3.1 Training, Validation, and Test Data |
|
|
191 | (1) |
|
7.3.2 Cross-Validation (CV) |
|
|
192 | (5) |
|
7.3.3 Information Criteria |
|
|
197 | (3) |
|
7.3.4 Multi-Objective Optimization |
|
|
200 | (2) |
|
|
202 | (2) |
|
7.3.6 Correlation-Based Methods |
|
|
204 | (1) |
|
7.4 Explicit Structure Optimization |
|
|
205 | (2) |
|
7.5 Regularization: Implicit Structure Optimization |
|
|
207 | (11) |
|
7.5.1 Effective Parameters |
|
|
208 | (1) |
|
7.5.2 Regularization by Non-Smoothness Penalties |
|
|
209 | (2) |
|
7.5.3 Regularization by Early Stopping |
|
|
211 | (2) |
|
7.5.4 Regularization by Constraints |
|
|
213 | (2) |
|
7.5.5 Regularization by Staggered Optimization |
|
|
215 | (1) |
|
7.5.6 Regularization by Local Optimization |
|
|
216 | (2) |
|
7.6 Structured Models for Complexity Reduction |
|
|
218 | (11) |
|
7.6.1 Curse of Dimensionality |
|
|
218 | (3) |
|
|
221 | (4) |
|
7.6.3 Projection-Based Structures |
|
|
225 | (1) |
|
7.6.4 Additive Structures |
|
|
226 | (1) |
|
7.6.5 Hierarchical Structures |
|
|
227 | (1) |
|
7.6.6 Input Space Decomposition with Tree Structures |
|
|
228 | (1) |
|
|
229 | (1) |
|
|
230 | (3) |
|
|
233 | (6) |
|
|
|
9 Introduction to Static Models |
|
|
239 | (10) |
|
|
240 | (1) |
|
9.2 Basis Function Formulation |
|
|
241 | (4) |
|
9.2.1 Global and Local Basis Functions |
|
|
241 | (2) |
|
9.2.2 Linear and Nonlinear Parameters |
|
|
243 | (2) |
|
9.3 Extended Basis Function Formulation |
|
|
245 | (1) |
|
|
246 | (1) |
|
|
247 | (2) |
|
10 Linear, Polynomial, and Look-Up Table Models |
|
|
249 | (30) |
|
|
249 | (2) |
|
|
251 | (11) |
|
10.2.1 Regularized Polynomials |
|
|
254 | (4) |
|
10.2.2 Orthogonal Polynomials |
|
|
258 | (3) |
|
10.2.3 Summary Polynomials |
|
|
261 | (1) |
|
10.3 Look-Up Table Models |
|
|
262 | (14) |
|
10.3.1 One-Dimensional Look-Up Tables |
|
|
263 | (2) |
|
10.3.2 Two-Dimensional Look-Up Tables |
|
|
265 | (3) |
|
10.3.3 Optimization of the Heights |
|
|
268 | (1) |
|
10.3.4 Optimization of the Grid |
|
|
269 | (2) |
|
10.3.5 Optimization of the Complete Look-Up Table |
|
|
271 | (1) |
|
10.3.6 Incorporation of Constraints |
|
|
271 | (3) |
|
10.3.7 Properties of Look-Up Table Models |
|
|
274 | (2) |
|
|
276 | (1) |
|
|
277 | (2) |
|
|
279 | (68) |
|
11.1 Construction Mechanisms |
|
|
282 | (5) |
|
11.1.1 Ridge Construction |
|
|
283 | (1) |
|
11.1.2 Radial Construction |
|
|
283 | (3) |
|
11.1.3 Tensor Product Construction |
|
|
286 | (1) |
|
11.2 Multilayer Perceptron (MLP) Network |
|
|
287 | (22) |
|
|
288 | (2) |
|
|
290 | (3) |
|
|
293 | (1) |
|
|
294 | (4) |
|
11.2.5 Simulation Examples |
|
|
298 | (2) |
|
|
300 | (2) |
|
11.2.7 Projection Pursuit Regression (PPR) |
|
|
302 | (1) |
|
11.2.8 Multiple Hidden Layers |
|
|
303 | (2) |
|
|
305 | (4) |
|
11.3 Radial Basis Function (RBF) Networks |
|
|
309 | (24) |
|
|
309 | (4) |
|
|
313 | (2) |
|
|
315 | (8) |
|
11.3.4 Simulation Examples |
|
|
323 | (2) |
|
|
325 | (3) |
|
11.3.6 Regularization Theory |
|
|
328 | (2) |
|
11.3.7 Normalized Radial Basis Function (NRBF) Networks |
|
|
330 | (3) |
|
11.4 Other Neural Networks |
|
|
333 | (10) |
|
11.4.1 General Regression Neural Network (GRNN) |
|
|
334 | (1) |
|
11.4.2 Cerebellar Model Articulation Controller (CMAC) |
|
|
335 | (4) |
|
|
339 | (1) |
|
11.4.4 Just-In-Time Models |
|
|
340 | (3) |
|
|
343 | (1) |
|
|
344 | (3) |
|
12 Fuzzy and Neuro-Fuzzy Models |
|
|
347 | (46) |
|
|
348 | (4) |
|
12.1.1 Membership Functions |
|
|
349 | (1) |
|
|
350 | (1) |
|
|
351 | (1) |
|
|
352 | (1) |
|
12.2 Types of Fuzzy Systems |
|
|
352 | (7) |
|
12.2.1 Linguistic Fuzzy Systems |
|
|
353 | (2) |
|
12.2.2 Singleton Fuzzy Systems |
|
|
355 | (2) |
|
12.2.3 Takagi-Sugeno Fuzzy Systems |
|
|
357 | (2) |
|
12.3 Neuro-Fuzzy (NF) Networks |
|
|
359 | (12) |
|
12.3.1 Fuzzy Basis Functions |
|
|
359 | (2) |
|
12.3.2 Equivalence Between RBF Networks and Fuzzy Models |
|
|
361 | (1) |
|
|
362 | (3) |
|
12.3.4 Interpretation of Neuro-Fuzzy Networks |
|
|
365 | (5) |
|
12.3.5 Incorporating and Preserving Prior Knowledge |
|
|
370 | (1) |
|
12.3.6 Simulation Examples |
|
|
371 | (1) |
|
12.4 Neuro-Fuzzy Learning Schemes |
|
|
371 | (18) |
|
12.4.1 Nonlinear Local Optimization |
|
|
373 | (1) |
|
12.4.2 Nonlinear Global Optimization |
|
|
374 | (1) |
|
12.4.3 Orthogonal Least Squares Learning |
|
|
375 | (2) |
|
12.4.4 Fuzzy Rule Extraction by a Genetic Algorithm (FUREGA) |
|
|
377 | (10) |
|
12.4.5 Adaptive Spline Modeling of Observation Data (ASMOD) |
|
|
387 | (2) |
|
|
389 | (1) |
|
|
390 | (3) |
|
13 Local Linear Neuro-Fuzzy Models: Fundamentals |
|
|
393 | (54) |
|
|
394 | (11) |
|
13.1.1 Illustration of Local Linear Neuro-Fuzzy Models |
|
|
396 | (4) |
|
13.1.2 Interpretation of the Local Linear Model Offsets |
|
|
400 | (1) |
|
13.1.3 Interpretation as Takagi-Sugeno Fuzzy System |
|
|
401 | (3) |
|
13.1.4 Interpretation as Extended NRBF Network |
|
|
404 | (1) |
|
13.2 Parameter Optimization of the Rule Consequents |
|
|
405 | (14) |
|
|
405 | (2) |
|
|
407 | (3) |
|
13.2.3 Global Versus Local Estimation |
|
|
410 | (5) |
|
|
415 | (1) |
|
13.2.5 Regularized Regression |
|
|
416 | (1) |
|
|
417 | (2) |
|
13.3 Structure Optimization of the Rule Premises |
|
|
419 | (25) |
|
13.3.1 Local Linear Model Tree (LOLIMOT) Algorithm |
|
|
421 | (9) |
|
13.3.2 Different Objectives for Structure and Parameter Optimization |
|
|
430 | (2) |
|
13.3.3 Smoothness Optimization |
|
|
432 | (2) |
|
13.3.4 Splitting Ratio Optimization |
|
|
434 | (2) |
|
13.3.5 Merging of Local Models |
|
|
436 | (2) |
|
13.3.6 Principal Component Analysis for Preprocessing |
|
|
438 | (2) |
|
13.3.7 Models with Multiple Outputs |
|
|
440 | (4) |
|
|
444 | (1) |
|
|
445 | (2) |
|
14 Local Linear Neuro-Fuzzy Models: Advanced Aspects |
|
|
447 | (136) |
|
14.1 Different Input Spaces for Rule Premises and Consequents |
|
|
448 | (6) |
|
14.1.1 Identification of Processes with Direction-Dependent Behavior |
|
|
451 | (3) |
|
14.1.2 Piecewise Affine (PWA) Models |
|
|
454 | (1) |
|
14.2 More Complex Local Models |
|
|
454 | (8) |
|
14.2.1 From Local Neuro-Fuzzy Models to Polynomials |
|
|
454 | (3) |
|
14.2.2 Local Quadratic Models for Input Optimization |
|
|
457 | (3) |
|
14.2.3 Different Types of Local Models |
|
|
460 | (2) |
|
14.3 Structure Optimization of the Rule Consequents |
|
|
462 | (4) |
|
14.4 Interpolation and Extrapolation Behavior |
|
|
466 | (8) |
|
14.4.1 Interpolation Behavior |
|
|
466 | (3) |
|
14.4.2 Extrapolation Behavior |
|
|
469 | (5) |
|
14.5 Global and Local Linearization |
|
|
474 | (4) |
|
|
478 | (11) |
|
14.6.1 Online Adaptation of the Rule Consequents |
|
|
480 | (6) |
|
14.6.2 Online Construction of the Rule Premise Structure |
|
|
486 | (3) |
|
14.7 Oblique Partitioning |
|
|
489 | (7) |
|
14.7.1 Smoothness Determination |
|
|
489 | (1) |
|
14.7.2 Hinging Hyperplanes |
|
|
490 | (2) |
|
14.7.3 Smooth Hinging Hyperplanes |
|
|
492 | (2) |
|
14.7.4 Hinging Hyperplane Trees (HHT) |
|
|
494 | (2) |
|
14.8 Hierarchical Local Model Tree (HILOMOT) Algorithm |
|
|
496 | (40) |
|
14.8.1 Forming the Partition of Unity |
|
|
497 | (3) |
|
14.8.2 Split Parameter Optimization |
|
|
500 | (5) |
|
14.8.3 Building up the Hierarchy |
|
|
505 | (5) |
|
14.8.4 Smoothness Adjustment |
|
|
510 | (3) |
|
14.8.5 Separable Nonlinear Least Squares |
|
|
513 | (5) |
|
14.8.6 Analytic Gradient Derivation |
|
|
518 | (8) |
|
14.8.7 Analyzing Input Relevance from Partitioning |
|
|
526 | (4) |
|
14.8.8 HILOMOT Versus LOLIMOT |
|
|
530 | (6) |
|
14.9 Errorbars, Design of Excitation Signals, and Active Learning |
|
|
536 | (7) |
|
|
537 | (3) |
|
14.9.2 Detecting Extrapolation |
|
|
540 | (1) |
|
14.9.3 Design of Excitation Signals |
|
|
541 | (2) |
|
14.10 Design of Experiments |
|
|
543 | (24) |
|
14.10.1 Unsupervised Methods |
|
|
543 | (3) |
|
14.10.2 Model Variance-Oriented Methods |
|
|
546 | (4) |
|
14.10.3 Model Bias-Oriented Methods |
|
|
550 | (4) |
|
14.10.4 Active Learning with HILOMOT DoE |
|
|
554 | (13) |
|
14.11 Bagging Local Model Trees |
|
|
567 | (8) |
|
|
569 | (1) |
|
14.11.2 Bagging with HILOMOT |
|
|
569 | (2) |
|
14.11.3 Bootstrapping for Confidence Assessment |
|
|
571 | (2) |
|
|
573 | (2) |
|
14.12 Summary and Conclusions |
|
|
575 | (5) |
|
|
580 | (3) |
|
15 Input Selection for Local Model Approaches |
|
|
583 | (56) |
|
|
586 | (4) |
|
15.1.1 Test Process One (TP1) |
|
|
586 | (1) |
|
15.1.2 Test Process Two (TP2) |
|
|
587 | (1) |
|
15.1.3 Test Process Three (TP3) |
|
|
588 | (1) |
|
15.1.4 Test Process Four (TP4) |
|
|
588 | (2) |
|
15.2 Mixed Wrapper-Embedded Input Selection Approach: Authored by Julian Belz |
|
|
590 | (14) |
|
15.2.1 Investigation with Test Processes |
|
|
593 | (1) |
|
|
594 | (1) |
|
15.2.3 Extensive Simulation Studies |
|
|
595 | (9) |
|
15.3 Regularization-Based Input Selection Approach: Authored by Julian Belz |
|
|
604 | (16) |
|
15.3.1 Normalized L1 Split Regularization
|
|
606 | (5) |
|
15.3.2 Investigation with Test Processes |
|
|
611 | (9) |
|
15.4 Embedded Approach: Authored by Julian Belz |
|
|
620 | (6) |
|
15.4.1 Partition Analysis |
|
|
621 | (2) |
|
15.4.2 Investigation with Test Processes |
|
|
623 | (3) |
|
15.5 Visualization: Partial Dependence Plots |
|
|
626 | (5) |
|
15.5.1 Investigation with Test Processes |
|
|
628 | (3) |
|
15.6 Miles per Gallon Data Set |
|
|
631 | (8) |
|
15.6.1 Mixed Wrapper-Embedded Input Selection |
|
|
632 | (1) |
|
15.6.2 Regularization-Based Input Selection |
|
|
633 | (2) |
|
15.6.3 Visualization: Partial Dependence Plot |
|
|
635 | (1) |
|
15.6.4 Critical Assessment of Partial Dependence Plots |
|
|
636 | (3) |
|
16 Gaussian Process Models (GPMs) |
|
|
639 | (70) |
|
16.1 Overview on Kernel Methods |
|
|
640 | (4) |
|
|
644 | (1) |
|
16.1.2 Non-LS Kernel Methods |
|
|
644 | (1) |
|
|
644 | (2) |
|
16.3 Kernel Ridge Regression |
|
|
646 | (3) |
|
16.3.1 Transition to Kernels |
|
|
647 | (2) |
|
16.4 Regularizing Parameters and Functions |
|
|
649 | (4) |
|
16.4.1 Discrepancy in Penalty Terms |
|
|
651 | (2) |
|
16.5 Reproducing Kernel Hilbert Spaces (RKHS) |
|
|
653 | (6) |
|
|
653 | (1) |
|
16.5.2 RKHS Objective and Solution |
|
|
654 | (2) |
|
16.5.3 Equivalent Kernels and Locality |
|
|
656 | (2) |
|
16.5.4 Two Points of View |
|
|
658 | (1) |
|
16.6 Gaussian Processes/Kriging |
|
|
659 | (18) |
|
|
|
|
660 | (1) |
|
|
661 | (5) |
|
|
666 | (5) |
|
16.6.5 Incorporating Output Noise |
|
|
671 | (1) |
|
|
672 | (1) |
|
16.6.7 Incorporating a Base Model |
|
|
673 | (2) |
|
16.6.8 Relationship to RBF Networks |
|
|
675 | (1) |
|
16.6.9 High-Dimensional Kernels |
|
|
676 | (1) |
|
|
677 | (29) |
|
16.7.1 Influence of the Hyperparameters |
|
|
678 | (8) |
|
16.7.2 Optimization of the Hyperparameters |
|
|
686 | (6) |
|
16.7.3 Marginal Likelihood |
|
|
692 | (12) |
|
16.7.4 A Note on the Prior Variance |
|
|
704 | (2) |
|
|
706 | (2) |
|
|
708 | (1) |
|
|
709 | (6) |
|
|
|
18 Linear Dynamic System Identification |
|
|
715 | (116) |
|
18.1 Overview of Linear System Identification |
|
|
716 | (1) |
|
|
717 | (4) |
|
18.3 General Model Structure |
|
|
721 | (16) |
|
18.3.1 Terminology and Classification |
|
|
723 | (6) |
|
|
729 | (4) |
|
18.3.3 Some Remarks on the Optimal Predictor |
|
|
733 | (2) |
|
18.3.4 Prediction Error Methods |
|
|
735 | (2) |
|
|
737 | (4) |
|
18.4.1 Autoregressive (AR) |
|
|
738 | (1) |
|
18.4.2 Moving Average (MA) |
|
|
739 | (1) |
|
18.4.3 Autoregressive Moving Average (ARMA) |
|
|
740 | (1) |
|
18.5 Models with Output Feedback |
|
|
741 | (29) |
|
18.5.1 Autoregressive with Exogenous Input (ARX) |
|
|
741 | (11) |
|
18.5.2 Autoregressive Moving Average with Exogenous Input (ARMAX) |
|
|
752 | (5) |
|
18.5.3 Autoregressive Autoregressive with Exogenous Input (ARARX) |
|
|
757 | (3) |
|
|
760 | (4) |
|
|
764 | (2) |
|
18.5.6 State Space Models |
|
|
766 | (2) |
|
18.5.7 Simulation Example |
|
|
768 | (2) |
|
18.6 Models Without Output Feedback |
|
|
770 | (33) |
|
18.6.1 Finite Impulse Response (FIR) |
|
|
771 | (4) |
|
18.6.2 Regularized FIR Models |
|
|
775 | (4) |
|
18.6.3 Bias and Variance of Regularized FIR Models |
|
|
779 | (1) |
|
18.6.4 Impulse Response Preservation (IRP) FIR Approach |
|
|
780 | (10) |
|
18.6.5 Orthonormal Basis Functions (OBF) |
|
|
790 | (9) |
|
18.6.6 Simulation Example |
|
|
799 | (4) |
|
18.7 Some Advanced Aspects |
|
|
803 | (8) |
|
18.7.1 Initial Conditions |
|
|
803 | (2) |
|
|
805 | (1) |
|
18.7.3 Frequency-Domain Interpretation |
|
|
806 | (2) |
|
18.7.4 Relationship Between Noise Model and Filtering |
|
|
808 | (1) |
|
|
809 | (2) |
|
18.8 Recursive Algorithms |
|
|
811 | (5) |
|
18.8.1 Recursive Least Squares (RLS) Method |
|
|
812 | (1) |
|
18.8.2 Recursive Instrumental Variables (RIV) Method |
|
|
812 | (2) |
|
18.8.3 Recursive Extended Least Squares (RELS) Method |
|
|
814 | (1) |
|
18.8.4 Recursive Prediction Error Methods (RPEM) |
|
|
815 | (1) |
|
18.9 Determination of Dynamic Orders |
|
|
816 | (1) |
|
18.10 Multivariate Systems |
|
|
817 | (6) |
|
18.10.1 P-Canonical Model |
|
|
819 | (1) |
|
18.10.2 Matrix Polynomial Model |
|
|
820 | (3) |
|
|
823 | (1) |
|
18.11 Closed-Loop Identification |
|
|
823 | (5) |
|
|
824 | (2) |
|
|
826 | (1) |
|
18.11.3 Identification for Control |
|
|
827 | (1) |
|
|
828 | (1) |
|
|
829 | (2) |
|
19 Nonlinear Dynamic System Identification |
|
|
831 | (62) |
|
19.1 From Linear to Nonlinear System Identification |
|
|
832 | (2) |
|
|
834 | (17) |
|
19.2.1 Illustration of the External Dynamics Approach |
|
|
834 | (7) |
|
19.2.2 Series-Parallel and Parallel Models |
|
|
841 | (2) |
|
19.2.3 Nonlinear Dynamic Input/Output Model Classes |
|
|
843 | (6) |
|
19.2.4 Restrictions of Nonlinear Input/Output Models |
|
|
849 | (2) |
|
|
851 | (1) |
|
19.4 Parameter Scheduling Approach |
|
|
851 | (1) |
|
19.5 Training Recurrent Structures |
|
|
852 | (4) |
|
19.5.1 Backpropagation-Through-Time (BPTT) Algorithm |
|
|
853 | (2) |
|
19.5.2 Real-Time Recurrent Learning |
|
|
855 | (1) |
|
19.6 Multivariate Systems |
|
|
856 | (3) |
|
19.6.1 Issues with Multiple Inputs |
|
|
857 | (2) |
|
|
859 | (14) |
|
19.7.1 From PRBS to APRBS |
|
|
860 | (4) |
|
|
864 | (1) |
|
|
865 | (1) |
|
|
866 | (1) |
|
|
867 | (2) |
|
19.7.6 NARX and NOBF Input Spaces |
|
|
869 | (2) |
|
|
871 | (1) |
|
|
872 | (1) |
|
19.8 Optimal Excitation Signal Generator: Coauthored by Tim O. Heinz |
|
|
873 | (14) |
|
19.8.1 Approaches with Fisher Information |
|
|
874 | (2) |
|
19.8.2 Optimized Nonlinear Input Signal (OMNIPUS) for SISO Systems |
|
|
876 | (2) |
|
19.8.3 Optimized Nonlinear Input Signal (OMNIPUS) for MISO Systems |
|
|
878 | (9) |
|
19.9 Determination of Dynamic Orders |
|
|
887 | (3) |
|
|
890 | (1) |
|
|
890 | (3) |
|
20 Classical Polynomial Approaches |
|
|
893 | (10) |
|
20.1 Properties of Dynamic Polynomial Models |
|
|
894 | (1) |
|
20.2 Kolmogorov-Gabor Polynomial Models |
|
|
895 | (1) |
|
20.3 Volterra-Series Models |
|
|
896 | (1) |
|
20.4 Parametric Volterra-Series Models |
|
|
897 | (1) |
|
|
898 | (1) |
|
|
898 | (2) |
|
|
900 | (1) |
|
|
901 | (2) |
|
21 Dynamic Neural and Fuzzy Models |
|
|
903 | (16) |
|
21.1 Curse of Dimensionality |
|
|
904 | (1) |
|
|
904 | (1) |
|
|
905 | (1) |
|
21.1.3 Singleton Fuzzy and NRBF Models |
|
|
905 | (1) |
|
21.2 Interpolation and Extrapolation Behavior |
|
|
905 | (2) |
|
|
907 | (2) |
|
|
908 | (1) |
|
|
908 | (1) |
|
21.3.3 Singleton Fuzzy and NRBF Models |
|
|
909 | (1) |
|
21.4 Integration of a Linear Model |
|
|
909 | (1) |
|
|
910 | (6) |
|
|
911 | (2) |
|
|
913 | (2) |
|
21.5.3 Singleton Fuzzy and NRBF Models |
|
|
915 | (1) |
|
|
916 | (1) |
|
|
917 | (2) |
|
22 Dynamic Local Linear Neuro-Fuzzy Models |
|
|
919 | (52) |
|
22.1 One-Step Prediction Error Versus Simulation Error |
|
|
923 | (1) |
|
22.2 Determination of the Rule Premises |
|
|
924 | (2) |
|
|
926 | (6) |
|
22.3.1 Static and Dynamic Linearization |
|
|
927 | (1) |
|
22.3.2 Dynamics of the Linearized Model |
|
|
928 | (2) |
|
22.3.3 Different Rule Consequent Structures |
|
|
930 | (2) |
|
|
932 | (6) |
|
22.4.1 Influence of Rule Premise Inputs on Stability |
|
|
933 | (2) |
|
22.4.2 Lyapunov Stability and Linear Matrix Inequalities (LMIs) |
|
|
935 | (2) |
|
22.4.3 Ensuring Stable Extrapolation |
|
|
937 | (1) |
|
22.5 Dynamic LOLIMOT Simulation Studies |
|
|
938 | (9) |
|
22.5.1 Nonlinear Dynamic Test Processes |
|
|
939 | (1) |
|
22.5.2 Hammerstein Process |
|
|
940 | (2) |
|
|
942 | (2) |
|
|
944 | (2) |
|
22.5.5 Dynamic Nonlinearity Process |
|
|
946 | (1) |
|
22.6 Advanced Local Linear Methods and Models |
|
|
947 | (5) |
|
22.6.1 Local Linear Instrumental Variables (IV) Method |
|
|
948 | (3) |
|
22.6.2 Local Linear Output Error (OE) Models |
|
|
951 | (1) |
|
22.6.3 Local Linear ARMAX Models |
|
|
951 | (1) |
|
22.7 Local Regularized Finite Impulse Response Models: Coauthored by Tobias Münker
|
|
952 | (5) |
|
|
952 | (2) |
|
|
954 | (1) |
|
22.7.3 Hyperparameter Tuning
|
|
954 | (1) |
|
22.7.4 Evaluation of Performance |
|
|
955 | (2) |
|
22.8 Local Linear Orthonormal Basis Functions Models |
|
|
957 | (5) |
|
22.9 Structure Optimization of the Rule Consequents |
|
|
962 | (4) |
|
22.10 Summary and Conclusions |
|
|
966 | (4) |
|
|
970 | (1) |
|
23 Neural Networks with Internal Dynamics |
|
|
971 | (14) |
|
23.1 Fully Recurrent Networks |
|
|
972 | (1) |
|
23.2 Partially Recurrent Networks |
|
|
973 | (1) |
|
23.3 State Recurrent Networks |
|
|
973 | (2) |
|
23.4 Locally Recurrent Globally Feedforward Networks |
|
|
975 | (1) |
|
23.5 Long Short-Term Memory (LSTM) Networks |
|
|
976 | (3) |
|
23.6 Internal Versus External Dynamics |
|
|
979 | (3) |
|
|
982 | (3) |
|
|
|
24 Applications of Static Models |
|
|
985 | (22) |
|
|
986 | (20) |
|
24.1.1 Process Description |
|
|
986 | (1) |
|
24.1.2 Smoothing of a Driving Cycle |
|
|
987 | (1) |
|
24.1.3 Improvements and Extensions |
|
|
988 | (1) |
|
|
989 | (1) |
|
24.1.5 The Role of Look-Up Tables in Automotive Electronics |
|
|
990 | (4) |
|
24.1.6 Modeling of Exhaust Gases |
|
|
994 | (3) |
|
24.1.7 Optimization of Exhaust Gases |
|
|
997 | (7) |
|
24.1.8 Outlook: Dynamic Models |
|
|
1004 | (2) |
|
|
1006 | (1) |
|
25 Applications of Dynamic Models |
|
|
1007 | (36) |
|
|
1007 | (6) |
|
25.1.1 Process Description |
|
|
1008 | (1) |
|
25.1.2 Experimental Results |
|
|
1009 | (4) |
|
25.2 Diesel Engine Turbocharger |
|
|
1013 | (10) |
|
25.2.1 Process Description |
|
|
1015 | (2) |
|
25.2.2 Experimental Results |
|
|
1017 | (6) |
|
|
1023 | (18) |
|
25.3.1 Process Description |
|
|
1024 | (1) |
|
|
1025 | (5) |
|
25.3.3 Tubular Heat Exchanger |
|
|
1030 | (5) |
|
25.3.4 Cross-Flow Heat Exchanger |
|
|
1035 | (6) |
|
|
1041 | (2) |
|
|
1043 | (52) |
|
26.1 Practical DoE Aspects: Authored by Julian Belz |
|
|
1044 | (26) |
|
26.1.1 Function Generator |
|
|
1044 | (3) |
|
26.1.2 Order of Experimentation |
|
|
1047 | (2) |
|
26.1.3 Biggest Gap Sequence |
|
|
1049 | (1) |
|
26.1.4 Median Distance Sequence |
|
|
1050 | (1) |
|
26.1.5 Intelligent K-Means Sequence
|
|
1050 | (1) |
|
26.1.6 Other Determination Strategies |
|
|
1051 | (2) |
|
26.1.7 Comparison on Synthetic Functions |
|
|
1053 | (3) |
|
|
1056 | (3) |
|
|
1059 | (5) |
|
26.1.10 Comparison of Space-Filling Designs |
|
|
1064 | (6) |
|
26.2 Active Learning for Structural Health Monitoring |
|
|
1070 | (7) |
|
26.2.1 Simulation Results |
|
|
1072 | (2) |
|
26.2.2 Experimental Results |
|
|
1074 | (3) |
|
26.3 Active Learning for Engine Measurement |
|
|
1077 | (10) |
|
|
1077 | (3) |
|
26.3.2 Operating Point-Specific Engine Models |
|
|
1080 | (4) |
|
26.3.3 Global Engine Model |
|
|
1084 | (3) |
|
26.4 Nonlinear Dynamic Excitation Signal Design for Common Rail Injection |
|
|
1087 | (8) |
|
26.4.1 Example: High-Pressure Fuel Supply System |
|
|
1087 | (1) |
|
26.4.2 Identifying the Rail Pressure System |
|
|
1088 | (1) |
|
|
1089 | (6) |
|
27 Input Selection Applications |
|
|
1095 | (30) |
|
27.1 Air Mass Flow Prediction |
|
|
1096 | (5) |
|
27.1.1 Mixed Wrapper-Embedded Input Selection |
|
|
1098 | (2) |
|
27.1.2 Partition Analysis |
|
|
1100 | (1) |
|
27.2 Fan Metamodeling: Authored by Julian Belz |
|
|
1101 | (14) |
|
27.2.1 Centrifugal Impeller Geometry |
|
|
1102 | (1) |
|
27.2.2 Axial Impeller Geometry |
|
|
1103 | (1) |
|
|
1103 | (2) |
|
27.2.4 Design of Experiments: Centrifugal Fan Metamodel |
|
|
1105 | (1) |
|
27.2.5 Design of Experiments: Axial Fan Metamodel |
|
|
1105 | (1) |
|
27.2.6 Order of Experimentation |
|
|
1106 | (2) |
|
27.2.7 Goal-Oriented Active Learning |
|
|
1108 | (3) |
|
27.2.8 Mixed Wrapper-Embedded Input Selection |
|
|
1111 | (1) |
|
27.2.9 Centrifugal Fan Metamodel |
|
|
1111 | (2) |
|
27.2.10 Axial Fan Metamodel |
|
|
1113 | (2) |
|
|
1115 | (1) |
|
27.3 Heating, Ventilating, and Air Conditioning System |
|
|
1115 | (10) |
|
27.3.1 Problem Configuration |
|
|
1116 | (1) |
|
27.3.2 Available Data Sets |
|
|
1117 | (1) |
|
27.3.3 Mixed Wrapper-Embedded Input Selection |
|
|
1118 | (1) |
|
|
1119 | (6) |
|
28 Applications of Advanced Methods |
|
|
1125 | (26) |
|
28.1 Nonlinear Model Predictive Control |
|
|
1126 | (4) |
|
|
1130 | (10) |
|
28.2.1 Variable Forgetting Factor |
|
|
1131 | (1) |
|
28.2.2 Control and Adaptation Models |
|
|
1132 | (1) |
|
28.2.3 Parameter Transfer |
|
|
1133 | (2) |
|
28.2.4 Systems with Multiple Inputs |
|
|
1135 | (1) |
|
28.2.5 Experimental Results |
|
|
1136 | (4) |
|
|
1140 | (6) |
|
|
1141 | (2) |
|
28.3.2 Experimental Results |
|
|
1143 | (3) |
|
|
1146 | (3) |
|
|
1146 | (2) |
|
28.4.2 Experimental Results |
|
|
1148 | (1) |
|
|
1149 | (2) |
|
|
1151 | (14) |
|
29.1 Termination Criteria |
|
|
1152 | (4) |
|
|
1153 | (1) |
|
|
1153 | (1) |
|
|
1154 | (1) |
|
29.1.4 Maximum Number of Local Models |
|
|
1154 | (1) |
|
29.1.5 Effective Number of Parameters |
|
|
1155 | (1) |
|
29.1.6 Maximum Training Time |
|
|
1156 | (1) |
|
29.2 Polynomial Degree of Local Models
|
|
1156 | (1) |
|
|
1157 | (3) |
|
29.3.1 Nonlinear Orthonormal Basis Function Models |
|
|
1160 | (1) |
|
29.4 Different Input Spaces x and z |
|
|
1160 | (1) |
|
|
1160 | (1) |
|
|
1161 | (1) |
|
29.7 Visualization and Simplified Tool |
|
|
1161 | |
|
Correction to: Nonlinear System Identification |
|
|
1 | |
|
|
1165 | (4) |
|
A.1 Vector and Matrix Derivatives |
|
|
1165 | (2) |
|
A.2 Gradient, Hessian, and Jacobian |
|
|
1167 | (2) |
|
|
1169 | (20) |
|
B.1 Deterministic and Random Variables |
|
|
1169 | (2) |
|
B.2 Probability Density Function (pdf) |
|
|
1171 | (2) |
|
B.3 Stochastic Processes and Ergodicity |
|
|
1173 | (3) |
|
|
1176 | (3) |
|
|
1179 | (1) |
|
B.6 Correlation and Covariance |
|
|
1180 | (3) |
|
B.7 Properties of Estimators |
|
|
1183 | (6) |
References |
|
1189 | (28) |
Index |
|
1217 | |