
E-book: Statistical Methods in the Atmospheric Sciences

4.30/5 (48 ratings by Goodreads)
Daniel S. Wilks (Department of Earth and Atmospheric Sciences, Cornell University, USA)
  • Format: PDF+DRM
  • Series: International Geophysics
  • Publication date: 04-Jul-2011
  • Publisher: Academic Press Inc
  • Language: eng
  • ISBN-13: 9780123850232
  • Price: 72.56 €*
  • * This is the final price, i.e., no additional discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software in order to unlock and read it. To read this e-book you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which may already be on your computer).

    You cannot read this e-book on an Amazon Kindle.

Wilks (earth and atmospheric sciences, Cornell U.) presents a textbook for an upper-division undergraduate or beginning graduate course for students who have completed a first course in statistics and are interested in learning further statistics in the context of the atmospheric sciences. No mathematics beyond first-year calculus is required, nor is any background in atmospheric science, though some would be helpful. He also has in mind researchers using the book as a reference. No dates are cited for previous editions; this one adds a chapter on Bayesian inference, updates the treatment throughout, and includes new references to recently published literature. Academic Press is an imprint of Elsevier. Annotation ©2011 Book News, Inc., Portland, OR (booknews.com)

Praise for the First Edition:
"I recommend this book, without hesitation, as either a reference or course text...Wilks' excellent book provides a thorough base in applied statistical methods for atmospheric sciences."--BAMS (Bulletin of the American Meteorological Society)

Fundamentally, statistics is concerned with managing data and making inferences and forecasts in the face of uncertainty. It should not be surprising, therefore, that statistical methods have a key role to play in the atmospheric sciences. It is the uncertainty in atmospheric behavior that continues to move research forward and drive innovations in atmospheric modeling and prediction.

This revised and expanded text explains the latest statistical methods that are being used to describe, analyze, test and forecast atmospheric data. It features numerous worked examples, illustrations, equations, and exercises with separate solutions. Statistical Methods in the Atmospheric Sciences, Second Edition will help advanced students and professionals understand and communicate what their data sets have to say, and make sense of the scientific literature in meteorology, climatology, and related disciplines.
  • Accessible presentation and explanation of techniques for atmospheric data summarization, analysis, testing and forecasting

  • Many worked examples

  • End-of-chapter exercises, with answers provided

  • Reviews

    "I would strongly recommend this book... To those who already posses the first edition and are satisfied users, you would be hard-pressed to do without the second edition." --Bulletin of the American Meteorological Society

    "What makes this book specific to meterology, and not just to applied statistics, are it's extensive examples and two chapters on statistcal forecasting and forecast evaluation." --William (Matt) Briggs, Weill Medical College of Cornell University

    "Wilks (earth and atmospheric sciences, Cornell U.) presents a textbook for an upper-division undergraduate or beginning graduate course for students who have completed a first course in statistics and are interested in learning further statistics in the context of atmospheric sciences. No mathematics beyond first-year calculus is required, nor any background in atmospheric science, though some would be helpful. He also has in mind researchers using the book as a reference. No dates are cited for previous editions, this one adds a chapter on Bayesian inference, updates the treatment throughout, and includes new references to recently published literature." --SciTech Book News

    Additional information

    Expanded coverage and key updates help readers describe, analyze, test, and forecast atmospheric data.
    Preface
    Part I Preliminaries
    1 Introduction
    1.1 What Is Statistics?
    1.2 Descriptive and Inferential Statistics
    1.3 Uncertainty about the Atmosphere
    2 Review of Probability
    2.1 Background
    2.2 The Elements of Probability
    2.2.1 Events
    2.2.2 The Sample Space
    2.2.3 The Axioms of Probability
    2.3 The Meaning of Probability
    2.3.1 Frequency Interpretation
    2.3.2 Bayesian (Subjective) Interpretation
    2.4 Some Properties of Probability
    2.4.1 Domain, Subsets, Complements, and Unions
    2.4.2 DeMorgan's Laws
    2.4.3 Conditional Probability
    2.4.4 Independence
    2.4.5 Law of Total Probability
    2.4.6 Bayes' Theorem
    2.5 Exercises
    Part II Univariate Statistics
    3 Empirical Distributions and Exploratory Data Analysis
    3.1 Background
    3.1.1 Robustness and Resistance
    3.1.2 Quantiles
    3.2 Numerical Summary Measures
    3.2.1 Location
    3.2.2 Spread
    3.2.3 Symmetry
    3.3 Graphical Summary Devices
    3.3.1 Stem-and-Leaf Display
    3.3.2 Boxplots
    3.3.3 Schematic Plots
    3.3.4 Other Boxplot Variants
    3.3.5 Histograms
    3.3.6 Kernel Density Smoothing
    3.3.7 Cumulative Frequency Distributions
    3.4 Reexpression
    3.4.1 Power Transformations
    3.4.2 Standardized Anomalies
    3.5 Exploratory Techniques for Paired Data
    3.5.1 Scatterplots
    3.5.2 Pearson (Ordinary) Correlation
    3.5.3 Spearman Rank Correlation and Kendall's τ
    3.5.4 Serial Correlation
    3.5.5 Autocorrelation Function
    3.6 Exploratory Techniques for Higher-Dimensional Data
    3.6.1 The Star Plot
    3.6.2 The Glyph Scatterplot
    3.6.3 The Rotating Scatterplot
    3.6.4 The Correlation Matrix
    3.6.5 The Scatterplot Matrix
    3.6.6 Correlation Maps
    3.7 Exercises
    4 Parametric Probability Distributions
    4.1 Background
    4.1.1 Parametric versus Empirical Distributions
    4.1.2 What Is a Parametric Distribution?
    4.1.3 Parameters versus Statistics
    4.1.4 Discrete versus Continuous Distributions
    4.2 Discrete Distributions
    4.2.1 Binomial Distribution
    4.2.2 Geometric Distribution
    4.2.3 Negative Binomial Distribution
    4.2.4 Poisson Distribution
    4.3 Statistical Expectations
    4.3.1 Expected Value of a Random Variable
    4.3.2 Expected Value of a Function of a Random Variable
    4.4 Continuous Distributions
    4.4.1 Distribution Functions and Expected Values
    4.4.2 Gaussian Distributions
    4.4.3 Gamma Distributions
    4.4.4 Beta Distributions
    4.4.5 Extreme-Value Distributions
    4.4.6 Mixture Distributions
    4.5 Qualitative Assessments of the Goodness of Fit
    4.5.1 Superposition of a Fitted Parametric Distribution and Data Histogram
    4.5.2 Quantile-Quantile (Q-Q) Plots
    4.6 Parameter Fitting Using Maximum Likelihood
    4.6.1 The Likelihood Function
    4.6.2 The Newton-Raphson Method
    4.6.3 The EM Algorithm
    4.6.4 Sampling Distribution of Maximum-Likelihood Estimates
    4.7 Statistical Simulation
    4.7.1 Uniform Random-Number Generators
    4.7.2 Nonuniform Random-Number Generation by Inversion
    4.7.3 Nonuniform Random-Number Generation by Rejection
    4.7.4 Box-Muller Method for Gaussian Random-Number Generation
    4.7.5 Simulating from Mixture Distributions and Kernel Density Estimates
    4.8 Exercises
    5 Frequentist Statistical Inference
    5.1 Background
    5.1.1 Parametric versus Nonparametric Inference
    5.1.2 The Sampling Distribution
    5.1.3 The Elements of Any Hypothesis Test
    5.1.4 Test Levels and p Values
    5.1.5 Error Types and the Power of a Test
    5.1.6 One-Sided versus Two-Sided Tests
    5.1.7 Confidence Intervals: Inverting Hypothesis Tests
    5.2 Some Commonly Encountered Parametric Tests
    5.2.1 One-Sample t Test
    5.2.2 Tests for Differences of Mean under Independence
    5.2.3 Tests for Differences of Mean for Paired Samples
    5.2.4 Tests for Differences of Mean under Serial Dependence
    5.2.5 Goodness-of-Fit Tests
    5.2.6 Likelihood Ratio Tests
    5.3 Nonparametric Tests
    5.3.1 Classical Nonparametric Tests for Location
    5.3.2 Mann-Kendall Trend Test
    5.3.3 Introduction to Resampling Tests
    5.3.4 Permutation Tests
    5.3.5 The Bootstrap
    5.4 Multiplicity and "Field Significance"
    5.4.1 The Multiplicity Problem for Independent Tests
    5.4.2 Field Significance and the False Discovery Rate
    5.4.3 Field Significance and Spatial Correlation
    5.5 Exercises
    6 Bayesian Inference
    6.1 Background
    6.2 The Structure of Bayesian Inference
    6.2.1 Bayes' Theorem for Continuous Variables
    6.2.2 Inference and the Posterior Distribution
    6.2.3 The Role of the Prior Distribution
    6.2.4 The Predictive Distribution
    6.3 Conjugate Distributions
    6.3.1 Definition of Conjugate Distributions
    6.3.2 Binomial Data-Generating Process
    6.3.3 Poisson Data-Generating Process
    6.3.4 Gaussian Data-Generating Process
    6.4 Dealing with Difficult Integrals
    6.4.1 Markov Chain Monte Carlo (MCMC) Methods
    6.4.2 The Metropolis-Hastings Algorithm
    6.4.3 The Gibbs Sampler
    6.5 Exercises
    7 Statistical Forecasting
    7.1 Background
    7.2 Linear Regression
    7.2.1 Simple Linear Regression
    7.2.2 Distribution of the Residuals
    7.2.3 The Analysis of Variance Table
    7.2.4 Goodness-of-Fit Measures
    7.2.5 Sampling Distributions of the Regression Coefficients
    7.2.6 Examining Residuals
    7.2.7 Prediction Intervals
    7.2.8 Multiple Linear Regression
    7.2.9 Derived Predictor Variables in Multiple Regression
    7.3 Nonlinear Regression
    7.3.1 Generalized Linear Models
    7.3.2 Logistic Regression
    7.3.3 Poisson Regression
    7.4 Predictor Selection
    7.4.1 Why Is Careful Predictor Selection Important?
    7.4.2 Screening Predictors
    7.4.3 Stopping Rules
    7.4.4 Cross Validation
    7.5 Objective Forecasts Using Traditional Statistical Methods
    7.5.1 Classical Statistical Forecasting
    7.5.2 Perfect Prog and MOS
    7.5.3 Operational MOS Forecasts
    7.6 Ensemble Forecasting
    7.6.1 Probabilistic Field Forecasts
    7.6.2 Stochastic Dynamical Systems in Phase Space
    7.6.3 Ensemble Forecasts
    7.6.4 Choosing Initial Ensemble Members
    7.6.5 Ensemble Average and Ensemble Dispersion
    7.6.6 Graphical Display of Ensemble Forecast Information
    7.6.7 Effects of Model Errors
    7.7 Ensemble MOS
    7.7.1 Why Ensembles Need Postprocessing
    7.7.2 Regression Methods
    7.7.3 Kernel Density (Ensemble "Dressing") Methods
    7.8 Subjective Probability Forecasts
    7.8.1 The Nature of Subjective Forecasts
    7.8.2 The Subjective Distribution
    7.8.3 Central Credible Interval Forecasts
    7.8.4 Assessing Discrete Probabilities
    7.8.5 Assessing Continuous Distributions
    7.9 Exercises
    8 Forecast Verification
    8.1 Background
    8.1.1 Purposes of Forecast Verification
    8.1.2 The Joint Distribution of Forecasts and Observations
    8.1.3 Scalar Attributes of Forecast Performance
    8.1.4 Forecast Skill
    8.2 Nonprobabilistic Forecasts for Discrete Predictands
    8.2.1 The 2 × 2 Contingency Table
    8.2.2 Scalar Attributes of the 2 × 2 Contingency Table
    8.2.3 Skill Scores for 2 × 2 Contingency Tables
    8.2.4 Which Score?
    8.2.5 Conversion of Probabilistic to Nonprobabilistic Forecasts
    8.2.6 Extensions for Multicategory Discrete Predictands
    8.3 Nonprobabilistic Forecasts for Continuous Predictands
    8.3.1 Conditional Quantile Plots
    8.3.2 Scalar Accuracy Measures
    8.3.3 Skill Scores
    8.4 Probability Forecasts for Discrete Predictands
    8.4.1 The Joint Distribution for Dichotomous Events
    8.4.2 The Brier Score
    8.4.3 Algebraic Decomposition of the Brier Score
    8.4.4 The Reliability Diagram
    8.4.5 The Discrimination Diagram
    8.4.6 The Logarithmic, or Ignorance Score
    8.4.7 The ROC Diagram
    8.4.8 Hedging, and Strictly Proper Scoring Rules
    8.4.9 Probability Forecasts for Multiple-Category Events
    8.5 Probability Forecasts for Continuous Predictands
    8.5.1 Full Continuous Forecast Probability Distributions
    8.5.2 Central Credible Interval Forecasts
    8.6 Nonprobabilistic Forecasts for Fields
    8.6.1 General Considerations for Field Forecasts
    8.6.2 The S1 Score
    8.6.3 Mean Squared Error
    8.6.4 Anomaly Correlation
    8.6.5 Field Verification Based on Spatial Structure
    8.7 Verification of Ensemble Forecasts
    8.7.1 Characteristics of a Good Ensemble Forecast
    8.7.2 The Verification Rank Histogram
    8.7.3 Minimum Spanning Tree (MST) Histogram
    8.7.4 Shadowing, and Bounding Boxes
    8.8 Verification Based on Economic Value
    8.8.1 Optimal Decision Making and the Cost/Loss Ratio Problem
    8.8.2 The Value Score
    8.8.3 Connections with Other Verification Approaches
    8.9 Verification When the Observation Is Uncertain
    8.10 Sampling and Inference for Verification Statistics
    8.10.1 Sampling Characteristics of Contingency Table Statistics
    8.10.2 ROC Diagram Sampling Characteristics
    8.10.3 Brier Score and Brier Skill Score Inference
    8.10.4 Reliability Diagram Sampling Characteristics
    8.10.5 Resampling Verification Statistics
    8.11 Exercises
    9 Time Series
    9.1 Background
    9.1.1 Stationarity
    9.1.2 Time-Series Models
    9.1.3 Time-Domain versus Frequency-Domain Approaches
    9.2 Time Domain I: Discrete Data
    9.2.1 Markov Chains
    9.2.2 Two-State, First-Order Markov Chains
    9.2.3 Test for Independence versus First-Order Serial Dependence
    9.2.4 Some Applications of Two-State Markov Chains
    9.2.5 Multiple-State Markov Chains
    9.2.6 Higher-Order Markov Chains
    9.2.7 Deciding among Alternative Orders of Markov Chains
    9.3 Time Domain II: Continuous Data
    9.3.1 First-Order Autoregression
    9.3.2 Higher-Order Autoregressions
    9.3.3 The AR(2) Model
    9.3.4 Order Selection Criteria
    9.3.5 The Variance of a Time Average
    9.3.6 Autoregressive-Moving Average Models
    9.3.7 Simulation and Forecasting with Continuous Time-Domain Models
    9.4 Frequency Domain I: Harmonic Analysis
    9.4.1 Cosine and Sine Functions
    9.4.2 Representing a Simple Time Series with a Harmonic Function
    9.4.3 Estimation of the Amplitude and Phase of a Single Harmonic
    9.4.4 Higher Harmonics
    9.5 Frequency Domain II: Spectral Analysis
    9.5.1 The Harmonic Functions as Uncorrelated Regression Predictors
    9.5.2 The Periodogram, or Fourier Line Spectrum
    9.5.3 Computing Spectra
    9.5.4 Aliasing
    9.5.5 The Spectra of Autoregressive Models
    9.5.6 Sampling Properties of Spectral Estimates
    9.6 Exercises
    Part III Multivariate Statistics
    10 Matrix Algebra and Random Matrices
    10.1 Background to Multivariate Statistics
    10.1.1 Contrasts between Multivariate and Univariate Statistics
    10.1.2 Organization of Data and Basic Notation
    10.1.3 Multivariate Extensions of Common Univariate Statistics
    10.2 Multivariate Distance
    10.2.1 Euclidean Distance
    10.2.2 Mahalanobis (Statistical) Distance
    10.3 Matrix Algebra Review
    10.3.1 Vectors
    10.3.2 Matrices
    10.3.3 Eigenvalues and Eigenvectors of a Square Matrix
    10.3.4 Square Roots of a Symmetric Matrix
    10.3.5 Singular-Value Decomposition (SVD)
    10.4 Random Vectors and Matrices
    10.4.1 Expectations and Other Extensions of Univariate Concepts
    10.4.2 Partitioning Vectors and Matrices
    10.4.3 Linear Combinations
    10.4.4 Mahalanobis Distance, Revisited
    10.5 Exercises
    11 The Multivariate Normal (MVN) Distribution
    11.1 Definition of the MVN
    11.2 Four Handy Properties of the MVN
    11.3 Assessing Multinormality
    11.4 Simulation from the Multivariate Normal Distribution
    11.4.1 Simulating Independent MVN Variates
    11.4.2 Simulating Multivariate Time Series
    11.5 Inferences about a Multinormal Mean Vector
    11.5.1 Multivariate Central Limit Theorem
    11.5.2 Hotelling's T²
    11.5.3 Simultaneous Confidence Statements
    11.5.4 Interpretation of Multivariate Statistical Significance
    11.6 Exercises
    12 Principal Component (EOF) Analysis
    12.1 Basics of Principal Component Analysis
    12.1.1 Definition of PCA
    12.1.2 PCA Based on the Covariance Matrix versus the Correlation Matrix
    12.1.3 The Varied Terminology of PCA
    12.1.4 Scaling Conventions in PCA
    12.1.5 Connections to the Multivariate Normal Distribution
    12.2 Application of PCA to Geophysical Fields
    12.2.1 PCA for a Single Field
    12.2.2 Simultaneous PCA for Multiple Fields
    12.2.3 Scaling Considerations and Equalization of Variance
    12.2.4 Domain Size Effects: Buell Patterns
    12.3 Truncation of the Principal Components
    12.3.1 Why Truncate the Principal Components?
    12.3.2 Subjective Truncation Criteria
    12.3.3 Rules Based on the Size of the Last Retained Eigenvalue
    12.3.4 Rules Based on Hypothesis-Testing Ideas
    12.3.5 Rules Based on Structure in the Retained Principal Components
    12.4 Sampling Properties of the Eigenvalues and Eigenvectors
    12.4.1 Asymptotic Sampling Results for Multivariate Normal Data
    12.4.2 Effective Multiplets
    12.4.3 The North et al. Rule of Thumb
    12.4.4 Bootstrap Approximations to the Sampling Distributions
    12.5 Rotation of the Eigenvectors
    12.5.1 Why Rotate the Eigenvectors?
    12.5.2 Rotation Mechanics
    12.5.3 Sensitivity of Orthogonal Rotation to Initial Eigenvector Scaling
    12.6 Computational Considerations
    12.6.1 Direct Extraction of Eigenvalues and Eigenvectors from [S]
    12.6.2 PCA via SVD
    12.7 Some Additional Uses of PCA
    12.7.1 Singular Spectrum Analysis (SSA): Time-Series PCA
    12.7.2 Principal-Component Regression
    12.7.3 The Biplot
    12.8 Exercises
    13 Canonical Correlation Analysis (CCA)
    13.1 Basics of CCA
    13.1.1 Overview
    13.1.2 Canonical Variates, Canonical Vectors, and Canonical Correlations
    13.1.3 Some Additional Properties of CCA
    13.2 CCA Applied to Fields
    13.2.1 Translating Canonical Vectors to Maps
    13.2.2 Combining CCA with PCA
    13.2.3 Forecasting with CCA
    13.3 Computational Considerations
    13.3.1 Calculating CCA through Direct Eigendecomposition
    13.3.2 CCA via SVD
    13.4 Maximum Covariance Analysis (MCA)
    13.5 Exercises
    14 Discrimination and Classification
    14.1 Discrimination versus Classification
    14.2 Separating Two Populations
    14.2.1 Equal Covariance Structure: Fisher's Linear Discriminant
    14.2.2 Fisher's Linear Discriminant for Multivariate Normal Data
    14.2.3 Minimizing Expected Cost of Misclassification
    14.2.4 Unequal Covariances: Quadratic Discrimination
    14.3 Multiple Discriminant Analysis (MDA)
    14.3.1 Fisher's Procedure for More Than Two Groups
    14.3.2 Minimizing Expected Cost of Misclassification
    14.3.3 Probabilistic Classification
    14.4 Forecasting with Discriminant Analysis
    14.5 Alternatives to Classical Discriminant Analysis
    14.5.1 Discrimination and Classification Using Logistic Regression
    14.5.2 Discrimination and Classification Using Kernel Density Estimates
    14.6 Exercises
    15 Cluster Analysis
    15.1 Background
    15.1.1 Cluster Analysis versus Discriminant Analysis
    15.1.2 Distance Measures and the Distance Matrix
    15.2 Hierarchical Clustering
    15.2.1 Agglomerative Methods Using the Distance Matrix
    15.2.2 Ward's Minimum Variance Method
    15.2.3 The Dendrogram, or Tree Diagram
    15.2.4 How Many Clusters?
    15.2.5 Divisive Methods
    15.3 Nonhierarchical Clustering
    15.3.1 The K-Means Method
    15.3.2 Nucleated Agglomerative Clustering
    15.3.3 Clustering Using Mixture Distributions
    15.4 Exercises
    Appendix A Example Data Sets
    Appendix B Probability Tables
    Appendix C Answers to Exercises
    References
    Index
    Daniel S. Wilks has been a member of the Atmospheric Sciences faculty at Cornell University since 1987 and is the author of Statistical Methods in the Atmospheric Sciences (2011, Academic Press), which is in its third edition and has been continuously in print since 1995. His research areas include statistical forecasting, forecast postprocessing, and forecast evaluation.