
Statistical Methods in the Atmospheric Sciences, 2nd Edition, Volume 100 [Hardback]

4.30/5 (35 ratings by Goodreads)
Daniel S. Wilks (Department of Earth and Atmospheric Sciences, Cornell University, USA)
  • Format: Hardback, 648 pages, height x width: 260x184 mm, weight: 1338 g
  • Series: International Geophysics
  • Publication date: 12-Dec-2005
  • Publisher: Academic Press Inc
  • ISBN-10: 0127519661
  • ISBN-13: 9780127519661
Praise for the First Edition:
"I recommend this book, without hesitation, as either a reference or course text...Wilks' excellent book provides a thorough base in applied statistical methods for atmospheric sciences."--BAMS (Bulletin of the American Meteorological Society)

Fundamentally, statistics is concerned with managing data and making inferences and forecasts in the face of uncertainty. It should not be surprising, therefore, that statistical methods have a key role to play in the atmospheric sciences. It is the uncertainty in atmospheric behavior that continues to move research forward and drive innovations in atmospheric modeling and prediction.

This revised and expanded text explains the latest statistical methods that are being used to describe, analyze, test and forecast atmospheric data. It features numerous worked examples, illustrations, equations, and exercises with separate solutions. Statistical Methods in the Atmospheric Sciences, Second Edition will help advanced students and professionals understand and communicate what their data sets have to say, and make sense of the scientific literature in meteorology, climatology, and related disciplines.

* Presents and explains techniques used in atmospheric data summarization, analysis, testing, and forecasting
* Chapters feature numerous worked examples and exercises
* Model Output Statistics (MOS) coverage includes an introduction to the Kalman filter, an approach that tolerates frequent model changes
* Detailed section on forecast verification, including statistical inference, diagrams, and other methods (a minimal Brier score sketch follows this list)
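
As a small, hedged illustration of the verification material mentioned above, the following Python sketch computes the Brier score for probability forecasts of a dichotomous event. The function name and the sample forecast/observation values are invented for illustration; this is not code from the book.

```python
import numpy as np

def brier_score(forecast_probs, observations):
    """Mean squared error of probability forecasts for a binary event."""
    p = np.asarray(forecast_probs, dtype=float)  # forecast probabilities in [0, 1]
    o = np.asarray(observations, dtype=float)    # observed outcomes: 1 = event, 0 = no event
    return float(np.mean((p - o) ** 2))

# Invented example: five precipitation-probability forecasts and their outcomes.
probs = [0.1, 0.7, 0.9, 0.3, 0.5]
obs = [0, 1, 1, 0, 1]
print(f"Brier score: {brier_score(probs, obs):.3f}")  # 0 is perfect, 1 is worst
```
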

New in this Edition:
* Expanded treatment of resampling tests within nonparametric tests (a small bootstrap sketch follows this list)
* Updated treatment of ensemble forecasting
* Expanded coverage of key analysis techniques, such as principal component analysis, canonical correlation analysis, discriminant analysis, and cluster analysis
* Careful updates and edits throughout, based on users' feedback
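
To illustrate the resampling idea flagged in the first item of the list above, here is a short Python sketch of a percentile-bootstrap confidence interval for a sample mean. The data values are invented and the function is a generic sketch, not a procedure reproduced from the book.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap (1 - alpha) confidence interval for stat(data)."""
    data = np.asarray(data, dtype=float)
    # Resample with replacement and recompute the statistic for each resample.
    boot_stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                           for _ in range(n_boot)])
    lower, upper = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return lower, upper

# Invented precipitation-like sample (arbitrary units).
sample = [1.2, 0.8, 2.5, 1.9, 3.1, 0.4, 2.2, 1.5, 0.9, 2.8]
print(bootstrap_ci(sample))  # roughly (1.2, 2.3) for this sample
```
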

Reviews

"I would strongly recommend this book... To those who already posses the first edition and are satisfied users, you would be hard-pressed to do without the second edition." --Bulletin of the American Meteorological Society

"What makes this book specific to meterology, and not just to applied statistics, are it's extensive examples and two chapters on statistcal forecasting and forecast evaluation." --William (Matt) Briggs, Weill Medical College of Cornell University

Contents

Preface to the First Edition xv
Preface to the Second Edition xvii

PART I Preliminaries 1
  Introduction 3
    What Is Statistics? 3
    Descriptive and Inferential Statistics 3
    Uncertainty about the Atmosphere 4
  Review of Probability 7
    Background 7
    The Elements of Probability 7
    Events 7
    The Sample Space 8
    The Axioms of Probability 9
    The Meaning of Probability 9
    Frequency Interpretation 10
    Bayesian (Subjective) Interpretation 10
    Some Properties of Probability 11
    Domain, Subsets, Complements, and Unions 11
    DeMorgan's Laws 13
    Conditional Probability 13
    Independence 14
    Law of Total Probability 16
    Bayes' Theorem 17
    Exercises 18

PART II Univariate Statistics 21
  Empirical Distributions and Exploratory Data Analysis 23
    Background 23
    Robustness and Resistance 23
    Quantiles 24
    Numerical Summary Measures 25
    Location 26
    Spread 26
    Symmetry 28
    Graphical Summary Techniques 28
    Stem-and-Leaf Display 29
    Boxplots 30
    Schematic Plots 31
    Other Boxplot Variants 33
    Histograms 33
    Kernel Density Smoothing 35
    Cumulative Frequency Distributions 39
    Reexpression 42
    Power Transformations 42
    Standardized Anomalies 47
    Exploratory Techniques for Paired Data 49
    Scatterplots 49
    Pearson (Ordinary) Correlation 50
    Spearman Rank Correlation and Kendall's τ 55
    Serial Correlation 57
    Autocorrelation Function 58
    Exploratory Techniques for Higher-Dimensional Data 59
    The Star Plot 59
    The Glyph Scatterplot 60
    The Rotating Scatterplot 62
    The Correlation Matrix 63
    The Scatterplot Matrix 65
    Correlation Maps 67
    Exercises 69
  Parametric Probability Distributions 71
    Background 71
    Parametric vs. Empirical Distributions 71
    What Is a Parametric Distribution? 72
    Parameters vs. Statistics 72
    Discrete vs. Continuous Distributions 73
    Discrete Distributions 73
    Binomial Distribution 73
    Geometric Distribution 76
    Negative Binomial Distribution 77
    Poisson Distribution 80
    Statistical Expectations 82
    Expected Value of a Random Variable 82
    Expected Value of a Function of a Random Variable 83
    Continuous Distributions 85
    Distribution Functions and Expected Values 85
    Gaussian Distributions 88
    Gamma Distributions 95
    Beta Distributions 102
    Extreme-Value Distributions 104
    Mixture Distributions 109
    Qualitative Assessments of the Goodness of Fit 111
    Superposition of a Fitted Parametric Distribution and Data Histogram 111
    Quantile-Quantile (Q–Q) Plots 113
    Parameter Fitting Using Maximum Likelihood 114
    The Likelihood Function 114
    The Newton-Raphson Method 116
    The EM Algorithm 117
    Sampling Distribution of Maximum-Likelihood Estimates 120
    Statistical Simulation 120
    Uniform Random Number Generators 121
    Nonuniform Random Number Generation by Inversion 123
    Nonuniform Random Number Generation by Rejection 124
    Box-Muller Method for Gaussian Random Number Generation 126
    Simulating from Mixture Distributions and Kernel Density Estimates 127
    Exercises 128
  Hypothesis Testing 131
    Background 131
    Parametric vs. Nonparametric Tests 131
    The Sampling Distribution 132
    The Elements of Any Hypothesis Test 132
    Test Levels and p Values 133
    Error Types and the Power of a Test 133
    One-Sided vs. Two-Sided Tests 134
    Confidence Intervals: Inverting Hypothesis Tests 135
    Some Parametric Tests 138
    One-Sample t Test 138
    Tests for Differences of Mean under Independence 140
    Tests for Differences of Mean for Paired Samples 141
    Test for Differences of Mean under Serial Dependence 143
    Goodness-of-Fit Tests 146
    Likelihood Ratio Test 154
    Nonparametric Tests 156
    Classical Nonparametric Tests for Location 156
    Introduction to Resampling Tests 162
    Permutation Tests 164
    The Bootstrap 166
    Field Significance and Multiplicity 170
    The Multiplicity Problem for Independent Tests 171
    Field Significance Given Spatial Correlation 172
    Exercises 176
  Statistical Forecasting 179
    Background 179
    Linear Regression 180
    Simple Linear Regression 180
    Distribution of the Residuals 182
    The Analysis of Variance Table 184
    Goodness-of-Fit Measures 185
    Sampling Distributions of the Regression Coefficients 187
    Examining Residuals 189
    Prediction Intervals 194
    Multiple Linear Regression 197
    Derived Predictor Variables in Multiple Regression 198
    Nonlinear Regression 201
    Logistic Regression 201
    Poisson Regression 205
    Predictor Selection 207
    Why Is Careful Predictor Selection Important? 207
    Screening Predictors 209
    Stopping Rules 212
    Cross Validation 215
    Objective Forecasts Using Traditional Statistical Methods 217
    Classical Statistical Forecasting 217
    Perfect Prog and MOS 220
    Operational MOS Forecasts 226
    Ensemble Forecasting 229
    Probabilistic Field Forecasts 229
    Stochastic Dynamical Systems in Phase Space 229
    Ensemble Forecasts 232
    Choosing Initial Ensemble Members 233
    Ensemble Average and Ensemble Dispersion 234
    Graphical Display of Ensemble Forecast Information 236
    Effects of Model Errors 242
    Statistical Postprocessing: Ensemble MOS 243
    Subjective Probability Forecasts 245
    The Nature of Subjective Forecasts 245
    The Subjective Distribution 246
    Central Credible Interval Forecasts 248
    Assessing Discrete Probabilities 250
    Assessing Continuous Distributions 251
    Exercises 252
  Forecast Verification 255
    Background 255
    Purposes of Forecast Verification 255
    The Joint Distribution of Forecasts and Observations 256
    Scalar Attributes of Forecast Performance 258
    Forecast Skill 259
    Nonprobabilistic Forecasts of Discrete Predictands 260
    The 2 × 2 Contingency Table 260
    Scalar Attributes Characterizing 2 × 2 Contingency Tables 262
    Skill Scores for 2 × 2 Contingency Tables 265
    Which Score? 268
    Conversion of Probabilistic to Nonprobabilistic Forecasts 269
    Extensions for Multicategory Discrete Predictands 271
    Nonprobabilistic Forecasts of Continuous Predictands 276
    Conditional Quantile Plots 277
    Scalar Accuracy Measures 278
    Skill Scores 280
    Probability Forecasts of Discrete Predictands 282
    The Joint Distribution for Dichotomous Events 282
    The Brier Score 284
    Algebraic Decomposition of the Brier Score 285
    The Reliability Diagram 287
    The Discrimination Diagram 293
    The ROC Diagram 294
    Hedging, and Strictly Proper Scoring Rules 298
    Probability Forecasts for Multiple-Category Events 299
    Probability Forecasts for Continuous Predictands 302
    Full Continuous Forecast Probability Distributions 302
    Central Credible Interval Forecasts 303
    Nonprobabilistic Forecasts of Fields 304
    General Considerations for Field Forecasts 304
    The S1 Score 306
    Mean Squared Error 307
    Anomaly Correlation 311
    Recent Ideas in Nonprobabilistic Field Verification 314
    Verification of Ensemble Forecasts 314
    Characteristics of a Good Ensemble Forecast 314
    The Verification Rank Histogram 316
    Recent Ideas in Verification of Ensemble Forecasts 319
    Verification Based on Economic Value 321
    Optimal Decision Making and the Cost/Loss Ratio Problem 321
    The Value Score 324
    Connections with Other Verification Approaches 325
    Sampling and Inference for Verification Statistics 326
    Sampling Characteristics of Contingency Table Statistics 326
    ROC Diagram Sampling Characteristics 329
    Reliability Diagram Sampling Characteristics 330
    Resampling Verification Statistics 332
    Exercises 332
  Time Series 337
    Background 337
    Stationarity 337
    Time-Series Models 338
    Time-Domain vs. Frequency-Domain Approaches 339
    Time Domain – I. Discrete Data 339
    Markov Chains 339
    Two-State, First-Order Markov Chains 340
    Test for Independence vs. First-Order Serial Dependence 344
    Some Applications of Two-State Markov Chains 346
    Multiple-State Markov Chains 348
    Higher-Order Markov Chains 349
    Deciding among Alternative Orders of Markov Chains 350
    Time Domain – II. Continuous Data 352
    First-Order Autoregression 352
    Higher-Order Autoregressions 357
    The AR(2) Model 358
    Order Selection Criteria 362
    The Variance of a Time Average 363
    Autoregressive-Moving Average Models 366
    Simulation and Forecasting with Continuous Time-Domain Models 367
    Frequency Domain – I. Harmonic Analysis 371
    Cosine and Sine Functions 371
    Representing a Simple Time Series with a Harmonic Function 372
    Estimation of the Amplitude and Phase of a Single Harmonic 375
    Higher Harmonics 378
    Frequency Domain – II. Spectral Analysis 381
    The Harmonic Functions as Uncorrelated Regression Predictors 381
    The Periodogram, or Fourier Line Spectrum 383
    Computing Spectra 387
    Aliasing 388
    Theoretical Spectra of Autoregressive Models 390
    Sampling Properties of Spectral Estimates 394
    Exercises 399

PART III Multivariate Statistics 401
  Matrix Algebra and Random Matrices 403
    Background to Multivariate Statistics 403
    Contrasts between Multivariate and Univariate Statistics 403
    Organization of Data and Basic Notation 404
    Multivariate Extensions of Common Univariate Statistics 405
    Multivariate Distance 406
    Euclidean Distance 406
    Mahalanobis (Statistical) Distance 407
    Matrix Algebra Review 408
    Vectors 409
    Matrices 411
    Eigenvalues and Eigenvectors of a Square Matrix 420
    Square Roots of a Symmetric Matrix 423
    Singular-Value Decomposition (SVD) 425
    Random Vectors and Matrices 426
    Expectations and Other Extensions of Univariate Concepts 426
    Partitioning Vectors and Matrices 427
    Linear Combinations 429
    Mahalanobis Distance, Revisited 431
    Exercises 432
  The Multivariate Normal (MVN) Distribution 435
    Definition of the MVN 435
    Four Handy Properties of the MVN 437
    Assessing Multinormality 440
    Simulation from the Multivariate Normal Distribution 444
    Simulating Independent MVN Variates 444
    Simulating Multivariate Time Series 445
    Inferences about a Multinormal Mean Vector 448
    Multivariate Central Limit Theorem 449
    Hotelling's T² 449
    Simultaneous Confidence Statements 456
    Interpretation of Multivariate Statistical Significance 459
    Exercises 462
  Principal Component (EOF) Analysis 463
    Basics of Principal Component Analysis 463
    Definition of PCA 463
    PCA Based on the Covariance Matrix vs. the Correlation Matrix 469
    The Varied Terminology of PCA 471
    Scaling Conventions in PCA 472
    Connections to the Multivariate Normal Distribution 473
    Application of PCA to Geophysical Fields 475
    PCA for a Single Field 475
    Simultaneous PCA for Multiple Fields 477
    Scaling Considerations and Equalization of Variance 479
    Domain Size Effects: Buell Patterns 480
    Truncation of the Principal Components 481
    Why Truncate the Principal Components? 481
    Subjective Truncation Criteria 482
    Rules Based on the Size of the Last Retained Eigenvalue 484
    Rules Based on Hypothesis Testing Ideas 484
    Rules Based on Structure in the Retained Principal Components 486
    Sampling Properties of the Eigenvalues and Eigenvectors 486
    Asymptotic Sampling Results for Multivariate Normal Data 486
    Effective Multiplets 488
    The North et al. Rule of Thumb 489
    Bootstrap Approximations to the Sampling Distributions 492
    Rotation of the Eigenvectors 492
    Why Rotate the Eigenvectors? 492
    Rotation Mechanics 493
    Sensitivity of Orthogonal Rotation to Initial Eigenvector Scaling 496
    Computational Considerations 499
    Direct Extraction of Eigenvalues and Eigenvectors from [S] 499
    PCA via SVD 500
    Some Additional Uses of PCA 501
    Singular Spectrum Analysis (SSA): Time-Series PCA 501
    Principal-Component Regression 504
    The Biplot 505
    Exercises 507
  Canonical Correlation Analysis (CCA) 509
    Basics of CCA 509
    Overview 509
    Canonical Variates, Canonical Vectors, and Canonical Correlations 510
    Some Additional Properties of CCA 512
    CCA Applied to Fields 517
    Translating Canonical Vectors to Maps 517
    Combining CCA with PCA 517
    Forecasting with CCA 519
    Computational Considerations 522
    Calculating CCA through Direct Eigendecomposition 522
    Calculating CCA through SVD 524
    Maximum Covariance Analysis 526
    Exercises 528
  Discrimination and Classification 529
    Discrimination vs. Classification 529
    Separating Two Populations 530
    Equal Covariance Structure: Fisher's Linear Discriminant 530
    Fisher's Linear Discriminant for Multivariate Normal Data 534
    Minimizing Expected Cost of Misclassification 535
    Unequal Covariances: Quadratic Discrimination 537
    Multiple Discriminant Analysis (MDA) 538
    Fisher's Procedure for More Than Two Groups 538
    Minimizing Expected Cost of Misclassification 541
    Probabilistic Classification 542
    Forecasting with Discriminant Analysis 544
    Alternatives to Classical Discriminant Analysis 545
    Discrimination and Classification Using Logistic Regression 545
    Discrimination and Classification Using Kernel Density Estimates 546
    Exercises 547
  Cluster Analysis 549
    Background 549
    Cluster Analysis vs. Discriminant Analysis 549
    Distance Measures and the Distance Matrix 550
    Hierarchical Clustering 551
    Agglomerative Methods Using the Distance Matrix 551
    Ward's Minimum Variance Method 552
    The Dendrogram, or Tree Diagram 553
    How Many Clusters? 554
    Divisive Methods 559
    Nonhierarchical Clustering 559
    The K-Means Method 559
    Nucleated Agglomerative Clustering 560
    Clustering Using Mixture Distributions 561
    Exercises 561

APPENDIX A Example Data Sets 565
  Table A.1. Daily precipitation and temperature data for Ithaca and Canandaigua, New York, for January 1987 565
  Table A.2. January precipitation data for Ithaca, New York, 1933–1982 566
  Table A.3. June climate data for Guayaquil, Ecuador, 1951–1970 566
APPENDIX B Probability Tables 569
  Table B.1. Cumulative Probabilities for the Standard Gaussian Distribution 569
  Table B.2. Quantiles of the Standard Gamma Distribution 571
  Table B.3. Right-tail quantiles of the Chi-square distribution 576
APPENDIX C Answers to Exercises 579
References 587
Index 611


About the author

Daniel S. Wilks has been a member of the Atmospheric Sciences faculty at Cornell University since 1987. He is the author of Statistical Methods in the Atmospheric Sciences (Academic Press), which is now in its third edition (2011) and has been continuously in print since 1995. His research areas include statistical forecasting, forecast postprocessing, and forecast evaluation.