
Applied Univariate, Bivariate, and Multivariate Statistics: Understanding Statistics for Social and Natural Scientists, With Applications in SPSS and R 2nd edition [Hardback]

Daniel J. Denis (University of Montana)
  • Format: Hardback, 576 pages, height x width x depth: 10x10x10 mm, weight: 454 g
  • Publication date: 16-Apr-2021
  • Publisher: John Wiley & Sons Inc
  • ISBN-10: 1119583047
  • ISBN-13: 9781119583042
  • Hardback
  • Price: 137,88 €
  • Delivery time: 4-6 weeks. Delivery takes 3-4 weeks when the book is in stock at the publisher's warehouse; if the publisher needs to print a new run, delivery may be delayed.
AN UPDATED GUIDE TO STATISTICAL MODELING TECHNIQUES USED IN THE SOCIAL AND NATURAL SCIENCES

This revised and updated second edition of Applied Univariate, Bivariate, and Multivariate Statistics: Understanding Statistics for Social and Natural Scientists, with Applications in SPSS and R contains an accessible introduction to statistical modeling techniques commonly used in the social and natural sciences. The text offers a blend of statistical theory and methodology and reviews both the technical and theoretical aspects of good data analysis.

Featuring applied resources at various levels, the book includes statistical techniques using software packages such as R and SPSS®. To promote a more in-depth interpretation of statistical techniques across the sciences, the book surveys some of the technical arguments underlying formulas and equations. The second edition has been designed to be more approachable by minimizing theoretical or technical jargon and maximizing conceptual understanding with easy-to-apply software examples. This important text:

  • Offers demonstrations of statistical techniques using software packages such as R and SPSS®
  • Contains examples of hypothetical and real data with statistical analyses
  • Provides historical and philosophical insights into many of the techniques used in modern science
  • Includes a companion website that features further instructional details, additional data sets, and solutions to selected exercises

Written for students of social and applied sciences, Applied Univariate, Bivariate, and Multivariate Statistics, Second Edition offers a thorough introduction to the world of statistical modeling techniques in the sciences.
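As a taste of the kind of R demonstrations the text features (for example, the recurring "achievement as a function of teacher" analyses of Chapters 3-5), a minimal sketch on simulated data might look as follows; the data and variable names here are hypothetical illustrations, not taken from the book:

    # Simulate achievement scores for students taught by two teachers
    # (hypothetical data, for illustration only)
    set.seed(1)
    achievement <- c(rnorm(30, mean = 70, sd = 10),
                     rnorm(30, mean = 75, sd = 10))
    teacher <- factor(rep(c("A", "B"), each = 30))

    t.test(achievement ~ teacher)        # Welch two-sample t-test
    summary(aov(achievement ~ teacher))  # one-way fixed effects ANOVA
    summary(lm(achievement ~ teacher))   # the same model via R's lm()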
Preface
About the Companion Website
1 Preliminary Considerations
1.1 The Philosophical Bases of Knowledge: Rationalistic Versus Empiricist Pursuits
1.2 What is a "Model"?
1.3 Social Sciences Versus Hard Sciences
1.4 Is Complexity a Good Depiction of Reality? Are Multivariate Methods Useful?
1.5 Causality
1.6 The Nature of Mathematics: Mathematics as a Representation of Concepts
1.7 As a Scientist, How Much Mathematics Do You Need to Know?
1.8 Statistics and Relativity
1.9 Experimental Versus Statistical Control
1.10 Statistical Versus Physical Effects
1.11 Understanding What "Applied Statistics" Means
Review Exercises
Further Discussion and Activities
2 Introductory Statistics
2.1 Densities and Distributions
2.1.1 Plotting Normal Distributions
2.1.2 Binomial Distributions
2.1.3 Normal Approximation
2.1.4 Joint Probability Densities: Bivariate and Multivariate Distributions
2.2 Chi-Square Distributions and Goodness-of-Fit Test
2.2.1 Power for Chi-Square Test of Independence
2.3 Sensitivity and Specificity
2.4 Scales of Measurement: Nominal, Ordinal, Interval, Ratio
2.4.1 Nominal Scale
2.4.2 Ordinal Scale
2.4.3 Interval Scale
2.4.4 Ratio Scale
2.5 Mathematical Variables Versus Random Variables
2.6 Moments and Expectations
2.6.1 Sample and Population Mean Vectors
2.7 Estimation and Estimators
2.8 Variance
2.9 Degrees of Freedom
2.10 Skewness and Kurtosis
2.11 Sampling Distributions
2.11.1 Sampling Distribution of the Mean
2.12 Central Limit Theorem
2.13 Confidence Intervals
2.14 Maximum Likelihood
2.15 Akaike's Information Criteria
2.16 Covariance and Correlation
2.17 Psychometric Validity, Reliability: A Common Use of Correlation Coefficients
2.18 Covariance and Correlation Matrices
2.19 Other Correlation Coefficients
2.20 Student's t Distribution
2.20.1 t-Tests for One Sample
2.20.2 t-Tests for Two Samples
2.20.3 Two-Sample t-Tests in R
2.21 Statistical Power
2.21.1 Visualizing Power
2.22 Power Estimation Using R and G*Power
2.22.1 Estimating Sample Size and Power for Independent-Samples t-Tests
2.23 Paired-Samples t-Test: Statistical Test for Matched-Pairs (Elementary Blocking) Designs
2.24 Blocking With Several Conditions
2.25 Composite Variables: Linear Combinations
2.26 Models in Matrix Form
2.27 Graphical Approaches
2.27.1 Box-and-Whisker Plots
2.28 What Makes a p-Value Small? A Critical Overview and Practical Demonstration of Null Hypothesis Significance Testing
2.28.1 Null Hypothesis Significance Testing (NHST): A Legacy of Criticism
2.28.2 The Make-Up of a p-Value: A Brief Recap and Summary
2.28.3 The Issue of Standardized Testing: Are Students in Your School Achieving More than the National Average?
2.28.4 Other Test Statistics
2.28.5 The Solution
2.28.6 Statistical Distance: Cohen's d
2.28.7 What Does Cohen's d Actually Tell Us?
2.28.8 Why and Where the Significance Test Still Makes Sense
2.29 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
3 Analysis Of Variance: Fixed Effects Models
3.1 What is Analysis of Variance? Fixed Versus Random Effects
3.1.1 Small Sample Example: Achievement as a Function of Teacher
3.1.2 Is Achievement a Function of Teacher?
3.2 How Analysis of Variance Works: A Big Picture Overview
3.2.1 Is the Observed Difference Likely? ANOVA as a Comparison (Ratio) of Variances
3.3 Logic and Theory of ANOVA: A Deeper Look
3.3.1 Independent-Samples t-Tests Versus Analysis of Variance
3.3.2 The ANOVA Model: Explaining Variation
3.3.3 Breaking Down a Deviation
3.3.4 Naming the Deviations
3.3.5 The Sums of Squares of ANOVA
3.4 From Sums of Squares to Unbiased Variance Estimators: Dividing by Degrees of Freedom
3.5 Expected Mean Squares for One-Way Fixed Effects Model: Deriving the F-ratio
3.6 The Null Hypothesis in ANOVA
3.7 Fixed Effects ANOVA: Model Assumptions
3.8 A Word on Experimental Design and Randomization
3.9 A Preview of the Concept of Nesting
3.10 Balanced Versus Unbalanced Data in ANOVA Models
3.11 Measures of Association and Effect Size in ANOVA: Measures of Variance Explained
3.11.1 η² (Eta-Squared)
3.11.2 Omega-Squared
3.12 The F-Test and the Independent Samples t-Test
3.13 Contrasts and Post-Hocs
3.13.1 Independence of Contrasts
3.13.2 Independent Samples t-Test as a Linear Contrast
3.14 Post-Hoc Tests
3.14.1 Newman-Keuls and Tukey HSD
3.14.2 Tukey HSD
3.14.3 Scheffé Test
3.14.4 Other Post-Hoc Tests
3.14.5 Contrast Versus Post-Hoc? Which Should I be Doing?
3.15 Sample Size and Power for ANOVA: Estimation With R and G*Power
3.15.1 Power for ANOVA in R and G*Power
3.15.2 Computing
3.16 Fixed Effects One-Way Analysis of Variance in R: Mathematics Achievement as a Function of Teacher
3.16.1 Evaluating Assumptions
3.16.2 Post-Hoc Tests on Teacher
3.17 Analysis of Variance Via R's lm
3.18 Kruskal-Wallis Test in R and the Motivation Behind Nonparametric Tests
3.19 ANOVA in SPSS: Achievement as a Function of Teacher
3.20 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
4 Factorial Analysis Of Variance: Modeling Interactions
4.1 What is Factorial Analysis of Variance?
4.2 Theory of Factorial ANOVA: A Deeper Look
4.2.1 Deriving the Model for Two-Way Factorial ANOVA
4.2.2 Cell Effects
4.2.3 Interaction Effects
4.2.4 Cell Effects Versus Interaction Effects
4.2.5 A Model for the Two-Way Fixed Effects ANOVA
4.3 Comparing One-Way ANOVA to Two-Way ANOVA: Cell Effects in Factorial ANOVA Versus Sample Effects in One-Way ANOVA
4.4 Partitioning the Sums of Squares for Factorial ANOVA: The Case of Two Factors
4.4.1 SS Total: A Measure of Total Variation
4.4.2 Model Assumptions: Two-Way Factorial Model
4.4.3 Expected Mean Squares for Factorial Design
4.4.4 Recap of Expected Mean Squares
4.5 Interpreting Main Effects in the Presence of Interactions
4.6 Effect Size Measures
4.7 Three-Way, Four-Way, and Higher Models
4.8 Simple Main Effects
4.9 Nested Designs
4.9.1 Varieties of Nesting: Nesting of Levels Versus Subjects
4.10 Achievement as a Function of Teacher and Textbook: Example of Factorial ANOVA in R
4.10.1 Comparing Models Through AIC
4.10.2 Visualizing Main Effects and Interaction Effects Simultaneously
4.10.3 Simple Main Effects for Achievement Data: Breaking Down Interaction Effects
4.11 Interaction Contrasts
4.12 Chapter Summary and Highlights
Review Exercises
5 Introduction To Random Effects And Mixed Models
5.1 What is Random Effects Analysis of Variance?
5.2 Theory of Random Effects Models
5.3 Estimation in Random Effects Models
5.3.1 Transitioning from Fixed Effects to Random Effects
5.3.2 Expected Mean Squares for MS Between and MS Within
5.4 Defining Null Hypotheses in Random Effects Models
5.4.1 F-Ratio for Testing H0
5.5 Comparing Null Hypotheses in Fixed Versus Random Effects Models: The Importance of Assumptions
5.6 Estimating Variance Components in Random Effects Models: ANOVA, ML, REML Estimators
5.6.1 ANOVA Estimators of Variance Components
5.6.2 Maximum Likelihood and Restricted Maximum Likelihood
5.7 Is Achievement a Function of Teacher? One-Way Random Effects Model in R
5.7.1 Proportion of Variance Accounted for by Teacher
5.8 R Analysis Using REML
5.9 Analysis in SPSS: Obtaining Variance Components
5.10 Factorial Random Effects: A Two-Way Model
5.11 Fixed Effects Versus Random Effects: A Way of Conceptualizing Their Differences
5.12 Conceptualizing the Two-Way Random Effects Model: The Make-Up of a Randomly Chosen Observation
5.13 Sums of Squares and Expected Mean Squares for Random Effects: The Contaminating Influence of Interaction Effects
5.13.1 Testing Null Hypotheses
5.14 You Get What You Go In With: The Importance of Model Assumptions and Model Selection
5.15 Mixed Model Analysis of Variance: Incorporating Fixed and Random Effects
5.15.1 Mixed Model in R
5.16 Mixed Models in Matrices
5.17 Multilevel Modeling as a Special Case of the Mixed Model: Incorporating Nesting and Clustering
5.18 Chapter Summary and Highlights
Review Exercises
6 Randomized Blocks And Repeated Measures
6.1 What is a Randomized Block Design?
6.2 Randomized Block Designs: Subjects Nested Within Blocks
6.3 Theory of Randomized Block Designs
6.3.1 Nonadditive Randomized Block Design
6.3.2 Additive Randomized Block Design
6.4 Tukey Test for Nonadditivity
6.5 Assumptions for the Covariance Matrix
6.6 Intraclass Correlation
6.7 Repeated Measures Models: A Special Case of Randomized Block Designs
6.8 Independent Versus Paired-Samples t-Test
6.9 The Subject Factor: Fixed or Random Effect?
6.10 Model for One-Way Repeated Measures Design
6.10.1 Expected Mean Squares for Repeated Measures Models
6.11 Analysis Using R: One-Way Repeated Measures: Learning as a Function of Trial
6.12 Analysis Using SPSS: One-Way Repeated Measures: Learning as a Function of Trial
6.12.1 Which Results Should Be Interpreted?
6.13 SPSS Two-Way Repeated Measures Analysis of Variance Mixed Design: One Between Factor, One Within Factor
6.13.1 Another Look at the Between-Subjects Factor
6.14 Chapter Summary and Highlights
Review Exercises
7 Linear Regression
7.1 Brief History of Regression
7.2 Regression Analysis and Science: Experimental Versus Correlational Distinctions
7.3 A Motivating Example: Can Offspring Height Be Predicted?
7.4 Theory of Regression Analysis: A Deeper Look
7.5 Multilevel Yearnings
7.6 The Least-Squares Line
7.7 Making Predictions Without Regression
7.8 More about εi
7.9 Model Assumptions for Linear Regression
7.9.1 Model Specification
7.9.2 Measurement Error
7.10 Estimation of Model Parameters in Regression
7.10.1 Ordinary Least-Squares (OLS)
7.11 Null Hypotheses for Regression
7.12 Significance Tests and Confidence Intervals for Model Parameters
7.13 Other Formulations of the Regression Model
7.14 The Regression Model in Matrices: Allowing for More Complex Multivariable Models
7.15 Ordinary Least-Squares in Matrices
7.16 Analysis of Variance for Regression
7.17 Measures of Model Fit for Regression: How Well Does the Linear Equation Fit?
7.18 Adjusted R²
7.19 What "Explained Variance" Means and More Importantly, What It Does Not Mean
7.20 Values Fit by Regression
7.21 Least-Squares Regression in R: Using Matrix Operations
7.22 Linear Regression Using R
7.23 Regression Diagnostics: A Check on Model Assumptions
7.23.1 Understanding How Outliers Influence a Regression Model
7.23.2 Examining Outliers and Residuals
7.23.3 Detecting Outliers
7.23.4 Normality of Residuals
7.24 Regression in SPSS: Predicting Quantitative from Verbal
7.25 Power Analysis for Linear Regression in R
7.26 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
8 Multiple Linear Regression
8.1 Theory of Partial Correlation
8.2 Semipartial Correlations
8.3 Multiple Regression
8.4 Some Perspective on Regression Coefficients: "Experimental Coefficients"?
8.5 Multiple Regression Model in Matrices
8.6 Estimation of Parameters
8.7 Conceptualizing Multiple R
8.8 Interpreting Regression Coefficients: Correlated Versus Uncorrelated Predictors
8.9 Anderson's Iris Data: Predicting Sepal Length From Petal Length and Petal Width
8.10 Fitting Other Functional Forms: A Brief Look at Polynomial Regression
8.11 Measures of Collinearity in Regression: Variance Inflation Factor and Tolerance
8.12 R-squared as a Function of Partial and Semipartial Correlations: The Stepping Stones to Forward and Stepwise Regression
8.13 Model-Building Strategies: Simultaneous, Hierarchical, Forward, Stepwise
8.13.1 Simultaneous, Hierarchical, Forward
8.13.2 Stepwise Regression
8.13.3 Selection Procedures in R
8.13.4 Which Regression Procedure Should Be Used? Concluding Comments and Recommendations Regarding Model-Building
8.14 Power Analysis for Multiple Regression
8.15 Introduction to Statistical Mediation: Concepts and Controversy
8.15.1 Statistical Versus True Mediation: Some Philosophical Pitfalls in the Interpretation of Mediation Analysis
8.16 Brief Survey of Ridge and Lasso Regression: Penalized Regression Models and the Concept of Shrinkage
8.17 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
9 Interactions In Multiple Linear Regression
9.1 The Additive Regression Model With Two Predictors
9.2 Why the Interaction is the Product Term XiZi: Drawing an Analogy to Factorial ANOVA
9.3 A Motivating Example of Interaction in Regression: Crossing a Continuous Predictor With a Dichotomous Predictor
9.4 Analysis of Covariance
9.4.1 Is ANCOVA "Controlling" for Anything?
9.5 Continuous Moderators
9.6 Summing Up the Idea of Interactions in Regression
9.7 Do Moderators Really "Moderate" Anything?
9.7.1 Some Philosophical Considerations
9.8 Interpreting Model Coefficients in the Context of Moderators
9.9 Mean-Centering Predictors: Improving the Interpretability of Simple Slopes
9.10 Multilevel Regression: Another Special Case of the Mixed Model
9.11 Chapter Summary and Highlights
Review Exercises
10 Logistic Regression And The Generalized Linear Model
10.1 Nonlinear Models
10.2 Generalized Linear Models
10.2.1 The Logic of the Generalized Linear Model: How the Link Function Transforms Nonlinear Response Variables
10.3 Canonical Links
10.3.1 Canonical Link for Gaussian Variable
10.4 Distributions and Generalized Linear Models
10.4.1 Logistic Models
10.4.2 Poisson Models
10.5 Dispersion Parameters and Deviance
10.6 Logistic Regression
10.6.1 A Generalized Linear Model for Binary Responses
10.6.2 Model for Single Predictor
10.7 Exponential and Logarithmic Functions
10.7.1 Logarithms
10.7.2 The Natural Logarithm
10.8 Odds and the Logit
10.9 Putting It All Together: Logistic Regression
10.9.1 The Logistic Regression Model
10.9.2 Interpreting the Logit: A Survey of Logistic Regression Output
10.10 Logistic Regression in R
10.10.1 Challenger O-ring Data
10.11 Challenger Analysis in SPSS
10.11.1 Predictions of New Cases
10.12 Sample Size, Effect Size, and Power
10.13 Further Directions
10.14 Chapter Summary and Highlights
Review Exercises
11 Multivariate Analysis Of Variance
11.1 A Motivating Example: Quantitative and Verbal Ability as a Variate
11.2 Constructing the Composite
11.3 Theory of MANOVA
11.4 Is the Linear Combination Meaningful?
11.4.1 Control Over Type I Error Rate
11.4.2 Covariance Among Dependent Variables
11.4.3 Rao's Paradox
11.5 Multivariate Hypotheses
11.6 Assumptions of MANOVA
11.7 Hotelling's T²: The Case of Generalizing From Univariate to Multivariate
11.8 The Covariance Matrix S
11.9 From Sums of Squares and Cross-Products to Variances and Covariances
11.10 Hypothesis and Error Matrices of MANOVA
11.11 Multivariate Test Statistics
11.11.1 Pillai's Trace
11.11.2 Lawley-Hotelling's Trace
11.12 Equality of Covariance Matrices
11.13 Multivariate Contrasts
11.14 MANOVA in R and SPSS
11.14.1 Univariate Analyses
11.15 MANOVA of Fisher's Iris Data
11.16 Power Analysis and Sample Size for MANOVA
11.17 Multivariate Analysis of Covariance and Multivariate Models: A Bird's Eye View of Linear Models
11.18 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
12 Discriminant Analysis
12.1 What is Discriminant Analysis? The Big Picture on the Iris Data
12.2 Theory of Discriminant Analysis
12.2.1 Discriminant Analysis for Two Populations
12.2.2 Substituting the Maximizing Vector into Squared Standardized Difference
12.3 LDA in R and SPSS
12.4 Discriminant Analysis for Several Populations
12.4.1 Theory for Several Populations
12.5 Discriminating Species of Iris: Discriminant Analyses for Three Populations
12.6 A Note on Classification and Error Rates
12.6.1 Statistical Lives
12.7 Discriminant Analysis and Beyond
12.8 Canonical Correlation
12.9 Motivating Example for Canonical Correlation: Hotelling's 1936 Data
12.10 Canonical Correlation as a General Linear Model
12.11 Theory of Canonical Correlation
12.12 Canonical Correlation of Hotelling's Data
12.13 Canonical Correlation on the Iris Data: Extracting Canonical Correlation From Regression, MANOVA, LDA
12.14 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
13 Principal Components Analysis
13.1 History of Principal Components Analysis
13.2 Hotelling 1933
13.3 Theory of Principal Components Analysis
13.3.1 The Theorem of Principal Components Analysis
13.4 Eigenvalues as Variance
13.5 Principal Components as Linear Combinations
13.6 Extracting the First Component
13.6.1 Sample Variance of a Linear Combination
13.7 Extracting the Second Component
13.8 Extracting Third and Remaining Components
13.9 The Eigenvalue as the Variance of a Linear Combination Relative to its Length
13.10 Demonstrating Principal Components Analysis: Pearson's 1901 Illustration
13.11 Scree Plots
13.12 Principal Components Versus Least-Squares Regression Lines
13.13 Covariance Versus Correlation Matrices: Principal Components and Scaling
13.14 Principal Components Analysis Using SPSS
13.15 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
14 Factor Analysis
14.1 History of Factor Analysis
14.2 Factor Analysis at a Glance
14.3 Exploratory Versus Confirmatory Factor Analysis
14.4 Theory of Factor Analysis: The Exploratory Factor-Analytic Model
14.5 The Common Factor-Analytic Model
14.6 Assumptions of the Factor-Analytic Model
14.7 Why Model Assumptions are Important
14.8 The Factor Model as an Implication for the Covariance Matrix Σ
14.9 Again, Why is Σ = ΛΛ' + Ψ So Important a Result?
14.10 The Major Critique Against Factor Analysis: Indeterminacy and the Nonuniqueness of Solutions
14.11 Has Your Factor Analysis Been Successful?
14.12 Estimation of Parameters in Exploratory Factor Analysis
14.13 Principal Factor
14.14 Maximum Likelihood
14.15 The Concepts (and Criticisms) of Factor Rotation
14.16 Varimax and Quartimax Rotation
14.17 Should Factors Be Rotated? Is That Not Cheating?
14.18 Sample Size for Factor Analysis
14.19 Principal Components Analysis Versus Factor Analysis: Two Key Differences
14.19.1 Hypothesized Model and Underlying Theoretical Assumptions
14.19.2 Solutions are Not Invariant in Factor Analysis
14.20 Principal Factor in SPSS: Principal Axis Factoring
14.21 Bartlett Test of Sphericity and Kaiser-Meyer-Olkin Measure of Sampling Adequacy (MSA)
14.22 Factor Analysis in R: Holzinger and Swineford (1939)
14.23 Cluster Analysis
14.24 What is Cluster Analysis? The Big Picture
14.25 Measuring Proximity
14.26 Hierarchical Clustering Approaches
14.27 Nonhierarchical Clustering Approaches
14.28 K-Means Cluster Analysis in R
14.29 Guidelines and Warnings About Cluster Analysis
14.30 A Brief Look at Multidimensional Scaling
14.31 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
15 Path Analysis And Structural Equation Modeling
15.1 Path Analysis: A Motivating Example – Predicting IQ Across Generations
15.2 Path Analysis and "Causal Modeling"
15.3 Early Post-Wright Path Analysis: Predicting Child's IQ (Burks, 1928)
15.4 Decomposing Path Coefficients
15.5 Path Coefficients and Wright's Contribution
15.6 Path Analysis in R – A Quick Overview: Modeling Galton's Data
15.6.1 Path Model in AMOS
15.7 Confirmatory Factor Analysis: The Measurement Model
15.7.1 Confirmatory Factor Analysis as a Means of Evaluating Construct Validity and Assessing Psychometric Qualities
15.8 Structural Equation Models
15.9 Direct, Indirect, and Total Effects
15.10 Theory of Statistical Modeling: A Deeper Look Into Covariance Structures and General Modeling
15.11 The Discrepancy Function and Chi-Square
15.12 Identification
15.13 Disturbance Variables
15.14 Measures and Indicators of Model Fit
15.15 Overall Measures of Model Fit
15.15.1 Root Mean Square Residual and Standardized Root Mean Square Residual
15.15.2 Root Mean Square Error of Approximation
15.16 Model Comparison Measures: Incremental Fit Indices
15.17 Which Indicator of Model Fit is Best?
15.18 Structural Equation Model in R
15.19 How All Variables Are Latent: A Suggestion for Resolving the Manifest-Latent Distinction
15.20 The Structural Equation Model as a General Model: Some Concluding Thoughts on Statistics and Science
15.21 Chapter Summary and Highlights
Review Exercises
Further Discussion and Activities
References
Index
DANIEL J. DENIS, PhD, is Professor of Quantitative Psychology at the University of Montana, where he teaches courses in univariate and multivariate statistics. He has published a number of articles in peer-reviewed journals and has served as a consultant to researchers and practitioners in a variety of fields.