
Statistical Methods for Psychology 8th Revised edition [Hardback]

3.52/5 (154 ratings by Goodreads)
  • Format: Hardback, 792 pages, height x width x depth: 257x206x36 mm, weight: 1657 g
  • Publication date: 01-Jan-2012
  • Publisher: Wadsworth Publishing Co Inc
  • ISBN-10: 1111835489
  • ISBN-13: 9781111835484
STATISTICAL METHODS FOR PSYCHOLOGY surveys the statistical techniques commonly used in the behavioral and social sciences, especially psychology and education. To help students gain a better understanding of the specific statistical hypothesis tests that are covered throughout the text, author David Howell emphasizes conceptual understanding. Along with significantly updated discussions of effect size and meta-analysis, this Eighth Edition continues to focus on two key themes that are the cornerstones of this book's success: the importance of looking at the data before beginning a hypothesis test, and the importance of knowing the relationship between the statistical test in use and the theoretical questions being asked by the experiment.
Preface xv
About the Author xix
Chapter 1 Basic Concepts 1(14)
1.1 Important Terms 2(3)
1.2 Descriptive and Inferential Statistics 5(1)
1.3 Measurement Scales 6(2)
1.4 Using Computers 8(1)
1.5 What You Should Know about this Edition 9(6)
Chapter 2 Describing and Exploring Data 15(48)
2.1 Plotting Data 16(2)
2.2 Histograms 18(3)
2.3 Fitting Smoothed Lines to Data 21(3)
2.4 Stem-and-Leaf Displays 24(3)
2.5 Describing Distributions 27(3)
2.6 Notation 30(2)
2.7 Measures of Central Tendency 32(3)
2.8 Measures of Variability 35(12)
2.9 Boxplots: Graphical Representations of Dispersions and Extreme Scores 47(4)
2.10 Obtaining Measures of Dispersion Using SPSS 51(1)
2.11 Percentiles, Quartiles, and Deciles 51(1)
2.12 The Effect of Linear Transformations on Data 52(11)
Chapter 3 The Normal Distribution 63(20)
3.1 The Normal Distribution 66(3)
3.2 The Standard Normal Distribution 69(2)
3.3 Using the Tables of the Standard Normal Distribution 71(3)
3.4 Setting Probable Limits on an Observation 74(1)
3.5 Assessing Whether Data are Normally Distributed 75(3)
3.6 Measures Related to z 78(5)
Chapter 4 Sampling Distributions and Hypothesis Testing 83(24)
4.1 Two Simple Examples Involving Course Evaluations and Rude Motorists 84(2)
4.2 Sampling Distributions 86(2)
4.3 Theory of Hypothesis Testing 88(2)
4.4 The Null Hypothesis 90(3)
4.5 Test Statistics and Their Sampling Distributions 93(1)
4.6 Making Decisions About the Null Hypothesis 93(1)
4.7 Type I and Type II Errors 94(3)
4.8 One- and Two-Tailed Tests 97(2)
4.9 What Does it Mean to Reject the Null Hypothesis? 99(1)
4.10 An Alternative View of Hypothesis Testing 99(2)
4.11 Effect Size 101(1)
4.12 A Final Worked Example 102(1)
4.13 Back to Course Evaluations and Rude Motorists 103(4)
Chapter 5 Basic Concepts of Probability 107(30)
5.1 Probability 108(2)
5.2 Basic Terminology and Rules 110(4)
5.3 Discrete versus Continuous Variables 114(1)
5.4 Probability Distributions for Discrete Variables 115(1)
5.5 Probability Distributions for Continuous Variables 115(2)
5.6 Permutations and Combinations 117(3)
5.7 Bayes' Theorem 120(4)
5.8 The Binomial Distribution 124(4)
5.9 Using the Binomial Distribution to Test Hypotheses 128(3)
5.10 The Multinomial Distribution 131(6)
Chapter 6 Categorical Data and Chi-Square 137(40)
6.1 The Chi-Square Distribution 138(1)
6.2 The Chi-Square Goodness-of-Fit Test-One-Way Classification 139(5)
6.3 Two Classification Variables: Contingency Table Analysis 144(4)
6.4 An Additional Example-A 4 X 2 Design 148(4)
6.5 Chi-Square for Ordinal Data 152(1)
6.6 Summary of the Assumptions of Chi-Square 153(1)
6.7 Dependent or Repeated Measures 154(2)
6.8 One- and Two-Tailed Tests 156(1)
6.9 Likelihood Ratio Tests 157(1)
6.10 Mantel-Haenszel Statistic 158(2)
6.11 Effect Sizes 160(6)
6.12 Measure of Agreement 166(1)
6.13 Writing up the Results 167(10)
Chapter 7 Hypothesis Tests Applied to Means 177(52)
7.1 Sampling Distribution of the Mean 178(3)
7.2 Testing Hypotheses About Means-σ Known 181(2)
7.3 Testing a Sample Mean When σ is Unknown-The One-Sample t Test 183(14)
7.4 Hypothesis Tests Applied to Means-Two Matched Samples 197(9)
7.5 Hypothesis Tests Applied to Means-Two Independent Samples 206(11)
7.6 Heterogeneity of Variance: the Behrens-Fisher Problem 217(3)
7.7 Hypothesis Testing Revisited 220(9)
Chapter 8 Power 229(22)
8.1 The Basic Concept of Power 231(1)
8.2 Factors Affecting the Power of a Test 232(2)
8.3 Calculating Power the Traditional Way 234(2)
8.4 Power Calculations for the One-Sample t 236(2)
8.5 Power Calculations for Differences Between Two Independent Means 238(3)
8.6 Power Calculations for Matched-Sample t 241(1)
8.7 Turning the Tables on Power 242(1)
8.8 Power Considerations in More Complex Designs 243(1)
8.9 The Use of G*Power to Simplify Calculations 243(2)
8.10 Retrospective Power 245(2)
8.11 Writing Up the Results of a Power Analysis 247(4)
Chapter 9 Correlation and Regression 251(52)
9.1 Scatterplot 253(2)
9.2 The Relationship Between Pace of Life and Heart Disease 255(2)
9.3 The Relationship Between Stress and Health 257(1)
9.4 The Covariance 258(2)
9.5 The Pearson Product-Moment Correlation Coefficient (r) 260(1)
9.6 The Regression Line 261(5)
9.7 Other Ways of Fitting a Line to Data 266(1)
9.8 The Accuracy of Prediction 266(6)
9.9 Assumptions Underlying Regression and Correlation 272(2)
9.10 Confidence Limits on Y 274(3)
9.11 A Computer Example Showing the Role of Test-Taking Skills 277(3)
9.12 Hypothesis Testing 280(8)
9.13 One Final Example 288(2)
9.14 The Role of Assumptions in Correlation and Regression 290(1)
9.15 Factors that Affect the Correlation 291(2)
9.16 Power Calculation for Pearson's r 293(10)
Chapter 10 Alternative Correlational Techniques 303(22)
10.1 Point-Biserial Correlation and Phi: Pearson Correlations by Another Name 304(9)
10.2 Biserial and Tetrachoric Correlation: Non-Pearson Correlation Coefficients 313(1)
10.3 Correlation Coefficients for Ranked Data 313(4)
10.4 Analysis of Contingency Tables with Ordered Data 317(3)
10.5 Kendall's Coefficient of Concordance (W) 320(5)
Chapter 11 Simple Analysis of Variance 325(44)
11.1 An Example 326(1)
11.2 The Underlying Model 327(2)
11.3 The Logic of the Analysis of Variance 329(3)
11.4 Calculations in the Analysis of Variance 332(6)
11.5 Writing Up the Results 338(1)
11.6 Computer Solutions 339(2)
11.7 Unequal Sample Sizes 341(2)
11.8 Violations of Assumptions 343(3)
11.9 Transformations 346(7)
11.10 Fixed versus Random Models 353(1)
11.11 The Size of an Experimental Effect 353(4)
11.12 Power 357(4)
11.13 Computer Analyses 361(8)
Chapter 12 Multiple Comparisons Among Treatment Means 369(42)
12.1 Error Rates 370(3)
12.2 Multiple Comparisons in a Simple Experiment on Morphine Tolerance 373(3)
12.3 A Priori Comparisons 376(12)
12.4 Confidence Intervals and Effect Sizes for Contrasts 388(3)
12.5 Reporting Results 391(1)
12.6 Post Hoc Comparisons 391(2)
12.7 Tukey's Test 393(5)
12.8 Which Test? 398(1)
12.9 Computer Solutions 398(3)
12.10 Trend Analysis 401(10)
Chapter 13 Factorial Analysis of Variance 411(46)
13.1 An Extension of the Eysenck Study 414(4)
13.2 Structural Models and Expected Mean Squares 418(1)
13.3 Interactions 419(1)
13.4 Simple Effects 420(3)
13.5 Analysis of Variance Applied to the Effects of Smoking 423(3)
13.6 Comparisons Among Means 426(1)
13.7 Power Analysis for Factorial Experiments 427(3)
13.8 Alternative Experimental Designs 430(7)
13.9 Measures of Association and Effect Size 437(6)
13.10 Reporting the Results 443(1)
13.11 Unequal Sample Sizes 444(2)
13.12 Higher-Order Factorial Designs 446(5)
13.13 A Computer Example 451(6)
Chapter 14 Repeated-Measures Designs 457(50)
14.1 The Structural Model 460(1)
14.2 F Ratios 460(1)
14.3 The Covariance Matrix 461(1)
14.4 Analysis of Variance Applied to Relaxation Therapy 462(3)
14.5 Contrasts and Effect Sizes in Repeated Measures Designs 465(1)
14.6 Writing Up the Results 466(1)
14.7 One Between-Subjects Variable and One Within-Subjects Variable 467(11)
14.8 Two Between-Subjects Variables and One Within-Subjects Variable 478(6)
14.9 Two Within-Subjects Variables and One Between-Subjects Variable 484(5)
14.10 Intraclass Correlation 489(2)
14.11 Other Considerations With Repeated Measures Analyses 491(1)
14.12 Mixed Models for Repeated-Measures Designs 492(15)
Chapter 15 Multiple Regression 507(66)
15.1 Multiple Linear Regression 508(11)
15.2 Using Additional Predictors 519(2)
15.3 Standard Errors and Tests of Regression Coefficients 521(1)
15.4 A Resampling Approach 522(2)
15.5 Residual Variance 524(1)
15.6 Distribution Assumptions 524(1)
15.7 The Multiple Correlation Coefficient 525(2)
15.8 Partial and Semipartial Correlation 527(4)
15.9 Suppressor Variables 531(1)
15.10 Regression Diagnostics 532(7)
15.11 Constructing a Regression Equation 539(4)
15.12 The "Importance" of Individual Variables 543(2)
15.13 Using Approximate Regression Coefficients 545(1)
15.14 Mediating and Moderating Relationships 546(10)
15.15 Logistic Regression 556(17)
Chapter 16 Analyses of Variance and Covariance as General Linear Models 573(50)
16.1 The General Linear Model 574(3)
16.2 One-Way Analysis of Variance 577(3)
16.3 Factorial Designs 580(7)
16.4 Analysis of Variance with Unequal Sample Sizes 587(7)
16.5 The One-Way Analysis of Covariance 594(10)
16.6 Computing Effect Sizes in an Analysis of Covariance 604(2)
16.7 Interpreting an Analysis of Covariance 606(1)
16.8 Reporting the Results of an Analysis of Covariance 607(1)
16.9 The Factorial Analysis of Covariance 607(8)
16.10 Using Multiple Covariates 615(1)
16.11 Alternative Experimental Designs 616(7)
Chapter 17 Meta-Analysis and Single-Case Designs 623(34)
Meta-Analysis 624(17)
17.1 A Brief Review of Effect Size Measures 625(3)
17.2 An Example-Child and Adolescent Depression 628(10)
17.3 A Second Example-Nicotine Gum and Smoking Cessation 638(3)
Single-Case Designs 641(16)
17.4 Analyses that Examine Standardized Mean Differences 641(1)
17.5 A Case Study of Depression 642(4)
17.6 A Second Approach to a Single-Case Design-Using Piecewise Regression 646(11)
Chapter 18 Resampling and Nonparametric Approaches to Data 657(28)
18.1 Bootstrapping as a General Approach 659(2)
18.2 Bootstrapping with One Sample 661(1)
18.3 Bootstrapping Confidence Limits on a Correlation Coefficient 662(3)
18.4 Resampling Tests with Two Paired Samples 665(2)
18.5 Resampling Tests with Two Independent Samples 667(1)
18.6 Wilcoxon's Rank-Sum Test 668(5)
18.7 Wilcoxon's Matched-Pairs Signed-Ranks Test 673(4)
18.8 The Sign Test 677(1)
18.9 Kruskal-Wallis One-Way Analysis of Variance 678(1)
18.10 Friedman's Rank Test for k Correlated Samples 679(6)
Appendices 685(34)
References 719(14)
Answers to Exercises 733(24)
Index 757