
Program Evaluation: An Introduction 5th ed. [Paperback]

3.31/5 (238 ratings by Goodreads)
(Florida State University, Volume Editor)
  • Format: Paperback / softback, 401 pages, height x width x depth: 231x160x18 mm, weight: 544 g, Figures; Tables, black and white
  • Publication date: 01-Mar-2009
  • Publisher: Wadsworth Publishing Co Inc
  • ISBN-10: 0495601667
  • ISBN-13: 9780495601661
Other books on this topic:
  • Paperback
  • Price: 224.83 €*
  • * This book is no longer in print. You will be notified of the price of a used copy.
Praised by instructors and students alike, PROGRAM EVALUATION: AN INTRODUCTION helps your students evaluate services and programs that they will encounter in their professional practice. In the process of learning evaluation techniques and skills, students will become proficient at critically analyzing evaluation studies conducted by others. The authors present and simplify all the essentials needed for a critical appreciation of evaluation issues and methodology. The authors' clear writing style and clear presentation of concepts, as well as the text's hands-on and applied focus, will guide students on how to gather evidence and demonstrate that their interventions and programs are effective in improving clients' lives.
Preface ix
Introduction 1(33)
The Importance of Program Evaluation 1(4)
What Is a Program? 5(1)
Characteristics of "Good" Programs 5(7)
Program Evaluation Defined 12(1)
Reasons Why Programs Are Evaluated 13(1)
Motivations for Program Evaluation 14(4)
Overcoming the Subjective Perspective 18(4)
Philosophical Assumptions of Program Evaluation 22(4)
More on Positivism 26(3)
Chapter Recap 29(5)
Ethical Issues in Program Evaluation 34(21)
Historical Background: The Origin and Growth of Institutional Review Boards (IRBs) 34(1)
Ethical Guidelines 35(5)
How Ethical Guidelines Get Operationalized 40(3)
Research with Special Populations 43(1)
Ethical Issues Pertaining to Diverse Cultures and Groups 44(3)
The Practitioner-Evaluator's Ethical Responsibilities 47(5)
Concluding Thoughts on Ethics and Evaluation 52(3)
Needs Assessment 55(28)
What Is Needs Assessment? 55(2)
Definitions of Needs 57(2)
Planning a Needs Assessment 59(2)
Selecting a Needs Assessment Approach 61(8)
Convergent Analysis and Multimethods Approaches 69(3)
Thinking Creatively About Needs Assessment 72(4)
Community Readiness 76(7)
Qualitative and Mixed Methods in Evaluation 83(25)
Introduction 83(1)
What Is "Qualitative Evaluation"? 83(1)
When Is Qualitative Evaluation Useful? 84(2)
Qualitative Methods and Evaluation of Practitioners In Situ 86(1)
Methods of Qualitative Evaluation 86(7)
Managing and Organizing Data 93(1)
Data Analysis 93(1)
Quality Control 94(2)
Writing the Report 96(2)
Dissemination and Advocacy 98(4)
Examples of Qualitative Evaluations 102(6)
Formative and Process Evaluation 108(33)
Developing a Logic Model 108(4)
Formative Evaluation 112(1)
Conducting a Formative Evaluation 113(3)
Process Evaluation 116(1)
Program Description 117(4)
Program Monitoring 121(1)
Becoming a Program Monitor 122(1)
Mission Statements, Goals, and Objectives 123(4)
Writing Program Objectives 127(2)
What Should Be Monitored? 129(3)
Quality Assurance 132(2)
Total Quality Management 134(3)
Chapter Recap 137(4)
Single System Research Designs 141(34)
What Are Single System Research Designs? 141(2)
Selecting Outcome Measures 143(2)
Assessing Measures Over Time 145(1)
Needs Assessments 145(2)
Notation and General Principles 147(1)
Formative Evaluations 148(4)
Quality Assurance Studies 152(2)
Summative Evaluation Designs 154(3)
Experimental Designs 157(5)
External Validity 162(3)
Inferential Statistics 165(2)
How to Prepare Graphs 167(1)
Ethics of Single System Research Designs 168(2)
Chapter Recap 170(5)
Client Satisfaction 175(19)
The Importance of Monitoring Consumer Satisfaction 175(2)
The Problems with Client Satisfaction Studies 177(3)
A Sampling of Recent Client Satisfaction Studies 180(1)
Annotations about Client Satisfaction 181(2)
Explanations for High Ratings 183(1)
Recommendations for Client Satisfaction Studies 184(10)
Sampling 194(13)
Nonprobability Sampling Designs 194(4)
Probability (Scientific) Sampling 198(2)
Considerations in Selecting a Sample 200(1)
How Big Should the Sample Be? 201(3)
Chapter Recap 204(3)
Group Research Designs 207(48)
What Are Group Research Designs? 207(1)
Starting an Outcome Evaluation 208(2)
Outcome Evaluation Designs 210(2)
General Principles of Group Research Designs 212(1)
Pre-experimental Research Designs 212(11)
Quasi-Experimental Research Designs 223(6)
Some Threats to Internal Validity 229(1)
Protection Against Alternative Explanations 229(5)
Experimental Designs 234(7)
Efficacy and Effectiveness Studies 241(1)
What About Negative Outcome Studies? 242(3)
A Note About the Term Experiment 245(2)
Chapter Recap 247(8)
Cost-Effectiveness and Cost Analysis 255(16)
Cost as an Evaluative Criterion 255(2)
Example of a Cost-Effectiveness Evaluation 257(1)
How to Do a Cost-Effectiveness Study 258(2)
Whose Point of View? 260(2)
Cost-Benefit Analysis 262(3)
Chapter Recap 265(6)
Measurement Tools and Strategies 271(30)
Importance of Measurement 271(2)
Deciding What to Measure 273(2)
Reliability 275(10)
Locating Appropriate Instruments 285(3)
Constructing "Good" Evaluation Instruments: Questionnaire Design 288(3)
Troubleshooting Question: An Opportunity to Apply What You've Learned 291(10)
Illustrations of Instruments 301(16)
Criteria for Selecting Evaluation Instruments 302(1)
Clinical Anxiety Scale 303(2)
CES-D Scale 305(2)
Rosenberg Self-Esteem Scale 307(2)
Evaluation Self-Efficacy Scale 309(2)
Policy Advocacy Behavior Scale 311(3)
The Community Attitudes Toward Sex Offenders Scale (CATSO) 314(3)
Pragmatic Issues 317(24)
Treatment Fidelity 317(4)
Fidelity Nightmares 321(3)
Program Drift and the Transfer of Programs 324(1)
Political Nature of Evaluation 325(3)
The "Threat" of Evaluation 328(1)
Guidelines for Evaluation in Politically Charged Arenas 329(3)
Culturally Sensitive Evaluation Practice 332(4)
Final Thoughts 336(5)
Data Analysis 341(32)
What Does It Mean to Analyze Data? 341(1)
Data Analysis and the Computer 342(1)
Levels of Measurement 343(2)
Univariate Analysis 345(6)
Bivariate Analysis 351(9)
Multivariate Analysis 360(2)
Myths About Statistical Significance 362(2)
Understanding Trends 364(1)
Using Statistics in Reports 365(2)
Type I and Type II Errors 367(3)
Final Thoughts 370(3)
Writing Evaluation Proposals, Reports, and Journal Articles 373(20)
Components of the Evaluation Proposal and Report 373(10)
Considerations in Planning and Writing Evaluation Reports 383(1)
Common Mistakes Made by Students in Writing Evaluation Reports 384(3)
Checklist for Writing and Assessing Evaluation Reports 387(1)
The Utilization of Evaluation Reports 387(2)
Writing for Professional Publication 389(4)
Index 393