
Program Evaluation: An Introduction to an Evidence-Based Approach 6th edition [Paperback]

3.31/5 (246 ratings by Goodreads)
David Royse (University of Kentucky), Bruce A. Thyer (Florida State University), Deborah K. Padgett (New York University)
  • Format: Paperback / softback, 432 pages, height x width x depth: 254x175x20 mm, weight: 703 g
  • Publication date: 05-Jan-2015
  • Publisher: Brooks/Cole
  • ISBN-10: 1305101960
  • ISBN-13: 9781305101968
  • Paperback
  • Price: 94,74 €*
  • * This is the final price, i.e., no additional discounts apply
  • Standard price: 118,43 €
  • Save 20%
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may be delayed.
PROGRAM EVALUATION, 6th Edition helps readers learn how to evaluate services and programs that they will encounter in their professional practice. In the process of learning evaluation techniques and skills, readers will become proficient at critically analyzing evaluation studies conducted by others. The authors present and simplify all the essentials needed for a critical appreciation of evaluation issues and methodology. The book's clear writing style and presentation of concepts, as well as its hands-on, applied focus, will guide readers in gathering evidence and demonstrating that their interventions and programs are effective in improving clients' lives. This edition's up-to-date coverage includes a greater number of references to current literature, emphasizing that consulting the literature is an important step in recognizing, developing, and evaluating evidence-based or research-informed practice.
Preface xi
Chapter 1 Introduction 1(40)
  The Importance of Program Evaluation 1(7)
  What Is a Program? 8(1)
  Characteristics of "Good" Programs 9(3)
  Evidence-Based Practice 12(6)
  Program Evaluation Defined 18(1)
  Reasons Why Programs Are Evaluated 19(1)
  Motivations for Program Evaluation 20(4)
  Overcoming the Subjective Perspective 24(5)
  Philosophical Assumptions of Program Evaluation 29(4)
  More on Positivism 33(3)
  Chapter Recap 36(5)
Chapter 2 Ethical Issues in Program Evaluation 41(22)
  The Role of Institutional Review Boards (IRBs) 42(1)
  Ethical Guidelines 42(5)
  How Ethical Guidelines Get Operationalized 47(2)
  Research with Special Populations 49(2)
  Ethical Issues Pertaining to Diverse Cultures and Groups 51(1)
  Ethical Boundaries in Participatory and Action Research 52(4)
  The Practitioner-Evaluator's Ethical Responsibilities 56(1)
  Concluding Thoughts on Ethics and Evaluation 57(6)
Chapter 3 Needs Assessment 63(30)
  Types of Needs Assessment 65(3)
  How Does a Needs Assessment Start? 68(2)
  What Are the Steps Involved in Conducting a Needs Assessment? 70(1)
  How Do I Select a Needs Assessment Approach? 71(7)
  What Do I Need To Know About Surveys? 78(3)
  Special Considerations: Converging the Data 81(2)
  Thinking Creatively About Needs Assessment 83(1)
  What Is a Community Readiness Study? 84(9)
Chapter 4 Qualitative and Mixed Methods in Evaluation 93(28)
  Introduction 93(1)
  What Is "Qualitative Evaluation"? 94(1)
  What Is "Mixed Methods Evaluation"? 94(1)
  When Is Qualitative Evaluation Useful? When Is Mixed Methods Evaluation the Right Choice? 95(3)
  Qualitative Methods and Evaluation of Practitioners In Situ 98(1)
  Qualitative Methods Commonly Used in Evaluation Research 99(1)
  Qualitative Evaluation 99(1)
  Designing a Mixed Methods Evaluation 100(4)
  Managing and Organizing Qualitative Data 104(1)
  Data Analysis 105(2)
  Mixed Methods Data Analysis and Integration 107(1)
  Quality Control 108(2)
  Writing the Report 110(1)
  Dissemination and Advocacy 111(2)
  Examples of Qualitative Evaluations 113(8)
Chapter 5 What Are Formative and Process Evaluation? 121(34)
  How Do I Develop a Logic Model? 122(3)
  A Realistic Scenario 125(1)
  Formative Evaluation 125(1)
  Conducting a Formative Evaluation 126(3)
  What Is Process Evaluation? 129(2)
  Process Evaluation: Program Description Function 131(5)
  Process Evaluation: Program Monitoring 136(1)
  Becoming a Program Monitor 136(2)
  Mission Statements, Goals, and Objectives 138(3)
  Writing Program Objectives 141(2)
  What Should Be Monitored? 143(3)
  Quality Assurance 146(2)
  Total Quality Management 148(3)
  Chapter Recap 151(4)
Chapter 6 Single System Research Designs 155(38)
  What Are Single System Research Designs? 155(3)
  Selecting Outcome Measures 158(2)
  Assessing Measures Over Time 160(1)
  Notation and General Principles 161(2)
  Needs Assessments 163(2)
  Formative Evaluations 165(2)
  Quality Assurance Studies 167(2)
  Summative Evaluation Designs 169(4)
  Summative Experimental Designs 173(7)
  External Validity 180(1)
  Inferential Statistics 181(2)
  How to Prepare Graphs 183(2)
  Ethics of Single System Research Designs 185(1)
  How to Critically Review a SSRD 186(1)
  Chapter Recap 187(6)
Chapter 7 Client Satisfaction 193(18)
  The Importance of Monitoring Consumer Satisfaction 193(2)
  The Arguments for Client Satisfaction Studies 195(1)
  The Problem with Client Satisfaction Studies 196(3)
  A Sampling of Recent Client Satisfaction Studies 199(1)
  How Do Consistently High Ratings Get Explained? 200(2)
  Recommendations for Client Satisfaction Studies 202(9)
Chapter 8 Sampling 211(16)
  What Are Nonprobability Sampling Designs? 212(4)
  What Is Probability (Scientific) Sampling? 216(3)
  Considerations in Selecting a Sample 219(2)
  How Big Should the Sample Be? 221(3)
  Chapter Recap 224(3)
Chapter 9 Group Research Designs 227(56)
  What Are Group Research Designs? 227(1)
  Starting an Outcome Evaluation 228(3)
  Outcome Evaluation Designs 231(3)
  General Principles of Group Research Designs 234(1)
  Pre-Experimental Research Designs 235(11)
  Quasi-Experimental Research Designs 246(6)
  Some Threats to Internal Validity 252(5)
  Protection Against Alternative Explanations 257(1)
  Experimental Designs 258(8)
  Efficacy and Effectiveness Studies 266(2)
  What About Negative Outcome Studies? 268(3)
  A Note About the Term Experiment 271(1)
  Some Newer Developments in Experimental Program Evaluation 272(2)
  Chapter Recap 274(9)
Chapter 10 Cost-Effectiveness and Cost Analysis 283(18)
  Why Consider Cost as an Evaluative Criterion? 283(3)
  Example of a Cost-Effectiveness Evaluation 286(1)
  How Do I Begin a Cost-Effectiveness Study? 287(3)
  Whose Point of View? 290(2)
  Cost-Benefit Analysis 292(4)
  Chapter Recap 296(5)
Chapter 11 Measurement Tools and Strategies 301(30)
  Why Is Good Measurement so Important to Evaluation? 301(2)
  What Should We Measure? 303(3)
  Reliability 306(1)
  What Do I Need to Know About Reliability? 307(5)
  What Do I Need to Know About Validity? 312(3)
  How Do I Find an Instrument for My Evaluation Project? 315(2)
  How Does One Construct a "Good" Evaluation Instrument? What Do I Need to Know About Questionnaire Design? 317(3)
  What Are Some of the Common Errors Made in Developing Questionnaires? 320(7)
  What Do I Need to Know About Levels of Measurement in Designing Instruments? 327(4)
Chapter 12 Selecting the Best Evaluation Measure for Your Project 331(20)
  Checklist for Selecting Evaluation Instruments 332(2)
  Clinical Anxiety Scale 334(2)
  Evaluation Self-Efficacy Scale 336(2)
  The Community Attitudes Toward Sex Offenders (CATSO) Scale 338(2)
  Intimate Violence Responsibility Scale (IVRS) 340(3)
  The School Support Scale 343(3)
  Cultural Sensitivity 346(5)
Chapter 13 Pragmatic Issues 351(28)
  What Is Treatment Fidelity and Why Is It so Important? 351(5)
  Fidelity Nightmares 356(3)
  Program Drift and the Transfer of Programs 359(1)
  How Might Political Issues Affect a Program Evaluation? 360(2)
  The "Threat" of Evaluation 362(1)
  Guidelines for Evaluation in Politically Charged Arenas 363(4)
  Internal and External Evaluators 367(3)
  Culturally Sensitive Evaluation Practice 370(4)
  Final Thoughts 374(5)
Chapter 14 Writing Evaluation Proposals, Reports, and Journal Articles 379(24)
  Components of the Evaluation Proposal and Report 380(11)
  Considerations in Planning and Writing Evaluation Reports and Manuscripts 391(1)
  Common Mistakes Made by Students in Writing Evaluation Reports 392(3)
  Checklist for Writing and Assessing Evaluation Reports 395(2)
  The Utilization of Evaluation Reports 397(1)
  Writing for Professional Publication 398(5)
Index 403
David Royse is a professor in the College of Social Work at the University of Kentucky. He earned his Ph.D. from Ohio State University in 1980. He has written several books, including two for Cengage Learning: RESEARCH METHODS IN SOCIAL WORK, Sixth Edition, and PROGRAM EVALUATION: AN INTRODUCTION TO AN EVIDENCE-BASED APPROACH, Sixth Edition.

Bruce A. Thyer, Ph.D., is Dean and Professor, School of Social Work, Florida State University. He earned his Ph.D. in Social Work and Psychology from the University of Michigan in 1982. Recipient of a National Institute of Mental Health Faculty Scholar Award, Dr. Thyer has published extensively on social work practice, research, and evaluation.

Deborah K. Padgett, Ph.D., MPH, is a professor at the New York University School of Social Work. She received her doctorate in urban anthropology in 1979 and completed post-doctoral programs in mental health services research at Columbia University School of Public Health (1985-86) and Duke University Department of Psychiatry (1994-95). Dr. Padgett's research interests include breast cancer screening and follow-up care for medically underserved women and qualitative studies of the 'process' of care for psychiatrically disabled homeless adults.