E-book: Program Evaluation and Performance Measurement: An Introduction to Practice

3.78/5 (33 ratings by Goodreads)
  • Format: PDF+DRM
  • Publication date: 16-Oct-2018
  • Publisher: SAGE Publications Inc
  • Language: eng
  • ISBN-13: 9781506337050
  • Price: 96,36 €*
  • * This is the final price, i.e., no additional discounts are applied
  • This e-book is intended for personal use only. E-books cannot be returned, and money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software to unlock and read it. To read this e-book you will need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Program Evaluation and Performance Measurement offers a conceptual and practical introduction to program evaluation and performance measurement for public and non-profit organizations. James C. McDavid, Irene Huse, and Laura R. L. Hawthorn treat each topic in detail, making the book a useful guide for practitioners who are constructing and implementing performance measurement systems, as well as for students. Woven through the chapters is the performance management cycle in organizations: strategic planning and resource allocation; program and policy design; implementation and management; and the assessment and reporting of results.

The Third Edition has been revised to highlight and integrate the current economic, political, and socio-demographic context within which evaluators are expected to work, and includes new exemplars, such as the evaluation of police body-worn cameras.

Reviews

"Finally, a text that successfully brings together quantitative and qualitative methods for program evaluation." -- Kerry Freedman

"This book provides a good balance between the topics of measurement and program evaluation, coupled with ample real-world examples of application. It breaks down the materials and it is easy to follow, which provided my students with main points related to measurement and program evaluation. In addition, this book emphasizes both quantitative and qualitative evaluation methods. The discussion questions and cases are good and can be useful in class and for homework assignments." -- Mariya Yukhymenko

Preface xv
Acknowledgments xvii
About the Authors 1(1)
Chapter 1 Key Concepts and Issues in Program Evaluation and Performance Management
2(48)
Introduction
3(7)
Integrating Program Evaluation and Performance Measurement
4(1)
Connecting Evaluation to the Performance Management System
5(3)
The Performance Management Cycle
8(2)
Policies and Programs
10(2)
Key Concepts in Program Evaluation
12(5)
Causality in Program Evaluations
12(2)
Formative and Summative Evaluations
14(1)
Ex Ante and Ex Post Evaluations
15(1)
The Importance of Professional Judgment in Evaluations
16(1)
Example: Evaluating a Police Body-Worn Camera Program in Rialto, California
17(5)
The Context: Growing Concerns With Police Use of Force and Community Relationship
17(1)
Implementing and Evaluating the Effects of Body-Worn Cameras in the Rialto Police Department
18(2)
Program Success Versus Understanding the Cause-and-Effect Linkages: The Challenge of Unpacking the Body-Worn Police Cameras "Black Box"
20(1)
Connecting Body-Worn Camera Evaluations to This Book
21(1)
Ten Key Evaluation Questions
22(6)
The Steps in Conducting a Program Evaluation
28(15)
General Steps in Conducting a Program Evaluation
28(2)
Assessing the Feasibility of the Evaluation
30(7)
Doing the Evaluation
37(4)
Making Changes Based on the Evaluation
41(2)
Summary
43(1)
Discussion Questions
44(1)
References
45(5)
Chapter 2 Understanding and Applying Program Logic Models
50(47)
Introduction
51(3)
Logic Models and the Open Systems Approach
52(2)
A Basic Logic Modeling Approach
54(6)
An Example of the Most Basic Type of Logic Model
58(2)
Working With Uncertainty
60(4)
Problems as Simple, Complicated, and Complex
60(1)
Interventions as Simple, Complicated, or Complex
61(1)
The Practical Challenges of Using Complexity Theory in Program Evaluations
62(2)
Program Objectives and Program Alignment With Government Goals
64(4)
Specifying Program Objectives
64(2)
Alignment of Program Objectives With Government and Organizational Goals
66(2)
Program Theories and Program Logics
68(7)
Systematic Reviews
69(1)
Contextual Factors
70(1)
Realist Evaluation
71(3)
Putting Program Theory Into Perspective: Theory-Driven Evaluations and Evaluation Practice
74(1)
Logic Models That Categorize and Specify Intended Causal Linkages
75(4)
Constructing A Logic Model For Program Evaluations
79(2)
Logic Models For Performance Measurement
81(3)
Strengths and Limitations of Logic Models
84(2)
Logic Models in a Turbulent World
85(1)
Summary
86(1)
Discussion Questions
87(1)
Appendices
88(6)
Appendix A Applying What You Have Learned: Development of a Logic Model for a Meals on Wheels Program
88(1)
Translating a Written Description of a Meals on Wheels Program Into a Program Logic Model
88(1)
Appendix B A Complex Logic Model Describing Primary Health Care in Canada
88(4)
Appendix C Logic Model for the Canadian Evaluation Society Credentialed Evaluator Program
92(2)
References
94(3)
Chapter 3 Research Designs For Program Evaluations
97(64)
Introduction
98(1)
Our Stance
98(6)
What Is Research Design?
104(6)
The Origins of Experimental Design
105(5)
Why Pay Attention to Experimental Designs?
110(2)
Using Experimental Designs to Evaluate Programs
112(6)
The Perry Preschool Study
112(3)
Limitations of the Perry Preschool Study
115(1)
The Perry Preschool Study in Perspective
116(2)
Defining and Working With the Four Basic Kinds of Threats to Validity
118(13)
Statistical Conclusions Validity
118(1)
Internal Validity
118(4)
Police Body-Worn Cameras: Randomized Controlled Trials and Quasi-Experiments
122(2)
Construct Validity
124(1)
The 'Measurement Validity' Component of Construct Validity
125(1)
Other Construct Validity Problems
126(3)
External Validity
129(2)
Quasi-experimental Designs: Navigating Threats to Internal Validity
131(9)
The York Neighborhood Watch Program: An Example of an Interrupted Time Series Research Design Where the Program Starts, Stops, and Then Starts Again
136(1)
Findings and Conclusions From the Neighborhood Watch Evaluation
137(3)
Non-Experimental Designs
140(1)
Testing the Causal Linkages in Program Logic Models
141(4)
Research Designs and Performance Measurement
145(2)
Summary
147(1)
Discussion Questions
148(2)
Appendices
150(7)
Appendix 3A Basic Statistical Tools for Program Evaluation
150(2)
Appendix 3B Empirical Causal Model for the Perry Preschool Study
152(1)
Appendix 3C Estimating the Incremental Impact of a Policy Change: Implementing and Evaluating an Admission Fee Policy in the Royal British Columbia Museum
153(4)
References
157(4)
Chapter 4 Measurement for Program Evaluation and Performance Monitoring
161(44)
Introduction
162(2)
Introducing Reliability and Validity of Measures
164(11)
Understanding the Reliability of Measures
167(2)
Understanding Measurement Validity
169(1)
Types of Measurement Validity
170(1)
Ways to Assess Measurement Validity
171(1)
Validity Types That Relate a Single Measure to a Corresponding Construct
172(1)
Validity Types That Relate Multiple Measures to One Construct
172(1)
Validity Types That Relate Multiple Measures to Multiple Constructs
173(2)
Units of Analysis and Levels of Measurement
175(4)
Nominal Level of Measurement
176(1)
Ordinal Level of Measurement
177(1)
Interval and Ratio Levels of Measurement
177(2)
Sources of Data in Program Evaluations and Performance Measurement Systems
179(13)
Existing Sources of Data
179(3)
Sources of Data Collected by the Program Evaluator
182(1)
Surveys as an Evaluator-Initiated Data Source in Evaluations
182(3)
Working With Likert Statements in Surveys
185(2)
Designing and Conducting Surveys
187(2)
Structuring Survey Instruments: Design Considerations
189(3)
Using Surveys to Estimate the Incremental Effects of Programs
192(4)
Addressing Challenges of Personal Recall
192(2)
Retrospective Pre-tests: Where Measurement Intersects With Research Design
194(2)
Survey Designs Are Not Research Designs
196(1)
Validity of Measures and the Validity of Causes and Effects
197(2)
Summary
199(2)
Discussion Questions
201(1)
References
202(3)
Chapter 5 Applying Qualitative Evaluation Methods
205(43)
Introduction
206(1)
Comparing and Contrasting Different Approaches To Qualitative Evaluation
207(9)
Understanding Paradigms and Their Relevance to Evaluation
208(5)
Pragmatism as a Response to the Philosophical Divisions Among Evaluators
213(1)
Alternative Criteria for Assessing Qualitative Research and Evaluations
214(2)
Qualitative Evaluation Designs: Some Basics
216(5)
Appropriate Applications for Qualitative Evaluation Approaches
216(2)
Comparing and Contrasting Qualitative and Quantitative Evaluation Approaches
218(3)
Designing and Conducting Qualitative Program Evaluations
221(16)
1 Clarifying the Evaluation Purpose and Questions
222(1)
2 Identifying Research Designs and Appropriate Comparisons
222(1)
Within-Case Analysis
222(1)
Between-Case Analysis
223(1)
3 Mixed-Methods Evaluation Designs
224(4)
4 Identifying Appropriate Sampling Strategies in Qualitative Evaluations
228(2)
5 Collecting and Coding Qualitative Data
230(1)
Structuring Data Collection Instruments
230(1)
Conducting Qualitative Interviews
231(2)
6 Analyzing Qualitative Data
233(4)
7 Reporting Qualitative Results
237(1)
Assessing the Credibility and Generalizability of Qualitative Findings
237(2)
Connecting Qualitative Evaluation Methods to Performance Measurement
239(2)
The Power of Case Studies
241(2)
Summary
243(1)
Discussion Questions
244(1)
References
245(3)
Chapter 6 Needs Assessments for Program Development and Adjustment
248(50)
Introduction
249(8)
General Considerations Regarding Needs Assessments
250(1)
What Are Needs and Why Do We Conduct Needs Assessments?
250(2)
Group-Level Focus for Needs Assessments
252(1)
How Needs Assessments Fit Into the Performance Management Cycle
252(2)
Recent Trends and Developments in Needs Assessments
254(1)
Perspectives on Needs
255(1)
A Note on the Politics of Needs Assessment
256(1)
Steps in Conducting Needs Assessments
257(28)
Phase I Pre-Assessment
259(1)
1 Focusing the Needs Assessment
260(6)
2 Forming the Needs Assessment Committee (NAC)
266(1)
3 Learning as Much as We Can About Preliminary "What Should Be" and "What Is" Conditions From Available Sources
267(1)
4 Moving to Phase II and/or III or Stopping
268(1)
Phase II The Needs Assessment
268(1)
5 Conducting a Full Assessment About "What Should Be" and "What Is"
268(1)
6 Needs Assessment Methods Where More Knowledge Is Needed: Identifying the Discrepancies
269(9)
7 Prioritizing the Needs to Be Addressed
278(2)
8 Causal Analysis of Needs
280(1)
9 Identification of Solutions: Preparing a Document That Integrates Evidence and Recommendations
280(2)
10 Moving to Phase III or Stopping
282(1)
Phase III Post-Assessment: Implementing a Needs Assessment
283(1)
11 Making Decisions to Resolve Needs and Select Solutions
283(1)
12 Developing Action Plans
284(1)
13 Implementing, Monitoring and Evaluating
284(1)
Needs Assessment Example: Community Health Needs Assessment in New Brunswick
285(6)
The Needs Assessment Process
286(1)
Focusing the Needs Assessment
286(1)
Forming the Needs Assessment Committee
286(1)
Learning About the Community Through a Quantitative Data Review
287(1)
Learning About Key Issues in the Community Through Qualitative Interviews and Focus Groups
288(1)
Triangulating the Qualitative and Quantitative Lines of Evidence
288(1)
Prioritizing Primary Health-Related Issues in the Community
288(3)
Summary
291(1)
Discussion Questions
292(1)
Appendices
293(2)
Appendix A Case Study: Designing a Needs Assessment for a Small Nonprofit Organization
293(1)
The Program
293(1)
Your Role
294(1)
Your Task
294(1)
References
295(3)
Chapter 7 Concepts and Issues in Economic Evaluation
298(42)
Introduction
299(7)
Why an Evaluator Needs to Know About Economic Evaluation
300(2)
Connecting Economic Evaluation With Program Evaluation: Program Complexity and Outcome Attribution
302(1)
Program Complexity and Determining Cost-Effectiveness of Program Success
302(1)
The Attribution Issue
303(1)
Three Types of Economic Evaluation
304(1)
The Choice of Economic Evaluation Method
304(2)
Economic Evaluation in the Performance Management Cycle
306(1)
Historical Developments in Economic Evaluation
307(1)
Cost-Benefit Analysis
308(12)
Standing
309(3)
Valuing Nonmarket Impacts
312(1)
Revealed and Stated Preferences Methods for Valuing Nonmarket Impacts
312(1)
Steps for Economic Evaluations
313(1)
1 Specify the Set of Alternatives
314(1)
2 Decide Whose Benefits and Costs Count (Standing)
314(1)
3 Categorize and Catalog the Costs and Benefits
314(1)
4 Predict Costs and Benefits Quantitatively Over the Life of the Project
315(1)
5 Monetize (Attach Dollar Values to) All Costs and Benefits
315(1)
6 Select a Discount Rate for Costs and Benefits Occurring in the Future
316(1)
7 Compare Costs With Outcomes, or Compute the Net Present Value of Each Alternative
317(1)
8 Perform Sensitivity and Distributional Analysis
318(1)
9 Make a Recommendation
319(1)
Cost-Effectiveness Analysis
320(1)
Cost-Utility Analysis
321(1)
Cost-Benefit Analysis Example: The High/Scope Perry Preschool Program
322(6)
1 Specify the Set of Alternatives
324(1)
2 Decide Whose Benefits and Costs Count (Standing)
324(1)
3 Categorize and Catalog Costs and Benefits
324(1)
4 Predict Costs and Benefits Quantitatively Over the Life of the Project
325(1)
5 Monetize (Attach Dollar Values to) All Costs and Benefits
325(1)
6 Select a Discount Rate for Costs and Benefits Occurring in the Future
326(1)
7 Compute the Net Present Value of the Program
327(1)
8 Perform Sensitivity and Distributional Analysis
327(1)
9 Make a Recommendation
327(1)
Strengths and Limitations of Economic Evaluation
328(5)
Strengths of Economic Evaluation
328(1)
Limitations of Economic Evaluation
329(4)
Summary
333(1)
Discussion Questions
334(2)
References
336(4)
Chapter 8 Performance Measurement as an Approach to Evaluation
340(31)
Introduction
341(1)
The Current Imperative To Measure Performance
342(1)
Performance Measurement For Accountability and Performance Improvement
343(1)
Growth and Evolution of Performance Measurement
344(6)
Performance Measurement Beginnings in Local Government
344(1)
Federal Performance Budgeting Reform
345(1)
The Emergence of New Public Management
346(3)
Steering, Control, and Performance Improvement
349(1)
Metaphors That Support and Sustain Performance Measurement
350(3)
Organizations as Machines
351(1)
Government as a Business
351(1)
Organizations as Open Systems
352(1)
Comparing Program Evaluation and Performance Measurement Systems
353(11)
Summary
364(1)
Discussion Questions
365(1)
References
366(5)
Chapter 9 Design and Implementation of Performance Measurement Systems
371(38)
Introduction
372(1)
The Technical/Rational View and the Political/Cultural View
372(2)
Key Steps in Designing and Implementing a Performance Measurement System
374(26)
1 Leadership: Identify the Organizational Champions of This Change
375(2)
2 Understand What Performance Measurement Systems Can and Cannot Do
377(2)
3 Communication: Establish Multi-Channel Ways of Communicating That Facilitate Top-Down, Bottom-Up, and Horizontal Sharing of Information, Problem Identification, and Problem Solving
379(1)
4 Clarify the Expectations for the Intended Uses of the Performance Information That is Created
380(3)
5 Identify the Resources and Plan for the Design, Implementation, and Maintenance of the Performance Measurement System
383(1)
6 Take the Time to Understand the Organizational History Around Similar Initiatives
384(1)
7 Develop Logic Models for the Programs for Which Performance Measures Are Being Designed and Identify the Key Constructs to Be Measured
385(2)
8 Identify Constructs Beyond Those in Single Programs: Consider Programs Within Their Place in the Organizational Structure
387(3)
9 Involve Prospective Users in Development of Logic Models and Constructs in the Proposed Performance Measurement System
390(1)
10 Translate the Constructs Into Observable Performance Measures that Compose the Performance Measurement System
391(4)
11 Highlight the Comparisons That Can Be Part of the Performance Measurement System
395(3)
12 Reporting and Making Changes to the Performance Measurement System
398(2)
Performance Measurement for Public Accountability
400(2)
Summary
402(1)
Discussion Questions
403(1)
Appendix A Organizational Logic Models
404(1)
References
405(4)
Chapter 10 Using Performance Measurement for Accountability and Performance Improvement
409(36)
Introduction
410(1)
Using Performance Measures
411(18)
Performance Measurement in a High-Stakes Environment: The British Experience
412(3)
Assessing the "Naming and Shaming" Approach to Performance Management in Britain
415(3)
A Case Study of Gaming: Distorting the Output of a Coal Mine
418(1)
Performance Measurement in a Medium-Stakes Environment: Legislator Expected Versus Actual Uses of Performance Reports in British Columbia, Canada
419(5)
The Role of Incentives and Organizational Politics in Performance Measurement Systems With a Public Reporting Emphasis
424(1)
Performance Measurement in a Low-Stakes Environment: Joining Internal and External Uses of Performance Information in Lethbridge, Alberta
425(4)
Rebalancing Accountability-Focused Performance Measurement Systems to Increase Performance Improvement Uses
429(8)
Making Changes to a Performance Measurement System
432(2)
Does Performance Measurement Give Managers the "Freedom to Manage?"
434(1)
Decentralized Performance Measurement: The Case of a Finnish Local Government
435(2)
When Performance Measurement Systems De-Emphasize Outputs and Outcomes: Performance Management Under Conditions of Chronic Fiscal Restraint
437(2)
Summary
439(1)
Discussion Questions
440(1)
References
441(4)
Chapter 11 Program Evaluation and Program Management
445(32)
Introduction
446(1)
Internal Evaluation: Views From the Field
447(9)
Intended Evaluation Purposes and Managerial Involvement
450(1)
When the Evaluations Are for Formative Purposes
450(2)
When the Evaluations Are for Summative Purposes
452(1)
Optimizing Internal Evaluation: Leadership and Independence
453(1)
Who Leads the Internal Evaluation?
454(1)
"Independence" for Evaluators
455(1)
Building an Evaluative Culture in Organizations: An Expanded Role for Evaluators
456(4)
Creating Ongoing Streams of Evaluative Knowledge
457(1)
Critical Challenges to Building and Sustaining an Evaluative Culture
458(2)
Building an Evaluative/Learning Culture in a Finnish Local Government: Joining Performance Measurement and Performance Management
460(1)
Striving for Objectivity in Program Evaluations
460(7)
Can Program Evaluators Claim Objectivity?
462(1)
Objectivity and Replicability
463(3)
Implications for Evaluation Practice: A Police Body-Worn Cameras Example
466(1)
Criteria for High-Quality Evaluations
467(3)
Summary
470(1)
Discussion Questions
471(1)
References
472(5)
Chapter 12 The Nature and Practice of Professional Judgment in Evaluation
477(40)
Introduction
478(1)
The Nature of the Evaluation Enterprise
478(4)
Our Stance
479(1)
Reconciling the Diversity in Evaluation Theory With Evaluation Practice
480(1)
Working in the Swamp: The Real World of Evaluation Practice
481(1)
Ethical Foundations of Evaluation Practice
482(4)
Power Relationships and Ethical Practice
485(1)
Ethical Guidelines for Evaluation Practice
486(4)
Evaluation Association-Based Ethical Guidelines
486(4)
Understanding Professional Judgment
490(9)
What Is Good Evaluation Theory and Practice?
490(2)
Tacit Knowledge
492(1)
Balancing Theoretical and Practical Knowledge in Professional Practice
492(1)
Aspects of Professional Judgment
493(2)
The Professional Judgment Process: A Model
495(2)
The Decision Environment
497(1)
Values, Beliefs, and Expectations
497(1)
Cultural Competence in Evaluation Practice
498(1)
Improving Professional Judgment in Evaluation
499(7)
Mindfulness and Reflective Practice
499(2)
Professional Judgment and Evaluation Competencies
501(3)
Education and Training-Related Activities
504(1)
Teamwork and Improving Professional Judgment
505(1)
The Prospects for an Evaluation Profession
506(3)
Summary
509(1)
Discussion Questions
510(1)
Appendix
511(2)
Appendix A Fiona's Choice: An Ethical Dilemma for a Program Evaluator
511(1)
Your Task
512(1)
References
513(4)
Glossary 517(13)
Index 530
James C. McDavid (PhD, Indiana, 1975) is a professor of Public Administration at the University of Victoria in British Columbia, Canada. He is a specialist in program evaluation, performance measurement, and organizational performance management. He has conducted extensive research and evaluations focusing on federal, state, provincial, and local governments in the United States and Canada. His published research has appeared in the American Journal of Evaluation, the Canadian Journal of Program Evaluation and New Directions for Evaluation. He is currently a member of the editorial board of the Canadian Journal of Program Evaluation and New Directions for Evaluation.

In 1993, Dr. McDavid won the prestigious University of Victoria Alumni Association Teaching Award. In 1996, he won the J. E. Hodgetts Award for the best English-language article published in Canadian Public Administration. From 1990 to 1996, he was Dean of the Faculty of Human and Social Development at the University of Victoria. In 2004, he was named a Distinguished University Professor at the University of Victoria and was also Acting Director of the School of Public Administration during that year. He teaches online courses in the School of Public Administration Graduate Certificate and Diploma in Evaluation Program.

Irene Huse holds a Master of Public Administration from the University of Victoria and is a PhD student in the School of Public Administration at the University of Victoria. She has worked as an evaluator and a researcher in universities and the private sector. Her research has appeared in the American Journal of Evaluation, the Canadian Journal of Program Evaluation and New Directions for Evaluation.

Laura R. L. Hawthorn holds a Master of Arts degree in Canadian history from Queen's University in Ontario, Canada, and a Master of Public Administration degree from the University of Victoria. After completing her MPA, she worked for several years as a manager in the British Columbia public service and non-profit sector before leaving to raise a family. She currently lives in Vancouver, runs a small non-profit in the education field, and is mom to her two boys.