Preface |
|
xv |
Acknowledgments |
|
xvii |
About the Authors |
|
1 | (1) |
|
Chapter 1 Key Concepts and Issues in Program Evaluation and Performance Management |
|
|
2 | (48) |
|
|
3 | (7) |
|
Integrating Program Evaluation and Performance Measurement |
|
|
4 | (1) |
|
Connecting Evaluation to the Performance Management System |
|
|
5 | (3) |
|
The Performance Management Cycle |
|
|
8 | (2) |
|
|
10 | (2) |
|
Key Concepts in Program Evaluation |
|
|
12 | (5) |
|
Causality in Program Evaluations |
|
|
12 | (2) |
|
Formative and Summative Evaluations |
|
|
14 | (1) |
|
Ex Ante and Ex Post Evaluations |
|
|
15 | (1) |
|
The Importance of Professional Judgment in Evaluations |
|
|
16 | (1) |
|
Example: Evaluating a Police Body-Worn Camera Program in Rialto, California |
|
|
17 | (5) |
|
The Context: Growing Concerns With Police Use of Force and Community Relations
|
|
17 | (1) |
|
Implementing and Evaluating the Effects of Body-Worn Cameras in the Rialto Police Department |
|
|
18 | (2) |
|
Program Success Versus Understanding the Cause-and-Effect Linkages: The Challenge of Unpacking the Police Body-Worn Camera "Black Box"
|
|
20 | (1) |
|
Connecting Body-Worn Camera Evaluations to This Book |
|
|
21 | (1) |
|
Ten Key Evaluation Questions |
|
|
22 | (6) |
|
The Steps in Conducting a Program Evaluation |
|
|
28 | (15) |
|
General Steps in Conducting a Program Evaluation |
|
|
28 | (2) |
|
Assessing the Feasibility of the Evaluation |
|
|
30 | (7) |
|
|
37 | (4) |
|
Making Changes Based on the Evaluation |
|
|
41 | (2) |
|
|
43 | (1) |
|
|
44 | (1) |
|
|
45 | (5) |
|
Chapter 2 Understanding and Applying Program Logic Models |
|
|
50 | (47) |
|
|
51 | (3) |
|
Logic Models and the Open Systems Approach |
|
|
52 | (2) |
|
A Basic Logic Modeling Approach |
|
|
54 | (6) |
|
An Example of the Most Basic Type of Logic Model |
|
|
58 | (2) |
|
|
60 | (4) |
|
Problems as Simple, Complicated, and Complex |
|
|
60 | (1) |
|
Interventions as Simple, Complicated, or Complex |
|
|
61 | (1) |
|
The Practical Challenges of Using Complexity Theory in Program Evaluations |
|
|
62 | (2) |
|
Program Objectives and Program Alignment With Government Goals |
|
|
64 | (4) |
|
Specifying Program Objectives |
|
|
64 | (2) |
|
Alignment of Program Objectives With Government and Organizational Goals |
|
|
66 | (2) |
|
Program Theories and Program Logics |
|
|
68 | (7) |
|
|
69 | (1) |
|
|
70 | (1) |
|
|
71 | (3) |
|
Putting Program Theory Into Perspective: Theory-Driven Evaluations and Evaluation Practice |
|
|
74 | (1) |
|
Logic Models That Categorize and Specify Intended Causal Linkages |
|
|
75 | (4) |
|
Constructing a Logic Model for Program Evaluations
|
|
79 | (2) |
|
Logic Models for Performance Measurement
|
|
81 | (3) |
|
Strengths and Limitations of Logic Models |
|
|
84 | (2) |
|
Logic Models in a Turbulent World |
|
|
85 | (1) |
|
|
86 | (1) |
|
|
87 | (1) |
|
|
88 | (6) |
|
Appendix A Applying What You Have Learned: Development of a Logic Model for a Meals on Wheels Program |
|
|
88 | (1) |
|
Translating a Written Description of a Meals on Wheels Program Into a Program Logic Model |
|
|
88 | (1) |
|
Appendix B A Complex Logic Model Describing Primary Health Care in Canada |
|
|
88 | (4) |
|
Appendix C Logic Model for the Canadian Evaluation Society Credentialed Evaluator Program |
|
|
92 | (2) |
|
|
94 | (3) |
|
Chapter 3 Research Designs for Program Evaluations
|
|
97 | (64) |
|
|
98 | (1) |
|
|
98 | (6) |
|
|
104 | (6) |
|
The Origins of Experimental Design |
|
|
105 | (5) |
|
Why Pay Attention to Experimental Designs? |
|
|
110 | (2) |
|
Using Experimental Designs to Evaluate Programs |
|
|
112 | (6) |
|
The Perry Preschool Study |
|
|
112 | (3) |
|
Limitations of the Perry Preschool Study |
|
|
115 | (1) |
|
The Perry Preschool Study in Perspective |
|
|
116 | (2) |
|
Defining and Working With the Four Basic Kinds of Threats to Validity |
|
|
118 | (13) |
|
Statistical Conclusion Validity
|
|
118 | (1) |
|
|
118 | (4) |
|
Police Body-Worn Cameras: Randomized Controlled Trials and Quasi-Experiments |
|
|
122 | (2) |
|
|
124 | (1) |
|
The "Measurement Validity" Component of Construct Validity
|
|
125 | (1) |
|
Other Construct Validity Problems |
|
|
126 | (3) |
|
|
129 | (2) |
|
Quasi-Experimental Designs: Navigating Threats to Internal Validity
|
|
131 | (9) |
|
The York Neighborhood Watch Program: An Example of an Interrupted Time Series Research Design Where the Program Starts, Stops, and Then Starts Again |
|
|
136 | (1) |
|
Findings and Conclusions From the Neighborhood Watch Evaluation |
|
|
137 | (3) |
|
|
140 | (1) |
|
Testing the Causal Linkages in Program Logic Models |
|
|
141 | (4) |
|
Research Designs and Performance Measurement |
|
|
145 | (2) |
|
|
147 | (1) |
|
|
148 | (2) |
|
|
150 | (7) |
|
Appendix 3A Basic Statistical Tools for Program Evaluation |
|
|
150 | (2) |
|
Appendix 3B Empirical Causal Model for the Perry Preschool Study |
|
|
152 | (1) |
|
Appendix 3C Estimating the Incremental Impact of a Policy Change: Implementing and Evaluating an Admission Fee Policy in the Royal British Columbia Museum
|
|
153 | (4) |
|
|
157 | (4) |
|
Chapter 4 Measurement for Program Evaluation and Performance Monitoring |
|
|
161 | (44) |
|
|
162 | (2) |
|
Introducing Reliability and Validity of Measures |
|
|
164 | (11) |
|
Understanding the Reliability of Measures |
|
|
167 | (2) |
|
Understanding Measurement Validity |
|
|
169 | (1) |
|
Types of Measurement Validity |
|
|
170 | (1) |
|
Ways to Assess Measurement Validity |
|
|
171 | (1) |
|
Validity Types That Relate a Single Measure to a Corresponding Construct |
|
|
172 | (1) |
|
Validity Types That Relate Multiple Measures to One Construct |
|
|
172 | (1) |
|
Validity Types That Relate Multiple Measures to Multiple Constructs |
|
|
173 | (2) |
|
Units of Analysis and Levels of Measurement |
|
|
175 | (4) |
|
Nominal Level of Measurement |
|
|
176 | (1) |
|
Ordinal Level of Measurement |
|
|
177 | (1) |
|
Interval and Ratio Levels of Measurement |
|
|
177 | (2) |
|
Sources of Data in Program Evaluations and Performance Measurement Systems |
|
|
179 | (13) |
|
|
179 | (3) |
|
Sources of Data Collected by the Program Evaluator |
|
|
182 | (1) |
|
Surveys as an Evaluator-Initiated Data Source in Evaluations |
|
|
182 | (3) |
|
Working With Likert Statements in Surveys |
|
|
185 | (2) |
|
Designing and Conducting Surveys |
|
|
187 | (2) |
|
Structuring Survey Instruments: Design Considerations |
|
|
189 | (3) |
|
Using Surveys to Estimate the Incremental Effects of Programs |
|
|
192 | (4) |
|
Addressing Challenges of Personal Recall |
|
|
192 | (2) |
|
Retrospective Pre-tests: Where Measurement Intersects With Research Design |
|
|
194 | (2) |
|
Survey Designs Are Not Research Designs |
|
|
196 | (1) |
|
Validity of Measures and the Validity of Causes and Effects |
|
|
197 | (2) |
|
|
199 | (2) |
|
|
201 | (1) |
|
|
202 | (3) |
|
Chapter 5 Applying Qualitative Evaluation Methods |
|
|
205 | (43) |
|
|
206 | (1) |
|
Comparing and Contrasting Different Approaches to Qualitative Evaluation
|
|
207 | (9) |
|
Understanding Paradigms and Their Relevance to Evaluation |
|
|
208 | (5) |
|
Pragmatism as a Response to the Philosophical Divisions Among Evaluators |
|
|
213 | (1) |
|
Alternative Criteria for Assessing Qualitative Research and Evaluations |
|
|
214 | (2) |
|
Qualitative Evaluation Designs: Some Basics |
|
|
216 | (5) |
|
Appropriate Applications for Qualitative Evaluation Approaches |
|
|
216 | (2) |
|
Comparing and Contrasting Qualitative and Quantitative Evaluation Approaches |
|
|
218 | (3) |
|
Designing and Conducting Qualitative Program Evaluations |
|
|
221 | (16) |
|
1 Clarifying the Evaluation Purpose and Questions |
|
|
222 | (1) |
|
2 Identifying Research Designs and Appropriate Comparisons |
|
|
222 | (1) |
|
|
222 | (1) |
|
|
223 | (1) |
|
3 Mixed-Methods Evaluation Designs |
|
|
224 | (4) |
|
4 Identifying Appropriate Sampling Strategies in Qualitative Evaluations |
|
|
228 | (2) |
|
5 Collecting and Coding Qualitative Data |
|
|
230 | (1) |
|
Structuring Data Collection Instruments |
|
|
230 | (1) |
|
Conducting Qualitative Interviews |
|
|
231 | (2) |
|
6 Analyzing Qualitative Data |
|
|
233 | (4) |
|
7 Reporting Qualitative Results |
|
|
237 | (1) |
|
Assessing the Credibility and Generalizability of Qualitative Findings |
|
|
237 | (2) |
|
Connecting Qualitative Evaluation Methods to Performance Measurement |
|
|
239 | (2) |
|
The Power of Case Studies |
|
|
241 | (2) |
|
|
243 | (1) |
|
|
244 | (1) |
|
|
245 | (3) |
|
Chapter 6 Needs Assessments for Program Development and Adjustment |
|
|
248 | (50) |
|
|
249 | (8) |
|
General Considerations Regarding Needs Assessments |
|
|
250 | (1) |
|
What Are Needs and Why Do We Conduct Needs Assessments? |
|
|
250 | (2) |
|
Group-Level Focus for Needs Assessments |
|
|
252 | (1) |
|
How Needs Assessments Fit Into the Performance Management Cycle |
|
|
252 | (2) |
|
Recent Trends and Developments in Needs Assessments |
|
|
254 | (1) |
|
|
255 | (1) |
|
A Note on the Politics of Needs Assessment |
|
|
256 | (1) |
|
Steps in Conducting Needs Assessments |
|
|
257 | (28) |
|
|
259 | (1) |
|
1 Focusing the Needs Assessment
|
|
260 | (6) |
|
2 Forming the Needs Assessment Committee (NAC) |
|
|
266 | (1) |
|
3 Learning as Much as We Can About Preliminary "What Should Be" and "What Is" Conditions From Available Sources |
|
|
267 | (1) |
|
4 Moving to Phase II and/or III or Stopping |
|
|
268 | (1) |
|
Phase II The Needs Assessment |
|
|
268 | (1) |
|
5 Conducting a Full Assessment About "What Should Be" and "What Is" |
|
|
268 | (1) |
|
6 Needs Assessment Methods Where More Knowledge Is Needed: Identifying the Discrepancies |
|
|
269 | (9) |
|
7 Prioritizing the Needs to Be Addressed |
|
|
278 | (2) |
|
8 Causal Analysis of Needs |
|
|
280 | (1) |
|
9 Identification of Solutions: Preparing a Document That Integrates Evidence and Recommendations |
|
|
280 | (2) |
|
10 Moving to Phase III or Stopping |
|
|
282 | (1) |
|
Phase III Post-Assessment: Implementing a Needs Assessment |
|
|
283 | (1) |
|
11 Making Decisions to Resolve Needs and Select Solutions |
|
|
283 | (1) |
|
12 Developing Action Plans |
|
|
284 | (1) |
|
13 Implementing, Monitoring and Evaluating |
|
|
284 | (1) |
|
Needs Assessment Example: Community Health Needs Assessment in New Brunswick |
|
|
285 | (6) |
|
The Needs Assessment Process |
|
|
286 | (1) |
|
Focusing the Needs Assessment |
|
|
286 | (1) |
|
Forming the Needs Assessment Committee |
|
|
286 | (1) |
|
Learning About the Community Through a Quantitative Data Review |
|
|
287 | (1) |
|
Learning About Key Issues in the Community Through Qualitative Interviews and Focus Groups |
|
|
288 | (1) |
|
Triangulating the Qualitative and Quantitative Lines of Evidence |
|
|
288 | (1) |
|
Prioritizing Primary Health-Related Issues in the Community |
|
|
288 | (3) |
|
|
291 | (1) |
|
|
292 | (1) |
|
|
293 | (2) |
|
Appendix A Case Study: Designing a Needs Assessment for a Small Nonprofit Organization |
|
|
293 | (1) |
|
|
293 | (1) |
|
|
294 | (1) |
|
|
294 | (1) |
|
|
295 | (3) |
|
Chapter 7 Concepts and Issues in Economic Evaluation |
|
|
298 | (42) |
|
|
299 | (7) |
|
Why an Evaluator Needs to Know About Economic Evaluation |
|
|
300 | (2) |
|
Connecting Economic Evaluation With Program Evaluation: Program Complexity and Outcome Attribution |
|
|
302 | (1) |
|
Program Complexity and Determining Cost-Effectiveness of Program Success |
|
|
302 | (1) |
|
|
303 | (1) |
|
Three Types of Economic Evaluation |
|
|
304 | (1) |
|
The Choice of Economic Evaluation Method |
|
|
304 | (2) |
|
Economic Evaluation in the Performance Management Cycle |
|
|
306 | (1) |
|
Historical Developments in Economic Evaluation |
|
|
307 | (1) |
|
|
308 | (12) |
|
|
309 | (3) |
|
Valuing Nonmarket Impacts |
|
|
312 | (1) |
|
Revealed and Stated Preferences Methods for Valuing Nonmarket Impacts |
|
|
312 | (1) |
|
Steps for Economic Evaluations |
|
|
313 | (1) |
|
1 Specify the Set of Alternatives |
|
|
314 | (1) |
|
2 Decide Whose Benefits and Costs Count (Standing)
|
|
314 | (1) |
|
3 Categorize and Catalog the Costs and Benefits |
|
|
314 | (1) |
|
4 Predict Costs and Benefits Quantitatively Over the Life of the Project |
|
|
315 | (1) |
|
5 Monetize (Attach Dollar Values to) All Costs and Benefits
|
|
315 | (1) |
|
6 Select a Discount Rate for Costs and Benefits Occurring in the Future |
|
|
316 | (1) |
|
7 Compare Costs With Outcomes, or Compute the Net Present Value of Each Alternative |
|
|
317 | (1) |
|
8 Perform Sensitivity and Distributional Analysis |
|
|
318 | (1) |
|
|
319 | (1) |
|
Cost-Effectiveness Analysis |
|
|
320 | (1) |
|
|
321 | (1) |
|
Cost-Benefit Analysis Example: The High/Scope Perry Preschool Program |
|
|
322 | (6) |
|
1 Specify the Set of Alternatives |
|
|
324 | (1) |
|
2 Decide Whose Benefits and Costs Count (Standing) |
|
|
324 | (1) |
|
3 Categorize and Catalog Costs and Benefits |
|
|
324 | (1) |
|
4 Predict Costs and Benefits Quantitatively Over the Life of the Project |
|
|
325 | (1) |
|
5 Monetize (Attach Dollar Values to) All Costs and Benefits |
|
|
325 | (1) |
|
6 Select a Discount Rate for Costs and Benefits Occurring in the Future |
|
|
326 | (1) |
|
7 Compute the Net Present Value of the Program |
|
|
327 | (1) |
|
8 Perform Sensitivity and Distributional Analysis |
|
|
327 | (1) |
|
|
327 | (1) |
|
Strengths and Limitations of Economic Evaluation |
|
|
328 | (5) |
|
Strengths of Economic Evaluation |
|
|
328 | (1) |
|
Limitations of Economic Evaluation |
|
|
329 | (4) |
|
|
333 | (1) |
|
|
334 | (2) |
|
|
336 | (4) |
|
Chapter 8 Performance Measurement as an Approach to Evaluation |
|
|
340 | (31) |
|
|
341 | (1) |
|
The Current Imperative to Measure Performance
|
|
342 | (1) |
|
Performance Measurement for Accountability and Performance Improvement
|
|
343 | (1) |
|
Growth and Evolution of Performance Measurement |
|
|
344 | (6) |
|
Performance Measurement Beginnings in Local Government |
|
|
344 | (1) |
|
Federal Performance Budgeting Reform |
|
|
345 | (1) |
|
The Emergence of New Public Management |
|
|
346 | (3) |
|
Steering, Control, and Performance Improvement |
|
|
349 | (1) |
|
Metaphors That Support and Sustain Performance Measurement |
|
|
350 | (3) |
|
Organizations as Machines |
|
|
351 | (1) |
|
|
351 | (1) |
|
Organizations as Open Systems |
|
|
352 | (1) |
|
Comparing Program Evaluation and Performance Measurement Systems |
|
|
353 | (11) |
|
|
364 | (1) |
|
|
365 | (1) |
|
|
366 | (5) |
|
Chapter 9 Design and Implementation of Performance Measurement Systems |
|
|
371 | (38) |
|
|
372 | (1) |
|
The Technical/Rational View and the Political/Cultural View |
|
|
372 | (2) |
|
Key Steps in Designing and Implementing a Performance Measurement System |
|
|
374 | (26) |
|
1 Leadership: Identify the Organizational Champions of This Change |
|
|
375 | (2) |
|
2 Understand What Performance Measurement Systems Can and Cannot Do |
|
|
377 | (2) |
|
3 Communication: Establish Multi-Channel Ways of Communicating That Facilitate Top-Down, Bottom-Up, and Horizontal Sharing of Information, Problem Identification, and Problem Solving |
|
|
379 | (1) |
|
4 Clarify the Expectations for the Intended Uses of the Performance Information That Is Created
|
|
380 | (3) |
|
5 Identify the Resources and Plan for the Design, Implementation, and Maintenance of the Performance Measurement System |
|
|
383 | (1) |
|
6 Take the Time to Understand the Organizational History Around Similar Initiatives |
|
|
384 | (1) |
|
7 Develop Logic Models for the Programs for Which Performance Measures Are Being Designed and Identify the Key Constructs to Be Measured |
|
|
385 | (2) |
|
8 Identify Constructs Beyond Those in Single Programs: Consider Programs Within Their Place in the Organizational Structure |
|
|
387 | (3) |
|
9 Involve Prospective Users in Development of Logic Models and Constructs in the Proposed Performance Measurement System |
|
|
390 | (1) |
|
10 Translate the Constructs Into Observable Performance Measures That Compose the Performance Measurement System
|
|
391 | (4) |
|
11 Highlight the Comparisons That Can Be Part of the Performance Measurement System |
|
|
395 | (3) |
|
12 Reporting and Making Changes to the Performance Measurement System |
|
|
398 | (2) |
|
Performance Measurement for Public Accountability |
|
|
400 | (2) |
|
|
402 | (1) |
|
|
403 | (1) |
|
Appendix A Organizational Logic Models |
|
|
404 | (1) |
|
|
405 | (4) |
|
Chapter 10 Using Performance Measurement for Accountability and Performance Improvement |
|
|
409 | (36) |
|
|
410 | (1) |
|
Using Performance Measures |
|
|
411 | (18) |
|
Performance Measurement in a High-Stakes Environment: The British Experience |
|
|
412 | (3) |
|
Assessing the "Naming and Shaming" Approach to Performance Management in Britain |
|
|
415 | (3) |
|
A Case Study of Gaming: Distorting the Output of a Coal Mine |
|
|
418 | (1) |
|
Performance Measurement in a Medium-Stakes Environment: Legislator Expected Versus Actual Uses of Performance Reports in British Columbia, Canada |
|
|
419 | (5) |
|
The Role of Incentives and Organizational Politics in Performance Measurement Systems With a Public Reporting Emphasis |
|
|
424 | (1) |
|
Performance Measurement in a Low-Stakes Environment: Joining Internal and External Uses of Performance Information in Lethbridge, Alberta |
|
|
425 | (4) |
|
Rebalancing Accountability-Focused Performance Measurement Systems to Increase Performance Improvement Uses |
|
|
429 | (8) |
|
Making Changes to a Performance Measurement System |
|
|
432 | (2) |
|
Does Performance Measurement Give Managers the "Freedom to Manage"?
|
|
434 | (1) |
|
Decentralized Performance Measurement: The Case of a Finnish Local Government |
|
|
435 | (2) |
|
When Performance Measurement Systems De-Emphasize Outputs and Outcomes: Performance Management Under Conditions of Chronic Fiscal Restraint |
|
|
437 | (2) |
|
|
439 | (1) |
|
|
440 | (1) |
|
|
441 | (4) |
|
Chapter 11 Program Evaluation and Program Management |
|
|
445 | (32) |
|
|
446 | (1) |
|
Internal Evaluation: Views From the Field |
|
|
447 | (9) |
|
Intended Evaluation Purposes and Managerial Involvement |
|
|
450 | (1) |
|
When the Evaluations Are for Formative Purposes |
|
|
450 | (2) |
|
When the Evaluations Are for Summative Purposes |
|
|
452 | (1) |
|
Optimizing Internal Evaluation: Leadership and Independence |
|
|
453 | (1) |
|
Who Leads the Internal Evaluation? |
|
|
454 | (1) |
|
"Independence" for Evaluators |
|
|
455 | (1) |
|
Building an Evaluative Culture in Organizations: An Expanded Role for Evaluators |
|
|
456 | (4) |
|
Creating Ongoing Streams of Evaluative Knowledge |
|
|
457 | (1) |
|
Critical Challenges to Building and Sustaining an Evaluative Culture |
|
|
458 | (2) |
|
Building an Evaluative/Learning Culture in a Finnish Local Government: Joining Performance Measurement and Performance Management |
|
|
460 | (1) |
|
Striving for Objectivity in Program Evaluations |
|
|
460 | (7) |
|
Can Program Evaluators Claim Objectivity? |
|
|
462 | (1) |
|
Objectivity and Replicability |
|
|
463 | (3) |
|
Implications for Evaluation Practice: A Police Body-Worn Cameras Example |
|
|
466 | (1) |
|
Criteria for High-Quality Evaluations |
|
|
467 | (3) |
|
|
470 | (1) |
|
|
471 | (1) |
|
|
472 | (5) |
|
Chapter 12 The Nature and Practice of Professional Judgment in Evaluation |
|
|
477 | (40) |
|
|
478 | (1) |
|
The Nature of the Evaluation Enterprise |
|
|
478 | (4) |
|
|
479 | (1) |
|
Reconciling the Diversity in Evaluation Theory With Evaluation Practice |
|
|
480 | (1) |
|
Working in the Swamp: The Real World of Evaluation Practice |
|
|
481 | (1) |
|
Ethical Foundations of Evaluation Practice |
|
|
482 | (4) |
|
Power Relationships and Ethical Practice |
|
|
485 | (1) |
|
Ethical Guidelines for Evaluation Practice |
|
|
486 | (4) |
|
Evaluation Association-Based Ethical Guidelines |
|
|
486 | (4) |
|
Understanding Professional Judgment |
|
|
490 | (9) |
|
What Is Good Evaluation Theory and Practice? |
|
|
490 | (2) |
|
|
492 | (1) |
|
Balancing Theoretical and Practical Knowledge in Professional Practice |
|
|
492 | (1) |
|
Aspects of Professional Judgment |
|
|
493 | (2) |
|
The Professional Judgment Process: A Model |
|
|
495 | (2) |
|
|
497 | (1) |
|
Values, Beliefs, and Expectations |
|
|
497 | (1) |
|
Cultural Competence in Evaluation Practice |
|
|
498 | (1) |
|
Improving Professional Judgment in Evaluation |
|
|
499 | (7) |
|
Mindfulness and Reflective Practice |
|
|
499 | (2) |
|
Professional Judgment and Evaluation Competencies |
|
|
501 | (3) |
|
Education and Training-Related Activities |
|
|
504 | (1) |
|
Teamwork and Improving Professional Judgment |
|
|
505 | (1) |
|
The Prospects for an Evaluation Profession |
|
|
506 | (3) |
|
|
509 | (1) |
|
|
510 | (1) |
|
|
511 | (2) |
|
Appendix A Fiona's Choice: An Ethical Dilemma for a Program Evaluator |
|
|
511 | (1) |
|
|
512 | (1) |
|
|
513 | (4) |
Glossary |
|
517 | (13) |
Index |
|
530 |