
Program Evaluation and Performance Measurement: An Introduction to Practice




DOCUMENT INFORMATION

Basic information

Format
Pages: 683
Size: 9.38 MB

Content

Reviews of the Third Edition

“The book is thorough and comprehensive in its coverage of principles and practices of program evaluation and performance measurement. The authors are striving to bridge two worlds: contemporary public governance contexts and an emerging professional role for evaluators, one that is shaped by professional judgement informed by ethical/moral principles, cultural understandings, and reflection. With this edition the authors successfully open up the conversation about possible interconnections between conventional evaluation in new public management governance contexts and evaluation grounded in the discourse of moral-political purpose.”
—J. Bradley Cousins, University of Ottawa

“The multiple references to body-worn-camera evaluation research in this textbook are balanced and interesting, and a fine addition to the Third Edition of this book. This careful application of internal and external validity for body-worn cameras will be illustrative for students and researchers alike. The review of research methods is specific yet broad enough to appeal to the audience of this book, and the various examples are contemporary and topical to evaluation research.”
—Barak Ariel, University of Cambridge, UK, and Alex Sutherland, RAND Europe, Cambridge, UK

“This book provides a good balance between the topics of measurement and program evaluation, coupled with ample real-world application examples. The discussion questions and cases are useful in class and for homework assignments.”
—Mariya Yukhymenko, California State University, Fresno

“Finally, a text that successfully brings together quantitative and qualitative methods for program evaluation.”
—Kerry Freedman, Northern Illinois University

“The Third Edition of Program Evaluation and Performance Measurement: An Introduction to Practice remains an excellent source book for introductory courses to program evaluation, and a very useful reference guide for seasoned evaluators. In addition to covering in an in-depth and interesting manner the core areas of program evaluation, it clearly presents the increasingly complementary relationship between program evaluation and performance measurement. Moreover, the three chapters devoted to performance measurement are the most detailed and knowledgeable treatment of the area that I have come across in a textbook. I expect that the updated book will prove to be a popular choice for instructors training program evaluators to work in the public and not-for-profit sectors.”
—Tim Aubry, University of Ottawa

“This text guides students through both the philosophical and practical origins of performance measurement and program evaluation, equipping them with a profound understanding of the abuses, nuances, mysteries, and successes [of those topics]. Ultimately, the book helps students become the professionals needed to advance not just the discipline but also the practice of government.”
—Erik DeVries, Treasury Board of Canada Secretariat

Program Evaluation and Performance Measurement, Third Edition

This book is dedicated to our teachers, people who have made our love of learning a life’s work.
From Jim McDavid: Elinor Ostrom, Tom Pocklington, Jim Reynolds, and Bruce Wilkinson
From Irene Huse: David Good, Cosmo Howard, Evert Lindquist, Thea Vakil
From Laura Hawthorn: Karen Dubinsky, John Langford, Linda Matthews

Sara Miller McCune founded SAGE Publishing in 1965 to support the dissemination of usable knowledge and educate a global community. SAGE publishes more than 1000 journals and over 800 new books each year, spanning a wide range of subject areas. Our growing selection of library products includes archives, data, case studies and video. SAGE remains majority owned by our founder and after her lifetime will become owned by a charitable trust that secures the company’s continued independence.

Los Angeles | London | New Delhi | Singapore | Washington DC | Melbourne

Program Evaluation and Performance Measurement
An Introduction to Practice
Third Edition

James C. McDavid, University of Victoria, Canada
Irene Huse, University of Victoria, Canada
Laura R. L. Hawthorn

Copyright © 2019 by SAGE Publications, Inc.

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

For Information:
SAGE Publications, Inc., 2455 Teller Road, Thousand Oaks, California 91320. E-mail: order@sagepub.com
SAGE Publications Ltd., Oliver’s Yard, 55 City Road, London EC1Y 1SP, United Kingdom
SAGE Publications India Pvt Ltd, B 1/I Mohan Cooperative Industrial Area, Mathura Road, New Delhi 110 044, India
SAGE Publications Asia-Pacific Pte Ltd, Church Street #10–04 Samsung Hub, Singapore 049483

Printed in the United States of America. This book is printed on acid-free paper.

18 19 20 21 22 10

Names: McDavid, James C., author | Huse, Irene, author | Hawthorn, Laura R. L.
Title: Program evaluation and performance measurement : an introduction to practice / James C. McDavid, University of Victoria, Canada, Irene Huse, University of Victoria, Canada, Laura R. L. Hawthorn.
Description: Third Edition | Thousand Oaks : SAGE Publications, Inc., Corwin, CQ Press, [2019] | Revised edition of the authors' Program evaluation and performance measurement, c2013 | Includes bibliographical references and index.
Identifiers: LCCN 2018032246 | ISBN 9781506337067 (pbk.)
Subjects: LCSH: Organizational effectiveness–Measurement | Performance–Measurement | Project management–Evaluation
Classification: LCC HD58.9 M42 2019 | DDC 658.4/013–dc23
LC record available at https://lccn.loc.gov/2018032246

Acquisitions Editor: Helen Salmon
Editorial Assistant: Megan O’Heffernan
Content Development Editor: Chelsea Neve
Production Editor: Andrew Olson
Copy Editors: Jared Leighton and Kimberly Cody
Typesetter: Integra
Proofreader: Laura Webb
Indexer: Sheila Bodell
Cover Designer: Ginkhan Siam
Marketing Manager: Susannah Goldes

Contents

Preface
Acknowledgments
About the Authors
Chapter 1 • Key Concepts and Issues in Program Evaluation and Performance Management
Chapter 2 • Understanding and Applying Program Logic Models
Chapter 3 • Research Designs for Program Evaluations
Chapter 4 • Measurement for Program Evaluation and Performance Monitoring
Chapter 5 • Applying Qualitative Evaluation Methods
Chapter 6 • Needs Assessments for Program Development and Adjustment
Chapter 7 • Concepts and Issues in Economic Evaluation
Chapter 8 • Performance Measurement as an Approach to Evaluation
Chapter 9 • Design and Implementation of Performance Measurement Systems
Chapter 10 • Using Performance Measurement for Accountability and Performance Improvement
Chapter 11 • Program Evaluation and Program Management
Chapter 12 • The Nature and Practice of Professional Judgment in Evaluation
Glossary
Index

  economic evaluation in, 306
  needs assessment in, 252–254
  ratchet effects in, 388, 418
  threshold effects in, 418
Performance measurement, 163–164
  for accountability and performance improvement, 343
  addressing general issues, 356–357
  attribution and, 358–361
  beginnings in local government, 344–345
  big data analytics in, 180–181
  comparing program evaluation and, 353–363
  comparisons included in system for, 395–398
  complex interventions and, 63
  under conditions of chronic fiscal restraint, 437–438
  conflicting expectations for, 379–380
  connecting qualitative evaluation methods to, 239–241
  current imperative for, 342–343
  decentralized, 435–437
  decoupling in, 431–432
  de-emphasizing outputs and outcomes, 437–438
  emergence of New Public Management and, 346–349
  evaluators, 362–363
  external accountability (EA) approach in, 430–431, 435
  federal performance budgeting reform, 345–346
  gaming, 416
  giving managers “freedom to manage,” 434–435
  growth and evolution of, 344–350
  in high-stakes environment, 412–415
  integration with program evaluation, 4–5
  intended purposes of, 363, 380–382
  internal learning (IL) approach in, 430–431, 435–437
  introduction to, 341
  logic models for, 81–82, 83 (figure)
  in low-stakes environment, 425–429
  making changes to systems of, 432–434
  in medium-stakes environment, 419–424
  metaphors that support and sustain, 350–353
  Most Significant Change (MSC) approach, 239–241, 242
  “naming and shaming” approach to, 415–418
  as ongoing, 356, 376
  ongoing resources for, 361–362
  organizational cultural acceptance and commitment to, 374
  professional engagement (PR) regime in, 430–431
  for public accountability, 400–401
  rebalancing accountability-focused, 429–437
  research designs and, 145–146
  role of incentives and organizational politics in, 424–425
  routinized processes in, 357–358
  sources of data, 179–191
  steering, control, and performance improvement with, 349–350
  validity issues, 388, 391–393, 482
Performance measurement design
  changes in, 432–434
  clarifying expectations for intended uses in, 380–382
  communication in, 379–380, 433
  developing logic models for programs for which performance measures are being designed and identifying key constructs to be measured in, 385–386, 387 (table)
  highlighting the comparisons that can be part of the system, 395–398
  identifying constructs beyond those in single programs in, 387–390
  identifying resources and planning for, 383–384
  introduction to, 372
  involving prospective users in development of logic models and constructs in, 390–391
  key steps in, 374–399
  leadership and, 375–377
  reporting and making changes included in, 398–399
  taking time to understand organizational history around similar initiatives in, 384–385
  technical/rational view and political/cultural view in, 372–374
  translating constructs into observable performance measures in, 391–395
  understanding what performance measurement systems can and cannot do, 377–378
Performance monitoring,
Performance paradox in the public sector, 429–430
Perla, R., 185–186
Perry Preschool Study, 145, 179
  compensatory equalization of treatments in, 116
  conclusions from, 116–117
  empirical causal model for, 152–153
  High Scope/Perry Preschool Program cost–benefit analysis, 322–328
  limitations of, 115–116
  as longitudinal study, 113–114
  research design, 112–115
  within-case analysis, 222
Personal recall, 192–194
Peters, G., 429
Peterson, K., 306
Petersson, J., 181
Petrosino, A., 143
Pett, M., 254
Philosophical pragmatism, 224
Photo radar cameras, Vancouver, Canada, 164–165
Phronesis, 479, 484
Picciotto, R., 12, 463, 491, 506
Pinkerton, S. D., 330
Pitman, A., 492
Planning, programming, and budgeting systems (PPBS), 344
Plausible rival hypotheses, 12, 163, 399
  causal relationship between two variables and, 99
  visual metaphor for, 108
Poister, T. H., 374, 383, 394, 398
Polanyi, M., 492
Police body-worn camera program, Rialto, California
  as basic type of logic model, 58–60, 78–79
  connecting this book to, 21–22
  construct validity, 127
  context of, 17–18
  implementing and evaluating effects of, 18–19
  key findings on, 19
  measurement validity, 126–127
  program logic model for, 59, 59–60
  program success versus understanding the cause-and-effect linkages in, 20
  randomized controlled trials and quasi-experiments, 122–124
  replication of evaluation of, 466–467
Policies, 10–11
  appropriateness of, 253
  incremental impact of changes in, 153–156
  as open systems, 53–54
  results-based neo-liberalism, 491–492
Policy on Evaluation, 491
Policy on Results, 491
Policy Paradox: The Art of Political Decision Making, 373
Polinder, S., 301, 331
Political/cultural perspective, 373
Politics
  and incentives in performance measurement systems, 424–425
  of needs assessments, 256–257
  organizational, 373
Pollitt, C., 363, 382, 412, 413, 415, 424–425, 429, 453
Pons, S., 465
Populism, 486
Positivism, 208, 211
  as criteria for judging quality and credibility of qualitative research, 215 (table)
Post-assessment phase in needs assessment, 283–285
Postpositivism, 211
  as criteria for judging quality and credibility of qualitative research, 215 (table)
Post-test assessments, 191
Post-test only experimental design, 108–109
Post-test only group, 110
Power
  ethical practice and, 485–486
  knowledge and, 499
Practical know-how in model of professional judgment process, 496 (table)
Practical wisdom, 484, 486
Pragmatism, 213–214
  philosophical, 224
Pre-assessment phase in needs assessment, 259–268
  focusing the needs assessment in, 260–266, 286
  forming the needs assessment committee (NAC) in, 266, 286–287
  literature reviews in, 267, 287–288
  moving to phase II and/or III or stopping after, 268
Predictive validity, 171, 173
President’s Emergency Plan for AIDS Relief (PEPFAR), U.S., 468
Pre-test assessments, 37, 191
  retrospective, 135, 194–196
Pre-test-post-test experimental design, 108–109
Prioritizing needs to be addressed, 278–279, 288–290
Problems, simple, complicated, and complex, 60–61
Procedural judgment, 494
Procedural organizations, 386, 387 (table)
Process uses of evaluations, 41
Production organizations, 386, 387 (table)
Professional engagement regime (PR), 430
Professional judgment,
  aspects of, 493–495
  balancing theoretical and practical knowledge, 492–493
  education and training-related activities and, 504–505
  ethics in (See Ethics)
  evaluation competencies and, 501–502, 502–504 (table)
  good evaluation theory and practice and, 490–492
  importance of, 16–17
  improving, 499–506
  introduction to, 478
  mindfulness and reflective practice in, 499–501
  nature of the evaluation enterprise and, 478–482
  process in, 495–498
  tacit knowledge and, 17, 490, 492
  teamwork and improving, 505–506
  types of, 494
  understanding, 490–499
Program activities, 24–25
Program Assessment Rating Tool (PART), 424
Program complexity, 302–303
Program components, 55
Program effectiveness, 13, 217
Program environment, 13, 34–35, 57
Program evaluation, 3
  after needs assessment, 284–285
  American model of, 6–7
  attribution issue in (See Attribution)
  basic statistical tools for, 150–151
  big data analytics in, 180–181
  Canadian federal model of,
  causality in, 12–14
  collaborative, 451
  comparing performance measurement and, 353–363
  connected to economic evaluation, 302–303
  connected to performance management system, 5–8
  constructing logic model for, 79–81
  Context, Input, Process, Product (CIPP) model, 449
  criteria for high-quality, 467–469
  cultural competence in, 498–499
  defining,
  developmental, 15, 85, 458
  diversity of theory on, 480
  as episodic, 356
  ethical foundations of, 482–486
  ethical guidelines for, 486–488, 489–490 (table)
  evaluators in, 362–363
  ex ante, 8, 16, 27
  ex post, 15–16, 27
  formative, 8, 14–15, 216–217, 217–218 (table), 446, 450–452
  holistic approach to, 219
  importance of professional judgment in, 16–17
  improving professional judgment in, 499–506
  inductive approach to, 219
  integration with performance measurement, 4–5
  intended purposes of, 363
  internal, 447–455
  as issue/context specific, 356–357
  key concepts in, 12–17
  key questions in, 22–27
  linking theoretical and empirical planes in, 124 (figure)
  making changes based on, 41–42
  measurement in, 163–164
  measures and lines of evidence in, 357–358
  measuring constructs in, 166
  nature of the evaluation enterprise and, 478–482
  objectivity in, 40, 450, 460–467
  paradigms and their relevance to, 208–213
  police body-worn camera program, Rialto, California, 17–22
  process, 37–41
  professional judgment and competencies in, 501–502, 502–504 (table)
  realist, 71–74
  real world of, 481–482
  shoestring, 31
  sources of data, 179–191
  steps in conducting (See Program evaluation, steps in conducting)
  summative, 9, 14–15, 216–217, 217–218 (table), 446, 452–453
  targeted resources for, 361–362
  theory-driven, 68, 74–75
  See also Economic evaluation
Program evaluation, steps in conducting
  doing the evaluation, 37–41
  feasibility, 30–37
  general, 28–29
Program Evaluation Standards, 467, 487
Program impacts, 57
Program implementation, 8–9
  after needs assessment, 284–285
Program inputs, 24–25, 55
Program logic models, 5, 33
  basic logic modeling approach, 54–60
  brainstorming for, 80–81
  construction of, 79–81
  defined, 51
  introduction to, 51–54
  for Meals on Wheels program, 88, 89 (figure)
  open systems approach and, 52–53
  for performance measurement, 81–82, 83 (figure)
  for police body-worn camera programs, 59, 59–60
  primary health care in Canada, 89–91
  program objectives and program alignment with government goals, 64–67
  program theories and program logics in, 68–75
  “repacking,” 104
  strengths and limitations of, 84–85
  surveys in, 183
  testing causal linkages in, 141–145
  that categorize and specify intended causal linkages, 75–79
  in a turbulent world, 85
  working with uncertainty, 60–63
  See also Logic modeling
Program logics, 68, 84
  contextual factors, 70–71
  systematic reviews, 69–70
Program management
  formative evaluation and, 450–452
  summative evaluation and, 452–453
Program managers, 84, 164
  performance measurement and, 181–182
Programmed Planned Budgeting Systems (PPBS), 345–346
Program monitoring after needs assessment, 284
Program objectives, 64–67
Program processes, 15
Programs, 11
  intended outcomes of, 13
  as open systems, 53–54
  strategic context of, 263–264
Program theories, 68, 74–75
Progressive Movement, 345
Propensity score analysis, 134
Proportionate stratified samples, 272
Propper, C., 416
Proxy measurement, 181
Psychological constructs, 183
Public accountability, 377–378, 381
  performance measurement for, 400–401, 411–429
  performance paradox in, 429–430
Public Safety Canada, 18
Public Transit Commission, Pennsylvania, study, 275–277
Public Value Governance,
Purposeful sampling, 228, 229 (table)
QALY. See Quality-adjusted life-years (QALY)
Qualitative data, 38
  analysis of, 233–236
  collecting and coding, 230–233
  triangulating, 288
  within-case analysis of, 222–223
Qualitative evaluation, 101–102
  alternative criteria for assessing qualitative research and, 214–216
  basics of designs for, 216–221
  comparing and contrasting different approaches to, 207–216, 218–221
  differences between quantitative and, 219 (table)
  diversity of approaches in, 207
  introduction to, 206–207
  mixed methods designs, 224–228
  naturalistic designs, 219–220
  outcomes mapping in, 220–221
  paradigms and, 208–213
  performance measurement connected to methods in, 239–241
  power of case studies and, 241–242
  summative versus formative, 216–217, 217–218 (table)
Qualitative interviews, 231–234
  in community health needs assessment in New Brunswick, 288
Qualitative needs assessment, 277–278
Qualitative program evaluation, 219
  collecting and coding data in, 230–233
  credibility and generalizability of, 237–239
  data analysis in, 233–236
  designing and conducting, 221–237
  interviews in, 231–234
  purpose and questions clarification, 222
  reporting results of, 237
  research designs and appropriate comparisons for, 222–224
  sampling in, 228–230
Qualitative Researching, 487
Quality-adjusted life-years (QALY), 300, 304, 305
  cost–utility analysis and, 321–322
  threshold for, 314
Quality Standards for Development Evaluation, 468
Quantitative data, 38
  triangulating, 288
Quantitative evaluation, 179
  differences between qualitative and, 219 (table)
  mixed methods designs, 224–228
Quasi-experimental designs, 101
  addressing threats to internal validity, 131–140
  police body-worn cameras, 122–124
Quota sampling, 275 (table)
Randomized experiments/randomized controlled trials (RCTs), 4, 21, 481
  consequentialism and, 484
  construct validity, 127–128
  High Scope/Perry Preschool Program cost–benefit analysis, 323
  police body-worn cameras, 122–124
  qualitative methods, 219–220
Random sampling, 272
Ratchet effect, 388, 418
Ratio measures, 176, 177–179
Rationale, 21
Raudenbush, S., 174
Rautiainen, A., 432, 459
Reagan, R., 307
Real benefits, 316
Real costs, 316
Realist evaluation, 71–74
Real rates, 317
RealWorld Evaluation approach, 240
Redekop, W. K., 301
Reflective judgment, 494
Reflective practice, 461, 499–501
Regression, 150
  coefficients, 150
  logistic, 134
  multiple regression analysis, 150, 150–151, 154, 186
  multivariate, 175
  revealed preferences methods, 313
  statistical, 120, 133, 137
Regulatory Impact Analysis (RIA), U.S., 307
Reilly, P., 301
Reinventing Government, 347
Relevance, program, 24
Reliability, 164–175
  Cronbach’s alpha, 168
  difference between validity and, 169–170
  intercoder, 168
  Likert statement, 168
  split-half, 167
  in surveys, 191
  understanding, 167–168
Replicability and objectivity, 463–465
  body-worn cameras study, 466–467
Reports
  needs assessment, 280–282
  dissemination, 40–41
  qualitative program evaluation, 237
  writing, review and finalizing of, 39–40
Research designs,
  case study, 34, 141
  characteristics of, 104–110
  conditions for establishing relationship between two variables, 99
  evaluation feasibility assessment, 33–34
  experimental (See Experimental designs)
  feasibility issues, 10
  gold standard, 4, 21, 102
  holding other factors constant, 104
  implicit, 141
  naturalistic, 219–220
  non-experimental, 140–141
  patched-up, 104
  performance measurement and, 145–146
  Perry Preschool Study, 112–117
  qualitative (See Qualitative evaluation)
  quasi-experimental (See Quasi-experimental designs)
  “repacking” logic models, 104
  survey instruments, 189–191
  threats to validity and, 118–131
  treatment groups, 99–100
  why pay attention to experimental designs in, 110–111
Response process validity, 171, 172, 392
Response set, 187
Response shift bias, 195
Results-Based Logic Model for Primary Health Care: Laying an Evidence-Based Foundation to Guide Performance Measurement, Monitoring and Evaluation, A, 89
Results-based management, 5–6
  See also New public management (NPM)
Results-based neo-liberalism, 491–492
Results reporting
  in performance measurement systems, 398–399
  qualitative program evaluation, 237
Retrospective pre-tests, 135, 194–196
Revealed preferences, 312, 313 (table)
Reviere, R., 258
Richie, J., 235
Rist, R. C., 28, 355, 456, 458
Rival hypotheses, 26, 57
  plausible, 99, 108
Rogers, P. J., 30–31, 62, 74, 464
Roosevelt, F. D., 307
Rossi, P. H., 111
Roth, J., 256
Rothery, M., 254
Royal British Columbia Museum admission fee policy, 153–156
Rugh, J., 194
Rush, B., 55
Rutman, L., 28
Sabharwal, S., 301
Sadler, S., 306
Saldana, J., 222
Sample sizes, 273–274
Sampling, 34, 118
  level of confidence, 274
  methods, 105, 274–275
  mixed, 230
  in needs assessments, 271–275
  opportunistic, 230
  purposeful, 228, 229 (table)
  qualitative evaluations, 228–230
  random, 272
  sizes of, 273–274
  snowball or chain, 228, 230, 272, 275 (table)
  theoretical, 228
  typical case, 230
Sampling error, 273
Sampson, R., 174
Sanders, G. D., 301
Scale, Likert-like, 186
Schack, R. W., 380
Schön, D. A., 504
Schröter, D. C., 74
Schwandt, T., 57, 487, 493
Schwarz, N., 192–193
Schweinhart, L., 115
Scrimshaw, S. C., 32
Scriven, M., 12, 14–15, 40, 222, 256, 450, 460–462
  on ethical evaluation, 487
Secondary sources, 262, 357
Selection and internal validity, 120–121, 132
Selection-based interactions and internal validity, 121
Self-awareness and socially-desirable responding, 79
Self-evaluating organizations, 447
Self-Sufficiency Project, 75–77
Senge, P. M., 380, 457
Sensitivity analysis, 317, 318–319, 327
Sequential explanatory design, 226–227
Sequential exploratory design, 227
Shadish, W. R., 111, 118, 125, 127–130, 170
  on validity, 174, 194
Shareable knowledge in model of professional judgment process, 496 (table)
Shaw, I., 30
Shemilt, I., 315
Shepherd, R., 452
Shoestring evaluation, 31
“Sibling Cancer Needs Instrument,” 267
Sigsgaard, P., 239, 240
Simple interventions, 61–62
Simple problems, 60–61
Single time series design, 34, 133, 134–135
Skip factors, 272
Snowball sampling, 228, 230, 272, 275 (table)
Social constructivism, 210, 213
  as criteria for judging quality and credibility of qualitative research, 215 (table)
Social democracy, 486
Social desirability response bias, 191
Social need, types of, 255–256
Social opportunity cost of capital (SOC), 316–317
Social rate of time preference (SRTP), 316–317
Solomon Four-Group Design, 110
Sonneveld, P., 301
Soriano, F. L., 258, 280
Sork, T. J., 250, 263, 270
Speaking truth to power: The art and craft of policy analysis, 446
Special Issue of New Directions in Evaluation, 453
Specifying set of alternatives in economic evaluation, 314
Split-half reliability, 167
Stamp, J., 393
Standard gamble method, 322
Standards for Educational and Psychological Testing, 170
Standing in cost–benefit analysis, 309–312, 314, 324–325
Stanford–Binet Intelligence Test, 112–114
Stanford University Bing Nursery School, 174
Stanley, J. C., 118, 134
Stated preferences method, 312, 313 (table)
Static-group comparison design, 135
Statistical conclusions validity, 98, 118, 131
Statistical Methods for Research Workers, 105
Statistical Package for the Social Sciences (SPSS), 223
Statistical regression and internal validity, 120
Statistical significance, 113
Steccolini, I., 357
Stergiopoulos, V., 254
Stern, N., 317
Stevahn, L., 501
Stevens, A., 252–254
Stimulus-response model of survey process, 183
Stockard, J., 414
Stockmann, R., 62, 508
Stone, D., 373
Strategic context of programs, 263–264
Stratified purposeful sampling, 230
Stratified random samples, 272
Streams of evaluative knowledge, 457–458
Structure/logic of programs, 24
Structure of Scientific Revolutions, The, 208
“Study of Administration, The,” 352
Stufflebeam, D., 449–450, 460
Summative evaluation, 9, 14–15, 446, 452–453
  as qualitative evaluation approach, 216–217, 217–218 (table)
Summative needs assessment, 261
Surveys
  conducting, 187–189
  designs, 187–189, 196–197
  estimating incremental effects of programs using, 192–196
  as evaluator-initiated data source in evaluations, 182–184
  Likert statements in, 185–187
  in medium-stakes environment, 420–423
  in needs assessments, 270–271
  open-ended questions in, 227, 231
  personal recall and, 192–194
  retrospective pre-test, 135, 194–196
  steps in responding to, 193
  stimulus-response model of, 183
  structuring instruments for, 189–191
  unintended responses in, 184
  validity and reliability issues applicable to, 191
Sustained leadership, 432
Sutherland, A., 18–20
Swenson, J. R., 265
Symbolic uses of evaluations, 41
Systematic review, 32
Systematic sampling, 272, 275 (table)
Tacit knowledge, 17, 490, 492
Tailored design method surveys, 188
Taks, M., 309
Tanner, G., 254
Target populations for needs assessment, 262–263
Target setting, performance measurement systems, 388–395
Taylor, F., 351
Teamwork and professional judgment, 505–506
Technical efficiency, 25
Technical judgments, 494
Technical/rational perspective, 373
Temporal asymmetry, 99
Testing procedures and internal validity, 120
Thatcher, M., 347
Theoretical sampling, 228
Theorizing in mixed methods, 225
Theory-driven evaluations, 68, 74–75
Theory of change (ToC), 51, 74–75
  response bias, 191, 195
Thompson, J. D., 361
Three triangles in model of professional judgment process, 498
Three-way analysis of variance, 107
Threshold effects, 418
Tilley, N., 71
Time series, 113
  interrupted, 133–140
  single, 34, 133, 134–135
  York Neighborhood Watch Program, 136–140
Time trade-off method, 322
Timing in mixed methods, 224–225
Traditional anthropological research, 208
Travel cost method, 312, 313 (table)
Treasury Board of Canada Secretariat (TBS), 7, 82, 350, 355
  accountability expectations, 452
  core questions in program evaluation, 356
  key role for managers in, 362
  logic model template, 54, 91
  objectivity in evaluation of, 461, 464
  program structure, 387
  repeatability in evaluation of, 465
  resource alignment review, 67
Treatment groups, 99
Triangulation, 34, 140, 141
  of data sources, 239
  of qualitative and quantitative lines of evidence, 288
Tripp, D., 499–500, 504
Trochim, R., 125, 170
Troubled Families Program in Britain, 166–167
  as complex program, 302–303
  mixed methods in, 225–226
  needs assessment and, 251
  qualitative evaluation report, 237
  within-case analysis, 223
Trump, D.,
Tusler, M., 414
Tutty, L., 254
Tversky, A., 103
Typical case sampling, 230
U.K. Job Retention and Rehabilitation Pilot, 223, 225
Uncertainty, working with, 60–63
Unintended responses, 184
United States, the
  Data Resource Center for Children and Adolescent Health, 254
  early childhood programs in, 32, 69, 98, 128
  federal performance budgeting reform in, 345–346
  focus on government program performance results in, 6–7
  gold standard in, 102
  Government Accountability Office, 54, 461
  Government Performance and Results Act, 349, 424
  Government Performance and Results Act Modernization Act, 349
  Medicaid program, 249
  New Deal era in, 307
  North Carolina Community Assessment Guidebook, 279
  Office of Management and Budget (OMB), 6–7, 349, 424
  Oregon Open Data Portal, 263
  Patient Protection and Affordable Care Act, 251
  performance measurement in local governments in, 344–345
  police body-worn camera study in Rialto (See Police body-worn camera program, Rialto, California)
  President’s Emergency Plan for AIDS Relief (PEPFAR), 468
  Regulatory Impact Analysis (RIA), 307
  resource alignment review in, 67
Units of analysis, 4, 144, 175–176
  in surveys, 182
Urban Change welfare-to-work project, 217
  within-case analysis, 223
US Bureau of Justice Assistance, 18
Utilization focus, 30
UTOS, 101
Uyl-de Groot, C., 301
Vale, L., 301, 315
Validity, 164–175
  bias as problem in, 168
  of causes and effects, 197–198
  concurrent, 171, 173
  construct, 21, 68, 118, 124–129, 170–171
  content, 171, 172, 392
  convergent, 171, 174
  difference between reliability and, 169–170
  discriminant, 171, 174–175
  external, 21, 98, 118, 129–131
  face, 171, 172, 392
  four basic threats to, 118–131
  internal, 21, 35, 98, 110, 118–122, 131–140
  internal structure, 171, 172–173
  measurement, 125–126, 197–198
  in needs assessments, 275–277
  of performance measures,
  predictive, 171, 173
  response process, 171, 172, 392
  statistical conclusions, 98, 118, 131
  in surveys, 191
  types of, 170–171
  understanding, 169–170
  ways to assess, 171–175
Value-for-money, 307
Values in model of professional judgment process, 496 (table), 497–498
Van Dooren, W., 459
Van Loon, N., 429–430
Van Thiel, S., 429, 432
Variables, 106
  ambiguous temporal sequence and, 121
  dependent (See Dependent variables)
  independent (See Independent variables)
  nominal, 176–177
  ordinal, 177
Vickers, S., 484
Vining, A., 308, 315
Virtual interviews, 236
Vo, A. T., 206
Volkov, B., 453
Voluntary Organizations of Professional Evaluation (VOPEs), 506–507
Wankhade, P., 415
Watson, K., 103
Web-based surveys, 188
Weber, M., 434
Weighting in mixed methods, 225
Weikart, D., 115
Weimer, D., 308, 315
Weisburd, D., 110–111
Weiss, C., 449
Weiss, C. H., 9, 15
Welfare economics, 305
Westine, C. D., 74
Whynot, J., 452
Wicked problems, 481
Wilcox, S. J., 319–320, 330
Wildavsky, A. B., 362, 446–447, 456
Williams, D. W., 344
Willingness-to-accept (WTA), 305, 312
Willingness-to-pay (WTP), 305, 312
Wilson, A. T., 487
Wilson, D., 412, 416, 429
Wilson, J. Q., 386
Wilson, W., 352
Winter, J. P., 276–277
Wisdom, practical, 484
Within-case analysis, 222–223
Wolfe, E. W., 188
Wolfe, S. E., 20
Workable logic models, 80
WorkSafeBC, 358, 395–396
Wright, B. E., 376
Yarbrough, D. B., 467
Yesilkagit, K., 432
York Neighborhood Watch Program, 136–140
  findings and conclusions, 137–140
  program logic, 141–145
Zero-based budgeting (ZBB), 344
Zigarmi, D., 195–196
Zimmerman, B., 60–62

... Evaluation and Performance Measurement

Introduction

Integrating Program Evaluation and Performance Measurement

Connecting Evaluation to the Performance Management System

The Performance Management...

important for both program evaluation and performance measurement. After laying the foundations for program evaluation, we turn to performance measurement as an outgrowth of our understanding of program...

performance management is the importance of program and policy performance results being collected, analyzed, compared (sometimes to performance targets), and then used to monitor, learn, and make

Posted: 20/01/2020, 14:18
