2014 Presenter Bios and Workshop Descriptions

Presenters: Tarek Azzam, Michael Bamberger, Dale Berger, Tiffany Berry, Katrina Bledsoe, Wanda Casillas, Huey Chen, Tina Christie, William Crano, Stewart Donaldson, Rebecca Eddy, David Fetterman, John Gargani, Brian Hilton, Rodney Hopson, Thomas Horan, Maliha Khan, Ada Ocampo, Allen Omoto, Michael Q. Patton, Becky Reichard, Maritza Salazar, Michael Scriven, Marco Segone, Jason Siegel, Hazel Symonette, Scott Thomas, Michael Trevisan, and Tamara Walser

Workshop Descriptions

Saturday, August 23

 

Basics of Evaluation and Applied Research Methods
Stewart I. Donaldson & Tina Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research, basics of validity and design sensitivity, strengths and weaknesses of a variety of common applied research methods, and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series.

Participants are strongly encouraged to watch our online introductory course via Canvas (click here).

Recommended background readings include:

Copies are available from Amazon.com by following the links above and are also available from the Claremont Evaluation Center for $20 each.  Checks should be made out to Claremont Graduate University and addressed to: John LaVelle, Claremont Graduate University/SSSPE, 150 E. 10th Street, Claremont, CA 91711.


Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

   

Applied Multiple Regression: Mediation, Moderation, and More

Dale Berger


Multiple regression is a powerful and flexible tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include preparing data for analysis, selecting models that are appropriate to your data and research questions, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel. Detailed handouts include explanations and examples that can be used at home to guide similar applications.

You will learn:

  • Concepts important for understanding regression
  • Procedures for conducting computer analysis, including SPSS code
  • How to conduct mediation and moderation analyses (a brief code sketch follows this list)
  • How to interpret SPSS REGRESSION output
  • How to present regression findings in useful ways
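
The workshop demonstrations use SPSS and Excel; purely as a rough, non-SPSS illustration of the moderation and mediation ideas listed above, the sketch below uses Python's statsmodels on simulated data with hypothetical variable names (x, z, m, y). It is not part of the workshop materials.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated (hypothetical) data: x = predictor, z = possible moderator,
# m = possible mediator, y = outcome.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.3 * x * z + rng.normal(size=n)
df = pd.DataFrame({"x": x, "z": z, "m": m, "y": y})

# Moderation: include the x*z interaction; a reliable interaction term
# suggests the effect of x on y depends on the level of z.
moderation = smf.ols("y ~ x * z", data=df).fit()
print(moderation.summary())

# Mediation (product-of-coefficients logic):
# path a: x -> m; path b: m -> y, controlling for x.
path_a = smf.ols("m ~ x", data=df).fit()
path_b = smf.ols("y ~ m + x", data=df).fit()
indirect = path_a.params["x"] * path_b.params["m"]
print("Estimated indirect (a*b) effect of x on y via m:", round(indirect, 3))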

Questions regarding this workshop may be addressed to Dale.Berger@cgu.edu

 

Big Data

Thomas Horan & Brian Hilton


This workshop description is forthcoming.
 

Evaluation-Specific Methodology

Michael Scriven

The five great sources of the methodology we need to do professional evaluation in the usual areas such as program, personnel, policy, performance, and portfolio evaluation are, in historical order of the recognition of their indispensability: (i) qualitative methodology; (ii) measurement and experimental design; (iii) statistics; (iv) cost analysis; (v) evaluation-specific methodology. Progress has been slow on the last two, but good texts and examples are available for the fourth, some of them in this summer package from CGU, so it’s really the fifth we need to move forward on. We’ll cover (a) why it’s absolutely essential and inescapable; (b) why it’s been dismissed or underestimated in the past; (c) why basic values are plainly facts, so the ‘facts vs. values’ dichotomy is a myth; (d) why, in addition to basic values, a vast slice of our survival-supporting knowledge is evaluative, but not enough; (e) how to validate value claims; (f) anything else controversial we can squeeze in, e.g., when to use/avoid crowd-sourced evaluation.

 

 

 

Data Visualization

Tarek Azzam


The careful planning of visual tools will be the focus of this workshop. Part of our responsibility as evaluators is to turn information into knowledge. Data complexity can often obscure main findings or hinder a true understanding of program impact. So how do we make information more accessible to stakeholders? Often this is done by visually displaying data and information, but this approach, if not done carefully, can also lead to confusion. We will explore the underlying principles behind effective information displays. These are principles that can be applied in almost any area of evaluation, and we will draw on the work of Edward Tufte, Stephen Few, and Jonathan Koomey to illustrate the breadth and depth of their applications. In addition to providing tips to improve most data displays, we will examine the core factors that make them effective. We will discuss the use of common graphical tools and delve deeper into other graphical displays that allow the user to visually interact with the data.
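
As a loose, hedged illustration of the display principles described above (not drawn from the workshop materials), the sketch below uses Python's matplotlib to strip non-data ink from a simple bar chart; the site names and values are made up.

import matplotlib.pyplot as plt

programs = ["Site A", "Site B", "Site C", "Site D"]   # hypothetical programs
outcomes = [62, 48, 75, 55]                           # hypothetical outcome scores

fig, ax = plt.subplots(figsize=(6, 3.5))
bars = ax.barh(programs, outcomes, color="#4C72B0")

# Remove non-data ink: extra spines and ticks distract from the comparison.
for side in ("top", "right", "bottom"):
    ax.spines[side].set_visible(False)
ax.tick_params(left=False, bottom=False, labelbottom=False)

# Label values directly instead of sending readers back to an axis.
for bar, value in zip(bars, outcomes):
    ax.text(value + 1, bar.get_y() + bar.get_height() / 2, str(value), va="center")

ax.set_title("Hypothetical outcome scores by site")
plt.tight_layout()
plt.show()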

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

 

Sunday, August 24

Cultural Responsiveness in Applied Research and Evaluation

Rodney K. Hopson & Wanda Casillas


The dynamic cultural demographics of organizations, communities, and societies make it imperative to understand the importance of cultural sensitivity and cultural responsiveness in applied research and evaluation settings. Responding to culture is not easy; the researcher/evaluator must understand how culture underlies the entire research process from conceptualization to dissemination, use, and impact of results.

In this workshop, several questions will be considered. How does culture matter in evaluation theory and practice? How does attention to cultural issues make for better evaluation practice? Does your work in an agency or organization require you to know what culturally responsive evaluation looks like? What issues do you need to consider in building culturally competent and responsive evaluation approaches? How do agencies identify strategies for developing and disseminating culturally responsive evaluation information? We articulate how these questions and considerations are essential in working with organizations and communities serving hard-to-reach populations (e.g., marginalized groups), where evaluations that are not tailored to the organization's or community's cultural milieu can easily overlook the mores of its members.

This workshop is multifaceted and will rely on various interdisciplinary social science theoretical frameworks to both situate and advance conversations about culture in evaluation and applied research. In particular, participants will receive information and materials that help them to develop expertise in the general topics of culture in evaluation, including understanding the value-addedness for the evaluation researcher or program specialist who needs to develop a general understanding of the topic itself. Workshop attendees will also be encouraged to understand cultural barriers that might arise in evaluative settings between evaluators, key stakeholders, and evaluation participants that can hamper the development and execution of culturally responsive evaluations (e.g., power dynamics and institutional structures that may intentionally or unintentionally promote the "isms"). We will also discuss how cultural responsiveness extends to institutional review board criteria and research ethics, and the development of strategies to garner stakeholder/constituent involvement, buy-in, and trust.

The presenters will rely on real world examples from their evaluation practice in urban communities, in school districts, and in a large national multi-site federally funded community-based initiative. This workshop assumes participants have an intermediate understanding of evaluation and are interested in promoting ways to build culturally competent and responsive practices.

Questions regarding this workshop may be addressed to Hopson@duq.edu or WandaCasillas@gmail.com.

 

   

Survey Research Methods

Jason Siegel


The focus of this hands-on workshop is to teach attendees how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create. Creating an effective survey requires a complete understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only through adequate training can a good survey be distinguished from a bad one. This day-long workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman's (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz's (1996) cognitive approach, attendees will then be informed of the various ways question order can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the keys to optimal survey design will be presented. As practice, the design of national surveys will be critiqued. Attendees will then work with the survey items created, and properly ordered, in class and propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

 

   

Quasi-Experimental Design

William Crano

Conducting, interpreting, and evaluating research are important aspects of the social scientist's job description.  To that end, many good educational programs provide opportunities for training and experience in conducting and evaluating true experiments (or randomized controlled trials [RCTs], as they sometimes are called).  In applied contexts, the opportunity to conduct RCTs often is quite limited, despite the strong demands on the researcher/evaluator to render "causal" explanations of results, as they lead to more precise understanding and control of outcomes.  In such restricted contexts, which are far more common than those supporting RCTs, quasi-experimental designs sometimes are employed. Though they usually do not support causal explanations (with some noteworthy exceptions), they sometimes provide evidence that helps reduce the range of plausible alternative explanations of results, and thus can prove to be of real value. This workshop is designed to impart an understanding of quasi-experimental designs. After some introductory foundational discussion focused on "true" experiments, we will consider quasi-experimental designs that may be useful across a range of settings that do not readily admit of experimentation. These designs will include time series and interrupted time series methods, nonrandomized designs with and without control groups, case control (or ex post facto) designs, regression-discontinuity analysis, and other esoterica. Participants are encouraged to bring to the workshop design issues they are facing in real-world contexts.
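
As one simplified, hedged illustration of a design named above, the sketch below estimates a basic interrupted time series (segmented regression) in Python on simulated monthly data with a hypothetical intervention at month 24; it is offered only as a sketch, not as the instructor's material.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(48)
post = (months >= 24).astype(int)                 # 1 after the hypothetical intervention
time_since = np.where(post == 1, months - 24, 0)  # months elapsed since the intervention

# Simulated outcome: baseline trend, a level jump, and a slope change after month 24.
y = 50 + 0.3 * months + 5 * post + 0.4 * time_since + rng.normal(0, 2, size=48)
df = pd.DataFrame({"y": y, "month": months, "post": post, "time_since": time_since})

# 'post' estimates the immediate level change; 'time_since' the change in trend.
model = smf.ols("y ~ month + post + time_since", data=df).fit()
print(model.params)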

Questions regarding this workshop may be addressed to William.Crano@cgu.edu.

 

   

Empowerment Evaluation: From Capacity Building Concepts to Corporate Philanthropic Case Studies

David Fetterman


Empowerment Evaluation is celebrating its 21st anniversary. David Fetterman introduced empowerment evaluation to the field in 1993, during his presidency of the American Evaluation Association, at its annual conference. The workshop will highlight its growth in conceptual clarity and methodological specificity. It will cover: 1) the definition; 2) key concepts, principles, and steps of empowerment evaluation; and 3) tools, ranging from Getting-to-Outcomes to Google Glass. In addition, case examples will be discussed, such as: schools in academic distress, Hewlett-Packard's $15 Million Digital Village initiative (bridging the digital divide in communities of color), Stanford University's School of Medicine accreditation review, and a 10-year tobacco prevention empowerment evaluation. The workshop will consist of a mix of pedagogical techniques, ranging from a brief talk to extensive participant engagement. Empowerment evaluation references, ranging from AJE articles to a forthcoming Empowerment Evaluation book, will also be shared during the workshop.

Questions about this workshop may be addressed to fettermanassociates@gmail.com

 

Real-World Evaluation: Conducting Rigorous Impact Evaluations Under Real-World Conditions

Maliha Khan & Michael Bamberger, in collaboration with Malgosia Madajewicz

 

The workshop will analyze the process of implementing a statistical mixed-methods impact evaluation design in a real-world context. The evaluation team worked under budget and time constraints, where there were issues of data availability and where the evaluation had to respond to the needs of different national and international stakeholders who had diverse information needs and perceptions about the role of evaluation.

The workshop will cover the following topics:

  • A step-by-step presentation of the evaluation design, implementation, and data analysis. The presentation will also explain why this evaluation design was used, what the other options were, and what lessons were learned about the strengths and weaknesses of the design.
  • Practical lessons about implementing evaluations in the real world, including the effects of budget constraints on methodological rigor and on the practical value of the findings to different stakeholders, including program management. Recommendations will be made on ways to strengthen the design, for example by incorporating a mixed-methods design.
  • The operational context: the evaluation was an integral part of Oxfam America's development of an innovative pilot program to test new approaches to strengthening the resilience of small farmers in drought-prone areas.
  • Why managing an evaluation within this operational (and very political) context is much more challenging than the typical textbook examples in which the evaluator is only concerned with the technical quality of the evaluation.

A significant amount of time will be reserved for workshop participants to share their experiences and to discuss how lessons from this evaluation could be applied in other contexts with which participants are familiar.

Students are strongly encouraged to purchase and read in advance:

  • Bamberger, M., Rugh, J., & Mabry, L. (2012). RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints (2nd ed.). Sage Publications.

Questions about this workshop may be addressed to jmichaelbamberger@gmail.com

 

Monday, August 25, 2014

   

Introduction to Qualitative Research Methods

Maritza Salazar


This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Maritza.Salazar@cgu.edu.

 

 

Introduction to Educational Evaluation

Tiffany Berry & Rebecca Eddy

 

This workshop is designed to provide participants an overview of the key concepts, issues, and current trends in contemporary educational program evaluation. Educational evaluation is a broad and diverse field, covering multiple topics such as curriculum evaluation, K-12 teaching/learning, institutional research and assessment in higher education, teacher education, Science, Technology, Engineering, and Mathematics (STEM), out of school time (OST), and early childhood education. To operate within these varied fields, it is important for educational evaluators to possess an in-depth understanding of the educational environment as well as implement appropriate evaluation methods, procedures, and practices within these fields. Using lecture, interactive activities, and discussion, we will provide an overview of key issues that are important for contemporary educational evaluators to know, such as (1) differentiating between assessment, evaluation and other related practices; (2) understanding common core standards and associated assessment systems; (3) emerging research on predictors of student achievement; and (4) development of logic models and identification of program activities, processes and outcomes. Case studies of recent educational evaluations with a focus on K-12 will be used to introduce and discuss these issues.

 

Questions regarding this workshop may be addressed to Tiffany.Berry@cgu.edu.

 

   

Evaluating Program Viability, Effectiveness, and Transferability: An Integrated Perspective
Huey-Tsyh Chen


Traditionally, an evaluation approach argues for and addresses one high-priority issue (e.g., internal validity for Campbell, external validity for Cronbach). But what happens when stakeholders prefer an evaluation that addresses both internal and external validity or, more comprehensively, viable, effectual, and transferable validity? This workshop is designed to introduce an integrated evaluation approach, developed from the theory-driven evaluation perspective, for addressing multiple or competing values of interest to stakeholders.

Participants will learn:

  • Contributions and limitations of the Campbellian validity typology (e.g., internal and external validity) in the context of program evaluation
  • An integrative validity model with three components (viability, effectuality, and transferability) as an alternative for better reflecting stakeholders' views on evaluative evidence
  • How to apply sequential approaches (top-down or bottom-up) for systematically addressing multiple types of validity in evaluation
  • How to apply concurrent approaches (maximizing or optimizing) for simultaneously addressing multiple types of validity in an evaluation
  • How to use the innovative framework for reconciling major controversies and debates in evaluation

Concrete evaluation examples will be used to illustrate ideas, issues, and applications throughout the workshop.

Questions regarding this workshop may be addressed to hueychen9@gmail.com.

 

 

Cultivating Self-in-Context as Responsive Evaluators: Engaging Boundaries, Borderlands and Border-Crossings

Hazel Symonette & Katrina Bledsoe


We increase prospects for operating at our evaluator best when we intentionally embrace a contextually-responsive action researcher stance. This involves systematic data-grounded inquiry as an evidence-framing dialogue with SELF as evaluator, vis-à-vis one's stakeholders and the requirements of the evaluation agenda and contexts. For excellence and ethical praxis, evaluation practices should be broadly diversity-grounded and equity-minded. They should be socially responsive, socially responsible, and socially just, as informed by the American Evaluation Association's Guiding Principles and the Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards. This holistic developmental evaluation framework promotes empathically scanning, tracking, and unpacking WHO? factors in context: who is served, by whom, with whom, as embedded in situational, relational, temporal, and spatial/geographic contexts. Doing this centers human systems dynamics (the WHO-factors) at the heart of a logic model's more conventional WHAT-factors: notably, the interface among primary stakeholders within the terrain of power and privilege/oppression. We will use a model that provides a holistic systematic inquiry and reflective practice framework for empathically cultivating the Self-in-Context and, thus, for enhancing interpersonal validity vis-à-vis responsive programmatic design and development.

Questions about this workshop may be addressed to hsymonette@studentlife.wisc.edu or katrina.bledsoe@gmail.com.

   

Multilevel Modeling
Scott Thomas

The goal of this workshop is to develop an understanding of the use, application, and interpretation of multilevel modeling in the context of educational, social, and behavioral research. The workshop is intended to acquaint students with several related techniques used in analyzing quantitative data with nested data structures. The workshop will employ the IBM SPSS statistical package. Emphasis in the workshop is on the mastery of concepts and principles, development of skills in the use and interpretation of software output, and development of critical analysis skills in interpreting research results using the techniques we cover.
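
The workshop itself uses IBM SPSS; purely as a rough analogue, and assuming one typical nested structure (students within schools), the sketch below fits a two-level random-intercept model on simulated, hypothetical data using Python's statsmodels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)        # level-2 grouping variable
school_effect = rng.normal(0, 2, size=n_schools)[school]    # between-school variation
ses = rng.normal(size=n_schools * n_students)               # level-1 predictor
score = 70 + 3 * ses + school_effect + rng.normal(0, 5, size=n_schools * n_students)
df = pd.DataFrame({"score": score, "ses": ses, "school": school})

# Random intercept for each school; fixed effect for the student-level predictor.
model = smf.mixedlm("score ~ ses", data=df, groups=df["school"]).fit()
print(model.summary())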

Questions regarding this workshop may be addressed to Scott.Thomas@cgu.edu.

 

Tuesday, August 26, 2014

Practical Program Design and Redesign: A Theory-Driven Approach

Stewart I. Donaldson & John Gargani

This workshop will provide participants with the basics of program design and redesign from a theory-driven evaluation perspective. Participants will learn the five elements of a basic program design and how they relate to program theory and social science research. Lecture, discussions, and group activities will help participants learn how to apply what they learn to design and improve social, health, educational, organizational, and other programs. Examples from practice will be provided to illustrate main points and key take-home messages.

Participants are strongly encouraged to watch our online introductory course via Canvas (click here).

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

   

Evaluability Assessment: What, Why and How

Michael Trevisan & Tamara Walser

Evaluability assessment (EA) is used to determine the readiness of a program for impact evaluation. It can also provide information useful for formative evaluation, implementation assessment, evaluation planning, program development, and technical assistance. Although several EA models exist, the essential elements of EA include focusing the EA, developing a program theory, gathering feedback on the program theory, and using the EA. Initially developed as a program management tool, EA is now also valued as a tool for increasing stakeholder involvement, understanding program culture and context, and facilitating organizational learning and evaluation capacity building. EA use is on the rise; however, there continues to be ambiguity and uncertainty about the method. In addition, it has taken on multidisciplinary appeal and has become a popular methodology for theses and dissertations.

In this workshop, a modern model of EA will be presented that incorporates the essential elements of EA with current evaluation theory and practice. Participants will learn the “What, Why, and How” of EA; specifically:

  • What: Participants will learn the essential elements of EA and how they are incorporated in the EA model presented.
  • Why: Participants will learn the important benefits and advantages of conducting an EA.
  • How: Participants will learn how to implement the EA model presented.

Participants will be exposed to a variety of case examples that illustrate features of EA and show how it can be used across disciplines. Brief video clips of evaluators will be presented to illustrate how they developed and carried out EA projects, the issues that arose and how they were dealt with, and the unique aspects that emerged in each EA. Participants will also engage in application exercises and related discussion to practice implementing the EA model. We will administer a pre-workshop questionnaire to identify participant characteristics and prior experience with and interest in EA, so that we can better tailor the workshop to participant needs.

Questions regarding this workshop may be addressed to Trevisan@wsu.edu.

 

   

Leadership Assessment

Becky Reichard


Leadership assessment is commonly used by organizations and consultants to inform the selection, promotion, and development of leaders. This experiential workshop will provide participants with an overview of the three main methods of leadership assessment – self-assessment, 360-degree assessment, and assessment centers – and, in the process, will give workshop participants feedback on their leadership strengths, skills, and styles. Leadership assessments, including leadership competency models and measures of personality, strengths, and social and emotional skills, will be introduced and discussed.

The second half of the session will focus on the assessment center method of leadership assessment. An assessment center is a method of evaluating leaders' behaviors during simulated scenarios, or various life-like situations that leaders encounter. Workshop participants will experience first-hand three leadership simulations – an in-basket task, a leaderless group discussion, and a one-on-one role play with a troubled follower. Participants' behaviors will be recorded during the simulations, and behaviorally anchored rating scales (BARS) will be used to assess the behavioral components of leadership. Within 2-4 weeks of completing the workshop, participants will receive a detailed feedback report with helpful developmental feedback they can use to improve their leadership. Beyond the assessment center simulations themselves, the session will conclude with a discussion of the behaviorally anchored rating scales, coding training and procedures, and the feedback reports.

In advance, workshop participants are expected to do the following:

Questions regarding this workshop may be addressed to Becky.Reichard@cgu.edu

 

   

Grant Proposal Writing

Allen Omoto


This workshop covers some of the essential skills and strategies needed to prepare successful grant applications for education, research, and/or program funding. It will provide participants with tools to help them conceptualize and plan research or program grants, offer ideas about where to seek funding, and provide suggestions for writing and submitting applications. Some of the topics covered in the workshop include strategies for identifying appropriate sources of funding, the components and preparation of grant proposals, and the peer review process. In addition, topics related to putting together a research or program team, constructing an appropriate budget, grants management, and the writing of an application will be discussed. The workshop is organized around key questions relating to grant support and how to become a successful grant-getter, including WHY seek grant funding or support? WHERE to look for support? WHO applies for funding and WHEN should one seek funding? WHAT is submitted in a grant application? And, HOW to structure an application and supporting materials? The workshop is intended primarily as an introduction to grant writing, and will be most useful for new or relatively inexperienced grant writers. Workshop participants are encouraged to bring their own "works in progress" for comment and sharing. At its conclusion, workshop participants should be well positioned not only to read and evaluate grant applications, but to assist with the preparation of applications and to prepare and submit their own applications to support education, research, or program planning and development activities.

Questions regarding this workshop may be addressed to Allen.Omoto@cgu.edu.

 

     

Wednesday, August 27, 2014

 

Developmental Evaluation

Michael Q. Patton

Online Workshop

The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed. Developmental Evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change.

Participants will learn:
  • The unique niche of developmental evaluation
  • The 5 types of developmental evaluation
  • What perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches
  • Rapid response methodological approaches consistent with developmental evaluation
Questions about this workshop may be addressed to mqpatton@prodigy.net.

 

How to Design and Manage Equity-Focused and Gender-Responsive Evaluations

Marco Segone & Ada Ocampo

Online Workshop

The push for a stronger focus on equity and on gender equality in human development is gathering momentum at the international level. Its premise is increasingly supported by United Nations reports and strategies as well as by independent analysis. More and more national policies and international alliances are focusing on achieving equitable development results. While this is the right way to go, it poses important challenges – and opportunities – to the evaluation function. How can one strengthen the capacity of governments, organizations and communities to evaluate the effect of interventions on equitable outcomes for women and marginalized populations?

What are the methodological implications in designing, conducting, managing, and using equity-focused and gender-responsive evaluations? What are the evaluation questions to assess whether interventions are relevant and having an impact? What are the challenges peculiar to equity-focused evaluations and gender-responsive evaluations, and how to overcome them?

This interactive online workshop starts by defining social equity and gender equality, why they matter, and why they are so urgent now. It then explains what an equity-focused and gender-responsive evaluation is, discussing what its purpose should be and the potential challenges in its promotion and implementation. The second part of the workshop explains how to manage equity-focused and gender-responsive evaluations, presenting the key issues to take into account when preparing for the evaluation and developing the Terms of Reference, including potential equity-focused and gender-sensitive evaluation questions; how to design the evaluation, including identifying the appropriate evaluation framework and appropriate methods to collect and analyze data; and how to ensure the evaluation is used.

Questions regarding this workshop may be addressed to marco.segone@unwomen.org and aocampo@unicef.org.

 

Thursday, August 28, 2014

   

Principles-Focused Evaluation
Michael Quinn Patton

Online Workshop
World Premiere!

Evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program’s interventions. It is useful to distinguish three types of evidence-based conclusions:

  1. Single evidence-based program. Rigorous and credible summative evaluation of a single program provides evidence for the effectiveness of that program and only that program.
  2. Evidence-based model. Systematic meta-analysis (statistical aggregation) of the results of several programs all implementing the same model in a high-fidelity, standardized, and replicable manner, and evaluated with randomized controlled trials (ideally), to determine the overall effectiveness of the model (a minimal pooling sketch follows this list). This is the basis for claims that a model is a “best practice.”
  3. Evidence-based principles. Synthesis of case studies, including both processes and outcomes, of a group of diverse programs or interventions all adhering to the same principles but each adapting those principles to its own particular target population within its own context. If the findings show that the principles have been implemented systematically, and analysis connects implementation of the principles with desired outcomes through detailed and in-depth contribution analysis, the conclusion can be drawn that the practitioners are following effective evidence-based principles.
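
To make the "statistical aggregation" in item 2 concrete, here is a minimal, purely illustrative Python sketch of inverse-variance (fixed-effect) pooling, using made-up effect sizes rather than results from any actual programs.

import numpy as np

effects = np.array([0.30, 0.45, 0.20, 0.38])      # hypothetical standardized effect sizes
std_errors = np.array([0.10, 0.12, 0.09, 0.15])   # their standard errors

weights = 1.0 / std_errors**2                      # weight more precise studies more heavily
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")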

Principles-focused evaluation treats principles as the intervention and unit of analysis, and designs an evaluation to assess both the implementation and the consequences of principles.  Principles-focused evaluation is a specific application of developmental evaluation because principles are the appropriate way to take action in complex dynamic systems.  This workshop will be the worldwide premiere of principles-focused evaluation training.  Specific examples and methods will be part of the training.

Participants will learn:

  • What constitutes a principle that can be evaluated

  • How and why principles should be evaluated

  • Different kinds of principles-focused evaluation

  • The relationship between complexity and principles

  • The particular challenges, strengths, and weaknesses of principles-focused evaluation.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

 

     

Click Here to Register