2013 Presenter Bios

Tarek Azzam, Dale Berger, Tiffany Berry, Katrina Bledsoe, Heather Campbell, Wanda Casillas, Huey Chen, Tina Christie, Ross Conner, Stewart Donaldson, Rebecca Eddy, John Gargani, Rodney Hopson, Jacob Leos-Urbel, Jeanne Nakamura, Ada Ocampo, Allen Omoto, Michael Q. Patton, Becky Reichard, Maritza Salazar, Michael Scriven, Marco Segone, Jason Siegel, Scott Thomas, Michael Trevisan, Tamara Walser

Workshop Descriptions

Sunday, August 25, 2013

Basics of Evaluation & Applied Research Methods

Stewart I. Donaldson & Christina A. Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research; the basics of validity and design sensitivity; the strengths and weaknesses of a variety of common applied research methods; and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series. Students are strongly encouraged to purchase and read the recommended text in advance of the workshop.
Copies are available from Amazon.com and from the Claremont Evaluation Center for $20 each. Checks should be made out to Claremont Graduate University and addressed to: Credible Evidence Text, 175 E. 12th Street, Claremont, CA 91711.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

Applied Multiple Regression: Mediation, Moderation, & More

Dale Berger

Multiple regression is a powerful and flexible tool with wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include preparing data for analysis, selecting models that are appropriate to your data and research questions, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel sessions. Detailed handouts include explanations and examples that can be used at home to guide similar applications; a brief sketch of the mediation and moderation logic appears after the list below.

You will learn:

  • Concepts important for understanding regression
  • Procedures for conducting computer analysis, including SPSS code
  • How to conduct mediation and moderation analyses
  • How to interpret SPSS REGRESSION output
  • How to present regression findings in useful ways
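To make the mediation and moderation ideas concrete, here is a minimal sketch in Python with simulated data. It only illustrates the underlying logic (an indirect effect estimated as the product of two paths, moderation tested with an interaction term); it is not the workshop's SPSS material, and every variable name and coefficient below is invented for illustration.

  # Hedged sketch: mediation and moderation with simulated data (not workshop materials).
  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(0)
  n = 500
  x = rng.normal(size=n)                            # predictor
  m = 0.5 * x + rng.normal(size=n)                  # mediator influenced by x
  z = rng.normal(size=n)                            # moderator
  y = 0.4 * m + 0.3 * x * z + rng.normal(size=n)    # outcome
  df = pd.DataFrame({"x": x, "m": m, "z": z, "y": y})

  # Mediation: indirect effect estimated as a*b (x -> m, then m -> y controlling for x).
  a = smf.ols("m ~ x", df).fit().params["x"]
  b = smf.ols("y ~ m + x", df).fit().params["m"]
  print("Estimated indirect effect (a*b):", a * b)

  # Moderation: the x:z interaction tests whether the x-y slope depends on z.
  moderation = smf.ols("y ~ x * z", df).fit()
  print(moderation.summary())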

Questions regarding this workshop may be addressed to Dale.Berger@cgu.edu.

 

Experience Sampling Methods

Jeanne Nakamura

What do people do with their time, and how do they feel while doing it? Within psychology, this question animates much basic, applied, and evaluation research. In recent decades psychologists have begun to study in earnest the quality of everyday life. Researchers at DBOS’s Quality of Life Research Center pioneered some of the methods that make it possible to get an accurate understanding of people's actual experience.

One-time, self-report measures are widely relied upon to assess people’s use of time, and their thoughts, feelings, motivations, and well-being. These measures have several advantages but depend on processes of estimation and retrospection that distort the reality they are intended to assess. In this workshop participants will learn about the strengths and limitations of various methods – surveys, diaries, and tools for real-time measurement of experience in natural settings. Primary attention will be given to the Experience Sampling Method (ESM), considered the "gold standard" in the field. Issues to be considered include how to select the appropriate tool for a researcher’s questions and resources, how to design an ESM study, and how to analyze ESM data to answer a variety of questions such as: Where during the school day are students most engaged? How do emotions change in response to environmental factors? When do patients with chronic illness report the least pain?
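As a purely hypothetical illustration of the kind of question ESM data can answer ("where during the school day are students most engaged?"), the sketch below analyzes a tiny invented long-format dataset in Python with pandas. The column names, response scale, and values are assumptions for this example only, not materials from the workshop.

  # Hypothetical ESM analysis sketch with invented data (not workshop materials).
  import pandas as pd

  esm = pd.DataFrame({
      "student": [1, 1, 1, 2, 2, 2],
      "context": ["class", "lunch", "class", "class", "lunch", "lunch"],
      "engagement": [5, 2, 6, 4, 3, 2],   # momentary self-report, e.g., on a 1-7 scale
  })

  # Person-center engagement so each student is compared with their own average,
  # a common step before asking where engagement is relatively high or low.
  esm["engagement_centered"] = (
      esm["engagement"] - esm.groupby("student")["engagement"].transform("mean")
  )

  # Average within-person engagement by context.
  print(esm.groupby("context")["engagement_centered"].mean())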

Questions regarding this workshop may be addressed to Jeanne.Nakamura@cgu.edu.

 

Grant Proposal Writing

Allen Omoto

This workshop covers some of the essential skills and strategies needed to prepare successful grant applications for education, research, and/or program funding. It will provide participants with tools to help them conceptualize and plan research or program grants, offer ideas about where to seek funding, and provide suggestions for writing and submitting applications. Some of the topics covered in the workshop include strategies for identifying appropriate sources of funding, the components and preparation of grant proposals, and the peer review process. In addition, topics related to putting together a research or program team, constructing an appropriate budget, grants management, and the writing of an application will be discussed. The workshop is organized around key questions relating to grant support and how to become a successful grant-getter, including WHY seek grant funding or support? WHERE to look for support? WHO applies for funding and WHEN should one seek funding? WHAT is submitted in a grant application? And, HOW to structure an application and supporting materials? The workshop is intended primarily as an introduction to grant writing, and will be most useful for new or relatively inexperienced grant writers. Workshop participants are encouraged to bring their own "works in progress" for comment and sharing. At its conclusion, workshop participants should be well positioned not only to read and evaluate grant applications, but to assist with the preparation of applications and to prepare and submit their own applications to support education, research, or program planning and development activities.

Questions regarding this workshop may be addressed to Allen.Omoto@cgu.edu.

 

The Rest of the Iceberg: Evaluation Beyond Program Evaluation

Michael Scriven

 
There are more than a dozen important sub-divisions of evaluation other than the one in which most of us work, and knowing a little about most of them is often useful to program evaluators and very useful to graduate students and evaluation researchers thinking about topics for further study and publications. We’ll provide a structure for linking these together—the general logic of evaluation—and some examples of evaluation findings and problems from these sibling studies, including product evaluation, personnel evaluation, proposal evaluation, policy analysis, performance evaluation, portfolio assessment, meta-evaluation, intradisciplinary evaluation, metadisciplinary evaluation, and large slices of the elder disciplines—medicine, engineering, logic, aesthetics, and ethics.

Questions regarding this workshop may be addressed to mjscriv1@gmail.com.

 

Monday, August 26, 2013

Considering Culture in Evaluation & Applied Research

Rodney K. Hopson & Wanda Casillas

The dynamic cultural demographics of organizations, communities, and societies make it imperative to understand the importance of cultural sensitivity and cultural responsiveness in applied research and evaluation settings. Responding to culture is not easy; the researcher/evaluator must understand how culture underlies the entire research process from conceptualization to dissemination, use, and impact of results.

In this workshop several questions will be considered. How does culture matter in evaluation theory and practice? How does attention to cultural issues make for better evaluation practice? Does your work in an agency or organization require you to know what culturally responsive evaluation looks like? What issues do you need to consider in building culturally competent and responsive evaluation approaches? How do agencies identify strategies for developing and disseminating culturally responsive evaluation information? We articulate how these questions and considerations are quintessential in working with organizations and communities that serve hard-to-reach populations (e.g., marginalized groups), and where evaluations, if not tailored to the organization's or community's cultural milieu, can easily overlook the mores of its members.

This workshop is multifaceted and will rely on various interdisciplinary social science theoretical frameworks to both situate and advance conversations about culture in evaluation and applied research. In particular, participants will receive information and materials that help them develop expertise in the general topics of culture in evaluation, including understanding the value added for the evaluation researcher or program specialist who needs to develop a general understanding of the topic itself. Workshop attendees will also be encouraged to understand cultural barriers that might arise in evaluative settings between evaluators, key stakeholders, and evaluation participants and that can hamper the development and execution of culturally responsive evaluations (e.g., power dynamics and institutional structures that may intentionally or unintentionally promote the "isms"). We will also discuss how cultural responsiveness extends to institutional review board criteria and research ethics, and the development of strategies to garner stakeholder/constituent involvement, buy-in, and trust.

The presenters will rely on real-world examples from their evaluation practice in urban communities, in school districts, and in a large national multi-site federally funded community-based initiative. This workshop assumes participants have an intermediate understanding of evaluation and are interested in promoting ways to build culturally competent and responsive practices.

Questions regarding this workshop may be addressed to Hopson@duq.edu or WandaCasillas@gmail.com.

 

Evaluating Program Viability, Effectiveness, & Transferability: An Integrated Perspective

Huey-Tsyh Chen

Traditionally, an evaluation approach argues for and addresses one high-priority issue (e.g., internal validity for Campbell, external validity for Cronbach). But what happens when stakeholders prefer an evaluation that addresses both internal and external validity or, more comprehensively, viable, effectual, and transferable validity? This workshop is designed to introduce an integrated evaluation approach, developed from the theory-driven evaluation perspective, for addressing multiple or competing values of interest to stakeholders.

Participants will learn:
  • Contributions and limitations of the Campbellian validity typology (e.g., internal and external validity) in the context of program evaluation.
  • An integrative validity model with three components as an alternative for better reflecting stakeholders’ view on evaluative evidence: viability, effectuality, and transferability.
  • How to apply sequential approaches (top-down or bottom-up) for systematically addressing multiple types of validity in evaluation.
  • How to apply concurrent approaches (maximizing or optimizing) for simultaneously addressing multiple types of validity in an evaluation.
  • How to use the integrative framework to reconcile major controversies and debates in evaluation.

Concrete evaluation examples will be used to illustrate ideas, issues, and applications throughout the workshop.

Questions regarding this workshop may be addressed to hueychen9@gmail.com.

 

Introduction to Educational Evaluation

Tiffany Berry & Rebecca Eddy

This workshop is designed to provide participants with an overview of the key concepts, issues, and current trends in contemporary educational program evaluation. Educational evaluation is a broad and diverse field, covering multiple topics such as curriculum evaluation, K-12 teaching/learning, institutional research and assessment in higher education, teacher education, Science, Technology, Engineering, and Mathematics (STEM) education, out-of-school time (OST), and early childhood education. To operate within these varied fields, it is important for educational evaluators to possess an in-depth understanding of the educational environment as well as to implement appropriate evaluation methods, procedures, and practices within these fields. Using lecture, interactive activities, and discussion, we will provide an overview of key issues that are important for contemporary educational evaluators to know, such as (1) differentiating between assessment, evaluation, and other related practices; (2) understanding common core standards and associated assessment systems; (3) emerging research on predictors of student achievement; and (4) development of logic models and identification of program activities, processes, and outcomes. Case studies of recent educational evaluations with a focus on K-12 will be used to introduce and discuss these issues.

Questions regarding this workshop may be addressed to Tiffany.Berry@cgu.edu.

 

Survey Research Methods

Jason Siegel

The focus of this hands-on workshop is to teach attendees how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create. Creating an effective survey requires a complete understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only through adequate training can a good survey be distinguished from a bad one. The day-long workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman’s (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz’s (1996) cognitive approach, attendees will then be informed of the various ways question ordering can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the keys to optimal survey design will be presented. As practice, the design of national surveys will be critiqued. Attendees will then work with the survey items created and properly ordered in class and propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

 

Introduction to Positive Psychology Research & Evaluation

Stewart I. Donaldson


Seligman and Csikszentmihalyi (2000) ignited positive psychology at the turn of the century with their special issue of the American Psychologist on Happiness, Excellence, and Optimal Human Functioning. The result has been a remarkable outpouring of research investigations, grants, peer-reviewed articles, books, awards, and applications focused on optimal human functioning and improving human welfare and society. In addition to the rapid growth of scholarly activity, new professional societies such as the International Positive Psychology Association (IPPA) and the Western Positive Psychology Association (WPPA), scholarly journals including the Journal of Positive Psychology, and top-tier graduate programs such as the Master of Applied Positive Psychology at the University of Pennsylvania and the M.A. and Ph.D. programs in Positive Organizational Psychology and Positive Developmental Psychology at Claremont Graduate University have been developed.

This workshop will provide a basic introduction to the emerging science of positive psychology. First, much of the theory and research that has been developed under the positive psychology umbrella during the past 15 years will be summarized and discussed. Specifically, participants will learn about the main findings from more than 1,200 empirical and theoretical articles published from 1999 to 2013. Next, the strengths and weaknesses of the empirical research methods that have been used to build the evidence-base for positive psychology will be explored and discussed. Special emphasis will be placed on understanding the main criticisms of positive psychology research to date. In addition, a range of approaches for designing and evaluating positive psychology-based interventions will be presented. Strengths-driven evaluation approaches will be discussed in some depth. Finally, participants will be given the opportunity to design a research or evaluation project, and to receive feedback from other participants and the instructor.

Recommended reading: Donaldson, S. I., Csikszentmihalyi, M., & Nakamura, J. (Eds.). (2011). Applied positive psychology: Improving everyday life, health, schools, work, and society. London: Routledge Academic.

Copies are available from Amazon.com and are also available for $20 from the Claremont Evaluation Center. Please send a check for $20 made out to Claremont Graduate University and addressed to: Applied Positive Psychology Text, 175 E. 12th Street, Claremont, CA 91711.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

Tuesday, August 27, 2013

Introduction to Qualitative Research Methods

Maritza Salazar

This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Maritza.Salazar@cgu.edu.

 

Translating AEA's Public Statement on Cultural Competence in Evaluation into Practice: Evaluation in Real World Settings

Katrina Bledsoe


Ever wonder how evaluation theories, association public statements, and textbook readings translate to practical, real-world settings? This workshop specifically explores how the concepts and theoretical underpinnings of AEA's Public Statement on Cultural Competence in Evaluation manifest in a real-world setting. Additionally, we’ll discuss how approaches such as Transformative Evaluation, Culturally Responsive Evaluation, and Theory-Driven Evaluation put the statement into practice in organizations, communities, and programs.
Attendees will be introduced to AEA's Public Statement on Cultural Competence (as well as corresponding key points of the AEA Guiding Principles for Evaluators) as a foundation and backdrop for the workshop. From there, attendees will reflect upon and discuss key aspects pertaining, but not limited, to relationships with stakeholders, stakeholder involvement in evaluation endeavors, use of appropriate methods within the evaluation context, navigation of political and cultural contexts, and social justice, social change, and human rights considerations. We will also consider how tried-and-true scientific method concepts such as validity, logic modeling, credible evidence, and evidence-based practice translate to real-world settings, especially in the consideration of cultural contexts.

Although the presenter will ground the workshop in real-life examples from her own work as well as others’ (specifically demonstrating the use of Transformative, CRE, and TDE approaches), this is an interactive workshop. Attendees will engage in discussion, demonstrative activities, and role play that enable them to simulate, at a very basic level, evaluation practice. The workshop is designed for new and early-career evaluators.

Questions regarding this workshop may be addressed to katrina.bledsoe@gmail.com.
 

Emerging Practices in International Development Evaluation

Ross Conner

Development evaluation has been underway in Asia, Africa, and parts of the Americas for many years. It involves diverse constituencies, has special challenges, and has resulted in both successes and failures. In this workshop, we will explore the ideas, concepts, and frameworks of 16 evaluation professionals who have been actively involved in this enterprise. Drawing from material in the recently released book edited by Donaldson, Azzam, and Conner, we will explore the ideas presented by this group of international evaluators and discuss the implications for future evaluation approaches and practices, not only in the developing world but also for similar evaluation efforts in developed regions.


Students are strongly encouraged to purchase and read in advance of the workshop: Emerging Practices in International Development Evaluation. Copies are available from Amazon.com and are also available for $20 from the Claremont Evaluation Center. Please send a check for $20 made out to Claremont Graduate University and addressed to: International Development Evaluation Text, 175 E. 12th Street, Claremont, CA 91711.


Questions regarding this workshop may be addressed to Ross Conner (rfconner@uci.edu).

Policy Evaluation

Jacob Leos-Urbel


This course provides an introduction to the methodology and tools of policy evaluation. Although sometimes considered separately, policy evaluation and program evaluation both involve the application of social science research techniques to better understand the effectiveness of publicly funded interventions. In practice, evaluations often focus on programs, which represent the implementation of policies by organizations on the ground. We will begin with an introduction to logic models and theory of change in the context of policy evaluation, followed by a brief discussion of implementation evaluation. The remainder of the course will focus on challenges in isolating the causal impact of programs and policies (i.e. internal validity), the strengths and weaknesses of various evaluation designs (both experimental and quasi-experimental), as well as a consideration of the generalizability (i.e. external validity) of policy evaluations. Students are encouraged to relate the course material to their specific policy interests.
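As one illustration of the quasi-experimental reasoning described above, the sketch below computes a difference-in-differences estimate in Python on invented data. It is a generic example of one common design rather than material from the course, and the variable names, numbers, and reliance on the parallel-trends assumption are all assumptions made only for illustration.

  # Illustrative difference-in-differences sketch with made-up data (not course material).
  import pandas as pd
  import statsmodels.formula.api as smf

  df = pd.DataFrame({
      "treated": [0, 0, 0, 0, 1, 1, 1, 1],   # exposed to the policy or not
      "post":    [0, 1, 0, 1, 0, 1, 0, 1],   # before/after the policy took effect
      "outcome": [10.0, 11.0, 9.5, 10.6, 10.2, 13.1, 9.8, 12.7],
  })

  # The coefficient on treated:post is the difference-in-differences estimate of
  # the policy's effect, under the (untestable) parallel-trends assumption.
  model = smf.ols("outcome ~ treated * post", df).fit()
  print(model.params["treated:post"])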

Questions regarding this workshop may be addressed to Jacob.Leos-Urbel@cgu.edu.

 

Using Technology to Improve Research & Evaluation

Tarek Azzam


This workshop will focus on how a range of new technological tools can be used to improve applied research and evaluation. Specifically, we will explore the application of free or inexpensive software to engage clients and a range of stakeholders, collect data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, and analyze and present data. Participants will be given information on how to access tools such as Geographical Information Systems (GIS), data collection software, and interactive conceptual modeling software, as well as new methods for integrating crowdsourcing into evaluation design and analysis to improve the quality of applied research and evaluation projects. Participants will be provided with information sheets on each technological tool along with details about obtaining free trials.

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

Wednesday, August 28, 2013

Introduction to Program Design: The Theory-Driven Approach

Stewart I. Donaldson & John Gargani

This workshop will provide participants with the basics of program design from a theory-driven evaluation perspective. Participants will learn the five elements of a basic program design and how they relate to program theory and social science research. Lecture, discussions, and group activities will help participants learn how to apply what they learn to design and improve social, health, educational, organizational, and other programs. Examples from practice will be provided to illustrate main points and key take-home messages.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

Social Cost-Benefit Analysis

Heather Campbell


Social Cost-Benefit Analysis (CBA) is central to the way that policy analysts think, and can be of great value in evaluation. CBA allows us to go beyond questions of whether a program is achieving its goals to whether the program is actually worth the resources used. Yet, what constitutes “real” CBA can be confusing. We often hear people say that they are comparing costs and benefits; how is social CBA different from what a business might do, and how is it related to Cost-Effectiveness Analysis (CEA)?

The first part of this workshop guides participants through the basic structure of CBA—including issues of time, standing, and equity—and what makes it the same or different from CEA and other affiliated analysis methods. Then, we consider CBA issues that are particularly relevant to evaluation.
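As a small illustration of the "time" issue mentioned above, the sketch below discounts a stream of invented yearly costs and benefits to a net present value in Python. The 3% discount rate, cash flows, and time horizon are assumptions made only for this example, not figures from the workshop; the same calculation could just as easily be done in Excel.

  # Hypothetical net-present-value sketch for a program's costs and benefits.
  def present_value(flows, rate):
      """Discount a list of yearly net benefits (year 0 first) to present value."""
      return sum(flow / (1 + rate) ** year for year, flow in enumerate(flows))

  benefits = [0, 40_000, 40_000, 40_000]   # assumed program benefits by year
  costs = [90_000, 5_000, 5_000, 5_000]    # assumed program costs by year
  net = [b - c for b, c in zip(benefits, costs)]

  npv = present_value(net, rate=0.03)
  print(f"Net present value at a 3% discount rate: {npv:,.0f}")
  # A positive NPV suggests the program's benefits exceed the resources used.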

It is strongly recommended that students bring an Internet-enabled device, preferably one that includes Microsoft Excel. In addition, an introductory reading will be provided before the course starts.

Questions regarding this workshop may be addressed to Heather.Campbell@cgu.edu.

 

Multilevel Modeling

Scott Thomas


The goal of this workshop is to develop an understanding of the use, application, and interpretation of multilevel modeling in the context of educational, social, and behavioral research. The workshop is intended to acquaint students with several related techniques used in analyzing quantitative data with nested data structures. The workshop will employ the IBM SPSS statistical package. Emphasis in the workshop is on the mastery of concepts and principles, development of skills in the use and interpretation of software output, and development of critical analysis skills in interpreting research results using the techniques we cover.
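The workshop itself uses IBM SPSS; purely as a parallel illustration of what a nested data structure and a random-intercept model look like, the sketch below fits a simple multilevel model in Python with statsmodels on simulated students-within-schools data. The variable names and effect sizes are invented for illustration only.

  # Illustrative random-intercept model on simulated nested data (the workshop uses SPSS).
  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(42)
  schools = np.repeat(np.arange(20), 30)            # 20 schools, 30 students each
  school_effect = rng.normal(0, 2, 20)[schools]     # school-level variation
  ses = rng.normal(size=schools.size)               # student-level predictor
  score = 50 + 3 * ses + school_effect + rng.normal(0, 5, schools.size)
  df = pd.DataFrame({"school": schools, "ses": ses, "score": score})

  # Random intercept for each school; fixed effect for the student-level predictor.
  model = smf.mixedlm("score ~ ses", df, groups=df["school"]).fit()
  print(model.summary())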

Questions regarding this workshop may be addressed to Scott.Thomas@cgu.edu.

 

Leadership Assessment

Becky Reichard

Leadership assessment is commonly used by organizations and consultants to inform selection, promotion, and development of leaders. This experiential workshop will provide participants with an overview of the three main methods of leadership assessment – self-assessment, 360-degree assessment, and assessment centers – and, in the process, will provide workshop participants with feedback on their leadership strengths, skills, and styles. Leadership assessments including leadership competency models, personality, strengths, and social and emotional skills will be introduced and discussed.

The second half of the session will focus on the assessment center method of leadership assessment. An assessment center is a method of evaluating leaders’ behaviors during simulated scenarios, or various life-like situations that leaders encounter. Workshop participants will experience first-hand three leadership simulations – an in-basket task, a leaderless group discussion, and a one-on-one role play with a troubled follower. Participants’ behaviors will be recorded during the simulations, and behaviorally anchored rating scales (BARS) will be used to accurately assess the behavioral components of leadership. Within 2-4 weeks of the workshop’s completion, participants will receive a detailed report with helpful developmental feedback that they can use to improve their leadership. Beyond the assessment center simulations themselves, the session will conclude with a discussion of the behaviorally anchored rating scales, coding training and procedures, and the feedback reports.

In advance, workshop participants are expected to do the following:
  • Purchase and review the Strengths Based Leadership book by Rath and Conchie, complete the StrengthsFinder assessment using the code in the book, and bring a printout of your Strengths report to the workshop. (Make sure you buy a new copy of the book that includes a code for completing the StrengthsFinder survey.)

 

Evaluability Assessment: What, Why and How

Michael Trevisan and Tamara Walser

Evaluability assessment (EA) is used to determine the readiness of a program for impact evaluation. It can also provide information useful for formative evaluation, implementation assessment, evaluation planning, program development, and technical assistance. Although several EA models exist, the essential elements of EA include focusing the EA, developing a program theory, gathering feedback on the program theory, and using the EA. Although initially a program management tool, EA is now also valued as a tool for increasing stakeholder involvement, understanding program culture and context, and facilitating organizational learning and evaluation capacity building. EA use is on the rise; however, there continues to be ambiguity and uncertainty about the method. In addition, it has taken on multidisciplinary appeal and has become a popular methodology for theses and dissertations.
In this workshop, a modern model of EA will be presented that incorporates the essential elements of EA with current evaluation theory and practice. Participants will learn the “What, Why, and How” of EA; specifically:

  • What: Participants will learn the essential elements of EA and how they are incorporated in the EA model presented.
  • Why: Participants will learn the important benefits and advantages of conducting an EA.
  • How: Participants will learn how to implement the EA model presented.
Participants will be exposed to a variety of case examples that illustrate features of EA and show how it can be used across disciplines. Brief video clips of evaluators will be presented to illustrate how evaluators developed and carried out EA projects, the issues that arose and how they were dealt with, and the unique aspects that emerged in each EA. Participants will also engage in application exercises and related discussion to practice implementing the EA model. We will administer a pre-workshop questionnaire to identify participant characteristics and prior experience with and interest in EA so that the workshop can be better tailored to participant needs.


Questions regarding this workshop may be addressed to Trevisan@wsu.edu.

 

Thursday, August 29, 2013

Developmental Evaluation

Michael Q. Patton
Online Workshop

The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need to have options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed. Developmental Evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change.

Participants will learn:
  • The unique niche of developmental evaluation
  • The five types of developmental evaluation
  • What perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches
  • Rapid-response methodological approaches consistent with developmental evaluation
Questions about this workshop may be addressed to mqpatton@prodigy.net.

Friday, August 30, 2013

How to Design and Manage Equity-Focused Evaluations

Marco Segone & Ada Ocampo
Online Workshop

The push for a stronger focus on equity in human development is gathering momentum at the international level. Its premise is increasingly supported by United Nations reports and strategies as well as by independent analysis. More and more national policies and international alliances are focusing on achieving equitable development results. While this is the right way to go, it poses important challenges – and opportunities – for the evaluation function. What are the conceptual and methodological implications of designing, conducting, managing, and using Equity-focused evaluations? What evaluation questions assess whether interventions are relevant and having an impact in decreasing inequity, are achieving equitable results, and are efficient and sustainable? What challenges are peculiar to Equity-focused evaluations, and how can they be overcome?

This interactive online workshop starts by defining equity, why equity matters, and why addressing equity is so urgent now. It then explains what an Equity-focused evaluation is, discussing what its purpose should be and potential challenges in its promotion and implementation. The second part of the workshop explains how to manage Equity-focused evaluations, presenting the key issues that the evaluation manager should take into account when preparing for an Equity-focused evaluation and developing the Terms of Reference, including potential equity-focused evaluation questions; how to design the evaluation, including identifying the appropriate evaluation framework and appropriate methods to collect and analyze data; and how to ensure the evaluation is used.


Questions regarding this workshop may be addressed to msegone@unicef.org.

 
