Evaluating ADR Programs (2001)

65 Federal Register 59200, 59208-14 (October 4, 2000)

ADR Program Evaluation Recommendations

I. Introduction

The alternative dispute resolution (ADR) field has long promoted the various benefits of using non-traditional methods to resolve disputes, such as savings of time and money, party satisfaction with the ADR process and outcomes, high settlement rates, and improved relationships. The ADR Council recognizes that ADR has the potential to produce these results, and notes the value of hard data to back up the assertion that ADR really delivers these benefits to agencies. The Council’s Core Principles for Non-binding Workplace ADR Programs [and if approved, the ADR Pledge] identify evaluation as a key component of successful ADR program management. Up-front and thorough evaluation initiatives allow ADR program managers to ensure the quality of their programs, to identify programmatic successes and difficulties, and to make necessary improvements. Therefore, it is important that all federal ADR programs engage in a rigorous evaluation of ADR’s use and benefits to ensure quality ADR programs and to provide the necessary information to sustain and increase support of ADR.

As the use of ADR becomes institutionalized within federal agencies, the government has a heightened interest in evaluating the benefits and impact of these dispute resolution initiatives. This type of formal evaluation is consistent with the legal obligations of all federal programs under the Government Performance and Results Act (P.L. 103-62), which requires that agencies create a performance plan, define goals, and track the extent to which they achieve their desired outcomes. ADR program management best practices emphasize the importance of an evaluation component in program design as well as in practice, and some federal agencies have already initiated evaluations of their ADR programs.

However, the federal sector will benefit from agencies’ coordinated and uniform efforts at ADR program evaluation.

II. Recommendations

The Council acknowledges that ADR program goals and services differ dramatically among Federal agencies. Consequently, it is appropriate to tailor evaluation plans and methods to meet the needs of a particular program. Even with agency-specific tailoring, however, effective evaluations will include certain common elements. Therefore, to promote consistency and coordination among Federal ADR evaluation efforts, the Council makes the following recommendations to agencies:

1. Importance of Evaluation. Each agency should engage in an up-front and ongoing evaluation of its ADR programs.


2. Data to be Captured. At a minimum, evaluators should attempt to capture and analyze in a timely manner the following information:

a. Usage: the extent to which ADR is considered and used.

b. Time Savings: the time it takes for a case to be resolved through ADR as compared to traditional dispute resolution processes.

c. Cost Avoidance: the amount of financial savings (or costs) to the agency, including staff time, dollars, or other quantifiable factors, by resolving cases through ADR as compared to traditional dispute resolution processes.

d. Customer Satisfaction: parties’ satisfaction with the process and outcomes, including the quality of the neutral.

e. Improved Relationships: where ongoing relationships are important, to what extent relationships are improved.

f. Other Appropriate Indicators: in line with the agency’s strategic goals and objectives.
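The data elements above lend themselves to a simple per-case record that an evaluator can aggregate later. The following is a minimal sketch in Python, offered only as an illustration; the class and field names (ADRCaseRecord, staff_hours, satisfaction_process, and so on) are hypothetical and are not drawn from any particular agency system.

    # Hypothetical sketch of a per-case record supporting the measures listed
    # above: usage, time savings, cost avoidance, satisfaction, relationships.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class ADRCaseRecord:
        case_id: str
        adr_offered: bool                      # was ADR considered/offered? (usage)
        adr_used: bool                         # did the parties use ADR? (usage)
        process_type: str                      # e.g. "mediation" or "traditional"
        opened: date
        closed: Optional[date]                 # None while the case is pending
        staff_hours: float                     # staff time spent (cost avoidance)
        direct_costs: float                    # neutral fees, travel, etc.
        settled: bool                          # did the case settle?
        satisfaction_process: Optional[int]    # 1-5 exit-survey rating of the process
        satisfaction_outcome: Optional[int]    # 1-5 exit-survey rating of the outcome
        relationship_improved: Optional[bool]  # where ongoing relationships matter

        def days_to_resolution(self) -> Optional[int]:
            """Elapsed calendar days from opening to closure (time measure)."""
            return (self.closed - self.opened).days if self.closed else None

Records of this kind, kept for both ADR and traditionally processed cases, make the comparisons in recommendations 2(b) and 2(c) straightforward to compute.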

3. Validity and Reliability of Data.

Methodologies should be valid and reliable. ADR program results should be compared to results from alternate or previously existing dispute resolution methods.

4. Presentation of Data. ADR Program Managers should present a realistic, accurate and complete picture of the results of their program.

5. Use of Data. ADR success stories should be summarized and publicized, to help foster a culture in which ADR is accepted as beneficial to Federal agencies and their customers. If areas for improvement are identified, that information should be used to enhance the ADR program.

6. Reporting. Federal ADR Program Managers are encouraged to report the results of their evaluations to the Federal Interagency ADR Working Group.

7. Potential Resources. In undertaking ADR activities, agencies should consult: (1) The Federal ADR Program Manager’s Resource Manual, Chapter 8: Evaluating ADR Programs, and (2) The Electronic Guide to Federal Procurement ADR. Both of these resources, as well as other valuable information, are available electronically at: www.financenet.gov/iadrwg.

Evaluating ADR Programs


I. Introduction

For the past ten years the practice of ADR, the creation of ADR programs, and the discipline of ADR evaluation have been developing in tandem. We have learned that organizations best design and develop ADR programs by understanding their own conflict resolution culture, that evaluation can and should serve as a reflective feedback mechanism for ADR program development, and that evaluation belongs at the beginning of ADR program design. While evaluation is ideally present at the beginning of ADR program development, we recognize that many ADR programs already up and running do not have evaluation components. This chapter addresses ADR programs at any stage of program development.


II. Planning and Designing the Evaluation


Traditional ADR program evaluation is a way to determine whether an ADR program is meeting its goals and objectives. Evaluation data are useful in finding out what works and what does not work, and may be a critical factor in decisions to modify or expand a program. When planning and designing a federal ADR program evaluation, it is important to understand what components of the program are essential to comply with federal statutes and initiatives. To the extent that an ADR program maintains compliance with federal ADR requirements, it fulfills a necessary and useful function for your organization or agency. A good design will build upon an existing program structure and will establish an evaluation methodology for each program “core” area, core areas being defined by statute or initiative. Overall program effectiveness can then be determined by combining data from all function areas, with consideration being given to intangible benefits and customer satisfaction.

Evaluation is an art as well as a science, even, perhaps, a state of mind. It is almost never a linear process. Decisions made early in the evaluation planning and design process will almost certainly need to be reconsidered and modified as your ADR program grows and develops. In addition, traditional cost/benefit analysis does not capture many of the benefits derived from ADR service programs because these benefits are often intangible and not easily quantifiable. With all of this in mind, evaluators need to strive for a workable balance between the need for defensible results and practical limitations. Key questions to ask when planning and designing an ADR program evaluation are:

  • What are your goals and objectives for your ADR program evaluation?
  • How will you pay for your ADR evaluation?
  • Who will evaluate your ADR program?
  • Who is your audience for this evaluation?
  • What is your evaluation design strategy?
  • What are your measures of success?


A. What Are Your Goals and Objectives For Your ADR Program Evaluation?


The goals and objectives of an evaluation should link closely with the goals and objectives of the ADR program being evaluated, should reflect the needs and interests of those requesting the evaluation, and should be sensitive to the needs and interests of the expected audiences for the results. Ideally, the ADR program’s goals and objectives will have been established early on. Sometimes, however, these goals may not have been clearly articulated, may not be measurable as stated, or may have changed. Evaluators may need to ask program managers and other stakeholders to provide input (and hopefully arrive at a consensus) on the program’s goals, while addressing such questions as: How well is the program working? Should changes be made? Should the program be continued or expanded? How well is the ADR program working in its particular federal context?


B. How Will You Pay For Your ADR Evaluation?


The cost of conducting an ADR program evaluation depends upon a number of factors, such as the number and complexity of success measures, the type of ADR program selected, the level of statistical significance required of the results, the availability of acceptable data, and who is selected to carry out the evaluation. Costs can be controlled, however, by careful planning, appropriate adjustments in the design phase, and a creative use of outside evaluators, from universities, for example.


C. Who Will Evaluate Your ADR Program?


When selecting an evaluator, or a team of evaluators, a number of qualifications should be considered. Objectivity (i.e., no stake in the outcome) is essential for your results to be seen as credible. An evaluator should have sufficient knowledge of the ADR process, as well as program evaluation expertise, to design the evaluation, carry out data collection and analysis, and, if you so choose, present the results to your audience. Such expertise may be found inside some agency policy and program evaluation offices, at the U.S. General Accounting Office, or at various outside evaluation consulting firms and university departments specializing in social science research. Some understanding of the organization or the context in which the program operates can be helpful to the evaluator, as are good interpersonal and management skills.


Evaluations can be conducted by people outside the agency, by people within the agency but outside the program being evaluated, or by people involved with the ADR program. There are advantages and disadvantages to each option. An outside evaluator has the potential for the greatest impartiality, lending credibility and validity to your results. In addition, depending upon the expertise available in a particular agency, an outside evaluator may have more technical knowledge and experience. Outside evaluation may be relatively expensive, however, depending upon the affiliation of the evaluators (e.g., colleges or universities, other non-profit groups, or private sector entities such as management consulting or social science research firms). If the agency has evaluation capacity inside the organization where the ADR program is being implemented, the requisite neutrality may be available at a potentially lower cost. An inside evaluator involved in ADR program implementation or design may be the least expensive and offer the best understanding of program context, but this option also carries with it the potential perception of a lack of impartiality. One way to avoid some of the disadvantages of each of these approaches is to use a team of people representing internal and external groups.


Regardless of who does the evaluation (outside or inside), it is useful to have someone in the ADR program who can serve as a liaison with the evaluator to ensure access to the necessary information. The liaison might be the person responsible for planning the evaluation.


D. Who Is Your Audience For This Evaluation?


There are usually a variety of people who have an interest in the results of a program evaluation. These audiences may be interested in different issues and seek different types of information. Potential audiences should be identified as early as possible, and kept in mind while planning the evaluation, so that their questions will be addressed. Possible audiences for an ADR program evaluation include ADR program officials, other agency officials, program users, members of Congress, the general public, and others. Agency program officials may be interested in finding out how the ADR program is working, and how it might be improved. Their interests might focus, for example, on the program’s impact on case inventory (backlogs), the effects of ADR use on long-term relationships among disputants, or how well information about the program is being disseminated. Program officials involved in the day-to-day operation may have different interests than those at higher levels.


Other agency officials such as budget officers, staff within offices of General Counsel and Inspector General, or managers from other programs may also have an interest in evaluation results. Budget officials may be interested in whether cost savings have been achieved through implementation of the program. The Inspector General may be interested in the nature of the settlements and whether ADR use promotes long-term compliance. General Counsels may care about how long it takes to resolve cases or the nature of outcomes; other managers may want to know how effectively the program was implemented.


Members of Congress and their staffs may be interested in how ADR use affects budgets and how related laws, such as the Administrative Dispute Resolution Act, are being implemented. Members of the public may be interested in how efficiently the agency is resolving its disputes, and how satisfied participants are with ADR processes. Disputants may be interested in finding out how typical their experience was compared to other users. Officials in other federal agencies may find evaluation results helpful as they plan or modify their own ADR programs. There may be other audiences whose interests or desire for information should be considered.


Although terminology differs, evaluations are commonly characterized as either:
(1) program effectiveness (also known as impact, outcome, or summative) evaluations, which focus on whether a program is meeting its goals and/or having the desired impact; or
(2) program design and administration (also known as process or formative) evaluations, which examine how a program is operating.

Program effectiveness evaluations may be useful in determining whether a program should be continued or expanded; program design/administration evaluations often focus on how a continuing program can be improved. Remember that decisions on the future of programs (or even how they could be improved) are usually not made solely on the basis of program evaluation results. Agency priorities, other institutional concerns, budget limitations, and other factors will also affect program decisions.

While it is not possible to satisfy every audience by answering all potential questions, it is useful to figure out what the possible questions are and then focus the evaluation on the most important ones. Talking to members of the various potential audiences can help identify the issues they are interested in, and may help develop consensus about which issues to address. Such discussions also improve the likelihood that evaluation results will be a useful and meaningful part of future decision making processes.


E. What Is Your Evaluation Design Strategy?


ADR program design is based on an understanding that certain components of a program are essential to comply with federal statutes and initiatives. Program effectiveness evaluations are conducted to answer fundamental questions about a program’s utility, e.g., does the program provide a necessary or useful function, is the program accomplishing its goals, and is the program being administered effectively. A comprehensive evaluation system measures tangible and intangible benefits, including customer satisfaction, using both quantitative and qualitative data. To be a useful and effective management and planning tool, an evaluation system must do more than provide comparison data. It also must provide a flexible process for reevaluating the goals of the program, modifying the evaluation methodology, and implementing necessary changes.


Development of an evaluation design might include the following steps:


Identification and Clarification of ADR Program Goals
Clear goals and objectives mean that useful conclusions can be drawn from the data collected.


Development of an Appropriate Evaluation Methodology
It is necessary to determine what is to be measured and how, what the sources of the data are, and how the data will be collected. To do this most effectively, core functional areas of ADR program practice need to be identified, as do quantitative and qualitative sources of data.


Development of an Analysis Plan and Research Methodologies
Traditional experimental designs (e.g., time and cost/benefit analyses) provide statistically reliable results. Program analysis, while producing quantifiable results, must go beyond a bare assessment of program outcomes to explain those outcomes and to offer suggestions for program improvement.


Data Collection Mechanisms
Status reports, case studies, time series collections, agency databases, logs, surveys, and evaluation forms are all sources of information, as are personal interviews.
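Whichever mix of sources is used, the raw material usually ends up as tabular records that can be combined for analysis. Below is a minimal sketch, assuming exit-survey responses have been exported to a CSV file; the file name (adr_exit_surveys.csv) and column names are hypothetical and used only for illustration.

    # Hypothetical sketch: load exit-survey responses (one row per respondent)
    # exported from an evaluation form into simple Python records.
    import csv

    def load_exit_surveys(path="adr_exit_surveys.csv"):
        """Return survey rows with the 1-5 satisfaction ratings cast to int."""
        rows = []
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                row["satisfaction_process"] = int(row["satisfaction_process"])
                row["satisfaction_outcome"] = int(row["satisfaction_outcome"])
                rows.append(row)
        return rows

The same pattern applies to case logs and agency databases: each source is reduced to records keyed by case, so that quantitative measures and survey responses can be linked.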


F. What Are Your Measures of Success?


1. Program Effectiveness (Impact)
Program effectiveness measures are aimed at assessing the impact of the program on users/participants, overall mission accomplishment, etc. The indicators of program effectiveness can be further divided into three categories: efficiency, effectiveness, and customer satisfaction.


Efficiency


Cost to the Government of using alternative dispute resolution vs. traditional dispute resolution processes:
Is the use of ADR more or less costly than the use of traditional means of dispute resolution? (Cost may be measured in staff time, dollars, or other quantifiable factors.)


Cost to disputants of using alternative dispute resolution vs. traditional dispute resolution processes:
Is the use of ADR more or less costly than the use of traditional means of dispute resolution? (Cost may be measured in terms of staff time, dollars, or other quantifiable factors.)


Time required to resolve disputes using alternative dispute resolution vs. traditional means of dispute resolution:
Are disputes resolved more or less quickly using ADR, compared to traditional means of dispute resolution?
Such factors as administrative case processing, participant preparation, dispute resolution activity timeframes, and/or days to resolution may be considered.
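As an illustration of the time measure, the sketch below compares days-to-resolution for a sample of ADR cases against a sample of traditionally processed cases and applies Welch's two-sample t-test. The figures are invented, and the use of SciPy is an assumption; an agency would substitute its own case data and whatever level of statistical rigor its audience requires.

    # Hypothetical sketch: compare days-to-resolution for ADR vs. traditional
    # cases and test whether the observed difference is statistically significant.
    # The sample values below are invented for illustration only.
    from statistics import mean
    from scipy import stats  # assumes SciPy is installed

    adr_days = [45, 60, 30, 75, 50, 40, 65, 55]            # ADR cases
    traditional_days = [120, 180, 150, 200, 90, 160, 140]  # traditional process

    print(f"Mean days (ADR):         {mean(adr_days):.1f}")
    print(f"Mean days (traditional): {mean(traditional_days):.1f}")

    # Welch's t-test does not assume equal variances between the two groups.
    t_stat, p_value = stats.ttest_ind(adr_days, traditional_days, equal_var=False)
    print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value suggests the time difference is unlikely to be due to chance alone, although selection effects (which cases are routed to ADR in the first place) still need to be weighed before drawing conclusions.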


Effectiveness


Dispute Outcomes.
Number of settlements achieved through the use of mediation vs. traditional dispute resolution processes:
Does the use of alternative dispute resolution result in a greater or a fewer number of settlements?


Number of cases going beyond mediation steps:
Does the use of alternative dispute resolution result in a greater/fewer number of investigations, further litigation activities, etc.?


Nature of outcomes:
What impact does the use of alternative dispute resolution have on the nature of outcomes, e.g. do settlement agreements “look different”?
Do settlement agreements reflect more “creative” solutions?
Do outcomes vary according to the type of alternative dispute resolution process used?


Correlations for cases selected for alternative dispute resolution, between dispute outcomes and such factors as complexity or number of issues, or number of parties:
Is there any correlation, where ADR is used, between the complexity and/or number of parties/issues in a case and the outcome of the case?


Durability of Outcomes.
Rate of compliance with settlement agreements:
Does the use of alternative dispute resolution result in greater or lesser levels of compliance with settlement agreements?


Rate of dispute recurrence:
Does the use of alternative dispute resolution result in greater or lesser levels of dispute recurrence, i.e. recurrence of disputes among the same parties?


Impact on Dispute Environment.
Size of case inventory:
Does the use of alternative dispute resolution result in an increase/decrease in case inventory?

Types of disputes:
Does the use of alternative dispute resolution have an impact on the types of disputes that arise?


Negative impacts:
Does the use of alternative dispute resolution have any negative consequences, e.g. an inability to diagnose and correct systemic problems/issues?

Timing of dispute resolution:
Does the use of alternative dispute resolution affect the stage at which disputes are resolved?


Level at which disputes are resolved:
Does the use of alternative dispute resolution have any impact on where and by whom disputes are resolved?


Management perceptions:
What are the quantitative and qualitative effects of using alternative dispute resolution on management, e.g. how does the use of ADR impact upon allocation and use of management time and resources?
Does the use of ADR ease the job of managing?


Public perceptions:
Is the public satisfied with alternative dispute resolution outcomes? Is there any perceived impact of use of ADR on effectiveness of the underlying program?
“Public” may be defined differently, depending on the particular program/setting involved.


Customer Satisfaction
Participants’ Satisfaction with Process
Participants’ perceptions of fairness:
What are participant perceptions of access to alternative dispute resolution, procedural fairness, fair treatment of parties by neutrals, etc.?


Participants’ perceptions of appropriateness:
What are participant perceptions of appropriateness of matching decisions (i.e. matching of particular process to particular kinds of disputes or specific cases)?


Participants’ perceptions of usefulness:
What are participant perceptions of the usefulness of alternative dispute resolution in the generation of settlement options, the quantity and reliability of information exchanged, etc.?


Participants’ perceptions of control over their own decisions:
Do participants feel a greater or lesser degree of control over dispute resolution process and outcome through the use of alternative dispute resolution?
Is greater control desirable?


Impact on Relationships Between Parties
Nature of relationships among the parties:
Does the use of alternative dispute resolution improve or otherwise change the parties’ perceptions of one another?
Is there a decrease or increase in the level of conflict between the parties?
Are the parties more or less likely to devise ways of dealing with future disputes?
Are the parties able to communicate more directly or effectively at the conclusion of the ADR process and/or when new problems arise?


Participants’ satisfaction with outcomes:
Are participants satisfied or unsatisfied with the outcomes of cases in which alternative dispute resolution has been used?


Participants’ willingness to use alternative dispute resolution in the future:
Would participants elect to use alternative dispute resolution in future disputes?
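Customer satisfaction measures such as these are typically gathered on short exit surveys using Likert-scale items. The sketch below summarizes such responses; the item names and the 1-to-5 scale are assumptions, and the sample data are invented for illustration.

    # Hypothetical sketch: summarize Likert-scale (1-5) exit-survey responses
    # on the satisfaction measures discussed above.
    from statistics import mean

    responses = [
        {"fairness": 5, "usefulness": 4, "control": 4, "outcome": 5, "use_again": True},
        {"fairness": 4, "usefulness": 4, "control": 3, "outcome": 3, "use_again": True},
        {"fairness": 3, "usefulness": 2, "control": 4, "outcome": 2, "use_again": False},
        {"fairness": 5, "usefulness": 5, "control": 5, "outcome": 4, "use_again": True},
    ]

    for item in ("fairness", "usefulness", "control", "outcome"):
        avg = mean(r[item] for r in responses)
        print(f"Average {item} rating: {avg:.2f} / 5")

    willing = sum(r["use_again"] for r in responses) / len(responses)
    print(f"Would use ADR again: {willing:.0%}")

Reporting averages alongside the distribution of responses (and the response rate) helps the audience judge how representative the satisfaction figures are.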


2. Program Design and Administration (Structure and Process)


How a program is implemented will have an impact on how effective a program is in meeting its overall goals. Program design and administration measures are used to examine this relationship and to determine how a program can be improved.


The indicators of program design and administration are further divided into three categories: program organization, service delivery, and program quality.


Program Organization


Program structure and process:
Are program structure and process consistent with underlying laws, regulations, executive orders, and/or agency guidance?
Do program structure and process adequately reflect program design?
Are program structure and process adequate to permit appropriate access to and use of the program?


Directives, guides, and standards:
Do program directives, guides, and standards provide staff/users with sufficient information to appropriately administer/use the program?


Delineation of responsibilities:
Does the delineation of staff/user responsibilities reflect program design?
Is the delineation of responsibilities such that it fosters smooth and effective program operation?


Sufficiency of staff (number/type):
Is the number/type of program staff consistent with program design and operational needs?


Coordination/working relationships:
Is needed coordination with other relevant internal and external individuals and organizations taking place?
Have effective working relationships been established to carry out program objectives?


Service Delivery


Access and Procedure
Participant access to alternative dispute resolution:
Are potential participants made aware of the program? Is the program made available to those interested in using ADR?


Relationship between participant perceptions of access and usage of alternative dispute resolution:
What impact do participants’ perceptions about the availability of the program have on the levels of program usage?


Participant understanding of procedural requirements:
Do program users understand how the program works?
Did they feel comfortable with the process in advance?


Relationship between procedural understanding and rates of usage:
Is there any relationship between the level of participant understanding and the degree of program use, e.g. is a lack of participant understanding serving as a disincentive to using the program?


Case Selection Criteria
Participants’ perceptions of fairness, appropriateness:
Do participants feel that appropriate types of cases are being handled in the program?
Do participants or non-participants feel that the criteria for which cases are eligible for alternative dispute resolution are fair?
Are cases being sent to the program at the appropriate dispute stages?


Relationship between dispute outcomes and categories of cases:
Is there a correlation between the nature (size, types of disputants, and/or stage of the dispute) of cases and the outcome of the dispute?
Are certain types of cases more likely to be resolved through alternative dispute resolution than other types?


Program Quality


Training
Participants’ perceptions of the appropriateness of staff and user training:
Do participants feel that they were provided with sufficient initial information and/or training on how to use the program?
Do they feel that program staff had sufficient training and/or knowledge to appropriately conduct the program?


Relationship between training variables and dispute outcomes:
Is there a relationship between the type/amount of training (for participants and/or staff) and dispute outcomes?


Neutrals
Participants’ views of the selection process:
Are participants satisfied with the manner in which neutrals were selected and assigned to cases?
Were they involved in the selection decision?
If not, did they feel they should be?


Relationship between participants’ views of the selection process, perceptions of neutral competence and objectivity, and dispute outcomes:
Is there any relationship between participant views about the neutrals selection process and dispute outcomes?
How do these views affect participants’ assessment of the competence and neutrality of neutrals?


Participants’ perceptions of competence (including appropriateness of skill levels/training):
Do participants feel that neutrals were sufficiently competent or trained?
Do participants feel that more or less training was needed?


Participants’ perceptions of neutrality/objectivity:
Do participants feel that neutrals were sufficiently objective?
Do participants feel that neutrals were fair in their handling of the dispute?


G. Other Specific Program Features


Every dispute resolution program is unique. Those requesting and/or conducting an evaluation may want to consider examining other aspects of the program. These unique features may relate to the design of a program, who was and continues to be involved in program design and administration, etc. Each is likely to have at least some impact on service delivery and the quality of the program, and should be considered for inclusion in either a comprehensive or selected evaluation of the program, as appropriate.


III. Presentation, Dissemination, and Use of Results


Results should be communicated in ways that will allow meaningful decisionmaking by program administrators and decisionmakers. It is easier to make decisions about the best way to present and disseminate results if the people who will use the results (the audience) have been consulted during the initial and subsequent evaluation processes. Such consultation can avoid costly or embarrassing errors, e.g., omission of a key area for analysis, and can ensure the report meets the needs of those who will be using it.


A. What Is the Best Method For Communicating Your Findings?


There are a variety of ways that evaluators can communicate results to potential audiences. Evaluators or program staff may provide briefings, hold meetings with users, and/or prepare a written report.


Briefings and presentations allow evaluators or program staff to convey important evaluation information quickly and selectively. In selecting material to be presented, care should be taken to avoid bias or presentation of material out of context. Some discussion of methodology is important, as are appropriate cautions about the limits and appropriate use of evaluation data. Providing for interaction with or feedback from the audience may allow issues and potential problems to be identified.


Written reports typically take a great deal of time to prepare, but allow evaluators to provide considerably more detail on both methodology and results. Legislation or executive decisions often require a final, written report. If it is important to ensure that there is one “official” source of information on evaluation methodology and results, a formal, written report may be an important and/or required format in addition to briefings and presentations by evaluators or staff.


B. What Kind of Information Needs to Be Communicated?


Although the potential audiences, program content, and evaluation objectives will vary for each ADR program evaluation, it is generally helpful to include the following kinds of information in a report or other type of presentation:

  • description of the ADR program and how it operates;
  • goals and objectives of the evaluation;
  • description of the evaluator’s methodology;
  • presentation of evaluation findings;
  • discussion of program strengths and weaknesses;
  • implications for program administration (e.g., training, budget, staff); and
  • recommendations as appropriate.


Presentation style is entirely a matter of what works for whom. It is always important, however, to make sure that evaluation data are presented accurately and completely, to prevent charges of misrepresentation or overreaching, and to avoid misuse of results.


C. How Can You Enhance the Effectiveness of Your Presentation?


Variations in presentation format and style aside, we offer the following suggestions for making the presentation of evaluation results as effective as possible.


Involve potential users as early as possible in determining presentation format and style:
Evaluation data should be organized and communicated in a way that is useful for potential audiences and users.


Tailor presentation method, format, and style to audience needs:
Select the method of presentation (e.g., oral briefing, written report), format, and style of presentation (e.g., formal vs. informal, briefing vs. discussion) based on who your audience is and what their needs are. There may be multiple audiences with multiple needs. Be flexible and willing to adapt material as appropriate.


Be clear and accurate:
Evaluation information must be presented clearly and accurately. Always keep the audience in mind as you prepare to describe your ADR program and present evaluation data. Avoid any gaps in describing the program or presenting the results. A clear and accurate portrayal of the program and evaluation results will allow the audience to draw appropriate conclusions about program effectiveness and any need for change.


Be honest and direct:
Sharing evaluation findings with potential users and involving them in key decisions concerning presentation format and style does not mean publishing only those findings that reflect well on the program or those affiliated with it. Evaluators must present the story objectively; too heavy an emphasis on the positive may cast doubt on the integrity of the results as well as the integrity of the evaluators. Data that suggest weaknesses in program design or administration or that reveal failure to accomplish program goals or objectives should be reported and can be used as a basis for suggesting appropriate changes. Honest analysis and thoughtful consideration of the information will enhance both the credibility and usefulness of the results.


Keep the body of the report or the bulk of the presentation simple:
Reduce complex data to an understandable form and use graphic illustrations where appropriate. Evaluation results must be presented so that the most essential data are available, understandable, and useful. Too complex a format or over-reliance on narrative may detract from evaluation results and analysis. Organize the presentation or report for multiple uses. Use headings and subheadings to help the audience identify useful information quickly. Limit the use of technical jargon. Prevent misinterpretation or misuse by considering how the data will look if lifted from the context of the presentation or report. Use simple graphics to illustrate results and call attention to key findings. Use footnotes and make technical data available in handouts or appendices so that the body of the presentation or report is as uncomplicated as possible.
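For instance, a single bar chart comparing median time to resolution under ADR and under the traditional process often communicates more than a table of raw figures. A minimal sketch using matplotlib follows; the numbers are invented and the output file name is arbitrary.

    # Hypothetical sketch: a simple bar chart comparing median days-to-resolution.
    # The figures are invented; substitute the program's own evaluation results.
    import matplotlib.pyplot as plt

    processes = ["ADR", "Traditional"]
    median_days = [52, 148]

    fig, ax = plt.subplots()
    ax.bar(processes, median_days)
    ax.set_ylabel("Median days to resolution")
    ax.set_title("Time to resolution: ADR vs. traditional process")
    for x, y in enumerate(median_days):
        ax.text(x, y, str(y), ha="center", va="bottom")  # label each bar
    plt.savefig("adr_time_comparison.png", dpi=150, bbox_inches="tight")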


Provide an executive summary or abstract:
Evaluators should provide an overview. The “quick take” should be supplemented by more detailed discussion later in the report.


Make survey instruments and other data collection tools available:
Materials can be made available as handouts, at an oral presentation or face-to-face meeting, or as appendices to a written report. The availability of such material enhances both understanding and credibility. It also allows other ADR program evaluators to learn from the experiences of their peers.


Note limitations on the interpretation and use of evaluation data, where appropriate:
Limitations on the interpretation of the data, such as those that might relate to the ability to study results, should be communicated to the audience. Evaluators need to exercise caution in expressing their own views and conclusions. Where conclusions are not an objective reflection of the data, they need to be labeled appropriately; i.e., as the views of the evaluators and not necessarily of officials responsible for the program.


Expect the need for follow-up; be flexible and responsive:
Have extra copies of reports and presentation handouts available. Keep materials accessible. Provide addresses and telephone numbers for follow-up discussion or questions. Be available for consultation. Stay abreast of how results are being used; provide clarification or added direction in the case of misinterpretation or misuse. Prepare additional materials as needed. Tailor subsequent releases to customer needs.

D. Who Is Responsible for Making Decisions Regarding the Dissemination of Evaluation Results?


It is important to think about dissemination of the results at two points: early in the planning process, and again as results become available. Decisions about dissemination may be made solely by the evaluator, solely by program officials or other entity that has requested the evaluation, or, more typically, cooperatively. Such decisions may be circumscribed by contract or agreement, or may be discussed and resolved informally by evaluators and decisionmakers.


When Should Evaluation Results be Made Available?
Decisionmakers need to consider the implications of releasing evaluation results at different times. For example, if you want publicity for the results, select slower news days. The timing of data release may be defined by contract or agreement, or may otherwise be discussed and resolved by evaluators and decisionmakers. Releasing preliminary data before all data are collected or analyzed may be risky.


How Widely Will Evaluation Results be Disseminated?
Evaluation results may be disseminated widely or narrowly. Cost, convenience, and level of interest are likely to play a role. It is rare that either the evaluator or program officials will have complete control over dissemination of the results.


How Will Evaluation Results be Disclosed Initially?
Evaluation results can be initially disclosed in different ways, with more or less fanfare. They may be made available to the selected audiences by memorandum, by press release, by press conference, etc. Typically, such decisions will be made at the executive level, by those who have the authority to make the disclosure.


Evaluation Checklist


Is your ADR program ongoing or in the formative stage?
What are your goals and objectives for your ADR program evaluation?
How will you pay for your ADR program evaluation?
Who will do the evaluation?
Who is your audience?
What is your evaluation design strategy?
What are your measures of success?
What do you need to know about your program effectiveness (impact)?
What do you need to know about your program structure and administration?
How and when will you disseminate your evaluation results?


Resources


Administrative Conference of the United States. (1995). Dispute Systems Design Working Group. Evaluating ADR Programs: A Handbook for Federal Agencies. Washington, D. C.: Administrative Conference of the United States.
Brett, J. M., Barsness, Z. I., & Goldberg, S. B. (1996). The Effectiveness of Mediation: An Independent Analysis of Cases Handled by Four Major Service Providers. Negotiation Journal, 12(3), 259-269.
Costantino, Cathy and Sickles-Merchant, Christine. (1996). Designing Conflict Management Systems: A Guide to Creating Productive and Healthy Organizations. Jossey-Bass.
Empowerment Evaluation: http://www.stanford.edu/~davidf/empowermentevaluation.html
Federal Deposit Insurance Corporation. (1999). Checklist for Evaluation of Federal Agency ADR Programs: Short and Long Term. Attorney General’s ADR Working Group, Workplace Session Notes, 5/18/99.
Federal Deposit Insurance Corporation. (1997). ADR Program Evaluation Project, Annual Report.
Galanter, M. (1989). Compared to What? Assessing the Quality of Dispute Processing. Denver University Law Review, 66(3), xi-xiv.
Honeyman, C. (1990). On Evaluating Mediators. Negotiation Journal, 23-36.
Honeyman, C. (1995). Financing Dispute Resolution. Madison, WI: Wisconsin Employment Relations Commission.
McEwen, C. A. (1991). Evaluating ADR Programs. In F. E. A. Sander, Emerging ADR Issues in State and Federal Courts. Washington, D.C.
Patton, Michael. (1990). Qualitative Evaluation and Research Methods. Sage: Beverly Hills, CA.
Posovac, Emil J. and Raymond B. Carey. (1997). Program Evaluation: Methods and Case Studies, 5th Edition. Prentice Hall Humanities/Social Sciences.
Rossi, Peter and Howard Freeman. (1993). Evaluation: A Systematic Approach. Sage: Beverly Hills, CA.
Scher, E. (1996). Evaluations: What for, by Whom, Who Pays? Consensus, October, 5, 7-8.
Susskind, L. E. (1986). Evaluating Dispute Resolution Experiments. Negotiation Journal, April, 135-139.
Tyler, T. (1989). The Quality of Dispute Resolution Procedures and Outcomes. Denver University Law Review, 66, 419-436.
Wholey, Joseph S., Harry P. Hatry, and Kathryn E. Newcomer, Eds. (1994). Handbook of Practical Program Evaluation. Jossey-Bass.
Worthen, B.R., J.R. Sanders, and J. Fitzpatrick. (1997). Program Evaluation: Alternative Approaches and Practical Guidelines. Addison, Wesley, Longman.


This document was written by Lee Scharf, ADR Specialist at the Environmental Protection Agency, and draws from the work of Cathy Costantino and Christine Sickles-Merchant as well as that of the Administrative Conference of the United States. See the Resources section for cites.

