This article originally appeared in the January 1999 issue of Consensus, a newspaper published jointly by the Consensus Building Institute and the MIT-Harvard Public Disputes Program.
These questions are certainly worth answering. But after twenty years, attempts to evaluate consensus building efforts in the public sector have failed to produce agreement on even the right criteria to use for evaluation.
Take, for example, the debate over attempts to evaluate negotiated rule-making efforts at the federal level that reached its apex in the Duke Law Journal in 1997.
Assistant Professor Cary Coglianese of Harvard University's John F. Kennedy School of Government carried out what most quantitatively inclined political scientists would consider to be a methodologically sound study. He reviewed a dozen US Environmental Protection Agency (EPA) negotiated rule makings - known as "reg-negs" - in an attempt to determine whether these efforts saved time or reduced litigation costs compared to traditional EPA rulemaking efforts.
Coglianese found that, on average, reg-negs had not saved time and that the rules produced through the reg-neg process ran a higher risk of legal challenge than did EPA rules produced under traditional procedures. Based on these findings, Coglianese recommended that the federal government stop encouraging the use of negotiated rulemaking.
Philip Harter of the Washington-based Mediation Consortium responded that Coglianese had missed the point, that his methods were faulty, and as a result his findings and conclusion were invalid. Harter, one of the leading practitioners of regulatory negotiation, argued that Coglianese had not looked closely enough at the specific circumstances surrounding each negotiation.
Qualitatively, Harter said, the EPA rules produced through reg-negs were not comparable to most EPA regulations. EPA had chosen particularly controversial issues for regulatory negotiation, hoping that the reg-neg process could resolve conflicts that had stymied the traditional rulemaking process.
Harter also pointed out that some of the reg-negs studied by Coglianese were not representative of reg-neg's full potential because they were not conducted according to the process guidelines that dispute resolution professionals had advocated since the early 1980s. In Harter's view, Coglianese unfairly condemned a procedure whose outcomes differ significantly depending on how well it is implemented.
Harter questioned Coglianese's evaluation criteria as well as his methods. Though saving time and reducing litigation are valid goals, other goals may be even more important: making it possible for more stakeholders to participate more directly in the policy making process; taking advantage of stakeholders' knowledge and experience to create more effective rules that meet stakeholder interests; and doing a better job of taking scientific and technical information into account.
Negotiated rulemaking is in its "adolescence," Harter argued, and will continue to evolve as agencies and interest groups gain experience and confidence in using it to reach consensus on public policy issues.
Coglianese and Harter are both sophisticated and skilled evaluators of consensus building efforts. Their sharp disagreement raises several issues relevant to the evaluation not only of negotiated rulemaking, but of public consensus building in general.
First, what should we be trying to evaluate? There are few unambiguous indicators of a "good" process or a "good" outcome. Process management frequently requires convenors, participants and neutrals to make procedural trade-offs between in-depth exploration of options and timely decision-making.
At first glance, evaluating outcomes would seem to be easier. Two basic criteria - fairness and efficiency - are widely accepted as valid measures of success, at least at a conceptual level. But evaluating the "fairness" of outcomes immediately raises the questions: according to whom and as compared to what?
If stakeholders, neutrals and disinterested evaluators don't all agree, it's probably because they are using different standards of fairness, such as accord with precedent, scientific or technical merit and distributive justice. How should these competing standards be weighed? There is no easy answer.
Evaluating the efficiency of outcomes isn't any easier. Let's say that stakeholders judge the outcome to be "efficient" - in the sense that all stakeholders believe that they could not have gotten more without making at least one other stakeholder worse-off. We still do not know whether these stakeholders could have created more joint gains if they had more information or were able to work together more effectively.
Perhaps focusing on "real world" comparisons will move us forward. Even if the consensus building process was not unambiguously fair or efficient in some absolute sense, maybe it was fairer or more efficient than the stakeholders' next best alternative. But as the Coglianese-Harter debate illustrates, the answer "as compared to what?" is rarely clear-cut, even in settings such as regulatory negotiations where there are well-established decision-making procedures.
Given the thicket of conceptual and methodological problems as well as the difficulty of defending any one set of evaluation criteria, it is no surprise that there is no agreement on how to evaluate public dispute resolution or consensus building efforts.
Nevertheless, some guidelines are worth noting. First, evaluators should not only lay out their evaluative criteria, but also explain why they have chosen those particular criteria. For example, if evaluators choose to assess the time and costs it takes to reach agreement, they should also examine other, less quantifiable costs and benefits that have been claimed for the process (such as the impact on relationships and the level of organizational learning) or explain why they are not also examining them.
Second, evaluators need to acknowledge the imperfections of the methods they use. Whether their primary method is participant interviews, review of written documentation or statistical analysis of outcomes, evaluators must highlight the limitations of the methods they have chosen.
For example, it is not enough to say that all the process participants were evaluated using a standard interview questionnaire. Differences in the level of participants' involvement in the process, their recollection of events and their satisfaction with the process and the outcome may all color their responses to questionnaires and should be reviewed in the evaluators' presentation of the methods selected.
On a more positive note, the wide range of legitimate evaluative criteria presents a tremendous opportunity - as well as a formidable challenge - for would-be evaluators. To this writer's knowledge, there has not yet been a peer-reviewed, published evaluation of a set of public dispute resolution cases using:
• Multiple process and outcome criteria;
• Qualitative and quantitative indicators and methods well-tailored to those criteria;
• A well-chosen control group of cases from the same arena that were resolved using traditional administrative, political and/or judicial methods.
The field would benefit greatly from such studies. Practitioners and evaluators should work together to assemble their findings in a way that could improve both the theory and practice of consensus building in the public sector.
If such studies already exist, Consensus would like to know about them. If you have authored or know of one, please let us know.
David Fairman has facilitated consensus building and mediated resolution of complex public and organizational disputes on economic development and human service programs and projects, environmental and land use planning and regulation, and violent intergroup conflicts. Recent and current projects include facilitation of a back-channel political dialogue on options for a political settlement to resolve a Moslem insurgency in the southern Philippines; a global review of the environmental and social policies of the World Bank’s International Finance Corporation; consensus building on health and housing policies for ex-offenders re-entering their communities across the U.S.; and facilitation of U.S. national policy dialogues on the future of public housing programs and national energy policy.
Dr. Fairman has taught negotiation, consensus building and mediation skills for organizational leaders and staff through the Program on Negotiation at Harvard Law School, and through national and international organizations including the U.S. Agency for International Development, the Asian Development Bank, the United Nations Development Program, the Sustainability Challenge Foundation, Hewlett Packard, IBM, Capital One, and the American Cancer Society, among many others. He has researched and written numerous academic publications, consulting reports and simulations on negotiation and public consensus building.
Dr. Fairman has been with CBI since 1997. Prior to his work with CBI, Dr. Fairman was a facilitator and trainer in private practice, and a mediator with Endispute, Inc., one of the first private conflict resolution organizations in the U.S. He has been a professional in the field since 1989.
He is a senior mediator on the rosters of the U.S. Environmental Protection Agency and the U.S. Institute for Environmental Conflict Resolution. He serves on the membership committee of the Alliance for International Conflict Prevention and Resolution, where he was a founding Board member, and he is a life member of the Council on Foreign Relations.
Dr. Fairman holds a BA from Harvard University and a Ph.D. in political science from MIT.