Last week, I was asked by my MIT colleague, Harvey Michaels, and one of my able graduate students, Elena Alshuler, to facilitate a brainstorming session for Duke Energy and business and community leaders from Charlotte, North Carolina. The question was how to get commercial real estate interests to increase energy efficiency in their buildings. Thirty-five participants met at MIT for almost two days. The group included experts in office building management and operations, local stakeholders from Charlotte, Duke Energy staff, representatives of not-for-profits involved in energy efficiency and sustainable development, and experts in behavioral change.
By the time I arrived on the afternoon of the first day, the crowd had spent many hours in breakout groups focused on three questions: (1) How can the interests of building owners and facility managers be re-aligned to ensure that they have an incentive to promote energy efficiency? (2) How can individual and social behavioral strategies be used to increase energy efficiency awareness, motivation, and action among building tenants? and (3) How can various communication channels be used to promote energy efficiency? Five tables with about eight participants each, along with knowledgeable discussion leaders and graduate student recorders, managed the brainstorming process. My job was to facilitate a consensus-building discussion that could lead to full group agreement on three or four responses to each of the three questions. The earlier discussions at each of the five tables had generated as many as a dozen different ideas in response to each question.
We used cell phone voting to try to reach full group agreement. And that's what I want to talk about. We took about 40 minutes to review the many ideas the recorders had compiled regarding the best way of answering question #1. In fact, the table recorders met during a break at the end of the brainstorming to compile a composite list of suggestions from all the tables in response to each question. So, when I began with question #1, we had a composite list of 10 or so possible responses. I asked someone who favored each one to explain what they had in mind and why, giving them about two minutes to do so. Then I asked everyone present to use their cell phone to text message their choice of the proposal they supported most strongly. On a large screen behind me, a bar graph instantly revealed the popularity of each idea. I then took the top four vote-getters (there was a clear drop-off after four) and asked everyone to text their votes for their top two choices among the remaining four. One response was supported by almost 70% of the group; two others had support from about a quarter of the room. I then asked whether anyone would be unable to support that list of three as the whole group's recommendation. I also asked anyone who wanted to hand in a few sentences with key arguments for their top choice, or further clarification of what would actually be required to implement one of the recommendations, to do so.
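For readers curious about the mechanics, the two-round narrowing process described above can be sketched in a few lines of code. This is a hypothetical illustration, not the software we actually used; the function names, the cutoff of four, and the sample ballots are my own assumptions.

```python
from collections import Counter

def first_round(votes, cutoff=4):
    """Tally one-choice-per-person votes and keep the top `cutoff` options.

    `votes` is a list of option labels, one per participant's text message.
    """
    tally = Counter(votes)
    return [option for option, _ in tally.most_common(cutoff)]

def second_round(ballots, finalists):
    """Tally second-round ballots, each listing up to two choices.

    Choices that are not among the finalists are ignored.
    """
    tally = Counter()
    for ballot in ballots:
        for choice in ballot:
            if choice in finalists:
                tally[choice] += 1
    return tally
```

In practice, a commercial keypad- or SMS-polling service would handle the tallying and draw the bar graph; the point here is just that the narrowing logic itself is simple: count, cut to the top vote-getters, then count again.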
We then did the same thing for question #2 and then question #3. Each piece of the discussion took about 35 minutes. Each led to a list of three or four priority suggestions that the group as a whole felt it could support unanimously. Then we took the suggested language handed in by a relatively small number of participants (maybe two or three for each set of recommendations) and prepared a composite text. This was projected on the screen to summarize the recommendations supported by the group in response to question #1. We followed the same process for question #2. (We didn't get to complete the text for #3.) In the end, I asked whether the group was willing to endorse the entire package. We had unanimity, although the organizers promised to send everyone the full text for review within a few days. If anyone had a problem or wanted a change, the organizers promised to edit the group statement, if possible, to accommodate last-minute concerns about how the recommendations were described.
I was impressed with how comfortable the group was using the cell phone voting procedure. The mixture of face-to-face large group discussion (to clarify each item listed on the screen), followed by two rounds of voting (which produced easy-to-read bar graphs), generated a crisp and meaningful list of recommendations that everyone had a chance to accept or reject. Of course, the fact that the items on each list were generated initially by small group brainstorming, facilitated by technically sophisticated discussion leaders and recorders, made the final discussion that much easier. Also, we started with a group of 35 who had made it their business to learn as much as they could about behavioral strategies for encouraging support for and implementation of energy efficiency measures in commercial buildings. Background papers and reports were sent to participants beforehand, and part of day one was spent listening to well-known national researchers who had a lot to share about relevant national findings.
I can imagine using cell phone voting in a wide range of public meetings. AmericaSpeaks has used keypad voting to facilitate public meetings with thousands of participants. I like the idea of asking people to use their cell phones (there is no cost to cast votes, by the way, using text messaging). I also think the mixture of group brainstorming to generate options, followed by large group voting to narrow the options, followed by group discussion during which those passionately committed to particular options can make their case, followed by another round of cell phone voting, followed by projection on a large screen of the polished prose summary of the group agreement (with language contributed by the participants and not just the facilitators), was a success.
I'd love to hear from others who have used cell phone technology to facilitate group decision-making, particularly in public settings.