Once again, The Economist has reported on an interesting study concerning artificial intelligence and psychology. In its August 16, 2014 edition, the authors of "The Computer will see you now" discuss a study in which participants chatted with an avatar. More specifically, Jonathan Gratch of the Institute for Creative Technologies in Los Angeles, California led researchers in an experiment to determine whether people, when asked tough or potentially embarrassing questions, would be more forthcoming if responding to an avatar. They found the answer to be "yes".
To test their idea that people are more honest when speaking with an avatar, the researchers created Ellie, a virtual psychologist. She could interpret a smile, pick up on a nervous tic, or sense that a participant was tense. She was also a very good listener, processing every word along with the tone, pitch, and body language that accompanied it.
Once Ellie was created, the researchers "... put 239 people in front of Ellie... to have a chat with her about their lives. Half were told (truthfully) they would be interacting with an artificially intelligent virtual human; the others were told (falsely) that Ellie was a bit like a puppet, and was having her strings pulled remotely by a person."
Designed to search for psychological problems, Ellie worked with each participant in the study in the same manner. She started every interview with rapport-building questions, such as, "Where are you from?" She followed these with more clinical ones, like, "How easy is it for you to get a good night's sleep?" She finished with questions intended to boost the participant's mood, for instance, "What are you most proud of?" Throughout the experience she asked relevant follow-up questions ("Can you tell me more about that?", for example) while providing the appropriate nods and facial expressions.
During their time with Ellie, all participants had their faces scanned for signs of sadness, and were given a score ranging from zero (indicating none) to one (indicating a great degree of sadness). Also, three real, human psychologists, who were ignorant of the purpose of the study, analysed transcripts of the sessions to rate how willingly the participants disclosed personal information.
These observers were asked to look at responses to sensitive and intimate questions, such as, "How close are you to your family?" and, "Tell me about the last time you felt really happy." They rated the responses to these on a seven-point scale ranging from -3 (indicating a complete unwillingness to disclose information) to +3 (indicating a complete willingness). All participants were also asked to fill out questionnaires intended to probe how they felt about the interview. (Id.)
The researchers found that those subjects who thought that they were dealing with a human were, indeed, less forthcoming. Similarly, those who thought that Ellie was under the control of a human operator "... reported greater fear of disclosing personal information." (Id.) In sum, the participants were more open and honest when they knew they were dealing with an avatar than a human or an entity under human control.
Letting the imagination run wild for a moment, it would appear that if my fellow mediators and I could turn ourselves into avatars, it would do wonders for mediation confidentiality and for settling matters. The parties would be more open, honest, and candid (and perhaps less inclined to game or "play" the mediator), which would lead to really getting to the bottom of what is going on, which in turn could lead to a resolution. One cannot resolve an issue if one is not candid about what is really involved; an avatar gives people a sense of trust and intimacy that we humans, try as we might, and even with the cloak of mediation confidentiality, cannot always achieve!
So... perhaps it is time for me to step into virtual reality and become an avatar. (Does anyone know where I can get hold of the machine used to turn humans into avatars in the movie "Avatar"?)
... Just something to think about!