First-of-its-kind forum demonstrates that global deliberation is a feasible scientific method

June 27th, 2023

In November 2022, Meta announced that it would launch a series of Community Forums—a new tool to help the company make decisions that govern its technologies. The process, they explained, would "bring together diverse groups of people from all over the world to discuss tough issues, consider hard choices, and share their perspectives on a set of recommendations that could improve the experiences people have across our apps and technologies every day."

The Metaverse Community Forum, conducted in collaboration with Stanford's Deliberative Democracy Lab on the topic of bullying and harassment, was a first-of-its-kind experiment in global deliberation. It sets an example for thoughtful, scientifically grounded public consultation on an emerging set of issues posed by new technology, and in doing so it shows that global deliberation is entirely feasible and could be applied to other public policy issues of global interest.

Innumerable new "worlds" are being set up in virtual reality. What will the ground rules of behavior in those worlds be? What protections need to be offered to those who participate? By whom? Who should be responsible? This project offers answers through thoughtful and representative public consultation.

In this project, scientific samples of the world's social media users, drawn from 32 countries in nine regions around the world and speaking 23 languages, were recruited for a weekend-long deliberation. A matching control group of comparable size did not deliberate but took the same questionnaires in the same time period in early December 2022. The issue is a novel and important one: how to regulate bullying and harassment in virtual reality, particularly in the new private or "members-only" virtual spaces being created in the Metaverse.

More than 6,300 deliberators, representative of global social media users (with the principal exception of China), were recruited by 14 survey research partners, and the deliberations were conducted in 23 languages. For more on the design, and on the weighting of the sample that supports inferences to the global population of social media users, see the Methodology Report.
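The study's actual weighting scheme is detailed in the Methodology Report, but the general idea behind such weighting is easy to illustrate. The Python sketch below shows post-stratification in its simplest form: each respondent is weighted by the ratio of a region's target population share to its share of the sample. The regions, counts, and target shares here are hypothetical stand-ins, not the study's figures.

    # Illustrative post-stratification weighting. All regions, counts,
    # and target shares below are hypothetical, not the study's figures.
    sample_counts = {"Region A": 900, "Region B": 2400, "Region C": 3000}
    target_shares = {"Region A": 0.25, "Region B": 0.35, "Region C": 0.40}

    n_total = sum(sample_counts.values())

    # Weight for a respondent in region r: target share / sample share.
    weights = {r: target_shares[r] / (sample_counts[r] / n_total)
               for r in sample_counts}

    # After weighting, each region's weighted share equals its target.
    for r, w in weights.items():
        weighted_share = w * sample_counts[r] / n_total
        print(f"{r}: weight {w:.2f}, weighted share {weighted_share:.2f}")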

Deliberative Polling

The design for the deliberations followed the Deliberative Polling model under the direction of the Stanford Deliberative Democracy Lab (housed at the Center on Democracy, Development and the Rule of Law, part of the Freeman Spogli Institute for International Studies at Stanford University). The project was a collaboration with Meta and the Behavioral Insights Team (BIT). A distinguished Advisory Committee vetted the briefing materials for the deliberations and provided many of the experts for the plenary sessions.

The process alternated between small-group discussions and plenary sessions in which competing experts answered questions agreed on in the small groups. The agenda was a series of 56 policy proposals that could be implemented by Meta or other platform owners. The proposals came not only with background materials but also with pros and cons posing trade-offs that the participants might want to consider. Video versions of the briefing materials were also provided.

The small group discussions were conducted on the Stanford Online Deliberation Platform, which moderated the video-based discussions, controlled the queue for talking, nudged those who had not volunteered to talk, intervened when there was incivility, and moved the group through the agenda of policy proposals and their pros and cons. Near the end of each discussion, it also guided the groups in formulating key questions that they wished to pose to the panels of competing experts in the plenary sessions. The Stanford Online Deliberation Platform is a collaboration between the Crowdsourced Democracy Team, led by Ashish Goel, and the Deliberative Democracy Lab, both at Stanford University.
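The platform's internal workings are not spelled out here, but the turn-taking behavior it automates can be pictured in miniature. The Python sketch below models a talk queue that hands the floor to hand-raisers in order and flags participants who have not yet spoken for a nudge; the class, methods, and names are invented for illustration and are not the platform's actual implementation.

    from collections import deque

    class TalkQueue:
        # Toy model of an automated talk queue; invented for illustration.
        def __init__(self, participants):
            self.queue = deque()
            self.has_spoken = {p: False for p in participants}

        def raise_hand(self, participant):
            # Join the speaking queue once, in first-come order.
            if participant not in self.queue:
                self.queue.append(participant)

        def next_speaker(self):
            # Hand the floor to the next person in line.
            speaker = self.queue.popleft()
            self.has_spoken[speaker] = True
            return speaker

        def quiet_members(self):
            # Participants to nudge: never spoke, not waiting to speak.
            return [p for p, spoke in self.has_spoken.items()
                    if not spoke and p not in self.queue]

    q = TalkQueue(["Ana", "Bo", "Chen"])
    q.raise_hand("Ana")
    print(q.next_speaker())   # Ana
    print(q.quiet_members())  # ['Bo', 'Chen']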

The core issue posed for deliberation was the responsibility of platform owners such as Meta for regulating behavior in the multitude of private or "members-only" worlds being formed in the Metaverse. To what extent should the platform owners stay out, since these "members-only" spaces are joined by mutual consent, and the participants may want privacy? Or to what extent do the platform owners, such as Meta, have a responsibility to act to protect against bullying and harassment, particularly since the Metaverse is an immersive reality in which bullying and harassment may have severe consequences? If the platforms have a responsibility, what should they do? These are novel issues, and they amount to the beginnings of a social contract for how these new spaces in virtual reality should be governed.

What should be done?

The before-and-after questionnaire results provide guidance. Support for the proposal that "platform owners should have access to video capture in members-only spaces" rose significantly, from 59% to 71%, an increase of 12 points (means on the 0-to-10 scale rose from 6.814 to 7.253). There was also a significant increase in support for the proposal that, in "spaces where there is repeated bullying and/or harassment, platforms should take action against creators": support rose about 10 points, from 57.3% to 66.9% (means from 6.39 to 6.901 on the 0-to-10 scale).
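The significance tests behind these results are not described in this summary, but the size of the shifts is easy to sanity-check. As a back-of-the-envelope illustration in Python, a two-proportion z-test on the 59%-to-71% shift, assuming roughly 6,300 respondents per wave and treating the waves as independent (a simplification, since the same people answered before and after), yields a z-statistic far beyond the 1.96 threshold for significance at the 5% level:

    import math

    def two_prop_z(p1, p2, n1, n2):
        # Two-proportion z-statistic with a pooled standard error.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p2 - p1) / se

    # Reported shift on "platform owners should have access to video
    # capture in members-only spaces": 59% before, 71% after, assuming
    # ~6,300 respondents in each wave (an approximation for paired data).
    print(f"z = {two_prop_z(0.59, 0.71, 6300, 6300):.1f}")  # z ≈ 14.1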

What actions should they take? First, those spaces should be made less visible to users, according to 63% of the deliberators (up from 53%, an increase of about 9.5 points in the unrounded figures). Second, such spaces should no longer be publicly discoverable, a real disincentive to creators who want to grow their membership but have permitted repeated bullying and harassment. Third, for spaces where there is repeated bullying and/or harassment, "creators should be required to take a course on how to moderate the spaces they create." Support for this proposal rose from 67% to 78%, a rise of 11 points.

Fourth, for members-only spaces where there has been repeated bullying and harassment, users should receive a notification when entering the space. Support for such notification rose more than nine points, from 76% to 85%.

But the recommendations from the global sample are not punitive. The sample declined to endorse more severe punishments. For example, support for removing members-only spaces where there is repeated bullying only reached 43% (up from 39%). Support for banning creators from making additional members-only spaces if there was repeated bullying and harassment only reached 45% (up from 38%). Support for banning creators of such spaces from inviting additional people to join only reached 49% (up from 43%). Lastly, support for preventing creators of such spaces from making money off their spaces only reached 54% (up from 49%).

A full range of questions was also posed about public spaces. There was less concern for privacy in public spaces, so the platform's role in regulating and protecting against bullying and harassment drew clear support from the outset, and that support rose significantly with deliberation. Before-and-after questionnaire results for all the proposals, along with qualitative excerpts and analyses from the small group discussions, can be found in the report that follows.

Knowledge and evaluations

The participants were asked a series of nine knowledge questions about the Metaverse, about bullying and harassment, and about ways of detecting and responding to it. The average gain from deliberation on these questions was about ten percentage points, a statistically significant increase. Participants also became more confident in their knowledge of all aspects of this agenda. When asked whether they were confident about their knowledge of the Metaverse, Meta's role, the differences between public and members-only spaces, what bullying and harassment are like in the Metaverse, and what platforms like Meta are currently doing about it, more than 70% of participants expressed confidence on every question, and on some questions more than 80% did.

At the end of the process, participants completed a battery of evaluation questions. More than 80% thought that the important aspects of the issues were covered in the group discussions, that the experts presented different points of view in a balanced way, that the plenary sessions facilitated a balanced discussion, and that the issue guide was balanced. Some 73% thought that the platform "tried to make sure that opposing arguments were considered," and 78% thought the members of their group "participated relatively equally." More than 60% thought that the small group discussions and the process as a whole helped them clarify their positions on the issues. Reflecting on the event as a whole, 75% agreed, "I learned a lot about people very different from me—about what they and their lives are like."

The process also increased participants' sense that they had "opinions worth listening to on these issues," with agreement rising from 72% to 79%. A similar percentage said they were confident that they had "come to an informed judgment that Meta can consider in making decisions." Some 82% said they would "recommend this event to Meta as a way to make decisions in the future," and 79% said they were "confident that Meta will take the polling results seriously in making decisions."

This first-ever global Deliberative Poll sets an example for other global or near-global deliberative consultations. It is demonstrably representative and thoughtful. The world faces a host of difficult challenges: climate change, AI, and those identified by the Sustainable Development Goals. We think this design, implemented with new technology developed at Stanford, provides a basis for people around the world to have input into the dialog about the difficult trade-offs they will have to live with.

More information:
Metaverse Community Forum: Results Analysis. cddrl.fsi.stanford.edu/publica … rum-results-analysis

Provided by Stanford University