Meta Ran a Giant Experiment in Governance. Now It’s Turning to AI
Late last month, Meta quietly announced the results of an ambitious, near-global deliberative "democratic" process to inform decisions around the company's responsibility for the metaverse it is creating. This was not an ordinary corporate exercise. It involved more than 6,000 people, chosen to be demographically representative, across 32 countries and 19 languages. Participants spent many hours in small online group conversations and heard from non-Meta experts about the issues under discussion. Eighty-two percent of the participants said that they would recommend this format as a way for the company to make decisions in the future.
Meta has now publicly committed to running a similar process for generative AI, a move that aligns with the huge burst of interest in democratic innovation for governing or guiding AI systems. In doing so, Meta joins Google DeepMind, OpenAI, Anthropic, and other organizations that are starting to explore approaches based on the kind of deliberative democracy that I and others have been advocating for. (Disclosure: I am on the application advisory committee for the OpenAI Democratic Inputs to AI grant.) Having seen the inside of Meta's process, I am excited about it as a valuable proof of concept for transnational democratic governance. But for such a process to be truly democratic, participants would need greater power and agency, and the process itself would need to be more public and transparent.
I first got to know several of the employees responsible for setting up Meta's Community Forums (as these processes came to be called) in the spring of 2019, during a more traditional external consultation with the company to determine its policy on "manipulated media." I had been writing and speaking about the potential risks of what is now called generative AI and was asked (alongside other experts) to provide input on the kinds of policies Meta should develop to address issues such as misinformation that could be exacerbated by the technology.
At around the same time, I first learned about representative deliberations—an approach to democratic decision-making that has spread like wildfire, with increasingly high-profile citizens' assemblies and deliberative polls all over the world. The basic idea is that governments bring difficult policy questions back to the public to decide. Instead of a referendum or elections, a representative microcosm of the public is selected via lottery. That group is brought together for days or even weeks (with compensation) to learn from experts, stakeholders, and each other before coming to a final set of recommendations.
Representative deliberations provided a potential solution to a dilemma I had been wrestling with for a long time: how to make decisions about technologies that affect people across national boundaries. I began advocating for companies to pilot these processes to help make decisions around their most difficult issues. When Meta independently kicked off such a pilot, I became an informal advisor to the company's Governance Lab (which was leading the project) and then an embedded observer during the design and execution of its mammoth 32-country Community Forum process. (I did not accept compensation for any of this work.)
Above all, the Community Forum was exciting because it showed that running this kind of process is actually possible, despite the immense logistical hurdles. Meta's partners at Stanford largely ran the proceedings, and I saw no evidence of Meta employees attempting to force a result. The company also followed through on its commitment to have those partners report the results directly, no matter what they were. What's more, it was clear that some thought had been put into how best to implement the forum's potential outputs. The results ended up including perspectives on what kinds of repercussions would be appropriate for the hosts of metaverse spaces with repeated bullying and harassment, and what kinds of moderation and monitoring systems should be implemented.