First Report: Democratic Inputs to AI



High-level overview

In this report we present Common Ground: An application for hosting iterative, open-ended and conversational democratic engagements with large numbers of participants (deliberations at scale). We also present our vision for wider end-to-end processes to elicit democratic inputs to any topic.
Common Ground can be conceptualised as a multi-player variant of Pol.is. Instead of voting on Statements in isolation, we match participants into small groups of three people, where they are encouraged to deliberate over the Statements they vote on, and where an AI moderator powered by GPT-4 synthesises new Statements from the content of their discussion.
Common Ground is one part of a more comprehensive end-to-end process. Therefore, this report also describes our current vision on how we can embed Common Ground into a given socio-political context. It includes suggestions and explorations of processes for generating Seed Statements, UX testing for a given population, selecting a demographically representative sample, iterating, and reporting on the results.
 

Team background

When OpenAI first announced the Democratic inputs to AI grant, Jorim, Founder of Dembrane, raced to build a team capable of delivering something really unique. This is that team:
  • The consortium is spearheaded by Dembrane (Jorim & Evelien), a startup committed to building accessible tools that enable democratic decision making at scale.
  • Pepijn and Lei, from Bureau Moeilijke Dingen, bring decades of combined expertise in building complex applications.
  • Brett and Rich from the Sortition Foundation advised the team and provided critical feedback. Any truly democratic implementation of Common Ground would make full use of the Sortition Foundation's participant selection services.
  • Ran advises the municipality of Eindhoven on law & ethics, and brings decades of experience working in the public sector.
  • Aldo is a freelance researcher who specialises in collaborative communities.
  • CeesJan is a communication and networking expert.
  • Naomi, from an event agency, brings the know-how and experience to deliver large online events.
  • Rolf is a freelance researcher and consultant who specializes in online collaboration within civil society.
  • Bram brings a unique perspective on ethics and LLMs. He is writing his master’s thesis on machine ethics at JADS.
 

Motivation

Each of our consortium members has their own motivations for working in this space, some going back decades. Overarching and foundational to these motives is that we are all designers at heart: we have heard the groans expressed by many at current democratic processes, and the growing pains of the online modalities that have emerged in recent years. With the emergence of conversational AI, we saw an opportunity to explore an experimental alternative modality that is open-ended, iterative, conversational and empowering.
eDemocracy often trades off intimacy for scale. Many eDemocracy applications, and many public online forums in general, operate on a principle that an individual interacts with the larger community in a one-to-many relationship. As algorithms for presenting content become more complex, this relationship has evolved into a one-to-AI-to-many, where algorithms mediate the relationship between “users”.
We want to elevate and scale small group socialisation and deliberation: While current systems have their advantages, we believe that a key component of a well functioning democracy lies in the more intimate, social interactions between humans. Our vision is for a platform built around these small group interactions, that produces democratic insights that have been “pre-processed” by the crucible of social interaction, and co-validated by other small groups. By using AI to prompt, process, moderate and link these interactions we can produce these democratic insights at scale. As AI improves, so will the platform.
Make the process as accessible as possible: While tech offers opportunities for scaling democratic processes, it also adds an extra hurdle for many people who are already struggling with all the technological solutions required in current societies. As Gilman states in her essay on Democratizing AI, many platform designers fall into the trap of designing solutions with users such as themselves in mind (Gilman, 2023). By designing a process where ordinary conversation is participation, we hope to minimise this technological barrier.
We are against simulated deliberation: As we worked, our motivations were sharpened. Alternative modalities emphasise the content of the discussion, and this may be necessary for some tasks. We firmly believe, however, that the people deliberating and their complexity, nuance and inconsistency are just as necessary to the end goal of a healthy democratic process.

Conclusions

Our prototype can deliver immediate value: Our primary objective was to question and demonstrate the potential of Large Language Models (LLMs) as a pathway for scaling promising democratic processes like in-person citizens' assemblies. Current institutions, although designed to address complex societal problems, are struggling with a decline in public trust and struggling to keep up with the speed and magnitude of contemporary issues and technology development. We believe people have a desire to come together and discuss these issues, and that institutions may benefit from more widespread and higher-bandwidth forms of engagement. We believe our prototype can add value here.
We might need different tools for different situations: Based on our interaction with OpenAI and the other projects, it is our impression that democratic inputs to AI and AI contributions to democracy can take many different forms, should take many different forms, and have to be developed in and with their local contexts. As a commentator on our initial proposal remarked, "Socio-cultural specificity, not generalisability, might be a strength".
We are committed to iteration and integration with other processes: We repeatedly realised how key iteration and multi-scale processes are for any democratic process. For this reason, we are very pleased that Common Ground is capable of working together with other processes (see Intended Uses and Limitations), both on- and offline. Further refining the socio-political processes that provide inputs to Common Ground, as well as re-validating its outputs with other democratic processes such as those validated by the Collective Dialogues team, will be key as we look toward mature implementations.
Further development of Common Ground demands vigilance: This vigilance has two parts. Firstly, we must continually question the underlying assumptions, potential pitfalls, risks, and possible unintended adverse effects of introducing AI into democratic processes, not least by always checking and refining LLM outputs with real people; otherwise we risk falling into the fallacies and risks of democracy in silica.
Secondly, our process is inherently and somewhat intimately social. While this is by design, we observe that a significant portion of the population self-selects out of such explicitly social interactions with strangers. Similar issues are faced by in-person citizens' assemblies, where a small portion of the participants may need repeated encouragement before they share their opinions and gain confidence. While human facilitators were on call to help during the experiment, looking into active facilitation, coaching and aftercare for more sensitive participants may be crucial when deploying Common Ground.
We built and tested an innovative tool, and we look forward to deploying it in the real world. Our project sometimes struggled to balance ambitious goals and realistic timelines: we aimed to manifest transformative impacts while simultaneously focusing on prototype implementation and user engagement, and we learned that this was an overly ambitious objective within the given timeframe. Striking the right balance between aspiration and practicality is critical, and this learning will guide our future endeavours. Now that we have refined the prototype to the point that it can host hundreds of simultaneous participants, we look forward to implementing it in real-world democratic processes.

Intended uses, values and limitations

Intended uses

We think that Common Ground can be an excellent choice for running a deliberative exercise where stronger ties between participants, open-endedness and iteration are high priorities. In this way, we can position Common Ground in the existing landscape of matured tools: on the one hand there is Pol.is, which is open-ended and easy to use but does not bring people together for stimulating conversation; on the other, the Stanford deliberation software based on deliberative polling, which, while valuable, is bookended by polls of closed questions. Common Ground combines the benefits of human-to-human deliberation with the simplicity and open-ended nature of Pol.is.
The hard problem we are trying to solve and will continue to iterate on is how to make these conversations as rich, stimulating and diverse as possible, while also producing actionable and defensible outcomes.
Common Ground is part of an end-to-end process that must be customised to a specific context. Common Ground could in theory be used standalone, but in practice must be preceded by a participant selection process and an initial Seed Statement generation process. Data analysis and summarisation methods must also be integrated into the tool, and we are currently exploring integration with other methods (some from this OpenAI grant) that focus on this step.
Crucially, we do not think that Common Ground must be limited to deliberation between democratically selected participants. It could just as well be used to host conversations between experts from varying fields, or between members of membership-driven organisations like unions and NGOs, or for-profit companies with large contributor communities.

Values & Limitations

We have created the basic technological interaction infrastructure for supporting scalable, AI-moderated discussions. However, our prototype platform is only the first step on the way to reaching AI-mediated democratic impacts at scale. Reaching that level of impact still requires a lot of work and inputs from many different stakeholders. In this section, we explore some of the promises and pitfalls of both where we are now and where we could and should take development efforts next. We do this by first focusing on the values that have been and could be created. We combine this with reflections on some of the socio-technical limitations of the current implementation, which also suggest next steps for our development roadmap.

Value creation framework

To structure our thinking on the values our approach has and could deliver, we make use of the value-creation framework (Wenger et al., 2011). This framework provides a conceptual foundation for promoting and assessing value creation in communities and networks. It consists of a set of interlocking value-creation cycles driven by learning interactions in the community networks:
The Value-creation Framework (source: Wenger et al., 2011)
We briefly examine the results and (potential) outcomes and impacts through this lens.
 
 
 

Learning interactions

Learning interactions are the key drivers of value creation. Our main development focus has been on creating the core scalable socio-technical infrastructure in which small groups of participants can learn from one another by discussing and generating statements, facilitated by an AI-moderator that helps ensure the quality of the discussion process and results.
A unique feature of our platform is that it helps scale both group formation and distributed discussion processes. This combines the intensity and engagement of small-group interactions with the ability to, in theory, scale indefinitely*. Proof of concept that these core learning interaction processes work was delivered in the Prolific test runs.
*Technical note: Indefinite scaling will still require much optimisation, both in raw computational resources and in how the limited resource of human attention is spent. As the number of conversations scales, the amount of output data grows, whereas the amount of time any person is willing to contribute stays constant. This leads to sparsity in the attention per statement. This could be optimised by, for example, having the system learn which statements are worth spending more time on. These approaches have been examined in detail by Konya, Mcgil and many others.
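To make the sparsity concrete, here is a toy back-of-the-envelope model. All the per-group numbers (group size, statements generated per group, personal vote budget) are illustrative assumptions, not platform parameters:

```python
def statement_pool(participants, group_size=3, statements_per_group=3):
    """The statement pool grows linearly with the number of participants,
    assuming each group of `group_size` generates `statements_per_group`."""
    return participants / group_size * statements_per_group

def coverage(participants, votes_each=20, **kw):
    """Fraction of the statement pool any single participant can vote on,
    given a fixed personal budget of `votes_each` votes."""
    return min(1.0, votes_each / statement_pool(participants, **kw))
```

With these assumed numbers, at 15 participants one person can see the whole pool, at 30 about two-thirds of it, and at 3,000 well under 1% — which is why prioritising which statements to show becomes essential at scale.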

Immediate Value: Activities and interactions

The most basic cycle of value creation considers networking/community activities and interactions as having value in and of themselves.
Although the core learning interaction functionalities were still rough and very much under continuous development, many users have expressed fascination with the process. Despite all the technical hiccups, the unique amalgam of live "Zoom-like" human interaction, while being guided by an AI presence that keeps the focus and flow of the discussion going, seemed to unlock valuable conversations, peer learning and even a sense of greater trust in humanity.
Once in a while it is good to leave your echo chamber and share opinions with strangers.
I was able to express myself with people I did not know
In this digital society its important we talk to people of different backgrounds.

Potential Value: Knowledge capital

Activities and interactions can produce "knowledge capital" whose value lies in its potential to be realized later.
Key in the process is that it is not random discussion, which so often goes off on a tangent or stalls, but that it is very directed towards reflecting upon and generating meaningful statements. Using the right participant selection, LLM prompts, discussion moderation and other support processes, such statements can form a high quality knowledge base, a "meaning commons" that can feed into and be fed by larger societal contexts, such as citizens assemblies, city-wide deliberation and expert decision making processes.
In our short pilot, we have only touched the surface of what this "civic knowledge capital" could look like. However, we have shown proof of concept of how the basic generation processes of such "civic knowledge capital" can take shape and be supported.

Applied value: Changes in practice

Leveraging knowledge capital requires adapting and applying it to a specific situation.
There are two types of applied values: changes in practice for individual participants, and changes in institutional practices. Given our focus on core technological functionality implementation and testing, these values have not been demonstrated yet, but insights into plausible values have been obtained.
In the initial user tests, despite all the technological limitations, people were already engaged in discussing and generating statements, suggesting that interpersonal co-learning and participation in civic deliberation practices could benefit substantially from a more mature version of the platform, especially when onboarding, discussion configuration and support and follow-up processes have become more refined and integrated.
Once that level of socio-technical platform maturity has been achieved, institutional practices may well change too. The continued interest in and support for the project by the City of Eindhoven is indicative of this.
Another interesting application of our platform might be in not just feeding citizen statements as raw inputs into say government policy making processes, but vice versa, taking statements from policy reports as seed statements for our platform discussions, in this way engaging in "civic validation" as a form of direct democratic participation.
One immediate opportunity could be to have citizens discuss seed statements selected from the recently published report by an inter-provincial government body, "Exploration of ChatGPT, considerations for responsible use: advice from the interprovincial ethical committee regarding LLMs within the provinces" (IPO, 2023), and then have our platform generate a "citizens' perspective" addendum on that report for further reflection by the provinces.

Realised value: Performance improvement

Reflecting on what effects the application of knowledge capital is having on the achievement of what matters to stakeholders, including those who apply new practices.
It is too early to make deep observations on this value. Moreover, performance improvements on complex, fuzzy processes like deliberating democratic inputs to AI (and vice versa) are hard to measure at the societal level. Still, there is a rich set of value creation indicators in and beyond the value-creation framework that could be developed to capture the essence of such processes (e.g. citizen engagement satisfaction and organisational reputation indicators).
It is important to go (far) beyond quantitative indicators here. Qualitative indicators are just as, if not more, important for capturing the essence of such complex societal-technical engagements. In particular, gathering value creation stories could be essential (and those stories might themselves become an input for a next round of platform-mediated civic or expert deliberation!).

Transformative value: Redefining success

A reflection and reconsideration of the learning imperatives and the criteria by which success is defined. This includes reframing strategies, goals, as well as values.
This is where we would see real societal impacts of the processes we have explored in the previous value creation cycles. Typically, you would have these deliberation-at-scale processes embedded in solid, sustainable social contexts. For example, what if experiments such as the one proposed above (on the civic validation of interprovincial AI ethics policy formulation) would prove successful and become a standing component of any local or provincial government ethical rules & regulations setting process? This would of course require much more work on socio-technical configuration of the platform, including validated discussion process flow and report generation.
Besides that internal, platform-oriented focus, there should also be a societal, outward-looking perspective on developing transformative impacts. Helpful frameworks here could, for instance, include the collective impact framework, with its five conditions of collective impact (Kania and Kramer, 2011). Such conditions could help to create the embedding contexts for deliberations-at-scale, making sure that they become sustainable and impactful.
The five conditions of collective impact (Kania and Kramer, 2011)
Another direction we would want to explore is scaling up storytelling, not just as part of the platform discussions or the evaluation of their performance, but also as a way to have insights generated in the discussions create societal impact by reaching audiences outside the platform discussions (and vice versa, have those "real world" stories become inputs in next rounds of deliberations-at-scale). By creating such "storytelling cycles of trust" (Copeland and De Moor, 2018) through conversations on Common Ground, we could spark a ripple effect across society. This effect would manifest as powerful, authentic narratives creating a legitimate, engaging story network with significant societal resonance.
The storytelling cycle of trust (Copeland and De Moor, 2018)

Limitations

While the platform shows promise, it's crucial to acknowledge its current limitations.
  1. Limited Contextualisation: While the discussions contribute to participants' insights and enjoyment, the link to broader societal or political contexts can be unclear. This risks reducing the impact of the conversations on policy decisions.
  2. Visibility Gaps: Participants can't yet view aggregated results across discussions, which may limit collective learning and shared understanding.
  3. Role Homogeneity: As of now, all participants occupy the same role on the platform. In the future, specific interfaces for facilitators, researchers, experts etc. could make adoption easier.
  4. AI Constraints: Our AI moderator still relies heavily on text and can't understand the nuance of human interaction beyond the words typed. This makes it hard to find and discourage bad-faith actors.
  5. Safety and Security: Our system is in its infancy, and beyond row-based security and authentication, we have not yet addressed how we can further limit and deter bad-faith actors from manipulating the system.
  6. Session Closure Ambiguity: It's not always clear how a conversation or the entire session ends, which can disrupt the participant experience and impede closure.
  7. Moderation Limitations: Although a "help" button exists, reliance on human moderators to jump in isn't scalable and could slow down the conversation flow.
  8. Facilitation Method: Resembling the World Café method, it's yet to be seen how this method translates into a digital space when catering to thousands.

Process details

Key Themes

  1. AI-facilitated deliberation process for community involvement.
  2. Four sub-processes: Community engagement, Institutional engagement, Representative participant selection, Deliberation at scale.
  3. Importance of understanding community and institutional needs and concerns.
  4. Participant selection based on the population’s size and demographic traits.
  5. Role of AI in managing group deliberations, proposal generation, and result outputs.

End to end

The end-to-end process is AI-facilitated deliberation at scale. The inputs to this process are a socio-political context (such as a city, a country or an organisation) and a topic to deliberate. The outputs are insights into what people in this context think about the topic, as well as actionable common ground. Additional impacts are that participants are empowered, build trust, and have an enjoyable experience.
This process can be broken down into four sub-processes.

Community engagement

We believe it's important to recruit local participants early in the process (for example through snowball sampling) to understand what people care about, what their needs and worries are, and how they relate to the institutions that serve them. Additional impacts are increased community engagement and publicity.
In our process, we interviewed people from the community: on the street, but also in phone and video calls that people could book with us. This was quite low-key, but one could of course be more rigorous in documenting the insights from these conversations, collecting possible seed statements and publishing meaningful stories.

Institutional engagement

Understanding the institutions serving a given context is crucial to implementing an effective democratic process. This can be done by recruiting non-partisan panels of experts through purposive sampling. Networking and access are key factors here to achieve a good scope and an in-depth understanding of how and why an institution would listen to the outcomes generated by the process. Additional impacts of this process are, for example, building trust with decision makers.
In our process, we did this by engaging with employees of the local municipality and assembling ethical experts and people working in the social sector, as well as politicians, to provide inputs on what they thought was important about our process, what things we should consider, and providing a much-needed reflection.
This was an important part of our process, as it showed us that the needs and worries of the local government were different from the needs and worries of, for example, OpenAI.

Representative participant selection

The third sub-process is representative participant selection, which can be broken down into two steps: defining a population, and then selection, which could be opportunistic or more rigorous, using two-step stratified random sampling.
In the case of running a process for a city, defining the population means deciding whether it includes only people who live in the city, or also those who commute to it. For a company or an organisation, it might be members, or also employees, of that organisation.
The selection depends on the size of the population and a legitimacy tradeoff. If the population is large and legitimacy must be high, stratified and randomised participant selection is the gold standard. This process is defined, outlined and delivered by the Sortition Foundation: they take a set of potential participants and perform two-step stratified selection, outputting a set of participants that is maximally representative of a given community.
In our process, we used Prolific to select a balanced sample of participants. This is not ideal for democratic inputs, but can be easily swapped out for a more rigorous participant selection process.
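A single stratification pass can be sketched as follows. This is a toy illustration only: the Sortition Foundation's actual two-step algorithm handles multiple interacting quotas (age × gender × region, etc.), and the `strata_key`/`targets` names here are hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(pool, strata_key, targets, seed=0):
    """Toy single-pass stratified selection: draw targets[s] people
    uniformly at random from each stratum s of the volunteer pool."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[strata_key(person)].append(person)
    selected = []
    for stratum, n in targets.items():
        candidates = by_stratum.get(stratum, [])
        # If a stratum is under-represented in the pool, take what exists.
        selected.extend(rng.sample(candidates, min(n, len(candidates))))
    return selected
```

In practice, the quotas in `targets` would come from census data for the defined population, and under-filled strata would trigger further recruitment rounds.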

Find Common Ground

The fourth sub-process, the deliberation at scale, is driven by the Common Ground application itself and is broken down into five sub-processes.
1. Group Selection
The first sub-process is the creation of small groups. Currently we implement a naive approach, matching participants with each other at random. An ideal process would take into account demographic features of the participants to create maximally dissimilar groups, while also optimising for wait time, which we found to be crucial to the enjoyableness of the process: participants who waited too long for a group found it less enjoyable.
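One way such an "ideal" matcher could work is a greedy heuristic that serves the longest-waiting participant first, then fills their group with the most demographically dissimilar others. This is a hypothetical sketch, not the platform's implementation; participants are assumed to be dicts of demographic attributes, ordered by arrival time:

```python
from itertools import combinations

def dissimilarity(group):
    """Count pairwise demographic differences within a candidate group
    (assumes all participant dicts share the same attribute keys)."""
    return sum(
        sum(a[k] != b[k] for k in a)
        for a, b in combinations(group, 2)
    )

def match_groups(waiting, group_size=3):
    """Greedy matcher: repeatedly pop the longest-waiting participant,
    then complete their group with the most dissimilar remaining people."""
    groups = []
    queue = list(waiting)  # assumed ordered by arrival time
    while len(queue) >= group_size:
        anchor = queue.pop(0)
        rest = max(
            combinations(queue, group_size - 1),
            key=lambda r: dissimilarity((anchor,) + r),
        )
        groups.append((anchor,) + rest)
        for p in rest:
            queue.remove(p)
    return groups
```

Because the longest-waiting person is always seated first, wait time is bounded even when maximal dissimilarity cannot be achieved; a production version would also add a timeout that relaxes the dissimilarity objective entirely.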
2. Video-based small group deliberation
The next sub-process is the video-based small group deliberation, facilitated by an AI. This is the core and the heart of our process. It takes as an input a set of three or four participants and a set of statements to start the deliberation, and it outputs a set of statements that the group voted on and also new statements that they generated together. The additional impacts are that participants learn about each other, their opinions, but also about their stories.
As of now the AI moderator facilitating this conversation is “deaf and blind”: we rely on messages typed out by participants to generate new statements. These new statements are verified by the group in a Pol.is-style vote.
To enhance this process, we want to continue developing a conversation transcription and moderation engine for better understanding and steering of what's going on within a group discussion. Multimodal models could understand not just the content of the discussion but also what is happening on camera and participants' tone of voice (to detect sarcasm, for example).
3. Review
The third sub-process of the deliberation at scale is a personal review and check-in. Unfortunately, we haven't integrated this into the platform yet, but we hope to soon. In the meantime, we just used an evaluation form to check in whether people enjoyed the process and what they got out of it. We also had a button inside the chat interface where people could call upon the help of a facilitator who could then join the video call and solve any problems as they arose.
4. Cross-pollination
The fourth sub-process of deliberation at scale is Statement cross-pollination. For now, we used a naive approach and showed statements to groups at random. As the deliberation scales into thousands of participants, it will be necessary to prioritise certain statements based on the outcomes of previous votes.
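One plausible prioritisation scheme (our speculation, not the implemented approach) is a bandit-style score that favours statements that are contested and under-sampled, so scarce human attention goes where votes are most informative:

```python
import math

def priority(agree, disagree, total_shown, c=1.4):
    """UCB-style score: prefer statements whose vote split is near 50/50
    (contested) and which few groups have seen (exploration bonus)."""
    n = agree + disagree
    if n == 0:
        return float("inf")  # always surface brand-new statements first
    p = agree / n
    contested = 1 - abs(2 * p - 1)               # 1.0 at an even split
    explore = c * math.sqrt(math.log(total_shown) / n)
    return contested + explore

def next_statements(stats, k=3, total_shown=100):
    """Pick the k highest-priority statements to show the next group."""
    ranked = sorted(
        stats,
        key=lambda s: -priority(s["agree"], s["disagree"], total_shown),
    )
    return ranked[:k]
```

A statement with unanimous agreement quickly drops in priority, while a divisive one keeps attracting votes until its split is well estimated, which is the behaviour the paragraph above calls for.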
5. Results generation
The fifth sub-process is the result engine, taking as input the set of statements and the votes of participants, and having as output whatever text would be needed by the organisation running the process. That could be an alignment text, but also a set of representative statements, or perhaps even a moral graph.

What a run of the process looks like

Caveat: We tried to run a process of crowd-sourcing inputs to AI while simultaneously crowd-sourcing inputs to our tool as we were building it. We could have done this more rigorously, but it did lead to very valuable outcomes that strongly influenced the design of the tool. If you are interested in these details, see Results.

Step 1: Set the scene

In our case, our socio-political context was the municipality of Eindhoven, and the topic was “Democratic inputs to AI” - which we combined into the topic statement “Can AI help local government?”.

Step 2: Engage rule affected people

We put up a quick website with links for people to reach out to us, which we used to explain the process, show mockups and prototypes, and invite critique. In total, 17 people participated in these calls. We also interviewed people on the streets to find out how a wide variety of people were thinking (or not) about AI; in total we interviewed 22 people.
Outcomes: These conversations were recorded, transcribed using Whisper v1, and analysed both by and with GPT-4 to distill the findings.

Step 3: Engage decision makers

In parallel, we organised sessions with the local government of Eindhoven, where we talked to ethical experts and legal experts within the municipality in order to understand their context and what they would like to discuss.
Outcomes: Civil servants appreciated the value of democratic inputs to AI, but criticised the list of initial questions provided by OpenAI. For them, there was a clear conflict of interest in an AI company spearheading a democratic process about AI. The tension within the group revolved around the ongoing EU debate about technology “gatekeepers” and the balance between innovating with closed-source LLMs such as GPT-4 and waiting for open-source and/or EU-based solutions. On the basis of these discussions, we agreed to host a round table discussion with various legal, ethical and social work experts to dig deeper. At the request of the participants, and unlike the interviews with the community, these discussions were not recorded.

Step 4: Make a list of seed statements

From these discussions, we then formulated the list of seed statements below. These seed statements were used as the inputs to the Common Ground deliberation. The creation of seed statements is an important step with a large impact on the perceived legitimacy of the process, and we are developing a rigorous method for it.
Demo Seed Statements
  • AI assistants should refuse to answer questions about public figures' viewpoints.
  • AI in public services should offer emotional support functions for residents in need.
  • Local government AI systems should not be authorized to identify individuals by gender, race, emotion, or name.
  • AI assistants should never answer questions about recent events.
  • The development of AI for local governance should prioritize the implementation of local ethical and civic values.
  • AI systems used in local government should not express opinions.
  • AI research should be paused until comprehensive governance frameworks are in place.
  • The source code for AI systems used in local government should be open for public scrutiny.
  • Public adoption of AI in local government should be delayed until public awareness and education are improved.
  • Local governments should collaborate more with private companies to optimize the benefits of AI.
  • AI should be able to check the internal consistency of local government policies to prevent loopholes and regulatory conflicts.
  • AI should be used to help municipalities communicate more clearly with citizens.
  • The inclusion of AI will increase the capabilities of local governments.
  • The use of AI will empower community action and civic participation.
  • Local governments should maintain a public registry of all AI systems in use and their specific functions.
  • AI systems used in local government should be open-source for transparency.
  • Implementing AI in public services should be subject to public referendum.
  • AI moderation tools should be considered for use in local government deliberations and public forums.
  • Development of AI for local governance should prioritize accessibility for all residents.
  • When AI signals something unlawful, it should report this to the local police.
  • AI should never be used in life-and-death situations due to the risk of failure.
  • AI-generated art devalues human creativity.
  • AI that mimics human interaction too closely can lead to unhealthy social dynamics.
  • AI will make life easier.
  • AI will complement rather than replace human roles in local government.
  • A tax should be considered for local businesses that replace human labor with AI.
  • The use of AI will simplify many aspects of local governance.
  • The integration of AI may complicate some aspects of local governance.
  • AI should be used in local educational systems to personalize learning experiences.
  • AI is a tool that can benefit all age groups within a community.
  • I am ready to see AI integrated into local governance processes.
  • I am skeptical about the implementation of AI in local governance.
  • Each municipality should establish a public space where residents can interact with and learn about AI systems in use.
  • AI will make life more complicated.

Step 5: Select participants

For simplicity, we recruited participants through the participant recruitment platform Prolific. We built a custom Prolific login flow for the Common Ground application, so we only needed to send Prolific a simple link and participants could log in with their Prolific ID.
We set the Prolific study settings to include only people who had consented to webcam use during a study, and set the total number of participants to 450. For those interested, we are happy to share the study description and the demographic data.
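The login flow above can be sketched as follows. This is a sketch rather than the production implementation, and it assumes Prolific passes the participant ID to the study link as a `PROLIFIC_PID` query parameter:

```python
from typing import Optional
from urllib.parse import parse_qs, urlparse

def extract_prolific_id(url: str) -> Optional[str]:
    """Pull a participant's ID out of an incoming study link.

    The PROLIFIC_PID parameter name is an assumption about Prolific's
    URL convention, not taken from the Common Ground codebase.
    """
    params = parse_qs(urlparse(url).query)
    return params.get("PROLIFIC_PID", [None])[0]

# A participant arriving from Prolific might follow a link like this;
# the application can then create a session keyed on the extracted ID.
link = "https://common-ground.example/join?PROLIFIC_PID=5f8a0001&SESSION_ID=s1"
participant_id = extract_prolific_id(link)
```
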

Step 6: Run the app

One specific aspect of Prolific is that it does not yet support time-sensitive studies, so you won't get hundreds of people on the application at once. We had to open up the deliberation, wait for people to start rolling in (and apologise to the first participants, who had to wait a little before getting matched); once there was a critical mass, the average wait time in the queue went down.
Built into Common Ground is a help button that sends a notification to the moderator on call, who can easily join the conversation and resolve any issues. The only calls we received were about technical issues caused by the AI moderator not progressing when one of the participants left the call.
We ran the study for a total of 8 hours. After the Prolific test was over, we had about 450 participants, of whom 350 deliberated for more than 30 minutes. We compensated participants at £11 an hour. Once data collection had finished, we moved into the results phase.
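The lobby behaviour described above, where participants queue until enough arrive to form a room, can be sketched as a simple greedy matcher. This is a hypothetical simplification; the real matcher presumably also handles wait times, reconnects and dropped participants:

```python
from collections import deque

def match_groups(queue: deque, group_size: int = 3) -> list:
    """Greedily pop full groups of `group_size` from the waiting queue.

    Participants who do not yet fill a group stay queued until more
    people arrive, which is why early arrivals waited longest.
    """
    groups = []
    while len(queue) >= group_size:
        groups.append([queue.popleft() for _ in range(group_size)])
    return groups

lobby = deque(["p1", "p2", "p3", "p4", "p5", "p6", "p7"])
rooms = match_groups(lobby)  # two full rooms; "p7" stays queued
```
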

Step 7: Run the results engine

First we took the list of outcomes generated by the Common Ground application and ran some SQL queries to create an aggregate view of the data needed for analysis. The result was a list of all the statements and their votes, which we exported as a CSV and brought into Google Colab.
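As an illustration of this aggregation step, here is a minimal sqlite3 version. The real schema and queries are not shown in this report, so the table and column names below are assumptions:

```python
import sqlite3

# Hypothetical votes table: one row per (statement, vote) pair.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE votes (statement_id INTEGER, vote TEXT);
    INSERT INTO votes VALUES
        (1, 'agree'), (1, 'agree'), (1, 'disagree'),
        (2, 'pass'),  (2, 'disagree');
""")
# Aggregate view: per-statement counts of each vote type. SQLite
# evaluates the boolean comparisons as 0/1, so SUM counts matches.
rows = con.execute("""
    SELECT statement_id,
           SUM(vote = 'agree')    AS agree,
           SUM(vote = 'disagree') AS disagree,
           SUM(vote = 'pass')     AS pass
    FROM votes
    GROUP BY statement_id
    ORDER BY statement_id
""").fetchall()
# Each row is (statement_id, agree, disagree, pass); a view like this
# can then be exported as CSV for analysis in a notebook.
```
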
From that list of statements, we calculated the Chi-squared statistic for each. Because of the relatively low number of votes per statement, the p-value was 0.0 for many of them and was of little use. Instead, we used the Chi-squared statistic directly as a stand-in measure of surprisal: how different the votes on a given statement were from the average votes across all statements.
We then calculated the difference between agree and disagree votes as a percentage of total votes, which we called the agreement difference.
We ranked the statements by their Chi-squared statistic, then selected the top statements with more agreement than disagreement until we filled the LLM's context limit; this yielded a list of about 20 statements. We did the same for statements with more disagree votes. Together, these formed our final list of statements.
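The surprisal ranking described above can be sketched in a few lines of Python. The statement names and vote counts here are illustrative, not the study's actual data:

```python
def chi2_statistic(observed, expected_props):
    """Pearson chi-squared statistic of one statement's (agree,
    disagree, pass) counts against the corpus-wide vote proportions,
    used here as a "surprisal" score rather than for a p-value."""
    total = sum(observed)
    return sum(
        (obs - total * p) ** 2 / (total * p)
        for obs, p in zip(observed, expected_props)
        if p > 0
    )

# Corpus-wide (agree, disagree, pass) proportions, roughly matching
# the skewed Prolific distribution reported below.
overall = (0.79, 0.15, 0.06)
statements = {
    "stmt_a": (20, 1, 1),  # close to the average split: low surprisal
    "stmt_b": (3, 15, 4),  # heavy disagreement: high surprisal
}
# Rank by surprisal; splitting the ranking by agreement difference
# then yields the agree-leaning and disagree-leaning lists.
ranked = sorted(
    statements,
    key=lambda s: chi2_statistic(statements[s], overall),
    reverse=True,
)
```
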
📢
Caveat: there were far fewer disagree statements than agree statements, because our vote distribution over the entire Prolific set was heavily skewed towards agree. This contrasts sharply with the test we ran with local participants in the municipality of Eindhoven, where the voting distribution was much closer to an even split between agree, disagree and pass.
Final list (agree)
  • AI assistants should strive to deliver impartial and varied information to promote fair knowledge dissemination.
  • Local governments should employ AI responsibly to enhance efficiency and save time, but also ensure regulations are in place to prevent overdependence.
  • Local government should integrate AI moderation tools while maintaining the presence of human moderators to ensure a balanced deliberation process.
  • Education authorities should employ AI to personalize learning experiences, thereby optimizing resources and freeing up staff for critical tasks.
  • Municipalities should carefully implement AI, respecting the privacy of citizen data, to enhance communication and services.
  • Local government should consider using AI moderation tools as enhancements, not as the sole organisers of deliberations, while still retaining human input to maintain balanced discussions.
  • AI systems in local government must deliver not only accurate data but should also ensure not to distort or misinterpret facts to uphold the quality and veracity of information.
  • AI developers must design safe systems for children's interaction to prioritise their welfare.
  • Local government should utilise AI moderation tools as enhancements, but must also rely on human engagement to achieve well-balanced deliberations.
  • Developers of AI should design these technologies to assist, not replace, human interactions, thereby preserving our inherent social nature and ensuring appropriate human oversight.
  • Governments should develop comprehensive AI governance frameworks without jeopardising innovation to promote further technological advancement.
  • AI technology must remain neutral and unbiased to optimise decision making in local government while safeguarding user privacy.
  • All stakeholders, including technologists, parents, learners, and education authorities, should collaboratively enhance Artificial Intelligence in personalised learning to adequately cater to learners' overall development needs.
  • Primary schools could incorporate basic climate change discussions into their curriculums to initiate idea generation and consciousness among young learners.
  • Learners, education authorities, and parents should collaboratively evaluate and regulate AI in personalised learning, meeting appropriate data security requirements, to ensure education is to the benefit of everyone.
  • All individuals should cautiously approach AI's capacity to ease aspects of life, mindful of the existence of many unknowns.
  • AI stakeholders must strive for accessible solutions with appropriate safeguards to ensure more positive than negative social impacts.
  • AI developers should design systems that do not promote improper interaction, especially with children, to preserve healthy societal dynamics.
  • AI systems must continuously update and present credible information, prioritising balance and diversity of viewpoints, to build public trust and confidence.
  • All stakeholders involved in AI research must ensure robust data security and avoid unnecessary data retention to safeguard personal data privacy.
  • AI should not replace human support in emotional assistance to preserve the unique nature of human empathy and must be supervised to ensure this.
  • Society should cautiously approach the implementation of AI, considering potential complexities, to respect varying individual comfort levels with technology.
  • AI assistants should improve efficiency and reduce incompetence and bias in local government to support innovation and streamline service delivery.
  • AI stakeholders should program inclusivity into their strategies, without profit as the main aim, to ensure equal opportunities and societal cohesion.
  • Local governance should utilise AI to enhance efficiency and foster societal growth, but must also oversee its use to ensure job security.
  • People should have the choice to use AI as a tool to simplify tasks if they wish, mindful of some people's struggles with technology, and companies must ensure AI use is not restrictive or frustrating.
  • AI developers must rigorously test their systems before deployment, particularly in domains such as education, to ensure societal good and trust in technology.
  • Schools should teach climate change issues to enhance overall awareness and contribute to global understanding.
  • Local governance must cautiously adopt AI to improve efficiency and provide learning opportunities, always considering expert advice to ensure human jobs are not replaced.
  • Education authorities, alongside private industries, should effectively implement tested AI for personalized learning, involving parents and teachers, to bolster individual learning experiences and address specific student challenges.
 
Final list (disagree)
  • Every citizen should engage in the cautious implementation of AI in emotional support roles to ensure the systems meet complex, empathetic needs efficiently without burdening individuals.
  • Local governments must not adopt AI, as it may not be beneficial for all age groups.
  • AI in public services should offer emotional support functions for residents in need.
  • AI assistants should never answer questions about recent events.
  • AI assistants should offer the viewpoints of selected public figures, enhancing information accessibility and user preference engagement.
  • AI assistants should provide viewpoints of public figures for specific instances, thereby making information readily accessible but crucially not promoting diverse perspectives effectively.
  • Public services should use AI for emotional support functions as an adjunct, not a replacement, to human support, in order to aid residents in need.
  • Artists should harness AI for content creation to fuel innovation without stifling human creativity.
  • Public services should judiciously use AI for initial emotional support, yet must rely more on human handlers for profound emotional assistance owing to their superior empathy abilities.
  • A tax should be considered for local businesses that replace human labor with AI.
  • The use of AI will empower community action and civic participation.
  • AI assistants should refuse to answer questions about public figures' viewpoints.
  • AI assistants should provide information on various public figures' viewpoints to promote diverse perspectives.
  • Implementing AI in public services should be subject to public referendum.
  • When AI signals something unlawful, it should report this to the local police.
  • AI should promptly report any detected unlawful activities to local law enforcement for a thorough investigation, maintaining high ethical standards.
Finally, we took these statements and fed them into GPT4 with a prompt aimed at summarising and deduplicating them, generating a new set of statements.
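The prompt-construction part of this step might look like the following sketch. The report does not reproduce the exact prompt used in the study, so treat the wording as an illustrative reconstruction:

```python
def build_dedup_prompt(statements):
    """Assemble a summarise-and-deduplicate prompt for the model.

    The instruction text is invented for illustration; the actual
    study prompt is not reproduced in this report.
    """
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(statements))
    return (
        "Below is a list of statements produced by a deliberation. "
        "Merge near-duplicates, summarise each cluster as a single "
        "statement, and return a deduplicated numbered list.\n\n"
        + numbered
    )

prompt = build_dedup_prompt([
    "AI systems used in local government should be open-source.",
    "The source code for AI systems in local government should be public.",
])
# `prompt` is then sent to the model as a chat message.
```
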
Deduplicated and summarised list
1. AI should be utilized by local governments to enhance efficiency and service delivery, while ensuring responsible use and preventing overdependence.
2. AI moderation tools should be integrated in local government processes, supplementing, not replacing, human moderators for balanced deliberation.
3. AI technology in local governance must uphold neutrality, accuracy, and unbiasedness to optimize decision-making and safeguard user privacy.
4. AI systems in local government should continuously update and present credible, diverse, and balanced information to build public trust and confidence.
5. AI developers must design systems that prioritize safety, especially for children's interaction, and do not promote improper interaction to maintain healthy societal dynamics.
6. AI should not replace human support in emotional assistance to preserve the unique nature of human empathy and must be supervised to ensure this.
7. Education authorities should effectively implement AI to personalize learning experiences, thereby optimizing resources and freeing up staff for critical tasks.
8. All stakeholders involved in AI research must ensure robust data security and avoid unnecessary data retention to safeguard personal data privacy.
9. AI stakeholders should strive for accessible solutions with appropriate safeguards to ensure more positive than negative social impacts.
10. AI should be designed to assist, not replace, human interactions, thereby preserving our inherent social nature and ensuring appropriate human oversight.
11. Governments should develop comprehensive AI governance frameworks without jeopardising innovation to promote further technological advancement.
12. AI stakeholders should program inclusivity into their strategies, without profit as the main aim, to ensure equal opportunities and societal cohesion.
13. Local governance should cautiously adopt AI to improve efficiency and provide learning opportunities, always considering expert advice to ensure human jobs are not replaced.
14. AI developers must rigorously test their systems before deployment, particularly in domains such as education, to ensure societal good and trust in technology.
15. Society should cautiously approach the implementation of AI, considering potential complexities, to respect varying individual comfort levels with technology.
16. People should have the choice to use AI as a tool to simplify tasks if they wish, mindful of some people's struggles with technology, and companies must ensure AI use is not restrictive or frustrating.
We started with the topic “Can AI help local government?”, but the statements generated by participants went beyond local government, also covering LLM designers, research labs, education, and more. Ideally, we would take those statements, rerun them or refine them with the help of experts, and then rerun the refined statements.

Results

It was our intention to run a process that reflected our vision of a gold-standard democratic process, but due to the amount of time required to develop our prototype, our biggest result is the prototype itself.
The initial idea of the app was to match strangers together in video calls, transcribe their conversation live, and have a dynamic, interactive AI moderator that could intelligently guide the conversation to generate statements. Live transcription proved to be a bigger technical challenge than we expected, partly due to optimistic initial estimates of using Whisper version 1 as a live transcription engine for conversational dialogue.
After our trip to San Francisco, we decided to pivot to a non-transcription-based version. This was difficult but necessary given the time constraints, and led to the expected feedback that it was hard to switch between a spoken and a typed modality. Despite this, the statements seemed to really spark conversation among the participants and guide them towards a productive and pleasant time with one another.
Throughout this process, we've conducted five live tests (two internal and three with external participants) with a combined 491 people. We now have clear goals for the next iteration, specifically relating to live transcription, filtering low-effort responses and prioritising statements with a high surprisal.
notion image
 

Test runs with deliberating participants

Test 1 - Eindhoven: UX test

We knew that we wanted to test the application in the context of our model city Eindhoven at least once. The initial plan was to do a series of tests with Eindhoven residents, but because development of the application took longer than expected, we had to compress this down into one small test.
For the evening of the test, we set up an event page and spent some money on social media advertising to attract people outside our immediate social circle, of whom there were five.
During the test we encountered some technical problems, including in the waiting lobby where people were queuing for rooms, and with the generation of new statements. Nevertheless, some rooms worked as intended, and this test gave us a lot of information with which to improve the backend and make refinements to the queuing infrastructure.
We also made some UX changes: we added a next button so that users could control when they progressed to the next statement, and we realised that a help button was needed, so we built one in time for the next test.
We also ran this test in two languages on two separate backends. This went well, although some method of combining statements across languages and participants with different preferred languages would streamline the process, especially at scale.
📋
Test details n: 12
Date: October 12th, 7-9PM (CEST)
There were 101 votes on 33 different statements. Of these, 24 votes on 4 statements were deemed significant using a Chi-squared test and setting p<0.05.
The vote distribution was [agree: 44%, disagree: 30%, pass: 27%]
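The significance test reported in these details can be reproduced without any statistics library. With three vote options the test has two degrees of freedom, for which the chi-squared survival function reduces to exp(-x/2). The vote counts below are made up for illustration:

```python
import math

def chi2_pvalue_df2(stat):
    """p-value for a chi-squared statistic with two degrees of freedom
    (three vote categories give df = 3 - 1 = 2), where the survival
    function is exactly exp(-x / 2)."""
    return math.exp(-stat / 2)

def is_significant(observed, expected_props, alpha=0.05):
    """Flag a statement whose (agree, disagree, pass) counts deviate
    significantly from the overall vote distribution."""
    total = sum(observed)
    norm = sum(expected_props)  # tolerate rounded proportions
    stat = sum(
        (o - total * p / norm) ** 2 / (total * p / norm)
        for o, p in zip(observed, expected_props)
        if p > 0
    )
    return chi2_pvalue_df2(stat) < alpha

# This test's overall distribution was roughly 44% agree, 30% disagree,
# 27% pass (from the details above).
overall = (0.44, 0.30, 0.27)
lopsided = is_significant((10, 0, 0), overall)  # unanimous agree
average = is_significant((4, 3, 3), overall)    # near the overall split
```
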
Generated statements
Tally scores Ehv UX
notion image
notion image
Some Quotes:
Even after knowing so much about this project, I was completely taken aback by how fun this was!
It will work. When deployed differently
I think the potential here is very high! But this was a demo, so of course there were still some issues. I thought it took way too long when we couldn't reach a consensus (which was most cases). When there was no consensus, everything was just stuck and we felt forced to just vote the same even if we didn't actually agree.
 
 

Test 2 - Prolific 1: Technical scale test

After the initial Eindhoven test, we made significant changes to the infrastructure and waiting rooms, plus some UX improvements. We wanted to test these thoroughly, along with the as-yet-untested Prolific onboarding and login flow, with real users before scaling up to hundreds of users for the actual data collection test.
This test went fairly smoothly, although there were more issues with queuing, and some users were left in rooms with fewer than three participants because some Prolific users dropped the calls relatively quickly. This gave us much-needed experience working with Prolific, and one change we made for future iterations was to screen participants based on a high approval score from their past Prolific studies, to avoid the low-effort responses that were leaving users stuck in calls with fewer than three people.
A lot of the feedback focused on how hard it was to switch context between talking and typing, something that is out of scope to solve within this phase of the project but still in line with our vision.
📋
Test details n: 20
Date: October 16th, 7-9PM (CEST)
Tally scores Prolific technical
notion image
notion image
Some Quotes:
Fairly well thought statements were included. Some were quite thought provoking.
Sometimes we were not sure whether to speak or type, I think I would just ask for people's views rather than typing as it's quicker and easier
I was hoping for more interactive and intuitive experience
Other participants were disengaged
 
 

Test 3 - Prolific 2: Data collection

We split the third, large test into two parts. Due to the short deadline, we opened the Prolific data collection to all English-speaking participants, rather than limiting it to the US or the UK or to a demographically representative sample.
On the whole, more than 80 percent of people had a wonderful time on the platform, but a small number continued to experience technical difficulties with the video calling; most of these were UX issues or device-specific problems.
Some people also had a subpar experience because their conversation partners were disengaged. This is an inherent issue with paying research participants: some people are high-effort contributors in good faith and others aren't, which makes good matches hard to guarantee.
This test clearly demonstrates the potential of the system, but also that further refinement is necessary to filter out low-effort contributions and to resolve the device-specific and user-specific problems that cause issues with the video calling.
One of the biggest points of feedback was that people found it jarring to change from a spoken modality to a typed modality, which was necessary because we did not succeed in getting live transcription to work in time.
On the whole, though, the test was successful in generating statements with statistical significance. It was also successful in that the majority of participants really did enjoy their experience and went out of their way to explain how important it was for them to talk to diverse people outside of their comfort zone and outside of their echo chamber.
📋
Test Details
n = 449
Date: October 18th, 9PM-1AM (CEST) & October 19th, 4PM-6:30PM (CEST)
There were 6835 votes on 602 different statements. Of these, 5540 votes on 226 statements were deemed significant using a Chi-squared test and setting p<0.05.
The vote distribution was [agree: 79%, disagree: 15%, pass: 6%]
Here you can find the notebook used for the data processing.
Generated statements
Some quotes
Once in a while it is good to leave your echo chamber and share opinions with strangers. Actually talking is a bonus ofc because writing has no tone and is very limited form of communication.
I was able to express myself with people I did not know
In this digital society its important we talk to people of different backgrounds.
Some of the participants left the group in the middle of the discussion
I wasn’t sure if they could see my answers because no one responded to me
Two of my three conversations were really interesting. The middle one was a bit strange with one of the people clearly doing other things in the background. It was very awkward!
It was a bit tedious/annoying when having to join multiple sessions because individuals would not have working microphones or could not hear us

The final statements, generated by our results engine.

  1. AI should be utilized by local governments to enhance efficiency and service delivery, while ensuring responsible use and preventing overdependence.
  2. AI moderation tools should be integrated in local government processes, supplementing, not replacing, human moderators for balanced deliberation.
  3. AI technology in local governance must uphold neutrality, accuracy, and unbiasedness to optimize decision-making and safeguard user privacy.
  4. AI systems in local government should continuously update and present credible, diverse, and balanced information to build public trust and confidence.
  5. AI developers must design systems that prioritize safety, especially for children's interaction, and do not promote improper interaction to maintain healthy societal dynamics.
  6. AI should not replace human support in emotional assistance to preserve the unique nature of human empathy and must be supervised to ensure this.
  7. Education authorities should effectively implement AI to personalize learning experiences, thereby optimizing resources and freeing up staff for critical tasks.
  8. All stakeholders involved in AI research must ensure robust data security and avoid unnecessary data retention to safeguard personal data privacy.
  9. AI stakeholders should strive for accessible solutions with appropriate safeguards to ensure more positive than negative social impacts.
  10. AI should be designed to assist, not replace, human interactions, thereby preserving our inherent social nature and ensuring appropriate human oversight.
  11. Governments should develop comprehensive AI governance frameworks without jeopardising innovation to promote further technological advancement.
  12. AI stakeholders should program inclusivity into their strategies, without profit as the main aim, to ensure equal opportunities and societal cohesion.
  13. Local governance should cautiously adopt AI to improve efficiency and provide learning opportunities, always considering expert advice to ensure human jobs are not replaced.
  14. AI developers must rigorously test their systems before deployment, particularly in domains such as education, to ensure societal good and trust in technology.
  15. Society should cautiously approach the implementation of AI, considering potential complexities, to respect varying individual comfort levels with technology.
  16. People should have the choice to use AI as a tool to simplify tasks if they wish, mindful of some people's struggles with technology, and companies must ensure AI use is not restrictive or frustrating.

Evaluation

We evaluated the process in two ways. First, we conducted a workshop with ethics experts from the municipality to drill down into our assumptions and make recommendations. Second, we asked Prolific participants to rate the experience on a Likert scale.

Likert scale responses

notion image
notion image

Feedback from the ethical board

Ethical considerations
In order to systematically organize our contemplation of ethical aspects, we employ the public values framework of Eindhoven. This framework enumerates a series of public values against which Eindhoven evaluates digital technologies. These public values are intricately linked to fundamental democratic principles, including justice, fairness, legality, autonomy and sustainability.
We hereby note that this constitutes an initial reflection, intended to serve as nothing more than a preliminary foundation for further discussion at the municipal level. Subsequent research is imperative to assess the influence of rapidly advancing technologies on democratic principles and practices.
Autonomy
Does the model determine how to conduct the democratic debate?
The more we interact with technologies, the more we are inclined to attribute agency to them. While humans are not absent from the public debate, they are relegated to the background. AI is not solely a tool wielded by humans for control; it also yields unintended consequences. One such consequence is that, as AI attains greater autonomy and exerts increasingly pervasive unintended influences, it assumes roles akin to that of a director, shaping our behaviors, language, emotions, and social interactions. It transcends the realm of a mere inanimate tool, organizing and impacting how we engage in the public debate.
Such a scenario would prompt a distinctive interpretation of Plato's philosopher-king, an idea largely rejected in Western democracies. In this interpretation, AI would assume the role of a (non-elected) artificial philosopher-king, exercising authority over the populace.
Justice, Fairness and Trust
Does the model provoke stigmatization?
Foundation models have their limitations, and there is an inherent risk of spreading misinformation. Society may begin to experience the risks of using foundation models, for example due to misinformation, bias, or the values represented in a model's output. If the government, through the use of foundation models in the public debate, spreads incorrect information or (unwittingly) bases choices on incorrect or value-laden model output, society, or groups within it, will lose trust and no longer see the government as trustworthy. This poses risks to the functioning of the government as a whole.
The epistemic concern
Foundation models shape the epistemic conditions necessary for the democratic debate.
Foundation models have the capacity to exert influence on public discourse by directly manipulating cognitive experiences and circumventing traditional argumentative methods. Moreover, this highlights the substantial impact of technology in swiftly molding the trajectory of human experiences.
Furthermore, the linguistic ambiguity inherent in foundation models may result in a distortion of the ‘language-game’ (the way language is used within a particular context or activity, where the meaning of words and phrases depends on the rules and conventions of that specific context). The opaqueness of these language models and their probabilistic approach to language, with respect to their internal mechanisms, can contribute to inaccurate depictions of reality.
Sustainability
Does the project have unwanted side-effects and hidden costs?
The utilization of foundation models has a significant environmental impact that is in tension with the public value of sustainability. In light of this, one could contend that there is a quest for efficiency, yet the question remains whether these efficiency improvements align with sustainability objectives.
Europe’s legal approach
How can the independence of downstream providers be fostered?
Throughout the legislative process of the AI Act, ChatGPT was introduced to the general public. This introduction prompted the European Parliament to propose several amendments to the original text of the Regulation by the European Commission. One of the concerns of the European Parliament pertains to the question of how to prevent downstream providers from becoming overly dependent on the influence wielded by suppliers of foundational models.
As per the European Parliament's stipulations, when dealing with foundation models offered as a service, such as via API access, collaboration with downstream service providers must encompass the entire duration for which the service is delivered and upheld. This collaborative effort is aimed at facilitating the effective management of risks.
(Currently, the AI Act is subject to negotiations between the European institutions, so there is no clarity yet on its final text.)
Open source: the way forward for downstream providers?
One of the topics under discussion within the AI Act pertains to the utilization of open-source models.
It is generally assumed that the incorporation of open-source models fosters innovation and collaboration. The accessibility of open-source software offers a range of benefits, including transparency, adaptability, and liberation from vendor lock-in.
At the European level, the regulation of open-source models in the context of artificial intelligence is currently under negotiation. In its negotiating stance on the AI Act, the European Parliament suggests an exemption for open-source (downstream) AI components from the purview of the AI Act, provided they are not introduced into the market or deployed as part of a prohibited AI practice, a high-risk AI system, or a medium-risk AI system. This exemption would not be applicable to foundation models.


Next steps

Communicating findings & transparency

The current technical prototype focused very much on the in-group deliberation and on getting valuable results from those touch-points. In a democratic setting, however, communicating these results transparently is as important as the underlying process. To this end, we have started on a first iteration of a dashboard that presents the findings in different ways.

Overview

A glanceable overview of this week's results, demonstrating engagement. Individual topics can be selected here to dive into the specific information collected about them.
notion image

Topics

Zooming in on a specific topic shows which consensus statements relate to it, or were written down by participants while discussing the overarching topic. This page shows, in a basic way, how any new statements relate to their respective topics.
notion image
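As an illustration of how new statements could be related to their nearest topic, here is a minimal Python sketch. It is not the prototype's actual implementation; it assumes each statement and topic has already been turned into an embedding vector, and matches each statement to its most similar topic by cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def relate_statements(statement_vecs, topic_vecs, threshold=0.3):
    """Map each statement id to its most similar topic id, if above threshold.

    Both arguments are dicts of id -> embedding vector (hypothetical shapes).
    Statements with no sufficiently similar topic map to None.
    """
    related = {}
    for sid, svec in statement_vecs.items():
        best_topic, best_score = None, threshold
        for tid, tvec in topic_vecs.items():
            score = cosine(svec, tvec)
            if score > best_score:
                best_topic, best_score = tid, score
        related[sid] = best_topic
    return related
```

With real embeddings from a language model, the threshold would need tuning; here it simply prevents weak matches from being assigned at all.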
 

Full traceability

We have been exploring ways to dig granularly into a specific topic or statement and trace how it came to be.
In the future, allowing participants to (digitally) sign any of the results would further improve trustworthiness.
notion image
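One way to make such a trace tamper-evident, even before participant signing is in place, is a simple hash chain: every recorded event (a statement, a vote, a synthesis) stores the hash of its predecessor, so any later modification breaks the chain. A minimal sketch in Python, purely illustrative and not part of the current prototype:

```python
import hashlib
import json

def chain_append(log, event):
    """Append an event to a tamper-evident log; each entry hashes its predecessor."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def chain_valid(log):
    """Recompute every hash from the start; any edited entry invalidates the chain."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Digital signatures by participants could later be layered on top of such a chain, signing the final hash rather than every individual entry.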
 

Ways to collaborate

In this section, we explore potential use cases for Common Ground and propose avenues for collaboration. We shed light not only on how this tool can be applied, but also on larger opportunities for deployment, further development, and academic investigation. To discuss your ideas and interest in collaborating, please don't hesitate to contact us.

Appendix

Full process map.
notion image

Social Context Model view

We use a different view to assess how well the current platform supports the bigger picture of gathering democratic inputs, from the perspective of facilitating communication processes.
With the Social Context Model (de Moor and Kleef, 2004), we look at these processes as three sets of entities: process elements (actor roles, artefacts), actions, and change processes (how roles and norms are formed and adapted).
We can also look at the processes in four layers of purpose. Crossing the layers with the entity sets leads to the following matrix of observations on how our current platform supports the various aspects:
Collaboration: Why is the discussion taking place?
  • Process elements: Policy makers have been involved in creating seed statements for the discussions. In the current phase, the final results are shared with policy makers.
  • Actions: In the current phase, the conversations contribute to the participants’ joy and insights. The link to the broader context may not always be clear.
  • Change processes: There have not yet been issues with social norms.
Authoring: What is produced with the discussion?
  • Process elements: Statements generated in group discussions are circulated to new groups.
  • Actions: Participants cannot yet view the joint results across discussions.
  • Change processes: All participants have the same role on the platform.
Support: How is the discussion organized?
  • Process elements: The current AI moderator is in essence still “deaf and blind”: it relies on typed messages to generate summaries.
  • Actions: The AI moderator is able to nudge participants to contribute and to create follow-up statements for further discussion.
  • Change processes: The AI moderator reminds participants of the time they have spent discussing, but how a conversation (or the whole session) ends was not always clear.
Discussion: How is the discussion conducted?
  • Process elements: Having video calls with groups of three participants worked very well to get most people interested and engaged.
  • Actions: Providing statements to vote on worked well to start and continue the conversation.
  • Change processes: A “help” button enables participants to ask for a human moderator to join the conversation.
The whole discussion process resembles the facilitation method of a World Café (Holman et al., 2007), which already scales to thousands of participants in offline settings to tap into collective intelligence.
References:
de Moor, A. and Kleef, R. (2004) ‘A Social Context Model for Discussion Process Analysis’, in Information Systems for Sustainable Development. Idea Group Pub., pp. 128–145.
Holman, P., Devane, T. and Cady, S. (2007) The Change Handbook: The Definitive Resource on Today’s Best Methods for Engaging Whole Systems. 2nd ed., rev. and expanded. San Francisco: Berrett-Koehler.

Rights (?)

“Whereas the USA regulates based on principles, values and standards, the EU standardizes based on rights.”
Carme Artigas, Secretary of State for Digitization and Artificial Intelligence in the Spanish Government (SEMIC 2023, Madrid, 18 October 2023)
 
Notes CJ & Aldo
  • The social context determines so much, but is very hard to capture and engage with: a chicken-and-egg problem. To know which context to focus on, and to engage representatives from that context, you need a more or less mature prototype; conversely, you need that societal-context input to derive the right requirements for your technology development, e.g. the source and role of the seed statements in the process, which guardrails, which prompts, and which tone of voice to use.
 


Dembrane

Intrigued?

Send us an email at: info@dembrane.com

Or give us a call at: +310635625130

Privacy Statements

Dembrane B.V. 2023, all rights reserved