AI in Peacebuilding: An outsider's perspective

Artificial intelligence (AI) has made significant headway in peacebuilding in recent years. Academics and practitioners alike have put forward a range of use cases across the field: from automated social media sentiment analysis for enhanced conflict analysis, to large-scale digital dialogue platforms that expand inclusion in mediation processes, to automated stakeholder mapping during Track I negotiations.

What all of these propositions have in common is a rather instrumentalist view of AI's role in peacebuilding: whether for enhanced conflict analysis, more inclusive peace processes, or better-informed mediators, AI is considered a better means to the same end.

The flip side of the coin, however, remains under-explored: how the introduction of these applications may alter stakeholders' conceptions of peace processes and of the paradigms that prevail within them.

As we have seen time and again, the introduction of new technologies is not merely a means to an end but fundamentally alters people's realities. Technological affordances – what users can and cannot do with a given application – also shape individual experiences and perceptions of a given situation.

Human–machine interaction is therefore not a one-way street but co-produces sociotechnical systems that impact their operational environment. How the introduction of AI-assisted applications reshapes peace processes at large thus needs to be assessed more thoroughly. Not doing so risks ignoring the broader sociocultural impact of such applications on peacebuilding and its key concepts – put differently, AI influences how we do peacebuilding, not just by what means.

To illustrate my point, I want to offer a brief description of one use case of AI in peacebuilding, large-scale digital dialogue platforms, and add some tentative thoughts on their potential consequences for peace processes in general.

In 2019, the UN Department of Political and Peacebuilding Affairs (DPPA) partnered with the New York-based marketing firm Remesh to develop a large-scale digital dialogue platform. Its stated goal: 'To scale inclusivity during the mediation process'.

In line with the UN's intent to foster inclusive peace processes around the world, the platform enables UN moderators and peace constituencies to have back-and-forth conversations on conflict-related issues or peace initiatives.

To do so, participants simply open a web link at a scheduled time, provide some general information about themselves (e.g. gender, place of residence, age, political affiliation) and are ready to go.

Once the dialogue round starts, they answer open-ended questions posed by the moderators and rank a subsample of other respondents' answers. By virtue of natural language processing (NLP) and machine learning (ML), two subfields of AI, similar answers are then clustered together and condensed into summary statements. The algorithms further allow moderators to filter opinions by specific groups, such as women, youth, or people with a certain regional or political affiliation. In this way, the feedback not only provides an overall picture of public opinion but lets moderators obtain a better understanding of minority positions on certain issues.

The whole cascade, from the moderators' questions to participants' answers and rankings to NLP and ML processing, takes no more than a few minutes. The insights generated are then either used to initiate another round of dialogue with follow-up questions or fed back into the ongoing Track I mediation process.
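Remesh's actual models are proprietary, and production systems use far more sophisticated NLP than simple word overlap. Purely to make the cascade concrete, the following is a minimal, illustrative Python sketch of the cluster-summarize-filter steps described above; every name here (`Response`, `cluster`, `filter_by`, the Jaccard threshold) is my own invention, not the platform's API:

```python
from dataclasses import dataclass

@dataclass
class Response:
    text: str
    gender: str
    region: str

def tokens(text: str) -> set:
    """Crude tokenizer: lowercase, whitespace-split."""
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(responses, threshold=0.3):
    """Greedily group responses whose wording overlaps enough."""
    clusters = []
    for r in responses:
        for c in clusters:
            if jaccard(tokens(r.text), tokens(c[0].text)) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def summarize(clusters):
    """Pick the most 'central' answer of each cluster as its summary statement."""
    summaries = []
    for c in clusters:
        best = max(c, key=lambda r: sum(jaccard(tokens(r.text), tokens(o.text)) for o in c))
        summaries.append((best.text, len(c)))
    return summaries

def filter_by(responses, **attrs):
    """Restrict the view to a demographic subgroup, e.g. gender='f'."""
    return [r for r in responses if all(getattr(r, k) == v for k, v in attrs.items())]

if __name__ == "__main__":
    answers = [
        Response("we need more jobs for young people", "f", "north"),
        Response("more jobs for young people please", "m", "south"),
        Response("security in the capital must improve first", "f", "south"),
    ]
    for text, size in summarize(cluster(answers)):
        print(f"{size} similar answer(s): {text}")
    print("women only:", [r.text for r in filter_by(answers, gender="f")])
```

Even this toy version makes the editorial point visible: the similarity threshold and the choice of a 'central' answer are design decisions that determine which voices surface as summary statements – in real deployments, those choices live with the developers.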

Unlike conventional peace polling methods such as field or phone surveys, whose results often lag behind political realities in fast-evolving conflict environments, the platform offers a timely and direct avenue for large subsets of peace constituencies to make their voices heard during peace processes.

So far, the UN has deployed the application in Yemen, Libya, Iraq and Bolivia with up to 1,000 participants. The developers estimated that the computational threshold would rise even further by the end of 2022, allowing for synchronous digital dialogue with up to 10,000 participants.

Although UNDPPA clearly states the tool's limitations on various levels, the UN's and other peacebuilders' general enthusiasm about AI's potential points to an increasing use of such tools in the field. Reservations are usually limited to sampling issues (e.g. excluding populations without internet access or digital literacy), modelling issues, and cybersecurity (e.g. malicious actors, bots and disinformation).

While these issues remain relevant, a more fundamental understanding of how AI potentially alters peacebuilding realities for all the stakeholders involved is just as important. In the context of AI-assisted digital dialogue, for example, a number of follow-up questions need answering across the platform:

Backend/Developers:

  • What conception of inclusivity is baked into the platform by the developers (UNDPPA’s innovation cell and Remesh)?
  • How is it operationalized in the code?
  • Who leaves what traces in the code (e.g. UNDPPA practitioners, Remesh software engineers)?
  • What NLP parameters are used to generate summary statements?
  • What training data is used for NLP in the different (colloquial) dialects (the platform allows for dialogue in the local language)?

Frontend/Users:

  • How do participants (peace constituents) experience their role in the dialogue and conceive inclusivity whilst engaging on the platform?
  • Do their conceptions of inclusivity align with the developers’ envisaged use of the platform?
  • Does participation on the platform raise new expectations and stakes in the peace process among participants?
  • How do UN moderators approach the notion of inclusivity on the platform?
  • What type of questions are (not) asked compared to other dialogue formats?
  • How does the real-time character of AI-mediated dialogue structure the conversation and possible follow-up discussion topics?

Track I Mediation/Mediators and Parties to the Conflict:

Beyond the question of expanding inclusivity, this non-exhaustive list points to the underlying consequences that the introduction of digital dialogue platforms could have for peace processes at large.

Similar questions arise with regard to other applications as well, first and foremost discursive tools that mediate interaction between different stakeholders in one way or another. The same holds for AI-assisted analytical tools such as sentiment analysis or stakeholder mapping: their use in the field, and the practices they engender, call for critical assessment.

As an outsider, I am agnostic about AI's role in peacebuilding. I do not claim that the technology is problematic by default. On the contrary, for practitioners and organizations working under tight budgets in fast-paced conflict environments, AI-assisted tools can offer a cost- and time-effective solution, whether for conflict analysis, public consultation, or stakeholder mapping.

But when leveraging such tools, a prior assessment of what they afford and what they obscure is key to capturing the full scale of the sociotechnical system at work. Rather than being just a means to an end, AI guides peacebuilders' gaze toward certain solutions and, by contrast, away from others.

What this means further down the line for peace constituencies, parties to the conflict and practitioners in the field needs to be taken into account. Otherwise, AI applications will shape peace process design, rather than the other way around.