
The most important part of last week's episode of On The Media was probably the segment on how the Restorative Justice process can serve as an alternative to the broken prison system in the U.S. I highly recommend it. But the segment that followed, about what role Restorative Justice could play in resolving conflicts that happen online, was also intriguing, especially to someone who has been trained as a conflict mediator and has participated in conflict resolution advocacy programs in the past. It got me thinking about what the one-off experiment on Reddit that Micah Loewinger and Lindsay Blackwell conducted might look like in wider practice.

Right now when two or more people are in conflict on Twitter, Facebook, Reddit or elsewhere, the most likely eventual outcome is that someone will be blocked, banned, muted or otherwise removed from the conversation, either by a participant or by a moderator of the service itself. As the On The Media episode notes, the best that social media companies seem to be able to come up with in this problem space is making it even easier to report or block someone. (And to be clear, I'm generally a supporter of users being able to block/mute someone else at will without having to explain themselves.)

But if anyone involved in or affected by the conflict was interested in a different outcome, how could they get there?

An idea I'm exploring here is a bot that someone, either one of the parties or an observer, could mention to initiate a conflict resolution process incorporating elements of the Restorative Justice approach.

"Hey, @ConflictBot, can you help us out here?"

In response, the bot could add its own comment to the discussion, with a link to a landing page generated specifically for that discussion.

"Welcome, @Bob and @Alice. It looks like you're having a conflict. If you'd like some help resolving this, our online tools and community of mediators are here to assist. To continue, accept some terms and authenticate yourself."

If the participants agreed to some basic understanding of the process and wanted to proceed, they could use a social login to verify that they are the users who were in the original thread of conflict. They could also invite other users directly affected by the original conflict.
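
To make that handshake concrete, here's a minimal sketch of how the bot's side could work, platform-agnostic and in Python. Everything named here is an assumption: the conflictbot.example domain, the on_mention and on_social_login hooks, and the session shape are hypothetical stand-ins for a real platform's webhook and OAuth machinery, not any existing API.

```python
# A minimal, platform-agnostic sketch (all names hypothetical).
import secrets
from dataclasses import dataclass, field

@dataclass
class MediationSession:
    """One conflict resolution session, keyed to the thread it came from."""
    thread_id: str
    invited: set[str]                     # usernames seen in the original thread
    verified: set[str] = field(default_factory=set)
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

    @property
    def landing_url(self) -> str:
        # The customized link the bot posts back into the discussion.
        return f"https://conflictbot.example/session/{self.token}"

SESSIONS: dict[str, MediationSession] = {}

def on_mention(thread_id: str, participants: list[str]) -> str:
    """Called when someone mentions the bot in a thread; returns its reply."""
    session = MediationSession(thread_id=thread_id, invited=set(participants))
    SESSIONS[session.token] = session
    handles = ", ".join(f"@{p}" for p in sorted(session.invited))
    return (f"Welcome, {handles}. It looks like you're having a conflict. "
            f"If you'd like some help resolving this, start here: {session.landing_url}")

def on_social_login(token: str, username: str) -> bool:
    """After a social login succeeds, confirm the user was in the original thread."""
    session = SESSIONS.get(token)
    if session and username in session.invited:
        session.verified.add(username)
        return True
    return False  # others would need an explicit invite from a verified participant
```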

From there, they could be dropped into a process on the website that would unfold privately by default. (There might be a lot of value in having parts of the process happen out in the open for others to see and benefit from, so maybe they could choose that option if they're comfortable, either along the way or at the end.)

What would that process look like? Some combination of the following (roughly sketched in code after the list):

  • prompt-driven conversations where the participants could slow down and be more thoughtful about their responses before sharing them ("What do you think happened in this conversation? What were you hoping for? What was hard about it? What would you do differently if you could start over?")
  • conversations guided by trained volunteer mediators
  • building a checklist of shared agreements about what a mutually acceptable resolution looks like
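
As a rough sketch of what the software behind that list might track: the four prompts are quoted from above, while the data shapes and rules (answers held until everyone responds; resolution meaning every agreement is accepted by everyone) are my assumptions, not a spec.

```python
# Hypothetical data model for the prompt-driven stage and the
# shared-agreements checklist; shapes and rules are assumptions.
from dataclasses import dataclass, field

PROMPTS = [
    "What do you think happened in this conversation?",
    "What were you hoping for?",
    "What was hard about it?",
    "What would you do differently if you could start over?",
]

@dataclass
class Agreement:
    text: str
    accepted_by: set[str] = field(default_factory=set)

@dataclass
class ProcessState:
    participants: set[str]
    # prompt -> username -> answer
    responses: dict[str, dict[str, str]] = field(default_factory=dict)
    agreements: list[Agreement] = field(default_factory=list)

    def record_response(self, prompt: str, user: str, answer: str) -> None:
        # Answers could be held privately until everyone has responded,
        # one way to encourage slower, more thoughtful replies.
        self.responses.setdefault(prompt, {})[user] = answer

    def prompt_complete(self, prompt: str) -> bool:
        return set(self.responses.get(prompt, {})) == self.participants

    def resolved(self) -> bool:
        # A mutually acceptable resolution: every agreement on the checklist
        # has been accepted by every participant.
        return bool(self.agreements) and all(
            a.accepted_by == self.participants for a in self.agreements
        )
```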

The goal would be to work toward some sense of justice for everyone involved. Even though no crime may have been committed, as the Restorative Justice process typically assumes, there could still be restorative actions that everyone is accountable to, facilitating a better outcome than muting, blocking, or banning.

If the process is successful, maybe the bot could share a summary back in the original conflict thread, to illustrate this alternative approach and educate anyone who comes along later about it.
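
Continuing the hypothetical sketch, that closing step could be small and strictly opt-in; post_comment below is a stand-in for whatever a real platform's API offers, not an actual call.

```python
# Opt-in only: stay private unless every participant agrees to share.
def share_summary(thread_id: str, agreements: list[str],
                  participants: set[str], opted_in: set[str],
                  post_comment) -> bool:
    """Post a short resolution summary back to the original conflict thread."""
    if not agreements or opted_in != participants:
        return False
    summary = "; ".join(agreements)
    post_comment(thread_id,
                 "This conflict was worked through in a mediated process. "
                 f"Shared agreements: {summary}")
    return True
```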

There are obviously a lot of gaps to fill in here. Where would the conflict mediators come from, and how would you ensure appropriate training? What would be the criteria for when this tool should be used and when it's not a good fit? What happens when the process doesn't have a good outcome? How do you prevent misuse or abuse by people not acting in good faith?

On that last one, it's worth emphasizing that I don't see this kind of tool or process as helpful for any situation where there's someone bent on inflicting pain or confusion, or where there's any fundamental attack on or disregard for someone else's dignity or humanity. People might understandably experience anger, frustration, anxiety, powerlessness or many other things in the midst of a conflict, but if there's a lack of basic psychological safety around interacting with another person involved, no online tool or process like this is going to be a good next step.

In the experiment that Micah and Lindsay conducted, it sounds like they had to do a fair amount of work to convince people to participate and engage with the process. As I thought about what incentives someone might have to accept an invitation into a process on some random website, I'll admit there wasn't a lot I could come up with beyond curiosity. Maybe with sponsorships and donations you could say that a successfully resolved conflict would result in a charitable contribution to not-for-profits of the participants' choosing, but that starts to get tricky. I suppose that over time you would just have to hope for credibility and awareness of the benefits of this kind of justice to grow enough that people are ready to try something different.

So, that's one idea for helping to resolve online conflict. There's no revenue model; anyone want to build it? 🙂

What do you think is the future of detoxifying online conversations?


(Previous writings that are related: Obama, Gates and Restorative Justice, Conflict resolution as a life skill, Go toward the hard stuff, To challenge and be challenged in conversation.)

3 thoughts on “Restorative justice and resolving online conflict”

  1. Conflictbot made me think of Clippy for some reason. "Hi: It looks like you have worked yourselves into a murderous rage. Would you like to hate me instead?"

    But the more serious thought I had is: what if people don't come to social media sites in spite of the conflict? What if they come because of the conflict? If that's the case, it might be that we need less of a conflict resolution model and more of an addiction treatment model.

    1. I think that people come to social media looking for connection, but don't really have the skills or self-awareness to find deep connection. The fastest, easiest way to find some connection is to provoke a conflict with some opposing "tribe." Then people will "like" your posts and comments and you get a little Pavlovian treat in the form of an in-app notification.

      If you can find a way to scratch that connection itch by resolving conflicts instead of provoking them, you are on to something.

  2. I heard that OTM episode both this recent time and a while ago, and think that such an approach based on Restorative Justice has promise - not for people who go looking for conflict, but for those who stumble into it unintentionally. And, remembering your having been involved with Richmond's former Conflict Resolution Center, I wondered if you might have heard it and what your thoughts might be, so this post is a delight to read.
    I affirm your idea of turning Micah and Lindsay's experiment into a service/program, and think it would be wise to give people the option of working with the bot or with a volunteer mediator, as people will differ on which method they prefer. (Although that could raise a secondary source of conflict...)
    ~ Stephanie
