The most important part of last week's episode of On The Media was probably the segment on how the Restorative Justice process can serve as an alternative to the broken prison system in the U.S. I highly recommend it. But the segment that followed, about what role Restorative Justice could play in resolving conflicts that happen online, was also intriguing, especially to me as someone who has been trained as a conflict mediator and has participated in conflict resolution advocacy programs. It got me thinking about what the one-off experiment on Reddit that Micah Loewinger and Lindsay Blackwell conducted might look like in wider practice.
Right now when two or more people are in conflict on Twitter, Facebook, Reddit or elsewhere, the most likely eventual outcome is that someone will be blocked, banned, muted or otherwise removed from the conversation, either by a participant or by a moderator of the service itself. As the On The Media episode notes, the best that social media companies seem to be able to come up with in this problem space is making it even easier to report or block someone. (And to be clear, I'm generally a supporter of users being able to block/mute someone else at will without having to explain themselves.)
But if anyone involved in or affected by the conflict was interested in a different outcome, how could they get there?
The idea I'm exploring here is a bot that someone, either one of the parties or an observer, could mention to initiate a conflict resolution process that includes elements of the Restorative Justice approach.
"Hey, @ConflictBot, can you help us out here?"
In response, the bot could add its own comment to the thread with a customized link to a landing page generated specifically for that discussion.
"Welcome, @Bob and @Alice. It looks like you're having a conflict. If you'd like some help resolving this, our online tools and community of mediators are here to assist. To continue, accept some terms and authenticate yourself."
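To make the bot's side of this concrete, here's a minimal Python sketch of how it might mint a per-thread session link and compose its reply. Everything here is hypothetical: the landing-page URL, the in-memory session store, and the function names are all placeholders, and a real bot would persist sessions and post the reply through the platform's API.

```python
import secrets

# Hypothetical base URL for the mediation site; purely illustrative.
LANDING_BASE = "https://example-mediation.site/session"

# In-memory map of session tokens to thread metadata. A real bot would
# persist this so the landing page can look up the originating thread.
sessions = {}

def create_session(thread_id, participants):
    """Mint an unguessable token tied to this specific discussion."""
    token = secrets.token_urlsafe(16)
    sessions[token] = {"thread": thread_id, "participants": list(participants)}
    return f"{LANDING_BASE}/{token}"

def compose_reply(thread_id, participants):
    """Build the comment the bot posts back into the thread."""
    url = create_session(thread_id, participants)
    names = " and ".join(f"@{p}" for p in participants)
    return (
        f"Welcome, {names}. It looks like you're having a conflict. "
        f"If you'd like help resolving it, start here: {url}"
    )
```

Using an unguessable random token (rather than, say, the thread ID itself) keeps the session private to people who saw the bot's comment or were invited later.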
If the participants agreed to some basic understanding of the process and wanted to proceed, they could use a social login to verify that they are the users that were in the original thread of conflict. They could also invite other users directly affected by the original conflict.
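The verification step could be as simple as checking the handle returned by the social login against the accounts that were in the original thread. A sketch, assuming the session record from the bot holds a `participants` list (both the record shape and the function names are my own illustration):

```python
def is_original_participant(authenticated_handle, session):
    """After social login, admit a user only if their verified handle
    matches one of the accounts from the original thread (or someone
    who was explicitly invited afterward)."""
    handle = authenticated_handle.lstrip("@").lower()
    allowed = {p.lstrip("@").lower() for p in session["participants"]}
    return handle in allowed

def invite(session, handle):
    """Let existing participants add someone directly affected
    by the original conflict."""
    session["participants"].append(handle.lstrip("@"))
```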
From there, they could be dropped into a process on the website that would unfold privately by default. (There might be a lot of value in having parts of the process happen out in the open for others to see and benefit from, so maybe they can choose that option if they're comfortable, either along the way or at the end.)
What would that process look like? Some combination of:
- prompt-driven conversations where the participants could slow down and be more thoughtful about their responses before sharing them ("What do you think happened in this conversation? What were you hoping for? What was hard about it? What would you do differently if you could start over?")
- guided conversations from trained volunteer mediators
- building a checklist of shared agreements about what a mutually acceptable resolution looks like
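The prompt-driven part of that process could be modeled as a series of rounds where each participant answers privately, and nothing is revealed until everyone has responded, which is what creates the slowing-down effect. A rough sketch (the class and its methods are hypothetical, not an existing library):

```python
# The reflection prompts from the list above, asked in order.
PROMPTS = [
    "What do you think happened in this conversation?",
    "What were you hoping for?",
    "What was hard about it?",
    "What would you do differently if you could start over?",
]

class PromptRound:
    """One prompt, answered privately by each participant. Answers are
    revealed only once everyone has responded."""

    def __init__(self, prompt, participants):
        self.prompt = prompt
        self.pending = set(participants)
        self.answers = {}

    def submit(self, participant, answer):
        if participant in self.pending:
            self.pending.remove(participant)
            self.answers[participant] = answer

    def complete(self):
        return not self.pending

    def reveal(self):
        # Keep answers hidden until the round is complete.
        return dict(self.answers) if self.complete() else None
```

A mediator-guided session could interleave these rounds with live conversation, and the final round's answers could seed the checklist of shared agreements.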
The goal would be to work toward some sense of justice for everyone involved. Even if no crime has been committed, as the Restorative Justice process typically assumes, there could still be restorative actions that everyone is accountable to, and those actions could facilitate a better outcome than muting, blocking, or banning.
If the process is successful, maybe the bot could share a summary back in the original conflict thread, to illustrate this alternative approach and educate anyone who comes along later about it.
There are obviously a lot of gaps to fill in here. Where would the conflict mediators come from, and how would you ensure appropriate training? What would be the criteria for when this tool should be used and when it's not a good fit? What happens when the process doesn't have a good outcome? How do you prevent misuse or abuse by people not acting in good faith?
On that last one, it's worth emphasizing that I don't see this kind of tool or process as helpful for any situation where there's someone bent on inflicting pain or confusion, or where there's any fundamental attack on or disregard for someone else's dignity or humanity. People might understandably experience anger, frustration, anxiety, powerlessness or many other things in the midst of a conflict, but if there's a lack of basic psychological safety around interacting with another person involved, no online tool or process like this is going to be a good next step.
In the experiment that Micah and Lindsay conducted, it sounds like they had to do a fair amount of convincing to get people to participate and engage with the process. As I thought about what incentives someone might have to accept an invitation into a process on some random website, I'll admit there wasn't a lot I could come up with beyond curiosity. Maybe with sponsorships and donations you could say that a successfully resolved conflict would result in a charitable contribution to not-for-profits of the participants' choosing, but that starts to get tricky. I suppose that over time you would just have to hope for credibility and awareness of the benefits of this kind of justice to grow enough that people are ready to try something different.
So, that's one idea for helping to resolve online conflict. There's no revenue model; anyone want to build it? 🙂
What do you think is the future of detoxifying online conversations?