Civility in an online community

Civil Discourse Platform

Notes on Civil Platform Discussion

Discussion Leader: Wael Ghonim

Parlio is a new platform that starts with the premise of civility in discourse. How can they help institutionalize that? Good signals, bad signals, penalties for bad behavior?

How do you separate people who want to break the rules from people who simply don’t know them? User education might help distinguish the two: if someone is alerted that something they are doing is conduct the community considers unacceptable, they’ll stop unless they are breaking the rules on purpose.

Humanizing the product — this might be the key to building and maintaining the right norms. It may make online communications conform to the norms of civility we have when it comes to in-person communication. How do we do this? Riot Games has research on this, which entailed detailed analyses of the notifications given to users in the event they violate some community norm. The Facebook compassion team has also done work in this space, at least in part with teenager focus groups. Tiny variations in the language used to communicate with users made a big difference in outcomes and satisfaction.

Using the educational system to teach digital citizenship. We instill in-person norms this way, and it’s quite effective. Having curricular elements about how to interact online — emphasizing that there is a human on the other side of each computer — could help establish appropriate conduct online from an early age.

Letting people know what they did wrong in the first place also has benefits. Riot Games has been experimenting with this on their user base, as has Twitter. That allows the users to understand exactly what they said that prompted the complaint.

Some of the Facebook research was designed to encourage users to resolve their conflicts with one another. If I want to report an image, the platform first prompts me to engage with the original poster, and even gives suggested language. It got some pretty astonishing results: both parties were much more satisfied with outcomes under that workflow. This is limited in its contextual usefulness, but still a potentially useful avenue. It also helped with the majority of cases where a photo was reported because the user filing the complaint simply didn’t like it (e.g. was in the photo and found it unflattering), which Facebook can’t really do anything about because those photos don’t violate any of the community standards. Apparently reports of posts that people just don’t like make up a huge percentage of complaints, and prompting the users to resolve the issue between themselves is much more productive than Facebook or Twitter simply saying there is nothing they can do about it.

What about having people sign a pledge? Signing something that establishes their commitment to community norms may make users more likely to follow those norms. Perhaps the pledge could be humorous — start with “your mother is listening.” Humor can be an effective tool in both framing discourse and grabbing attention. If you want people to actually read it — instead of quickly clicking through, as people do with most user agreements — you’ll want to make it short and funny. You’ll also want some way of reinforcing it long-term. Very few people will think back to the community pledge every time they post. So thinking through ways that people can be prompted or reminded to abide by the community norms would be a useful way of helping those norms stick.

Another way to establish norms of behavior and identify users not abiding by those norms might be reputation models. A reputation model would enable people to get feedback from the community, and give a form of accountability. However, we need to ensure that the reputation model doesn’t silence people. On Parlio, they’ve brainstormed having an on-ramp where people can only read at first, and then they slowly accrue privileges to comment and then post as they demonstrate their commitment to productive discourse.
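
A rough sketch of what that on-ramp could look like. The privilege names, reputation points, and thresholds here are invented for illustration; they are assumptions, not anything Parlio has actually built.

    from dataclasses import dataclass

    # Illustrative thresholds: reputation needed before each privilege unlocks.
    PRIVILEGE_THRESHOLDS = {"read": 0, "comment": 10, "post": 50}

    @dataclass
    class Member:
        name: str
        reputation: int = 0

        def can(self, action: str) -> bool:
            # A member may take an action once they have accrued enough reputation.
            return self.reputation >= PRIVILEGE_THRESHOLDS[action]

        def receive_feedback(self, points: int) -> None:
            # Community feedback (e.g. endorsements of civil contributions) adjusts reputation.
            self.reputation = max(0, self.reputation + points)

    newcomer = Member("newcomer")
    assert newcomer.can("read") and not newcomer.can("comment")
    newcomer.receive_feedback(12)   # a few endorsed contributions later...
    assert newcomer.can("comment") and not newcomer.can("post")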

Platforms may want to make all posters anonymous to other users until those users upvote the post. That way the post isn’t truly anonymous, but it forces people to evaluate the ideas in the post without any of the preconceived notions they might have about the speaker if they knew the speaker’s identity. This might encourage more meaningful interactions: often if people see a woman’s photo attached to a post, they will automatically make disparaging/sexist remarks. By presenting ideas without identity, other users have to decide whether or not they like the idea before they see who wrote it. There is research on people thinking that they can guess the gender of an anonymous speaker and being completely wrong. This might suggest that some form of anonymity at the outset would have a tangible impact on how people perceive posts. This would also produce a lot of super interesting data! If we could find a way to capture it and study it, that would be amazing.
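
A minimal sketch of the hide-the-author-until-upvote idea, using a hypothetical Post object rather than any real platform’s API; the names and behavior are assumptions for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        body: str
        upvoted_by: set = field(default_factory=set)

        def upvote(self, viewer: str) -> None:
            self.upvoted_by.add(viewer)

        def render_for(self, viewer: str) -> dict:
            # Reveal the author only after this particular viewer has upvoted the post
            # (the author always sees their own name).
            revealed = viewer in self.upvoted_by or viewer == self.author
            return {"body": self.body,
                    "author": self.author if revealed else "anonymous"}

    post = Post(author="amina", body="A proposal for moderating heated threads...")
    print(post.render_for("sam"))   # author shown as "anonymous"
    post.upvote("sam")
    print(post.render_for("sam"))   # author revealed only after the upvote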

An onboarding mechanism that helps connect users and humanizes them might also improve interactions in online spaces. Having new users complete a challenge where they have to work together and help each other would remind them that there are real people on the other side of the screen. There was a PS3 game that connected new users to do a challenge anonymously: you had to help each other, but you had no idea who the other person was. Those users often ended up becoming friends after completing the collective challenge. So they actually had success in taking away information. The users couldn’t have any preconceived notions about the people helping them, because they didn’t know anything about them. They built a relationship through collaboration without all the social baggage we often bring to the table.

Parlio has found that women tend to post less on the platform than men. Should it change the algorithm to favor posts from women? This could do a lot to bring balance to the voices on the platform, and show other potential female users that this is a place that respects and values contributions from women. And Parlio may not have to make the change permanent. Ideally, promoting posts from women might prompt more women to post, and eventually the community of female posters would be robust and self-reinforcing without a boost from the algorithm. Is that sexist? Not really. It’s bringing balance to a place where society has produced an imbalance. What about algorithmic transparency? If Parlio were to disclose that it was favoring posts from women, that probably wouldn’t go over so well, but it’s also not ideal to have the algorithm for sorting/ranking/promoting posts be secret. Maybe that part of the algorithm can be kept somewhat obscured?
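
One way such a temporary boost could work, sketched with an invented scoring formula and boost factor. Parlio’s actual ranking is not public, so every name and number here is an assumption; setting the boost back to 1.0 would phase the adjustment out.

    def ranking_score(base_engagement, author_underrepresented, boost=1.25):
        # Hypothetical formula: multiply the baseline engagement score by a modest
        # boost for posts from under-represented contributors.
        return base_engagement * (boost if author_underrepresented else 1.0)

    posts = [
        {"id": 1, "engagement": 40.0, "underrepresented": False},
        {"id": 2, "engagement": 35.0, "underrepresented": True},
    ]
    ranked = sorted(posts,
                    key=lambda p: ranking_score(p["engagement"], p["underrepresented"]),
                    reverse=True)
    print([p["id"] for p in ranked])   # -> [2, 1]: post 2 outranks post 1 after the boost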

One interesting thing we’ve seen recently is different levels of civility in discourse self-segregating. On Twitter last week, the discourse around the same-sex marriage ruling varied greatly by hashtag. Some hashtags tended to be used by people wanting to have civil, supportive, reasoned discourse, and others tended to have more hate/trolling/etc. Maybe there is some way to capture this and make it easier for those looking for reasoned conversation to find like-minded folks on existing platforms.

How can we work to frame negative interactions in a positive way? Perhaps framing criticism as “dissent” rather than “dislike.” Judges write dissenting opinions that often disagree sharply with the majority opinions, but it’s always done in a respectful way that acknowledges what the majority got right while still concluding that the other justices got the decision wrong. Maybe a way to help structure this would be to make feedback a compliment sandwich (one positive thing, one negative thing, then another positive thing). That often works in classes when students are giving each other constructive criticism. But are we comfortable forcing people to say something positive about a post with which they disagree? What if they feel the post is racist or sexist or transphobic? Should they still be forced to say something nice about something they find so viscerally offensive? Especially when it comes to conservative viewpoints, where do you draw the line? For example, some might view a post that refers to Caitlyn Jenner as “Bruce” as highly offensive, but because society still has a long way to go on respecting transgender identity, it may not cross the line society currently draws for what counts as offensive. Should people be forced to say positive things about that post before critiquing it?

Maybe feedback can be anonymous, but that might be a reddit-like rabbit hole down which we don’t want to go. Perhaps, though, it could work if the feedback was structured in some way. There could be a drop-down menu with the basic elements of constructive criticism from which anonymous users could choose (e.g. “needs more sources,” “conclusion could have been stronger,” “could be structured better,” etc.). Then authors could get stats on the feedback saying “30% of people said your conclusion could have been stronger” or something like that.
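
A small sketch of how that structured, anonymous feedback could be tallied into stats for the author. The menu options mirror the examples above; everything else (function names, the data) is invented for illustration.

    from collections import Counter

    # Fixed menu of constructive-criticism options reviewers can choose from.
    FEEDBACK_OPTIONS = {
        "needs more sources",
        "conclusion could have been stronger",
        "could be structured better",
    }

    def summarize_feedback(selections):
        # Count how often each menu option was chosen and report it as a percentage,
        # so the author sees only aggregates, never individual reviewers.
        valid = [s for s in selections if s in FEEDBACK_OPTIONS]
        counts = Counter(valid)
        total = len(valid)
        return {option: f"{100 * count / total:.0f}%" for option, count in counts.items()}

    received = (["needs more sources"] * 5
                + ["conclusion could have been stronger"] * 4
                + ["could be structured better"])
    print(summarize_feedback(received))
    # -> {'needs more sources': '50%', 'conclusion could have been stronger': '40%',
    #     'could be structured better': '10%'}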

At the end of the day, all of this is about trying to dissociate evaluating ideas and evaluating the author. This is very hard to do, especially in environments where anonymity so often leads to hostility. It’s very hard for us to separate ideas from identity unless the platform can somehow force us to do it.

Maybe a model to look at is the Amazon review model. The feedback that rises to the top seems to always be good. But that’s in part because everyone on Amazon tends to find the same things helpful in a review (thorough description, pros and cons, etc.), whereas people are looking for very different things in arguments on polarizing topics. Plus, the evaluation of an idea is inherently colored by whether one agrees with it or not, but most people don’t come to an Amazon reviews page with a preconceived agenda.