Crowds, benefits and risks

Issues

Civility and Online Discourse

N: Sometimes when you let communities create and enforce their own norms, they get out of control or spiral into dangerous places

Spotting and Dealing with Unacceptable Behaviour

Support

S: In Myanmar, Buddhist religious leaders and politicians are systematically inciting people against Muslims. One particular false rumour spread on Facebook led to riots. About a year ago, a relatively small group of activists created a movement called “Flower Speech,” a personal and collective commitment against hate speech online and offline. The meme is a picture of a person with a flower in their mouth with a statement along the lines of “I avoid and don’t tolerate language that leads to social unrest.”

  • N: Can this type of response be systematised?
  • S: Success looks like two things: 1) changing the expression of someone or some people who were previously toxic, or 2) more interesting but more difficult to measure, counter-speech that affects the audience in a positive way
  • S: In a relatively small sample like the Yik-Yak community, can we generate data that lead to tools/instructions for creating a taxonomy of counter-speech? Is the same sort of counter-speech effective in different environments? She is hoping for more small-scale experiments focused on particular groups, with the results aggregated
  • O: A campus activist group advocating visibility for people of color was under attack on Yik-Yak. Professors shared positive experiences on Yik-Yak and signed off with their names (so it wasn’t anonymous, as Yik-Yak usually is). Because they were leaders on that campus, they were able to flood the feed with positivity and push all the racist comments down in visibility
  • E: This translates beyond Yik-Yak to Twitter, where an influential person can change the whole tide of a conversation by using a hashtag in a different way and reaching a bunch of followers at once (those followers then pick it up)
  • S: It doesn’t even have to be a user with a large following
  • N: The theme of recruiting more people to join a conversation and fill a positive space also happens on Reddit, where moderators actively recruit people, but the same technique has a downside (“brigading”: recruiting a lot of people to overwhelm and destroy a conversation)

G: When we were designing Enforcement United, we settled on the concept of the “heart of the volunteer.” There were many very complicated reward systems for quality standards, but in the end we realised that people who are doing it for their own reasons (usually altruism, or because they like having the authority of being the traffic cop) are the most useful. We knew that people would manipulate the tools and use them for the wrong reasons. Maybe we are so caught up in our own ideas that we are missing things, so let’s get feedback from the crowd. We designed an “exam” by injecting known gamer tags into the stream, so people would be taking a test without even realising it. We then score them, and their vote counts for more or less according to a non-linear architecture. The “hell bucket” of people deliberately voting like crazy to make fools of us or to get even with the community balances out the exam, and we do learn from them about that type of behaviour. The payoff has been a very large reporting system: we get hundreds of thousands of complaints that we can look at, and what we know about brigading and false reporting helps us with noise-filtering algorithms.

  • N: Do you have many people analyse the same report?
  • G: We use this rating system almost entirely as a noise filter. There’s a business reason behind this: these are paying customers, so if we’re going to take action against a person, we have to know that it’s the right action. We’ve also learned that when something is bad but the community has not yet agreed that it’s bad, that same piece of UGC will come back into the system and another user will see it.
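
A minimal sketch of the hidden “exam” and vote weighting G describes above. The item names, thresholds, and weighting curve are assumptions for illustration, not the real Enforcement United system.

```python
# Hypothetical sketch of hidden-calibration ("exam") vote weighting.
from collections import defaultdict

# Known-answer items quietly injected into reviewers' queues.
CALIBRATION_ITEMS = {"gamertag_123": "abusive", "gamertag_456": "fine"}

def reviewer_weight(correct: int, seen: int) -> float:
    """Non-linear weight: accuracy near chance contributes almost nothing."""
    if seen == 0:
        return 0.5                              # unknown reviewers get a modest default
    accuracy = correct / seen
    return max(0.0, 2 * accuracy - 1) ** 2      # 50% accuracy -> 0, 100% -> 1

def tally(votes, exam_record):
    """votes: iterable of (reviewer_id, item_id, verdict).
    exam_record: dict reviewer_id -> (correct, seen), updated in place."""
    weighted = defaultdict(lambda: defaultdict(float))
    for reviewer, item, verdict in votes:
        correct, seen = exam_record.get(reviewer, (0, 0))
        if item in CALIBRATION_ITEMS:
            # Exam item: score the reviewer, never count it toward a verdict.
            exam_record[reviewer] = (correct + (verdict == CALIBRATION_ITEMS[item]),
                                     seen + 1)
        else:
            weighted[item][verdict] += reviewer_weight(correct, seen)
    return weighted
```

Consistently wrong or random reviewers (the “hell bucket”) end up with weights near zero, so their reports mostly feed the behaviour data rather than the verdicts.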

P: We have a few semi-intelligent lexical systems, mostly to analyse what gets reverted by the volunteers. Good-faith people are trying to keep the site clean. “Recent changes” is accessible to everyone on the left-hand side of the Wikipedia page, and if any user sees a bad change, they can undo it. Over time, we built up an automatic filtering system that uses this data to inform its actions.
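
One way to read that pipeline, as a rough sketch rather than Wikipedia’s actual implementation: volunteer reverts become labelled training data for a simple text classifier that flags likely-bad edits for human review. The example records and model choice here are assumptions.

```python
# Hypothetical sketch: train a classifier on volunteer revert decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record: the text an edit added, and whether volunteers reverted it.
history = [
    ("asdfjkl; BUY CHEAP PILLS HERE",             True),
    ("Added citation to the 2014 census report",  False),
    ("u r all idiots lol",                        True),
    ("Fixed a typo in the infobox",               False),
]
texts, reverted = zip(*history)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, reverted)

def flag_for_review(edit_text: str, threshold: float = 0.8) -> bool:
    """Flag an incoming edit for human attention; never auto-punish a person."""
    return model.predict_proba([edit_text])[0][1] >= threshold
```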

  • Clue Bot: Works from the information given to it by volunteers reverting changes to pages, and is trained to monitor the system. The biggest challenge is false positives; people might get the idea that they can’t edit openly, and the user base really values transparency. It identifies behaviour, not people, and the response is human. Once Clue Bot finds a bad edit made by someone, it puts a message up on that user’s page. If that happens three times, our volunteers respond by blocking the user outright.
  • Because of anonymous editing, it’s very easy to just set up a new account. Many times, initial vandals become editors eventually. Others become long-term abusers.
  • G: Do you do any IP telemetry? Do you filter by IP, deny by IP?
  • P: Only in our worst cases, and we try to keep it temporary.
  • S: Have you tried to quantify this reform process, where you transform vandals into editors?
  • P: It’s difficult because we can use user data for only a very tiny list of things and right now that research is not one of them.
  • W: What do you do about heated discussions and polarised topics, for instance Israel/Palestine? Have you built any tools from within the community conversations?
  • P: No, this is all still done by humans at a human rate with very little technical assistance. We have what’s called “arbitration.” Once an issue like Israel/Palestine goes to arbitration, it goes under arbitration enforcement, so you’re really limited in how much editing you can do once the page is under arbitration.
  • E: What about things that are less obviously complicated, like a feminist activist’s Wikipedia page?
  • P: Only one of our thirteen anonymous volunteer arbitrators is a woman. Imagine a Supreme Court of anonymous volunteers. Over time, this arbitration works better. There is an enforcement measure that was applied ONLY to GamerGate, called 500/30: you have to have 500 edits and your account has to have been active for 30 days in order to edit a GamerGate-related page. We don’t apply that kind of restriction anywhere else, so it was hugely controversial.
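
The 500/30 restriction is simple enough to state as a check. This is only an illustration of the rule as described above (500 edits and an account at least 30 days old); field names are assumptions, not MediaWiki’s actual schema.

```python
# Illustrative check of the "500/30" restriction on protected pages.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Account:
    edit_count: int
    registered: datetime

def may_edit_restricted_page(account: Account, now: Optional[datetime] = None) -> bool:
    now = now or datetime.utcnow()
    return (account.edit_count >= 500
            and now - account.registered >= timedelta(days=30))

# A two-week-old account is turned away even with plenty of edits.
newcomer = Account(edit_count=600, registered=datetime.utcnow() - timedelta(days=14))
assert not may_edit_restricted_page(newcomer)
```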

N: There was a question about the interaction between bots/software and crowds, and there is this question of recidivism. There is actually research by Justin Cheng at Stanford, based on Disqus, that looks at the effect of people getting downvoted. Who ends up having power is related to who is on the service and using it, as well as to this accountability question: who holds them accountable if they are doing things that may be harmful or challenging? One of the other initiatives of Wikipedia that I’d like to highlight moves beyond the response-to-harm approach and thinks about the people who may have had their edits deleted or who have received negative feedback from the crowd. There is a concern that overeager moderation on Wikipedia may be turning people away. So Wikimedia designed a program called Snuggle. Snuggle watches the moderators and, instead of focusing on content, it focuses on people. It identifies people who might have looked like vandals but who might be from a less-well-represented group and were rejected because their contributions were different from what the community was used to. Snuggle offers mentorship and support to try to bring that person back into Wikipedia. So this addresses the impact on people as well as questioning the crowd-moderation process.
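
A rough sketch of that person-focused triage, with invented heuristics (the real Snuggle uses more careful models): instead of scoring edits, score newcomers whose work was reverted and route plausibly good-faith ones to mentors rather than to sanctions.

```python
# Hypothetical sketch of Snuggle-style triage: look at people, not content.
from dataclasses import dataclass

@dataclass
class Newcomer:
    username: str
    edits: int
    reverted: int
    left_edit_summaries: bool   # weak signal of good-faith engagement
    asked_for_help: bool        # e.g. posted on a talk page

def triage(user: Newcomer) -> str:
    revert_rate = user.reverted / max(user.edits, 1)
    good_faith = user.left_edit_summaries or user.asked_for_help
    if revert_rate > 0.5 and not good_faith:
        return "watchlist"            # likely vandalism: leave to patrollers
    if revert_rate > 0.0 and good_faith:
        return "offer_mentorship"     # rejected contributor worth inviting back
    return "no_action"
```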

W: One of the aspects that is missing is the commitment of those who build platforms toward that culture. For instance, if I walk into Harvard and am rude to one person, others will be upset by that; that kind of experience is not appreciated by that crowd. Filtering out attitudes is really about culture, so first of all, there is a lot of weight on culture. What do you tell people as they sign up? One of the tools we’re experimenting with is taking a pledge before signing up. But one of the things that drove me crazy about Facebook’s algorithms is that they are designed for engagement. Constructive, thoughtful content gets much less engagement; the algorithm that is designed for advertising boosts the “Fuck You” post and shows it to more people. One of the things that Google did very well was PageRank: if CNN links to your site, you are probably a big deal, while a link from a random, smaller site won’t boost your content in the same way. How can you build a reputation model among people that does the same, without silencing minorities? Say, for example, that if your content is appreciated by a group of people who understand the specific topic you’re talking about, then that content will be shown to many more people, versus “Like rank.” You have to add some level of meritocracy into the game.
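
W’s contrast between “Like rank” and a PageRank-style meritocracy could look roughly like this, where an appreciation only boosts content in proportion to the appreciator’s standing on that topic. The weights and names below are assumptions for illustration only.

```python
# Hypothetical sketch: reputation-weighted ranking versus raw like counts.
# topic_reputation values would themselves be earned (and kept hidden from
# users, per the exchange below); here they are hard-coded for illustration.
def like_rank(likes: list[str]) -> float:
    return float(len(likes))                     # every like counts the same

def merit_rank(likes: list[str], topic_reputation: dict[str, float]) -> float:
    # A like from someone who understands the topic counts for more;
    # unknown users still count a little, so minorities aren't silenced outright.
    return sum(topic_reputation.get(user, 0.1) for user in likes)

reputation = {"security_researcher": 3.0, "longtime_contributor": 2.0}
thoughtful_post = ["security_researcher", "longtime_contributor"]
flame_post = ["rando1", "rando2", "rando3", "rando4"]

assert merit_rank(thoughtful_post, reputation) > merit_rank(flame_post, reputation)
assert like_rank(flame_post) > like_rank(thoughtful_post)
```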

  • G: It’s an intelligent reputational model that you’re talking about.
  • S: You turn those people into effective moderators.
  • W: Without them knowing. Exactly. You are part of a ranking algorithm, but you don’t know how powerful you are, so you don’t abuse your power.
  • W: Stack Overflow does a very good thing: you are no one. You have to prove yourself. You start at a certain level with very few privileges. If you behave, you are upgraded; if you misbehave, you are downgraded. One of the things we are experimenting with is telling people that they sign up as readers. Based on how you interact with the content, you get limited privileges to start commenting. If you write comments that are appreciated by the community, then you get the privilege to write posts. This makes it much more expensive to abuse the system, because you have a lot to lose. If you could get write access right away on any platform, you’d just create a fake account to do whatever you want, and if you get kicked out, you create a new fake account, and so on. The difference between traffic law in Egypt and in the US is interesting: if I run a red light in Egypt, I pay 100 pounds and that’s it; in the United States, your insurance also increases, so you’re double-taxed in a way. But that’s not what’s happening on Facebook and Twitter today. If you go in and troll, you get a lot of likes and attention. The post might be deleted, but your reputation on that platform is not affected, unless you really violate the law and then they ban you for three or four days. What we’re trying to build is that if you come in and troll once, your reputation is affected and you have to rebuild it over time (a rough sketch of this tiered model appears after this exchange).
  • S: Do you tell people that?
  • W: No. So it’s shadow banning. But at the beginning, you have them pledge that this is a place for learning and that you have to be respectful or else fewer people will see what you write.
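
A minimal sketch of the tiered-privilege model W describes (reader, then commenter, then poster), with a lasting reputation penalty for abuse. The tiers, thresholds, and penalty sizes are invented; this is the shape of the idea, not the system being built.

```python
# Hypothetical sketch: privileges earned through reputation, abuse is costly.
TIERS = [           # (minimum reputation, privilege)
    (0,  "read"),
    (10, "comment"),
    (50, "post"),
]

class Member:
    def __init__(self) -> None:
        self.reputation = 0          # everyone signs up as a reader: "you are no one"

    @property
    def privilege(self) -> str:
        earned = "read"
        for minimum, name in TIERS:
            if self.reputation >= minimum:
                earned = name
        return earned

    def appreciated(self, points: int = 1) -> None:
        self.reputation += points

    def trolled(self) -> None:
        # Unlike a deleted post or a short ban, the reputation hit persists
        # and has to be rebuilt over time, raising the cost of abuse.
        self.reputation = max(0, self.reputation - 25)

# A member who earned posting rights and then trolls drops back to commenting.
m = Member()
for _ in range(55):
    m.appreciated()
assert m.privilege == "post"
m.trolled()
assert m.privilege == "comment"
```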

Big Questions

E: Is there a solution for users to monitor the content that is associated with them on a platform?

W: Facebook created a tool where you can flag certain “bad words” you don’t want to see (a lexical filter), but it’s “shadow banning”: people don’t know their comment has been flagged and filtered from your view.
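
Such a lexical filter amounts to a per-viewer mute list. A minimal sketch under assumed names (not Facebook’s implementation): the comment stays visible to its author, but the viewer who flagged the words never sees it.

```python
# Hypothetical per-viewer lexical filter ("shadow" hiding): the author sees
# no change; the viewer who flagged the words simply doesn't get the comment.
import re

def visible_to(viewer_blocked_words: set[str], comments: list[str]) -> list[str]:
    def clean(comment: str) -> bool:
        words = set(re.findall(r"[a-z']+", comment.lower()))
        return words.isdisjoint(viewer_blocked_words)
    return [c for c in comments if clean(c)]

feed = ["Great point about moderation", "you absolute muppet"]
print(visible_to({"muppet"}, feed))   # ['Great point about moderation']
```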

G: Changing user behaviour at scale (punitive and non-punitive). Question: how do we tune the transitions between different sanctions? Ideas: thank people; try to avoid having people guess the system; an appeals process (“white glove” treatment); charge people to participate; make them take a test. Big question: behaviour change.

O: How can we add an element of empathy, so that people realise someone is on the receiving end of a message? How can we make people confront that?

W: What was the one great thing that platforms like Facebook and Twitter did to create positive vibes, and no negative vibes? Can we create negative-feedback systems that are constructive, where I don’t feel like the person is against me?

S: What’s more effective: interventions people know about, or interventions they don’t know about? (Do they know the policies? Do they know they were sanctioned? Do they know who is taking the action?)

S: Are there different types of behaviour to consider, such as chronic or professional abusers versus people who join a bandwagon?

S: There are “offline” interventions, like calling someone’s parents, and those should worry us.

S: Can we think of other mechanisms for bringing in wider social relations?

S: Crowds running amok.