Inclusive community failfest


Wikipedia has failed to create an inclusive community - its editors are roughly 85% male, and you need a thick skin to edit Wikipedia

Story: disputes on Wikipedia are forwarded to the Administrators’ noticeboard. The result is mayhem; abuse can go on for years. This is complicated by the belief that a lack of regulation is ideal, at least for user disputes.

Question: when does inclusivity become exclusivity?

  • Hacker space: meant to be inclusive, but then socially problematic people enter and can’t be forced out, so some people leave or are otherwise excluded
  • Story: the NYTimes allowed anyone to write the op-ed for the week; it became obviously bad
  • Wiki error: here’s a dispute, everyone come to a rough consensus about who’s wrong
  • Two separate ideas: what are the rules? how do we enforce the rules?
    • “Be excellent to each other” is a good rule, but enforcement is complicated

Problem: depending on the case, user disputes can also be content disputes (e.g., coverage of female authors on Wikipedia)

Story: Reddit just banned subreddits that fostered abuse

  • Immediately after the ban, there was an explosion of hate (thousands of similar subreddits, etc.)
  • It would have been better to figure out the solution BEFORE banning the subreddits
  • The wave has died down, but bad subreddits are still being created, so Reddit has to stay constantly vigilant (especially with a team of 6 people).
  • Takeaway - platform-wide decisions affect real employees and people

FB: every time you change a policy to allow for something, you allow a lot of bad things to happen

Story: There’s an FB group which accuses Jews of killing Christian babies and drinking their blood (blood libel). It was reported, but the reviewers weren’t sure whether it was allowable. FB is better at explaining policy than at responding to reports.

  • Potential solution: a separate page that shows who is making these rules (not all white men)
  • Counterpoint: several of Twitter’s policy people are visibly women from different groups, yet many people still picture Twitter’s rule-makers as white men

Solutions vary with platform/company size, both in terms of company response and public perception

Problematic: the view that companies put someone who isn’t a cis white male in a position of authority just for PR “success”

“Epic Fail” - Privacy

  • The BlockBot problem in the UK - entities which examine user data need to be registered in a special way
  • It’s difficult for companies to spot aggregate patterns when users delete their tweets before the company can read them
    • In many cases platform employees can’t access records because of legal obligations to the FTC

Problem: many failures are caused by human error. Someone at FB pressed the wrong button, and many outsiders have a hard time believing that FB isn’t just run by machines and algorithms

Transparency reports - budget and number of people working on these issues are often obscured

  • FB - doesn’t pursue total transparency (about data collection, community monitoring, etc.) because the outcome would be explosive on both sides (“FB is taking over our data and lives,” and also “you aren’t doing enough because you hate us”)

FB’s “real name” policy has a lot of implications for trans folks

  • There are groups which report or assemble data on trans profiles as a form of online harassment
  • Important to remember that when there’s a failure, an entire population can be disenfranchised, etc.
  • Problem: who are you writing policy for? FB says it designs security to help the most vulnerable person on the site (if a closeted LGBT teen dies, the parents don’t necessarily have the right to access their FB page)

FB does focus groups and talks to people who have been/might be affected by security and policy changes

Small organizations can “pivot”; large organizations mostly can’t (though Wikipedia can change faster than other large platforms because it has a culture of constant change)

Problem: many users don’t even know how to moderate their own FB page or Twitter, which allows for abuse and harassment

Story: over one weekend, Neopets removed all of its moderators and replaced them with insufficient word filters that don’t keep the forums safe
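To make concrete why word filters alone fall short, here is a minimal sketch of a naive filter and the trivial obfuscations that defeat it. The blocklist and messages are hypothetical, not Neopets’ actual system:

```python
import re

# Hypothetical blocklist; a real one would be much longer.
BLOCKED_WORDS = {"jerk", "scam"}

def naive_filter(message: str) -> bool:
    """Return True if the message passes a simple word-match check."""
    words = re.findall(r"[a-z]+", message.lower())
    return not any(word in BLOCKED_WORDS for word in words)

print(naive_filter("you jerk"))     # False: caught by the filter
print(naive_filter("you j e r k"))  # True: spacing slips past it
print(naive_filter("you j3rk"))     # True: so does a digit swap
```

This is the clever wordplay problem named in the action items below: determined users route around static filters faster than the filters can be updated, which is part of why content moderation is skilled labor rather than a string-matching exercise.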

Action items

  • Recognizing the difference between rules and process, and whether failures come from rules or process
  • A guide to the worst kinds of errors? - the Hydra problem, the clever wordplay problem, and the problem of people misunderstanding the wording of rules or terms of service
  • Avoiding the lazy tendency to apply fixes that work in one place to many different problems
  • Changing company hierarchy - problems arise because the people who have been with the company the longest, and are the most stubborn about change, end up in the highest positions of management
  • Valuing content moderation as skilled labor at scale