What reporting is like for non-Westerners


Sara Baker - works for an international NGO (APC) and runs the group Take Back the Tech, which supports women’s inclusion in tech all around the world. She did research on women in 7 countries (Bosnia, Congo, the Philippines, etc.)

  • Purpose was to look at the domestic policies of local and international telecom and internet firms in each area and create case studies
  • All of these countries had a law that women could use against online harassment, but its application in practice was shaky. Police didn’t really know what to do with the law, survivors were shuffled back and forth between different government offices, and wealthier or celebrity women were more successful
  • The problems documented in the research come up constantly
    • Platforms (Twitter, FB, etc.) don’t have reporting forms in every language (sometimes the directions for reaching the reporting form were in the local language, but the reporting form itself wasn’t)
    • You have to copy & paste the exact language into the report, but if the platform’s reviewers don’t read that language, they won’t know the meaning or cultural implications of the harassment (a woman in Pakistan had to copy abusive language in Urdu)
    • Some women felt/assumed that their reports were being rejected because of language, so they stopped reporting
    • Some women had to provide identification of who they were, in English (as a digital scan)
    • Women who aren’t on a platform are having things done against them on that platform, which makes reporting even harder
    • Communication problems between abroad and the US (a woman in Pakistan scanned and sent something to FB six times, but they never received it)
    • Women report abuse, and Twitter reviewers see it a different way (cultural reasons, language barriers, etc.)
    • In Bosnia, it is common during divorces for the man to make a slanderous FB page about the woman. Later, during the court case, custody and other issues are complicated because the page makes it look like the woman is a sex worker, etc.
  • Research revealed
    • The women were always taking action for the first time
    • Hollaback is providing a platform called HeartMob, which allows bystanders to report abuse directed at you on your behalf
    • Overriding feeling of “we want to do something, but we’re helpless.”
    • The laws are useful, but aren’t being used effectively due to a lack of capacity

Other comments:

  • In India, the biggest obstacle is getting the police to actually file your report and explaining to the police officer that the abuse is in fact harassment.
  • Solutions: non-policy responses (mobilization, etc.)
    • Woman harassed on LinkedIn. Instead of going to LinkedIn, she contacted the harasser’s company and got him in trouble.
  • Reviewers don’t see the full context, are limited by company guidelines, and must make a judgement with little information
  • A blasphemy accusation in the US isn’t a big deal, but the same accusation in Pakistan is serious and puts the woman in danger. US-based reviewers don’t understand how dangerous blasphemy accusations are and dismiss the reports.
  • In Pakistan, people reach out to those with connections (intellectuals, law enforcement) who “merit” a response; otherwise, average people don’t get a response.
  • People see platforms respond to government requests but not to their own (especially government take-downs of FB pages, etc.). This has implications for freedom of speech, etc.
  • In Turkey, a girl was abducted and a hashtag about the case went viral. It put pressure on the government and influenced the relevant government agencies
  • Countries need a victim’s advocate who can help with police, court hearings, etc. (emotional support, technical and bureaucratic knowledge)
  • Outside the Western world, people use WhatsApp in unique ways that quickly translate into offline consequences
    • In a movie theater, a WhatsApp message was sent to everyone in the audience saying that a Hindu and a Muslim teenager were there on a date (the senders had hacked the theater’s booking database, since everyone in India gets their movie tickets via phone)
    • Similarly, people sent videos of a cow being slaughtered to incite right-wing people to violence
    • In India, the right wing is much more organized online than other groups, which means that women and others who go online are in the minority and constantly threatened
    • In India, in response to allegations of rape by taxi drivers, there’s a way to let the GPS function in your phone track where you are. But how safe is that data?
  • “Safety apps” are not necessarily feminist - if people you trust can track you, then other people can too
  • This is also difficult because many people aren’t interested in attending workshops on privacy, etc.
    • Also problematic because it puts the onus on the victim

Question: should companies like Twitter police one-on-one messages (DMs) for abuse?

Corporations should be aware when governments ask them to take down content from the opposition

  • In Pakistan, government officials have used the blasphemy law to prosecute and *execute* people who say things online

Many laws that justify violence against women are written abstractly, in a “we need to protect these women and their virtue, etc.” way

Need?: a cross-platform way to report abuse to every platform at once (FB, Twitter, etc.)

Or at least a non-profit group that provides resources on the different ways to report

Twitter: no company is “pro-revenge porn” so these conversations are good

FB: we’re creating the problem by existing, so we need to solve it

Problem: we put up a video of my little niece dancing on the beach in a bikini. Some people reported it as borderline child porn. The question is whether we take down content based on intention, on content, or on its potential for abuse by others. There are also cultural questions (an older indigenous woman from Brazil had her picture taken down even though her picture wasn’t sexual).