2015 October Call
- This was a repeat call, once on October 14th and once on October 19th. It was posted to the Weaponized Social mailing list, to the International Workshop on Misogyny and the Internet list, and to an online harassment taskforce list. The invitation was extended by some on those lists to others who had expressed interest in the topic. We used Uber Conference to call in, which worked well with an international crowd, as it's accessible via phone and browser. Attendees had 2 full working days to review notes before they were transferred to this wiki.
Overview of how the call will work
- Agenda (intros of up to 30 seconds about you and your group, then longer updates on projects and organizational statuses, then wrap-up)
- Note taking (we all take notes for each other)
- Time keeping (if you go over, you will first be poked via chat, and then verbally interrupted)
Introductions (Who's here?)
30 seconds each : 15 minutes tops total : who you are, what sorts of stuff you work on, what sort of world you want to live in : If you just spoke, take notes for the person who speaks after you.
- Willow with Aspiration.
- Meredith with Nuance. Security! Want to live in a world where people have productive disagreements.
- TQ is hanging out with Meredith, but cooking.
- Jolly is a nomad interested in computer security and life extension. Wants to live in a world with tighter communities and less ageing/dying.
- Desiree - Hollaback and Heartmob. Free and Equal
- David - working on a book around these issues. Wants less tribalism and more skepticism
- Chinmayi - Bangalore, Bachchao project. Supports those enabling the fight against gender-based violence. Back to the world with butterflies, fewer humans.
- TQ - spend time with interesting people, half of whom I disagree with. The world I want to live in is one where it's safe to express and discuss unpopular opinions.
- Yvonne - Nairobi, Kenya based. Working on an initiative called SKIRTS. Wants a world with more peace and tolerance
- Susan - Works on ways of fostering conversations and reducing threats resulting from those conversations. World of passionate, provocative conversations without causing harm
- Emily - Founded and directs the UnSlut Project, against sexual shaming and bullying. Online harassment. Working on making these topics palatable and approachable to people who think it doesn't affect them, who coast through life... The future I envision is one in which it's trendy, expected, normalized to express ideas online without bringing in people's identities (leaving the option not to be scrutinized in that way if people prefer not to be)
- Andrea - works with the Tor Project. She would like to live in a world with less intimidating experiences(?)
- Willow - Wants to live in a world where we can have informed debate, where we combine solidarity with informed critique because we can't have one without the other and be healthy.
- Shireen - Want to live in a world where WoC aren't harassed online and off. Digital Sisters was the first organization to get women and girls of color online and into tech. 15 years later, we're still having that conversation, but now with challenges of these women being attacked Online. The other project is Stop Online Violence Against Women
- Ed - UMich School of Information. Past: i3 Detroit hackerspace, MIT Center for Civic Media. Other interests: equalizing power imbalances, decentralization, nonviolent communication. @elplatt
Project and Organizational Overviews
Quick summary: Andrea, TQ, and Meredith are working on a decentralized discussion platform. Want it to be censorship-resistant but autonomy-respecting: people don't need to host (or even encounter) things they don't want to. Difficult to balance. Draws from Reddit, but goes even a little further. Reputation systems, how people interact and how they say what they say. Making votes public and something you can subscribe to. If I down-vote a post, I might do so because it's spam or a bad argument, but people who have a social connection to me might want to use my votes to determine what they see. Want to address the issue of spam and the problem of pile-ons. Make it less of a contested space. There are balkanized discussion platforms where each has its own set of content and its own set of discussions. People who decide that the policies on one platform aren't tenable have trouble migrating to another one. Network effects are endemic to being social. Is there a way to store these perspectives so you might take different views of it? Moderate out parts of the discussion based on others' opinions, rankings, votes, and other meta-information. Side effect of taking all your data with you if you depart.
- We'll have a two-layer structure, with the transport (current project) being mainly concerned with censorship-resistant message delivery. The only scarce resource here is bandwidth/storage, so the only blocking we need is a minimal level of spam filtering to preserve network functionality.
- Next layer is edge moderation: how do we enable users to filter out the bullshit without using centralized platforms and creating a censorship chokepoint? Here the scarce resources are human time and emotional capacity to deal with high-conflict interactions.
- Consider Sarah Perry's delightful essay: http://carcinisation.com/2014/08/29/two-patterns/; nearly all current platforms are at one endpoint or the other of the intimacy gradient - can we fill in the middle?
- Contact: firstname.lastname@example.org
- Need: Front-end side of things. Discussed ideas about how users and sites could present to different people to different effect. Need mockups, visual brainstorming.
- Chinmayi can help with visual brainstorming
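The "public, subscribable votes" idea above could be sketched roughly as follows. This is a hypothetical illustration only: all names and data structures here are invented, and the real project's design and code are not shown.

```python
# Hypothetical sketch of edge moderation via subscribable public votes.
# Each reader picks whose downvotes to trust; no central authority
# decides what is visible.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    text: str

@dataclass
class Voter:
    name: str
    downvotes: set = field(default_factory=set)  # post ids, published openly

def filtered_feed(posts, subscribed_voters, threshold=1):
    """Hide a post once `threshold` of the voters this reader
    subscribes to have downvoted it."""
    visible = []
    for post in posts:
        downs = sum(1 for v in subscribed_voters
                    if post.post_id in v.downvotes)
        if downs < threshold:
            visible.append(post)
    return visible

# A reader who trusts alice's spam-flagging sees p2 filtered out,
# while readers with different subscriptions still see it.
alice = Voter("alice", downvotes={"p2"})
posts = [Post("p1", "bo", "interesting take"),
         Post("p2", "spammer", "BUY NOW")]
feed = filtered_feed(posts, [alice])
```

Because the votes themselves are public data rather than platform-internal state, the same vote stream could feed many different readers' filters, which is what makes the moderation "edge" rather than central.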
Quick Summary - Working on a book on technology and society, _Try, Catch, Finally_, on increasing balkanisation and growing extremism and intolerance on the internet. E.g., Facebook data showed some links shared by lefties, others by righties; Twitter had links shared by extremists but not moderates (!). Something healthy about the old Internet has been lost at the scale we're operating at today.
- have you looked at Jonathan Stray's work on polarization? Good data science there.
- yes, his stuff is good. Seb Benthall has had some ideas too.
- conflicting loyalties between shareholders/users are part of the problem, as is lack of transparency
- We're lacking a communal understanding even of words like "harassment." Dissolving into middle-school appeals to authority, or the moral-authority bludgeon.
- Need: a working definition of what "harassment" is (and is not)
In January, we're having a hackathon around gender-based violence. Looking at what people have built to counter online harassment. Also security training for how to protect yourself online. Also a wish list of what could be built?
- Contact: Chinmayi - email@example.com
- Need: References to existing security manuals. The ones which exist might not be good enough.
- Meredith can help pull those together. Some of this is going to be journalist-specific, but the opsec needs of journalists and people in DV situations are pretty similar...@runasandvik probably already has a lot of journalist type security training out there. Tactical Tech, too. They just released one specific to gender.
- Need: Resources?
- Willow to send doc of projects after asking permission
Create a space against sexism and online violence against women. Conducted various workshops, gotten positive responses. The challenge: when you create evidence on online violence, the numbers don't compare to offline workshops. But it ends up being the same people. Want to create provocative conversations without causing harm. A strategy towards that is challenging. We don't want to be fighting like tug-of-war; we want to blend in and create awareness. Cause mindsets to change, get people to realize the violence on and offline exists. Last month there was audio of a woman being raped. There was admiration for the man's prowess; her saying "no" was brushed off.
- Need: Support on creating provocative conversations without causing harm
- Can we help amplify in a way which protects the privacy of the women you're working with?
- We have different issues coming on board here. This started from WhatsApp and moved to Twitter. You could pick the name of the lady and the guy. Protecting privacy in that moment is tricky, as it's already out. But for human rights defenders and journalists, it's preventative measures. Maybe change behavior. No strategy in this case, though.
- Have you organized people to respond? Like with Take Back the Tech recently? It looked like so many people were outraged by the response you refer to. What would have happened if they'd organized a simultaneous response?
- It trended for a day, and as long as we were having somber conversations, there were even more folk drowning it out by making the conversation go in another direction. In most situations, women relate and share. But Twitter is 70% male-dominated, and a clear point will be lost in the noise.
- How do we grow a safe space which has started and is doing ok?
- Is this even possible? My intuition: no, due to differences in definitions of "safe"
Documentary film is being shown at a lot of colleges etc. 40 minutes, then a 30-minute discussion specific to that community. Self-selecting audience that wants to learn. Can we use these conversations to tie in these platforms? Ask people to try them out, or to bring their specific challenges. Email Emily with material to circulate for feedback. Use these screening events. Areas of focus are online harassment, slut shaming, and women's empowerment. People are asking about what is changing, what are the new tools that people are using to protect themselves?
Empirical work with McGill University to look for cases in which people with strong differences of opinion engage online without negative effect (the "golden conversation"). Somebody produces content that is seen as objectionable/hateful, someone responds trying to get that person to stop, and in some cases that first person is favorably affected. They are searching for these cases but don't have a tool for finding them at scale. In the meantime, data scientists in the group have been building classifiers, including for productive speech, using subreddits to train the software. Susan's ask is for ideas on how to find these "golden conversations", how to find more of them, and how to examine and analyze them as we find them.
- A conversation which is important to have. Trying to find that conversation. Take Back the Night looked at how the conversation turns towards violence (especially when aimed at a person of color, a woman, or a woman of color) in the US. What people often receive aren't actually words but a photo with words on it, so it's not classified as hate speech. Platforms can't do anything about it. That's unsettling to me. Often images of lynching.
- Cat and mouse game between those trying to pollute platforms with bad content and the various efforts of actors to push back on that.
- When images are involved, humans have to be in the loop, which is more costly in both money and time.
- Platforms rely on flags of content for inspection. But the report parsing is partly automated. (Tools first, humans second.)
- "Creative license" is also an aspect which is not often discussed. "Someone's creativity" might not be something we like, but it might be "art." One person's art is another person's threat. The target being WoC is not taken seriously.
- "Gender" as "women's issues" but a lack of cultural competency needs to be included
About to do a press conference focusing on cyberstalking, particularly the case of a university student. Blocking cam use on campus to curb cyberstalking. So who is responsible, the university or the students? E.g., the app ............ which is totally anonymous. More on the app to be captured at the conference next week on Tuesday. Working on the new SWATting bill, looking for a conversation on reacting and training. Want people trained on when a tool like SWATting is used as a harassment tactic. Looking for: information from these events shared out to networks
Many of the problems brought up need to be solved by local instances rather than centralized, authoritarian moderation. Not just one cat for a lot of mice. Some of my thoughts on safe space: we're trying to have a community which supports self-expression free of judgement and discrimination. But how then do safe spaces change mindsets or cultural problems, when mainstream culture doesn't even acknowledge that these are issues? What kind of large-scale changes do we expect?
Reflections on platform and process
Participants can have a running assessment of the platform and process in this area of the notes.
Do we want to make this more of a regular thing? Once a month or so? Leave some room for a topic of discussion. Create more time for someone like David to talk about their project, or just to dig in deeper on specific topics: some larger problem worth chewing on as a group. Encouraging folk who don't usually talk to talk more.