Looking at the structure of forms for reporting abuse online to companies (Twitter, Google, Facebook, etc.). Tech companies could be doing more proactive things to help victims → need to structure the form to get enough information about the situation.
Users outside of North America and Europe have an even harder time filling out the relevant forms → the forms are sometimes not even available in their native language. Also need to consider cultural differences, because certain terminology may not be appropriate in every situation.
There needs to be clear communication between the person reporting and the recipient of these reports. There is also no transparency about the workers processing these reports: who are they, how are they trained, what are they paid, etc.?
- Workers are reviewing disturbing online images and are not properly trained → risk of secondary trauma.
- Workers need to be supported → need to take into consideration the culture of the workplace.
- Some companies use team activities, debriefs, and training to support the mental health of workers. Workers could also rotate through positions so that they are not reviewing the same type of content at all times.
Problems with the webform:
- Question of where these reporting forms live → it is not always clear to users how to report abuse.
- You don’t have the back and forth that you would over the phone. You also have to think about whether the phone is tapped, the emails are monitored, etc.
- The form is structured to funnel information from the user in a way that makes the request easier to process → take into consideration the number of requests received and how much time the recipient on the other end can devote to one case.
- Victims are not good at explaining the context → need someone to tease out that context → blurring of online and offline abuse. Sometimes without the context of what is happening offline, it doesn’t seem like there is an immediate threat.
- Users also try to self-escalate, which means that sometimes their request is sent to the wrong queue (for example, they report their issue as child pornography when their problem really falls under a different category), which further delays the process.
- The language used in the form (YouTube is an example) is not user/victim friendly → at times condescending, telling the user that the video will be removed but also telling them to be patient. One thing to consider is that these tech companies need more support staff.
There is a distinction between the process and the policy. Some reports may not be against company policy (the “roses” example: a user asks for action against a contact who sent a picture of roses, but without context we have no idea whether this is an actual threat).
- Remember that people also use the reporting system to abuse others. Companies need to be sure that an actual threat or violation has occurred.
Also, blocking a user who is abusing you online is not the best action, because: 1. You can’t gather the evidence against them that you need to fight them in court; in some ways it is best that abuse happens on a public platform because it is easier to track. 2. There are social consequences: by blocking, you are saying it is OK that the abuse is happening instead of fighting it.
- YouTube Content ID example: is there a technological solution? You have to be careful, though, because it can be overbroad, as in the copyright example.
- 911 button → a way to report immediately. But what if you have to talk to a real person? There could be a taboo around talking to an actual human being.
- Platforms should be legally required to reply within a certain period of time.
- Need a system that gets the empathy part right even if under company policy there is nothing that can be done → no content removed.
- Victims need to feel that they can have a conversation with a person → makes them feel supported.
Ideas: Victim advocate service
- What about victim advocates? People could use a victim advocate to talk to the police, since dealing with the police directly does not always work. An advocate gives you the opportunity to tell your story, connects you with resources, and supports you in court.
- What if the platforms funded the victim advocates, so that you wouldn’t need the online reporting forms? Victim advocates could triage the people coming through. This removes responsibility from the provider and lets them focus on removing content/making the decision. Victim advocates could also help with the process of filling out the form → help with the empathy.
- The advocates would ideally be knowledgeable about all of the platforms/technologies. Advocates would help users fill out forms → help them identify the problem and navigate the reporting process, etc.
- Trusted flaggers are already in place, so there is an existing process to build on.
- Another benefit is that training and mental health services already exist for advocates.
- The option to talk to advocates might also discourage people from abusing the system, because there are more consequences when you have to talk to an actual person.
- Good to acknowledge that technology cannot solve all problems. Is this the platform’s fault, and is it the platform’s responsibility to fix the problem?
- Another way to ask this: do we want to assign this responsibility to these platforms? Is this where we want to focus our political efforts? What is most important for us to accomplish?
- Maybe there should be more social consequences. For example, if perpetrators were no longer able to go to the bar, hang out with their friends, etc., they would be less likely to become abusers. Note that in the technological space people can be banned from platforms, but they can use other services to abuse the same person.