Twitter has made many promises to do better. One of them is to automatically remove abusive tweets with hateful content. Another is autoblocking, a feature of Safety Mode that blocks abusive or spammy accounts from interacting with you for seven days. A third is a prompt asking users to rethink their tweet before they post it. But Twitter doesn't seem to care about abusive, threatening and hateful profile user handles - such as user handles containing rape references, glorifying pedophilia, or containing slurs.
This is a problem we have to deal with regularly when we report accounts. Take trolls like "Lemon", for instance, or members of The Shed, who create chat rooms with names like "rape room".
This is not only against the Twitter rules on abusive profile information; these trolls are also a safety risk to women and children, because their usernames encourage violence against women and children. Names like those in the screenshot in the embedded tweet encourage rape, pedophilia and violence. That is not OK, especially when women are increasingly becoming victims of domestic violence and murder.
A similar rule applies to hateful slurs, except it covers a broader range of targets than just women. Jewish people, Muslims, other religious groups, people of colour and members of the LGBTQA+ community are frequent targets of trolls with user handles that attack one of the groups above. This makes these groups feel unsafe on Twitter, and it also creates a hindrance for moderators, who have to go through every Twitter username that is reported to them.
Twitter not only needs to remove abusive usernames - it needs to block the creation of new accounts featuring abusive user handles. There are simple ways abusive usernames evade detection: take "Kate Hikes" (who is a troll), swap the first letter of the first and last names around, and an antisemitic user handle is the result.
If Twitter truly cared about its users, it would take action on accounts with abusive profile information. If Twitter cared, it would blacklist words and symbols (such as the Nazi swastika) that are not appropriate in a username or a profile display name, and tell new users that those words cannot be used in either. This is clearly feasible: I could not use "Twitter" in the profile display name of my @failedguideline account, so a filter of this kind already exists. Until Twitter takes action on abusive handles and spammy strings of digits, starts banning accounts with abusive profile information, and blacklists inappropriate words, imagery and symbols (especially those listed in the ADL database and MGTOW symbols), I will not take Twitter seriously when it comes to the rules on profile information.
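To show how little effort such a filter would take, here is a minimal sketch in Python of screening a proposed display name against a blocklist, including the first-letter-swap evasion described above. The blocklist entry and the names are hypothetical placeholders, not real slurs, and this is an illustration of the idea, not how Twitter's actual filter works.

```python
# Hypothetical blocklist; a real one would be far larger and maintained
# from sources such as the ADL's hate-symbol database.
BLOCKLIST = {"evil troll"}

def first_letter_swap(name: str) -> str:
    """Swap the first letters of a two-word name (the "Kate Hikes" trick)."""
    parts = name.split()
    if len(parts) != 2 or not all(parts):
        return name  # only applies to two-word names
    a, b = parts
    return f"{b[0]}{a[1:]} {a[0]}{b[1:]}"

def is_blocked(name: str) -> bool:
    """Reject a name if it, or its first-letter-swapped variant, is blocklisted."""
    candidates = {name.lower(), first_letter_swap(name).lower()}
    return any(c in BLOCKLIST for c in candidates)
```

Checking the swapped variant as well as the literal name means a troll cannot dodge the filter simply by transposing two letters; real systems would also normalise look-alike characters and spacing.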