LRT: One of the problems with Twitter moderation - and I'm not suggesting this is an innocent flaw that accidentally enables abuse, but rather that it's a feature, from their point of view - is that the reporting categories available to us do not match up to the rules.

Now, Twitter's actual policy is that wishes or hopes for death or harm are the same as threats. That policy has been in place for years. But there's no report category for hoping someone dies. You can only report it as a threat.
Which gives the moderator, who doesn't spend long on any individual tweet, mental leeway to go, "Well, there's no threat here," and hit the button for "no violation found."
They have a rule that says persistent misgendering or other dehumanizing language is not tolerated, but again - there is no reporting category for that. We have to report it as hate against a group.
So again, the moderator looks at that tweet, briefly and in isolation, and sees what, without context, might look neutral or matter-of-fact: a series of tweets referring to somebody consistently by the same set of pronouns, or a statement that somebody is a man or woman.
I've said this before, but having a rule against misgendering or otherwise dehumanizing trans people and not enforcing it is worse than having no rule.

Because the rule's existence creates the impression that we have protections we don't.
So the people who dehumanize, misgender, and wish death upon us get the best of both worlds - they can freely do it over and over again while proclaiming themselves censored martyrs to free speech. They can use the "power" our supposed "protected status" gives us to foment hate.
Their rules, their reporting tools, and their rulings all ultimately feel like they are each created/run by a different group of people who not only don't agree but haven't communicated with each other about what they're doing.
But again, that makes it sound like it's an innocent, well-intentioned mess, and even if at one point it started out that way (and I'm not saying that it did, I'm saying *if*), at this point it's been going on so long and has been pointed out to them so many times that it's deliberate.
It is a deliberate choice to keep running their system this way.

Meanwhile, people who aren't acting in good faith can, will, and DO game the automated aspects of the system to suppress and harm their targets.
They can run coordinated and/or bot-assisted mass reporting campaigns to make sure their complaints get escalated or the system automatically steps in and locks accounts.
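To make the gaming concrete: a moderation rule that escalates or auto-locks purely on report *volume* is trivially exploitable by a coordinated brigade. Here's a minimal sketch of that failure mode - the class name, threshold, and window are all hypothetical illustrations, not Twitter's actual system:

```python
from collections import defaultdict, deque

# Hypothetical naive auto-moderation rule: lock any account that
# receives more than THRESHOLD reports within WINDOW seconds.
THRESHOLD = 10
WINDOW = 3600  # seconds

class NaiveAutoLock:
    def __init__(self):
        self.reports = defaultdict(deque)  # account -> report timestamps
        self.locked = set()

    def report(self, account, timestamp):
        q = self.reports[account]
        q.append(timestamp)
        # Discard reports that have aged out of the window.
        while q and timestamp - q[0] > WINDOW:
            q.popleft()
        # The flaw: the rule counts reports, not *valid* reports,
        # so a brigade of sockpuppets can trip it at will.
        if len(q) > THRESHOLD:
            self.locked.add(account)

mod = NaiveAutoLock()
# One target, eleven coordinated reporters, all within seconds:
for i in range(11):
    mod.report("target_account", timestamp=i)
print("target_account" in mod.locked)  # → True
```

The point of the sketch is that nothing in the loop checks whether any report was legitimate; volume alone decides the outcome, which is exactly what mass-reporting campaigns exploit.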
Another side of this is that, for peoples who have historically been targeted for death, there are all sorts of references that are ready-made for making EXPLICIT DEATH THREATS that, to an untrained moderator looking at a tweet in isolation, might just seem like absurdism.
E.g., references to ways people died or had their corpses abused in the Holocaust, in slavery or Jim Crow America. References to lynching, to atomic bombings, to drone strikes.
And then, then we come to the fact that the people making the moderation decisions are exercising judgment, even on the stuff that "will not be tolerated".

A death threat is not supposed to be allowed on here even if it's a joke. That's Twitter's premise, not mine.
But a sizable chunk of Twitter's moderation pool has a hard time looking at, say, a straight white man threatening violence upon a woman, a gay person, a trans person, etc., and seeing it as serious. It's like background radiation. It's always there. Not alarming.
But anger, even without an explicit threat, from those groups directed against more powerful ones... that's alarming to the same people.
It's the Joker Principle. I know we're all sick of pop culture exegesis but I just can't let go of this one: "Nobody panics when things go according to plan."

A guy going "Haha get raped." is part of the plan. It's normal.

His target replying "FUCK OFF" is not. It's radical.
Things that strike the moderator as unusual, as radical, as alarming are more likely to get moderated.

Things that strike the moderator as "That's just how it is on this bitch of an earth." get a pass.
Helicopter rides. A trip to the lampshade factory.

And then the ultra-modern ones like "Banned from Minecraft in real life."

https://t.co/5xdHZmqLmM
And needless to say, all of this "confusion" and subjectivity in what are supposedly objective, zero tolerance rules that apply to everybody... they give people who *want* to protect and promote fascism and violence through moderation a lot of cover.


So let's check in on "Newsguard," one of the Orwellian groups (e.g., The Atlantic Council) that totally reliable sites like @voxdotcom and @axios use to decide what is "Unreliable" and "fight disinformation."

One example:

OK, so "The Daily Wire" and "https://t.co/oEa89coNak" are unreliable. Fair enough, maybe they are (I don't use either one of them).

So let's look into one of our new official arbiters of "reliability," Newsguard!

What's their advisory board look like?

https://t.co/5N8op70VE1


OK, so maybe a few names jumped out at you immediately, like, oh I don't know, (Ret.) General Michael Hayden, former Director of the CIA AND former Director of the National Security Agency in the run-up to the Iraq War in 2003! Google him, he's famous!


Newsguard is all about "seeing who's behind each site" (like how Michael Hayden is behind Newsguard?)

All they want to do is fight "misinformation." That's laudable, right?

Also, Newsguard has a "24/7 rapid response SWAT TEAM!!"

So cool!
https://t.co/EDN3UXvBR9


Ok, I'm not a journalist or a former CIA director, so I have no idea what's true or not unless someone tells me, so hey, Columbia Journalism Review - what do you think of Newsguard Advisory Board Member Michael Hayden?
1/ Creating content on Twitter can be difficult. A thread on the stack of tools I use to make my life easier

2/ Thread writing

Chirr app

Price: Free

What I like: has a nice blank space for drafting and a good auto-numbering feature

What I don't: have to copy and paste tweets into Twitter after thread is drafted and can't add pics

https://t.co/YlljnF5eNd


3/ Video editing

Kapwing

Price: Free

What I like: great at pulling vids from youtube/twitter and overlaying captions + different audio on them

What I don't: Can't edit content older than 2 days on the free plan

https://t.co/bREsREkCSJ


4/ Meme making

Imgflip

Price: Free

What I like: easiest way to caption existing meme formats, quickly

What I don't: limited fonts

https://t.co/sUj13VlPiO


5/ Inspiration

iPhone notes app

Price: Free

What I like: no frills & easily accessible. every thread i write starts as an idea in notes

What I don't: difficult to organize
