LRT: One of the problems with Twitter moderation - and I'm not suggesting this is an innocent flaw that accidentally enables abuse, but rather that it's a feature, from their point of view - is that the reporting categories available to us don't match up with the rules.
Now, Twitter's actual policy is that wishes or hopes for death or harm are the same as threats. That policy has been in place for years. But there's no report category for hoping someone dies. You can only report it as a threat.
Which gives the moderator, who doesn't spend long on any individual tweet, mental leeway to go, "Well, there's no threat here," and hit the button for "no violation found".
They have a rule that says persistent misgendering or other dehumanizing language is not tolerated, but again - there is no reporting category for that. We have to report it as hate against a group.
So again, the moderator looks at that tweet briefly, in isolation, and sees what, without context, might look neutral or matter-of-fact: a series of tweets referring to somebody consistently by the same set of pronouns, or a statement that somebody is a man or a woman.