We’ve recently seen research about so-called “bots” and misinformation on Twitter and wanted to share our perspective on why findings that might seem remarkable at first are likely inaccurate. We’re working on a more detailed explanation, but here are some comments for now.

We continue to be excited by the research opportunities that Twitter data provides. Our service is the largest source of real-time social media data, and we make this data available to the public for free through our public API. No other major service does this.
Many researchers, academics, and journalists use our public API — a set of tools for programmatically accessing information on Twitter. We make all public Twitter content available via our APIs. You can learn more about them here: https://t.co/QJQ0USRvI2
The basic issue with much of the research based on our public APIs is simple: The APIs don't provide insight into our defensive actions to protect Twitter from manipulation, including bots.
Because of this, API-based research can't distinguish between accounts we've already identified as bad (and hidden or removed) and real, authentic ones.
This means that our primary actions here — challenging, filtering, and removing bad actors before they have a chance to disrupt people's experience on Twitter — are not reflected.
Why not include this data? Because doing so would make it easier for bad actors to get around our defenses. https://t.co/Q5yweOXc1x
Let’s take a step back and look at the issue of “bots” in general. Even among researchers, there’s little agreement about what “bot” means. It’s a term used to refer to everything from accounts that post automatically to spammers to real people who Tweet something controversial.
The lack of understanding of what a “bot” is and is not contributes to fear, uncertainty, and distrust — in short, unhealthy conversations.
The same way we sometimes see people dismissing facts as "fake news," we also see real people labeling each other as bots rather than engaging with each other — to the detriment of the public conversation.
We've also seen bot detectors and dashboards created by commercial entities, which claim conversations are full of bots, seemingly in an effort to boost their own business models.
When we talk about bots, we mean accounts engaged in platform manipulation and spam. Even then, identifying bots using only public data is very difficult.
Since nobody other than Twitter can see non-public, internal account data, third parties using Twitter data to identify bots are doing so based on probability, not certainty.
One of the most common signals used to predict whether someone is a bot is how often they Tweet, or how many times they Retweet. The obvious problem there is that people who are passionate about politics, sports, or music also Tweet a lot.
Some people only Retweet. There are lots of different ways to use Twitter, and labeling certain uses “bot-like” is unhelpful. Other signals, like political views, the presence of a profile photo, frequency of Retweets, or number of followers, seem obvious but are not clear-cut.
These behaviors differ globally, across age groups, language usage, and people’s individual choices about their own privacy and self-expression online.
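To make concrete why volume-based signals fail, here is a minimal, hypothetical sketch of a frequency-only heuristic. The threshold and the example posting rates are invented for illustration; they are not Twitter's actual rules or data.

```python
# Hypothetical sketch: a "bot" heuristic based only on posting frequency.
# The 50-tweets-per-day threshold and the example rates below are invented.

def naive_bot_flag(tweets_per_day: float, threshold: float = 50.0) -> bool:
    """Flag any account that posts above a fixed daily rate as a 'bot'."""
    return tweets_per_day > threshold

# A scripted account posting headlines four times an hour is flagged...
assert naive_bot_flag(96.0)
# ...but so is a human live-tweeting a tournament (false positive)...
assert naive_bot_flag(80.0)
# ...while a low-volume automated account slips through (false negative).
assert not naive_bot_flag(10.0)
```

Any fixed cutoff on volume alone sweeps up passionate humans and misses quiet automation, which is the point: the signal measures enthusiasm as readily as it measures automation.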
Many of the common “bot detectors” or “troll hunters” use machine learning techniques to return a “bot score.” What does this actually mean? The answer is very little.
In order to train a machine learning model, you have to start with a training set of users you “know” are bots, so the model can predict whether other users are similar to or different from them.
These tools and approaches are deeply flawed. In our experience, most people aren’t very good at identifying bots from public information alone.
The end result is a staggering margin of error, and one that builds in preconceptions and biases about Tweet volume, political beliefs, and user behavior. These issues rarely make it into media reports, but are often the reasons why some numbers are surprisingly large.
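The pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not any real detector: the features, the hand-assigned labels, and the nearest-centroid scoring are all invented for the example. What it shows is that a "bot score" is only a similarity measure against whatever accounts the labelers *believed* were bots.

```python
# Hypothetical sketch of a third-party "bot score": train on accounts the
# labelers believe are bots, then score everyone else by similarity.
# Every feature, label, and account below is invented for illustration.
import math

# Features: (tweets_per_day, retweet_fraction), with a human-assigned label.
# If the labelers' guesses are biased, the model inherits that bias.
training = [
    ((120.0, 0.9), 1),  # labeled "bot"
    ((150.0, 0.8), 1),  # labeled "bot"
    ((5.0, 0.2), 0),    # labeled "human"
    ((10.0, 0.3), 0),   # labeled "human"
]

def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

bot_center = centroid([f for f, y in training if y == 1])
human_center = centroid([f for f, y in training if y == 0])

def bot_score(features):
    """Score in (0, 1): closer to the labeled 'bots' -> higher.
    It is a similarity-based probability, not a determination of fact."""
    gap = math.dist(features, bot_center) - math.dist(features, human_center)
    return 1.0 / (1.0 + math.exp(gap))

# A passionate human posting 100 times a day scores near 1.0, because the
# only pattern the model has learned is "high volume means bot".
assert bot_score((100.0, 0.7)) > 0.9
assert bot_score((6.0, 0.25)) < 0.1
```

The number the tool reports is just distance from a hand-labeled seed set; if that seed set encodes assumptions about volume or behavior, every downstream "bot" count inherits them.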
Much of what is being presented as categorical findings is in fact an extrapolated guess and not even close to being accurate. There isn’t really a bot behind every flag. This concern was articulated by one leading researcher in this BuzzFeed piece: https://t.co/WqydQjiYIE
We continue to be committed to enabling academic research, at scale, using Twitter data. Our policies are written to support this work — including when the results are unflattering to Twitter.
However, we believe that to protect our efforts promoting healthy public conversations, there’s a need to speak up here — a lot of this “bot research” is not peer reviewed and not reflective of the facts on any level.
These types of studies, which are covered widely in the media, do not stand up to scrutiny, and they undermine the healthy public conversation that is our singular mission as a company.
Oh, and if you see a suspicious account, use our new reporting feature and let us know. It helps our work to make this place better for everyone. Thanks for reading. https://t.co/kypOkCyWk9
