Thank you, members of the Commerce Committee, for the opportunity to speak with the American people about Twitter and §230. My remarks will be brief so we can get to questions. §230 is the most important law protecting internet speech. Removing §230 will remove speech from the internet.

§230 gave internet services two important tools. The first provides immunity from liability for users’ content. The second provides “Good Samaritan” protections for content moderation and removal, even of constitutionally protected speech, as long as it’s done “in good faith.”
That concept of “good faith” is what’s being challenged by many of you today. Some of you don’t trust that we’re acting in good faith. That’s the problem I want to focus on solving. How do services like Twitter earn your trust? How do we ensure more choice in the market if we don’t?
There are three solutions we’d like to propose to address the concerns raised, all focused on services that decide to moderate or remove content. They could be expansions to §230, new legislative frameworks, or a commitment to industry-wide self-regulation best practices.
The first is requiring a service’s moderation process to be published. How are cases reported and reviewed? How are decisions made? What tools are used to enforce? Publishing answers to questions like these will make our process more robust and accountable to the people we serve.
The second is requiring a straightforward process to appeal decisions made by humans or algorithms. This ensures people can let us know when we don't get it right, so that we can fix any mistakes and make our processes better in the future.
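
To make this appeal requirement concrete, here is a minimal sketch in TypeScript of how an appealable decision and an appeal record might be modeled. Everything in it, the types, fields, and function names, is a hypothetical illustration, not Twitter’s actual interface.

```typescript
// Hypothetical sketch only: none of these types or names are Twitter's real API.
interface ModerationDecision {
  caseId: string;
  action: "remove" | "label" | "suspend";
  decidedBy: "human" | "algorithm"; // disclosed, so users know what they are appealing
  policyCited: string;              // link to the published rule that was applied
}

interface Appeal {
  caseId: string;
  userStatement: string; // why the user believes the decision was wrong
  submittedAt: Date;
}

// Filing an appeal re-queues the case; a service could route algorithmic
// decisions to a human reviewer, and human decisions to a second reviewer.
function fileAppeal(decision: ModerationDecision, statement: string): Appeal {
  return {
    caseId: decision.caseId,
    userStatement: statement,
    submittedAt: new Date(),
  };
}
```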
And finally, much of the content people see today is determined by algorithms, with very little visibility into how they choose what to show. We took a first step in making this more transparent by building a button to turn off our home timeline algorithms. It’s a good start.
We’re inspired by the market approach suggested by Dr. Stephen Wolfram before this committee in June 2019. Enabling people to choose algorithms created by third parties to rank and filter their content is an incredibly energizing idea that’s in reach. https://t.co/Oavx4xVskC
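
As a rough illustration of what Dr. Wolfram’s proposal could look like in practice, here is a short TypeScript sketch of a pluggable ranking interface. The names and shapes are assumptions for illustration, not anything Twitter has shipped.

```typescript
// Hypothetical sketch of algorithmic choice; all names are illustrative only.
interface Post {
  id: string;
  authorId: string;
  createdAt: Date;
}

// Any third-party developer could publish an implementation of this interface.
interface RankingAlgorithm {
  name: string;
  provider: string; // an independent developer, researcher, or the platform itself
  rank(candidates: Post[]): Post[];
}

// The "turn off the algorithm" button above is the simplest possible
// implementation: plain reverse-chronological order.
const chronological: RankingAlgorithm = {
  name: "Latest",
  provider: "built-in",
  rank: (posts) =>
    [...posts].sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime()),
};

// The service hosts the content and applies whichever algorithm the user chose.
function buildTimeline(candidates: Post[], choice: RankingAlgorithm): Post[] {
  return choice.rank(candidates);
}
```

Under a model like this, hosting and baseline moderation stay with the service, while ranking becomes a layer people can choose in an open market.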
Requiring 1) moderation processes and practices to be published, 2) a straightforward process to appeal decisions, and 3) best efforts around algorithmic choice are suggestions to address the concerns we all have going forward. And they’re all achievable in short order.
It’s critical that, as we consider these solutions, we optimize for new startups and independent developers. Doing so ensures a level playing field and increases the probability that competing ideas can help solve these problems. We mustn’t entrench the largest companies any further.
Thank you for the time, and I look forward to a productive discussion to dig into these and other ideas.


I could create an entire Twitter feed of things Facebook has tried to cover up since 2015. Where do you want to start, Mark and Sheryl? https://t.co/1trgupQEH9


Ok, here. Just one of the 236 mentions of Facebook in the under-read but incredibly important interim report from Parliament. ht @CommonsCMS
https://t.co/gfhHCrOLeU


Let’s do another, this one to Senate Intel. Question: "Were you or CEO Mark Zuckerberg aware of the hiring of Joseph Chancellor?"
Answer: "Facebook has over 30,000 employees. Senior management does not participate in day-to-day hiring decisions."


Or to @CommonsCMS: Question: "When did Mark Zuckerberg know about Cambridge Analytica?"
Answer: "He did not become aware of allegations CA may not have deleted data about FB users obtained through Dr. Kogan's app until March of 2018, when
these issues were raised in the media."


If you prefer visuals, watch this short clip after @IanCLucas rightly expresses concern about a Facebook exec failing to disclose info.


The entire discussion around Facebook’s disclosures of what happened in 2016 is very frustrating. No exec stopped any investigations, but there were a lot of heated discussions about what to publish and when.


In the spring and summer of 2016, as reported by the Times, activity we traced to GRU was reported to the FBI. This was the standard model of interaction companies used for nation-state attacks against likely US targets.

In the spring of 2017, after a deep dive into the Fake News phenomenon, the security team wanted to publish an update that covered what we had learned. At this point, we didn’t have any advertising content or the big IRA cluster, but we did know about the GRU model.

This report went through dozens of edits as different equities were represented. I did not have any meetings with Sheryl on the paper, but I can’t speak to whether she was in the loop with my higher-ups.

In the end, the difficult question of attribution was settled by us pointing to the DNI report instead of saying Russia or GRU directly. In my pre-briefs with members of Congress, I made it clear that we believed this action was GRU.
1. Project 1742 (EcoHealth/DTRA)
Risks of bat-borne zoonotic diseases in Western Asia

Duration: 24/10/2018-23/10/2019

Funding: $71,500
@dgaytandzhieva
https://t.co/680CdD8uug


2. Bat Virus Database
Access to the database is limited only to those scientists participating in our ‘Bats and Coronaviruses’ project
Our intention is to eventually open up this database to the larger scientific community
https://t.co/mPn7b9HM48


3. EcoHealth Alliance & DTRA Asking for Trouble
One Health research project focused on characterizing bat diversity, bat coronavirus diversity and the risk of bat-borne zoonotic disease emergence in the region.
https://t.co/u6aUeWBGEN


4. Phelps, Olival, Epstein, Karesh - EcoHealth/DTRA


5. Methods and Expected Outcomes
(Unexpected Outcome = New Coronavirus Pandemic)