Observing the public conversation around FB, and the private ones happening among techies and ex-FBers, I think the mutual misunderstanding is worse than when I set out two years (and 500 pages) ago to (in a small way) bridge that gulf.

We're basically fucked.

The tech world has gotten so huge, self-reinforcing, and insulated from reality that it can no longer even vaguely see itself (and its actions) as others do. Techies just live on a different planet than most people.
Conversely, the average tech consumer doesn't understand the technology that has slowly taken over their lives, and their designated emissaries to figure it out--politicians, pundits, regulators, journalists--understand it barely better than they do, and have their own agendas.
To get beyond generalities for a moment, here's what I think is likely the core problem.

Techies take weird, improbable visions, and make them realities: some BS pitch deck to a VC, mixed with money and people, really does turn into some novel thing.
Most people work inside a legacy industry that's evolved that way over time (usually for good reasons), and they think about the future via some analogy with their present (which is itself a function of a long-ago past). The disruption that tech will introduce is often hard to grasp.
We have techies who are technically skilled and motivated, but who (and I'd be the first to admit it) often have narrow educations that don't let them see a bigger picture. And we have people who live in the world who don't understand technological implications. That's our mess.
Let's take a concrete example of how this shakes out. Not sure why I'm doing this, as I'm done with screaming into the FB media tornado, but I'll just go ahead:
Disinformation is not a solvable problem. It is here to stay, forever. Every election in the foreseeable future will feature massive amounts of user-generated disinformation. The only hope is to culturally adapt, as we did to other weird aspects of social media.
Why do I say that?

Facebook & Co. can take on the most egregious disinformation examples, or efforts undertaken by identifiable state actors (maybe), but they will never be able to shut it down entirely.
Assuming some semblance of free speech, ubiquitous online identity, some amount of engagement-optimized distribution (even if crude and self-selected, like on WhatsApp), and global reach, we will always have it, full stop.
No techie I've spoken to--I'm talking people who've spent years inside FB or TWTR--thinks it's solvable at scale, and anyone who says so is blowing smoke up your ass.

Why do I feel confident in this assertion (that I'm sure will get trolled)?
Remember privacy? Remember how that was the biggest angle on the FB story, and how many rivers of electrons were spilled in talking about it?

Where'd that end up? Nowhere. We got GDPR, which is pointless, and if anything solidified FB/GOOG's position in Europe. Ditto CCPA.
Privacy didn't get 'solved', we merely shifted culturally to accommodate new notions of it, and now we don't think about it much (even the Privacy Industrial Complex that made a career out of this has pivoted to being a new Disinformation Industrial Complex).
Think I'm being glib and dismissive? Let's take a historical perspective.

If you sat down to a meal in the 80s, and took out a camera and took a photo of your food, while telling everyone you were sending copies to your friends, you'd have been locked up in an insane asylum.
And yet now 'Stories' (which FB ripped from Snap) is basically that, and it's one of the platform's most popular features.

The Beacon scandal that blew up FB in the late aughts now seems like a joke. People got worked up over that?

We'll read the current disinformation coverage the same way.
You can see the shift in polling by generation cohort. Those raised in a world where smartphones and ubiquitous sharing are just givens think about it very differently.

It's the bridge generation (looks in mirror) that's mostly freaking out about it.

https://t.co/LqB2xNe7Cw
Note, I'm not dismissing disinfo complaints. It's clearly a real problem that's produced human suffering in places like India or Brazil. I'm questioning our ability to do anything about it at scale, while still maintaining the technology as it is (i.e. forget Butlerian Jihads).
Nor am I saying there's *nothing* anyone can do about it. FB policing political advertisers (or trying to, anyhow) much more severely *is* a solvable problem, and one they should undertake (and be taken to task if they slip). But that gets back to my earlier point....
Which is that it's hard for anyone to discern what's worth worrying about across this immense gulf. The techies don't see the bigger picture, the public doesn't see the disruptive vision, and the chattering classes are wrapped up in exploiting the very spectacle they claim to deride.
So, we'll muddle through, as we've always done. It'll get worse before it gets better. Mistakes will be made, and then doubled down on, again and again.
We as a species are dumb. We don't learn anything, and only technical and scientific knowledge is cumulative.
Doubt me? Compare the conversations on this service with one of Socrates' dialogues. Are we smarter now? More respectful in dialogue, more clever in our conclusions? I don't think so. We (or some of us) just know how to make things like smartphones now. Best of luck. We'll need it.


There has been a lot of discussion about negative emissions technologies (NETs) lately. While we need to be skeptical of assumed planetary-scale engineering and wary of moral hazard, we also need much greater RD&D funding to keep our options open. A quick thread: 1/10

Energy system models love NETs, particularly for very rapid mitigation scenarios like 1.5C (where the alternative is zero global emissions by 2040). More problematically, they also lean on tons of NETs in 2C scenarios where NETs are less essential.
https://t.co/M3ACyD4cv7 2/10


In model world the math is simple: very rapid mitigation is expensive today, particularly once you get outside the power sector, and technological advancement may make later NETs cheaper than near-term mitigation after a point. 3/10
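That cost-optimizing logic can be sketched with toy numbers (all figures below are invented for illustration, not drawn from any actual energy model):

```python
# Toy illustration of why cost-optimizing models defer mitigation to
# future NETs: a discount rate makes later spending look cheaper than
# spending now. All numbers are made up for illustration only.

def present_value(cost, year, discount_rate=0.05, base_year=2020):
    """Discount a future cost back to the base year."""
    return cost / (1 + discount_rate) ** (year - base_year)

# Abating one tonne of CO2 today in a hard-to-decarbonize sector
# at an assumed $300/t:
mitigate_now = present_value(300, 2020)

# Betting instead on NETs at an assumed $150/t in 2050:
nets_later = present_value(150, 2050)

print(f"Mitigate now: ${mitigate_now:.0f}/t (present value)")
print(f"NETs in 2050: ${nets_later:.0f}/t (present value)")
# The later option looks several times cheaper on paper -- which is
# exactly the moral hazard: the cheap future technology may never
# materialize at scale.
```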

This is, of course, problematic if the aim is to ensure that particular targets (such as well-below 2C) are met; betting that a "backstop" technology that does not exist today at any meaningful scale will save the day is a hell of a moral hazard. 4/10

Many models go completely overboard with CCS, projecting a future resurgence of coal and a large share of global primary energy paired with carbon capture. For example, here is what the MESSAGE SSP2-1.9 scenario shows: 5/10


Nano Course On Python For Trading
==========================
Module 1

Python makes it very easy to analyze and visualize time series data when you're a beginner. It's even easier when you don't have to install Python on your PC (that's why it's a nano course: you'll learn Python on the go). You will not be required to install Python on your PC; instead you'll use an amazing Python editor, Google Colab. Visit
https://t.co/EZt0agsdlV
https://t.co/EZt0agsdlV

This course is for anyone out there who is confused, frustrated, and just wants this python/finance thing to work!

In Module 1 of this Nano course, we will learn about:

# Using Google Colab
# Importing libraries
# Making a Random Time Series of Black Field Research Stock (fictional)

# Using Google Colab

Intro link is here on YT: https://t.co/MqMSDBaQri

Create a new Notebook at https://t.co/EZt0agsdlV and name it AnythingOfYourChoice.ipynb

You've got your notebook ready, and now the game is on!
You can add code in these cells, and add as many cells as you want.

# Importing Libraries

Imports are pretty standard, with a few exceptions.
For the most part, you can import your libraries by running the import statement.
Type this into the first cell you see. You need not worry about what each of these does; we will understand them later.
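The text above refers to a first cell that isn't reproduced here. For a beginner time-series course in Colab, that cell is almost certainly some variant of the standard data-analysis imports; the exact library list is my assumption, not from the original. The same sketch also previews the module's "random time series" step for the fictional Black Field Research stock, as a simple random walk:

```python
# Typical first-cell imports for a Colab time-series notebook.
# The exact list is an assumption -- the course text doesn't show the cell.
import numpy as np    # numerical arrays and random number generation
import pandas as pd   # DataFrame / time-series handling

# Sketch of the "random time series" step: a daily random walk for the
# fictional Black Field Research (BFR) stock.
rng = np.random.default_rng(seed=42)          # seeded for reproducibility
dates = pd.date_range("2020-01-01", periods=250, freq="B")  # business days
daily_returns = rng.normal(loc=0.0005, scale=0.02, size=len(dates))
bfr = 100 * pd.Series(1 + daily_returns, index=dates).cumprod()

print(bfr.head())  # first five simulated closing prices
```

In Colab, the natural next step would be plotting this series with matplotlib, which the notebook renders inline.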