Last up in Privacy Tech for #enigma2021, @xchatty speaking about "IMPLEMENTING DIFFERENTIAL PRIVACY FOR THE 2020 US CENSUS"

Differential privacy was invented in 2006. That seems like a long time ago, but it isn't long for a fundamental scientific invention: it took longer than that to get from the invention of public-key cryptography to the first version of SSL.
But even in 2020, we still can't meet user expectations.
* Data users expect consistent data releases
* Some people call synthetic data "fake data", like "fake news"
* It's not clear what "quality assurance" and "data exploration" mean in a DP framework
We just did the 2020 US census
* required by the Constitution to collect it
* but required by law to maintain privacy
But that's hard! What if there were 10 people on the block, all the same sex and age? If you published exact counts like that, you would know the sex and age of everyone on the block.
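[To make the risk concrete, a toy sketch in Python with made-up numbers, not real Census data: when a marginal count equals the block total, that attribute is pinned down for every resident.]

```python
# Hypothetical published table for one block (made-up numbers).
block_counts = {"total": 10, "male": 10, "age_30": 10}

# Any attribute whose count equals the block total is determined for
# *every* resident -- exact small-area counts leave nothing hidden.
for attribute, count in block_counts.items():
    if attribute != "total" and count == block_counts["total"]:
        print(f"Every resident on this block is {attribute}.")
```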
Previously used a method called "swapping" with secret parameters
* differential privacy is open and we can talk about privacy loss/accuracy tradeoff
* swapping assumed limitations of the attackers (e.g. limited computational power)
Needed to design the algorithms to get the accuracy we need and tune the privacy loss based on that.
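[A minimal sketch of tuning that trade-off -- the textbook Laplace mechanism on a single count, not the Census Bureau's actual TopDown algorithm; `laplace_count` and the epsilon values are illustrative choices:]

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_count(true_count: float, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise of scale 1/epsilon works.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon = less privacy loss but noisier published counts.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {laplace_count(100, eps):.1f}")
```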

Differential privacy changes the meaning of "privacy" to something relative rather than absolute -- that requires a lot of explanation and overcoming organizational barriers.
By 2017 they thought they had a good understanding of how differential privacy would fit -- just use the new algorithm where the old one was used, to create the "microdata detail file".
Surprises:
* different groups at the Census thought that meant different things
* before, states were processed as they came in; differential privacy requires everything to be computed at once
* required a lot more computing power
* a differential privacy system has to be developed with real data; you can't use simulated data, because the algorithms in the literature weren't designed for data anywhere near as complex as the real data (multiracial people, different kinds of households, etc.)
* to understand the privacy/accuracy trade-off requires a lot of runs, representing a *lot* of computer time
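[Why exploring the trade-off eats compute: the error of a DP release is random, so you characterize accuracy by re-running the mechanism many times per privacy-loss setting. A toy sketch of that measurement loop on a single count, nothing like the scale of a full national run:]

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_count = 100
runs = 10_000  # each "run" of the real system covered the whole country

for eps in (0.1, 0.5, 1.0):
    noisy = true_count + rng.laplace(scale=1.0 / eps, size=runs)
    mae = np.abs(noisy - true_count).mean()
    print(f"epsilon={eps}: mean absolute error ~ {mae:.2f}")
```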
Census Bureau was 100% behind the move
* initial implementation was by Dan Kifer, who took a sabbatical
* expanded the team with Simson and others
* 2018 end-to-end test
* original development was on an on-prem Linux cluster
* then got to move to AWS Elastic Compute Cloud (EC2)... but the monitoring wasn't good enough, so they had to create their own dashboard to track execution
* it wasn't a small amount of compute
* republished the 2010 census data using the differentially private algorithm and then had a conference to talk about it
* ... it wasn't well-received by the data users who thought there was too much error
For example: if we add a random value to a child's age, we might get a negative value, which a real age can never be.

If you avoid that (say, by clamping values to zero), you might add bias to the data. How do you avoid the bias? Let some data users get access to the measurement files [I don't follow]
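[A sketch of that bias, my illustration rather than the talk's code: clamping noisy ages at zero keeps each value plausible but shifts the average upward, whereas the raw noisy measurements average out to the truth, which is presumably why access to the measurement files helps.]

```python
import numpy as np

rng = np.random.default_rng(seed=2)

true_age = 1  # a one-year-old child
noisy_ages = true_age + rng.laplace(scale=2.0, size=100_000)

# Clamping negative ages to zero makes every value plausible...
clamped_ages = np.clip(noisy_ages, 0, None)

# ...but systematically shifts the estimate upward: bias.
print(f"mean of raw noisy ages:     {noisy_ages.mean():.3f}")  # ~1.0 (unbiased)
print(f"mean of clamped noisy ages: {clamped_ages.mean():.3f}")  # >1.0 (biased)
```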
In summary, this is retrofitting the longest-running statistical program in the country with differential privacy. Data users have had some concerns, but [the speaker] believes it will all work out.
Code is up on GitHub and papers are up online. (@xchatty have some links?)

[end of talk]
