I urge all followers who have read my criticisms of PCR mass testing in the U.K. to read Mr Fordham’s carefully worded letter closely. Note that the innovation minister in the Lords, Lord Bethell, already admitted that the PCR system doesn’t have the equivalent of an MOT. https://t.co/zXzeDMKCBb
So I wrote back to @lucyfrazermp for another go. Here’s my letter.
— Edmund Fordham (@EdmundFordham) November 28, 2020
They don’t understand how serious this is.
If they can\u2019t tell us the oFPR, our PCR testing is worthless. (thread) pic.twitter.com/zHJ8SJCzf1
This accounts entirely for the notion that we’re in...
Not just me having severe doubts about the trustworthiness...
https://t.co/7tdvEaNSvN
Univ Surrey also:
https://t.co/uPMGdzpFvf
They’ve just got to halt this test ‘with no MOT’ before they kill anyone else.
More from Yardley Yeadon
@ukiswitheu I invite people to run the thought experiment: “what if the ‘cases’ data is inaccurate?”
Ignore ‘cases’, look instead only at excess deaths (per M Levitt’s tweet). Does that look characteristic of an epidemic? It’s completely diff from spring or any winter flu outbreak.
London:
Can anyone explain why there is no ‘2nd wave’ of excess deaths in London, without invoking herd immunity?
It’s not lockdown. See NW England:
This is the largest #SecondaryRipple (which I predicted).
https://t.co/b0rT5Lq9HI
Now check the 3 predictions I made months ago. They’ve all happened. Compare predictions from SAGE’s statements: they’re all wrong.
Even neutrals at this point might ask themselves “if he’s been right on all predictions, maybe he’s correct now?”
I’ve been saying since the Lighthouse Labs got up & running that I’m deeply sceptical about the trustworthiness of their ‘cases’ data. I showed how, at low virus prevalence, the PCR mass testing data could be throwing out potentially 90% of positives as false.
https://t.co/t4qQN4rH0u
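The arithmetic behind that "90% of positives false at low prevalence" claim is Bayes' rule applied to test results. A sketch, with illustrative prevalence, sensitivity, and specificity values that are assumptions for the example, not figures from the thread:

```javascript
// Proportion of positive test results that are false (the false discovery
// rate), via Bayes' rule. All input values below are illustrative.
function falseDiscoveryRate(prevalence, sensitivity, specificity) {
  const truePositives = prevalence * sensitivity;
  const falsePositives = (1 - prevalence) * (1 - specificity);
  return falsePositives / (truePositives + falsePositives);
}

// Example: 0.1% prevalence, 80% sensitivity, 99% specificity.
// Even a 1% false-positive rate swamps true positives at low prevalence.
const fdr = falseDiscoveryRate(0.001, 0.8, 0.99);
console.log((fdr * 100).toFixed(1) + "% of positives are false"); // 92.6%
```

The same test at 50% prevalence yields a false discovery rate near 1%, which is why the claim is specifically about low-prevalence mass screening.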
I got ‘fact checked’ a LOT over that statement. A paper just published covers precisely the time period I speculated about. It turns out that high-80s% of Dr Healy’s positives by PCR were FALSE. This alone is sufficient in my view to throw severe doubt...
A brief analysis and comparison of the CSS for Twitter's PWA vs Twitter's legacy desktop website. The difference is dramatic and I'll touch on some reasons why.
Legacy site *downloads* ~630 KB CSS per theme and writing direction.
6,769 rules
9,252 selectors
16.7k declarations
3,370 unique declarations
44 media queries
36 unique colors
50 unique background colors
46 unique font sizes
39 unique z-indices
https://t.co/qyl4Bt1i5x
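Figures like these can be reproduced with a rough stylesheet audit. A naive, regex-free counter is sketched below; it splits on braces, so nested at-rule blocks and other edge cases are mishandled and the numbers should be treated as approximate (the thread's author presumably used a real CSS parser):

```javascript
// Naive CSS audit: counts rules, selectors, and (unique) declarations in a
// stylesheet string. Splits on braces, so at-rules with nested blocks are
// not handled correctly -- an approximation, not a real CSS parser.
function auditCss(css) {
  const stats = { rules: 0, selectors: 0, declarations: 0, uniqueDeclarations: 0 };
  const unique = new Set();
  const clean = css.replace(/\/\*[\s\S]*?\*\//g, ""); // strip comments
  for (const chunk of clean.split("}")) {
    const [selectorPart, body] = chunk.split("{");
    if (body === undefined) continue; // trailing or empty chunk
    stats.rules += 1;
    stats.selectors += selectorPart.split(",").filter((s) => s.trim()).length;
    for (const decl of body.split(";")) {
      const d = decl.trim();
      if (d) {
        stats.declarations += 1;
        unique.add(d);
      }
    }
  }
  stats.uniqueDeclarations = unique.size;
  return stats;
}

const demo = "a, a:hover { color: red; text-decoration: none } p { color: red }";
console.log(auditCss(demo));
// { rules: 2, selectors: 3, declarations: 3, uniqueDeclarations: 2 }
```

The gap between `declarations` and `uniqueDeclarations` (16.7k vs 3,370 on the legacy site) is the redundancy the next tweet complains about.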
PWA *incrementally generates* ~30 KB CSS that handles all themes and writing directions.
735 rules
740 selectors
757 declarations
730 unique declarations
0 media queries
11 unique colors
32 unique background colors
15 unique font sizes
7 unique z-indices
https://t.co/w7oNG5KUkJ
The legacy site's CSS is what happens when hundreds of people directly write CSS over many years. Specificity wars, redundancy, a house of cards that can't be fixed. The result is extremely inefficient and error-prone styling that punishes users and developers.
The PWA's CSS is generated on-demand by a JS framework that manages styles and outputs "atomic CSS". The framework can enforce strict constraints and perform optimisations, which is why the CSS is so much smaller and safer. Style conflicts and unbounded CSS growth are avoided.
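The atomic-CSS idea can be sketched in a few lines: each unique declaration gets exactly one tiny class, and components receive lists of those classes, so the stylesheet grows with the number of distinct declarations rather than with the number of components. This is a toy illustration of the technique, not the actual framework the Twitter PWA uses:

```javascript
// Toy atomic-CSS generator: one generated class per unique declaration.
// Illustrative only -- not the real style engine behind the Twitter PWA.
class AtomicCss {
  constructor() {
    this.classByDecl = new Map(); // e.g. "color:red" -> "c0"
  }

  // Returns a space-separated class string for a style object, registering
  // each previously unseen declaration exactly once.
  classesFor(style) {
    const names = [];
    for (const [prop, value] of Object.entries(style)) {
      const decl = `${prop}:${value}`;
      if (!this.classByDecl.has(decl)) {
        this.classByDecl.set(decl, `c${this.classByDecl.size}`);
      }
      names.push(this.classByDecl.get(decl));
    }
    return names.join(" ");
  }

  // Emits the deduplicated stylesheet on demand.
  stylesheet() {
    return [...this.classByDecl]
      .map(([decl, name]) => `.${name}{${decl}}`)
      .join("\n");
  }
}

const atoms = new AtomicCss();
const button = atoms.classesFor({ color: "red", padding: "8px" });
const link = atoms.classesFor({ color: "red" }); // reuses the existing atom
console.log(button); // "c0 c1"
console.log(link);   // "c0"
console.log(atoms.stylesheet());
```

Because every class holds a single declaration, specificity is uniform and "last atom wins" conflicts can be resolved by the generator, which is how the unbounded-growth and specificity-war problems described above are avoided.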