Throughout this global pandemic, one of the most frustrating conundrums we’ve faced is faulty data. Granted, some of the reasons may be chalked up to having to play Van Halen’s “Eruption” on our first guitar after only a handful of lessons. As the virus crept across our shores at the beginning of the year, we didn’t have much knowledge of what we were dealing with. We had issues with the early Imperial College predictive models, the models adopted to initiate the devastatingly moronic idea of “15 days to flatten the curve.” Seeing how we are currently on day 283 of “flattening the curve,” with shutdowns being reinstated by government fiat, we keep questioning when, or if, a return to normalcy will ever be allowed. We are witnessing an increase in cases. Some of this may be due to the arrival of flu season, but tainted data, such as oversensitive testing, might also be to blame.
A previous post by Jennifer Cabrera and Alex Rodriguez, “Why mass PCR testing of the healthy and asymptomatic is currently counter-productive,” discussed some of the problems with PCR tests. The short version: documented studies show that PCR tests are too sensitive to identify live virus (infectious people) when they use a cycle threshold over 34, and almost all labs in the United States use at least 37, if not 40 or 42, cycles. The New York Times reported that these tests can produce 40% to 90% false positive results. (If you don’t have a subscription, you can read the summary from Apoorva Mandavilli’s Twitter account.)
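To see why a few extra cycles matter so much, note that PCR amplification is roughly exponential: under ideal conditions, each cycle about doubles the target genetic material. Here is a quick, illustrative back-of-envelope sketch (idealized perfect doubling, not a lab-grade model):

```python
# Illustrative only: assumes perfect doubling of target material each
# PCR cycle. A sample first detected at cycle threshold (Ct) 40 would
# have started with about 2**(40 - 34) = 64 times less genetic material
# than one detected at Ct 34 -- which is why samples that only turn
# positive at very high Ct values may carry too little virus to be
# infectious.

def relative_starting_material(ct_high, ct_low):
    """How many times less starting material the higher-Ct sample had,
    assuming ideal doubling every cycle."""
    return 2 ** (ct_high - ct_low)

for ct in (37, 40, 42):
    factor = relative_starting_material(ct, 34)
    print(f"Ct {ct} vs Ct 34: ~{factor}x less starting material")
```

Even this crude arithmetic shows that running labs at 40 or 42 cycles instead of 34 means flagging samples with orders of magnitude less viral material as “positive.”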
Having little confidence in the data used for public policy typically leads to awful decision-making. It’s even more concerning when those delivering the data may have self-serving, politically motivated reasons. In a recent report, an expert in ethics and health policy at the University of Pennsylvania offered this recommendation:
Somehow, we have built a system that allows for white people to live longer and thus we should have them “Rosa Park” to the back of the bus with the administration of the Covid19 vaccine.
This assertion seems to be conventional wisdom among those handling the data that dictates the policy decisions currently wrecking our lives. Take, for instance, this bit of information found by Phil Kerpen. Mr. Kerpen is a columnist who has been exposing the inconsistencies in all aspects of the Chinese Wuhan Coronavirus.
Mr./Mrs. Walker conducted a study on vaccine distribution cited by acclaimed pollster Nate Silver of 538.com. The “wokeism” this epidemiologist subscribes to would make anyone skeptical of the “science” and the reasoning behind the method of allocation that he/she/they may be peddling. Its desire to push for a racially required distribution is very questionable on its premise alone.
Here is the exchange:
The following thread of posts by Emily Burns further exposes the questionable data that Jo Walker is peddling. Read along as she strips the binary panties off of Mr./Mrs. Walker.
Looks as though Chicago is gonna use this “woke science” for the distribution of the vaccine.
This has been a disturbing trend over the last year. The data, for the most part, is unreliable. It is either being skewed, misreported, incorrectly assessed and analyzed, or outright fraudulent. The science can only be trusted when it is based on objective findings, not “woke,” skewed garbage that will be weaponized to ruin people’s lives in the name of “Safety.”