

By Robert Verkerk PhD, founder, executive and scientific director

On 19 March 2020, the UK downgraded the status of COVID-19, no longer classifying it as a ‘high consequence infectious disease’ (HCID). This was before the reported mortality rate for ‘deaths involving Covid’ had even started escalating, at a time when the government, academics, the National Health Service (NHS) and the public were worried that NHS capacity to handle critically ill patients would be overrun. That was back then.

Now we’re seeing rising cases – referred to widely as a ‘surge’ in infections – and government machines are working hard to prepare the public for more restrictions.

If you’re in the UK, publicly protesting against such measures has become tougher since the beginning of the week, when prime minister Boris Johnson instigated – against the will of all but two ministers – his ‘rule of 6’.

Our question of the week

The question we want to pose to the British people this week is this: given the current status of COVID-19 disease, as well as the precarious state of the UK economy, is Boris Johnson’s proposed investment of more than £100 billion in the ‘operation moonshot’ coronavirus testing programme appropriate? Is it the best way of getting back to some semblance of normal life and resurrecting the economy?

This kind of investment needs to be considered in the context of a country that is expected to suffer an 11.5% slump in national income (gross domestic product; GDP) this year according to the Organisation for Economic Cooperation and Development (OECD), worse than any other developed country. This kind of economic slump should be compared with the typical 6% slump in GDP during 1918-21 owing to the Spanish flu that wiped out a staggering 2.1% of the global population.

Spanish flu 1918 (Source: Wikimedia Commons)

This isn’t a rhetorical question. It’s a question that we hope triggers critical thinking among us, the citizens and residents of a country that is by the end of this week closing its short public consultation on its plans to change UK medicines law to prepare the way for mass vaccination of the population. Testing and vaccination might seem like natural bedfellows – testing ostensibly telling you whether or not you’re infected in the absence of any available treatment, vaccination (once available) providing insurance against infection in the future. But could they also be devices designed to achieve something quite different, that’s not really in our interest, but more in the interests of those controlling the shots (pun intended)?

Testing troubles

The trouble is that the RT-PCR tests on which ‘operation moonshot’ is based aren’t accurate. This problem is compounded by the lack of a ‘gold standard’ and by known, significant variations in sensitivity and specificity. If that weren’t bad enough, there are many other sources of variation, including cross-reaction with other genetic material, the timing of tests, potential contamination and sample degradation.

If you feel so inclined, you can use the BMJ’s ‘Covid-19 test calculator’ to work out the percentage of people likely to receive false and true positives and negatives according to different pre-test probabilities of infection and different test sensitivities and specificities. Generic test calculators such as this medical test calculator can also be used. Assuming an 80% pre-test probability (i.e. the best estimate of the actual prevalence of the disease in a given area or population), 70% sensitivity and 95% specificity, the calculators show that 56% of those who receive a negative test result are actually likely to be infected (a test with 100% sensitivity and specificity would, by contrast, guarantee that a negative result meant no infection).
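The arithmetic behind these calculators is just Bayes’ theorem applied to the four cells of a diagnostic 2×2 table. A minimal sketch (function name and structure are our own, not the BMJ calculator’s) reproducing the worked example above:

```python
def post_test_probabilities(prevalence, sensitivity, specificity):
    """Return P(infected | positive test) and P(infected | negative test)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    p_inf_given_pos = tp / (tp + fp)  # positive predictive value (PPV)
    p_inf_given_neg = fn / (fn + tn)  # chance a 'negative' is actually infected
    return p_inf_given_pos, p_inf_given_neg

# 80% pre-test probability, 70% sensitivity, 95% specificity
ppv, p_neg = post_test_probabilities(0.80, 0.70, 0.95)
print(round(p_neg * 100))  # prints 56: 56% of negatives are actually infected
```

Plugging in different pre-test probabilities shows how sharply these figures move with prevalence.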

The trouble is that precision declines as the prevalence of the disease reduces. So, in the above example, if you replace the 80% pre-test probability with 1% (still around 10 times higher than current figures based on ONS data), you find that the probability of being infected if you have a positive test result is only around 12%. In other words, precision (or 'positive predictive value') declines dramatically as prevalence goes down.

So at 95% sensitivity and 99% specificity and a 1% prevalence (pre-test/clinical probability), only 49% of those who receive a positive test would actually be likely to be infected. Play with the calculators yourself. Their downside is that they don't go below 1%, and actual prevalence in most parts of the world is now much lower than that. Keen mathematicians can use the actual formulae for positive and negative predictive values from Altman & Bland (1994).
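For prevalences below the calculators’ 1% floor, the Altman & Bland formula can be applied directly. A short sketch (our own illustrative code) sweeping prevalence downwards at 95% sensitivity and 99% specificity:

```python
def ppv(prevalence, sensitivity, specificity):
    # Positive predictive value (Altman & Bland 1994):
    # PPV = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# PPV falls sharply as prevalence drops, even with a high-spec test
for prev in (0.10, 0.01, 0.001):
    print(f"prevalence {prev:.1%}: PPV {ppv(prev, 0.95, 0.99):.0%}")
```

At 0.1% prevalence, fewer than one in ten positive results would reflect a genuine infection under these assumptions.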

All of that assumes the tests are taken in the right way and at the optimal time, which means that precision in the real world can be much lower still than under perfect lab conditions.

Also, even when manufacturer-claimed sensitivity and specificity are much higher than in the above example, precision can be very low when prevalence is low.

Put another way: even if you could be 100% sure that someone had Covid-19 as a result of confirmed SARS-CoV-2 infection (almost never the case, because lab-confirmed cases rest on inaccurate PCR tests, making this a largely theoretical notion), a test with 70% sensitivity would still miss 30 of every 100 infected people. That's a 30% failure rate when infection is guaranteed, or a 56% chance that a negative result is wrong when you're 80% sure someone is infected, which is the more realistic scenario.

Would you make one of the biggest investments of your life in some unproven technology that had more than a 30 to 50% failure rate? Especially without asking those who’d contributed to your wealth (taxpayers) if they thought this was a good idea?

(See below for references)

From case rate to positivity rate

Last week we discussed the emergence of the ‘casedemic’ – the change in the narrative around Covid-19 that now rarely discusses daily death rates, and instead focuses the public eye on cases. This week, to aid your critical thinking, we add another metric that helps you look at test results. It’s the positivity rate. In the context of Covid-19, it’s quite simply the percentage of those tested who test positive, based on RT-PCR tests.
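The calculation is straightforward, and the 7-day moving average used later in this piece simply smooths out day-to-day reporting noise. A minimal sketch (function names and the sample figures are illustrative, not actual dashboard data):

```python
def positivity_rate(positives, tests):
    """Percentage of tests performed that return a positive result."""
    return 100 * positives / tests

def seven_day_average(daily_rates):
    """Trailing 7-day moving average of daily positivity rates."""
    return [sum(daily_rates[i - 6:i + 1]) / 7
            for i in range(6, len(daily_rates))]

# Illustrative: 7 positives out of 1,000 tests on a given day
print(positivity_rate(7, 1000))  # prints 0.7
```

Note that the rate says nothing about who was tested; a day of targeted testing in an outbreak area will read very differently from a day of routine workplace screening.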

Like all metrics, it has its limitations, because it depends on who’s getting tested. With a scientific hat on, the results we see for Covid-19 are limited by the fact that the sample of people getting tested isn’t randomised. But it’s still a very useful relative metric that tells you a lot about the progression of an epidemic – much more than the simple number of cases on which the media has been trying to keep our eyes focused.

We also talked last week about another useful metric, one linked to mortality: the infection fatality ratio, or IFR. But given that so few people now appear to be dying of Covid-related causes, it’s important to get a handle on the proportion of positive tests found among those tested, albeit using inaccurate RT-PCR testing. Enter the positivity rate. The metric has been given a lot more airtime in other countries, such as the USA and Australia. It’s not used widely in the UK, the country that hosts one of the leading vaccine contenders in the Oxford/AstraZeneca vaccine.

On 12 May 2020, the World Health Organization (WHO) somewhat arbitrarily advised governments that before reopening economies and removing restrictions, the positivity rate should be below 5% for at least 14 days. Currently in the USA, around half the states (25) are below this level, half (26) above. This puts the USA, nationally, a fraction over the 5% positivity rate (5.1%) in week 36 (first week of September).

In the UK, we’ve calculated the 7-day moving average positivity rate, based on data from the UK government dashboard, at just 0.7% – over seven times lower than the WHO’s arbitrary 5% threshold (see Figure 1 below). We’ve included the US data (Figure 2) for reference; among the reasons for the higher US figures is the fact that the epidemic wave struck the southern states significantly later.

Fig 1 Positivity rate trend for the UK (Data source: GOV.UK; data analysis and graphics by Alliance for Natural Health International)


Fig 2 Positivity rate trend for USA (Source: Johns Hopkins Coronavirus Resource Center)

Back to the big question

With a bit of this additional food for thought, let’s get back to the question: should Brits be investing such a vast sum – yes, an eye-watering £100 billion, a figure on par with or in excess of the UK’s annual education spend, to give just one example – without any recourse to the views of citizens or their elected representatives?

If it helps, you might also want to consider some other questions, such as: Is Boris Johnson’s new ‘Rule of 6’, that was apparently ushered in against the will of every minister other than Matt Hancock and Michael Gove, all part of a crony capitalism revival? One that’s being steered through with Boris Johnson’s hand firmly on the tiller of the United Kingdom?

Think about it.


We've got two closely related questions we'd love you to answer via Twitter poll:

Question 1

Question 2

Infographic references:

[1] House of Commons Library

[2] Institute for Fiscal Studies

[3] Ministry of Defence

[4] NHS England

[5] NHS Digital

[6] Joseph Rowntree Foundation