A recent report claims that leading figures in the data science industry believe that the computer code developed by Professor Neil Ferguson, which convinced governments that lockdown was the best choice to prevent the spread of the Wuhan coronavirus, was “totally unreliable.”
A recent report from the Telegraph alleges that experts have begun to question the computer code developed by Professor Neil Ferguson, which largely influenced world governments to enter states of lockdown to prevent the spread of the Wuhan coronavirus.
David Richards, the co-founder of the British data technology firm WANdisco, derided Ferguson’s data model of coronavirus infections as a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming.” Richards added, “In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
The Imperial College’s model placed the fatality rate higher than others and predicted that in the United Kingdom alone 510,000 people would die without a lockdown. This reportedly prompted a dramatic change in policy from the UK government, which included shutting down businesses, schools, and restaurants.
The Telegraph writes:
The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
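For readers unfamiliar with this class of model, the sketch below gives a rough, hypothetical sense of what a stochastic simulation of spread over a contact network involves. It is not drawn from the Imperial College code, which runs to thousands of lines; every function name and parameter here is an assumption made purely for illustration.

```python
# A rough, hypothetical sketch of a stochastic epidemic simulation over a
# contact network. It is NOT the Imperial College model; all names and
# parameters here are illustrative assumptions.
import random

def simulate_outbreak(population=1000, contacts_per_person=10,
                      transmission_prob=0.05, recovery_prob=0.1,
                      initially_infected=5, days=100, seed=None):
    rng = random.Random(seed)  # fixing the seed makes a run repeatable
    # Crude contact network: each person is assigned a fixed set of contacts.
    contacts = [rng.sample(range(population), contacts_per_person)
                for _ in range(population)]
    # 0 = susceptible, 1 = infected, 2 = recovered
    state = [0] * population
    for person in rng.sample(range(population), initially_infected):
        state[person] = 1

    daily_infected = []
    for _ in range(days):
        newly_infected = []
        for person in range(population):
            if state[person] != 1:
                continue
            # Each infected person may pass the virus to each contact...
            for other in contacts[person]:
                if state[other] == 0 and rng.random() < transmission_prob:
                    newly_infected.append(other)
            # ...and may recover at the end of the day.
            if rng.random() < recovery_prob:
                state[person] = 2
        for person in newly_infected:
            if state[person] == 0:
                state[person] = 1
        daily_infected.append(state.count(1))
    return daily_infected

print(max(simulate_outbreak(seed=1)))  # peak number of simultaneous infections
```

Even a toy like this shows why the debate below centres on reproducibility: the trajectory depends entirely on the random draws, so how the seed and the surrounding code are handled determines whether a run can be repeated.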
Developers claimed that in its initial form the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer who helped clean up the code before it was published online. Yet the problems appear to go much deeper than messy coding.
Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines, and even in some cases, when they used the same machines.
Researchers from the University of Edinburgh commented on the Imperial College’s GitHub file: “There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different.”
A fix for this bug was later provided, but the bug appeared to be only one of a number within the system. The developers claim that the model is “stochastic”, and that “multiple runs with different seeds should be undertaken to see average behaviour.”
But some specialists have argued that “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters…otherwise, there is simply no way of knowing whether they will be reliable.”
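Their point can be stated concretely: a model can be stochastic and still be deterministic once its random seed is fixed, so two runs with identical inputs and identical seeds should agree exactly, while “average behaviour” is obtained by repeating the run over many seeds. The snippet below is a minimal, hypothetical illustration of that property, not a reflection of the Imperial code itself.

```python
# Hypothetical illustration of the reproducibility property discussed above:
# with the seed and inputs fixed, a stochastic model must return the same
# answer every time; averages come from many runs with different seeds.
import random
from statistics import mean

def toy_branching_model(generations=8, seed=None):
    """A deliberately tiny branching-process toy, not the Imperial model."""
    rng = random.Random(seed)
    cases = 1
    for _ in range(generations):
        # Each current case infects 0-3 others, averaging roughly 1.5.
        cases = sum(rng.choices([0, 1, 2, 3], weights=[1, 2, 2, 1])[0]
                    for _ in range(cases))
    return cases

# Same inputs, same seed: the two runs must match exactly.
assert toy_branching_model(seed=123) == toy_branching_model(seed=123)

# Different seeds give different trajectories; "average behaviour" is the
# mean over many such runs, as the developers' note about seeds suggests.
print(mean(toy_branching_model(seed=s) for s in range(100)))
```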
Michael Bonsall, Professor of Mathematical Biology at Oxford University, commented: “We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained.”
Konstantin Boudnik, vice-president of architecture at WANdisco, also stated that Ferguson’s previous models, which incorrectly predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu, and 65,000 from swine flu, do not inspire confidence. “The facts from the early 2000s are just yet another confirmation that their modeling approach was flawed to the core,” says Dr. Boudnik. “We don’t know for sure if the same model/code was used, but we clearly see their methodology wasn’t rigorous then and surely hasn’t improved now.”
A spokesperson for the Imperial College COVID-19 Response Team stated:
The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK.
Read more at the Telegraph here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com