Alice Richardson wrote:
Greetings Australian and New Zealand statisticians: I’m writing to let you know the outcome of the proposal to reorganise statistical support for research students and staff at the Australian National University.
Many of you will be familiar with the Statistical Consulting Unit (SCU), which was established in 1982. Its flagship activity of one-on-one consultations has resulted in methodological advances as well as contributions to knowledge in disciplines right across the ANU. I’d like to thank all the Directors and consultants in the SCU who have contributed to its success over the last 39 years.
You’ll recall that in March 2021 I let you know that the ANU was planning to shut down all centrally-funded statistical support for research students. I’m grateful for the support the SCU received from ANZSTAT, Statistical Society members and the SSA Executive in our partially-successful attempts to defeat this.
From 20 August 2021 the SCU will close and be replaced by the Statistical Support Network (SSN). I’m the Lead of the SSN. My initial goals include building a network of statisticians across campus who can support students in their own disciplines, as well as organising drop-in sessions, running workshops, and developing a web portal of statistical tutorial material. There’ll be a period of transition over the next 5 months for the current clients of the SCU, and updating the SCU website is also a priority.
I look forward to continued support from the ANZSTAT community as this new venture gets under way – thanks in advance, Alice.
Congratulations on what you have achieved.
Evidence on the very serious problems that exist with the science funding and publication processes continues to emerge, with statistical analysis issues a large part of it. Why does it seem that so little is being done in most of the areas affected (psychology is one area where there has been change) to deal with it? Why is there not more concern within the academic community?
The areas where publication processes appear to be functioning well are those where what is done relies on contributions from scientists who share data and expertise, all contributing their own skills, so that the refereeing that matters occurs before papers are sent for publication --- such areas as climate science, geophysics, earthquake science, the study of viruses and vaccines, modelling of epidemics, and so on.
In areas where what is presented is the work of one scientist, or of a small tight-knit group, the refereeing that matters is what happens, if it happens at all, after the paper is published. Examples are the May 2020 Lancet and New England Journal of Medicine studies, claiming to be based on observational data, which argued that use of the drug hydroxychloroquine as a treatment for Covid-19 was increasing patient deaths. Issues with these papers (with the analysis as well as with the credibility of the data) were quickly identified because they made claims bearing on an issue of major concern, and attracted attention from readers who carefully scrutinized their detailed statements. Both were quickly retracted. How much published work that has no sound basis never attracts such attention, and is never challenged?
There are apt comments in
Stark, Philip B., and Andrea Saltelli. "Cargo-cult statistics and scientific crisis." Significance 15, no. 4 (2018): 40-43.
"The mechanical, ritualistic application of statistics is contributing to a crisis in science. Education, software and peer review have encouraged poor practice – and it is time for statisticians to fight back."
Not just statisticians, I'd suggest, but scientists who care about public regard for science.
A recent book that highlights many of the issues is:
Ritchie, Stuart. 2020. Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science. Random House.