In the terrible movie “Pete’s Dragon” the drunken lighthouse attendant shows up and sings a song, which as I recall contains the lines, “A dragon, a dragon, I swear I saw a dragon.” Of course no one believes him. But once the dragon can be seen by everyone, we know it is true.
Science works sort of like that: the experimental method holds that if any two people do the same thing under the same conditions, the same result will occur. I once heard it explained as a device for remembering things. From UC Berkeley’s “Understanding Science”:
|“Scientists aim for their studies’ findings to be replicable — so that, for example, an experiment testing ideas about the attraction between electrons and protons should yield the same results when repeated in different labs. Similarly, two different researchers studying the same dinosaur bone in the same way should come to the same conclusions regarding its measurements and composition. This goal of replicability makes sense. After all, science aims to reconstruct the unchanging rules by which the universe operates, and those same rules apply, 24 hours a day, seven days a week, from Sweden to Saturn, regardless of who is studying them.”|
Science promises a sort of repeatability and accuracy. But science is also conducted by human beings, whose goals are not necessarily the same as truth for truth’s sake.
And a scandal involving Chinese scientists publishing fake papers in the field of medicine (ostensibly for the purpose of advancing their careers) is quite troubling:
“All the way down, they all have similar reward systems,” Tiger explained. “That is a really bad mechanism to really foster a lot of fraud.”
Elisabeth Bik — the only member of the team willing to give her real name — is a microbiologist from the Netherlands, based in California. She began tracking the phenomenon of paper mills at the beginning of 2020.
Along with Tiger, a senior research scientist who goes by the name of Morty, and a mathematical psychologist called Smut Clyde, Bik has spent many unpaid hours searching for anomalies in Chinese research. Early this year, the group discovered one paper mill, which they believe was responsible for more than 500 fake studies examining human gene function and cancer.
Unfortunately, the problem seems to be widespread: “In psychology journals, 39 percent of the 100 analyzed studies had been successfully replicated. In economy journals, it was 61 percent of 18 studies, and in the journals Nature and Science, it was 62 percent of 21 studies.”
This means that a significant portion of scientific work is problematic in the first instance. I’m not sure exactly how this correlates with the replication studies, but a fair number of scientists admit to engaging in problematic practices: “On average, across the surveys, around 2% of scientists admitted they had ‘fabricated’ (made up), ‘falsified’ or ‘altered’ data to ‘improve the outcome’ at least once, and up to 34% admitted to other questionable research practices including ‘failing to present data that contradict one’s own previous research’ and ‘dropping observations or data points from analyses based on a gut feeling that they were inaccurate.’ In surveys that asked about the behaviour of colleagues, 14% knew someone who had fabricated, falsified or altered data, and up to 72% knew someone who had committed other questionable research practices.” (An unscientific observation is that more people engage in bad acts than actually admit to it: 34% say they have been sketchy at some point, but 72% know someone else who has been. This might indicate that some people are lying about lying.)
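Whether that gap actually requires people to be lying depends on how many colleagues each respondent knows. As a back-of-the-envelope check (my own illustration, not from the surveys), if a fraction p of scientists engage in questionable practices and each respondent is familiar with the work of n colleagues, treated as independent, the expected share who know at least one offender is 1 − (1 − p)^n:

```python
# Back-of-the-envelope check on the 34% vs. 72% gap.
# Assumes each respondent knows n colleagues whose behavior is
# independent -- a simplification, not a claim from the surveys.

def share_knowing_offender(p: float, n: int) -> float:
    """Probability that at least one of n colleagues engages in
    questionable research practices, given individual rate p."""
    return 1 - (1 - p) ** n

# With the surveys' 34% admission rate, even a handful of colleagues
# pushes the "knows someone" figure toward the reported 72%:
for n in (1, 2, 3, 5):
    print(n, round(share_knowing_offender(0.34, n), 2))
```

So the 72% figure is at least arithmetically compatible with a 34% underlying rate once respondents know three or more colleagues well; the “lying about lying” reading is possible but not forced by the numbers alone.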
And, apparently as a corollary to the adage “A lie travels around the globe while the truth is putting on its shoes,” we find that the questionable papers are by far the ones that get the most press: “The differences in the prominent Nature and Science journals were the most striking: here, non-replicable papers were cited 300 times more than replicable ones on average.”
This could be dismissed as mere poking at the moral preening of scientists. But these fabrications, retractions, and unnecessary errors result in missed opportunities. There are cancer treatments which never come about, or which are delayed, because a researcher begins with bad data and it takes years to figure out what went wrong.