In Dutch, this text is also available at the Academia account of Frans Groenendijk (see link in footer). It was first published in 2012 at Keizers en Kleren (Emperors and Clothes), a website about science that no longer exists.
So yes: this article is older than the previous one in this category, Real Science and Climate Science.
The prestige of science is crumbling. This crumbling poses a threat to civilization, especially Western civilization. If we are not careful, the inevitable decline of social science and climate science will damage the image of all scientific practice more than it deserves. That is the background to this plea for non-peer reviewing, particularly in social science.
The superiority of scientific practice over other forms of knowledge acquisition and dissemination lies in its welcoming of criticism; in other words, in the focus on the content of what is written or spoken rather than on the position of authority of the writer or speaker. Scientific practice itself is not so easy. Describing the conditions that must be met in order to arrive at scientific knowledge is less difficult:
Explicit, falsifiable hypotheses must be drawn up and tested.
The data itself and the exact way in which it was collected must be public.
The data must be processed using reliable, sound, expertly handled tools.
Preference is given to data that comes from – repeatable – experiments.
The tested hypotheses must be in line with a clearly formulated theory.
The extent to which these conditions are met determines the quality of the scientific effort in question. Only if that quality is of a sufficient level can the significance of the scientific output be assessed.
The conditions from the previous paragraph describe an ideal. Practitioners of the natural sciences come closest to that ideal, but even for them, the last condition in particular – the link to a clearly formulated theory – is a fundamental problem. Michael Polanyi pointed out as early as Personal Knowledge (1958) that without this condition, scientific practice would not lead to meaningful knowledge but to the filling of cabinets of curiosities: loose, incoherent chunks of scientific knowledge in an ocean of opinions and traditional views.
Thomas Kuhn showed in The Structure of Scientific Revolutions how this theoretically formulated limitation works out in practice. His central concept there is the 'paradigm'.
Irreverently formulated, Kuhn actually states that scientists are only human and not all perform at the same high level.
Just like people in general, many scientists are guided by their assessment of whether they will still be able to pay their mortgage in the medium term; competition and friendship play a role, as do money and prestige. In other words, scientific practice is also a sociological phenomenon.
This is all put rather negatively; it can also be put more positively. Most scientists' research stays close to the set of shared theories that shape the paradigm.
Strengthening the dominant paradigm can certainly be useful. Paradigms do not lend themselves to unambiguous falsification, let alone to verification. A new paradigm must explain more, suggest more meaningful research, and account for both the strength and the limitations of the old one; there is nothing fundamentally wrong with contributing in these areas.
The sociological aspects of the world of science threaten its quality in various ways. A number of threats occur in every branch; others are more specific to certain branches of science.
In False Progress. Deception in Dutch Science, Frank van Kolfschooten described a number of cases of deception, ranging from staggering to downright sensational. Deception can even take the form of an infatuated assistant falsifying data to substantiate the claims of her scientific boss – falsification that then goes unchallenged for decades thanks to misplaced respect for that scientist's authority.
The excesses of social psychologist Diederik Stapel, who has achieved an almost proverbial status even outside the scientific world, were even more spectacular.
However, such behavior does not pose a major threat to the reputation of science: it is so obviously deceptive that everyone around him dropped him, and the man is – rightly – considered a pariah of science. Ex-professor Stapel even returned his doctorate.
There is a danger that this creates the impression that this is the exception that provesⁱ the rule of honest scientific practice.
As a result, two things receive too little attention.
Always and everywhere there is the danger of pure deception. The list of conditions should actually also include: the practitioners of science have a minimum level of integrity. The possibility of pure deception – made-up data – generally receives too little attention; that minimum level is simply taken for granted.
In addition, a lot of social scientific research has a political color; the theories refer too little to measurable behavior and too much to a view of human nature. Such views of human nature function as paradigms. In criminology, for example, the paradigm is: perpetrators are victims. Political coloring also occurs outside social science; see Climategate.
Given the scale of scientific production, it is inevitable that the position of authority of writers and speakers still plays a role; no one can read everything. In various ways, efforts are made to ensure that this authority is derived exclusively from the scientific achievements themselves.
Since about 1960, citation indices (such as Thomson Reuters' Science Citation Index and its derivatives) have been intended to provide an objective measure of authority in scientific circles. Simply put: anyone who is cited a lot by other scientists apparently has something interesting to say.
There are many questions to be asked about the precise mechanism. For example, what is the effect of exposure and/or withdrawal? And more fundamentally: what can be done about targeted attempts to manipulate the index? There is a certain similarity with the most questionable forms of 'search engine optimization', such as so-called link farms.
Phil Davis recently wrote a piece on The Scholarly Kitchen about those practices under the title Emergence of a Citation Cartelⁱⁱ.
The phenomenon of peer reviewing is much older, but its broad application is about the same age as indexing. The blind variant in particular seems like a fantastic tool at first glance: before a journal publishes an article, the article is submitted to a number of people who can be considered experts on its subject. The reviewers are not told who wrote the article and, ideally, the authors do not learn who the reviewers are either. In practice, this system turns out to be far from watertight.
Twan de Vries of the Leiden University Medical Center has repeatedly drawn attention to this. After listing the various ways in which the process can go wrong, De Vries wrote in the Leiden university weekly Mare:
Most of the above problems can be traced back to the malfunctioning of the peer review process. (…) It is high time that reviewers realized (again) that reviewing scientific articles is a serious and time-consuming matter that requires care and neutralityⁱⁱⁱ.
Later, the VPRO radio program Noorderlicht interviewed a number of scientists under the heading ‘elbow work’.
Peers and colleagues
In science, the word 'peer' means: someone with the same status and competences. Outside science, it also refers (among other things) to age-mates and to any group of people who have an influence on someone. From this comes the concept of peer pressure: "influence from members of one's peer group; his behavior was affected by drink and peer pressure."
It is ironic that an important part of the problems is rooted in the different meanings of the term 'peer'. Even more ironic is that the warning about the dangerous aspects of reviewing by 'peers' is also embedded in the Dutch term for it. Peer reviewing is translated into Dutch as 'collegiale toetsing' (collegial review), and according to the Van Dale dictionary, 'collegiaal' stands for both "as is customary among colleagues" and "comradely".
Being checked by colleagues is better than not being checked: ‘peer reviewed’ is better than ‘non peer reviewed’.
Large-scale conscious or unconscious misuse of statistics and other forms of scientific fraud, however, call for something more.
Certainly in social science, it would be advantageous if people from outside the relevant field would also critically assess articles.
For reviewing many scientific articles, a good dose of common sense, intelligence, a critical attitude and some knowledge of statistics is sufficient.
In social science this applies to all articles.
The image of the social sciences is crumbling. My 'working hypothesis' is that it is precisely real scientists who still have a relatively positive view of the social sciences. Not all of them, though: in the US, efforts are underway to stop funding the social sciences through the NSF (National Science Foundation)ⁱᵛ. Political science may already have been scrapped.
ⁱ In NRC, Sjoerd de Jong writes about how the newspaper deals with the Stapel case. The newspaper has placed a warning in its archive on documents that refer to the work of the fallen scientist. The article contains no indication or suggestion of any future reluctance on the newspaper's part in its references to products of social psychology. Link to the NRC article (in Dutch).