Nature: Research shows that Karl Marx and Sigmund Freud are the best scientists ever!

8 November 2013 by Stephan Schleim, posted in scientific fraud, theory of science

Assessment has become ubiquitous in the present science system. Output measures in particular, such as the Impact Factor or the h-index, are highly influential in evaluating scientific work at all career stages. Presently, there is a surge of resistance against this assessment regime – resistance that also criticizes many other things going wrong in today's science system. One example of such resistance is the Science in Transition movement, which just organized an international meeting at the Dutch Royal Academy of Arts and Sciences in Amsterdam.

Nowadays, people are assessing everything. In science and academia in particular, evaluations have become ubiquitous. To assess the quality of a researcher, citation indexes such as the ISI Journal Impact Factor or the h-index have become decisive tools in granting research funds and assigning tenured positions; they determine careers – who gets to stay in academia and who has to leave.

The Impact Factor reflects the popularity of a journal, understood as the mean number of citations per article within the previous two years; by contrast, the h-index relates to the productivity and citation frequency of an individual researcher, where h is the number of papers that have at least h citations each. For example, a researcher with ten publications that are each cited at least ten times will have an h-index of ten.
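The definition above can be sketched in a few lines of Python; the function name and input format are my own for illustration, not taken from any particular bibliometrics library.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    h = 0
    # Walk the citation counts from highest to lowest; as long as the
    # paper at rank r has at least r citations, h can be raised to r.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The example from the text: ten papers, each cited ten times.
print(h_index([10] * 10))  # → 10
```

Note how insensitive the measure is to anything beyond the diagonal: a researcher with one paper cited a thousand times still has an h-index of one.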

Quantifying quality

Ever since these measures have existed, there has been much critique, both from the assessed scientists themselves and from theoreticians investigating how the quality of science can be measured in the first place – and what can go wrong when this is done badly. Eugene Garfield, who is considered one of the founders of bibliometrics and scientometrics and who developed the Impact Factor, conceded that there is much critique, while arguing that no better alternative system was available (Garfield, 2006).

One of the absurdities of his measure is that it really only tells you something about the popularity of a journal and of the field it represents. It was originally meant to assist librarians, not research grant or tenure committees, simply because it does not tell you anything about the individual researcher.

Everything is counted, even if the count is meaningless

Yet the Dutch Research Foundation (NWO), for example, still asks people submitting research proposals to first report the mean Impact Factor of their field and then to provide the Impact Factor of the journal of each of their publications. Although this is obviously a confused form of assessment, colleagues from the natural and life sciences are strictly required to provide the information, while for others it is voluntary.

Performance metrics based on values such as citation rates are heavily biased by field, so most measurement experts shy away from interdisciplinary comparisons. The average biochemist, for example, will always score more highly than the average mathematician, because biochemistry attracts more citations. (Richard Van Noorden, Nature)

The advantage of the h-index is that it is at least a measurement of the individual, since it reflects both productivity and citation frequency. Obviously, it still does not tell you much about the quality of the research: people might cite a paper because they are criticizing it, and the typical number of citations within a field – the so-called citation density – varies strongly. Molecular biologists, for example, cite many more papers in their publications than psychologists do, leading to higher h-indexes in one field than in the other. Yet the index, just like the Impact Factor, is often used as a surrogate measure of scientific quality.

Science in Transition

I just attended a two-day meeting of an originally Dutch initiative called Science in Transition (see their position paper Why Science Does Not Work as It Should And What To Do about It, PDF). With the participation of scholars from Harvard and Oxford, the initiative met at the Dutch Royal Academy of Arts and Sciences in Amsterdam on November 7 and 8 (day 1, day 2). The scholars there addressed many problems and wrong incentives within the current science system.

Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity. … Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising. (How Science Goes Wrong, The Economist)

The attendees proposed several improvements that would be easy to implement. One issue referred to several times was the implications of research assessment, particularly when it reduces researchers to a number such as the Impact Factor or the h-index. Some presenters mentioned their own h-indexes of twenty or forty as particularly high. However, a study summarized in a news feature just published in Nature calculated who the most successful researcher ever is according to the h-index – and the results make values between twenty and forty appear humble (mine is actually only seven).

And the winners are...

Of the almost 35,000 researchers whose data are gathered on Google Scholar, it is actually the psychologist Sigmund Freud who has the highest h-index, namely 282, followed by the physicist Edward Witten with a score of 243. However, Filippo Menczer at Indiana University Bloomington and his colleagues, who carried out this calculation, noted that the h-index does not take into account the different citation practices of the disciplines, such as the citation density I referred to above.

Ironically, while the American introductory textbooks we use at our faculty question whether Sigmund Freud was a scientist at all, he is the most successful one according to the h-index often applied in America.

They thus proposed to divide a scholar's h-index by the average h of their scholarly field to obtain a more valid measure that also allows for comparisons between disciplines, and they called this new value the hs-index (Kaur, Radicchi & Menczer, 2013).

...Karl Marx and Sigmund Freud

When performing this correction, the number one scientist ever actually is Karl Marx, assigned to the discipline of history, with an hs-index of 21.5, far ahead of number two, Sigmund Freud with 14.8, Edward Witten, with 12.9, the philosopher Jacques Derrida with 12.5, and developmental psychologist Jean Piaget with 11.6. (An hs-index of ten means that this researcher has an h-index ten times as high as the average of his or her field – are you still following?)

These numbers may be surprising, considering that history, philosophy, and psychology are often not considered the most prestigious scientific disciplines, and no Nobel Prize is awarded in these fields. Yet their work apparently is enormously successful in influencing and inspiring fellow researchers to this day.

If you, as an evaluator, have to rely solely on corrected h-indices to compare academics, says Ihle, “then you’re dumb, and you don’t understand what you are doing”. (Richard Van Noorden, Nature)

Yes, it’s strange, but that’s just the way it is

If you start asking yourself what the value of all this counting, dividing, calculating, and comparing is, then you are getting the right feeling: these are indeed strange ways of assessing the quality and impact of an academic. Yet the bad news is that this is what the system is like and has been like for decades. Most of the people now in powerful positions within science adapted to this strange system and forced others – their PhD students, research associates, and others they evaluated – to adapt to it as well, or to turn away in disillusionment.

Of course, there are exceptions. There is the occasional supervisor who prefers fewer papers of higher quality over more papers of bad quality – as I just learned at the Science in Transition meeting, PhD students at the University Clinics Utrecht are now actually offered the option to submit a dissertation with four instead of six papers, provided they meet higher quality standards. And there is the occasional dean who communicates to his or her faculty members that they need not maximize rather meaningless assessment figures instead of following inspiring lines of research and educating people ready to become public intellectuals.

Many critics are old

There are also those who openly criticize the science system, like the members of the Science in Transition movement. But frequently they are academics at a late or very late stage of their career – long after they received tenure, long after they adapted to the corrupted and potentially corrupting incentive system, and long after they encouraged others to adapt to it, too.

You may ask whether “corruption” is too strong a term to use, but here I refer to its explicit use by three present or former presidents of international neuroscience federations in a critical comment published in the Proceedings of the National Academy of Sciences of the USA:

It is our contention that overreliance on the impact factor is a corrupting force on our young scientists (and also on more senior scientists) and that we would be well-served to divest ourselves of its influence. … The hypocrisy inherent in choosing a journal because of its impact factor, rather than the science it publishes, undermines the ideals by which science should be done. This contributes to disillusionment, causing some of our talented and creative young people to leave science. (Marder, Kettenmann & Grillner, 2010, p. 21233)

But we can change the system

“Corruption” and “hypocrisy” are very strong words, indeed. I could not count how many young colleagues I have talked to in recent years who had become very cynical, openly talking about leaving the country to look for a place where the system works better, or about leaving science completely to become a commercial researcher or a science journalist.

Wouldn’t it be nice, by contrast, to change the system – in particular the incentive system – so that it guarantees reliable, trustworthy, and valid science, and so that the idealists that scientists often are actually enjoy working in it? To achieve this, less of what we have become used to would sometimes be more.

Do you want to know what’s in the sausage you are eating?

Otto von Bismarck (1815-1898), first Chancellor of the German Reich, once said: “The less the people know about how sausages and laws are made, the better they sleep at night.” Obviously, Bismarck’s interest was not to have educated and enlightened citizens, but to maximize the power of the ruling elite while the common folk raised children who would make suitable soldiers for the Emperor.

Of course, if you want to sleep well, then do not investigate the power and knowledge production structures of science; but it may be worthwhile, particularly for younger colleagues, to have a couple of bad nights in order to increase the match between scientific idealism and scientific practice in the long run.

Now who is the best scientist, after all?

The brief discussion of the Impact Factor and the h-index was just one example. One could simply argue that these measures should be replaced by better ones, which certainly are available and may do more good than harm when applied with care. Yet it should still be disconcerting that these invalid measurements have been used by so many intelligent people for so long – and what the consequences of this counting regime are and will be for the time to come.

It may be worthwhile to consider not just replacing one way of assessment with another, but to ask the more fundamental questions of what all this assessment means in the first place, who wants it, whom it benefits, what it is good for, and where it is (in)appropriate to apply.

Or, of course, you could just bite the bullet and accept that Sigmund Freud and Karl Marx have indeed been the best researchers ever.


If you want to read more about science metrics: Richard Van Noorden, the science writer who wrote the recent report, also wrote a news feature for Nature in 2010: Metrics: A profusion of measures.

Garfield, E. (2006). The History and Meaning of the Journal Impact Factor. Journal of the American Medical Association 295: 90-93.

Kaur, J., Radicchi, F. & Menczer, F. (2013). Universality of scholarly impact metrics (PDF). Journal of Informetrics 7: 924-932.

Marder, E., Kettenmann, H. & Grillner, S. (2010). Impacting our young. PNAS 107: 21233.

3 Responses to “Nature: Research shows that Karl Marx and Sigmund Freud are the best scientists ever!”

  1. John Dickey

    Of course, there were a couple of generations of Soviet authors who could get sent to Siberia if they failed to attribute all wisdom to Marx. That might sway the index a little?

    • Stephan Schleim

      I doubt that this explains much of his high h/hs-index score: that literature was published in Russian and before the age of the internet, while Google Scholar has a bias towards Western, more recent, and English-language literature.

      But a large number of the citations might actually be Western scholars criticizing Marx – which undermines the validity of the h/hs-index, as I already suggested in the post, though not Marx's popularity.

  2. Kaveh

    Great post! I'm translating it into Persian for an Iranian monthly named Mehrnameh, along with the two features by Van Noorden in Nature.
