My own personal Impact Factor

15 August 2012 by Tom Webb, posted in Uncategorized

The editor of a well-respected ecological journal told me recently, “I am… very down on analyses that use citation or bibliographic databases as sources of data; I'm actually quite concerned that the statistical rigor most people learn in the context of analysing biological data is thrown out completely in an attempt to show usage of a particular term has been increasing in the literature!” I think he has a point, and in fact I feel the same about much that I read on bibliometrics more generally: there’s some really insightful, thoughtful and well-reasoned text, but as soon as people attempt to bring some data to the party all usual standards of analytical excellence go out the window.

I see absolutely no reason to buck that trend here.


The old chestnut of Journal Impact Factors has been doing the rounds again, thanks mainly to a nice post from Stephen Curry which has elicited a load of responses in the comments and on Twitter. To simplify massively: everyone agrees that IFs are a terrible way to assess individual papers (and by inference, researchers), but there’s less agreement on whether they tell you anything useful when comparing journals within a field. Go read Stephen’s post if you want the full debate.

But what’s sparked my post was a response from Peter Coles (@telescoper), called The Impact X-Factor, which proposed an idea I’d had a while back about judging papers against the IF of the journal in which they’re published. Are your papers holding up or weighing down your favourite journal? Let’s be clear from the outset: I don’t think this tells us anything especially interesting, but that needn’t put us off. So I have bitten the bullet, and present to you here my own personal impact factor. (The fact I come out of it OK in no way influenced my decision to go public.)

The IF of a journal, remember, is simply the mean number of citations to papers published in that journal over a two-year period (various fudgings and complications make it rather more opaque than that, but that’s it in essence). So for each of my papers (fortunately there aren’t too many) I’ve simply obtained (from my google scholar page, as it’s more open than ISI) the number of citations they accrued in the two years after publication. I’ve then compared this to the relevant journal IF for that period, or as close as I could get. Here are the results:

[Figure: two-year post-publication citations for each paper plotted against journal Impact Factor, with a 1:1 line and an inset histogram of differences from that line.]
OK, bit of explanation. This simply plots the number of citations my papers got in the two years post-publication, against the relevant IF of the journal in which they were published. (The red points are papers published in the last year or so, and I’ve down-weighted IF to take account of this; I’ve excluded a couple of very recently-published papers.) The dashed line is the 1:1 line, so if my papers exactly matched the journal mean they would all fall on this line. Anything above the line is good for me, anything below it bad – the histogram in the bottom right shows the distribution of differences of my papers from this line.
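The comparison behind the plot boils down to a per-paper difference from the 1:1 line. A minimal Python sketch of that step, using invented citation counts and IFs as stand-ins for the real data:

```python
# Hypothetical per-paper data: (two-year citations, journal IF at publication).
# These numbers are invented for illustration, not the values behind the plot.
papers = [(3, 2.1), (7, 4.5), (5, 5.0), (12, 6.8)]

# Distance from the 1:1 line: positive means the paper beat the journal mean.
diffs = [cites - jif for cites, jif in papers]

for (cites, jif), d in zip(papers, diffs):
    side = "above" if d > 0 else "below"
    print(f"{cites} cites vs IF {jif}: {d:+.1f} ({side} the 1:1 line)")
```

The histogram in the corner of the figure is just the distribution of those `diffs`.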

I’ve fitted a simple Poisson model to the points, with and without the outlier in the top right – neither does an especially good job of explaining citations to my work, so we might as well take a mean, giving me my own personal IF of around 6.
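Taking the mean really is all there is to it. A sketch, with placeholder citation counts chosen to land near the value of 6 quoted above (they are not my actual data):

```python
# Invented two-year citation counts, one per paper (placeholders only).
cites = [2, 3, 5, 5, 6, 7, 8, 12]

# A "personal IF" on this definition is simply the mean count.
personal_if = sum(cites) / len(cites)
print(f"personal IF = {personal_if:.1f}")  # prints "personal IF = 6.0"
```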

As my editor friend suggested, there’s a whole lot wrong with this analysis. For instance, I haven’t taken account of year of publication, or any other potential contributing factors (coauthors, publicity, etc. etc.). Another obvious caveat is the lack of papers in journals with IF > 10 (I can assure you that this has not been a deliberate strategy). But back in the peloton of points which represent the ecology journals in which I’ve published most regularly, I’m reasonably confident in stating that citations to my work are unrelated to journal IF. Gratifyingly too, the papers that I rate as my best typically fall above the 1:1 line.

So there we have it. My own personal impact factor.

9 Responses to “My own personal Impact Factor”

  1. Jon Copley

    Love it; this has now distracted me for a lunchtime to do the same, and find that the mean difference between initial two-year citations of my papers and the IFs of their journals is >7.

    No doubt some bosses would argue that I should therefore be pitching to higher IF journals. But my conclusion, of course, is that I am, ahem, clearly leading my specialist field (and furthermore, perhaps my usual journals should pay me a fee for papers under gold option OA?).

    Now that's the kind of metric I can get behind! :)

    (only kidding - but maybe I should try including those data in my next appraisal, and see what happens...).

    Seriously, though, playing with this has reinforced to me the folly of any system that attempts to judge people via a single number. My highest cited paper was a foray into educational research, outside my usual field. It accrued a daft number of citations because it was one of the first on its topic (and in a far larger field than my usual research), but in terms of analytical rigour, I am quite ashamed of it. Meanwhile, one paper I still regard as my best work just tracks along its journal's IF in citations (my view, naturally, is that it is ahead of its time, and waiting for its significance to be appreciated!).

  2. Tom Webb

    Thanks Jon. And there was me, happy just to come out positive; +7 is extraordinary, well done!

    On quality issues - my view (perhaps naive, although with some basis in experience / conversations) is that the people who really matter (grant reviewers, REF panels, peers, colleagues and friends) are actually pretty good judges of the quality of a piece of work and look beyond simple numbers. I think we need to trust that this is the case - but the challenge is to convince the people who control what gets submitted for assessment that you are the best judge of the quality of your own work.

    Meanwhile, do you want to swap lists of things we could each cite in our next papers?!

  3. Tom Webb

    Of course, the stat I neglected to include was my 'value added IF', the extent to which my numbers differ from journal predictions, which is 2.282567 (I think that degree of precision is warranted). Basically my papers do about 0.6-4 2-yr cites better than they should. I await my letters of thanks from executive editors with bated breath.

  4. Stephan Schleim

    Hi Tom, what a nice idea to "personalize" your impact factor in a certain way.

    I think that there is a problem with your method, though. The journal impact factors you are using seem to be based on the (common) ISI Web of Science (WoS) citations, but your personal citations seem to be derived from Google Scholar which includes many more sources than the WoS, such as meeting abstracts, book chapters, even blog posts on the internet.

    To get a more valid estimate, I would suggest either combining WoS impact factors with your WoS citations, or combining Google Scholar impact factors (if there are any) with your Google Scholar citations.

    Mixing the two will amount to either putting the benchmark rather low (numerically, with the lower WoS citations) and getting a positive personal bias, or, the other way round, putting the benchmark rather high and getting a negative personal bias.

  5. Tom Webb

    Hi Stephan, and thanks for the comment. You're absolutely right - if I was doing this properly I'd certainly want to use the same source (although I don't think there's an enormous difference between WoS and GS cites in my case). But this was never going to be a comprehensive analysis - it was intended to be somewhat tongue-in-cheek, and I think the whole idea is too trivial to devote too much time to, so I went with the easiest way to pull the data together quickly. Hence, blog post not paper!

  6. Karen Vancampenhout

    Great idea! Should you ever decide to do a 'personal impact factor 2.0', maybe you can include a factor for the 'relevance' of each citation? I.e.: did they actually refer to new data or hypotheses, or did they simply use it to back up some general statements in the introduction?

    Yesterday some colleagues and I were musing about citation alerts, only to find that the question "Did I say that?" often returns when our most 'popular' papers get cited. Or worse: your name appears behind something that's completely wrong (e.g.

    Quite a humbling experience to find that even the people who claim to be familiar with your work didn't bother to read it properly...

  7. Tom Webb

    Thanks Karen. And you're dead right about being mis-cited, my work has certainly been cited in support of exactly the opposite point to that which I made… I think one way around this is to make sure your titles at least are completely unambiguous - for instance, using a question as a title seems to be a pretty good way to get misunderstood by the lazy reader!

  8. Pat Bateman

    Don't you need to divide the number of cites your papers received by 2?

    I think you have defined the IF incorrectly as "the mean number of citations to papers published in that journal over a two-year period". It is actually the number of citations in year T to papers published in T-1 and T-2, divided by the total number of papers published in T-1 and T-2. So if a journal published 100 papers in 2010 and 2011, and those papers received 200 cites in 2012, the IF would be 2.0.

    The IF is therefore counting the citations of a paper within a single year, not over a 2-year period. Your measure could be a defensible estimate of the contribution of your paper to an IF if you divided by 2 - although it is not actually directly comparable with the IF.
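    In Python, that corrected definition run on the worked example above (100 papers in the two preceding years, 200 citations in year T) looks like this:

    ```python
    # IF in year T = citations in T to papers from T-1 and T-2,
    # divided by the number of papers published in T-1 and T-2.
    papers_2010_2011 = 100   # papers published in 2010 and 2011
    cites_in_2012 = 200      # citations those papers received in 2012

    impact_factor = cites_in_2012 / papers_2010_2011
    print(impact_factor)  # prints 2.0
    ```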

    That means your personal IF is about 3.04. That would seem to be about right given the mean IF of the journals you publish in. [you mention above that the mean IF of your journals should be 2.28..., but a glance at your graph indicates that figure is pretty clearly wrong].
