By Catherine Webb and Nicola Gujer

A long time ago in a galaxy far, far away, we started deconstructing the National Statement of Science Investment and considering the claims it makes.

Our first blog post ended with an unanswered question – is the proportion of a country’s papers in the global top 10% of citations a good way (or at least the best available way) to measure science excellence? The NSSI’s definition of excellent science is “well-designed, well-performed, well-reported research, recognised as such, e.g. through peer review.”

In preparation for our first blog post, we were dealing with a graph that suggested New Zealand produces science that is not quite as ‘excellent’ as that of comparable small advanced economies, because a lower percentage of our publications make it into the 10% most-cited publications worldwide. In that post we discussed the limitations of the graph, but nonetheless concluded that it does show New Zealand lagging slightly in the production of ‘top-shelf science’.

Still, we were curious to see whether there are any more useful ways of measuring ‘science excellence’ that might paint a different picture.

What if just looking at the papers with the most citations is not the right approach? Are citations a good way to measure excellence at all? One flaw is that papers can become highly cited for two very different reasons: by being amazing, or by being such poor science that everyone wants to correct it (like Wolfe-Simon’s paper on a bacterium that supposedly thrives on arsenic). Also, as we discussed in our second post, citations tend to follow a ‘rich-get-richer’ power-law distribution: a well-known paper garners ever more citations, while another paper, nearly as good, can dwell in obscurity all its life.
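To make the ‘rich-get-richer’ point concrete, here is a toy simulation of our own (not part of the original analysis) using a simple preferential-attachment urn: every new citation goes to an existing paper with probability proportional to the citations it already has. The paper and citation counts are arbitrary.

```python
# Toy 'rich-get-richer' simulation (illustrative only; numbers are arbitrary).
# Each new citation picks an existing paper with probability proportional to
# (its current citations + 1), so papers that get cited early keep getting cited.
import random

random.seed(42)

n_papers = 1000
n_citations = 20_000
citations = [0] * n_papers

for _ in range(n_citations):
    weights = [c + 1 for c in citations]  # early luck compounds over time
    winner = random.choices(range(n_papers), weights=weights, k=1)[0]
    citations[winner] += 1

citations.sort(reverse=True)
top_decile = citations[: n_papers // 10]
print("Share of all citations held by the top 10% of papers:",
      round(sum(top_decile) / n_citations, 2))
```

In runs like this, the top decile typically ends up with around a third of all citations, several times its ‘fair share’; that head-heavy distribution is part of what makes a ‘top 10% most cited’ threshold so sensitive to early attention.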

However, even if citations are not a great measure, they may currently be the least-bad way of assessing the impact of publications. But what kind of citations should we use to compare countries? Every country has its own areas of specialty, and if a country is alone in an area, it may not be cited much by other countries, even if its science is top-notch. New Zealand, for example, is environmentally isolated and unique; our conservation and agricultural papers may be of far more immediate relevance to us than to anyone else. If our science is rather useful to ourselves but not to the rest of the world – should that make it less ‘excellent’?

To explore this, we broke the data down into two measures: intranational citations per publication (citations from within the publishing country) and international citations per publication (citations from everywhere else).

Because international citations make up the vast majority of any small country’s citations, they have the greatest influence on the percentage of publications in the 10% most cited. Thus, for ranking countries, international citations per publication and the top-10% measure can roughly be used as proxies for one another.
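As a sketch of what this split looks like in practice (with made-up paper records rather than the real dataset, and purely illustrative field names), the two measures for a single country could be computed like this:

```python
# Hypothetical records: each publication lists its home country and the home
# countries of the papers that cite it. Data and field names are illustrative.
papers = [
    {"country": "NZ", "citing_countries": ["NZ", "AU", "US", "NZ"]},
    {"country": "NZ", "citing_countries": ["US", "GB"]},
    {"country": "NZ", "citing_countries": []},  # uncited paper
]

home = "NZ"
intranational = sum(p["citing_countries"].count(home) for p in papers)
international = sum(len(p["citing_countries"]) - p["citing_countries"].count(home)
                    for p in papers)

print("Intranational citations per publication:", round(intranational / len(papers), 2))
print("International citations per publication:", round(international / len(papers), 2))
```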

Does New Zealand’s rate of intranational citations balance our lagging overall ranking?

| Country | Intranational citations per publication | International citations per publication |
|---|---|---|
| New Zealand | 1.25 | 5.95 |
| Denmark | 1.70 | 8.35 |
| Finland | 1.46 | 6.87 |
| Ireland | 1.13 | 7.05 |
| Israel | 1.23 | 6.98 |
| Singapore | 1.35 | 7.82 |
| Mean | 1.35 | 7.17 |
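The ‘Mean’ row appears to be the unweighted average of the six countries above; a quick check confirms the arithmetic:

```python
# Unweighted averages of the six values in the table above.
intranational = [1.25, 1.70, 1.46, 1.13, 1.23, 1.35]
international = [5.95, 8.35, 6.87, 7.05, 6.98, 7.82]

print(round(sum(intranational) / len(intranational), 2))  # 1.35
print(round(sum(international) / len(international), 2))  # 7.17
```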

It’s possible that New Zealand produces publications which are more relevant, and therefore cited more, within our own country than they are in other countries; we just don’t cite enough ourselves to pull up our average very far. But this is conjecture.

In any case, does New Zealand do well enough on intranational citations to let us off the hook for a general lack of excellence? Well, we certainly have room for improvement. The next question, obviously, is how to improve – a subject for another article, in which we examine government investment and its effect on the science sector.

Look out for our next episode in the New Year, and may the force be with you over the holidays!

*Featured image by XKCD comics.