Welcome back to Not Such a Silly Idea! In this exciting new season, two summer students continue to critique a government document, and this time they have interactive visualisations! In case you missed the first part of our epic journey, you can read instalments one, two, three and four.
One of the claims made in the National Statement of Science Investment (NSSI) which we wanted to look into was based on this graph (Figure 1) on page 20 of the NSSI:
The NSSI asserts that the government’s investment in science has “increased by over 70% since 2007/08”, and that “our science has improved in response”. This is followed by several graphs showing things like a rising publication rate, a growing number of papers in top journals, a growing research workforce, and increasing international collaboration over time. Each of these graphs stands on its own, but none relates the improvement it shows to the amount of money the government has been spending on science. One of our goals was to re-visualise these graphs in a way that clearly showed a correlation (or not) with increasing government investment, and we will address that later on. But first, we had to investigate this data on government spending.
We remain puzzled that the NSSI claims an “over 70%” increase in government expenditure in the last eight years, when according to its own data the increase is more than 80%. Self-deprecation aside, when we went back to The Treasury we discovered that the graph on page 20 is not adjusted for inflation. This immediately told us that the increase in spending was not quite as significant as claimed, since a sizeable chunk of any nominal increase goes merely to compensating for the devaluing dollar. Using the Reserve Bank inflation calculator, we found that government spending on science has actually increased 55% (not 82%) since 2007/08 and 46% (not 87%) since 2004/05.
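The adjustment itself is simple arithmetic. As a sketch (the function name is ours, and the price-index values you would plug in come from the Reserve Bank calculator, not from this post), deflating a nominal figure to constant start-year dollars looks like this:

```python
def real_increase(nominal_start, nominal_end, cpi_start, cpi_end):
    """Percentage increase in spending after deflating the end-year
    figure back to start-year dollars using a price index (e.g. CPI)."""
    real_end = nominal_end * cpi_start / cpi_end  # constant start-year dollars
    return (real_end / nominal_start - 1) * 100
```

With the NSSI’s nominal spending figures and the actual inflation numbers plugged in, this is the calculation that turns an 82% nominal rise into a 55% real one.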
After adjusting for inflation the government’s spending still showed a rise, so we started looking at the implications of that rise. We created the graph below to see whether New Zealand’s burgeoning number of publications is correlated with that financial support. To make this graph we had to include a time lag: cash pays for research, but it takes time for that research to be published, and we weren’t sure how long that lag is on average. For the curious, we did a regression analysis of our data, relating the number of publications in a given year to government investment some years earlier.
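If you want to reproduce something like this, a minimal least-squares fit needs nothing beyond the closed-form formulas. This is a sketch assuming a straight-line model (the function and variable names are ours, not from any official analysis):

```python
def ols_fit(x, y):
    """Ordinary least squares for the model y ≈ a + b*x (closed form)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # slope = covariance of x and y divided by variance of x
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x  # intercept
    return a, b
```

Here `x` would be investment in year t − lag and `y` the publication count in year t.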
We used a range of time lags between investment and publication, from no lag to four years. The time lag that showed the strongest correlation was two years. (We have far too little data to infer anything universal from this – it is simply our best guess.)
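The lag-picking step can be sketched as a small scan over candidate lags, keeping whichever one maximises the Pearson correlation (again, an illustration in our own notation, not the exact code behind the graph):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(investment, publications, max_lag=4):
    """Try lags of 0..max_lag years and return the (lag, r) pair whose
    correlation between investment(t - lag) and publications(t) is
    strongest.  Assumes both series cover the same run of years."""
    results = []
    for lag in range(max_lag + 1):
        x = investment[:len(investment) - lag]  # spending, shifted back
        y = publications[lag:]                  # papers, lag years later
        results.append((pearson(x, y), lag))
    r, lag = max(results)
    return lag, r
```

With our annual series this scan peaked at a lag of two years, but as noted above the sample is far too small for that to be more than a best guess.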
And voilà, there is a positive correlation between the amount of money poured into ‘science and innovation’ and the number of papers churned out. But what does the money do? Do scientists publish more frequently when they are better funded? In other words, does greater funding increase productivity, measured in papers per full-time researcher? Or does the money go towards increasing the number of people producing papers in the first place?
Six years ago, Professor Shaun Hendy published this paper (page 56). It concluded that although our publication output increased hugely from 1990 to 2008, the increase was driven by a rise in the number of researchers, not in the number of papers each researcher produces in a year. Having read this paper, we expected to find similar results for 2000–2014, but we were surprised to see that both researcher FTEs and productivity have been steadily on the rise, according to both OECD and Statistics New Zealand data.
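The heads-versus-output-per-head question can be made precise with a simple decomposition: output equals researchers times papers per researcher, so output growth factors neatly into a workforce component and a productivity component. A sketch (the numbers you feed in would be the OECD or Statistics NZ figures):

```python
def decompose_growth(papers_start, papers_end, fte_start, fte_end):
    """Split growth in publication output into a workforce factor and a
    productivity (papers-per-FTE) factor, so that
    output_growth == fte_growth * productivity_growth."""
    output_growth = papers_end / papers_start
    fte_growth = fte_end / fte_start
    productivity_growth = output_growth / fte_growth
    return fte_growth, productivity_growth
```

In these terms, the 1990–2008 finding was a productivity factor of roughly 1 with the workforce factor carrying the rise, whereas our 2000–2014 data show both factors above 1.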
We retrieved the publication output numbers from SciVal, and we found two different sets of researcher full-time equivalent (FTE) data: one in the Main Science and Technology Indicators database on OECD.Stat, and the other in the R&D surveys on Statistics NZ. There was a confusing discrepancy between these sources: Statistics NZ breaks Higher Education researcher FTEs down into ‘researcher’ and ‘student researcher’, while OECD.Stat makes no such distinction, and the numbers don’t add up to the same total. Our best guess is that one counts only PhD students, while the other also includes Masters students.
These two graphs are very interesting: in spite of their differences, both support the conclusion that the number of science researchers and their productivity have increased.
So, apart from the fact that MBIE needs to take more care in presenting information accurately, we can conclude that government investment in science has indeed increased, and that it is correlated with increased publication output, a larger research workforce, and higher productivity. Of course, from these graphs alone we can’t be sure which way round the causal relationship runs. A great incentive, surely, for both parties to keep up the good work!