Meet the team: Q&A with Mike Plank

We recently caught up with Principal Investigator Dr Michael Plank, a senior lecturer in the School of Mathematics and Statistics at the University of Canterbury. Mike has taken on the role of Theme Leader: Complexity and the Biosphere while Alex James is on hiatus. As a research theme leader, Mike will be steering Te Pūnaha Matatini’s research projects that build a better understanding of New Zealand’s environment and the interactions between biodiversity, the economy, and human decision-making.

Tell us about your research, including projects aligned with Te Pūnaha Matatini

My research is in biological modelling and ranges from the very small (intracellular dynamics) to the very large (marine ecosystems). A common theme in my research is investigating how collective phenomena emerge from interactions among individuals, whether on the scale of a single human cell or the scale of an ocean. I am interested in the insights that relatively simple mathematical models can give into the ways these complex systems function – and why they sometimes go wrong.

One of my projects aligned with Te Pūnaha Matatini is modelling the emergent behaviour of fishers stemming from their decisions about which species or sizes of fish to target. Principles from ecology suggest that natural predators tend to spread their effort according to the productivity of their prey. So why shouldn’t humans behave like natural predators and spread their fishing efforts according to the productivity of the fish? If this really happens, it could change the way we design fishing regulations from top-down control to a bottom-up approach that recognises the effect of the fish stock on the behaviour of fishers as well as the other way round.

What attracted you to the role of Theme Leader: Complexity and the Biosphere?

We have some really exciting projects going on in the Biosphere theme. I’m really looking forward to a new project that will look at the interplay of ecological dynamics, geospatial data, and social attitudes to map the effectiveness of large-scale predator control. Other projects include investigating the effects of social contact networks on epidemic spread, and harnessing the huge potential of citizen science to enhance conservation projects.

We have some amazing scientists and students involved with these projects and I’m excited to work with them and see how we can turn the scientific results into real impacts for New Zealand’s unique ecosystems.

How can research using complex systems, networks, and data assist New Zealand’s environment?

New Zealand is facing a range of pressing environmental issues, including loss of our endemic native flora and fauna, agricultural pest invasions, and management of our fisheries. We have a large amount of data relating to these, for example the Department of Conservation’s tier 1 monitoring programme, and catch data from our Quota Management System. At the same time, we’re investing substantial money and resources into these areas, but we’re not always making full use of the data that are available. Te Pūnaha Matatini’s research programme has the potential to really add value to our conservation dollar by helping us target our resources to areas where they will have the most impact.

Taking a complex systems and network approach also gives us opportunities to look at environmental issues at a larger spatial scale, rather than focusing on projects in isolation. As a simple example, a predator control programme in an area of Department of Conservation land might reduce or even eliminate the possum population in the short-term. But if there is adjacent, privately owned land without any control, the possums are likely to re-invade in the long-term. Viewing the whole country as an interconnected network gives us a better ability to predict long-term outcomes, and therefore a better chance of eliminating possums for good.

My First Conference(s)

By Jonathan Goodman

Never do things by halves, jump in the deep end, give it a go, eat your vegetables, trust your supervisors. This is all good advice, and I now realise I must have taken it: I presented at the first conference I ever attended, then went to another conference three days later, run by an organisation I had never heard of before. I have also joined the Te Pūnaha Matatini Whānau committee based solely on my supervisor’s advice. Before I go on, I must admit that all of these actions have proved worthwhile and rewarding.

The first conference was the Te Pūnaha Matatini cross-theme hui. This was the first Te Pūnaha Matatini gathering I have attended since joining the Centre of Research Excellence as a PhD student at the start of the year. The hui consisted of a series of short talks, including my first at a conference, interspersed with four rounds of the “Research Knockout” – a game designed by Alex James. The game started with the creation of teams of 3-5 researchers from Te Pūnaha Matatini’s three research themes. Each team then generated a potential research project. Each round of the knockout consisted of pairing up the groups and amalgamating their ideas into an enhanced version. This continued until there were just two groups remaining. In the grand finale, there was a final presentation followed by a vote. The winning research topic was ‘Measuring the impact of the communication of science’.

The question of science outreach also came up at the conference run by the New Zealand Association of Scientists (NZAS). The conference was held at Te Papa in Wellington and celebrated the 75th anniversary of the Association. The conference had a selection of engaging speakers looking at the role of scientists in the past, the present, and into the future. A number of speakers talked about science communication.

One of the presenters, Simon Nathan, spoke about James Hector and how he effectively pushed the cause of New Zealand science, through his role of Chief Government Scientist, by constantly reminding politicians about the value of science. Rebecca Priestley talked about how science outreach was different back in the days of the Department of Scientific and Industrial Research (DSIR). Instead of scientists engaging in outreach programs, interested journalists and citizens would phone and be able to speak directly with the scientist who was in the best position to answer their queries. Te Pūnaha Matatini’s own Shaun Hendy presented on how social media is currently the only way scientists are able to directly communicate with the population without the risk of their message being obscured. His three guidelines for public engagement were very apt.

Researchers should:

1) Not be d!@#s

2) Get on social media

3) See rule number 1.

The other major theme of the conference was the structure of the pathways inside and outside academia for emerging researchers. I will touch on this in another blog post on the Te Pūnaha Matatini Whānau page.

Having had a rewarding weekend forming connections with talented scientists, and with the science community as a whole, I will sign off hoping that I have followed Shaun’s rules.

Jonathan Goodman

NSSI (Not Such a Silly Idea… but do it properly) #5

Welcome back to Not Such a Silly Idea! In this exciting new season, two summer students continue to critique a government document, and this time they have interactive visualisations! In case you missed the first part of our epic journey, you can read instalments one, two, three and four.

One of the claims made in the National Statement of Science Investment (NSSI) which we wanted to look into was based on this graph (Figure 1) on page 20 of the NSSI:

Figure 1. Science and Innovation spend - New Zealand Treasury Estimates (NSSI, p. 20, 2015)

The NSSI asserts that the government’s investment in science has “increased by over 70% since 2007/08”, and that “our science has improved in response”. This is followed with several graphs which show things like increasing publication rate, increasing number of papers in top journals, growing research workforce, and increasing international collaboration over time. These graphs each stand on their own, but fail to relate these improvements to the amount of money the government has been spending on science. One of our goals was to re-visualise these graphs in a way that clearly showed a correlation (or not) with increasing government investment, and we will address that later on. But before that, we had to investigate this data on government spending.

We remain puzzled that the NSSI claims an “over 70%” increase in government expenditure in the last eight years, when according to their own data the increase is more than 80%. Self-deprecation aside, when we went back to The Treasury we discovered that the graph on page 20 is not adjusted for inflation. This immediately indicated that the increase in spending was not quite as significant as claimed, since the government would have had to up their investment by about 40% just to compensate for the devaluing dollar. Using the Reserve Bank calculator, we found that government spending on science has actually increased 55% (not 82%) since 2007/08 and 46% (not 87%) since 2004/05.
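For the curious, the arithmetic behind an inflation adjustment like this is straightforward. Here is a minimal sketch in Python; the CPI figures below are placeholders, not the Reserve Bank’s actual numbers.

```python
# A minimal sketch of adjusting a nominal spending increase for inflation.
# The CPI values are placeholders, not official Reserve Bank figures.

def real_increase_pct(nominal_start, nominal_end, cpi_start, cpi_end):
    """Inflation-adjusted (real) percentage increase in spending."""
    real_ratio = (nominal_end / nominal_start) / (cpi_end / cpi_start)
    return (real_ratio - 1) * 100

# An 82% nominal rise combined with ~17% cumulative inflation comes out
# near the 55% real increase quoted above.
print(round(real_increase_pct(100, 182, 1000, 1174), 1))  # -> 55.0
```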


After adjustment the government’s spending still showed a rise, however, so we started looking for the implications of that rise. We created the graph below to see whether the government might be able to claim New Zealand’s burgeoning number of publications as correlated with its financial support. To make this graph we had to include a time lag. Cash pays for research, but it takes time for that research to be published. We weren’t sure, though, how long that lag is on average. We did a regression analysis of our data using this equation, in case you’re interested:

regression NSSI#5

We used a range of time lags between investment and publication, from no lag to four years. The time lag that showed the strongest correlation was two years. (We have far too little data to infer anything universal from this – it is simply our best guess.)
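The regression itself matters less than the procedure: pair each year’s publication count with the spending from lag years earlier, and see which lag gives the tightest relationship. A minimal sketch, with invented series standing in for the Treasury and SciVal data:

```python
# A minimal sketch of the lag search (illustrative numbers, not the actual
# Treasury spending or SciVal publication series).
import numpy as np

spending = np.array([602, 640, 685, 720, 790, 850, 930, 990, 1050, 1100], float)
papers = np.array([8100, 8300, 8500, 8900, 9200, 9800, 10400, 10900, 11500, 12100], float)

best_lag, best_r = 0, -1.0
for lag in range(5):  # lags of 0 to 4 years between investment and publication
    r = np.corrcoef(spending[:spending.size - lag], papers[lag:])[0, 1]
    print(f"lag {lag} years: r = {r:.3f}")
    if r > best_r:
        best_lag, best_r = lag, r

print(f"strongest correlation at a lag of {best_lag} years (r = {best_r:.3f})")
```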

And voila, there is a positive correlation between the amount of money poured into ‘science and innovation’ and the number of papers churned out. But what does the money do? Do scientists publish more frequently when they are better funded? In other words, does greater funding increase productivity measured in papers per full-time researcher? Or does the money go towards increasing the number of people producing papers in the first place?

Six years ago, Professor Shaun Hendy published this paper (page 56). It drew the conclusion that although our publication output increased hugely from 1990-2008, that increased output was due to a rise in the number of researchers, not the number of papers each researcher produces in a year. Having read this paper, we expected to find similar results for 2000-2014, but we were surprised to see that both FTEs and productivity have been steadily on the rise, according to both OECD and Statistics New Zealand data.
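Productivity here is nothing fancier than output divided by workforce, year by year. A toy sketch with invented numbers (not the actual SciVal or Statistics New Zealand figures):

```python
# Papers per researcher FTE per year (all numbers invented for illustration).
papers = {2000: 5200, 2007: 7900, 2014: 11600}   # publication counts
ftes = {2000: 14000, 2007: 18500, 2014: 24000}   # researcher full-time equivalents

for year in sorted(papers):
    print(year, round(papers[year] / ftes[year], 3), "papers per FTE")
```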

We retrieved the publication output numbers from SciVal, and found two different sets of researcher full-time equivalent (FTE) data: one in the Main Science and Technology Indicators database on OECD.Stat, and the other in the R&D surveys from Statistics NZ. There was a confusing discrepancy between these sources: the latter breaks down Higher Education researcher FTEs into ‘researcher’ and ‘student researcher’, while OECD.Stat makes no such distinction, and the numbers did not add up to the same total. Our best guess is that one counts only PhD students, while the other also includes Masters students.

These two graphs are very interesting because, in spite of their differences, both support the conclusion that the number of science researchers and their productivity have increased.

So, apart from the fact that MBIE needs to be careful to present information accurately, we can conclude that government investment in science has indeed increased, and that it is correlated with increased publication output, an increased research workforce, and increased productivity. Of course, just from these graphs we can’t be sure which way round the causal relationship works. A great incentive, surely, for both parties to keep up the good work!

NSSI (Not Such a Silly Idea… but do it properly) #4

By Catherine Webb and Nicola Gujer

A long time ago in a galaxy far, far away, we started deconstructing the National Statement of Science Investment and considering the claims it makes.

Our first blog post ended with an unanswered question – is the proportion of a country’s papers in the global top 10% of citations a good way (or at least the best available way) to measure science excellence? The NSSI’s definition of excellent science is “well-designed, well-performed, well-reported research, recognised as such, e.g. through peer review.”

In preparation for our first blog post, we were dealing with a graph that suggested New Zealand produces science which is not quite as ‘excellent’ as our comparative small advanced economies, because a lower percentage of our publications make it into the 10% most cited publications worldwide. In that post, we discussed the limitations of that graph, but nonetheless concluded that it does show New Zealand lagging slightly in the production of ‘top-shelf science’.

Still, we were curious to see whether there are any more useful ways of measuring ‘science excellence’ that might paint a different picture.

What if just looking at the papers with the most citations is not the right approach? Are citations a good way to measure excellence at all? One flaw with citations is that papers can become highly cited for two reasons: by being amazing, or by being such poor science that everyone wants to correct it (like Wolfe-Simon’s paper on a bacterium that thrives on arsenic). Also, as we discussed in our second post, citations tend to follow a ‘rich-get-richer’ power-law distribution, which makes a well-known paper garner even more citations, while another paper, nearly as good, can dwell in obscurity all its life.
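To see how a rich-get-richer process produces this lopsidedness, here is a toy simulation (not fitted to any real data) in which each new citation lands on a paper with probability proportional to its current citations plus one:

```python
# Toy preferential-attachment model of citations; purely illustrative.
import random

random.seed(1)
citations = [0] * 1000              # 1,000 papers start uncited
for _ in range(20_000):             # hand out 20,000 citations one at a time
    weights = [c + 1 for c in citations]
    winner = random.choices(range(len(citations)), weights=weights)[0]
    citations[winner] += 1

citations.sort(reverse=True)
print("most-cited paper:", citations[0])
print("median paper:    ", citations[len(citations) // 2])
```

Every paper starts out identical, yet a handful end up with many times the median citation count.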

However, even if citations are not a great way, they may currently be the least-bad way of assessing the impact of publications. But what kind of citations should we use to compare countries? Every country has different areas of specialty, and if a country is alone in its area, it may not be cited much by other countries, even if its science is top-notch. New Zealand, for example, is environmentally isolated and unique. Our conservation or agricultural papers, for instance, may not be of as much immediate relevance to anyone else as they are to us. If our science is rather useful to ourselves, but not to the rest of the world – should that make it less ‘excellent’?

To find out, we broke the data down into intranational citations per publication and international citations per publication.

Because international citations make up the vast majority of any small country’s citations, they have the greatest impact on the percentage of publications in the 10% most cited. Thus, in terms of ranking countries, international citations per publication can roughly serve as a proxy for the top-10% measure.

Does New Zealand’s rate of intranational citations balance our lagging overall ranking?

Country | Intranational citations per publication | International citations per publication
New Zealand | 1.25 | 5.95
Denmark | 1.70 | 8.35
Finland | 1.46 | 6.87
Ireland | 1.13 | 7.05
Israel | 1.23 | 6.98
Singapore | 1.35 | 7.82
Mean average | 1.35 | 7.17

It’s possible that New Zealand produces publications which are more relevant, and therefore cited more, within our own country than they are in other countries; we just don’t cite enough to pull up our average very far. But this is conjecture.

In any case, does New Zealand do well enough by intranational citations to let us off the hook of general lack-of-excellence? Well, we certainly have room for improvement. The next question is, obviously, how to improve – a subject for another article, where we examine government investment and its effect on the science sector.

Look out for our next episode in the New Year, and may the force be with you over the holidays!

*Featured image by XKCD comics.

NSSI (Not Such a Silly Idea… but do it properly) #3

By Catherine Webb and Nicola Gujer

The adventures in data-sleuthing continue. In this blog series, two summer students examine the National Statement of Science Investment 2015-2025, and appraise its use of data. So far we have found one case of partially-correct-yet-misleading-ness, and another of axis-labelling-deficit. This time we take a look at Academic-Corporate Collaboration.

On page 17, the NSSI makes an intriguing statement: “Only 3.2 per cent of New Zealand publications have academic and corporate affiliations, suggesting scope for more collaboration.” We asked, where did this information come from? By what standard do we have ‘scope for more collaboration’? And is that a good standard to use?

Implicit in that statement is the assumption that an abundance of academic-corporate collaboration is a good thing – a claim to be investigated later on.

Firstly, the “3.2%” used in the NSSI appears under ‘Academic-Corporate Collaboration in New Zealand’ on SciVal. The first thing we noticed was that this statistic changes over time, although it fluctuates less than field-weighted citation impact does. Between the time it was retrieved for the NSSI, published in October, and the time we checked it in November, it had already fallen to 2.6% (around a 19% drop). Hence, we wanted to check how stable this measure is over a longer period of time.

[Figure: academic-corporate collaboration percentage for each small advanced economy over five years (SciVal)]

We didn’t find anything very remarkable in that regard: on average, the academic-corporate collaboration rate of each small advanced economy deviated about 17% from its initial value over five years, with New Zealand squarely on the mean (+16.7%).
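One way to compute a deviation figure like this – mean absolute deviation from the initial value, as a percentage; our best guess at a concrete definition – looks like this in Python, with an invented series standing in for the SciVal data:

```python
# One plausible definition of the stability figure: mean absolute deviation
# from the initial value, as a percentage of that value. Series invented.
def deviation_pct(series):
    first = series[0]
    return 100 * sum(abs(v - first) for v in series[1:]) / (first * (len(series) - 1))

nz = [3.2, 2.9, 3.5, 2.7, 2.6]  # illustrative yearly collaboration percentages
print(f"deviation from initial value: {deviation_pct(nz):.1f}%")
```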

This also helps us answer the second question: compared to what do we lack collaboration? The graph shows how our nation’s academic-corporate collaboration measures up to that of the other small advanced economies (SAEs): Denmark, Finland, Ireland, Israel and Singapore (widely accepted as comparative countries due to similarities in factors that affect science performance). Using the same measure, this is the data as it stands in Nov/Dec 2015:

Percentage of publications with both academic and corporate affiliations:

New Zealand 2.6%
Denmark 4.8%
Finland 2.5%
Ireland 2.7%
Israel 3.3%
Singapore 2.0%
Mean average 3.0%

We see that by this standard, NZ is below average, but not markedly. We are still above Singapore and Finland, and with our ‘3.2%’ measured earlier in the year, we would have been above the present average!

Presumably, when the NSSI claims that New Zealand is lacking collaboration, they are using the small advanced economies as a reference – they cannot be referring to the worldwide percentage of academic-corporate collaboration, as that is only 1.34%. And yet, if they are comparing us to other SAEs, we are near the average and certainly not significantly underperforming.

Finally, however, we found a significant problem with the New Zealand statistics on SciVal. Academic-corporate collaboration is defined as at least one academic institution and one corporate (business) institution being affiliated with the same publication. On SciVal we found evidence that Crown Research Institutes (which are government entities and not private businesses) are being counted as corporate organisations. Here is one example of a paper listed as an academic-corporate collaboration:

[Figure: example SciVal record counting Crown Research Institutes as corporate affiliations]

As you can see, the only contributors to this paper are universities and two Crown Research Institutes: AgResearch and Landcare Research. Although our Crown Research Institutes have been ‘corporatised’, meaning that they are managed much like businesses, New Zealand is unique in this respect. Many countries have government science research organisations – the equivalent of our CRIs – that are treated as purely public government institutes, such as the CSIRO in Australia. This presents an obstacle to drawing conclusions from this data set: comparing academic-corporate collaborations between countries is problematic when New Zealand calls corporate what other countries call government.
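With access to raw affiliation records, the correction would be conceptually simple: re-label CRIs as government bodies and recount. A sketch of that logic (the sector labels and the CRI list are our own illustration, not SciVal fields):

```python
# Sketch: re-label Crown Research Institutes as government, then test whether
# a paper still counts as an academic-corporate collaboration. Illustrative.
CRIS = {"AgResearch", "Landcare Research", "NIWA", "GNS Science",
        "Plant & Food Research", "Scion", "ESR"}

def academic_corporate(affiliations):
    """affiliations: iterable of (institution_name, sector) pairs."""
    sectors = {"government" if name in CRIS else sector
               for name, sector in affiliations}
    return "academic" in sectors and "corporate" in sectors

paper = [("University of Auckland", "academic"),
         ("AgResearch", "corporate"),        # the (mis)labelling in question
         ("Landcare Research", "corporate")]
print(academic_corporate(paper))  # False once the CRIs are re-labelled
```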

This inclusion of CRIs as corporations is skewing the overall statistics on collaboration in New Zealand, but by how much is difficult to tell. Unfortunately, it is not possible to find collaboration data adjusted to exclude CRIs on SciVal: CRIs cannot be excluded from the affiliation search without also excluding papers with genuine collaboration between universities and corporations, and SciVal’s lack of Boolean operators makes more nuanced searches impractical. Thus, we cannot provide a more correct number for New Zealand’s academic-corporate collaboration percentage than that published in the NSSI. What we can say is that the NSSI’s number is not accurate, and that with CRIs excluded, NZ’s true academic-corporate collaboration percentage would in fact be lower than the NSSI reports.

We have to trust that a similar mistake has not been made in SciVal’s database for any of the other small advanced economies. Without a better dataset we cannot draw any conclusions about the potential for improving academic-corporate collaboration in New Zealand. If anything, this project has highlighted the need for comprehensive data-keeping, as well as care in how the data are used.

*Featured image by XKCD comics.

NSSI (Not Such a Silly Idea… but do it properly) #2

By Catherine Webb and Nicola Gujer

In this blog series we pull apart the National Statement of Science Investment and scrutinise its use of data.  In case you missed it, in our last post we checked out the graph on page 19, and what it says about New Zealand’s academic performance.  This time we’re taking a look at page 18… because science.

Published on page 18 of the NSSI was a graph (Figure 1) representing “Academic Output” for New Zealand, showing the number of publications in each discipline and the quality of that output, measured using field-weighted citation impact. We actually liked this graph; it seemed to show the information it claimed to (which sadly cannot be said of some of the other graphics in the NSSI). Alas, there was no numerical data or scale to be found on the graph. Yes, it showed the disciplines relative to each other, but surely adding a few values to an axis would make this a more effective graph. So we recreated it: all it needed was a little sprucing up with some numbers and she’d be right (see Figure 2).

Figure 1. NSSI Academic Output (edited to show the “Multidisciplinary” field) (NSSI, 2015)

Figure 2. Volume and quality of academic output for New Zealand between 2010-2014 (SciVal, 2015).

While recreating this graph, we ran into the same issue as the NSSI: the smaller fields become too small to fit labels to. Unfortunately, this is just what happens with this data, as the largest publication count is 19,993 for “Medicine” and the smallest is 401 for “Dentistry”. With such a large gap between them, it is genuinely hard to have both clearly visible and labelled.

A feature of this graph that definitely could have used some explanation is the mean quality of output for NZ and the SAEs. At first glance we thought the two averages were weighted by the quantity of publications in each field, since the NZ line did not seem consistent with an unweighted average of quality alone. On further examination of the graph, we noticed that the thin line towards the bottom was in fact “Multidisciplinary” (an annoyingly large value yet again), which would explain why our average seemed larger. The mean lines we have included on our graph are computed using a publication-weighted mean. We are not exactly sure what MBIE did, as we are not working with the same data: citations have accumulated over time.
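For concreteness, here is the publication-weighted mean sketched in a few lines of Python. The Medicine and Dentistry publication counts are the ones quoted above; the third field and all citation-impact values are invented:

```python
# Publication-weighted mean of field-weighted citation impact (FWCI).
# Medicine and Dentistry counts are quoted in the post; "Physics" and all
# FWCI values here are invented for illustration.
fields = {
    "Medicine":  (19993, 1.02),   # (publications, FWCI)
    "Dentistry": (401, 0.80),
    "Physics":   (4500, 1.10),
}
total = sum(n for n, _ in fields.values())
weighted_mean = sum(n * fwci for n, fwci in fields.values()) / total
print(f"publication-weighted mean FWCI: {weighted_mean:.3f}")
```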

These averages also raised the question of how accurate, or even necessary, these lines are. The lines shown represent the mean quality of output, but the distribution of citation counts for individual papers does not usually follow a normal distribution. Rather, citations tend to fit a shifted power-law distribution: the majority of publications receive minimal or no citations, while a few highly cited publications keep attracting more, because frequent citation raises a paper’s visibility. This skewness increases over time, meaning these average lines become less accurate, and the amount of skewness likely differs between disciplines. A consequence of the skewed distribution is that the mean becomes a poor measure of the “average” value of the distribution: for heavy-tailed distributions, like a power law, the mean tends to be much larger than the median, so any statistic weighted by the mean will also be skewed. This makes it problematic to compare the average of New Zealand with the average of other small advanced economies, since the Field-Weighted Citation Impact does a poor job of normalising across fields.
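The mean-versus-median point is easy to demonstrate by drawing a large sample from a shifted power law and comparing the two statistics:

```python
# A heavy-tailed (shifted power-law) sample: the mean sits well above the
# median, so mean-weighted statistics are dominated by a few extreme papers.
import numpy as np

rng = np.random.default_rng(0)
cites = rng.pareto(1.1, size=100_000) + 1  # toy "citations per paper" sample
print(f"mean   = {cites.mean():.2f}")      # comes out several times larger...
print(f"median = {np.median(cites):.2f}")  # ...than the median
```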

Another difficulty in using citations to measure quality of output is that while papers are usually highly cited when other researchers agree with their statements or use them to support their own research, high citation rates can also occur for controversial or incorrect statements. Unfortunately, there doesn’t seem to be a more effective way of measuring the quality of scientific publications, so for now we are stuck with our flawed metrics. Scientists do seem to love a sneaky approximation here and there.

*Featured image by XKCD comics.

NSSI (Not Such a Silly Idea… but do it properly) #1

By Catherine Webb and Nicola Gujer

In October, MBIE published an interesting document: the National Statement of Science Investment 2015-2025. It is intended to assess New Zealand’s strengths and weaknesses in research and development, and to inform science funding for the next decade. This document could shape our futures as scientists, so it’s understandable that it has generated dialogue, especially about its use of data.

Among the NSSI’s many attractive graphics, this graph (Figure 1) in particular, which compares the impact of New Zealand research with that of a number of similar countries, has generated active discussion. A number of people have commented on social media about how poorly we do in multidisciplinary science compared to Denmark, Israel, Finland, Ireland and Singapore. We wanted to look a little deeper into what is going on.

Figure 1. Academic output quality of small advanced economies by field (NSSI, p. 19, 2015)

This data is taken from SciVal, an Elsevier tool for analysing publications and citations that draws on the Scopus database. The data is also field-weighted, meaning it is normalised to make different fields comparable.

The “percentage of publications in the top 10% of citations by field” is used here as a measure of the excellence of a country’s academic output. The greater the proportion of Denmark’s maths papers that make it into the most cited 10% of maths papers worldwide, the more excellent Denmark is at maths.
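Mechanically, the metric works like this: find the citation count that marks the worldwide 90th percentile within a field, then ask what share of a country’s papers in that field sits above it. A sketch with simulated citation counts (any resemblance to real distributions is approximate):

```python
# Sketch of the top-10% metric using simulated citation counts.
import numpy as np

rng = np.random.default_rng(42)
world = rng.pareto(1.5, size=50_000)   # a field's worldwide papers
country = rng.pareto(1.5, size=500)    # one country's papers in the same field

threshold = np.quantile(world, 0.90)   # citation count marking the top 10%
share = (country >= threshold).mean() * 100
print(f"share of papers in the global top 10%: {share:.1f}%")
```

With identical simulated distributions the share lands near 10% by construction; the interesting question is whether a country’s real share sits above or below that line.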

This seems like a reasonable measure, and it’s easy to see New Zealand has a concerningly left-heavy position on the scale. But does the graph really say, as the NSSI claims, that we need to up our game in almost every field to be as ‘excellent’ as our comparative countries?

We set out to pull apart this data, go back to the source, and check out how reliable this information is.

The first issue we encountered was the field labelled “Multidisciplinary”. As you can see, in all six countries, this field out-streaks the others where citation excellence is concerned. This made us wonder what “Multidisciplinary” actually means here – are these small countries all so good at collaborating across different fields?

As it turns out, “Multidisciplinary” publications seem to be labelled according to the nature of the journal they are published in, not because of any cross-field collaboration within the paper. For instance, this category includes papers that are published in the multidisciplinary journals Nature and Science, as well as more unusual choices such as Vaccines and the Journal of Chaos and Bifurcation Theory. Because a few multidisciplinary journals like Nature and Science are extremely highly cited, the citation impact distribution of “Multidisciplinary” publications is skewed to the right, so field-weighting (which is basically an arithmetic mean) does a poor job of normalising the top 10%. Thus, the field receives a confusingly high excellence score.

The second issue is more significant. We wanted to know how stable and consistent these field rankings were across the years, so we went back to SciVal to check. We discovered two things: firstly, due to the dynamic nature of citations, the data we retrieved this November on publications from 2013 was already different to the data on the very same 2013 publications that was gathered earlier this year (for use in the NSSI).

 

Figure 2. Change in 2013 data on output quality ranking of science fields in New Zealand between data retrieval in November 2015 (right) and earlier in 2015 (left). (SciVal November 2015, NSSI)

In a matter of months, the ranking of many fields has changed significantly. This caused us to question whether the trend stabilises over a matter of years. Discovery number 2: no, it doesn’t.

Figure 3. Change in academic output quality ranking of New Zealand science fields over five years. (SciVal November 2015)

This graph shows how the ranking of the fields New Zealand is most ‘excellent’ in has changed unpredictably, without stabilising over five years. We can infer that the graph on page 19 of the NSSI may be an accurate snapshot of the data at the time it was taken (except for that remarkable multidisciplinary field), but the data moves so quickly that this type of graph cannot reflect any meaningful trends in the order of fields.

That said, the SciVal data does have some information to offer.

Figure 4. Percentage of small advanced economies’ academic output falling within the 10% most cited academic output worldwide, by field, averaged over five years. (SciVal November 2015)

This is a remake of the NSSI graph, using averaged data over five years (2010-2014). The order of fields is different from the graph that MBIE published, as we would expect from what we have seen so far. However, this is similar to the NSSI graph in that New Zealand on average scores notably lower than other small advanced economies.

So does the graph say what it claims to say? With a bit of tweaking, it could. It cannot give us any meaningful information about which fields NZ is doing best or worst in, because that ordering can change within months. However, if expanded to more years than just 2013, it does show that NZ consistently produces a lower percentage of globally top-quality papers than comparable countries.

Whether papers being in the top 10% of citations in the world is a good way to measure science excellence or not – that’s a question for another day.

*Featured image by XKCD comics.

Dr Sarah Morgan on NZAS 2015

By Dr Sarah Morgan

Scicomm: Building a Sledgehammer for the Walls Between Science and Society

I’ve been racking my brain about how to structure this piece, and have picked so many starts and specific topics that I’ve tied myself up into a delightful hot mess. I typed nine pages of notes on the day and have had numerous follow-up discussions. Each and every one was wonderfully rich, somewhat opinionated and never resolved.

There is just so much to talk about from the #GoingPublic conference of the New Zealand Association of Scientists that we are going to be talking for a long time (which is lucky, since it’s taken me so long to get this piece out…).

One of the sound bites that stayed with me was the thought that this conversation, about scientists speaking out in public, might be one that needs to be repeated every generation: each time it will reach someone new. I see great similarities between this idea and feminism, which in itself further highlights the undercurrent of sexism in the discussions around scicomm efforts in New Zealand. Uh oh: all the controversial topics are coming out.

I’m going to break this up into sections, and review the points that a) struck me as the most pertinent and poignant raised during the conference, or b) were clarified by my attendance. It’s interesting that the main points turned out to be about scicomm in general, rather than specifically about scientists speaking up on controversial issues in public.

1. Hypercritique, including sexism

This is number one because I see it as the major roadblock holding scientists ‘above’ the general public. If we hold ourselves and our peers to such lofty measures of perfection, no effort is ever going to be good enough; no effort is ever going to be recognised as having value, even if it is very small. I believe every such effort has value, even if it is (to a degree) negatively received by the public; simply getting scientists out there will help along the path to normalising the presence of scientists in society.

Hypercriticism in this scicomm context is a reflexive denigration of engagement efforts. It tends to be couched in phrases like those some presenters described being on the receiving end of: your achievements in the mainstream media are not ‘worth’ anything; your expertise is not specifically on that topic, therefore you have no ‘right’ to speak up on it; your way of doing this is outdated/bad/wrong/my way is better/you should be doing it like that; your research isn’t even that good; you’re not even a professor yet; etcetera. In short: peers being reflexively and overtly negative in response to scientists who have made efforts to engage with the public.

I use the word ‘reflexively’ because I don’t think we even realise we do it anymore. We are all trained to critique the papers we read, and to critique our own and others’ methodology, manuscripts and grant applications. It’s a habit – one that is valuable in the context of the scientific process – but detrimental in the scicomm/society space, where being personable and establishing social connection and empathy are the keys to engaging a lay audience.

Several presenters spoke to this point, though not all directly. As mentioned earlier, there were examples of comments received from scientific peers, but also opinions of the ‘harden up, let it roll off like water off a duck’s back’ variety, which raised further discussion and again dovetailed with the sexism-in-science theme: at which point do you stop ignoring negative aspects of the local culture and push for change at every opportunity? I strongly suspect the level of antagonism, scorn and vitriol received from one’s scientific peers as a result of scicomm efforts is rather gender dependent.

Revolution: Find something complimentary to say about a scientific colleague’s scicomm efforts and imagine saying it out loud to their face. Expert level: go say it out loud to their face.

Fight the subconscious hypercriticism habit.

2. Public expert vs public intellectual

This is intertwined with the first point on hypercriticism, in that it is very easy to judge another person as one or the other, but very difficult to judge oneself – especially after any measure of success in your endeavours, which might reinforce your personal belief in one direction or the other. Hypercritique comes in with the verbal cutting down of people supposing themselves to be public intellectuals rather than experts. “You don’t even study that topic – what right do you have to be on the news speaking about it?!” New Zealand’s infamous tall poppy syndrome is rife within this conversation.

Side note: It has a word! Ultracrepidarianism is the habit of giving opinions and advice on matters outside of one’s knowledge.

I think the public has come some way towards accepting the expert, but less so the intellectual. Or perhaps the difficulty is that the public is no more able to identify worthy intellectuals than we scientists are ourselves. Trust is put in anyone who speaks up with authority, up until the point where they show themselves to have zero clue – usually in a highly embarrassing and public manner. The public intellectual is a rare beast, and appointment is obviously fraught with danger. The public expert, on the other hand, is able and encouraged to speak up whenever their topic of expertise is brought into the public eye. Until we can clarify the role of the public intellectual, perhaps we should focus on encouraging the public expert to speak up, and on encouraging more and diverse public experts to make their mark in the media.

I’d also like to re-raise a point here, though it takes us on a tangent:

The message is very much that the government requires a science advisor to help it make decisions for the good of the country, as opposed to a science advisor for society, to support people in what they require of their government.

The public intellectual in a position of chief science advisor to the New Zealand public is a powerful dream: an office which could become known for its unbiased and effective research, answering any question at any point, providing the data to back up stories in the news and elsewhere; weighing in on public debates or controversies; communicating the science behind 1080 or vaccinations or oil drilling to society from a position of elected authority… Can scientists as a group not provide this function? Was this part of the dream for the Science Media Centre and scientists in conjunction?

Revolution: Talk about your work within your community when it comes up in the media. You don’t have to be on TV or radio – ring the local schools and see if the teachers want to bring it up in class, try the local Rotary or Women’s Institute meetings.

3. The Celebrity Scientist

This is, of course, tied up with the first two points, and very poignantly illustrated in the USA, where anyone with a PhD and some amount of showmanship can start their own science/health-literacy-based TV show. My bugbear, however, is that turning scientists into celebrities completely flies in the face of the goal of normalising science in society. As wrapped up in the second point, any measure of media success can very easily thrust a scientist from the comfortable and somewhat safe role of public expert into the public intellectual role, where they are called upon to give their opinion on topics and situations vastly removed from their professional expertise.

That said, I believe that normalising scientists in society would be helped greatly by more scientists speaking out on more topics, with the proviso that they are sure to phrase it as their own opinion. After all, the academy is full of over-educated, strongly opinionated individuals who are as human, and subject to the same flaws, as the lawyer or the builder. Having this more openly acknowledged would help the public see scientists as real people once again – but in a way where offering your opinion is not concurrent with an appointment as a public intellectual or authority.

When schools or engagement programmes are looking for a scientist to chat with a class or do an experiment or presentation, why go for the semi-famous one who’s on TV or the radio all the time? That will not go any way towards normalising science or the tertiary study environment (a goal of schools with low rates of students going on to further education). I suggest instead finding the regular Dr Bloggs, who has no notoriety whatsoever but knows her subject just as well. Perhaps even open the field and try to find a scientist who attended your school, or lives in the area. Lest I ruffle too many feathers – I do not mean to say that having recognisable, ‘celebrity’ scientists in mainstream media is bad – I just don’t think all engagement efforts with the public should remain solely in their court.

If there was one lesson to take away from the conference (according to the gospel of me) it would be, not surprisingly, collaboration. Collaboration not only between scientific peers (and in an encouraging, supportive manner – down with hypercriticism!), but between sectors, fields – all things. Find a journalist and build a relationship. Find a policy/ministry employee and build a relationship. Find a school, find a sports club, find a new lab, find a community group, find a blog, find a mind – and communicate with it.

I’d like to thank Te Pūnaha Matatini for getting me to the NZAS conference this year. I think the best part about academia is the collaborative and community atmosphere that is illustrated so perfectly at conferences. Academia, and science in particular, will always be better when we work together, when we collaborate and learn with and from each other. The New Zealand Centres of Research Excellence are a beautiful example of this, and Te Pūnaha Matatini is pushing the boundaries by connecting academia with business and industry.

Dr SM Morgan is a Research Fellow at the Liggins Institute, University of Auckland, working on translation interventions in the health literacy field.

http://twitter.com/DrSMMorgan