6 October 2025
A collaboration between ecologist Will Godsoe, mathematician Claire Postlethwaite and illustrator Hanna Breurkes. Edited by Jonathan Burgess.
We think we live in the information age.
One of the latest big buzzes of the information age is artificial intelligence (AI). We’re surrounded by claims about the power and potential of AI. We hear that it’s an amazing tool that can take lots of information, stir it around and make good guesses about… well, seemingly anything.
It’s easy to feel a little queasy about these technological guesses. The way we think about new tools like ChatGPT that manipulate information seems strangely removed from the actual world of us living creatures. Human intelligence is inextricably linked to our bodies and our environments.
It isn’t always clear how the guesses made by artificial intelligence connect to this world.
***
We’ve been exploring how AI interacts with the environment by focusing on a mathematical model of an AI-guided trap being used to try to control an invasive species. Models like these can give us some useful insights into how weird outcomes can arise when AI is used in complex systems.
The weka is an iconic New Zealand bird. Some populations of weka are threatened by rats and other predators, so we decided to model a trap that uses AI to identify rats, and the effect this would have on the weka population.
Ecosystems are systems where outcomes depend on complex interactions among many species. For example, predator populations are prone to feedbacks – processes that change the thing that caused them. The behaviour of the AI changes the populations of both the rats and the weka, because either may be caught in the traps. But the number of rats also affects the behaviour of the traps. When rats are common, the traps tend to catch rats. When rats become less common, more traps are available to catch weka instead. This means that even a very small error rate in the AI algorithm’s ability to correctly identify a weka can cause the weka population to crash.
Uh oh, that’s not what we want. And before the salespeople surf in on the AI wave and sell us a bunch of these AI-powered traps, we need to hold up a moment.
In our system there are two ways in which the AI identification algorithm can make an error. First, a weka goes into the trap and the AI misidentifies it as a rat, so the trap kills it. Second, a rat enters the trap but the AI incorrectly identifies it as a weka and lets it go free. Our mathematical model allows us to investigate how both types of error might affect the environment.
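To make those two errors concrete, here is a tiny sketch in Python of the decision the trap faces each time an animal wanders in. The error rates here are placeholders we have made up for illustration, not the values used in our model.

```python
import random

# Illustrative, made-up error rates – not the values used in our model.
P_WEKA_KILLED = 0.01   # error 1: a weka is misidentified as a rat and killed
P_RAT_RELEASED = 0.70  # error 2: a rat is misidentified as a weka and released

def trap_outcome(animal):
    """Return what the AI-guided trap does to the animal that just entered."""
    if animal == "weka":
        return "killed" if random.random() < P_WEKA_KILLED else "released"
    else:  # a rat
        return "released" if random.random() < P_RAT_RELEASED else "killed"
```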
It is possible for the feedback between AI-based traps and rats to cause unexpected effects, even when we think we are improving the accuracy of the AI. Suppose we start our system with fairly conservative settings. We are nervous about killing too many weka, so our initial set-up has a fairly high rate of false negatives: that is, frequently (say, 70% of the time) when a rat enters the trap, the system identifies it as a weka and lets it go free. The weka are mostly correctly identified, but the system is not 100% accurate, so let’s suppose that 1% of the time a weka is incorrectly identified as a rat. With these settings, we find that in comparison with the system with no trapping, there are many fewer rats and slightly more weka. This is a good outcome, but we would like to be able to eliminate even more of the rats.
We then increase the accuracy of the AI through training or other methods, so that the system gets better at identifying that a rat really is a rat. At first the ecological outcome improves as we would hope – the density of rats decreases further, and the density of weka increases. So we keep making the system even better at identifying rats as rats.
But then suddenly, the ecological outcomes are catastrophically different. Both species are driven to extinction.
We can explain this by considering the feedback loops in the system. When the ability of the AI to correctly identify rats increases, the rate of trapping rats increases. This leads to a decrease in the rat population, which means that there are fewer rats to be trapped – and more traps left available to accidentally trap weka.
Because the traps have a reset time, when the rat population is high the traps are more frequently “full” and unavailable. The weka population is much more sensitive than the rat population to small changes, so trapping even a few weka is enough to drive the weka to extinction.
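For readers who like to tinker, here is a rough back-of-the-envelope sketch in Python of the kind of feedback loop we are describing. It is not our actual model – the equations and every number below are invented for illustration – but it captures the mechanism: traps that are busy with rats cannot threaten weka, and traps that sit empty can.

```python
# A toy, discrete-time version of the feedback described above. The
# equations and every parameter value are invented for illustration –
# this is not the model used in our study.

def simulate(rat_accuracy, years=2000):
    """Track rats, weka and a fixed pool of AI-guided traps over time.

    rat_accuracy is the chance that a rat entering a trap is correctly
    identified (and killed). The small chance of a weka being wrongly
    killed is folded into the bycatch term below.
    """
    rats, weka = 100.0, 4.5  # roughly the state with no trapping at all
    for _ in range(years):
        # Traps take time to reset after a catch, so when rats are common
        # most traps are "full"; as rats get rarer, more traps sit free.
        free_fraction = 10.0 / (10.0 + rats)

        rats_trapped = 2.8 * rat_accuracy * free_fraction * rats
        weka_trapped = 0.15 * free_fraction * weka  # accidental bycatch

        new_rats = rats + 0.3 * rats * (1 - rats / 100) - rats_trapped
        new_weka = (weka + 0.1 * weka * (1 - weka / 15)
                    - 0.0007 * rats * weka   # rats preying on weka
                    - weka_trapped)

        rats, weka = max(new_rats, 0.0), max(new_weka, 0.0)
        if rats < 0.01:
            rats = 0.0  # treat tiny populations as locally extinct
        if weka < 0.01:
            weka = 0.0
    return rats, weka

# Sweep the rat-identification accuracy upwards. Modest accuracy leaves
# fewer rats and slightly more weka than no trapping; past a threshold
# the rats collapse, the traps sit empty, and the weka are trapped to
# extinction too.
for accuracy in (0.30, 0.32, 0.35, 0.5, 0.9):
    print(accuracy, simulate(accuracy))
```

Playing with the accuracy values in a toy like this shows the same pattern we describe above, even though the numbers themselves mean nothing: the danger comes not from any single error rate, but from how the traps, the rats and the weka feed back on one another.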
AI can’t be evaluated in isolation. This lesson should be emphasised in the age of AI hype, because many measures of the success of AI focus on accuracy, either formally in terms of the number of errors produced, or informally in terms of how well AI mimics what we’d expect from people. These measures foreground what the AI is doing, not the ecosystem where it is found. It’s far easier to make a good guess than it is to get a good outcome.
By tracing the feedbacks through a larger system, our work highlights that information may be great, but there is more to the world than just information.
Will Godsoe and Claire Postlethwaite co-lead the Towards a better understanding of artificial intelligence and its interaction with its environment project at Te Pūnaha Matatini.
Hanna Breurkes is a designer and illustrator who is passionate about designing to improve wellbeing and is inspired by nature.
Jonathan Burgess is an award-winning communications specialist who specialises in translating technical detail for a general audience.