The Archaeology and Physics Departments at the University of Auckland, together with contributors from other universities, have been collecting data on obsidian artefacts from northern New Zealand. To date, the project has data on over 2,500 such artefacts, drawn from sources ranging from historical studies of obsidian to more recent studies by current archaeologists at the University of Auckland. Part of the aim of this research is described in “Social Network Analysis of Obsidian Artefacts and Māori Interaction in Northern Aotearoa New Zealand”, a recent publication involving my Te Pūnaha Matatini and industry supervisors.
Why study obsidian?
Obsidian is a volcanic glass found at several locations in New Zealand. It is hard and brittle, so when a piece is broken off (called a flake), it has sharp edges. This made it very useful as a cutting tool in pre-European New Zealand. By analysing the elemental composition of an artefact, the source it came from can be determined. By comparing this with the archaeological site where each artefact was found, my supervisor Dr Dion O’Neale has been able to infer social networks of pre-European New Zealand. Dion analysed geographical least-cost paths and found that distance was not always the main factor in determining where each archaeological site sourced its obsidian flakes from. Analysing obsidian artefacts therefore yields a great deal of information, and the aim of this research project is to infer this kind of information, and more, about pre-European Aotearoa New Zealand.
With so much varied data, the need arose for a central data infrastructure where all the data records could be stored, along with protocols to support data quality and provenance. The data needed to be accessible to parties from different departments and universities.
The main steps I took to complete my internship project included:
- Choosing and learning to use an appropriate database software
- Schema design
- Data cleaning
- Scripting for automated data uploading
These steps were not necessarily sequential and often ran in conjunction with each other. For example, because there was a variety of data sources, I sometimes came across new data fields while cleaning the data, and had to edit the schema to reflect each new field. While cleaning, I also came across discrepancies or unknown variables in the data, and had to wait to hear back from other people about them before I could proceed.
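That loop of discovering a new field mid-cleaning and amending the schema can be sketched roughly as below. This is a minimal illustration only: the table and column names are hypothetical, and SQLite stands in for whatever database platform the project actually used.

```python
import sqlite3

# Hypothetical minimal schema: one table of artefact records.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE artefact (
           artefact_id INTEGER PRIMARY KEY,
           site        TEXT,
           source      TEXT
       )"""
)

def ensure_column(conn, table, column, sql_type="TEXT"):
    """Add a column if a newly encountered data field is missing from the schema."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {sql_type}")

# During cleaning, a new data source turns up a field the schema did not anticipate:
ensure_column(conn, "artefact", "flake_mass_g", "REAL")
```

In practice a schema change like this would be reviewed rather than applied automatically, but the check-then-alter pattern captures how cleaning and schema design fed into each other.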
It surprised me how long it took to design the schema, since data cleaning is usually what takes the longest. In a sense, some of the cleaning did happen during schema design: while designing the schema I was also deciding which data to keep and which to discard, which greatly reduced the time it later took to clean and format all the data tables for upload. Finding and learning to use an appropriate database platform also took a while, and writing the scripts for automatic uploading to the database took a couple of weeks.
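An automated upload script along those lines might look like the sketch below. The function name, CSV layout, and use of SQLite are all assumptions for illustration; the real scripts would target the project's chosen platform.

```python
import csv
import sqlite3

def upload_table(conn, table, csv_path):
    """Bulk-insert one cleaned CSV file into the named database table.

    Assumes the CSV header row matches the table's column names.
    Returns the number of rows inserted.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        cols = reader.fieldnames
    if not rows:
        return 0
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    conn.executemany(sql, [[row[c] for c in cols] for row in rows])
    conn.commit()
    return len(rows)
```

A script like this can be pointed at a directory of cleaned tables and run whenever new records arrive, which is what makes the uploading step repeatable rather than a one-off.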
Kate is currently studying for a Master of Applied Data Science (MADS) at the University of Canterbury.