
Whose Model is it Anyways?


BLOG POST

by Dr. John Loewen, Faculty Member at the Chulalongkorn School of Integrated Innovation.

I am by nature, at the core, a numbers guy - a computer scientist by trade - fascinated by data and the information that can be gleaned from it. I invest (waste?) my time poring over all of the Covid numbers, looking at R values and 7-day rolling averages. I am fascinated by websites like fivethirtyeight.com - I spent hours glued to their polling and predictive statistics leading up to the 2016 and 2020 US elections (boy, their algorithms were waaaay off in 2016 - or maybe it was the polling data?). Isn’t AI just great for this sort of analysis? Artificial Intelligence, through complex algorithms, can provide predictive analysis in a multitude of domains, crunching data sets to predict outcomes - from elections to education, from weather to wearables. Some feel AI can solve all our problems and robots can do everything for us - they may take over the world, but it’s worth the risk!

AI facilitates decision making based on quantitative data - in other words, on a dataset devised by scientists. I am one of those scientists who works on datasets. I have worked with other scientists, and together we have created algorithms that model, help us learn about, and predict phenomena in the world around us. However, there is a growing group of folks who reject the idea that AI is the panacea for knowledge. This group observes that there is a divide (a chasm?) between the knowledge that AI provides and the way that humans think. As Erik Larson posits in his book “The Myth of Artificial Intelligence” (2021), AI works on inductive reasoning from data sets, but humans make conjectures informed by context and experience - they make decisions based on their best guesses given what they know about the world. With AI, the formulas devised are mathematical and therefore involve a certain level of abstraction to create, which often leaves those who do not lead lives of abstraction (i.e., remote, rural indigenous folks) in a quandary, as their historical and cultural traditions are not abstract. When your culture lives off the land, there is innately an understanding of how to live season to season with unpredictability and chaos, and this unpredictability affects day-to-day decision making. And if you want to journey down this road of thought a little further: if a culture lives off the land, then food comes from the land, and food is culture. I have been spending more of my time lately reading and researching this domain of AI - that is, how AI is used to try to model complex systems. A few years back, I learned a hard firsthand lesson about the issues with AI and modeling, by way of a long and winding PhD exploration.

About three years and four papers into my PhD dissertation, I had it all figured out. I had a framework and a methodology (kinda the same thing…), and I was going to quantify qualitative thinking using fuzzy logic. It was a form of “expert system”, and at the time I was excited that I was dabbling on the edges of AI and working off the research of some seminal thinkers. In my mind, this dissertation thing was a done deal - graduation with honors, on to bigger and greater things. Then I started asking the “right” people about my ideas, about my framework, about my methodology - specifically, indigenous educators and indigenous knowledge holders for whom (in my mind) the system would be so valuable. I was informed, in no uncertain terms, that my framework and methodology were, in essence, detrimental to the furthering of community knowledge and traditions. As one indigenous knowledge expert stated during my PhD research: “any system that takes someone away from the land, from being on the land, from learning off the land is the opposite of what is best for our community”. When I first sat down and thought about it, my reaction was that they were all wrong - I was mad and not ready to admit that I had wasted so much time on an unworkable model. Years of reflection later, with a completed PhD behind me (following a “slightly” different path), I have spent some time reading and thinking about the human brain, about cognition, about how humans navigate the world around them. I would not call myself an expert in this domain, but I have gained knowledge through personal experience, and I am now aware and interested when I see AI applied from a hegemonic perspective. Often this type of modeling results in outcomes that may not represent marginalized folks so well. Who are the marginalized folks? Well, if you’re asking that question after you have created the model, then you are in trouble… like I was.
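
For the curious, here is a rough idea of what “quantifying qualitative thinking with fuzzy logic” can look like in code. This is a minimal, purely illustrative sketch - the variables, membership functions, and rule outputs are invented for this post, not taken from my dissertation:

```python
# Purely illustrative fuzzy-logic sketch: turning a qualitative judgment
# ("low" / "medium" / "high" engagement) into a numeric score.
# The variables, membership functions, and rule outputs are invented for
# this example; they are NOT the framework from the dissertation.

def triangular(x, a, b, c):
    """Triangular membership function: peaks at b, zero at or beyond a and c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_engagement(hours_on_activity):
    """Map an observed quantity onto fuzzy 'low'/'medium'/'high' sets."""
    return {
        "low":    triangular(hours_on_activity, 0, 2, 5),
        "medium": triangular(hours_on_activity, 3, 6, 9),
        "high":   triangular(hours_on_activity, 7, 10, 13),
    }

def score(hours_on_activity):
    """Defuzzify with a simple weighted average of rule outputs."""
    memberships = fuzzify_engagement(hours_on_activity)
    outputs = {"low": 20, "medium": 60, "high": 90}  # crisp output scores
    num = sum(memberships[k] * outputs[k] for k in memberships)
    den = sum(memberships.values()) or 1.0
    return num / den

print(score(8))  # partly "medium", partly "high" -> 75.0
```

The appeal is obvious to a numbers person: vague, experience-based judgments get squeezed into tidy membership functions and a single defuzzified score. The problem, as I learned, is everything those functions leave out.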

Which brings me to the important question of “whose model is it anyways?”. I would like to share one more story related to my experience. I am originally from a very small, remote community off the northwest coast of Canada, a place called Haida Gwaii (loosely translated from the Haida language as “land of the people”).

Image 1. Location of Haida Gwaii (image on left from Wikipedia)

Now, an important thing to note is that I am a “settler” of white European descent, first arriving on the islands of Haida Gwaii with my parents back in the 1970s. The Haida are indigenous to Haida Gwaii and have been there since “time immemorial”. Nowadays, indigenous folks in general are wary and skeptical of technology, and for good reason. In the hegemonic society in which many remote indigenous communities must co-exist, the “culture of progress” is defined as “continuous corporate growth”, shown on graphs as an upward trend in profits and sales. In his book “AI in the Wild”, Peter Dauvergne highlights the tendency of corporations to oversell the value, downplay the failures, and misjudge the risks of commercializing technology. One of the main issues that indigenous communities deal with in regard to modeling is the concept of biased models and biased data. So what do I mean by this? The first questions that need to be answered are: who designed the model, and who provided the data? If the data is biased, then the outcomes will also be biased - if the data is devised from the perspective of a hegemonic society, the outcomes will also reflect this perspective. Now let’s move to how this point is relevant to my own life experience.
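
Before that story, here is a tiny, hypothetical sketch of the “biased data in, biased outcomes out” mechanism. Every number and group label below is invented; the point is only the pattern - a model fit mostly on one group’s observations quietly tells that group’s story:

```python
# Hypothetical illustration of "biased data in, biased outcomes out".
# All numbers and group labels are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

# The "majority" observations dominate the training data: catch rises
# steeply with fishing effort.
effort_majority = rng.uniform(10, 100, size=500)
catch_majority = 3.0 * effort_majority + rng.normal(0, 5, size=500)

# The "remote community" observations follow a different relationship
# (seasonal limits, local stewardship), but only a handful are recorded.
effort_remote = rng.uniform(10, 100, size=20)
catch_remote = 1.2 * effort_remote + rng.normal(0, 5, size=20)

# Fit one model on the pooled (heavily majority-weighted) data.
effort = np.concatenate([effort_majority, effort_remote])
catch = np.concatenate([catch_majority, catch_remote])
slope, intercept = np.polyfit(effort, catch, deg=1)

def mean_abs_error(x, y):
    """Average prediction error of the pooled model on a given group."""
    return np.mean(np.abs((slope * x + intercept) - y))

print(f"fitted slope: {slope:.2f}")  # close to the majority's 3.0
print(f"error, majority data: {mean_abs_error(effort_majority, catch_majority):.1f}")
print(f"error, remote data:   {mean_abs_error(effort_remote, catch_remote):.1f}")
# The error for the remote community is many times larger: the model
# tells the majority's story, because the majority supplied the data.
```

The model is not “wrong” in any mathematical sense - it fits the data it was given very well. It simply was never given the data that mattered to the smaller community.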

Indigenous communities in Canada struggle (i.e., completely disagree) with the federal government’s models of what is considered “sustainable fishing”. Pacific herring are small, essential food fish that inhabit the Pacific coast of North America. For many indigenous communities of the northwest coast, Pacific herring are culture and life.

Image 2: Historical spawning grounds of Pacific herring on Haida Gwaii (image on left from marinematters.org)

Due to commercial overfishing (mostly by large “settler” boats), the Pacific herring population collapsed in the 1990s, and the fishery was subsequently closed by the Canadian Department of Fisheries and Oceans (DFO) to any sort of fishing for an extended period of time. This adversely affected many northern indigenous communities, which relied heavily on the herring as a food and sustenance source. In 2013, after six years of herring fishery closures, DFO devised a new predictive modeling system to forecast 2014 herring levels. From their model, they determined that levels would be sufficient to open the herring fishery for commercial purposes in 2014 (Jones, Rigg & Pinkerton, 2017). For the Haida, already skeptical of government predictive models, this was not acceptable. Conveniently, the “new” model showed higher predicted herring numbers than the previous model - and on top of this, a leaked internal memo from senior management within DFO to the minister responsible (the decision maker) had proposed a continued closure of the fishery. The Haida went to court and won an injunction to keep the fishery closed. The position and action of the Haida Nation highlighted their lack of confidence in predictive modeling. The models had been created by scientists who lived and had grown up in societies located thousands of kilometers away, and they were being implemented without any community consultation or input.
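
I obviously cannot reproduce DFO’s actual model here, but a toy projection shows how sensitive such forecasts can be to a single assumption. Everything in the sketch below - the biomass numbers, the threshold, the productivity parameter - is hypothetical, and the model is a deliberately simple logistic projection, not DFO’s:

```python
# Toy illustration (NOT DFO's model): project spawning biomass one year
# ahead with discrete logistic growth, under two assumed productivity
# rates. All numbers and the decision threshold are hypothetical.

def project_biomass(b0, r, k, years=1):
    """Project biomass forward with discrete logistic growth."""
    b = b0
    for _ in range(years):
        b = b + r * b * (1 - b / k)
    return b

CURRENT_BIOMASS = 8_000     # tonnes, assumed survey estimate
CARRYING_CAPACITY = 60_000  # tonnes, assumed
OPEN_THRESHOLD = 10_000     # tonnes, notional "open the fishery" cutoff

for label, r in [("old assumptions (r=0.18)", 0.18),
                 ("new assumptions (r=0.35)", 0.35)]:
    forecast = project_biomass(CURRENT_BIOMASS, r, CARRYING_CAPACITY)
    decision = "open fishery" if forecast >= OPEN_THRESHOLD else "stay closed"
    print(f"{label}: forecast {forecast:,.0f} t -> {decision}")

# Same survey data, one different assumption about productivity, and the
# recommendation flips from "stay closed" to "open fishery".
```

Whoever chooses the assumptions chooses the story the model tells - and the communities most affected were not consulted when those assumptions were chosen.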

So what’s the moral of the model story here? There needs to be careful consideration of whose story the model is telling. As many data scientists are designing AI models that may directly and indirectly affect non-hegemonic societies, educational institutions (i.e., post-secondary institutions) would be astute to include courses that promote cultural awareness - for example, traditional anthropology, human culture, and history. In this way, we can have a higher level of assurance that the AI model tells a more culturally appropriate, and therefore more accurate, story.

Resources

Dauvergne, P. (2020). AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press.

Jones, R., Rigg, C., & Pinkerton, E. (2017). Strategies for assertion of conservation and local management rights: A Haida Gwaii herring story. Marine Policy, 80, 154-167.

Larson, E. J. (2021). The Myth of Artificial Intelligence. Harvard University Press.

