
Artificial Intelligence and Human Rights: Notes from Coconet II

This post was originally published on the Coconet website.


Artificial intelligence (AI) is a topic that digital rights activists in the region are concerned about but have little understanding of. At the moment, civil society has more questions than answers about the human rights implications of machine learning and of the massive amounts of data collected to train these systems.

During Coconet II, I found myself gravitating towards the sessions on AI and data privacy, which emerged as a clear track among the digital rights topics discussed. The sessions I attended covered various bases: conceptual understanding, mapping of the current situation, and philosophical discussion.

AI Session at Coconet Camp

In this blog post, I will describe some of the sessions I attended on AI. Erring on the side of caution regarding privacy, I am keeping the session organisers’ names out, while acknowledging that all sessions were well-conducted and provided plenty of food for thought.

1. Government Uses of AI – The three presenters in this session gave case examples of how governments have employed AI in various functions, such as facial recognition for surveillance (in many countries, sometimes under the banner of smart city policies), algorithmic decision-making in social protection (in India, the UK, and Australia), and social credit ratings that rank people based on their “trustworthiness” (in China).


We also talked about deepfakes, AI-doctored photos and videos that look extremely realistic, and their implications. Of particular concern was the geopolitics of AI, given that the US and China are the main producers of AI technologies. We discussed possible advocacy strategies as well, such as arguments based on consumer and intellectual property rights.

"Branding Artificial Intelligence" by Dan Sherratt is licensed under CC BY-NC-ND 4.0


2. Understanding AI Bias with Candy* – In this lively session, we learnt about two types of bias that can occur in an automated decision-making system. Through a series of candy-distribution activities, the session simulated different biases that can occur in a machine-learning system and the way these biases can interact and amplify each other.

Data bias refers to problems that occur when the data used to train systems or draw inferences are in some way prejudiced or skewed, or when certain populations are missing or poorly represented. Algorithmic bias occurs when the computer logic in the system encodes a pathway that results in an unfair decision, such as privileging one group over another.

We also learnt about noise, an expected property of any statistical model that decreases its accuracy by adding randomness to the system. The effect of bias can increase, sometimes more than additively, when both types of bias occur in the same decision system.

To conclude the session, we broke into groups to design rules for a ‘fairer’ candy distribution, and found that it was difficult to do well.
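For readers who want to poke at these ideas in code, below is a minimal Python sketch of a candy-distribution decision. It is not the session’s actual exercise: the group names, merit scores, thresholds, and bias magnitudes are all invented here purely to illustrate how data bias, algorithmic bias, and noise can show up in one decision system.

```python
import random

random.seed(42)  # reproducible runs

def simulate(n=10_000, data_bias=True, algo_bias=True, noise_sd=5.0):
    """Return the fraction of each (hypothetical) group awarded candy."""
    counts = {"A": 0, "B": 0}  # people seen per group
    got = {"A": 0, "B": 0}     # candy awarded per group
    for _ in range(n):
        group = random.choice(["A", "B"])
        counts[group] += 1
        merit = random.gauss(50, 10)               # true, unobserved merit
        score = merit + random.gauss(0, noise_sd)  # noise: random measurement error
        if data_bias and group == "B":
            score -= 10  # data bias: group B's merit is systematically under-recorded
        # algorithmic bias: the decision rule itself sets a higher bar for group B
        threshold = 55 if (algo_bias and group == "B") else 50
        if score >= threshold:
            got[group] += 1
    return {g: got[g] / counts[g] for g in counts}

# Compare no bias, each bias alone, and both together.
for db, ab in [(False, False), (True, False), (False, True), (True, True)]:
    rates = simulate(data_bias=db, algo_bias=ab)
    print(f"data_bias={db!s:<5} algo_bias={ab!s:<5} -> "
          f"A: {rates['A']:.0%}, B: {rates['B']:.0%}")
```

Running the sketch, group B’s share of candy drops under either bias alone and drops furthest when both are present, while the noise term blurs outcomes for everyone: roughly the dynamic the candy activities demonstrated in the room.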

3. Mapping AI in Southeast/South Asia – This was a more hands-on session. We had a mini hackathon for participants to map out the governmental, corporate, and civil society initiatives on AI in their own countries. Countries represented were Indonesia, Bangladesh, Malaysia, the Philippines, Myanmar, and Thailand; those from outside the region also worked on regional initiatives.

This is the joint document that we worked on; it is a living document that we will continue to build on beyond Coconet II.



4. Big Brother is Not The Only Narrative – I co-hosted this session, where we discussed moving the data privacy narrative beyond George Orwell’s classic novel 1984, in which state surveillance (personified by Big Brother) rules through fear, leading to thought control and self-censorship.

An overlooked narrative, which may be more relevant to the current situation, comes from Franz Kafka’s The Trial, in which the protagonist is accused of a crime he knows nothing about, based on data in some database that he is unable to find. In this second story, a person’s data is used to make important decisions about his life, but he is powerless to challenge the opaque system in place.

He has no say in, or knowledge of, the collection and use of his data. The system is indifferent towards him as a human being beyond his papers, and it is uninterested in controlling his thoughts. This is the situation in many public and private uses of AI.

Different metaphors allow us to think about the problems differently, and we should come up with more robust storytelling to help us communicate our advocacy better.

The sessions were very helpful for me, as a participant and a session organiser, in formulating and articulating the problems associated with machine learning from a digital rights perspective. They also helped form an initial community concerned about AI, which continues through the AI channel on the Coconet Mattermost platform, one of its biggest channels with 48 members so far.

Certainly, our conversations on AI in the region need to extend beyond Coconet II, as this area will only grow in importance as more people get connected digitally and more governments adopt these technologies. To that end, I am working with EngageMedia on a small research project to understand the state of the art of AI in Southeast Asia and inform the work of digital rights advocates in the region. More details will come soon, and the outputs will be shared with all.

* I’d like to thank Laura Summers from Debias.AI for her help in editing the part on AI and Candy 🙂

About the Author

Dr. Jun-E Tan is an independent researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E’s newest academic paper, “Digital Rights in Southeast Asia: Conceptual Framework and Movement Building”, was published in December 2019 by SHAPE-SEA in the open access book “Exploring the Nexus Between Technologies and Human Rights: Opportunities and Challenges in Southeast Asia”. She blogs sporadically here.