AI workshop for civil society: Understanding AI harms and the need for documentation

  • Siti Rochmah Desyana
  • 28 May 2025
  • 11:00 am

This is a summary of Day 2 of EngageMedia’s Training on Strategic Engagement with Government regarding AI Governance and AI Accountability, held in Bali, Indonesia, from May 13 to 15, 2025. Read the Day 1 recap here.

Day 2 of EngageMedia’s workshop on AI governance and accountability began with a session on understanding, recognizing, and mapping out biases in an AI system.

Karen Hao of the Pulitzer Center introduced participants to various ways of thinking about bias: bias as representation, where certain groups should not be over- or under-represented in an AI system's predictions; bias as equal performance, where an AI system should perform equally well across groups; and bias as well-distributed error, where the errors an AI system makes should be spread evenly across groups. She provided a matrix to help participants interrogate AI biases across the AI lifecycle:

  • Input variables: Does the AI system use variables that are unfair, like a person’s race or gender?
  • Training data: Does the training data contain historical biases?
  • Model type: Does the machine learning technique inject randomness?
  • Accuracy: Does the system perform equally well across different groups?
  • Outcomes: Is there disparate impact against vulnerable groups?
  • Deployment: Is the application of the AI system in itself biased?
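To make the three framings of bias concrete, here is a minimal sketch that tallies, for each group, the share of positive predictions (representation), accuracy (equal performance), and error rate (well-distributed error). The function name, field names, and sample data are hypothetical illustrations, not material from the workshop.

```python
# Illustrative sketch only: groups, fields, and sample data are hypothetical.
# It reflects the three framings of bias described above: representation,
# equal performance, and well-distributed error.
from collections import defaultdict

def per_group_report(records):
    """records: list of dicts with 'group', 'prediction', and 'label' keys."""
    stats = defaultdict(lambda: {"n": 0, "positive": 0, "correct": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["positive"] += int(r["prediction"] == 1)          # who gets flagged?
        s["correct"] += int(r["prediction"] == r["label"])   # how often is it right?
    report = {}
    for group, s in stats.items():
        report[group] = {
            "share_predicted_positive": s["positive"] / s["n"],  # representation
            "accuracy": s["correct"] / s["n"],                   # equal performance
            "error_rate": 1 - s["correct"] / s["n"],             # well-distributed error
        }
    return report

# Hypothetical usage: large gaps between groups in any column point to a
# different kind of bias (representation, performance, or error).
sample = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 0},
    {"group": "B", "prediction": 1, "label": 0},
    {"group": "B", "prediction": 0, "label": 0},
]
print(per_group_report(sample))
```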

Hao also introduced participants to the skills needed to detect BS (yes, it stands for what you think it does) in pitches, advertisements, and testimonies for AI systems, and to ways of debunking these statements. Typical examples of misleading claims include hyperbole, in which the perceived efficacy of an AI system is inflated, often by citing a specific percentage (“this machine has a 99% accuracy rate!”). Then there is anthropomorphization, which portrays AI as equal to or able to match human skills, with taglines like “sentient” and “outperforms humans.” Lastly, statements citing the existential threat posed by AI tend to distract the general public, including governing bodies and regulators, from the problems caused by the current iteration of the technology by framing its risks as far-off, Terminator-esque threats to humanity.

With biases and BS covered, the next session turned to a specific area where these discussions on AI are relevant: reporting on information muddle and influence operations. As a practical exercise, the group picked apart samples of reporting pitches and advertisements typically used to hype AI systems and tout their supposed benefits. Participants were invited to critically examine these statements and to think through the questions they should ask when faced with such claims.

Hao also shared some entry points for interrogating the media ecosystem landscape to help shape, sharpen, and present compelling stories on AI and other emerging tech:

  1. Who inhabits it? This maps the audience the content is meant to reach.
  2. Who are the active information providers? This prompts us to outline the information content and its disseminators in their ideal form.
  3. Who are the manipulators? This identifies the bad actors and their modes of operation.
  4. What is the system? Understanding who built the system, and how that identity shaped what was built, is crucial to understanding how the system works.

The next session was led by Dr. Jun-E Tan of the Khazanah Research Institute (KRI), together with Shaleh Al-Ghifari and Nelson Simamora of the Strategic Impact Litigation collective (SILc), and focused on ways to map out and analyze AI incidents. It began with a definition: an AI incident is an “event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to tangible harm.” Tan then highlighted the importance of breaking down the taxonomy of harms within an AI incident to fully understand the breadth and scope of its impact, sharpening the analysis and making the recommendations drawn from the assessment more specific.
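As a rough illustration of what documenting such an incident might look like in practice, here is a minimal sketch of an incident record with a taxonomy of harms attached. The field names and harm categories are assumptions for illustration only; they do not reproduce a taxonomy presented by KRI or SILc.

```python
# Hypothetical sketch of an AI incident record, following the session's definition:
# tangible harm caused directly or indirectly by the development, use, or malfunction
# of an AI system. Field names and harm categories are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    ECONOMIC = "economic"
    PSYCHOLOGICAL = "psychological"
    REPUTATIONAL = "reputational"
    RIGHTS_AND_FREEDOMS = "rights_and_freedoms"

@dataclass
class AIIncident:
    title: str
    ai_system: str                 # which system was involved
    description: str               # what happened, directly or indirectly
    harms: list[HarmType]          # taxonomy of harms observed
    affected_groups: list[str]     # who bore the impact
    date: str                      # when the incident occurred (ISO 8601)
    sources: list[str] = field(default_factory=list)  # reports, testimonies, evidence

# Example record (entirely fictional, for illustration)
incident = AIIncident(
    title="Wrongful benefit denial",
    ai_system="Automated welfare eligibility scoring",
    description="Applicants flagged as high-risk were denied benefits without review.",
    harms=[HarmType.ECONOMIC, HarmType.RIGHTS_AND_FREEDOMS],
    affected_groups=["low-income applicants"],
    date="2024-11-02",
    sources=["https://example.org/report"],
)
```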

The session also highlighted the importance of building a repository to monitor and keep track of AI incidents. Al-Ghifari and Simamora emphasised the need for a concerted effort to collect cases in depth as the basis for rights-based advocacy on emerging tech. The group then discussed how a repository might help them in their respective work, and what the repository would require to be fully effective.
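Continuing the sketch above, such a repository could start as simply as a collection of incident records that can be filtered by harm type or affected group to support case-building. The interface below is a hypothetical illustration building on the previous example, not an existing tool.

```python
# Hypothetical in-memory repository of AIIncident records (defined in the sketch above),
# filterable by harm type or affected group to support case-building and advocacy.
class IncidentRepository:
    def __init__(self):
        self._incidents: list[AIIncident] = []

    def add(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def by_harm(self, harm: HarmType) -> list[AIIncident]:
        return [i for i in self._incidents if harm in i.harms]

    def by_group(self, group: str) -> list[AIIncident]:
        return [i for i in self._incidents if group in i.affected_groups]

# Example usage with the fictional incident from the previous sketch
repo = IncidentRepository()
repo.add(incident)
print([i.title for i in repo.by_harm(HarmType.ECONOMIC)])
```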

The Day 2 session aimed to move toward the practical documentation of AI incidents, harms, and risks, guiding civil society in gathering the evidence needed to craft actionable recommendations, advocacy plans, and strategies for engagement with policymakers and government stakeholders.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
