AI workshop for civil society: Unpacking AI through an accountability lens

  • Siti Rochmah Desyana
  • 23 May 2025

This is a summary of Day 1 of EngageMedia’s Training on Strategic Engagement with Government regarding AI Governance and AI Accountability, held in Bali, Indonesia, from May 13 to 15, 2025.

As the AI industry continues to grow, it is crucial for civil society to ensure that the deployment of AI systems accounts for potential risks and harms to society while prioritizing the public interest. To do so, civil society needs to participate actively in AI governance discussions, which makes building the capacity to understand AI systems and how they should be governed both essential and urgent.

EngageMedia, with support from Luminate, and in collaboration with Karen Hao from the Pulitzer Center, the International Centre for Not-For-Profit Law (ICNL), UNESCO, the Khazanah Research Institute, Wikimedia Foundation, and GIZ, organized a three-day training workshop from May 13 to 15, 2025, for civil society members, journalists, human rights defenders, and lawyers from Indonesia and Malaysia on engaging with the government regarding AI accountability and governance.

Karen Hao explaining the AI Development Stages

On Day 1, Hao led sessions delving into the history of AI development and how AI is often misunderstood as a single entity, typically conflated with ChatGPT or other generative models. In fact, AI is a broad term encompassing various technologies designed to mimic human intelligence. Coined in the 1950s, the term initially referred to logic machines designed to solve mathematical problems. Early examples, such as ELIZA, a psychotherapist program developed at MIT in the 1960s, relied on hard-coded rules to interact with users. In contrast, contemporary systems such as ChatGPT use statistical models to generate the most probable responses. While both kinds of systems engage users in conversation, the underlying technology represents a significant shift in complexity and capability.
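To make that contrast concrete, below is a minimal, hypothetical Python sketch: a few hand-written rules in the spirit of ELIZA's pattern matching, next to a toy next-word sampler standing in for the statistical approach behind systems like ChatGPT. The patterns, probabilities, and function names are invented for illustration and are not taken from either system.

```python
import random
import re

# Rule-based responder in the spirit of ELIZA: hand-written patterns
# are matched against the input and mapped to canned reflections.
ELIZA_RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in ELIZA_RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

# Statistical generation, caricatured: instead of fixed rules, the system
# samples the next word from a learned probability distribution. These toy
# probabilities stand in for the billions of learned parameters in a real model.
NEXT_WORD_PROBS = {
    ("I", "feel"): {"sad": 0.5, "happy": 0.3, "tired": 0.2},
}

def sample_next(context: tuple[str, str]) -> str:
    dist = NEXT_WORD_PROBS.get(context, {"...": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

print(eliza_reply("I feel anxious about AI"))  # Why do you feel anxious about AI?
print(sample_next(("I", "feel")))              # e.g., "sad"
```

The rule-based version can only ever say what its authors wrote into it, while the statistical version produces whatever the learned distribution makes most probable, which is the shift in capability (and opacity) the session highlighted.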

Currently, the AI landscape is dominated by a handful of companies—including OpenAI, Microsoft, and Google—that define AI and dictate its resource consumption and accessibility. This raises concerns about monopolization within the industry, and the concentration of power among these corporations makes it difficult for independent experts to voice unbiased opinions, as many are tied to financial interests.

Another critical issue in AI is the “black box” nature of many machine learning models. Users often lack insight into how these systems operate, which breeds both misunderstanding and, consequently, a lack of accountability. This opacity hinders debate and scrutiny, as few individuals have the technical expertise to challenge or dissect the technology effectively.

Hao introduced a framework for AI reporting that examines the different stages of AI development. While designed to guide journalists in their reporting, the framework is also useful for civil society to understand the distinct societal issues that arise at each stage. For instance, issues associated with the data used to build AI models, such as the inclusion of “trash” and illegal materials in the data corpus or unethical means of data collection, differ vastly from the problems arising at the computation stage, such as massive energy consumption and high emission levels. The framework also helps break down the different actors and impacted entities involved at each stage, further clarifying the scope of potential harms.

AI development stages: Data + Computation → AI Model → Application
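For groups that want to apply the framework systematically, one possible way to operationalise it is as a simple record structure that tags each documented harm with its development stage, the actors involved, and the affected communities. The sketch below is an assumption about how such documentation might look in Python; the stage names mirror the framework, while the field names and the example entry (drawn loosely from the data-collection case discussed later in the session) are illustrative.

```python
from dataclasses import dataclass, field

# Stages from the reporting framework: Data + Computation -> AI Model -> Application
STAGES = ("data", "computation", "model", "application")

@dataclass
class HarmRecord:
    stage: str            # which development stage the harm arises at
    harm: str             # description of the observed or potential harm
    actors: list[str]     # who causes or enables the harm (state, private sector, ...)
    affected: list[str]   # communities or entities bearing the impact
    sources: list[str] = field(default_factory=list)  # evidence or reporting links

    def __post_init__(self):
        if self.stage not in STAGES:
            raise ValueError(f"stage must be one of {STAGES}")

# Hypothetical entry, loosely based on an example raised in the workshop.
example = HarmRecord(
    stage="data",
    harm="Biometric (retinal) data collected without meaningful informed consent",
    actors=["private sector"],
    affected=["residents of the Greater Jakarta area"],
)
print(example)
```

Structuring harm documentation this way keeps the stage, actor, and impacted-entity questions from the framework attached to every case a group records, which makes patterns across cases easier to spot.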

One of the groups explaining what they think are the most pressing AI issues in Indonesia & Malaysia

As a practical exercise, participants were divided into breakout groups to identify AI harms in the Indonesian context using the framework. On the question of actors, most groups pointed to either the state or the private sector as the parties most commonly involved in the identified issues. One interesting finding was the lack of disclosure when AI is incorporated into public systems, which makes it difficult to even identify the technology's presence in many of the systems in which it is applied, such as banking and public services.

The recent case of unethical data collection by WorldCoin in the Greater Jakarta area was also highlighted in many of these discussions. Participants stressed that limited public understanding of and awareness about personal data safety, combined with economic hardship, help explain why people ended up giving away their retinal scans in exchange for a meager amount of financial compensation.

The Day 1 session provided participants with a grounding in the key concepts of AI and a guiding framework to help them identify and document potential harms arising at different stages of AI development, as well as the actors involved at each stage.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

