Last February, EngageMedia held a two-day capacity-building workshop that aimed to establish a learning circle on artificial intelligence (AI), increase knowledge, and strengthen civil society’s position on AI governance in Indonesia. The event was held to flesh out how civil society organisations (CSOs) should engage with the Ministry of Communication and Informatics (MoCI) on AI-related policies, and what stance CSOs should take on such policies. It followed an earlier consultation in October 2023 focusing on the Circular Letter of the Minister of Communication and Informatics No. 9 Year 2023 on Ethical AI.
CSOs need to consolidate and collaborate to establish a meaningful relationship with the government and, to an extent, the private sector, and to ensure that rights-based approaches and perspectives are considered in AI governance. The two-day training was held from February 27 to 28, 2024 in Bali, Indonesia, bringing together representatives from 12 organisations, MoCI, and experts in personal data protection law.
Defining AI, considering AI governance
Day 1 focused on defining AI and factors to consider when crafting guidelines on AI governance, particularly concerns related to the accuracy and trustworthiness of data and human bias. MoCI also presented an overview of the 2024 plan for developing AI governance and regulation in Indonesia.
Shabnam Mojtahedi from the International Center for Not-for-Profit Law (ICNL) explained the difference between Narrow AI and General AI and the different fields of AI (Natural Language Processing, Machine Learning, Deep Learning, and large language models). Artificial intelligence is built on data, and there are two broad approaches to developing algorithms through data labelling and training. Machine Learning (ML) uses the concept of “Human in the Loop,” where an individual defines the characteristics of an object that will be fed into the model, which is then trained on those characteristics. In Deep Learning (DL), the model instead extracts attributes on its own, with the human user coming in later to check for accuracy and improve the model.
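The “Human in the Loop” idea can be illustrated with a minimal sketch (a hypothetical toy example for illustration, not from the workshop): in classical ML a human decides which characteristics describe an object and labels the training examples, whereas a deep learning model would derive such features itself from raw data.

```python
# Toy illustration (hypothetical example): in classical ML, a human expert
# chooses the features and labels the data before any training happens.

def human_features(fruit):
    """Features chosen by a human: (is_yellow, weight in grams)."""
    return (1 if fruit["colour"] == "yellow" else 0, fruit["weight"])

# The "data labelling" step: humans attach labels to examples.
training = [
    ({"colour": "red", "weight": 150}, "apple"),
    ({"colour": "yellow", "weight": 120}, "banana"),
]

def classify(fruit):
    """A 1-nearest-neighbour model trained on the human-chosen features.

    In deep learning, by contrast, the model would learn its own feature
    representation from raw inputs, with humans checking accuracy afterwards.
    """
    fx = human_features(fruit)
    def distance(example):
        ex, _label = example
        fy = human_features(ex)
        return sum((a - b) ** 2 for a, b in zip(fx, fy))
    _, label = min(training, key=distance)
    return label

print(classify({"colour": "yellow", "weight": 118}))  # → banana
```

The point of the sketch is that every choice a human makes here, which features matter, and which label each example gets, is a place where the biases discussed below can enter the system.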
Having large, high-quality, well-labelled data sets is essential for both ML and DL. In developing AI tools for a particular application, data scientists start from foundation models, which have already been developed by collecting data and training on it. The private sector argues that foundation models should not be regulated, as they only provide a general basis for further development, and that the risks of AI come from fine-tuning the model. Conversely, CSOs argue that the most significant risk comes from foundation models themselves; because everything else is built on them, they need the most transparency and oversight.
Mojtahedi said that with AI, bias can bleed in from the data set, from how data is identified or labelled by humans, and even from historical and sociocultural assumptions and prejudices. These can then have a discriminatory impact on access to fundamental rights or public services.
Human rights concerns related to AI include discrimination, the right to privacy, and access to justice. Specific forms of AI may raise different human rights concerns, such as content moderation, facial recognition during protests, AI in the workplace, and AI for law enforcement purposes. In crafting AI regulations, CSOs said these concerns should be addressed and human rights-based approaches adopted. There is an ongoing debate on whether to use ethics or human rights language in AI governance. The UNESCO recommendation, for instance, uses “ethics” in its title, but its content was mainly shaped by human rights documents such as the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR).
Day 1 ended with MoCI’s presentation of its 2024 plan for developing AI governance and regulation in Indonesia. MoCI mentioned several key technical regulations, such as Law No. 11/2020 on Job Creation, Ministerial Regulation No. 5/2020, and Ministerial Regulation No. 3/2021, along with the Circular Letter, which prioritise human control and a risk-based approach and serve as the baseline for AI governance. Indonesia’s AI policy currently follows a soft-law pathway: it relies on the private sector to uphold ethical principles while the government monitors AI development. The latest update from the ministry suggests that instead of focusing solely on the “downstream” or end-product of AI, regulation should also develop and map out the “upstream” process, which involves the cloud computing systems that gather all the data required for AI. The government faces the major challenge of balancing the protection of citizens’ rights with the growth of the emerging tech industry.
Reflections on data-driven technologies
On Day 2, several participants shared their reflections on the Day 1 sessions. Many were pessimistic about the potential harm caused by foreign AI use to date, expressing concern that the government is always one step behind. Others were optimistic, since discussions have now started among policymakers, experts, and civil society. All stakeholders must work collaboratively to develop policies and regulations that ensure the safe and ethical use of AI, participants said.
During an interactive session led by Prof. Sinta Dewi, an expert in data privacy and protection law, participants engaged in a group activity answering several thought-provoking questions. One question concerned Clive Humby’s statement that “data is the new gold or oil.” The group unanimously agreed, noting that screening codes and raw data have long been sold at exorbitant prices. However, with data interoperability – the ability of data to be interconnected with other data – data processing becomes more prone to misuse by the data processor or other third parties.
AI is inextricably linked to data. As Prof. Sinta affirms, large data sets are integral to successfully implementing AI. While the potential for data analysis and insights through online data sharing is immense, protecting personal data is often overlooked in the AI discourse. The data generated in such exchanges often contains sensitive information that individuals would not want disclosed without their consent. Therefore, safeguarding personal data is critical to protecting an individual’s right to privacy. As AI develops, the risk of privacy violations increases, but existing laws provide measures to address them. Personal data protection (PDP) law is a vital reference point for holding perpetrators accountable and determining the appropriate use and processing of an individual’s data or information.
Following this, ICNL led a presentation on strategic litigation as a method to improve the governance of data-driven technologies. The session surveyed human rights litigation on emerging technology issues from other jurisdictions. In the EU, for example, many of the successes of the General Data Protection Regulation came through public pressure on private companies, not through the courts. In Brazil, which passed its data protection law in 2018, most of these rules have been enforced through the court system, which has almost uniformly sided with plaintiffs, and 80% of rulings granted them financial remedies. In the US, data privacy cases draw on local laws as well as copyright and privacy laws. In ACLU v. Clearview AI, for example, the court banned Clearview from sharing biometric data with third parties and law enforcement for five years in Illinois.
The landmark “food delivery driver” case in China has profoundly influenced the country’s algorithmic systems. It is an exceptional example of how strategic human rights advocacy can work even before litigation becomes a last resort in court. It also highlights the potential dangers of abusive algorithmic systems for gig workers, which can extend beyond lost bonuses or unfulfilled orders to violations of the right to life. The issue was brought to light through investigative work by journalists and academics, and it has the potential to prompt the Chinese government to create a policy for the immediate resolution of disputes on behalf of affected rights holders.
During the discussion, pro bono lawyers Ghifar and Gema highlighted their challenge before the Administrative Court (Pengadilan Tata Usaha Negara, PTUN) to the internet shutdown policy in Papua, Indonesia. According to the lawyers, the policy resulted in financial loss and restricted access to information. Although the PTUN ruling carried no significant punishment, the case helped initiate a more advanced discussion on digital rights. The lawyers also raised concerns about Ministerial Regulation No. 5/2020 on Electronic Service Providers (MR 5/2020), citing potential privacy issues: the regulation could enable breaches of privacy in the name of surveillance, as it allows the ministry to compel service providers to hand over data. Civil society organisations AJI, Sindikasi, and LBH filed a lawsuit to repeal the regulation, which the court ultimately denied.
The highlight of the day was the speed dating session, in which participants were paired to briefly present their backgrounds, organisations, and work. The primary objective was to foster an understanding of how participants’ individual work could complement one another in their shared goal of collaborating on AI engagement activities with the government. The session was thus critical in identifying potential synergies and avoiding overlaps.
Reflecting on the session, Indonesian public interest lawyers noted the lack of support (tactical advice) they received from international organisations in the Global North when challenging the government, which hindered their lawsuit against MR 5/2020 last year. Adding to their difficulties, all the work was centred in one organisation, from collecting evidence to submitting the case to the court. After the training, various local CSOs expressed their eagerness and dedication to collaborate on compiling empirical, evidence-based information on the ground to strengthen civil society’s position in constructing a future strategic human rights litigation case.