In late 2022, generative artificial intelligence (AI) tools catapulted to mainstream consciousness following a deluge of publicly available AI-powered applications, among them OpenAI's Microsoft-backed chatbot ChatGPT and the generative image software Midjourney. Such rapid advancements in generative AI have fuelled further public discourse on how AI will impact various sectors of society, and whether the benefits to economic growth and development outweigh the potential for misuse and violations of fundamental human and digital rights and freedoms.
This is especially concerning in the Asia-Pacific, a region with the world's largest population of technology users and adopters, yet one historically underrepresented in the large datasets used in AI and machine learning, with little to no say in AI design, governance, and standards-setting mechanisms. This lack of representation skews the "predictive models" and "target variables" fed into algorithms, which consequently misidentify people and patterns they have not been trained on. Such misidentification can lead to further algorithmic discrimination against already marginalised communities; a 2018 report on South Wales Police's use of AI-powered facial recognition revealed that the technology had misidentified 2,297 of 2,470 flagged individuals as criminal suspects, a false positive rate of roughly 92%.
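To make this mechanism concrete, the sketch below is a minimal, hypothetical illustration, not drawn from any system named in this article: it trains a simple classifier on synthetic data in which one group makes up only 5% of the training set, then compares false positive rates across groups. The group names, feature distributions, and all figures are invented for demonstration only.

```python
# Minimal sketch: how underrepresentation in training data can inflate
# false positives for a minority group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic samples for one demographic group.
    `shift` offsets the group's feature distribution, standing in for
    systematic differences the model must learn separately per group."""
    y = (rng.random(n) < 0.5).astype(int)                # ground-truth label
    X = rng.normal(loc=shift + y[:, None], scale=1.5, size=(n, 5))
    return X, y

# Group A dominates the training set (95%); Group B is underrepresented (5%).
Xa, ya = make_group(9500, shift=0.0)
Xb, yb = make_group(500, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.hstack([ya, yb])
)

# Compare false positive rates on fresh samples from each group.
for name, shift in [("A (well represented)", 0.0), ("B (underrepresented)", 2.0)]:
    X, y = make_group(2000, shift)
    pred = model.predict(X)
    negatives = y == 0
    fpr = pred[negatives].mean()   # share of true negatives wrongly flagged
    print(f"Group {name}: false positive rate = {fpr:.1%}")
```

Because the model's decision boundary is fitted almost entirely to Group A, innocent members of Group B are flagged at a far higher rate, a small-scale analogue of the misidentification dynamic described above.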
AI is also being used in surveillance technologies that have become all the more common since the COVID-19 pandemic, at the expense of individuals' right to privacy. Examples include Singapore's use of facial recognition in government surveillance and predictive policing in China, which uses AI primarily to identify individuals and collect data on their daily activities and movements. Such technologies encourage self-censorship among people concerned about being arbitrarily watched by authorities, and have the potential to infringe human rights, partly because their imperfect algorithms regularly produce false positives, especially for people of colour, women, children, and the elderly. In September 2022, the United Nations Office of the High Commissioner for Human Rights (OHCHR) reiterated its warnings against such surveillance, writing that these technologies remain a threat to digital privacy and human rights.
In response, some Asia-Pacific countries and regional bodies are following the lead of the European Union (EU) in imposing stricter AI regulation, albeit with varying degrees of proposed legislation. The EU, long regarded as the leader in drafting policies on technology and data usage, is in the final stages of enacting a comprehensive AI Act that seeks to impose staggered levels of obligations on providers and users depending on the level of risk. As with its passage of the General Data Protection Regulation (GDPR), the bloc is hopeful that the AI Act, as the first regional legally binding instrument on AI, will serve as a global benchmark for other countries.
Regional bodies and countries in the Asia-Pacific hold mixed views on AI regulation. The Association of Southeast Asian Nations (ASEAN) is set to adopt an ASEAN Guide on AI Governance and Ethics by 2024. Australia is set to update its 2019 AI Ethics Framework to impose tougher restrictions on deepfakes. Japan, in contrast to its EU partners, is leaning towards "softer rules" than the AI Act. India, in an April 2023 letter from its Ministry of Electronics and Information Technology, said the government was "not considering bringing a law or regulating the growth of artificial intelligence in the country", opting instead to focus on developing the domestic AI sector.
In South and Southeast Asia, just as quickly as AI technologies are evolving, so, too, are national and regional regulations on AI and its interlinked elements, such as data governance. Data privacy laws have been enacted in countries such as Indonesia, the Philippines, and Bangladesh, and these will undeniably affect how AI is developed and governed in these countries.
This article calls on civil society organisations (CSOs) working in the human and digital rights fields to participate in the process of crafting these regulations and frameworks. If existing and proposed data protection and technology-related laws are any indication (take India's draft Data Protection Bill, which would give the government sweeping surveillance powers, or Myanmar's draft Cybersecurity Bill, which criminalises protected speech, for example), the region is especially vulnerable to legal instruments that can oppress constituents and silence any form of dissent or criticism.
Existing international frameworks on AI – and their limitations
In engaging with policymakers and other stakeholders working on AI regulation, CSOs from the Asia-Pacific, particularly from South and Southeast Asia, can be guided by existing international frameworks and proposed ethical guidelines.
As early as 2018, the UN addressed the importance of regulating AI tools in ways anchored in human rights principles, highlighting states' responsibility to guarantee respect for individual rights. Initial steps to regulate the use and development of AI can be seen in the Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence (2019), which were later incorporated into the AI Principles adopted at the 2019 G20 Summit in Osaka, Japan. The principles emphasise a human-centred approach to AI's use and development, and focus on the accountability and transparency of all stakeholders regarding the AI systems they develop, distribute, and/or operate.
But while international regulatory frameworks exist, they remain suggestions rather than legally binding agreements, owing to the voluntary compliance model under which international law operates. Existing international human rights law also does not apply to the conduct of private entities, including Big Tech; it applies only to the conduct of states, and states that have ratified a human rights instrument are legally bound to ensure their laws conform with it. A non-binding instrument that does address businesses, including Big Tech, the UN Guiding Principles on Business and Human Rights, exists, but adherence to it is voluntary, and the conduct of the private sector is essentially governed by national legislation. In short, UN treaty bodies can monitor states' compliance with existing international law, but they have no enforcement powers when states fail to meet their positive and negative human rights obligations. This highlights the importance of translating the AI principles enshrined in international frameworks into domestic law for them to be fully effective.
Rather than treating AI as a new, separate concern, a rights-respecting national AI regulation should contextualise AI issues within broader human and digital rights perspectives, as shaped by regional and country-specific realities. According to UN General Assembly document A/73/348, bills and policies on AI and data governance should at minimum pass the three-part test for restrictions on expression under Article 19(3) of the ICCPR, requiring that any measure be:
- Prescribed by law
- In pursuit of a legitimate aim
- Necessary in a democratic society.
Despite their autonomous decision-making abilities, AI technologies cannot themselves be held accountable for the products of their decisions. Future laws should therefore create a liability scheme that eliminates vagueness and confusion over possible wrongdoing arising from AI decision-making. Such a scheme would allow companies to draw clear limits on their responsibilities, and create an accountability system for all parties involved, including private entities.
Alongside regulatory frameworks, numerous bodies have also pushed for the adoption of ethical AI frameworks that centre on human and digital rights. But much like their regulatory counterparts, these ethical frameworks offer no standardised way of applying AI human rights principles in practice.
Civil society’s role in national AI strategies
Even when guidelines are in place, the reality of implementing regulations that govern digital rights is that regulatory bodies and enforcers have historically leaned punitive and authoritarian rather than rights-respecting. Indonesia's Electronic Information and Transactions (ITE) Law, Bangladesh's Digital Security Act (DSA), and Cambodia's prakas on social media conduct, for example, have often been used as tools for silencing online critics of the government rather than for addressing privacy and security concerns in the digital world. There are no guarantees that future AI regulations will not suffer the same fate.
Proposed regulations also need to balance the geopolitics of AI in the region. In 2020 research on AI governance in Southeast Asia by EngageMedia, regional experts on AI and human rights stressed that South and Southeast Asian governments face high barriers to participating in the international AI landscape and its governance, owing to the following challenges:
- The underrepresentation of the region in international standards-setting bodies and authorities
- Lack of a strong regional voice (notwithstanding ASEAN’s recent actions)
- Low average state capacity to govern AI technologies
In addition, most civil society groups at the national level face similar barriers in engaging their own governments on local AI governance, owing to governments' focus on encouraging adoption and innovation, often at the cost of checks and balances; non-transparent AI usage and its impact on data governance; and limited avenues for meaningful public participation.
These challenges continue to ring true three years on. In addressing the last of them, CSOs should capitalise on the growing interest in the Asia-Pacific digital landscape. Already, CSOs have started turning their attention to AI. For example, at the recently concluded Digital Rights Asia Pacific Assembly (DRAPAC23), a number of sessions centred on AI, from its role in shaping social media algorithms to its larger impact on internet freedoms.
To participate in developing national AI regulations and frameworks, national and regional CSOs can:
- Conduct more advocacy-based regional research on AI: In-depth, contextual research can paint a more comprehensive picture of how AI affects the region, and help address overall concerns about the existing landscape of AI use. Research on possible rights-respecting frameworks can also be conducted to build the case for further advocacy.
- Collaborate with national governments on AI regulation development: CSOs can partner closely with government agencies on joint initiatives to approach AI regulation. Ways to foster this trust and partnership include offering capacity building and producing collaborative research on AI.
- Form strategic networks and collaborations: DRAPAC23 showed that CSOs at all levels are capable of forming alliances to collaborate and address pressing issues together. The call now is to sustain these kinds of networks and leverage their influence in shaping how future AI regulations are implemented.
CSOs from the region should champion their lived expertise and the stories of underheard communities, and lean into their crucial role in ensuring that future AI regulations prioritise the protection of fundamental rights and freedoms in both physical and digital spaces.
This article was co-produced by EngageMedia and the International Center for Not-for-Profit Law (ICNL) as part of the Greater Internet Freedom program.