
A critical view of AI ethics: Looking at the substance of ethical guidelines

As technologies based on artificial intelligence (AI) gain traction, the need to govern them also becomes increasingly urgent. In recent years, ethical AI has surfaced as the de facto pathway towards safer and better AI, often manifested in lists of guidelines and principles or codes of conduct. At least 84 such documents exist, put forth by private companies, government agencies, academic and research institutions, non-profit organisations, professional associations, and others. They are not legally binding, but aim to influence decision-making in the tech industry so that certain principles are upheld when AI technologies are designed and built.

In this three-part series, we will scrutinise current AI ethical principles and guidelines and their shortfalls, and also discuss alternative ethical frameworks that are available. For some background, you may want to read an earlier series on AI and human rights in the context of Southeast Asia.

 

Examples of ethics guidelines. Source: Jobin et al. (2019)[ref]Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2[/ref]

 

Some scholars describe ethics as “arguably the hottest product in Silicon Valley’s hype cycle today, even as headlines decrying a lack of ethics in technology companies accumulate” (Metcalf et al., 2019)[ref]Metcalf, J., Moss, E., & Boyd, D. (2019). Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly, 86(2), 449–476.[/ref]. The more I read about the topic, the less inclined I am to believe that these guidelines can limit AI harms unless they are significantly supplemented with other governance approaches, such as governmental regulation or technical standards.

Why? In this blog post and the next, I will discuss some points gathered from the academic literature, from two angles: first, the substance of existing ethical guidelines and principles, and second, the difficulties of putting those principles into practice.

Let me start by stating my expectations of what ethical AI should and should not do. In the broadest sense, it should support sustainable development and uphold human rights. Ethical AI should not uplift some communities at the expense of others, or be weaponised against marginalised communities or democratic institutions. These are reasonable asks of any technology powerful enough to have a significant impact on society.

Does the content of existing ethical principles and guidelines live up to these expectations? At least four different studies have analysed the substance of these documents, and we will look at some of their conclusions on 1) what the principles contain, 2) what they leave out, and 3) the underlying assumptions that limit what ethical guidelines can achieve. While going through the principles, we should also keep in mind that most of the guidelines analysed were produced in the West. In Part Three of this series, we will broaden the conversation by exploring the ethical frameworks of other cultures as well.

 

Infographic on principled AI, published by the Berkman Klein Center for Internet and Society at Harvard University.

 

What do the principles contain?

Anna Jobin, Marcello Ienca, and Effy Vayena[ref]Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2[/ref], who collected and studied the 84 documents of ethical guidelines and statements, noted that, although no single principle appears in all 84, five principles are the most commonly shared:

  1. transparency;
  2. justice and fairness;
  3. non-maleficence (i.e. causing no harm);
  4. responsibility; and,
  5. privacy.

Beyond these five, six principles occur less frequently: beneficence (i.e. promoting good), freedom and autonomy, trust, dignity, sustainability, and solidarity. In a separate analysis of 36 documents (Fjeld et al., 2020)[ref]Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (SSRN Scholarly Paper ID 3518482). Social Science Research Network. https://papers.ssrn.com/abstract=3518482[/ref], researchers arrived at similar categories with slightly different groupings. Both papers unpack these concepts and are worth a read if you want an overall view of what AI ethics covers.
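To make this kind of frequency analysis concrete, here is a toy sketch of how one might tally which principles recur across a set of guideline documents. The three “documents” and their tags below are invented for illustration; the actual studies coded dozens of real documents by hand.

```python
from collections import Counter

# Hypothetical, hand-coded tags: which principles each guideline document
# mentions. These three "documents" are invented for the example.
documents = {
    "Guideline A": {"transparency", "privacy", "justice and fairness"},
    "Guideline B": {"transparency", "responsibility", "non-maleficence"},
    "Guideline C": {"transparency", "privacy", "sustainability"},
}

# Count how many documents mention each principle.
counts = Counter(p for principles in documents.values() for p in principles)

for principle, n in counts.most_common():
    print(f"{principle}: mentioned in {n} of {len(documents)} documents")
```

A tally like this only shows how often a label appears; as the next point makes clear, it says nothing about how each document interprets or implements the principle behind the label.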

Jobin, Ienca, and Vayena[ref]Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2[/ref] point out that these ethical guidance documents diverge on four main points:

  1. how ethical principles are interpreted;
  2. why they are deemed important;
  3. what issue, domain, or actors they pertain to; and
  4. how they should be implemented.

Conceptual and procedural divergences mean that different actors often prioritise differently, and implementation may vary widely. This can lead to a practice called “ethics shopping”, whereby actors mix and match the ethical principles that fit their purposes, instead of actually changing unethical behaviour (Floridi, 2019)[ref]Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x[/ref]. (We’ll discuss this more in Part 2.)

 

What do the principles leave out?

AI ethical guidelines prioritise certain principles over others and, as mentioned, there is a cluster of five main principles that most guidelines agree upon, at least at a high level. But what is left out or underrepresented?

An interesting point argued by Thilo Hagendorff[ref]Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8[/ref] is that the most recurrent principles are precisely the ones most easily operationalised mathematically, and they tend to be implemented in terms of technical solutions. These principles, such as accountability, explainability, privacy, justice, robustness, and safety, belong to what he describes as a “male-dominated justice ethics”, reflecting the fact that the discourse on AI ethics is primarily shaped by men.
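To see what “operationalised mathematically” looks like in practice, here is a minimal sketch, entirely my own toy example rather than anything from Hagendorff’s paper, of how the principle of fairness is commonly reduced to a single statistical check (demographic parity) over a classifier’s decisions:

```python
# Toy example of how "fairness" gets operationalised mathematically:
# demographic parity compares favourable-decision rates across groups.
# The decisions and group labels below are invented for illustration.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = favourable outcome from a classifier
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected-group membership

def positive_rate(group: str) -> float:
    """Share of favourable outcomes received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# A gap near zero counts as "fair" under this metric.
gap = abs(positive_rate("a") - positive_rate("b"))
print(f"demographic parity gap: {gap:.2f}")  # 0.50 for this toy data
```

Whatever one thinks of this particular metric, note how far the ethical question has been narrowed: a contested social concept becomes a gap between two rates, which is exactly what makes principles like fairness attractive to implement and easy to declare satisfied.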

Within the 22 guidelines that he analysed, Hagendorff points out that almost none talk about AI in the contexts of care, nurture, help, welfare, social responsibility, or ecological networks. Very few address democratic control, governance, and political deliberation over AI systems, or the political abuse of such systems. The guidelines rarely discuss the lack of diversity within the AI community, where most decisions are taken predominantly by white men. There is also little discussion of trolley problems (ethical dilemmas with no clear right or wrong answer), of the efficacy of algorithmic versus human decision-making, or of the hidden social and ecological costs of AI systems.

Jobin, Ienca, and Vayena[ref]Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2[/ref] emphasise that mainstream AI ethics debates significantly underrepresent sustainability, dignity, and solidarity as principles. Unpacked, this means that the environmental impacts of AI are rarely discussed, and neither are the impacts on human rights and dignity or the implications for labour markets. They also note that, geographically, global regions are not participating equally in the AI ethics debate, with areas such as Africa, South and Central America, and Central Asia underrepresented. In the case of Southeast Asia, Jobin et al.’s data set includes a discussion paper on AI and personal data by Singapore’s Personal Data Protection Commission. No other Southeast Asian country is represented in the study, even though we find such discussions starting to happen in countries such as Thailand, and there is a regional move towards building national AI strategies.

 

What are the underlying assumptions?

With some understanding of what the ethical documents contain and leave out, it is useful to zoom out a little and ask why. Daniel Greene, Anna Lauren Hoffmann, and Luke Stark[ref]Greene, D., Hoffmann, A. L., & Stark, L. (2019, January 8). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference on System Sciences 2019 (HICSS-52). https://aisel.aisnet.org/hicss-52/dsm/critical_and_ethical_studies/2[/ref] analyse the “moral background” of values statements, that is, the grounding assumptions that frame discussions of AI ethics: ideas that are taken for granted and seldom questioned.

From examining seven public statements of ethical principles, Greene, Hoffmann, and Stark[ref]Greene, D., Hoffmann, A. L., & Stark, L. (2019, January 8). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference on System Sciences 2019 (HICSS-52). https://aisel.aisnet.org/hicss-52/dsm/critical_and_ethical_studies/2[/ref] find seven interrelated key assumptions:

  1. The ethical guidelines assume that concerns about the positive and negative impacts of AI are universally the same across all cultures and contexts, and that these concerns can be addressed by objectively measuring and fixing the impacts.
  2. Ethical design is the realm of experts (e.g. AI corporations, leading academics, and legal minds); other people who are concerned are merely stakeholders (such as product users or buyers). “Experts make AI happen. Other Stakeholders have AI happen to them” (pg. 2127).
  3. AI and all its associated technologies (e.g. machine learning) are treated as inevitable, and humans can only react to their consequences (such as mass job displacement). The ethical debate is therefore focused on how to design appropriately, never on whether the systems should be built in the first place.
  4. The good or harm of implementing an AI system is rarely scrutinised at the business level, and almost always at the design level.
  5. The only ethical path forward is to “build better”: maximising the benefits of AI, minimising its negative impacts, and educating the public about the role of AI in their lives. “Not building” is not an option.
  6. How to build better? By vetting the building process, largely by experts (see Point 2). The main legitimising device is transparency, but there is no commitment to substantive change.
  7. Expert oversight (see Point 2) extends to the AI and ML technologies themselves, hence the constant talk of “explicable” and “transparent” systems.

These assumptions help us understand why AI ethics guidelines take the form they do and, more importantly, the limits of what such guidelines can achieve in safeguarding society against AI harms.

In the digital era, tech companies generate huge profits amidst negative impacts on the environment and society, and AI ethical guidelines do not challenge this. Technology is assumed to solve all problems, while the problems it creates go unacknowledged. The playing field for ethical debates on AI is, by design, unequal, because it assumes that AI experts take the driving seat and the rest of us tag along at the back of the wagon. These underlying assumptions are consistent with what we have seen in the substance of the ethical guidelines and in what they leave out.

 

Conclusion so far

In this blog post, we have discussed the substance of the ethical guidelines that have mushroomed in recent years, taking a closer look at what they cover and the assumptions underlying them. We have found that their contents focus mostly on narrow fixes and carry problematic blind spots that stand in the way of systemic solutions.

But surely, even if the guidelines are not perfect, they can do some good in practice? The next blog post will examine this question, and suggest that they do little good, and in some cases can even be harmful.

 

About the Author

Dr. Jun-E Tan is an independent policy researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E has written extensively on digital rights and AI governance in the context of Southeast Asia, and has participated in numerous international and regional fora on these topics. More information about her work can be found on her website, jun-etan.com.

 
