This article is Part 2 of a series on the ethical principles and guidelines of artificial intelligence (AI), as well as their shortfalls and the search for alternative frameworks.
In Part 1 of this series on the ethical principles of artificial intelligence (AI), we analysed the ethical guidelines that supposedly serve as beacons for the good design and development of AI – and found them lacking in substance and scope.
In this second instalment, we adjust our question a little. If we lower our lofty expectations of what ethical AI should be – supporting sustainable development and human rights in a holistic manner – and concede that we are content with the narrow goals that current ethical principles and guidelines focus on (transparency, privacy, explainability, etc), how well can these principles be applied to real-world practices and still make some kind of difference?
The short answer is: not that well. For the long answer, we can look at two aspects of this question: first, how unethical behaviour happens even when there are lists of how to be ethical (no surprise here, really), and, second, how multiple challenges remain in translating ethical norms into actual practices, even when organisations and developers have the best intentions.
Risks of being unethical, even if you have ethical guidelines
An insightful comment piece from Luciano Floridi1 shows quite clearly that there are a number of ways in which beautifully crafted ethical principles can be, and have been, undermined. The table below lists Floridi’s five unethical behaviours and their definitions, along with a column where I give brief explanations that hopefully provide some further clarity.
Concept | Definition | In Other Words |
---|---|---|
1. Digital Ethics Shopping | The malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks, or other similar standards (especially, but not only, in the ethics of AI) from a variety of available offers in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc) as a way to justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards. | There are too many ethical guidelines out there (see Part 1). Sometimes, different actors develop their own guidelines to fit what they are currently doing so that they can claim to be ethical without making any behavioural changes. Apart from being misleading, the interpretation that each actor has of its own guidelines also makes it difficult for us to compare the actions of different actors in a standardised way. |
2. Ethics bluewashing | The malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is. | “Ethics” becomes a performance or a public relations exercise. We know this very well from “greenwashing”, such as activities under companies’ corporate social responsibility programmes that, in actuality, are mainly photo-ops that make them seem greener than they are and serve to distract consumers from the environmental damage that they are causing. |
3. Digital ethics lobbying | The malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions. | This is mainly referring to how companies push for self-regulation using the argument that they are already adhering to ethical guidelines, instead of having more stringent oversight. (We will talk a bit more about this after the table.) |
4. Digital ethics dumping | The malpractice of (a) exporting research activities about digital processes, products, services, or other solutions, in other contexts or places (e.g. by European organisations outside the European Union) in ways that would be ethically unacceptable in the context or place of origin, and (b) importing the outcomes of such unethical research activities. | This means moving unethical research and development practices to someplace where such practices are less of an issue, so that a company can appear ethical to the consumers in the country of origin. The product, once developed, is then imported back to the home country. Some examples given by Floridi include the development and training of facial recognition algorithms. |
5. Ethics shirking | The malpractice of doing increasingly less ethical work (such as fulfilling duties, respecting rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be. | This is similar to ethics dumping. Companies simply drop all pretence of being ethical in countries that have lower ethical standards – which also means lower risks of not appearing ethical. At the end of the day, ethics becomes a PR exercise. |
From the above table, I would like to expand on a couple of points. First, it has been pointed out that the tech industry has been pushing for AI ethics as a way to escape governmental regulation. Rodrigo Ochigame, a former MIT Media Lab researcher, argues that academia and academic research on AI ethics are used to lend credibility to the tech industry, which lobbies hard to avoid restrictive legal regulations, preferring to support “ethical principles and responsible practices” (emphasis in original) and “moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits”. The discourse on AI ethics is heavily sponsored by corporate actors, and Ochigame specifically singles out the Partnership on AI as being mostly a PR effort by the industry, despite its initial noble intentions.
Second, in Southeast Asia, as citizens outside of the power centres of AI production and consumption, the last two items in the list give us much cause for concern. Ethics shirking can already be seen in the case of content moderators in the Philippines working as contractors for big tech platforms such as Facebook and YouTube, under poorly paid and toxic labour conditions. (This is also happening in the United States and India.) On ethics dumping, I have not found any obvious case study yet, but it is conceivable that the scenario mentioned by Floridi2 of training facial recognition systems could happen in Malaysia, for instance, where policies and protections for personal data are weak.
To get a glimpse of what goes on behind content moderation for big tech platforms, we recommend watching the 2018 documentary “The Cleaners” by Moritz Riesewieck and Hans Block.
Putting ethics to practice is not as simple as it seems
Assuming that we give the biggest benefit of the doubt to a tech company that really wants to do the right thing – or, at least, to cover its rear as best as it can – the path forward is still fraught with difficulties. Researchers have tackled this question from various angles, and we can put forth a few examples from the literature.
To start off, a 2018 study by Andrew McNamara, Justin Smith, and Emerson Murphy-Hill (cited by Thilo Hagendorff3) surveyed 63 software engineering students and 105 professional software developers to test the influence of the ethics code of the Association for Computing Machinery (ACM) on decisions involving responsibility to report, user data collection, intellectual property, code quality, honesty to customers, and personnel management. Apparently, the effectiveness of the code was “almost zero” (pg. 108). Hagendorff points out that engineers and developers are not systematically educated about ethical issues, and are also rarely empowered by organisational structures to raise ethical concerns.
Some other qualitative studies point to similarly dismal outcomes. Interviews with 21 AI practitioners in Australia show that the creation of a system does not start and end with tech professionals (Orr and Davis, 2020)4. The AI practitioners see ethics and responsibility in AI as distributed across a range of actors and factors – or, as Will Orr and Jenny L. Davis put it, “a pattern of ethical dispersion”.
First, there are pre-set parameters, such as legislative regulations (Is this legal?), organisational norms (Is this consistent with my company’s ethics?), and clients (Is this what my client wants?). Interestingly enough, most practitioners could not recall specific language from their companies’ codes of ethics, but maintained that they were following them. Alongside these parameters are other constraints, including technical and practical matters such as the time and resources given to perform their tasks.
Second, AI systems also rely on the ethics of their users and on the machines’ unpredictable interactions with the environments that they operate in. This is not good news for our ethical guidelines, which put most of the responsibility for “doing ethics” on tech professionals.
Another study by Jacob Metcalf, Emanuel Moss, and danah boyd5 looks at “ethics owners” in Silicon Valley – staff in tech companies who focus on implementing ethical processes in the organisation as part of their portfolio. The study highlights three cultural logics in the tech industry that underlie the implementation of ethics.
- The tech industry operates as a meritocracy (only the best people from the best schools), and so the engineers can be relied upon to do what is ethical. This is problematic, not only because there are serious flaws in the meritocracy assumption, but also because, as we have seen in the discussions above, ethical considerations extend far beyond the individual, who should not be scapegoated for ethical failures.
- Technology solves all problems, and even problems created by technologies can be solved by more technological solutions. The irony in this recursive logic is evident, but the lack of reflexivity leads to the mostly futile exercise of creating more and more checklists and metrics that have little actual impact on the social worlds outside these organisations.
- Grounded in market fundamentalism, the industry treats building an ethical product as always subservient to market logics, which leads to a race to the bottom. If competitors are not being ethical (and if there are no legal restrictions punishing unethical behaviour), ethical procedures give way to bottom lines.
In conclusion
The studies we have explored in this article and its predecessor look mostly at the Western arena of AI ethics. So far, these studies demonstrate that the ethical principles and guidelines currently in use have limited substance and stand a high chance of being used mainly as window dressing, diverting us away from more structural solutions such as legal regulation.
This is quite clear from the irreconcilable differences between ethical documents and actual change. For instance, AI practitioners look to legal limits on what they can do and place little importance on ethical guidelines, yet big tech lobbyists use those same guidelines to delay regulation. In practice, ethical responsibility for AI is spread across the entire value chain, yet the guidelines focus on developers as the primary implementers. The lesson here seems to be that we need to look beyond the ethical self-regulation offered by the tech industry, towards governance measures that are truly inclusive and effective in tackling the global issues of AI harms and safety.
The next (and last) blog post in the series delves into non-Western ethical perspectives, challenges the universality of current AI ethics frameworks, and offers some alternative thinking on how to build better AI.
- Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x ↩︎
- Ibid. ↩︎
- Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8 ↩︎
- Orr, W., & Davis, J. L. (2020). Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication & Society, 23(5), 719–735. https://doi.org/10.1080/1369118X.2020.1713842 ↩︎
- Metcalf, J., Moss, E., & Boyd, D. (2019). Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly, 86(2), 449–476. ↩︎
Dr. Jun-E Tan is an independent policy researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E has written extensively on digital rights and AI governance in the context of Southeast Asia, and has participated in numerous international and regional fora on these topics. More information about her work can be found on her website, jun-etan.com.