Digital Ethics

It’s Everybody’s Business

Introduction

The digital age, characterised by the exponential growth of artificial intelligence (AI) and digital influence in our daily lives, has brought innumerable benefits. Unfortunately, and in the same breath, these developments have also brought with them “new forms of exploitation and oppression” (Radermacher, 2018, p. 444), which have proved harmful, for example, when perpetuating unfair biases in corporate decision-making or when curtailing personal freedom through security-enhancing AI systems (Smuha, 2019). Against this backdrop, ethics is playing a rapidly increasing role in the current digital era, raising questions about the protection of personal data, consumer transparency and bias in algorithms (Beranger, 2018). In fact, it is said that now, for the first time ever, we are confronted universally with the challenge of collectively addressing ethics (Kaitatzi-Whitlock, 2021).

As technology becomes more autonomous, it also becomes value-laden, as the underlying algorithms interpret large bodies of data, which have repeatedly been shown to be biased (Martin, Shilton, & Smith, 2019). As a basis, this article briefly outlines the requirements to be met for trustworthy artificial intelligence, which form part of the EU Commission’s Ethics Guidelines for Trustworthy Artificial Intelligence. In these guidelines, AI practitioners are defined as “all individuals or organisations that develop (including research, design or provide data for), deploy (including implement) or use AI systems, excluding those that use AI systems in the capacity of end-user or consumer”. The remainder of this article is dedicated to encouraging AI practitioners to adopt certain ethical concepts in their respective lines of work.

The Ethics Guidelines for Trustworthy Artificial Intelligence

The regulation of the internet through, for example, the EU General Data Protection Regulation (GDPR), in force since 2018, and the subsequent EU “white paper” on Artificial Intelligence, published by the EU Commission in February 2020, is paramount to curtailing the malevolent actions of perpetrators (Kaitatzi-Whitlock, 2021). In fact, the protection of personal data is said to be “key to ethics, human dignity, value, respect and autonomy supporting citizens’ rights against commercial exploitation and profiling” (Ibiricu & van der Made, 2020, p. 396). The Ethics Guidelines for Trustworthy Artificial Intelligence, published in 2019 by the European Commission’s High-Level Expert Group on AI (EC Guidelines for Trustworthy AI), set out seven fundamental requirements that AI systems must meet before being considered trustworthy.

The following requirements are intended to serve as guidelines for AI practitioners:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination (a minimal illustration of such a bias check follows this list). Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
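
To make the fairness requirement more concrete for practitioners, the following minimal Python sketch shows one common way of screening a model’s outputs for unfair bias: comparing selection rates across groups (often referred to as demographic parity). The data, function names and threshold are purely illustrative assumptions; the EC Guidelines themselves do not prescribe any particular metric or tooling.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of favourable outcomes per group.

    `decisions` is an iterable of (group, outcome) pairs, where
    outcome is 1 for a favourable decision and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest selection rate across groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions of a loan-approval model, labelled by group.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]
gap = demographic_parity_gap(decisions)
if gap > 0.2:  # illustrative threshold, not a legal or regulatory standard
    print(f"Warning: selection-rate gap of {gap:.2f} may indicate unfair bias")
```

In practice, such a check would be only one element of a broader assessment, complemented by the stakeholder involvement and accessibility considerations named in the requirement above.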

The above guidelines are intended to apply over and above or in addition to prescribed legislation and regulation (Smuha, 2019). This approach supports the hypothesis that ethics should be considered as complementary to the law (Rochel, 2021).

The EC Guidelines for Trustworthy AI have, since their publication, made their mark in the EU, having influenced European Commission President von der Leyen’s 2019 political agenda, the 2020 white paper on AI and one of the first legislative framework proposals for AI (Stix, 2021).

Digital Ethics Requires Everyone’s Actionable Attention

In addition to guidelines, standards, prescriptive regulation and legislation, self-regulation and the conscious actions of designers, users and everyone in between (in the public and private sectors) remain equally crucial (Kaitatzi-Whitlock, 2021). This means everyone is required to become more ethically and socially aware, thereby playing a part in avoiding the birth and growth of unacceptable norms (Ibiricu & van der Made, 2020).

Practitioners in Cloud Computing Services

In the area of cloud computing services, the main participants are, on the one hand, the cloud computing providers (who design and sell cloud-based services such as software and infrastructure) and, on the other hand, the cloud users (the individuals and businesses who, by using cloud-based solutions, are required to surrender some control over the data entered into the system) (Murphy & Rocchi, 2021). At the centre of an effective provider-user relationship is trust, which can be achieved, for example, through the provider’s digital ethics code and a manifest implementation of good practices. The user of cloud computing services also has obligations in this relationship: in the manner in which the cloud computing services are used, and in the user’s efforts towards increased technological and ethical awareness (Murphy & Rocchi, 2021).

Digital Health Technology Practitioners

Digital health technology has also leaped forward in recent years – from tracing technologies to wellness products and mobile apps and the use of AI in the diagnosis, tracking and treatment of cancer (Nebeker, Torous, & Bartlett Ellis, 2019). Persons involved in the development, testing and improvement of technologies in the digital health sector include developers, funders, researchers, start-ups and testers as well as journal editors – all of whom have a responsibility to advance ethics and ethical principles in their respective work (Nebeker, Torous, & Bartlett Ellis, 2019). In the specific context of digital contact tracing technology, the guidelines edited and published by Dr. Jeffrey Kahn and the Johns Hopkins Project on Ethics and Governance of Digital Contact Tracing Technologies propose that the following also be considered (Kahn & J.H.P.E.G.D.T., 2020):

  • Enabling an effective and efficient public health response
  • While also protecting individual privacy and preventing harm to individuals (e.g. the adverse consequences arising from the leak of sensitive data)
  • Permitting individuals to control the kind of information that is collected and processed
  • Promoting the equitable distribution of benefits and impacts of digital contact tracing technology

Professionals in the Cryptocurrency Business

Cryptocurrencies (like Bitcoin) are characteristically built on blockchain technology, meaning they are encrypted and generated by software code, are transferred via electronic wallets and are not backed by an issuing authority (for example, a central or commercial bank) (Dierksmeier & Seele, 2018). While this technological development has been hailed as a significant digital advancement, ethically problematic issues such as the growth of transactions on the so-called ‘dark web’, a higher probability of digital theft and money laundering are a reality (Dierksmeier & Seele, 2018). For monetary systems to remain sustainable, they require the confidence and general approval of society (Meyer & Hudon, 2019). The ethics pertaining to cryptocurrencies are, at their core, premised on trust, which, in turn, is founded upon traceability and transparency – in respect of decentralised data storage, the traceability of transactions among users and the decentralised public ledger (Meyer & Hudon, 2019). The sketch below illustrates how this traceability rests on the structure of the ledger itself.
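
The following simplified Python sketch (an illustrative assumption, not any real cryptocurrency’s implementation) shows how each block in a ledger commits to the cryptographic hash of its predecessor, so that tampering with an earlier transaction is detectable by anyone holding a copy of the public ledger.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain):
    """Return True only if every block still matches its successor's prev_hash."""
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(ledger))                   # True: the ledger is consistent
ledger[0]["transactions"][0]["amount"] = 500  # tamper with an earlier entry
print(verify_chain(ledger))                   # False: the tampering is detectable
```

Because every participant can recompute these hashes, transparency does not depend on trusting a central authority – precisely the property that Meyer and Hudon (2019) identify as the ethical foundation of such systems.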

Corporate Leadership of Digital Businesses

The rise of digital businesses invariably means the parallel impact of AI in business. It is forecast that soon, AI-based technologies will “algorithmise the business”, which also entails more limited control by human beings (Tretiak, Shamruk, Yermak, Pedorych, & Olefir, 2019).
Ethical leadership requires leaders to be fair, honest and principled, which would, in turn, be reflected in communication mechanisms, punishments and rewards (Kvalnes, 2020). The leadership of the designers and sellers of AI-based technologies has a duty to ensure that attention is paid to ethics as early as the design phase of the said technologies (Martin, Shilton, & Smith, 2019). This must be preceded by considering the ethical implications of business decisions (Ibiricu & van der Made, 2020). The establishment of the roles of Chief Ethical Officers and Ethical Officers is gaining popularity in European institutions; these officers are tasked with designing, implementing and monitoring ethical standards across the network of institutions (Ibiricu & van der Made, 2020). The introduction of a code of ethics shows a company’s commitment to the positive impact of digitalisation. However, support from corporate executives, due communication with and training of employees, as well as the establishment of review and compliance mechanisms, are also very necessary (Ibiricu & van der Made, 2020).

Social Media Users

Kvalnes, in the chapter “Leadership and Ethics in Social Media” of the book Digital Dilemmas (2020), makes a distinction between do-good ethics and avoid-harm ethics in leadership, here focusing specifically on social media (Kvalnes, 2020, p. 77):

Table 4.1 Do-good and avoid-harm ethics issues in social media

Do-good ethics – social media use should contribute to the following:

  • Empowerment
  • Transparency
  • Employee engagement
  • Sharing of knowledge
  • Driving positive change
  • Prosocial behavior

Avoid-harm ethics – social media use should not contribute to the following:

  • Loss of integrity
  • Harassment
  • Discrimination
  • Trolling
  • Fake news
  • Destructive politics

As social media has become an intrinsic part of the lives of human beings and businesses alike, the above guidance is not only important and relevant, but duly addresses all social media users.

Closing Remarks

A complementary relationship between the law (prescriptive legislation, regulations) and ethics is necessary for effective and efficient AI governance (Smuha, 2019). In this article, an attempt is made to illustrate the relevant ethical considerations to be taken into account by the professionals addressed. The list of persons addressed herein is in no way exhaustive. Also, the ethical concepts, along with the broader ethical landscape, are set to evolve with the advancement of digital technologies. The duty rests on every single stakeholder to remain vigilant in keeping ethical matters top-of-mind at all times. The continuous, peaceful and prosperous co-existence of all depends on it.


Bibliography

Beranger, J. (2018). The Algorithmic Code of Ethics. John Wiley & Sons, Incorporated.
Dierksmeier, C., & Seele, P. (2018). Cryptocurrencies and Business Ethics. Journal of Business Ethics, 152, pp. 1-14.
Ibiricu, B., & van der Made, M. L. (2020). Ethics by design: a code of ethics for the digital age. Records Management Journal, 30(3), pp. 395-414.
Kahn, J., & J.H.P.E.G.D.T. (2020). Digital Contact Tracing for Pandemic Response: Ethics and Governance. Baltimore: Johns Hopkins University Press.
Kaitatzi-Whitlock, S. (2021). Toward a digital civil society: digital ethics through communication education. Journal of Information, Communication and Ethics in Society, ahead-of-print.
Kvalnes, Ø. (2020). Leadership and Ethics in Social Media. In Ø. Kvalnes, Digital Dilemmas: Exploring Social Media Ethics in Organizations (pp. 65-82). Cham: Palgrave Macmillan.
Martin, K., Shilton, K., & Smith, J. (2019). Business and the Ethical Implications of Technology: Introduction to the Symposium. Journal of Business Ethics, 160(2), pp. 307-317.
Meyer, C., & Hudon, M. (2019). Money and the Commons: An Investigation of Complementary Currencies and Their Ethical Implications. Journal of Business Ethics, 160(1), pp. 277-292.
Murphy, B., & Rocchi, M. (2021). Chapter 6: Ethics and Cloud Computing. In T. Lynn, J. G. Mooney, L. van der Werff, & G. Fox, Data Privacy and Trust in Cloud Computing: Building trust in the cloud through assurance and accountability (pp. 1-150). Cham: Palgrave Macmillan.
Nebeker, C., Torous, J., & Bartlett Ellis, R. J. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17(137), pp. 1-7.
Radermacher, I. (2018). Cyber Ethics Requires Critical Thinking of Citizens. In C. Stückelberger, & P. Duggal, Cyber Ethics 4.0: Serving Humanity with Values (pp. 1-489). Geneva: Globethics.net Global 17.
Rochel, J. (2021). Ethics in GDPR: A Blueprint for Applied Legal Theory. International Data Privacy Law, pp. 1-15.
Smuha, N. A. (2019). The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Computer Law Review International, 20(4), pp. 97-106.
Stix, C. (2021). Actionable Principles for Artificial Intelligence Policy: Three Pathways. Sci Eng Ethics, 27, pp. 1-26.
Tretiak, H., Shamruk, O., Yermak, S., Pedorych, A., & Olefir, L. (2019). The Emergence of Ethical Paradigms in Business Activities in the Context of the Use of Information Technologies. Journal of Legal, Ethical and Regulatory Issues, 22(1), pp. 1-6.
