Podcast transcript: AI, Ethics and Us

Digital Society admin
Mar 6, 2024

Sabine Sharp introduces this topic on AI, Ethics and Us.

This podcast is part of the UCIL Digital Society course from the University of Manchester. The story it relates to is hosted on Medium and can be found here.

Transcript

Sabine Sharp: I’m interested in the ways thinking about our digital society can help us grapple with challenges posed by new technologies in the present moment. Technologies that once seemed the stuff of science fiction are fast becoming part of everyday life, nowhere more so than with artificial intelligence (AI). With these new technologies come significant ethical issues that demand our attention.

Each topic of the Digital Society course raises ethical dilemmas that could not have occurred before the advent of the internet and our increasingly digital society. Rather than taking it for granted that our existing moral frameworks continue to apply to these new circumstances, we urgently need to consider what it means for us to live together as members of society in a digital world.

This topic places those ethical issues in the foreground, examining in particular some of the problems that generative AI and machine learning tools present for living well and doing right by others. We will look at Walter Maner’s definition of computer ethics in relation to recent developments in AI and machine learning. By looking at some of the key ethical issues that are being “aggravated, transformed or created” by the advance of AI tools so far, we will consider what questions we need to ask ourselves as these technologies become further integrated into our everyday lives.

We’ll then look at where the moral responsibility lies when these new AI applications go awry. As users of these tools, we might be quick to claim ownership of their more positive outputs, but who is accountable when they cause harm? In this section of the topic, we look at issues of plagiarism, misinformation, and bias through some recent examples of AI technologies going rogue.

These examples move us neatly into our next section, in which we consider the ethical questions involved in moderating AI technologies. Without human filters, AI can reproduce incredibly toxic material, reflecting humans at our very worst. Yet there are serious problems with the working conditions of those who do this filtering and who prepare the large datasets on which AI tools are trained. We also need to think about who controls what users see and how that control might be abused. Content moderation has been an issue since the rise of social media, but AI exacerbates it in ways that require urgent consideration. To explore some of these issues around regulating AI, we turn to the implications of AI for academic freedom. If the boundaries of AI outputs are more restrictive than those of our research and study in university contexts, might this limit our ideas or change how we think?

Finally, we ask questions about the future of AI and machine learning. What insights might AI be able to offer us for thinking through ethical problems? Can AI chatbots like Ask Delphi give us helpful answers to guide our moral judgements? And what might be some of the risks of letting AI make decisions for us? We might not have immediate answers to these questions, but what we do know is that understanding how AI interacts with ethics is going to be crucial for the future.
