Podcast Transcript: AI, Ethics and Us and the Rise of Simulated Spaces
TRANSCRIPT
ST1: Hi from the Library Student Team! I’m Fariha and I’m Bethany, and this week’s podcast will focus on Week 6: AI, ethics, and us and Week 7: The Rise of Simulated Spaces.
ST2: There’s so much discussion and loads of ideas around these two topics so let’s just jump right in. In AI, Ethics and Us, the topic of ethics and what this means when we’re using AI was highlighted. When we’re talking about ethics, we understand it as a set of beliefs around what’s right or wrong, but I think ethics can be subjective. What I believe to be wrong might not be the same as what you believe.
ST1: I agree, and based on this week’s responses, AI can aggravate political, cultural and social issues. There have even been some problems surrounding copyright infringement. This makes us think: when we’re using AI tools, whose ideas are being generated? The machine’s, or those of the moderators behind the machine? And what kind of morals should AI follow?
ST2: This can definitely be tricky but for the sake of its users, some of you suggested that AI should be regulated for misinformation as well as harmful or discriminatory biases. The onus may be on the creators of this model to do so. What do you think about this Fariha?
ST1: It’s such a mind-boggling topic. I think that, for AI to be effective, there should be some level of human intervention. But this can only work if those humans have undergone extensive training to equip them with the knowledge and techniques to recognise and mitigate biases. Otherwise, AI tools can be really harmful, especially for vulnerable people such as children or those unfamiliar with AI.
ST2: Not only can it be harmful, but it can also cause issues in the academic field. Students using AI tools such as ChatGPT run the risk of plagiarism. By depending on a tool that thinks for them, some of you suggested, students can lose their intellectual independence and critical-thinking skills.
ST1: And this is why it’s so important to be careful when using AI. Helping students to become AI-literate while maintaining academic integrity can be a solution to this challenge. Bethany, during our discussion, you mentioned human intervention and possible regulation when thinking about the morals that AI should follow. But what about using AI to make moral judgements?
ST2: You know, I tested the Ask Delphi model featured on the course, and it was fascinating. I’ve never come across anything like it. But I don’t think it should be used extensively or applied to every situation. I’d like to hope that, as humans, we have better judgement in some situations than an AI model. But what about ethical situations in virtual reality?
ST1: Moving on to Week 7, you discussed the different ways in which we interact within simulated spaces. Now, I don’t know about you, Bethany, but I didn’t really understand what a ‘simulated space’ was before I read through this week’s content!
ST2: It’s such a fascinating concept, isn’t it? Novel technologies like VR headsets allow us to do things we could never have imagined in the past. I mean, trying on clothes without actually having to trawl through shops or spend hours in changing rooms? Sounds like a gamechanger!
ST1: Whilst that might be the case, our data security is also so important, and I don’t know how safe I feel sharing so much information about myself with companies that I can’t fully trust. Take Mark Zuckerberg’s Metaverse, for instance. Should one company have a monopoly over our digital data? I tend to agree with Jaron Lanier’s stance on ‘data dignity’ and would prefer a decentralised approach to online data.
ST2: I agree, and I think most of you do too! When asked whether we should accept novel technologies without a democratic vote, your answer was a resounding no!
ST1: There was one comment in particular that caught my eye, about how social media has been detrimental to our mental health and how, in effect, we should take this as a warning not to accept new technologies complacently without analysing the risks first.
ST2: Social media definitely plays into the creation of virtual bodies too. It has become so easy to build an entirely new identity, new friendship groups and new ways to socialise, and during the Covid-19 pandemic especially, this became very common.
ST1: You’re right, but I’m not sure that’s always the case. In fact, there was discussion about how virtual bodies are often just replicas of the physical. And you all agreed: your comments suggest that, whilst there is the opportunity to omit certain things about yourself or create entirely new virtual identities, you can also be exactly the same online as you are in real life. But I wonder, will this last forever? Or will we become more familiar with fabricated posthuman identities online?
ST2: One thing this week that really amazed me was the transformative change that Brain-Computer Interfaces could bring about. It’s like the ultimate childhood dream: being able to make things happen with your mind!
ST1: Absolutely! Having such sophisticated tools at play to support rehabilitation sounds fantastic. These are the kinds of things that make me excited about the development of a Digital Society in the future.
ST2: Without doubt, there are challenges with Brain-Computer Interfaces, but I hope that, with time, they will become equally beneficial to all.
ST1: Well, that wraps up our conversation on Weeks 6 and 7 of the Digital Society module. Thank you all for your thoughtful comments; we loved looking through them.
You have really interesting and nuanced thoughts about how we construct and interact with AI and Simulated Spaces. Thanks for taking the time to listen, and have a great week!