<?xml version="1.0" encoding="utf-8" ?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><atom:link href="https://www.machine-ethics.net/itunes-rss-feed/" rel="self" type="application/rss+xml" /><title>Machine Ethics Podcast episodes</title><link>https://www.machine-ethics.net/</link><language>en</language><copyright>Copyright Ben Byford</copyright><itunes:author>Ben Byford and friends</itunes:author><itunes:subtitle>Artificial Intelligence, technology, autonomy, ethics and society</itunes:subtitle><itunes:summary>Discourse on AI Ethics.

News, explanation and interviews with academics, authors, business leaders, creatives and engineers on the subject of autonomous algorithms, artificial intelligence, responsible AI, machine learning, AGI, technology ethics, consciousness, philosophy and more.</itunes:summary><description><![CDATA[Discourse on AI Ethics.

News, explanation and interviews with academics, authors, business leaders, creatives and engineers on the subject of autonomous algorithms, artificial intelligence, responsible AI, machine learning, AGI, technology ethics, consciousness, philosophy and more.]]></description><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Benjamin Byford</itunes:name><itunes:email>hello@machine-ethics.net</itunes:email></itunes:owner><itunes:image href="https://www.machine-ethics.net/site/assets/files/1044/atoms-logo-1400-1.jpg" /><itunes:explicit>false</itunes:explicit><podcast:funding url="https://www.patreon.com/machineethics">Support the show!</podcast:funding><itunes:category text="Society &amp; Culture"><itunes:category text="Philosophy" /></itunes:category><itunes:category text="Technology"></itunes:category><itunes:category text="Science"><itunes:category text="Social Sciences" /></itunes:category><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Effects of AI with Dietmar Fischer</itunes:title><title>111. Effects of AI with Dietmar Fischer</title><link>https://www.machine-ethics.net/podcast/effects-of-ai-with-dietmar-fischer/</link><itunes:episode>111</itunes:episode><itunes:author>Ben Byford and Dietmar Fischer</itunes:author><itunes:subtitle>One hundred and eleventh episode of Machine Ethics podcast with Dietmar Fischer</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting with Dietmar Fischer about what we will mean by &quot;AI&quot; in the future, AI in science fiction, the fact that AIs don’t want for anything, jobs and the political effects of unemployment, post-work society and defining what a good life is, the Chinese AI legislation, protecting young people from AI anthropomorphising, AI literacy, the AI bubble, human extinction, and more]]></itunes:summary><description><![CDATA[This month we&#039;re chatting with Dietmar Fischer about what we will mean by &quot;AI&quot; in the future, 
AI in science fiction, the fact that AIs don’t want for anything, jobs and the political effects of unemployment, post-work society and defining what a good life is, the Chinese AI legislation, protecting young people from AI anthropomorphising, AI literacy, the AI bubble, human extinction, and more]]></description><pubDate>Tue, 14 Apr 2026 14:09:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1629/dietmar-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1630/deitmar_machine-ethics-pod.mp3" length="75381174" type="audio/mp3" /><itunes:duration>52:20</itunes:duration><guid>https://www.machine-ethics.net/podcast/effects-of-ai-with-dietmar-fischer/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Organoid Computing with Dr. Ewelina Kurtys</itunes:title><title>110. Organoid Computing with Dr. Ewelina Kurtys</title><link>https://www.machine-ethics.net/podcast/organoid-computing-with-dr-ewelina-kurtys/</link><itunes:episode>110</itunes:episode><itunes:author>Ben Byford and Dr. Ewelina Kurtys</itunes:author><itunes:subtitle>One hundred and tenth episode of Machine Ethics podcast with Dr. Ewelina Kurtys</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting with Dr. Ewelina Kurtys on the uses of organoids and energy-saving computing, the unknowns in neuroscience, differences between biological neurons and digital neural networks, how neurons operate and encode information, the impractical nature of recreating brain structures, the tendency to anthropomorphise, determinism and more...]]></itunes:summary><description><![CDATA[This month we&#039;re chatting with Dr. 
Ewelina Kurtys on the uses of organoids and energy-saving computing, the unknowns in neuroscience, differences between biological neurons and digital neural networks, how neurons operate and encode information, the impractical nature of recreating brain structures, the tendency to anthropomorphise, determinism and more...]]></description><pubDate>Tue, 31 Mar 2026 10:34:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1622/ewelina-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1628/ewelina_machine-ethics-podcast.mp3" length="64060444" type="audio/mp3" /><itunes:duration>44:26</itunes:duration><guid>https://www.machine-ethics.net/podcast/organoid-computing-with-dr-ewelina-kurtys/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Intelligence explosion with James Barrat</itunes:title><title>109. Intelligence explosion with James Barrat</title><link>https://www.machine-ethics.net/podcast/intelligence-explosion-with-james-barrat/</link><itunes:episode>109</itunes:episode><itunes:author>Ben Byford and James Barrat</itunes:author><itunes:subtitle>One hundred and ninth episode of Machine Ethics podcast with James Barrat</itunes:subtitle><itunes:summary><![CDATA[This episode James and I are trying to stay positive while chatting about: superintelligence, AI basic drives, and the alignment problem; the intelligence explosion, existential risks of AI; profit over responsibility, the super rich; AI regulation and much more]]></itunes:summary><description><![CDATA[This episode James and I are trying to stay positive while chatting about: superintelligence, AI basic drives, and the alignment problem; the intelligence explosion, existential risks of AI; profit over responsibility, the super rich; AI regulation and much more]]></description><pubDate>Tue, 03 Mar 2026 10:54:00 +0000</pubDate><itunes:image 
href="https://www.machine-ethics.net/site/assets/files/1619/james-illustratation.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1621/james-barrat-machine-ethics-podcast.mp3" length="86566236" type="audio/mp3" /><itunes:duration>01:00:00</itunes:duration><guid>https://www.machine-ethics.net/podcast/intelligence-explosion-with-james-barrat/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Moral Agents with Jen Semler</itunes:title><title>108. Moral Agents with Jen Semler</title><link>https://www.machine-ethics.net/podcast/moral-agents-with-jen-semler/</link><itunes:episode>108</itunes:episode><itunes:author>Ben Byford and Jen Semler</itunes:author><itunes:subtitle>One hundred and eighth episode of Machine Ethics podcast with Jen Semler</itunes:subtitle><itunes:summary><![CDATA[This month we chatted in-person with Jen Semler about what is AI? Philosopher and engineer collaborations, businesses working with ethicists, machine ethics and AMAs, what makes a moral agent, how to create a moral agent, types of moral decisions, the point of moral agents, how can we tell if a machine is conscious, tech companies not being democratic organisations accountable to their citizens, and more...]]></itunes:summary><description><![CDATA[This month we chatted in-person with Jen Semler about what is AI? 
Philosopher and engineer collaborations, businesses working with ethicists, machine ethics and AMAs, what makes a moral agent, how to create a moral agent, types of moral decisions, the point of moral agents, how can we tell if a machine is conscious, tech companies not being democratic organisations accountable to their citizens, and more...]]></description><pubDate>Wed, 04 Feb 2026 11:40:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1615/jen-semler.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1616/jen-semler-machine-ethics-podcast.mp3" length="63903908" type="audio/mp3" /><itunes:duration>44:18</itunes:duration><guid>https://www.machine-ethics.net/podcast/moral-agents-with-jen-semler/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>2025 wrap up with Lisa Talia Moretti &amp; Ben Byford</itunes:title><title>107. 2025 wrap up with Lisa Talia Moretti &amp; Ben Byford</title><link>https://www.machine-ethics.net/podcast/2025-wrap-up-with-lisa-talia-moretti/</link><itunes:episode>107</itunes:episode><itunes:author>Ben Byford and Lisa Talia Moretti</itunes:author><itunes:subtitle>One hundred and seventh episode of Machine Ethics podcast with Lisa Talia Moretti</itunes:subtitle><itunes:summary><![CDATA[For our 2025 round up episode we&#039;re again chatting with Lisa Talia Moretti on the prevalence of AI slop, the end of social media, Grok and explicit content generation, giving legislation more teeth, anthropomorphising reasoning models, AI literacy and safeguarding, fighting data centre construction, importance of journalism, and AI chatbot bingo...]]></itunes:summary><description><![CDATA[For our 2025 round up episode we&#039;re again chatting with Lisa Talia Moretti on the prevalence of AI slop, the end of social media, Grok and explicit content generation, giving legislation more teeth, anthropomorphising 
reasoning models, AI literacy and safeguarding, fighting data centre construction, importance of journalism, and AI chatbot bingo...]]></description><pubDate>Tue, 13 Jan 2026 09:42:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1610/ben_lisa-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1611/lisa2025wrapup-machine-ethics-podcast.mp3" length="60696132" type="audio/mp3" /><itunes:duration>42:05</itunes:duration><guid>https://www.machine-ethics.net/podcast/2025-wrap-up-with-lisa-talia-moretti/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Companion AI with Giulia Trojano</itunes:title><title>106. Companion AI with Giulia Trojano</title><link>https://www.machine-ethics.net/podcast/companion-ai-with-giulia-trojano/</link><itunes:episode>106</itunes:episode><itunes:author>Ben Byford with Giulia Trojano</itunes:author><itunes:subtitle>One hundred and sixth episode of Machine Ethics podcast with Giulia Trojano</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting with Giulia Trojano about AI as an economic narrative, companion chatbots, deskilling of digital literacy, chatbot parental controls, differences between social AI and general AI services, increasing surveillance in the guise of safety, advertising creeping into GenAI services, ReplikaAI, lack of research in emotional AI, techno-determinism, and more...]]></itunes:summary><description><![CDATA[This month we&#039;re chatting with Giulia Trojano about AI as an economic narrative, companion chatbots, deskilling of digital literacy, chatbot parental controls, differences between social AI and general AI services, increasing surveillance in the guise of safety, advertising creeping into GenAI services, ReplikaAI, lack of research in emotional AI, techno-determinism, and more...]]></description><pubDate>Wed, 03 Dec 2025 16:23:00 
+0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1605/giulia-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1609/giulia-trojano_machine-ethics-podcast.mp3" length="88522273" type="audio/mp3" /><itunes:duration>01:01:23</itunes:duration><guid>https://www.machine-ethics.net/podcast/companion-ai-with-giulia-trojano/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The AI Bubble with Tim El-Sheikh</itunes:title><title>105. The AI Bubble with Tim El-Sheikh</title><link>https://www.machine-ethics.net/podcast/the-ai-bubble-with-tim-el-sheikh/</link><itunes:episode>105</itunes:episode><itunes:author>Ben Byford with Tim El-Sheikh</itunes:author><itunes:subtitle>One hundred and fifth episode of Machine Ethics podcast with Tim El-Sheikh</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting again with Tim El-Sheikh. We discuss podcasting,
history of OpenAI, London startups, what are the AI use cases, is GenAI even safe? the AI bubble, snake oil salesmen, why do we need all these data centres? replacing human workers, data oligarchies, the erosion of trust in AI, AI psychosis and more...]]></itunes:summary><description><![CDATA[This month we&#039;re chatting again with Tim El-Sheikh. We discuss podcasting,
history of OpenAI, London startups, what are the AI use cases, is GenAI even safe? the AI bubble, snake oil salesmen, why do we need all these data centres? replacing human workers, data oligarchies, the erosion of trust in AI, AI psychosis and more...]]></description><pubDate>Wed, 19 Nov 2025 11:02:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1600/tim-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1604/tim2-el-sheikh_machine-ethics-podcast-1.mp3" length="99861975" type="audio/mp3" /><itunes:duration>01:00:53</itunes:duration><guid>https://www.machine-ethics.net/podcast/the-ai-bubble-with-tim-el-sheikh/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Fostering morality with Dr Oliver Bridge</itunes:title><title>104. Fostering morality with Dr Oliver Bridge</title><link>https://www.machine-ethics.net/podcast/fostering-morality-with-dr-oliver-bridge/</link><itunes:episode>104</itunes:episode><itunes:author>Ben Byford with Dr Oliver Bridge</itunes:author><itunes:subtitle>One hundred and fourth episode of Machine Ethics podcast with Dr Oliver Bridge</itunes:subtitle><itunes:summary><![CDATA[This time we&#039;re chatting with Dr Oliver Bridge about machine ethics, superintelligence, virtue ethics, AI alignment, fostering morality in humans and AI, evolutionary moral systems, socialising AI, and systems thinking...]]></itunes:summary><description><![CDATA[This time we&#039;re chatting with Dr Oliver Bridge about machine ethics, superintelligence, virtue ethics, AI alignment, fostering morality in humans and AI, evolutionary moral systems, socialising AI, and systems thinking...]]></description><pubDate>Wed, 29 Oct 2025 13:58:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1595/oliver-illustration.1400x1400.jpg" /><enclosure 
url="https://www.machine-ethics.net/site/assets/files/1599/oliver-bridge_machine-ethics-podcast-1.mp3" length="62835425" type="audio/mp3" /><itunes:duration>43:36</itunes:duration><guid>https://www.machine-ethics.net/podcast/fostering-morality-with-dr-oliver-bridge/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What excites you about AI? Vol.2</itunes:title><title>103. What excites you about AI? Vol.2</title><link>https://www.machine-ethics.net/podcast/what-excites-you-about-ai-vol.2/</link><itunes:episode>103</itunes:episode><itunes:author>Ben Byford and guests</itunes:author><itunes:subtitle>One hundred and third episode of Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[This is a bonus episode looking back over answers to our question: What excites you about AI?]]></itunes:summary><description><![CDATA[This is a bonus episode looking back over answers to our question: What excites you about AI?]]></description><pubDate>Sat, 27 Sep 2025 13:31:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1593/excites-thumbs2.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1594/machine-ethics-what-excites-you2.mp3" length="15183118" type="audio/mp3" /><itunes:duration>10:32</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-excites-you-about-ai-vol.2/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Autonomy AI with Adir Ben-Yehuda</itunes:title><title>102. 
Autonomy AI with Adir Ben-Yehuda</title><link>https://www.machine-ethics.net/podcast/autonomy-ai-with-adir-ben-yehuda/</link><itunes:episode>102</itunes:episode><itunes:author>Ben Byford with Adir Ben-Yehuda</itunes:author><itunes:subtitle>One hundred and second episode of Machine Ethics podcast with Adir Ben-Yehuda</itunes:subtitle><itunes:summary><![CDATA[This episode Adir and I chat about Autonomy.ai–AI automation for frontend web development, where human-machine interfaces could be going? allowing an LLM to optimise itself, job displacement, vibe coding, Grok&#039;s MechaHitler, the ethics and guard rails of LLMs, and go be a plumber!?]]></itunes:summary><description><![CDATA[This episode Adir and I chat about Autonomy.ai–AI automation for frontend web development, where human-machine interfaces could be going? allowing an LLM to optimise itself, job displacement, vibe coding, Grok&#039;s MechaHitler, the ethics and guard rails of LLMs, and go be a plumber!?]]></description><pubDate>Mon, 28 Jul 2025 22:15:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1589/adir-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1591/adir-ben-yehuda_machine-ethics-podcast.mp3" length="72864415" type="audio/mp3" /><itunes:duration>50:34</itunes:duration><guid>https://www.machine-ethics.net/podcast/autonomy-ai-with-adir-ben-yehuda/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI Ethics, Risks and Safety Conference 2025</itunes:title><title>101. AI Ethics, Risks and Safety Conference 2025</title><link>https://www.machine-ethics.net/podcast/ai-ethics-risks-and-safety-conference-2025/</link><itunes:episode>101</itunes:episode><itunes:author>Ben Byford with Dr. Simon Fothergill and Prof. 
Lucy Mason</itunes:author><itunes:subtitle>One hundred and first episode of Machine Ethics podcast at AI Ethics, Risks and Safety Conference 2025</itunes:subtitle><itunes:summary><![CDATA[This special live panel episode was recorded at the AI Ethics, Risks and Safety Conference 2025 in Bristol, May 2025. We chat about what is AI, predictions for the next 5 years - good and bad, the incoming wave of fraud, AI education and in education, copyright in the age of LLMs, the uncertainty of AI regulation, responsible AI in organisations, sovereign AI capabilities, the question: are we not being experimented on? Elderly AI, AI&#039;s impact on the creative industries and more...]]></itunes:summary><description><![CDATA[This special live panel episode was recorded at the AI Ethics, Risks and Safety Conference 2025 in Bristol, May 2025. We chat about what is AI, predictions for the next 5 years - good and bad, the incoming wave of fraud, AI education and in education, copyright in the age of LLMs, the uncertainty of AI regulation, responsible AI in organisations, sovereign AI capabilities, the question: are we not being experimented on? Elderly AI, AI&#039;s impact on the creative industries and more...]]></description><pubDate>Mon, 23 Jun 2025 21:23:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1585/ai-conf2025-thumb.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1586/machine-ethics-aiconf2025.mp3" length="71484076" type="audio/mp3" /><itunes:duration>49:36</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-ethics-risks-and-safety-conference-2025/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>DeepDive: AI and the Environment</itunes:title><title>100. 
DeepDive: AI and the Environment</title><link>https://www.machine-ethics.net/podcast/deepdive-ai-and-the-environment/</link><itunes:episode>100</itunes:episode><itunes:author>Ben Byford with Hannah Smith, Boris Gamazaychikov, Will Alpine and Mél Hogan</itunes:author><itunes:subtitle>One hundredth episode of Machine Ethics podcast with Hannah Smith, Boris Gamazaychikov, Will Alpine and Mél Hogan</itunes:subtitle><itunes:summary><![CDATA[This is our 100th episode! A super special look at AI and the Environment: we interviewed four experts for this DeepDive episode. We chatted about water stress, the energy usage of AI systems and data centres, using AI for fossil fuel discovery, the geo-political nature of AI, GenAI vs other ML algorithms for energy use, demanding transparency on energy usage for training and operating AI, more AI regulation for carbon consumption, things we can change today like picking renewable hosting solutions, publishing your data, when doing &quot;responsible AI&quot; you must include the environment, considering who are the controllers of the technology and what do they want, and more...]]></itunes:summary><description><![CDATA[This is our 100th episode! A super special look at AI and the Environment: we interviewed four experts for this DeepDive episode. 
We chatted about water stress, the energy usage of AI systems and data centres, using AI for fossil fuel discovery, the geo-political nature of AI, GenAI vs other ML algorithms for energy use, demanding transparency on energy usage for training and operating AI, more AI regulation for carbon consumption, things we can change today like picking renewable hosting solutions, publishing your data, when doing &quot;responsible AI&quot; you must include the environment, considering who are the controllers of the technology and what do they want, and more...]]></description><pubDate>Tue, 20 May 2025 09:33:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1576/eco-thumb-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1578/ai-and-the-environment.mp3" length="44144820" type="audio/mp3" /><itunes:duration>30:39</itunes:duration><guid>https://www.machine-ethics.net/podcast/deepdive-ai-and-the-environment/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Co-design with Pinar Guvenc</itunes:title><title>99. Co-design with Pinar Guvenc</title><link>https://www.machine-ethics.net/podcast/co-design-with-pinar-guvenc/</link><itunes:episode>99</itunes:episode><itunes:author>Ben Byford with Pinar Guvenc</itunes:author><itunes:subtitle>Ninety ninth episode of Machine Ethics podcast with Pinar Guvenc</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Pinar Guvenc on “What&#039;s Wrong With” podcast, co-design, is AI ready for society and is society ready for AI? What is design? co-creation with AI as a stakeholder, bias in design, small language models, is AI making us lazy? 
human experience, digital life and our attention, and talking to diverse people...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Pinar Guvenc on “What&#039;s Wrong With” podcast, co-design, is AI ready for society and is society ready for AI? What is design? co-creation with AI as a stakeholder, bias in design, small language models, is AI making us lazy? human experience, digital life and our attention, and talking to diverse people...]]></description><pubDate>Tue, 08 Apr 2025 09:39:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1572/pinar-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1573/pinar-guvenc-machine-ethics-podcast.mp3" length="71301583" type="audio/mp3" /><itunes:duration>49:27</itunes:duration><guid>https://www.machine-ethics.net/podcast/co-design-with-pinar-guvenc/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Careful technology with Rachel Coldicutt</itunes:title><title>98. 
Careful technology with Rachel Coldicutt</title><link>https://www.machine-ethics.net/podcast/careful-technology-with-rachel-coldicutt/</link><itunes:episode>98</itunes:episode><itunes:author>Ben Byford with Rachel Coldicutt</itunes:author><itunes:subtitle>Ninety eighth episode of Machine Ethics podcast with Rachel Coldicutt</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Rachel about AI taxonomy, innovating for everyone not just the few, Rachel&#039;s chronic honesty, responsibilities of researchers, socially responsible technology, ethics work as free labour, the right to repair, tinker, improve...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Rachel about AI taxonomy, innovating for everyone not just the few, Rachel&#039;s chronic honesty, responsibilities of researchers, socially responsible technology, ethics work as free labour, the right to repair, tinker, improve...]]></description><pubDate>Wed, 12 Mar 2025 12:05:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1569/rachel-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1570/rachel-coldicutt_machine-ethics-podcast.mp3" length="72905698" type="audio/mp3" /><itunes:duration>50:33</itunes:duration><guid>https://www.machine-ethics.net/podcast/careful-technology-with-rachel-coldicutt/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Running faster with Enrico Panai</itunes:title><title>97. 
Running faster with Enrico Panai</title><link>https://www.machine-ethics.net/podcast/running-faster-with-enrico-panai/</link><itunes:episode>97</itunes:episode><itunes:author>Ben Byford with Enrico Panai</itunes:author><itunes:subtitle>Ninety seventh episode of Machine Ethics podcast with Enrico Panai</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Enrico Panai about the elements of the digital revolution, how AI transforms data into information, HCI, the importance of knowing the tech as a tech philosopher, that ethicists should diagnose not judge, quality and making pasta, whether ethics is really a burden for companies or if you can run faster with ethics, don’t steal people’s lives, and finding a Marx for the digital world.]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Enrico Panai about the elements of the digital revolution, how AI transforms data into information, HCI, the importance of knowing the tech as a tech philosopher, that ethicists should diagnose not judge, quality and making pasta, whether ethics is really a burden for companies or if you can run faster with ethics, don’t steal people’s lives, and finding a Marx for the digital world.]]></description><pubDate>Wed, 05 Feb 2025 08:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1563/enrico-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1567/enrico-panai_machine-ethics-podcast.mp3" length="82936829" type="audio/mp3" /><itunes:duration>57:33</itunes:duration><guid>https://www.machine-ethics.net/podcast/running-faster-with-enrico-panai/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>2024 in review with Karin Rudolph and Ben Byford</itunes:title><title>96. 
2024 in review with Karin Rudolph and Ben Byford</title><link>https://www.machine-ethics.net/podcast/2024-in-review-with-karin-rudolph-and-ben-byford/</link><itunes:episode>96</itunes:episode><itunes:author>Ben Byford with Karin Rudolph</itunes:author><itunes:subtitle>Ninety sixth episode of Machine Ethics podcast with Karin Rudolph</itunes:subtitle><itunes:summary><![CDATA[For our 2024 round up episode we&#039;re again chatting with Karin Rudolph about the AI Ethics Risk and Safety Conference, the EU AI Act, agent based AI and Advertising! AI search and access to information, conflicting goals of many AI agents, weaponising disinformation, freedoms of speech, the LLM plateau, shadow AI, and more...]]></itunes:summary><description><![CDATA[For our 2024 round up episode we&#039;re again chatting with Karin Rudolph about the AI Ethics Risk and Safety Conference, the EU AI Act, agent based AI and Advertising! AI search and access to information, conflicting goals of many AI agents, weaponising disinformation, freedoms of speech, the LLM plateau, shadow AI, and more...]]></description><pubDate>Thu, 19 Dec 2024 14:44:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1559/eoy2024-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1560/eoy2024-machine-ethics-podcast-1.mp3" length="80105144" type="audio/mp3" /><itunes:duration>54:37</itunes:duration><guid>https://www.machine-ethics.net/podcast/2024-in-review-with-karin-rudolph-and-ben-byford/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Responsible AI strategy with Olivia Gambelin</itunes:title><title>95. 
Responsible AI strategy with Olivia Gambelin</title><link>https://www.machine-ethics.net/podcast/responsible-ai-strategy-with-olivia-gamblin/</link><itunes:episode>95</itunes:episode><itunes:author>Ben Byford with Olivia Gambelin</itunes:author><itunes:subtitle>Ninety fifth episode of Machine Ethics podcast with Olivia Gambelin</itunes:subtitle><itunes:summary><![CDATA[For Olivia&#039;s 3rd time on the show we&#039;re chatting about Olivia&#039;s book on Responsible AI, scalable AI strategy, AI ethics and RAI, bad innovation, values for RAI, risk and innovation mindsets, who owns the RAI strategy? why work with an external consultant? agentic AI, predictions for the next two years, and more...]]></itunes:summary><description><![CDATA[For Olivia&#039;s 3rd time on the show we&#039;re chatting about Olivia&#039;s book on Responsible AI, scalable AI strategy, AI ethics and RAI, bad innovation, values for RAI, risk and innovation mindsets, who owns the RAI strategy? why work with an external consultant? agentic AI, predictions for the next two years, and more...]]></description><pubDate>Wed, 11 Dec 2024 10:17:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1557/olivia-illustration2.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1558/olivia-gambelin_machine-ethics-podcast.mp3" length="83290378" type="audio/mp3" /><itunes:duration>57:46</itunes:duration><guid>https://www.machine-ethics.net/podcast/responsible-ai-strategy-with-olivia-gamblin/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Diversity in the AI life-cycle with Caitlin Kraft-Buchman</itunes:title><title>94. 
Diversity in the AI life-cycle with Caitlin Kraft-Buchman</title><link>https://www.machine-ethics.net/podcast/diversity-in-the-life-cycle-with-caitlin-kraft-buchman/</link><itunes:episode>94</itunes:episode><itunes:author>Ben Byford with Caitlin Kraft-Buchman</itunes:author><itunes:subtitle>Ninety fourth episode of Machine Ethics podcast with Caitlin Kraft-Buchman</itunes:subtitle><itunes:summary><![CDATA[In this episode we&#039;re chatting to Caitlin about gender and AI, technology isn’t neutral, using technology for good, diversity creation and exploitation, lived experience expertise, co-creating technologies and the AI life cycle, importance of success metrics, international treaties on AI and more...]]></itunes:summary><description><![CDATA[In this episode we&#039;re chatting to Caitlin about gender and AI, technology isn’t neutral, using technology for good, diversity creation and exploitation, lived experience expertise, co-creating technologies and the AI life cycle, importance of success metrics, international treaties on AI and more...]]></description><pubDate>Tue, 26 Nov 2024 01:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1551/caitlin-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1556/caitlin-kraft-buchman_machine-ethicspodcast.mp3" length="67800806" type="audio/mp3" /><itunes:duration>47:03</itunes:duration><guid>https://www.machine-ethics.net/podcast/diversity-in-the-life-cycle-with-caitlin-kraft-buchman/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Socio-technical systems with Lisa Talia Moretti</itunes:title><title>93. 
Socio-technical systems with Lisa Talia Moretti</title><link>https://www.machine-ethics.net/podcast/techno-social-systems-with-lisa-talia-moretti/</link><itunes:episode>93</itunes:episode><itunes:author>Ben Byford with Lisa Talia Moretti</itunes:author><itunes:subtitle>Ninety third episode of Machine Ethics podcast with Lisa Talia Moretti</itunes:subtitle><itunes:summary><![CDATA[In this episode we&#039;re chatting to Lisa about: Data and AI literacy, data sharing, data governance and data wallets, design values, selling in ethics to organisations, contractual agreements and ethical frameworks, AI unlearning, what organisations need to know about ethics, and an AI ethics consultant directory...]]></itunes:summary><description><![CDATA[In this episode we&#039;re chatting to Lisa about: Data and AI literacy, data sharing, data governance and data wallets, design values, selling in ethics to organisations, contractual agreements and ethical frameworks, AI unlearning, what organisations need to know about ethics, and an AI ethics consultant directory...]]></description><pubDate>Thu, 03 Oct 2024 20:57:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1547/lisa-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1550/lisa-talia-moretti_machine-ethics-podcast.mp3" length="89061005" type="audio/mp3" /><itunes:duration>01:01:49</itunes:duration><guid>https://www.machine-ethics.net/podcast/techno-social-systems-with-lisa-talia-moretti/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI Truth with Alex Tsakiris</itunes:title><title>92. 
AI Truth with Alex Tsakiris</title><link>https://www.machine-ethics.net/podcast/ai-truth-with-alex-tsakiris/</link><itunes:episode>92</itunes:episode><itunes:author>Ben Byford with Alex Tsakiris</itunes:author><itunes:subtitle>Ninety second episode of Machine Ethics podcast with Alex Tsakiris</itunes:subtitle><itunes:summary><![CDATA[In this special filmed podcast swap episode I&#039;m chatting with Alex Tsakiris about: should you learn to code to be in AI? Stockfish chess AI, AI truth and What is Truth? Deductive and inductive learning, everything is statistics, statistical ethics, free will and consciousness, ESP, red teaming LLMs, shadowbanning, and much more...]]></itunes:summary><description><![CDATA[In this special filmed podcast swap episode I&#039;m chatting with Alex Tsakiris about: should you learn to code to be in AI? Stockfish chess AI, AI truth and What is Truth? Deductive and inductive learning, everything is statistics, statistical ethics, free will and consciousness, ESP, red teaming LLMs, shadowbanning, and much more...]]></description><pubDate>Tue, 03 Sep 2024 12:17:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1539/alex-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1546/alex-tsakiris_machine-ethics-podcast.mp3" length="118131294" type="audio/mp3" /><itunes:duration>01:21:59</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-truth-with-alex-tsakiris/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What scares you about AI? Vol.2</itunes:title><title>91. What scares you about AI? 
Vol.2</title><link>https://www.machine-ethics.net/podcast/what-scares-you-about-ai-vol.2/</link><itunes:episode>91</itunes:episode><itunes:author>Ben Byford and friends</itunes:author><itunes:subtitle>Ninety first episode of Machine Ethics podcast with a bonus: what scares you?</itunes:subtitle><itunes:summary><![CDATA[In this bonus episode we&#039;re looking back over answers to our question: What scares you about AI?]]></itunes:summary><description><![CDATA[In this bonus episode we&#039;re looking back over answers to our question: What scares you about AI?]]></description><pubDate>Mon, 15 Jul 2024 09:46:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1537/scares-banner2.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1538/what-scares-you2.mp3" length="13254379" type="audio/mp3" /><itunes:duration>09:11</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-scares-you-about-ai-vol.2/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>An ethos for the future with Wendell Wallach</itunes:title><title>90. 
An ethos for the future with Wendell Wallach</title><link>https://www.machine-ethics.net/podcast/an-ethos-for-the-future-with-wendell-wallach/</link><itunes:episode>90</itunes:episode><itunes:author>Ben Byford with Wendell Wallach</itunes:author><itunes:subtitle>Ninetieth episode of Machine Ethics podcast with Wendell Wallach</itunes:subtitle><itunes:summary><![CDATA[This time we&#039;re chatting with Wendell Wallach on moral machines and Machine Ethics, AGI sceptics, the usefulness of the term artificial intelligence, a new ethic or ethos for human society, ethics as decisions fast and slow, trade-off ethics, the AI oligopoly, the good and bad of capitalism, consciousness, global workspace theory and more...]]></itunes:summary><description><![CDATA[This time we&#039;re chatting with Wendell Wallach on moral machines and Machine Ethics, AGI sceptics, the usefulness of the term artificial intelligence, a new ethic or ethos for human society, ethics as decisions fast and slow, trade-off ethics, the AI oligopoly, the good and bad of capitalism, consciousness, global workspace theory and more...]]></description><pubDate>Wed, 03 Jul 2024 22:12:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1534/wendell-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1535/wendell-machine-ethics-podcast.mp3" length="92247720" type="audio/mp3" /><itunes:duration>01:03:57</itunes:duration><guid>https://www.machine-ethics.net/podcast/an-ethos-for-the-future-with-wendell-wallach/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI Ethics, Risks and Safety Conference - Special Edition</itunes:title><title>89. 
AI Ethics, Risks and Safety Conference - Special Edition</title><link>https://www.machine-ethics.net/podcast/special-edition-ai-ethics-risks-and-safety-conference/</link><itunes:episode>89</itunes:episode><itunes:author>Ben Byford and Herbie Robson at the AI Ethics, Risks and Safety Conference</itunes:author><itunes:subtitle>Eighty ninth episode of Machine Ethics podcast at the AI Ethics, Risks and Safety Conference</itunes:subtitle><itunes:summary><![CDATA[In this special edition episode we hear vox-pops recorded at the AI Ethics, Risks and Safety Conference in Bristol on the 15th of May 2024. We hear about AI regulations, AI Standards, AI Ethics frameworks, principles, ethics guiding research, awareness of the ethics of AI, and explainable AI.]]></itunes:summary><description><![CDATA[In this special edition episode we hear vox-pops recorded at the AI Ethics, Risks and Safety Conference in Bristol on the 15th of May 2024. We hear about AI regulations, AI Standards, AI Ethics frameworks, principles, ethics guiding research, awareness of the ethics of AI, and explainable AI.]]></description><pubDate>Mon, 24 Jun 2024 11:14:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1529/thumb.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1533/aiethics-risks-safety-machine-ethics-podcast.mp3" length="18658203" type="audio/mp3" /><itunes:duration>12:55</itunes:duration><guid>https://www.machine-ethics.net/podcast/special-edition-ai-ethics-risks-and-safety-conference/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI fictions with Alex Shvartsman</itunes:title><title>88. 
AI fictions with Alex Shvartsman</title><link>https://www.machine-ethics.net/podcast/ai-fictions-with-alex-shvartsman/</link><itunes:episode>88</itunes:episode><itunes:author>Ben Byford with Alex Shvartsman</itunes:author><itunes:subtitle>Eighty eighth episode of Machine Ethics podcast with Alex Shvartsman</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Alex Shvartsman about what is our AI future, human-crafted storytelling, the Generative AI use backlash, disclaimers for generated text, human vs AI authorship, practical or functional goals of LLMs, changing themes in science fiction, a diversity of international perspectives and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Alex Shvartsman about what is our AI future, human-crafted storytelling, the Generative AI use backlash, disclaimers for generated text, human vs AI authorship, practical or functional goals of LLMs, changing themes in science fiction, a diversity of international perspectives and more...]]></description><pubDate>Tue, 14 May 2024 14:39:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1526/alex-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1528/alex-shvartsman_machine-ethics-podcast.mp3" length="50969931" type="audio/mp3" /><itunes:duration>35:22</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-fictions-with-alex-shvartsman/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Good tech with Eleanor Drage and Kerry McInerney</itunes:title><title>87. 
Good tech with Eleanor Drage and Kerry McInerney</title><link>https://www.machine-ethics.net/podcast/good-tech-with-eleanor-drage-and-kerry-mcinerney/</link><itunes:episode>87</itunes:episode><itunes:author>Ben Byford with Eleanor Drage and Kerry McInerney</itunes:author><itunes:subtitle>Eighty seventh episode of Machine Ethics podcast with Dr Eleanor Drage and Dr Kerry McInerney</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Eleanor and Kerry on what is good technology and is it even possible? Technology is political, watering down regulation, the magic of AI, the value of human creativity, how Feminism, Aboriginal, mixed race studies can help AI development? The performative nature of tech and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Eleanor and Kerry on what is good technology and is it even possible? Technology is political, watering down regulation, the magic of AI, the value of human creativity, how Feminism, Aboriginal, mixed race studies can help AI development? The performative nature of tech and more...]]></description><pubDate>Tue, 02 Apr 2024 14:39:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1522/illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1525/good-tech_machine-ethics-podcast.mp3" length="77600456" type="audio/mp3" /><itunes:duration>53:49</itunes:duration><guid>https://www.machine-ethics.net/podcast/good-tech-with-eleanor-drage-and-kerry-mcinerney/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What is AI? Vol.3</itunes:title><title>86. What is AI? 
Vol.3</title><link>https://www.machine-ethics.net/podcast/what-is-ai-vol.3/</link><itunes:episode>86</itunes:episode><itunes:author>Ben Byford and guests</itunes:author><itunes:subtitle>Eighty sixth episode of Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[This is a bonus episode looking back over answers to our question: What is AI?]]></itunes:summary><description><![CDATA[This is a bonus episode looking back over answers to our question: What is AI?]]></description><pubDate>Tue, 19 Mar 2024 13:54:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1520/thumb.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1521/what-is-ai3_machine-ethics-podcast.mp3" length="21308015" type="audio/mp3" /><itunes:duration>14:47</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-is-ai-vol.3/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>New forms of story telling with Guy Gadney</itunes:title><title>85. 
New forms of story telling with Guy Gadney</title><link>https://www.machine-ethics.net/podcast/new-forms-of-story-telling-with-guy-gadney/</link><itunes:episode>85</itunes:episode><itunes:author>Ben Byford with Guy Gadney</itunes:author><itunes:subtitle>Eighty fifth episode of Machine Ethics podcast with Guy Gadney</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Guy Gadney on new forms of story telling, placing people inside a story, natural language in games, LLM hype, data used in LLMs, copyright infringement, the destructive ideology of innovation, an unprecedented redistribution of wealth away from the cultural industries and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Guy Gadney on new forms of story telling, placing people inside a story, natural language in games, LLM hype, data used in LLMs, copyright infringement, the destructive ideology of innovation, an unprecedented redistribution of wealth away from the cultural industries and more...]]></description><pubDate>Fri, 09 Feb 2024 22:17:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1489/guy-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1493/guy-mix_mixdown.mp3" length="67026088" type="audio/mp3" /><itunes:duration>47:51</itunes:duration><guid>https://www.machine-ethics.net/podcast/new-forms-of-story-telling-with-guy-gadney/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Review of 2023 with Karin Rudolph</itunes:title><title>84. 
Review of 2023 with Karin Rudolph</title><link>https://www.machine-ethics.net/podcast/review-of-2023-with-karin-rudolph/</link><itunes:episode>84</itunes:episode><itunes:author>Ben Byford with Karin Rudolph</itunes:author><itunes:subtitle>Eighty fourth episode of Machine Ethics podcast with Karin Rudolph</itunes:subtitle><itunes:summary><![CDATA[For our in-person episode on 2023 with Karin Rudolph we chat about the Future of Life Institute letter, existential risk of AI, TESCREAL, Geoffrey Hinton’s resignation from Google, the AI Safety Summit, EU AI Act and legislating AI, neural rights and more...]]></itunes:summary><description><![CDATA[For our in-person episode on 2023 with Karin Rudolph we chat about the Future of Life Institute letter, existential risk of AI, TESCREAL, Geoffrey Hinton’s resignation from Google, the AI Safety Summit, EU AI Act and legislating AI, neural rights and more...]]></description><pubDate>Tue, 02 Jan 2024 21:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1485/karin-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1486/eoy2023_machine-ethics.mp3" length="90145365" type="audio/mp3" /><itunes:duration>01:02:33</itunes:duration><guid>https://www.machine-ethics.net/podcast/review-of-2023-with-karin-rudolph/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Avoidable misery with Adam Braus</itunes:title><title>83. 
Avoidable misery with Adam Braus</title><link>https://www.machine-ethics.net/podcast/avoidable-misery-with-adam-braus/</link><itunes:episode>83</itunes:episode><itunes:author>Ben Byford with Adam Braus</itunes:author><itunes:subtitle>Eighty third episode of Machine Ethics podcast with Adam Braus</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Adam Braus about natural stupidity, natural intelligence, misericordianism and avoidable misery, the drowning child thought experiment, natural state of morality, Donald Trump bot, Asimov’s rules, human instincts, the positive outcomes of AI and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Adam Braus about natural stupidity, natural intelligence, misericordianism and avoidable misery, the drowning child thought experiment, natural state of morality, Donald Trump bot, Asimov’s rules, human instincts, the positive outcomes of AI and more...]]></description><pubDate>Thu, 07 Dec 2023 21:49:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1478/adam-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1479/adam-braus_machine-ethics-podcast.mp3" length="96386544" type="audio/mp3" /><itunes:duration>01:06:52</itunes:duration><guid>https://www.machine-ethics.net/podcast/avoidable-misery-with-adam-braus/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Work, wellness and creativity with Harriet Pellereau</itunes:title><title>82. 
Work, wellness and creativity with Harriet Pellereau</title><link>https://www.machine-ethics.net/podcast/mind-wellness-and-creativity-with-harriet-pellereau/</link><itunes:episode>82</itunes:episode><itunes:author>Ben Byford with Harriet Pellereau</itunes:author><itunes:subtitle>Eighty second episode of Machine Ethics podcast with Harriet Pellereau</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Harriet Pellereau about AI’s lack of reasoning ability, uses of generative AI, creativity and AI, what even is creativity? creative duties, new ways of working / digital working, 4 Day Week Global, work-life balance, the hidden cost of convenience, responsible tech and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Harriet Pellereau about AI’s lack of reasoning ability, uses of generative AI, creativity and AI, what even is creativity? creative duties, new ways of working / digital working, 4 Day Week Global, work-life balance, the hidden cost of convenience, responsible tech and more...]]></description><pubDate>Wed, 04 Oct 2023 20:46:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1470/harriet-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1477/harriet_machine-ethics-podcast-1.mp3" length="80813336" type="audio/mp3" /><itunes:duration>56:03</itunes:duration><guid>https://www.machine-ethics.net/podcast/mind-wellness-and-creativity-with-harriet-pellereau/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The state of AI Ethics with Alice Thwaite</itunes:title><title>81. 
The state of AI Ethics with Alice Thwaite</title><link>https://www.machine-ethics.net/podcast/the-state-of-ai-ethics-with-alice-thwaite/</link><itunes:episode>81</itunes:episode><itunes:author>Ben Byford with Alice Thwaite</itunes:author><itunes:subtitle>Eighty first episode of Machine Ethics podcast with Alice Thwaite</itunes:subtitle><itunes:summary><![CDATA[This time I&#039;m chatting to Alice about teaching ethics, the idea of information environments, the importance of democracy, the Ethics hype train and the ethics community, people to follow in AI and Data Ethics, ethics as innovation and more...]]></itunes:summary><description><![CDATA[This time I&#039;m chatting to Alice about teaching ethics, the idea of information environments, the importance of democracy, the Ethics hype train and the ethics community, people to follow in AI and Data Ethics, ethics as innovation and more...]]></description><pubDate>Sun, 17 Sep 2023 18:12:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1469/alice-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1473/alice-thwaite_machine-ethics-podcast.mp3" length="91111901" type="audio/mp3" /><itunes:duration>01:03:11</itunes:duration><guid>https://www.machine-ethics.net/podcast/the-state-of-ai-ethics-with-alice-thwaite/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Art and AI collaboration with Sarah Brin</itunes:title><title>80. 
Art and AI collaboration with Sarah Brin</title><link>https://www.machine-ethics.net/podcast/art-and-ai-collaboration-with-sarah-brin/</link><itunes:episode>80</itunes:episode><itunes:author>Ben Byford with Sarah Brin</itunes:author><itunes:subtitle>Eightieth episode of Machine Ethics podcast with Sarah Brin</itunes:subtitle><itunes:summary><![CDATA[This time we&#039;re chatting with Sarah Brin about types of AI, the process of making artwork, how is an artwork culturally valuable, curatorial practice for AI art, unionising creative art workers, collaborative artwork with AI, using AI to help the climate emergency, AI in games and more...]]></itunes:summary><description><![CDATA[This time we&#039;re chatting with Sarah Brin about types of AI, the process of making artwork, how is an artwork culturally valuable, curatorial practice for AI art, unionising creative art workers, collaborative artwork with AI, using AI to help the climate emergency, AI in games and more...]]></description><pubDate>Sun, 17 Sep 2023 18:11:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1468/sarah-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1471/sarah-brin_machine-ethics-podcast.mp3" length="51550918" type="audio/mp3" /><itunes:duration>35:44</itunes:duration><guid>https://www.machine-ethics.net/podcast/art-and-ai-collaboration-with-sarah-brin/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Taming Uncertainty with Roger Spitz</itunes:title><title>79. 
Taming Uncertainty with Roger Spitz</title><link>https://www.machine-ethics.net/podcast/taming-uncertainty-with-roger-spitz/</link><itunes:episode>79</itunes:episode><itunes:author>Ben Byford with Roger Spitz</itunes:author><itunes:subtitle>Seventy ninth episode of Machine Ethics podcast with Roger Spitz</itunes:subtitle><itunes:summary><![CDATA[This time we chat with Roger Spitz about how to think about the future, what does a futurist do? Thriving with disruption, a chief existential officer, virtuous inflection points, delegating too much authority / decision making, our inappropriate education system]]></itunes:summary><description><![CDATA[This time we chat with Roger Spitz about how to think about the future, what does a futurist do? Thriving with disruption, a chief existential officer, virtuous inflection points, delegating too much authority / decision making, our inappropriate education system]]></description><pubDate>Tue, 11 Jul 2023 11:13:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1458/roger-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1467/roger-spitz_machine-ethics-podcast.mp3" length="106064346" type="audio/mp3" /><itunes:duration>01:13:36</itunes:duration><guid>https://www.machine-ethics.net/podcast/taming-uncertainty-with-roger-spitz/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Design and AI with Nadia Piet</itunes:title><title>78. Design and AI with Nadia Piet</title><link>https://www.machine-ethics.net/podcast/design-and-ai-with-nadia-piet/</link><itunes:episode>78</itunes:episode><itunes:author>Ben Byford with Nadia Piet</itunes:author><itunes:subtitle>Seventy eighth episode of Machine Ethics podcast with Nadia Piet</itunes:subtitle><itunes:summary><![CDATA[This episode Nadia and I chat about how design can co-create AI, what the role of designers is in AI services? 
post-deployment design, narratives in AI development and AI ideologues, anthropocentric AI, augmented creativity, new AI perspectives, situated intelligences and more...]]></itunes:summary><description><![CDATA[This episode Nadia and I chat about how design can co-create AI, what the role of designers is in AI services? post-deployment design, narratives in AI development and AI ideologues, anthropocentric AI, augmented creativity, new AI perspectives, situated intelligences and more...]]></description><pubDate>Mon, 19 Jun 2023 12:32:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1457/nadia-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1459/nadia-piet_machine-ethics-podcast.mp3" length="58976756" type="audio/mp3" /><itunes:duration>40:56</itunes:duration><guid>https://www.machine-ethics.net/podcast/design-and-ai-with-nadia-piet/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Doing Ethics with Marc Steen</itunes:title><title>77. 
Doing Ethics with Marc Steen</title><link>https://www.machine-ethics.net/podcast/doing-ethics-with-marc-stern/</link><itunes:episode>77</itunes:episode><itunes:author>Ben Byford with Marc Steen</itunes:author><itunes:subtitle>Seventy seventh episode of Machine Ethics podcast with Marc Steen</itunes:subtitle><itunes:summary><![CDATA[This episode Marc Steen and I chat about: AI as tools, the ethics of business models, writing Ethics for People Who Work in Tech, the process of ethics - “doing ethics” and his three step process, misconceptions of ethics as compliance or a roadblock, evaluating ethical theories, universal rights, types of knowledges, what is the world we’re creating with AI?]]></itunes:summary><description><![CDATA[This episode Marc Steen and I chat about: AI as tools, the ethics of business models, writing Ethics for People Who Work in Tech, the process of ethics - “doing ethics” and his three step process, misconceptions of ethics as compliance or a roadblock, evaluating ethical theories, universal rights, types of knowledges, what is the world we’re creating with AI?]]></description><pubDate>Tue, 23 May 2023 09:26:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1453/illlustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1456/marc-steen_machine-ethics-podcast.mp3" length="75563746" type="audio/mp3" /><itunes:duration>52:24</itunes:duration><guid>https://www.machine-ethics.net/podcast/doing-ethics-with-marc-stern/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The professionalisation of data science with Dr Marie Oldfield</itunes:title><title>76. 
The professionalisation of data science with Dr Marie Oldfield</title><link>https://www.machine-ethics.net/podcast/the-professionalisation-of-data-science-with-dr-marie-oldfield/</link><itunes:episode>76</itunes:episode><itunes:author>Ben Byford with Dr Marie Oldfield</itunes:author><itunes:subtitle>Seventy sixth episode of Machine Ethics podcast with Dr Marie Oldfield</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re talking with Dr Marie Oldfield on definitions of AI, the education and communication gaps with AI, explainable models, ethics in education, problems with audits and legislation, AI accreditation, importance of interdisciplinary teams, when to use AI or not, and harms from algorithms.]]></itunes:summary><description><![CDATA[This episode we&#039;re talking with Dr Marie Oldfield on definitions of AI, the education and communication gaps with AI, explainable models, ethics in education, problems with audits and legislation, AI accreditation, importance of interdisciplinary teams, when to use AI or not, and harms from algorithms.]]></description><pubDate>Mon, 17 Apr 2023 18:03:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1442/marie-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1444/marie-oldfield_machine-ethics-podcast.mp3" length="55017282" type="audio/mp3" /><itunes:duration>38:10</itunes:duration><guid>https://www.machine-ethics.net/podcast/the-professionalisation-of-data-science-with-dr-marie-oldfield/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The moral status of non-humans with Josh Gellers</itunes:title><title>75. 
The moral status of non-humans with Josh Gellers</title><link>https://www.machine-ethics.net/podcast/robot-right-with-josh-gellers/</link><itunes:episode>75</itunes:episode><itunes:author>Ben Byford with Josh Gellers</itunes:author><itunes:subtitle>Seventy fifth episode of Machine Ethics podcast with Josh Gellers</itunes:subtitle><itunes:summary><![CDATA[This episode we talk with Josh Gellers about nature rights, rights for robots, non-human and human rights, justification for the attribution of rights, the sphere of moral importance, perspectives on legal and moral concepts, shaping better policy, the LaMDA/Lemoine controversy, predicates of legal personhood, the heated discourse on robot rights, science fiction as a moral playground and more...]]></itunes:summary><description><![CDATA[This episode we talk with Josh Gellers about nature rights, rights for robots, non-human and human rights, justification for the attribution of rights, the sphere of moral importance, perspectives on legal and moral concepts, shaping better policy, the LaMDA/Lemoine controversy, predicates of legal personhood, the heated discourse on robot rights, science fiction as a moral playground and more...]]></description><pubDate>Tue, 28 Mar 2023 14:39:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1433/josh-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1439/josh-gellers_machine-ethics-podcast.mp3" length="93728111" type="audio/mp3" /><itunes:duration>01:05:03</itunes:duration><guid>https://www.machine-ethics.net/podcast/robot-right-with-josh-gellers/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI, ethics and the future - DFA talk special edition</itunes:title><title>74. 
AI, ethics and the future - DFA talk special edition</title><link>https://www.machine-ethics.net/podcast/dfa-ai-ethics-talk/</link><itunes:episode>74</itunes:episode><itunes:author>Ben Byford with Alex Joseph, Dr Marie Oldfield, Alice Thwaite, and Sophia Davies</itunes:author><itunes:subtitle>Seventy fourth episode of Machine Ethics podcast hosting a DFA talk</itunes:subtitle><itunes:summary><![CDATA[In this special edition episode with Data Science Festival we&#039;re hosting a panel discussing: what is ethics? Designing for Responsible AI, ethics as innovation and competitive advantage, ghost work, fairer AI, language as a human computer interface, cleaning up the web, technologies that shouldn’t be deployed and much more...]]></itunes:summary><description><![CDATA[In this special edition episode with Data Science Festival we&#039;re hosting a panel discussing: what is ethics? Designing for Responsible AI, ethics as innovation and competitive advantage, ghost work, fairer AI, language as a human computer interface, cleaning up the web, technologies that shouldn’t be deployed and much more...]]></description><pubDate>Sat, 11 Mar 2023 14:38:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1432/thumb.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1438/dfa2023_machine-ethics-podcast.mp3" length="101001875" type="audio/mp3" /><itunes:duration>01:10:05</itunes:duration><guid>https://www.machine-ethics.net/podcast/dfa-ai-ethics-talk/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>2022 in review with Olivia Gambelin</itunes:title><title>73. 
2022 in review with Olivia Gamblin</title><link>https://www.machine-ethics.net/podcast/2022-in-review-with-olivia-gamblin/</link><itunes:episode>73</itunes:episode><itunes:author>Ben Byford with Olivia Gamblin</itunes:author><itunes:subtitle>Seventy third episode of Machine Ethics podcast with Olivia Gamblin</itunes:subtitle><itunes:summary><![CDATA[For this end of year episode I&#039;m joined by Olivia Gamblin to discuss: ethics boards, generative image models and copyright, concept art, model bias and representation in the generative models, paying artists to appear in training sets, plagiarism, ChatGPT and when it breaks down, factual “truth” in text models, expectations for AI and digital technologies generally, limitations of AGI, inner life and the Chinese room, consciousness, robot rights, animal rights and getting into AI Ethics...]]></itunes:summary><description><![CDATA[For this end of year episode I&#039;m joined by Olivia Gamblin to discuss: ethics boards, generative image models and copyright, concept art, model bias and representation in the generative models, paying artists to appear in training sets, plagiarism, ChatGPT and when it breaks down, factual “truth” in text models, expectations for AI and digital technologies generally, limitations of AGI, inner life and the Chinese room, consciousness, robot rights, animal rights and getting into AI Ethics...]]></description><pubDate>Wed, 01 Feb 2023 15:51:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1429/olivia-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1431/eoy2022-machine-ethics-podcast.mp3" length="111846077" type="audio/mp3" /><itunes:duration>01:17:37</itunes:duration><guid>https://www.machine-ethics.net/podcast/2022-in-review-with-olivia-gamblin/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Algorithms with Social Impact with 
Mitchel Ondili</itunes:title><title>72. Algorithms with Social Impact with Mitchel Ondili</title><link>https://www.machine-ethics.net/podcast/algorithms-with-social-impact-with-mitchel-ondili/</link><itunes:episode>72</itunes:episode><itunes:author>Ben Byford with Mitchel Ondili</itunes:author><itunes:subtitle>Seventy second episode of Machine Ethics podcast with Mitchel Ondili</itunes:subtitle><itunes:summary><![CDATA[This episode we talk with Mitchel Ondili on algorithm awareness, technology colonisation in the global south, OASI the registry for Algorithms with Social Impact, AI auditing, private vs public rights to consent, submitting your algorithms to OASI, hiring and social services algorithms, the over-datafication of life, or becoming an algorithmic subject, intentionality of services and much more.]]></itunes:summary><description><![CDATA[This episode we talk with Mitchel Ondili on algorithm awareness, technology colonisation in the global south, OASI the registry for Algorithms with Social Impact, AI auditing, private vs public rights to consent, submitting your algorithms to OASI, hiring and social services algorithms, the over-datafication of life, or becoming an algorithmic subject, intentionality of services and much more.]]></description><pubDate>Fri, 16 Dec 2022 14:18:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1426/mitchel-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1428/mitchel-ondili_machine-ethics-podcast.mp3" length="63138492" type="audio/mp3" /><itunes:duration>43:48</itunes:duration><guid>https://www.machine-ethics.net/podcast/algorithms-with-social-impact-with-mitchel-ondili/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The Politics of AI with Mark Coeckelbergh</itunes:title><title>71. 
The Politics of AI with Mark Coeckelbergh</title><link>https://www.machine-ethics.net/podcast/the-politics-of-ai-with-mark-coeckelbergh/</link><itunes:episode>71</itunes:episode><itunes:author>Ben Byford with Mark Coeckelbergh</itunes:author><itunes:subtitle>Seventy first episode of Machine Ethics podcast with Mark Coeckelbergh</itunes:subtitle><itunes:summary><![CDATA[This episode we talk with Mark Coeckelbergh about AI as a story about machines and where are we heading in creating human level intelligence, moral standing and robot-animal interfaces, technology determinism, environmental impacts of robots and AI, energy budgets, politics and AI, self-regulation and global governance for global issues.]]></itunes:summary><description><![CDATA[This episode we talk with Mark Coeckelbergh about AI as a story about machines and where are we heading in creating human level intelligence, moral standing and robot-animal interfaces, technology determinism, environmental impacts of robots and AI, energy budgets, politics and AI, self-regulation and global governance for global issues.]]></description><pubDate>Tue, 22 Nov 2022 10:06:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1424/mark-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1425/mark-coeckelbergh_machine-ethics-podcast.mp3" length="66048811" type="audio/mp3" /><itunes:duration>45:49</itunes:duration><guid>https://www.machine-ethics.net/podcast/the-politics-of-ai-with-mark-coeckelbergh/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Rights, trust and ethical choice with Ricardo Baeza-Yates</itunes:title><title>70. 
Rights, trust and ethical choice with Ricardo Baeza-Yates</title><link>https://www.machine-ethics.net/podcast/rights-trust-and-ethical-choice-with-ricardo-baeza-yates/</link><itunes:episode>70</itunes:episode><itunes:author>Ben Byford with Ricardo Baeza-Yates</itunes:author><itunes:subtitle>Seventieth episode of Machine Ethics podcast with Ricardo Baeza-Yates</itunes:subtitle><itunes:summary><![CDATA[This episode we talk with Ricardo Baeza-Yates about: Responsible AI, the importance of AI governance, questioning people&#039;s intent to create AGI, robot rights and brain / neural rights, the evolution of intelligence, ethical risk assessment, machine ethics, making ethical choices on behalf of your users, binary notions of trust, stupid uses of AI and more...]]></itunes:summary><description><![CDATA[This episode we talk with Ricardo Baeza-Yates about: Responsible AI, the importance of AI governance, questioning people&#039;s intent to create AGI, robot rights and brain / neural rights, the evolution of intelligence, ethical risk assessment, machine ethics, making ethical choices on behalf of your users, binary notions of trust, stupid uses of AI and more...]]></description><pubDate>Wed, 24 Aug 2022 21:43:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1421/ricardo-illutration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1423/richardo_machine-ethics-podcast.mp3" length="65301428" type="audio/mp3" /><itunes:duration>45:19</itunes:duration><guid>https://www.machine-ethics.net/podcast/rights-trust-and-ethical-choice-with-ricardo-baeza-yates/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI ethics strategy with Reid Blackman</itunes:title><title>69. 
AI ethics strategy with Reid Blackman</title><link>https://www.machine-ethics.net/podcast/ai-ethics-strategy-with-reid-blackman/</link><itunes:episode>69</itunes:episode><itunes:author>Ben Byford with Reid Blackman</itunes:author><itunes:subtitle>Sixty ninth episode of Machine Ethics podcast with Reid Blackman</itunes:subtitle><itunes:summary><![CDATA[In this episode we talk with Reid Blackman about: what is learning? What it means to be worthy of trust, bullsh**t AI principles, company values, purpose and use in decision making, his AI ethics risk strategy book, machine ethics as a fool&#039;s errand, weighing metrics for measuring bias, ethics committees, police and the IRB. And much more...]]></itunes:summary><description><![CDATA[In this episode we talk with Reid Blackman about: what is learning? What it means to be worthy of trust, bullsh**t AI principles, company values, purpose and use in decision making, his AI ethics risk strategy book, machine ethics as a fool&#039;s errand, weighing metrics for measuring bias, ethics committees, police and the IRB. And much more...]]></description><pubDate>Tue, 12 Jul 2022 10:35:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1419/reid-illustrator.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1420/reid-blackman_machine-ethics-podcast.mp3" length="82741149" type="audio/mp3" /><itunes:duration>57:24</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-ethics-strategy-with-reid-blackman/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Ethics of digital worlds with Richard Bartle</itunes:title><title>68. 
Ethics of digital worlds with Richard Bartle</title><link>https://www.machine-ethics.net/podcast/ethics-of-digital-worlds-with-richard-bartle/</link><itunes:episode>68</itunes:episode><itunes:author>Ben Byford with Richard Bartle</itunes:author><itunes:subtitle>Sixty eighth episode of Machine Ethics podcast with Richard Bartle</itunes:subtitle><itunes:summary><![CDATA[Richard Bartle joins us again after his appearance on ep.65 to chat about the metaverse, different ways to design AI-controlled NPCs, the lack of progress of AI in games, ethical considerations of games designers, ethics of AI life, virtualism, whether &#039;smart&#039; AI will happen, robot rights and more...]]></itunes:summary><description><![CDATA[Richard Bartle joins us again after his appearance on ep.65 to chat about the metaverse, different ways to design AI-controlled NPCs, the lack of progress of AI in games, ethical considerations of games designers, ethics of AI life, virtualism, whether &#039;smart&#039; AI will happen, robot rights and more...]]></description><pubDate>Fri, 25 Feb 2022 16:09:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1410/richard-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1415/richard-bartle_machine-ethics-podcast.mp3" length="88739178" type="audio/mp3" /><itunes:duration>01:01:31</itunes:duration><guid>https://www.machine-ethics.net/podcast/ethics-of-digital-worlds-with-richard-bartle/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI Audits with Ryan Carrier</itunes:title><title>67. 
AI Audits with Ryan Carrier</title><link>https://www.machine-ethics.net/podcast/ai-audits-with-ryan-carrier/</link><itunes:episode>67</itunes:episode><itunes:author>Ben Byford with Ryan Carrier</itunes:author><itunes:subtitle>Sixty seventh episode of Machine Ethics podcast with Ryan Carrier</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Ryan Carrier about the positivity of the ForHumanity community, being compelled to do something about AI technologies&#039; negative impact, AI audits and topics including: trust, oversight, governance, privacy, cyber security, bias; creating an infrastructure of trust, disclosing found risks and the ethical decisions, the new industry of AI audits, human wellbeing as the whole point of business and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Ryan Carrier about the positivity of the ForHumanity community, being compelled to do something about AI technologies&#039; negative impact, AI audits and topics including: trust, oversight, governance, privacy, cyber security, bias; creating an infrastructure of trust, disclosing found risks and the ethical decisions, the new industry of AI audits, human wellbeing as the whole point of business and more...]]></description><pubDate>Thu, 24 Feb 2022 15:10:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1409/ryan-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1411/ryan-carrier_machine-ethics-podcast.mp3" length="82394250" type="audio/mp3" /><itunes:duration>57:11</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-audits-with-ryan-carrier/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>2021 in review with Merve Hickok</itunes:title><title>66. 
2021 in review with Merve Hickok</title><link>https://www.machine-ethics.net/podcast/2021-in-review-with-merve-hickok-and-ben-byford/</link><itunes:episode>66</itunes:episode><itunes:author>Ben Byford and Merve Hickok</itunes:author><itunes:subtitle>Sixty sixth episode of Machine Ethics podcast with Merve Hickok and Ben Byford</itunes:subtitle><itunes:summary><![CDATA[This episode Ben and Merve are chatting about 2021–EU AI legislation &amp; harmonising AI product markets through policy, the UNESCO principles, systemic dogma, AI ethics in defence, Reith lectures and Lethal autonomous weapons, demonstrating values / principles and much more...]]></itunes:summary><description><![CDATA[This episode Ben and Merve are chatting about 2021–EU AI legislation &amp; harmonising AI product markets through policy, the UNESCO principles, systemic dogma, AI ethics in defence, Reith lectures and Lethal autonomous weapons, demonstrating values / principles and much more...]]></description><pubDate>Mon, 03 Jan 2022 11:24:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1404/merve-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1408/merve-hickok-2021wrapup.mp3" length="72549385" type="audio/mp3" /><itunes:duration>50:18</itunes:duration><guid>https://www.machine-ethics.net/podcast/2021-in-review-with-merve-hickok-and-ben-byford/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>DeepDive: AI and Games</itunes:title><title>65. 
DeepDive: AI and Games</title><link>https://www.machine-ethics.net/podcast/deepdive-ai-and-games/</link><itunes:episode>65</itunes:episode><itunes:author>Ben Byford with Amandine Flachs, Tommy Thompson and Richard Bartle</itunes:author><itunes:subtitle>Sixty fifth episode of Machine Ethics podcast, a deep dive on AI and Games</itunes:subtitle><itunes:summary><![CDATA[This first Deepdive episode we talk to Amandine Flachs, Tommy Thompson and Richard Bartle about AI in games, its history, its uses and where it&#039;s going. We discover NPCs, games as a test bed for AI research, different game AI techniques, back office uses of AI, job displacement, bad actors and possible futures...]]></itunes:summary><description><![CDATA[This first Deepdive episode we talk to Amandine Flachs, Tommy Thompson and Richard Bartle about AI in games, its history, its uses and where it&#039;s going. We discover NPCs, games as a test bed for AI research, different game AI techniques, back office uses of AI, job displacement, bad actors and possible futures...]]></description><pubDate>Fri, 10 Dec 2021 15:23:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1399/aiandgames-sq.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1400/deepdive_ai-in-games_machine-ethics.mp3" length="50793502" type="audio/mp3" /><itunes:duration>35:16</itunes:duration><guid>https://www.machine-ethics.net/podcast/deepdive-ai-and-games/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Emotion detection with Andrew McStay</itunes:title><title>64. 
Emotion detection with Andrew McStay</title><link>https://www.machine-ethics.net/podcast/emotion-detection-with-andrew-mcstay/</link><itunes:episode>64</itunes:episode><itunes:author>Ben Byford with Andrew McStay</itunes:author><itunes:subtitle>Sixty fourth episode of Machine Ethics podcast with Andrew McStay</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Andrew McStay about emotional human machine interface, emotion face and voice detection, emotion detection and hiring–and the possibility of gaming these systems, interactive AI kids toys, the space between an ethical subject and an object in AI systems, raising children in an AI world, cultural differences in emotional profiling, emotional AI regulation.]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Andrew McStay about emotional human machine interface, emotion face and voice detection, emotion detection and hiring–and the possibility of gaming these systems, interactive AI kids toys, the space between an ethical subject and an object in AI systems, raising children in an AI world, cultural differences in emotional profiling, emotional AI regulation.]]></description><pubDate>Mon, 15 Nov 2021 21:30:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1394/andrew-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1398/andrew-mcstay_machine-ethics-podcast.mp3" length="72664083" type="audio/mp3" /><itunes:duration>50:23</itunes:duration><guid>https://www.machine-ethics.net/podcast/emotion-detection-with-andrew-mcstay/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI readiness with Tim El-Sheikh</itunes:title><title>63. 
AI readiness with Tim El-Sheikh</title><link>https://www.machine-ethics.net/podcast/practical-ai-with-tim-el-sheik/</link><itunes:episode>63</itunes:episode><itunes:author>Ben Byford with Tim El-Sheikh</itunes:author><itunes:subtitle>Sixty third episode of Machine Ethics podcast with Tim El-Sheikh</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re talking with Tim El-Sheikh of Nebuli.com. We chat about definitions of intelligence and augmented intelligence, ethical AI as the smarter AI, importance of a business&#039;s AI strategy and getting data ready, AGI and what is consciousness? Human intuition, privacy as a human right and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re talking with Tim El-Sheikh of Nebuli.com. We chat about definitions of intelligence and augmented intelligence, ethical AI as the smarter AI, importance of a business&#039;s AI strategy and getting data ready, AGI and what is consciousness? Human intuition, privacy as a human right and more...]]></description><pubDate>Sun, 10 Oct 2021 22:28:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1392/tim-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1393/tim-el-sheikh_machine-ethics-podcast.mp3" length="87761431" type="audio/mp3" /><itunes:duration>01:00:53</itunes:duration><guid>https://www.machine-ethics.net/podcast/practical-ai-with-tim-el-sheik/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What excites you about AI? Vol.1</itunes:title><title>62. What excites you about AI? 
Vol.1</title><link>https://www.machine-ethics.net/podcast/what-excites-you-about-ai/</link><itunes:episode>62</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Sixty second episode of Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[In this bonus compilation episode we look back at our interviewees answers to the question: What excites you about our AI mediated future? We chat about rethinking our responsibility towards our world, algorithms that work for everyone not just a few, social justice, solving coordination problems and humanitarian problems, growing as a humanity, building with the next generation in mind, and more...]]></itunes:summary><description><![CDATA[In this bonus compilation episode we look back at our interviewees answers to the question: What excites you about our AI mediated future? We chat about rethinking our responsibility towards our world, algorithms that work for everyone not just a few, social justice, solving coordination problems and humanitarian problems, growing as a humanity, building with the next generation in mind, and more...]]></description><pubDate>Fri, 03 Sep 2021 15:58:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1384/excites-image.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1385/what-excites-you_machine-ethics-podcast.mp3" length="25097478" type="audio/mp3" /><itunes:duration>17:25</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-excites-you-about-ai/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Welcome to the Machine Ethics podcast</itunes:title><title>61. 
Welcome to the Machine Ethics podcast</title><link>https://www.machine-ethics.net/podcast/welcome-to-the-machine-ethics-podcast/</link><itunes:episode>61</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Introduction to the Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[Short introduction to the podcast: what it is about, when it started, and how to get involved.]]></itunes:summary><description><![CDATA[Short introduction to the podcast: what it is about, when it started, and how to get involved.]]></description><pubDate>Thu, 26 Aug 2021 16:35:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1380/screenshot_2021-08-26_at_16_32_07.1400x1400.png" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1383/teaser_audio-only-1.mp3" length="4707101" type="audio/mp3" /><itunes:duration>03:13</itunes:duration><guid>https://www.machine-ethics.net/podcast/welcome-to-the-machine-ethics-podcast/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Responsible AI Research with Madhulika Srikumar</itunes:title><title>60. Responsible AI Research with Madhulika Srikumar</title><link>https://www.machine-ethics.net/podcast/responsible-ai-research-with-madhulika-srikumar/</link><itunes:episode>60</itunes:episode><itunes:author>Ben Byford with Madhulika Srikumar</itunes:author><itunes:subtitle>Sixtieth episode of Machine Ethics podcast with Madhulika Srikumar</itunes:subtitle><itunes:summary><![CDATA[This time we&#039;re talking AI research with Madhulika Srikumar of Partnership on AI. 
We chat about managing the risks of AI research, how should the AI community think about the consequences of their research, documenting best practices for AI, OpenAI&#039;s GPT-2 research disclosure example, considering unintended consequences &amp; negative downstream outcomes, considering what your research may actually contribute, promoting scientific openness, proportional ethical reflection, research social impact assessments and more...]]></itunes:summary><description><![CDATA[This time we&#039;re talking AI research with Madhulika Srikumar of Partnership on AI. We chat about managing the risks of AI research, how should the AI community think about the consequences of their research, documenting best practices for AI, OpenAI&#039;s GPT-2 research disclosure example, considering unintended consequences &amp; negative downstream outcomes, considering what your research may actually contribute, promoting scientific openness, proportional ethical reflection, research social impact assessments and more...]]></description><pubDate>Wed, 25 Aug 2021 10:05:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1377/madhu-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1379/madhulika-srikumar_machine-ethics-podcast-1.mp3" length="58607386" type="audio/mp3" /><itunes:duration>40:38</itunes:duration><guid>https://www.machine-ethics.net/podcast/responsible-ai-research-with-madhulika-srikumar/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What scares you about AI? Vol.1</itunes:title><title>59. What scares you about AI? 
Vol.1</title><link>https://www.machine-ethics.net/podcast/what-scares-you-about-ai/</link><itunes:episode>59</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Fifty ninth episode of Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[In this bonus compilation episode we look back at our interviewees answers to the question: What scares you about our AI mediated future? We chat gender imbalance and lack of diversity, digital personhood, climate change, ubiquitous surveillance, deep-fakes, people misusing AI, human hubris, capitalism getting in the way and more...]]></itunes:summary><description><![CDATA[In this bonus compilation episode we look back at our interviewees answers to the question: What scares you about our AI mediated future? We chat gender imbalance and lack of diversity, digital personhood, climate change, ubiquitous surveillance, deep-fakes, people misusing AI, human hubris, capitalism getting in the way and more...]]></description><pubDate>Tue, 10 Aug 2021 09:04:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1370/sacres.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1376/ai-scares_machine-ethics-podcast.mp3" length="24120632" type="audio/mp3" /><itunes:duration>16:44</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-scares-you-about-ai/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI regulation with Lofred Madzou</itunes:title><title>58. 
AI regulation with Lofred Madzou</title><link>https://www.machine-ethics.net/podcast/ai-regulation-with-lofred-madzou/</link><itunes:episode>58</itunes:episode><itunes:author>Ben Byford with Lofred Madzou</itunes:author><itunes:subtitle>Fifty eighth episode of Machine Ethics podcast with Lofred Madzou</itunes:subtitle><itunes:summary><![CDATA[We chat with Lofred Madzou about AI as a journey to understand ourselves through smart machines, scepticism about wholesale job loss, understanding that “you are not your data”, dissecting the European proposal for AI regulation, examples of types of AI activities under regulation, the spirit of the regulation - human rights centric, risk based approaches, infringement exposition and compliance...]]></itunes:summary><description><![CDATA[We chat with Lofred Madzou about AI as a journey to understand ourselves through smart machines, scepticism about wholesale job loss, understanding that “you are not your data”, dissecting the European proposal for AI regulation, examples of types of AI activities under regulation, the spirit of the regulation - human rights centric, risk based approaches, infringement exposition and compliance...]]></description><pubDate>Mon, 19 Jul 2021 10:51:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1367/lofred-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1368/lofred-madzou_machine-ethics-podcast.mp3" length="65315355" type="audio/mp3" /><itunes:duration>45:19</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-regulation-with-lofred-madzou/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Value Sensitive Design with Steven Umbrello</itunes:title><title>57. 
Value Sensitive Design with Steven Umbrello</title><link>https://www.machine-ethics.net/podcast/vsd-with-steven-umbrello/</link><itunes:episode>57</itunes:episode><itunes:author>Ben Byford with Steven Umbrello</itunes:author><itunes:subtitle>Fifty seventh episode of Machine Ethics podcast with Steven Umbrello</itunes:subtitle><itunes:summary><![CDATA[We&#039;re talking with Steven Umbrello about transhumanism, his passion for philosophy and its practical applications, Value Sensitive Design as a modular design practice, technologies co-constructing society, integrating VSD using agile workflows, issues of principles, moral imagination and more...]]></itunes:summary><description><![CDATA[We&#039;re talking with Steven Umbrello about transhumanism, his passion for philosophy and its practical applications, Value Sensitive Design as a modular design practice, technologies co-constructing society, integrating VSD using agile workflows, issues of principles, moral imagination and more...]]></description><pubDate>Thu, 01 Jul 2021 09:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1353/steven-illustration-2.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1366/steven-umbrello_machine-ethics-podcast.mp3" length="72410652" type="audio/mp3" /><itunes:duration>50:15</itunes:duration><guid>https://www.machine-ethics.net/podcast/vsd-with-steven-umbrello/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What is AI? Vol.2</itunes:title><title>56. What is AI? 
Vol.2</title><link>https://www.machine-ethics.net/podcast/what-is-ai-2/</link><itunes:episode>56</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Fifty sixth episode of Machine Ethics podcast - retrospective look at What is AI?</itunes:subtitle><itunes:summary><![CDATA[This episode is our second bonus compilation of answers from previous years of interviews asking the question: What is AI? We hear from past interviewees Jess Smith, Rishal Hurbans, Jacob Turner, Cennydd Bowles, Joanna J Bryson, Damien Williams, Olivia Gamblin, David Gunkel, Bertram Malle, David Yakobovitch, Luciano Floridi, Lydia Nicholas.]]></itunes:summary><description><![CDATA[This episode is our second bonus compilation of answers from previous years of interviews asking the question: What is AI? We hear from past interviewees Jess Smith, Rishal Hurbans, Jacob Turner, Cennydd Bowles, Joanna J Bryson, Damien Williams, Olivia Gamblin, David Gunkel, Bertram Malle, David Yakobovitch, Luciano Floridi, Lydia Nicholas.]]></description><pubDate>Wed, 16 Jun 2021 16:16:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1360/what-is-ai-thumbnail.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1362/what-is-ai2_machine-ethics-podcast.mp3" length="35272293" type="audio/mp3" /><itunes:duration>24:28</itunes:duration><guid>https://www.machine-ethics.net/podcast/what-is-ai-2/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Speculative design with Phil Balagtas</itunes:title><title>55. 
Speculative design with Phil Balagtas</title><link>https://www.machine-ethics.net/podcast/speculative-design-with-phil-balagtas/</link><itunes:episode>55</itunes:episode><itunes:author>Ben Byford with Phil Balagtas</itunes:author><itunes:subtitle>Fifty fifth episode of Machine Ethics podcast with Phil Balagtas</itunes:subtitle><itunes:summary><![CDATA[We&#039;re chatting with Phil Balagtas about speculative &amp; critical design, speculative design as a strategy tool, using design as a what-if tool, or a story to strive for, The Design Futures Initiative, doing meaningful work, and getting to real trust in mission-critical AI...]]></itunes:summary><description><![CDATA[We&#039;re chatting with Phil Balagtas about speculative &amp; critical design, speculative design as a strategy tool, using design as a what-if tool, or a story to strive for, The Design Futures Initiative, doing meaningful work, and getting to real trust in mission-critical AI...]]></description><pubDate>Tue, 01 Jun 2021 14:55:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1355/phil-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1359/phil-balagtas_machine-ethics-podcast.mp3" length="65043975" type="audio/mp3" /><itunes:duration>45:06</itunes:duration><guid>https://www.machine-ethics.net/podcast/speculative-design-with-phil-balagtas/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The business of AI ethics with Josie Young</itunes:title><title>54. 
The business of AI ethics with Josie Young</title><link>https://www.machine-ethics.net/podcast/the-business-of-ai-ethics-with-josie-young/</link><itunes:episode>54</itunes:episode><itunes:author>Ben Byford with Josie Young</itunes:author><itunes:subtitle>Fifty fourth episode of Machine Ethics podcast with Josie Young</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with the amazing Josie Young on making businesses more efficient, how the AI ethics landscape changed over the last 5 years, ethics roles and collaborations, feminist AI and chatbots, responsible AI at Microsoft, ethics pushback from teams and selling in AI ethics, disinformation’s risk to democracy and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with the amazing Josie Young on making businesses more efficient, how the AI ethics landscape changed over the last 5 years, ethics roles and collaborations, feminist AI and chatbots, responsible AI at Microsoft, ethics pushback from teams and selling in AI ethics, disinformation’s risk to democracy and more...]]></description><pubDate>Sat, 27 Mar 2021 11:16:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1343/josie-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1352/josie-young_machine-ethics-podcast.mp3" length="74179369" type="audio/mp3" /><itunes:duration>51:28</itunes:duration><guid>https://www.machine-ethics.net/podcast/the-business-of-ai-ethics-with-josie-young/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Comedy and AI with Anthony Jeannot</itunes:title><title>53. 
Comedy and AI with Anthony Jeannot</title><link>https://www.machine-ethics.net/podcast/comedy-and-ai-with-anthony-jeannot/</link><itunes:episode>53</itunes:episode><itunes:author>Ben Byford with Anthony Jeannot</itunes:author><itunes:subtitle>Fifty third episode of Machine Ethics podcast with Anthony Jeannot</itunes:subtitle><itunes:summary><![CDATA[A laid-back episode of the podcast where Anthony and I chat about Netflix and recommender systems, finding comedy in AI, AI-written movies and theatre, human content moderation, bringing an AI Ben back from the dead, constructing jokes recursively and much more...]]></itunes:summary><description><![CDATA[A laid-back episode of the podcast where Anthony and I chat about Netflix and recommender systems, finding comedy in AI, AI-written movies and theatre, human content moderation, bringing an AI Ben back from the dead, constructing jokes recursively and much more...]]></description><pubDate>Mon, 22 Mar 2021 16:21:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1340/anthony-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1341/anthony-jeannot_machine-ethics-podcast.mp3" length="64081292" type="audio/mp3" /><itunes:duration>44:30</itunes:duration><guid>https://www.machine-ethics.net/podcast/comedy-and-ai-with-anthony-jeannot/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Algorithmic discrimination with Damien Williams</itunes:title><title>52. 
Algorithmic discrimination with Damien Williams</title><link>https://www.machine-ethics.net/podcast/algorithmic-discrimination-with-damien-williams/</link><itunes:episode>52</itunes:episode><itunes:author>Ben Byford with Damien Williams</itunes:author><itunes:subtitle>Fifty second episode of Machine Ethics podcast with Damien Williams</itunes:subtitle><itunes:summary><![CDATA[This episode we chat with Damien Williams about types of human and algorithmic discrimination, human-technology expectations and norms, algorithms and benefit services, the contextual nature of sample data, is face recognition even a good idea? Should we be scared that GPT-3 will take our jobs and the cultural value of jobs, encoding values into autonomous beings, culture and mothering AI, AI and dogma, and more...]]></itunes:summary><description><![CDATA[This episode we chat with Damien Williams about types of human and algorithmic discrimination, human-technology expectations and norms, algorithms and benefit services, the contextual nature of sample data, is face recognition even a good idea? Should we be scared that GPT-3 will take our jobs and the cultural value of jobs, encoding values into autonomous beings, culture and mothering AI, AI and dogma, and more...]]></description><pubDate>Mon, 01 Mar 2021 09:09:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1335/damian-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1339/damien-williams_machine-ethics-podcast.mp3" length="82645796" type="audio/mp3" /><itunes:duration>57:21</itunes:duration><guid>https://www.machine-ethics.net/podcast/algorithmic-discrimination-with-damien-williams/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AGI Safety and Alignment with Robert Miles</itunes:title><title>51. 
AGI Safety and Alignment with Robert Miles</title><link>https://www.machine-ethics.net/podcast/agi-safety-and-alignment-with-robert-miles/</link><itunes:episode>51</itunes:episode><itunes:author>Ben Byford with Robert Miles</itunes:author><itunes:subtitle>Fifty first episode of Machine Ethics podcast with Robert Miles</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI where its input is the world, when predictions of AI sound like science fiction, covering terms like: AI safety, the control problem, AI alignment, specification problem; the lack of people working in AI alignment, AGI doesn’t need to be conscious, and more]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI where its input is the world, when predictions of AI sound like science fiction, covering terms like: AI safety, the control problem, AI alignment, specification problem; the lack of people working in AI alignment, AGI doesn’t need to be conscious, and more]]></description><pubDate>Wed, 13 Jan 2021 10:09:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1329/rob-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1334/rob-miles_machine-ethics-podcast.mp3" length="80804155" type="audio/mp3" /><itunes:duration>56:04</itunes:duration><guid>https://www.machine-ethics.net/podcast/agi-safety-and-alignment-with-robert-miles/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Privacy and the end of the data economy with Carissa Véliz</itunes:title><title>50. 
Privacy and the end of the data economy with Carissa Véliz</title><link>https://www.machine-ethics.net/podcast/privacy-and-the-end-of-the-data-economy-with-carissa-veliz/</link><itunes:episode>50</itunes:episode><itunes:author>Ben Byford with Carissa Véliz</itunes:author><itunes:subtitle>Fiftieth episode of Machine Ethics podcast with Carissa Véliz</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Carissa Véliz on the transforming of power, how personal data is toxic, end of the data economy, dangers of privacy violations, differential privacy, what you can do to help, ethics committees and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Carissa Véliz on the transforming of power, how personal data is toxic, end of the data economy, dangers of privacy violations, differential privacy, what you can do to help, ethics committees and more...]]></description><pubDate>Thu, 31 Dec 2020 08:01:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1323/carissa-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1328/carissa-veliz_machine-ethics-podcast.mp3" length="65769556" type="audio/mp3" /><itunes:duration>45:37</itunes:duration><guid>https://www.machine-ethics.net/podcast/privacy-and-the-end-of-the-data-economy-with-carissa-veliz/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>2020 rambling chat with Ben Gilburt and Ben Byford</itunes:title><title>49. 
2020 rambling chat with Ben Gilburt and Ben Byford</title><link>https://www.machine-ethics.net/podcast/2020/</link><itunes:episode>49</itunes:episode><itunes:author>Ben Byford and Ben Gilburt</itunes:author><itunes:subtitle>Forty ninth episode of Machine Ethics podcast with Ben Gilburt and Ben Byford</itunes:subtitle><itunes:summary><![CDATA[This episode Ben and Ben are chatting about 2020 - Timnit Gebru leaving Google, the promise of AI and COVID-19, Kaggle&#039;s COVID competition, GPT-3, test and trace apps and privacy, AI Ethics bookclub, AI ethics courses, when transparency is good or bad, AlphaFold, and more...]]></itunes:summary><description><![CDATA[This episode Ben and Ben are chatting about 2020 - Timnit Gebru leaving Google, the promise of AI and COVID-19, Kaggle&#039;s COVID competition, GPT-3, test and trace apps and privacy, AI Ethics bookclub, AI ethics courses, when transparency is good or bad, AlphaFold, and more...]]></description><pubDate>Wed, 30 Dec 2020 16:59:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1321/ben_ben-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1322/2020_machine-ethics-podcast.mp3" length="97236827" type="audio/mp3" /><itunes:duration>01:07:27</itunes:duration><guid>https://www.machine-ethics.net/podcast/2020/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Jessie Smith co-designing AI</itunes:title><title>48. 
Jessie Smith co-designing AI</title><link>https://www.machine-ethics.net/podcast/jessie-smith-co-designing-ai/</link><itunes:episode>48</itunes:episode><itunes:author>Ben Byford with Jess Smith</itunes:author><itunes:subtitle>Forty eighth episode of Machine Ethics podcast with Jess Smith</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with Jess Smith about the Radical AI podcast and defining the word radical, what is AI - non-living ability to learn… maybe, AI consciousness, the responsibility of technologists, robot rights, what makes us human, creativity and more...]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with Jess Smith about the Radical AI podcast and defining the word radical, what is AI - non-living ability to learn… maybe, AI consciousness, the responsibility of technologists, robot rights, what makes us human, creativity and more...]]></description><pubDate>Wed, 25 Nov 2020 20:19:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1315/jess-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1319/jess-smith_machine-ethics-podcast.mp3" length="72590132" type="audio/mp3" /><itunes:duration>50:23</itunes:duration><guid>https://www.machine-ethics.net/podcast/jessie-smith-co-designing-ai/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Robot Rights with David Gunkel</itunes:title><title>47. 
Robot Rights with David Gunkel</title><link>https://www.machine-ethics.net/podcast/robot-rights-with-david-gunkel/</link><itunes:episode>47</itunes:episode><itunes:author>Ben Byford with David Gunkel</itunes:author><itunes:subtitle>Forty seventh episode of Machine Ethics podcast with David Gunkel</itunes:subtitle><itunes:summary><![CDATA[This episode we&#039;re chatting with David Gunkel on AI ideologies, why write the Robot Rights book, what are rights and categories of rights, computer ethics and hitchBOT, anthropomorphising as a human feature, supporting environmental rights through this endeavour of robot rights, relational ethics, and acknowledging the Western ethical viewpoint.]]></itunes:summary><description><![CDATA[This episode we&#039;re chatting with David Gunkel on AI ideologies, why write the Robot Rights book, what are rights and categories of rights, computer ethics and hitchBOT, anthropomorphising as a human feature, supporting environmental rights through this endeavour of robot rights, relational ethics, and acknowledging the Western ethical viewpoint.]]></description><pubDate>Tue, 20 Oct 2020 16:13:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1309/gunkel-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1310/david-gunkel_machine-ethics-podcast.mp3" length="79820934" type="audio/mp3" /><itunes:duration>55:22</itunes:duration><guid>https://www.machine-ethics.net/podcast/robot-rights-with-david-gunkel/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Belief Systems and AI with Dylan Doyle-Burke</itunes:title><title>46. 
Belief Systems and AI with Dylan Doyle-Burke</title><link>https://www.machine-ethics.net/podcast/belief-systems-and-ai-with-dylan-doyle-burke/</link><itunes:episode>46</itunes:episode><itunes:author>Ben Byford with Dylan Doyle-Burke</itunes:author><itunes:subtitle>Forty sixth episode of Machine Ethics podcast with Dylan Doyle-Burke</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting with Dylan Doyle-Burke of the Radical AI podcast about starting the podcast, new religions and how systems of belief relate to AI, faith and digital participation, digital death and memorial, what does it mean to be human, and much more...]]></itunes:summary><description><![CDATA[This month we&#039;re chatting with Dylan Doyle-Burke of the Radical AI podcast about starting the podcast, new religions and how systems of belief relate to AI, faith and digital participation, digital death and memorial, what does it mean to be human, and much more...]]></description><pubDate>Sun, 04 Oct 2020 11:28:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1303/dylan-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1308/dylan-doyle-burke-machine-ethics-podcast.mp3" length="78051102" type="audio/mp3" /><itunes:duration>54:09</itunes:duration><guid>https://www.machine-ethics.net/podcast/belief-systems-and-ai-with-dylan-doyle-burke/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Responsible AI with Maria Axente</itunes:title><title>45. 
Responsible AI with Maria Axente</title><link>https://www.machine-ethics.net/podcast/responsible-ai-with-maria-axente/</link><itunes:episode>45</itunes:episode><itunes:author>Ben Byford with Maria Luciana Axente</itunes:author><itunes:subtitle>Forty fifth episode of Machine Ethics podcast with Maria Luciana Axente</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re chatting with Maria Luciana Axente about responsible AI, defining AI ethics terms and collaboration, where does the interest in AI ethics come from within organisations, how ethics for AI is related to good business outcomes, the connection of ethics and risk and much more...]]></itunes:summary><description><![CDATA[This month we&#039;re chatting with Maria Luciana Axente about responsible AI, defining AI ethics terms and collaboration, where does the interest in AI ethics come from within organisations, how ethics for AI is related to good business outcomes, the connection of ethics and risk and much more...]]></description><pubDate>Mon, 31 Aug 2020 14:45:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1299/maria-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1302/maria-axente_machine-ethics-podcast.mp3" length="90706762" type="audio/mp3" /><itunes:duration>01:02:55</itunes:duration><guid>https://www.machine-ethics.net/podcast/responsible-ai-with-maria-axente/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Moral Machines with Rebecca Raper</itunes:title><title>44. 
Moral Machines with Rebecca Raper</title><link>https://www.machine-ethics.net/podcast/moral-machines-with-rebecca-raper/</link><itunes:episode>44</itunes:episode><itunes:author>Ben Byford with Rebecca Raper</itunes:author><itunes:subtitle>Forty fourth episode of Machine Ethics podcast with Rebecca Raper</itunes:subtitle><itunes:summary><![CDATA[This month we go back to our roots with an episode about Machine Ethics with Rebecca Raper. We chat about moral machines and why we make them, morals as constraints, moral capacity and approaches to machine ethics, machine moral ontologies, legislation vs innovation and more...]]></itunes:summary><description><![CDATA[This month we go back to our roots with an episode about Machine Ethics with Rebecca Raper. We chat about moral machines and why we make them, morals as constraints, moral capacity and approaches to machine ethics, machine moral ontologies, legislation vs innovation and more...]]></description><pubDate>Tue, 04 Aug 2020 15:17:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1295/rebecca-illustration-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1298/rebecca-raper_machine-ethics-podcast.mp3" length="76517397" type="audio/mp3" /><itunes:duration>53:03</itunes:duration><guid>https://www.machine-ethics.net/podcast/moral-machines-with-rebecca-raper/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Reskilling with David Yakobovitch</itunes:title><title>43. 
Reskilling with David Yakobovitch</title><link>https://www.machine-ethics.net/podcast/reskilling-with-david-yakobovitch/</link><itunes:episode>43</itunes:episode><itunes:author>Ben Byford with David Yakobovitch</itunes:author><itunes:subtitle>Forty third episode of Machine Ethics podcast with David Yakobovitch</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re zooming with David Yakobovitch, chatting about data science education, where is the industry going, the importance of data protection and ethics, transhumanism and discrimination, reimagining the world after COVID and much more.]]></itunes:summary><description><![CDATA[This month we&#039;re zooming with David Yakobovitch, chatting about data science education, where is the industry going, the importance of data protection and ethics, transhumanism and discrimination, reimagining the world after COVID and much more.]]></description><pubDate>Tue, 30 Jun 2020 19:56:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1290/david-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1291/david-yakobovitch_machine-ethics-podcast.mp3" length="65407470" type="audio/mp3" /><itunes:duration>45:24</itunes:duration><guid>https://www.machine-ethics.net/podcast/reskilling-with-david-yakobovitch/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Probability &amp; moral responsibility with Olivia Gambelin</itunes:title><title>42. 
Probability &amp; moral responsibility with Olivia Gambelin</title><link>https://www.machine-ethics.net/podcast/probability-moral-responsibility-with-olivia-gambelin/</link><itunes:episode>42</itunes:episode><itunes:author>Ben Byford with Olivia Gambelin</itunes:author><itunes:subtitle>Forty second episode of Machine Ethics podcast with Olivia Gambelin</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re speaking to Olivia Gambelin about: what should and shouldn&#039;t be automated, the importance of human connection, call for ethics, what are ethics, where is value created in data, probability intuition of automated cars and the moral gap, and more.]]></itunes:summary><description><![CDATA[This month we&#039;re speaking to Olivia Gambelin about: what should and shouldn&#039;t be automated, the importance of human connection, call for ethics, what are ethics, where is value created in data, probability intuition of automated cars and the moral gap, and more.]]></description><pubDate>Mon, 04 May 2020 13:21:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1288/olivia-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1289/olivia-gambelin_machine-ethics-podcast.mp3" length="74791510" type="audio/mp3" /><itunes:duration>51:53</itunes:duration><guid>https://www.machine-ethics.net/podcast/probability-moral-responsibility-with-olivia-gambelin/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Art &amp; AI with Eva Jäger &amp; Mercedes Bunz</itunes:title><title>41. 
Art &amp; AI with Eva Jäger &amp; Mercedes Bunz</title><link>https://www.machine-ethics.net/podcast/art-and-ai-with-eva-and-mercedes/</link><itunes:episode>41</itunes:episode><itunes:author>Ben Byford with Eva Jäger and Mercedes Bunz</itunes:author><itunes:subtitle>Forty first episode of Machine Ethics podcast with Eva Jäger and Mercedes Bunz</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re speaking to both Eva Jäger &amp; Mercedes Bunz on the topic of AI in Art. We discuss AI design interfaces, artificial stupidity, AI and the art market, the curation of AI art, the Creative AI Lab at the Serpentine Gallery, a space for learning, collaboration, work in progress and tools...]]></itunes:summary><description><![CDATA[This month we&#039;re speaking to both Eva Jäger &amp; Mercedes Bunz on the topic of AI in Art. We discuss AI design interfaces, artificial stupidity, AI and the art market, the curation of AI art, the Creative AI Lab at the Serpentine Gallery, a space for learning, collaboration, work in progress and tools...]]></description><pubDate>Mon, 20 Apr 2020 14:39:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1284/eva-mercedes-portriats-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1287/eva-mercedes_machine-ethics-podcast.mp3" length="55273499" type="audio/mp3" /><itunes:duration>38:20</itunes:duration><guid>https://www.machine-ethics.net/podcast/art-and-ai-with-eva-and-mercedes/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI alignment with Rohin Shah</itunes:title><title>40. 
AI alignment with Rohin Shah</title><link>https://www.machine-ethics.net/podcast/ai-alignment-with-rohin-shah/</link><itunes:episode>40</itunes:episode><itunes:author>Ben Gilburt with Rohin Shah</itunes:author><itunes:subtitle>Fortieth episode of Machine Ethics podcast with Rohin Shah</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re talking to Rohin Shah about alignment problems in AI, constraining AI behaviour, current AI vs future AI, recommendation algorithms and extremism, appropriate uses of AI, the fuzziness of fairness, and Rohin’s love of coordination problems.]]></itunes:summary><description><![CDATA[This month we&#039;re talking to Rohin Shah about alignment problems in AI, constraining AI behaviour, current AI vs future AI, recommendation algorithms and extremism, appropriate uses of AI, the fuzziness of fairness, and Rohin’s love of coordination problems.]]></description><pubDate>Wed, 11 Mar 2020 09:23:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1276/rohin-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1280/rohin-shah_machine-ethics-podcast.mp3" length="54407209" type="audio/mp3" /><itunes:duration>37:46</itunes:duration><guid>https://www.machine-ethics.net/podcast/ai-alignment-with-rohin-shah/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Our AI intentions with Rishal Hurbans</itunes:title><title>39. 
Our AI intentions with Rishal Hurbans</title><link>https://www.machine-ethics.net/podcast/rishal-hurbans/</link><itunes:episode>39</itunes:episode><itunes:author>Ben Byford with Rishal Hurbans</itunes:author><itunes:subtitle>Thirty ninth episode of Machine Ethics podcast with Rishal Hurbans</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m talking to the lovely Rishal Hurbans about the AI scene in South Africa, ethics as an important part of the intro to the book Grokking AI Algorithms, what Black Mirror can teach us about AI, going past ethical principles, why expert systems were omitted from the book and much more.]]></itunes:summary><description><![CDATA[This month I&#039;m talking to the lovely Rishal Hurbans about the AI scene in South Africa, ethics as an important part of the intro to the book Grokking AI Algorithms, what Black Mirror can teach us about AI, going past ethical principles, why expert systems were omitted from the book and much more.]]></description><pubDate>Fri, 07 Feb 2020 15:11:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1267/rishal_illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1272/rishal-hurbans-machine-ethics-podcast.mp3" length="66246860" type="audio/mp3" /><itunes:duration>45:57</itunes:duration><guid>https://www.machine-ethics.net/podcast/rishal-hurbans/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Automation and Utopia with John Danaher</itunes:title><title>38. 
Automation and Utopia with John Danaher</title><link>https://www.machine-ethics.net/podcast/automation-and-utopia-with-john-danaher/</link><itunes:episode>38</itunes:episode><itunes:author>Ben Byford with John Danaher</itunes:author><itunes:subtitle>Thirty eighth episode of Machine Ethics podcast with John Danaher</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m talking to the prolific John Danaher about cyborg and digital utopias, why you should hate your job, the idea of robot tax, behaviourism, and theories of moral standing.]]></itunes:summary><description><![CDATA[This month I&#039;m talking to the prolific John Danaher about cyborg and digital utopias, why you should hate your job, the idea of robot tax, behaviourism, and theories of moral standing.]]></description><pubDate>Tue, 14 Jan 2020 11:06:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1268/john-danaher-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1270/john-danaher_machine-ethics-podcast.mp3" length="64159009" type="audio/mp3" /><itunes:duration>44:29</itunes:duration><guid>https://www.machine-ethics.net/podcast/automation-and-utopia-with-john-danaher/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Social robots with Bertram Malle</itunes:title><title>37. 
Social robots with Bertram Malle</title><link>https://www.machine-ethics.net/podcast/social-robots-with-bertram-malle/</link><itunes:episode>37</itunes:episode><itunes:author>Ben Byford with Bertram Malle</itunes:author><itunes:subtitle>Thirty seventh episode of Machine Ethics podcast with Bertram Malle</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re talking to the gracious Bertram Malle about social robots, whether people react differently to robots in different contexts, how can we build trust and destroy it, explainable AI, what is a moral robot, possible futures, and much more...]]></itunes:summary><description><![CDATA[This month we&#039;re talking to the gracious Bertram Malle about social robots, whether people react differently to robots in different contexts, how can we build trust and destroy it, explainable AI, what is a moral robot, possible futures, and much more...]]></description><pubDate>Tue, 26 Nov 2019 11:49:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1262/bertram_podcast.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1266/bertram-malle_machine-ethics-podcast.mp3" length="87960637" type="audio/mp3" /><itunes:duration>01:01:03</itunes:duration><guid>https://www.machine-ethics.net/podcast/social-robots-with-bertram-malle/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Metrics for wellbeing with John C. Havens</itunes:title><title>36. Metrics for wellbeing with John C. Havens</title><link>https://www.machine-ethics.net/podcast/metrics-for-wellbeing-john-c-havens/</link><itunes:episode>36</itunes:episode><itunes:author>Ben Byford with John C. Havens</itunes:author><itunes:subtitle>Thirty sixth episode of Machine Ethics podcast with John C. Havens</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re talking to John C. 
Havens about his work on IEEE&#039;s Ethically Aligned Design, human rights &amp; access to data and data agency, signalling a person&#039;s values with respect to their personal data, GDP being an insignificant metric for our future, making sure no one is left out of the room when designing technology, and more...]]></itunes:summary><description><![CDATA[This month we&#039;re talking to John C. Havens about his work on IEEE&#039;s Ethically Aligned Design, human rights &amp; access to data and data agency, signalling a person&#039;s values with respect to their personal data, GDP being an insignificant metric for our future, making sure no one is left out of the room when designing technology, and more...]]></description><pubDate>Fri, 15 Nov 2019 10:49:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1258/john-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1261/john-c-havens_machine-ethics-podcast.mp3" length="79286776" type="audio/mp3" /><itunes:duration>55:01</itunes:duration><guid>https://www.machine-ethics.net/podcast/metrics-for-wellbeing-john-c-havens/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Moral reasoning with Marija Slavkovik</itunes:title><title>35. 
Moral reasoning with Marija Slavkovik</title><link>https://www.machine-ethics.net/podcast/moral-reasoning-with-marija-slavkovik/</link><itunes:episode>35</itunes:episode><itunes:author>Ben Byford with Marija Slavkovik</itunes:author><itunes:subtitle>Thirty fifth episode of Machine Ethics podcast with Marija Slavkovik</itunes:subtitle><itunes:summary><![CDATA[This month we&#039;re talking to the amazing Marija Slavkovik about a new language for talking about machine intelligence, expert systems and AI history, unchecked bot networks on the internet, how our technology doesn’t work for us, collective reasoning &amp; judgment aggregation.]]></itunes:summary><description><![CDATA[This month we&#039;re talking to the amazing Marija Slavkovik about a new language for talking about machine intelligence, expert systems and AI history, unchecked bot networks on the internet, how our technology doesn’t work for us, collective reasoning &amp; judgment aggregation.]]></description><pubDate>Tue, 24 Sep 2019 11:58:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1251/marija-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1257/marija-slavkovik_machine-ethics-podcast.mp3" length="75099360" type="audio/mp3" /><itunes:duration>52:06</itunes:duration><guid>https://www.machine-ethics.net/podcast/moral-reasoning-with-marija-slavkovik/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI for Humans with Rob McCargow</itunes:title><title>34. AI for Humans with Rob McCargow</title><link>https://www.machine-ethics.net/podcast/rob-mccargow/</link><itunes:episode>34</itunes:episode><itunes:author>Ben Byford with Rob McCargow</itunes:author><itunes:subtitle>Thirty fourth episode of Machine Ethics podcast with Rob McCargow</itunes:subtitle><itunes:summary><![CDATA[This month I have a lovely chat with Rob McCargow, Director of AI at PwC. 
We chat about AI modelling unintended consequences, AI ethics audits, working with companies with dubious intentions, what we should be teaching our children, a recipe for an AI future mitigating job displacement and much more.]]></itunes:summary><description><![CDATA[This month I have a lovely chat with Rob McCargow, Director of AI at PwC. We chat about AI modelling unintended consequences, AI ethics audits, working with companies with dubious intentions, what we should be teaching our children, a recipe for an AI future mitigating job displacement and much more.]]></description><pubDate>Tue, 30 Jul 2019 12:43:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1141/rob-mccargow.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1142/rob-mccargow_machine-ethics-podcast.mp3" length="65029695" type="audio/mp3" /><itunes:duration>45:07</itunes:duration><guid>https://www.machine-ethics.net/podcast/rob-mccargow/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>ORGcon special edition episode</itunes:title><title>33. ORGcon special edition episode</title><link>https://www.machine-ethics.net/podcast/orgcon-special-edition/</link><itunes:episode>33</itunes:episode><itunes:author>Ben Byford with the audience of ORGcon</itunes:author><itunes:subtitle>Thirty third episode of Machine Ethics podcast including vox-pop recordings from ORGcon2019</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m releasing two episodes, the first of which is a bonus episode recorded at ORGcon2019. I talk to the audience and speakers of ORGcon about human rights, privacy, face recognition, ethical relativism, whistleblowing, AI auditing, GDPR and much more.]]></itunes:summary><description><![CDATA[This month I&#039;m releasing two episodes, the first of which is a bonus episode recorded at ORGcon2019. 
I talk to the audience and speakers of ORGcon about human rights, privacy, face recognition, ethical relativism, whistleblowing, AI auditing, GDPR and much more.]]></description><pubDate>Tue, 16 Jul 2019 10:54:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1139/orgcon-image.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1140/orgcon2019_machine-ethics.mp3" length="66107155" type="audio/mp3" /><itunes:duration>45:52</itunes:duration><guid>https://www.machine-ethics.net/podcast/orgcon-special-edition/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Sex Robots with Kate Devlin</itunes:title><title>32. Sex Robots with Kate Devlin</title><link>https://www.machine-ethics.net/podcast/sex-robots-with-kate-devlin/</link><itunes:episode>32</itunes:episode><itunes:author>Ben Byford with Kate Devlin</itunes:author><itunes:subtitle>Thirty second episode of Machine Ethics podcast with Kate Devlin</itunes:subtitle><itunes:summary><![CDATA[I met Kate in person in Bristol. We discussed chatbots in sex-tech, the complexity of human intimacy and technology, taboos in sex-tech, how sex tech can be a positive enabling industry, deepfakes and more.]]></itunes:summary><description><![CDATA[I met Kate in person in Bristol. 
We discussed chatbots in sex-tech, the complexity of human intimacy and technology, taboos in sex-tech, how sex tech can be a positive enabling industry, deepfakes and more.]]></description><pubDate>Tue, 02 Jul 2019 21:19:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1137/kate-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1138/kate-devlin_machine-ethics-podcast.mp3" length="65594762" type="audio/mp3" /><itunes:duration>45:31</itunes:duration><guid>https://www.machine-ethics.net/podcast/sex-robots-with-kate-devlin/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI standards and regulation with Jacob Turner</itunes:title><title>31. AI standards and regulation with Jacob Turner</title><link>https://www.machine-ethics.net/podcast/jacob-turner/</link><itunes:episode>31</itunes:episode><itunes:author>Ben Byford with Jacob Turner</itunes:author><itunes:subtitle>Thirty first episode of Machine Ethics podcast with Jacob Turner</itunes:subtitle><itunes:summary><![CDATA[This month I had a great time chatting with Jacob Turner about recent AI news like OpenAI&#039;s GPT-2, some robo-ethics and law, overarching principles of AI, professionalising standards and licensing for data scientists, creating institutions capable of democratic principle creation, and doing regulation well to actually encourage innovation and growth.]]></itunes:summary><description><![CDATA[This month I had a great time chatting with Jacob Turner about recent AI news like OpenAI&#039;s GPT-2, some robo-ethics and law, overarching principles of AI, professionalising standards and licensing for data scientists, creating institutions capable of democratic principle creation, and doing regulation well to actually encourage innovation and growth.]]></description><pubDate>Mon, 27 May 2019 14:50:00 +0100</pubDate><itunes:image 
href="https://www.machine-ethics.net/site/assets/files/1134/jacob_illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1135/jacob-turner_machine-ethics-podcast.mp3" length="69544589" type="audio/mp3" /><itunes:duration>48:14</itunes:duration><guid>https://www.machine-ethics.net/podcast/jacob-turner/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Emotional and loving AI with Julia Mossbridge</itunes:title><title>30. Emotional and loving AI with Julia Mossbridge</title><link>https://www.machine-ethics.net/podcast/julia-mossbridge/</link><itunes:episode>30</itunes:episode><itunes:author>Ben Byford with Julia Mossbridge</itunes:author><itunes:subtitle>Thirtieth episode of Machine Ethics podcast with Julia Mossbridge</itunes:subtitle><itunes:summary><![CDATA[Lovely chat with Julia Mossbridge this month, talking about the role of parenting in AI, considering that some kids are &quot;jerks&quot;, should we have a new Turing test for AI responsibility, the importance of inner emotional states, behaviourism poisoning science, and the acknowledgement that we may have to use our intuition to know when an AI is conscious.]]></itunes:summary><description><![CDATA[Lovely chat with Julia Mossbridge this month, talking about the role of parenting in AI, considering that some kids are &quot;jerks&quot;, should we have a new Turing test for AI responsibility, the importance of inner emotional states, behaviourism poisoning science, and the acknowledgement that we may have to use our intuition to know when an AI is conscious.]]></description><pubDate>Tue, 07 May 2019 15:13:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1132/julia-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1133/julia-mossbridge_machine-ethics-podcast.mp3" length="63027590" type="audio/mp3" 
/><itunes:duration>43:46</itunes:duration><guid>https://www.machine-ethics.net/podcast/julia-mossbridge/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Human focused AI with Pete Trainor</itunes:title><title>29. Human focused AI with Pete Trainor</title><link>https://www.machine-ethics.net/podcast/pete-trainor/</link><itunes:episode>29</itunes:episode><itunes:author>Ben Byford with Pete Trainor</itunes:author><itunes:subtitle>Twenty ninth episode of Machine Ethics podcast with Pete Trainor</itunes:subtitle><itunes:summary><![CDATA[Great speaking with Pete this month about human focused AI and his book Hippo: The Human Focused Digital Book, helping businesses get prepared and take advantage of AI, the importance and power of asking: why? And much more.]]></itunes:summary><description><![CDATA[Great speaking with Pete this month about human focused AI and his book Hippo: The Human Focused Digital Book, helping businesses get prepared and take advantage of AI, the importance and power of asking: why? And much more.]]></description><pubDate>Tue, 02 Apr 2019 18:01:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1129/pete-podcast.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1130/pete-trainor_machine-ethics-podcast.mp3" length="70183978" type="audio/mp3" /><itunes:duration>48:39</itunes:duration><guid>https://www.machine-ethics.net/podcast/pete-trainor/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Future Ethics with Cennydd Bowles</itunes:title><title>28. 
Future Ethics with Cennydd Bowles</title><link>https://www.machine-ethics.net/podcast/28-cennydd-bowles/</link><itunes:episode>28</itunes:episode><itunes:author>Ben Byford with Cennydd Bowles</itunes:author><itunes:subtitle>Twenty eighth episode of Machine Ethics podcast with Cennydd Bowles</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m joined by Cennydd Bowles who I&#039;ve been meaning to get on the podcast for over a year. We talk about his book Future Ethics, collective action in the tech industry, ethical design sprints and crits, design fictions to bring ethical thinking to the general public (think Black Mirror), the law of double effect, and the tech industry and climate change.]]></itunes:summary><description><![CDATA[This month I&#039;m joined by Cennydd Bowles who I&#039;ve been meaning to get on the podcast for over a year. We talk about his book Future Ethics, collective action in the tech industry, ethical design sprints and crits, design fictions to bring ethical thinking to the general public (think Black Mirror), the law of double effect, and the tech industry and climate change.]]></description><pubDate>Tue, 05 Feb 2019 15:50:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1127/cennydd-illustration2.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1128/cennydd-bowles_machine-ethics-podcast.mp3" length="68540656" type="audio/mp3" /><itunes:duration>47:33</itunes:duration><guid>https://www.machine-ethics.net/podcast/28-cennydd-bowles/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Governments and Post-humans with Dan Faggella</itunes:title><title>27. 
Governments and Post-humans with Dan Faggella</title><link>https://www.machine-ethics.net/podcast/27-dan-faggella/</link><itunes:episode>27</itunes:episode><itunes:author>Ben Byford with Dan Faggella</itunes:author><itunes:subtitle>Twenty seventh episode of Machine Ethics podcast with Dan Faggella</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m joined by Dan Faggella of Emerj to chat about AI events, our post-human trajectory, AI ethicists and their possible higher position, conversations with governments and advising companies on the best uses of AI.]]></itunes:summary><description><![CDATA[This month I&#039;m joined by Dan Faggella of Emerj to chat about AI events, our post-human trajectory, AI ethicists and their possible higher position, conversations with governments and advising companies on the best uses of AI.]]></description><pubDate>Wed, 23 Jan 2019 21:13:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1125/dan_faggella.1400x1400.jpeg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1126/dan-faggella_machine-ethics-podcast.mp3" length="75083468" type="audio/mp3" /><itunes:duration>52:05</itunes:duration><guid>https://www.machine-ethics.net/podcast/27-dan-faggella/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>What is AI? Vol.1</itunes:title><title>26. What is AI? 
Vol.1</title><link>https://www.machine-ethics.net/podcast/26-what-is-ai/</link><itunes:episode>26</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Twenty sixth episode of Machine Ethics podcast looking back at previous episodes</itunes:subtitle><itunes:summary><![CDATA[This episode is a bonus compilation of answers from 3 years of interviews asking the question: What is AI?]]></itunes:summary><description><![CDATA[This episode is a bonus compilation of answers from 3 years of interviews asking the question: What is AI?]]></description><pubDate>Sat, 12 Jan 2019 22:16:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1123/what-is-ai-thumbnail.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1124/what-is-ai_machine-ethics-podcast.mp3" length="18887363" type="audio/mp3" /><itunes:duration>13:06</itunes:duration><guid>https://www.machine-ethics.net/podcast/26-what-is-ai/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Respecting data with Miranda Mowbray</itunes:title><title>25. 
Respecting data with Miranda Mowbray</title><link>https://www.machine-ethics.net/podcast/25-miranda-mowbray/</link><itunes:episode>25</itunes:episode><itunes:author>Ben Byford interviewing Miranda Mowbray</itunes:author><itunes:subtitle>Twenty fifth episode of Machine Ethics podcast with Miranda Mowbray</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m talking with Miranda Mowbray on: cyber security and machine learning, big data ethical code of conduct, sitting down as a team to discuss ethical issues in data projects, respecting the people whose data you might be using, not collecting data you don&#039;t need and deleting things, and much more.]]></itunes:summary><description><![CDATA[This month I&#039;m talking with Miranda Mowbray on: cyber security and machine learning, big data ethical code of conduct, sitting down as a team to discuss ethical issues in data projects, respecting the people whose data you might be using, not collecting data you don&#039;t need and deleting things, and much more.]]></description><pubDate>Tue, 13 Nov 2018 13:16:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1121/miranda-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1122/miranda-mowbray_machine-ethics-podcast.mp3" length="56714751" type="audio/mp3" /><itunes:duration>39:21</itunes:duration><guid>https://www.machine-ethics.net/podcast/25-miranda-mowbray/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>#AIRetreat</itunes:title><title>24. #AIRetreat</title><link>https://www.machine-ethics.net/podcast/24-airetreat/</link><itunes:episode>24</itunes:episode><itunes:author>Ben Byford interviewing from the A.I. retreat</itunes:author><itunes:subtitle>Twenty fourth episode of Machine Ethics podcast with participants of the A.I. 
retreat</itunes:subtitle><itunes:summary><![CDATA[This is a very special episode of interviews with various participants of this year&#039;s A.I. retreat at Juvet, Norway. 21 of us spent 4 days in remote Norway workshopping, chatting, hiking and arguing on subjects of AI, ethics, data science, consciousness and more.]]></itunes:summary><description><![CDATA[This is a very special episode of interviews with various participants of this year&#039;s A.I. retreat at Juvet, Norway. 21 of us spent 4 days in remote Norway workshopping, chatting, hiking and arguing on subjects of AI, ethics, data science, consciousness and more.]]></description><pubDate>Thu, 27 Sep 2018 16:19:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1119/workshop-hut.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1120/airetreat_machine-ethics-podcast.mp3" length="87059816" type="audio/mp3" /><itunes:duration>01:00:23</itunes:duration><guid>https://www.machine-ethics.net/podcast/24-airetreat/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>How to design a moral algorithm with Derek Leben</itunes:title><title>23. How to design a moral algorithm with Derek Leben</title><link>https://www.machine-ethics.net/podcast/23-derek-leben/</link><itunes:episode>23</itunes:episode><itunes:author>Ben Byford interviewing Derek Leben</itunes:author><itunes:subtitle>Twenty third episode of Machine Ethics podcast with Derek Leben</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m talking with Derek Leben about his new book Ethics for Robots: How to Design a Moral Algorithm. 
We also dive into a general framework for machine ethics, contractarianism, Rawls’ original position thought experiment (which is one of my favourite ethical thought experiments), the maximin function approach to machine ethics, and whether robots should respect the consent of a person in life-threatening circumstances...]]></itunes:summary><description><![CDATA[This month I&#039;m talking with Derek Leben about his new book Ethics for Robots: How to Design a Moral Algorithm. We also dive into a general framework for machine ethics, contractarianism, Rawls’ original position thought experiment (which is one of my favourite ethical thought experiments), the maximin function approach to machine ethics, and whether robots should respect the consent of a person in life-threatening circumstances...]]></description><pubDate>Mon, 20 Aug 2018 13:01:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1110/derek.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1118/derek-leben_machine-ethics-podcast.mp3" length="78019120" type="audio/mp3" /><itunes:duration>54:07</itunes:duration><guid>https://www.machine-ethics.net/podcast/23-derek-leben/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI in Science Fiction with Christopher Noessel</itunes:title><title>22. AI in Science Fiction with Christopher Noessel</title><link>https://www.machine-ethics.net/podcast/22-christopher-noessel/</link><itunes:episode>22</itunes:episode><itunes:author>Ben Byford interviewing Christopher Noessel</itunes:author><itunes:subtitle>Twenty second episode of Machine Ethics podcast with IBM's Christopher Noessel</itunes:subtitle><itunes:summary><![CDATA[This month is (sort of) part 2 of our two-part look at AI in Culture. Chris and I take an extended look at how science fiction portrays technology, from the realistic to law-of-nature-breaking mythos. 
Our chat meanders from film to TV and includes: Psycho-Pass, Person of Interest, the Rick and Morty episode The Ricks Must Be Crazy, Buck Rogers, Rossum&#039;s Universal Robots, 2001: A Space Odyssey, Moon, I, Robot (film and book), The Animatrix, Her, Futurama, Robot and Frank, Big Hero 6, Colossus: The Forbin Project]]></itunes:summary><description><![CDATA[This month is (sort of) part 2 of our two-part look at AI in Culture. Chris and I take an extended look at how science fiction portrays technology, from the realistic to law-of-nature-breaking mythos. Our chat meanders from film to TV and includes: Psycho-Pass, Person of Interest, the Rick and Morty episode The Ricks Must Be Crazy, Buck Rogers, Rossum&#039;s Universal Robots, 2001: A Space Odyssey, Moon, I, Robot (film and book), The Animatrix, Her, Futurama, Robot and Frank, Big Hero 6, Colossus: The Forbin Project]]></description><pubDate>Mon, 20 Aug 2018 12:09:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1109/chris.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1111/chris-noessel_machine-ethics-podcast.mp3" length="88935010" type="audio/mp3" /><itunes:duration>01:01:39</itunes:duration><guid>https://www.machine-ethics.net/podcast/22-christopher-noessel/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Evolution and AI with Tim Taylor</itunes:title><title>21. Evolution and AI with Tim Taylor</title><link>https://www.machine-ethics.net/podcast/21-tim-taylor/</link><itunes:episode>21</itunes:episode><itunes:author>Ben Byford interviewing Tim Taylor</itunes:author><itunes:subtitle>Twenty first episode of Machine Ethics podcast with Tim Taylor</itunes:subtitle><itunes:summary><![CDATA[This month is part one of a loose series on the history of AI in Culture. 
In Part 1 I talk to Tim Taylor about his upcoming book on evolving machines before the 1950s, at length about genetic algorithms and their environments, as well as Descartes&#039; notion of animals as machines, machine and human co-evolution and much more. I also started a new AI consultancy for companies looking to implement responsible AI - www.ethicalby.design]]></itunes:summary><description><![CDATA[This month is part one of a loose series on the history of AI in Culture. In Part 1 I talk to Tim Taylor about his upcoming book on evolving machines before the 1950s, at length about genetic algorithms and their environments, as well as Descartes&#039; notion of animals as machines, machine and human co-evolution and much more. I also started a new AI consultancy for companies looking to implement responsible AI - www.ethicalby.design]]></description><pubDate>Fri, 27 Jul 2018 16:27:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1107/tim-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1108/tim-taylor_machine-ethics-podcast.mp3" length="66211192" type="audio/mp3" /><itunes:duration>45:56</itunes:duration><guid>https://www.machine-ethics.net/podcast/21-tim-taylor/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>The meaning of life with Luciano Floridi</itunes:title><title>20. 
The meaning of life with Luciano Floridi</title><link>https://www.machine-ethics.net/podcast/20-luciano-floridi/</link><itunes:episode>20</itunes:episode><itunes:author>Ben Byford interviewing Professor Luciano Floridi</itunes:author><itunes:subtitle>Twentieth episode of Machine Ethics podcast with Professor Luciano Floridi</itunes:subtitle><itunes:summary><![CDATA[It is my honour to speak with Professor Luciano Floridi this month on subjects like information philosophy: “philosophy of our time, for our time”; understanding that mistakes happen in technology whether they’re design issues, bugs or oversights at big companies; what is the meaning of life in the digital world? And much much more.]]></itunes:summary><description><![CDATA[It is my honour to speak with Professor Luciano Floridi this month on subjects like information philosophy: “philosophy of our time, for our time”; understanding that mistakes happen in technology whether they’re design issues, bugs or oversights at big companies; what is the meaning of life in the digital world? And much much more.]]></description><pubDate>Fri, 22 Jun 2018 14:31:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1104/luciano-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1105/luciano-floridi_machine-ethics-podcast.mp3" length="53699576" type="audio/mp3" /><itunes:duration>37:15</itunes:duration><guid>https://www.machine-ethics.net/podcast/20-luciano-floridi/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Digital rights with Ruth Coustick-Deal</itunes:title><title>19. 
Digital rights with Ruth Coustick-Deal</title><link>https://www.machine-ethics.net/podcast/19-ruth-coustick-deal/</link><itunes:episode>19</itunes:episode><itunes:author>Ben Byford interviewing Ruth Coustick-Deal</itunes:author><itunes:subtitle>Nineteenth episode of Machine Ethics podcast with digital rights campaigner Ruth Coustick-Deal.</itunes:subtitle><itunes:summary><![CDATA[This month, with the lead up to the new GDPR European personal data legislation coming in, we talk to digital rights campaigner Ruth Coustick-Deal on everything personal data.]]></itunes:summary><description><![CDATA[This month, with the lead up to the new GDPR European personal data legislation coming in, we talk to digital rights campaigner Ruth Coustick-Deal on everything personal data.]]></description><pubDate>Wed, 18 Apr 2018 16:15:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1102/ruth-portrait.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1103/ruth_coustick-deal_machine-ethics-podcast.mp3" length="82155080" type="audio/mp3" /><itunes:duration>56:59</itunes:duration><guid>https://www.machine-ethics.net/podcast/19-ruth-coustick-deal/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI 101 with Greg Edwards</itunes:title><title>18. AI 101 with Greg Edwards</title><link>https://www.machine-ethics.net/podcast/18-ai-101-with-greg-edwards/</link><itunes:episode>18</itunes:episode><itunes:author>Ben Byford interviewing Greg Edwards</itunes:author><itunes:subtitle>Eighteenth episode of Machine Ethics podcast with Greg Edwards of Decoded and Ben Byford</itunes:subtitle><itunes:summary><![CDATA[This month I interview Greg Edwards to get to grips with the basics of machine learning. 
We look at the history of AI, what machine learning consists of, try to describe how neural nets work, and discover new and interesting ideas in AI research.]]></itunes:summary><description><![CDATA[This month I interview Greg Edwards to get to grips with the basics of machine learning. We look at the history of AI, what machine learning consists of, try to describe how neural nets work, and discover new and interesting ideas in AI research.]]></description><pubDate>Tue, 20 Mar 2018 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1100/greg-edwards-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1101/greg-edwards_machine-ethics-podcast.mp3" length="79173907" type="audio/mp3" /><itunes:duration>54:55</itunes:duration><guid>https://www.machine-ethics.net/podcast/18-ai-101-with-greg-edwards/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Narratives in tech ethics with Charles Radclyffe</itunes:title><title>17. 
Narratives in tech ethics with Charles Radclyffe</title><link>https://www.machine-ethics.net/podcast/17-charles-radclyffe/</link><itunes:episode>17</itunes:episode><itunes:author>Ben Byford interviewing Charles Radclyffe</itunes:author><itunes:subtitle>Seventeenth episode of Machine Ethics podcast with Charles Radclyffe and Ben Byford</itunes:subtitle><itunes:summary><![CDATA[This month I speak to Charles Radclyffe in Bristol&#039;s Engine Shed about dubious business models exploiting technology, differing narratives of tech ethics, using the court for automated car law preemptively and much more.]]></itunes:summary><description><![CDATA[This month I speak to Charles Radclyffe in Bristol&#039;s Engine Shed about dubious business models exploiting technology, differing narratives of tech ethics, using the court for automated car law preemptively and much more.]]></description><pubDate>Mon, 22 Jan 2018 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1098/charles-image-edit.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1099/charles-radclyffe_machine-ethics-podcast.mp3" length="97669102" type="audio/mp3" /><itunes:duration>01:07:49</itunes:duration><guid>https://www.machine-ethics.net/podcast/17-charles-radclyffe/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI and the built environment with Josh Artus and Ben Byford</itunes:title><title>16. AI and the built environment with Josh Artus and Ben Byford</title><link>https://www.machine-ethics.net/podcast/16-josh-artus/</link><itunes:episode>16</itunes:episode><itunes:author>Ben Byford and Josh Artus interviewing each other</itunes:author><itunes:subtitle>Sixteenth episode of Machine Ethics podcast with Josh Artus and Ben Byford</itunes:subtitle><itunes:summary><![CDATA[This episode I share the interviewer responsibilities with Conscious Cities&#039; podcaster Josh Artus. 
I get to ask Josh some questions and Josh has some for me (Ben Byford). We chatted about: AI data misrepresentation, bias and misuse; mindful technology implementation; using AI for things other than tricking people into looking at adverts; EEG misrepresentation; Bristol is Open and technology used in the public sector in our cities; and much more.]]></itunes:summary><description><![CDATA[This episode I share the interviewer responsibilities with Conscious Cities&#039; podcaster Josh Artus. I get to ask Josh some questions and Josh has some for me (Ben Byford). We chatted about: AI data misrepresentation, bias and misuse; mindful technology implementation; using AI for things other than tricking people into looking at adverts; EEG misrepresentation; Bristol is Open and technology used in the public sector in our cities; and much more.]]></description><pubDate>Sat, 11 Nov 2017 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1093/josh-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1094/josh-artus_machine-ethics-podcast.mp3" length="92662267" type="audio/mp3" /><itunes:duration>01:04:24</itunes:duration><guid>https://www.machine-ethics.net/podcast/16-josh-artus/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Machine Ethics with Susan and Michael Anderson</itunes:title><title>15. 
Machine Ethics with Susan and Michael Anderson</title><link>https://www.machine-ethics.net/podcast/15-susan-and-michael-anderson/</link><itunes:episode>15</itunes:episode><itunes:author>Ben Byford chatting to Susan and Michael Anderson</itunes:author><itunes:subtitle>Fifteenth episode of Machine Ethics podcast with Susan and Michael Anderson</itunes:subtitle><itunes:summary><![CDATA[Discussing almost coining Machine Ethics, Big Data and social sciences not having the answer to AI ethics, prima facie duties and robots, everything is going to have to have some ethic, and AI as the continuation of humanity into space.]]></itunes:summary><description><![CDATA[Discussing almost coining Machine Ethics, Big Data and social sciences not having the answer to AI ethics, prima facie duties and robots, everything is going to have to have some ethic, and AI as the continuation of humanity into space.]]></description><pubDate>Wed, 11 Oct 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1091/anderson-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1092/susan-and-michael-anderson_machine-ethics-podcast.mp3" length="82807811" type="audio/mp3" /><itunes:duration>57:28</itunes:duration><guid>https://www.machine-ethics.net/podcast/15-susan-and-michael-anderson/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI in AR and VR with Michael Ludden</itunes:title><title>14. 
AI in AR and VR with Michael Ludden</title><link>https://www.machine-ethics.net/podcast/14-michael-ludden/</link><itunes:episode>14</itunes:episode><itunes:author>Ben Byford chatting to Michael Ludden</itunes:author><itunes:subtitle>Fourteenth episode of Machine Ethics podcast with Michael Ludden</itunes:subtitle><itunes:summary><![CDATA[This month I&#039;m talking to Michael Ludden about his work at IBM, what the hell Watson is, AI in culture and creating a positive cultural AI view, as well as using AI in AR and VR projects.]]></itunes:summary><description><![CDATA[This month I&#039;m talking to Michael Ludden about his work at IBM, what the hell Watson is, AI in culture and creating a positive cultural AI view, as well as using AI in AR and VR projects.]]></description><pubDate>Mon, 14 Aug 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1089/ludden-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1090/michael-ludden_machine-ethics-podcast.mp3" length="56301529" type="audio/mp3" /><itunes:duration>39:05</itunes:duration><guid>https://www.machine-ethics.net/podcast/14-michael-ludden/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Robotics and autonomy with Alan Winfield</itunes:title><title>13. 
Robotics and autonomy with Alan Winfield</title><link>https://www.machine-ethics.net/podcast/13-alan-winfield/</link><itunes:episode>13</itunes:episode><itunes:author>Ben Byford in conversation with Alan Winfield</itunes:author><itunes:subtitle>Thirteenth episode of Machine Ethics podcast with Alan Winfield</itunes:subtitle><itunes:summary><![CDATA[I talk to Alan about how humans should innovate ethically, different standards and the role of standards for designing and building robots, how autonomous systems should be transparent, and how studying robotics enables us to peer into our own behaviours and intelligence.]]></itunes:summary><description><![CDATA[I talk to Alan about how humans should innovate ethically, different standards and the role of standards for designing and building robots, how autonomous systems should be transparent, and how studying robotics enables us to peer into our own behaviours and intelligence.]]></description><pubDate>Thu, 13 Jul 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1086/allan-illustration.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1087/alan-winfield_machine-ethics-podcast.mp3" length="61509256" type="audio/mp3" /><itunes:duration>42:42</itunes:duration><guid>https://www.machine-ethics.net/podcast/13-alan-winfield/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>CogX 2017 special edition episode</itunes:title><title>12. 
CogX 2017 special edition episode</title><link>https://www.machine-ethics.net/podcast/12-cogx-2017-special-edition-episode/</link><itunes:episode>12</itunes:episode><itunes:author>Ben Byford in conversation with Josie Swords and visitors to CogX 2017</itunes:author><itunes:subtitle>Twelfth episode of Machine Ethics podcast at CogX 2017</itunes:subtitle><itunes:summary><![CDATA[This month I travel to CogX 2017 in London to do a special report from the conference floor. This episode is a collection of bits of talks and sessions, vox-pops from the CogX attendees and our thoughts on the show.]]></itunes:summary><description><![CDATA[This month I travel to CogX 2017 in London to do a special report from the conference floor. This episode is a collection of bits of talks and sessions, vox-pops from the CogX attendees and our thoughts on the show.]]></description><pubDate>Mon, 26 Jun 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1079/18949884_1249931161798922_2603706077887332352_n.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1080/cogx2017_machine-ethics-podcast.mp3" length="93811630" type="audio/mp3" /><itunes:duration>01:05:07</itunes:duration><guid>https://www.machine-ethics.net/podcast/12-cogx-2017-special-edition-episode/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Robot transparency with Rob Wortham</itunes:title><title>11. Robot transparency with Rob Wortham</title><link>https://www.machine-ethics.net/podcast/11-rob-wortham/</link><itunes:episode>11</itunes:episode><itunes:author>Ben Byford in conversation with Rob Wortham</itunes:author><itunes:subtitle>Eleventh episode of Machine Ethics podcast with Rob Wortham</itunes:subtitle><itunes:summary><![CDATA[I chat to Rob about what mind models people have of robots if any? 
Principles of robotics for creators, intelligence: doing the right thing at the right time, embodied or distributed robots, robot transparency and much more.]]></itunes:summary><description><![CDATA[I chat to Rob about what mind models people have of robots if any? Principles of robotics for creators, intelligence: doing the right thing at the right time, embodied or distributed robots, robot transparency and much more.]]></description><pubDate>Fri, 21 Apr 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1077/rob-wortham.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1078/rob_wortham_machine-ethics-podcast.mp3" length="90267764" type="audio/mp3" /><itunes:duration>01:02:40</itunes:duration><guid>https://www.machine-ethics.net/podcast/11-rob-wortham/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Retrospective cast 1</itunes:title><title>10. 
Retrospective cast 1</title><link>https://www.machine-ethics.net/podcast/10-retrospective-cast-1/</link><itunes:episode>10</itunes:episode><itunes:author>Ben Byford</itunes:author><itunes:subtitle>Interview clips from the first year of the Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[Clips from 2016 interviews with Nick Reed, Calum Chace, Cosima Gretton, Lydia Nicholas, Lucy McCormick, Matthew Channon, Sam Hill, and Sam Kinsley.]]></itunes:summary><description><![CDATA[Clips from 2016 interviews with Nick Reed, Calum Chace, Cosima Gretton, Lydia Nicholas, Lucy McCormick, Matthew Channon, Sam Hill, and Sam Kinsley.]]></description><pubDate>Fri, 31 Mar 2017 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1075/retrospective1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1076/machine-ethics-podcast-year1.mp3" length="44673848" type="audio/mp3" /><itunes:duration>31:00</itunes:duration><guid>https://www.machine-ethics.net/podcast/10-retrospective-cast-1/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI as slaves with Joanna J Bryson</itunes:title><title>9. 
AI as slaves with Joanna J Bryson</title><link>https://www.machine-ethics.net/podcast/9-joanna-j-bryson/</link><itunes:episode>9</itunes:episode><itunes:author>Ben Byford with Joanna J Bryson</itunes:author><itunes:subtitle>AI, Ethics and systems chat with Joanna J Bryson</itunes:subtitle><itunes:summary><![CDATA[Interview with Dr Joanna J Bryson talking about her work at Bath University, the new principles of robotics, qualifying definitions of AI, the ethical paradox of living forever, AI as slaves, while trying not to mention Donald Trump.]]></itunes:summary><description><![CDATA[Interview with Dr Joanna J Bryson talking about her work at Bath University, the new principles of robotics, qualifying definitions of AI, the ethical paradox of living forever, AI as slaves, while trying not to mention Donald Trump.]]></description><pubDate>Fri, 03 Mar 2017 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1069/joanna-j-bryson-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1070/joanna-j-bryson_machine-ethics-podcast.mp3" length="96571907" type="audio/mp3" /><itunes:duration>01:07:04</itunes:duration><guid>https://www.machine-ethics.net/podcast/9-joanna-j-bryson/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Technologies narratives in culture with Sam Kinsley</itunes:title><title>8. 
Technologies narratives in culture with Sam Kinsley</title><link>https://www.machine-ethics.net/podcast/8-sam-kinsley/</link><itunes:episode>8</itunes:episode><itunes:author>Ben Byford in conversation with Sam Kinsley</itunes:author><itunes:subtitle>Eighth episode of Machine Ethics podcast with Sam Kinsley</itunes:subtitle><itunes:summary><![CDATA[Interview with Sam Kinsley on AI in Geography, ways of talking about and exploring possibilities of technologies within space, how we talk about technologies in culture, and about the apparent technology optimism in Silicon Valley.]]></itunes:summary><description><![CDATA[Interview with Sam Kinsley on AI in Geography, ways of talking about and exploring possibilities of technologies within space, how we talk about technologies in culture, and about the apparent technology optimism in Silicon Valley.]]></description><pubDate>Tue, 20 Dec 2016 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1067/sam_kinsley.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1068/sam-kinsley_machine-ethics-podcast.mp3" length="78135207" type="audio/mp3" /><itunes:duration>54:15</itunes:duration><guid>https://www.machine-ethics.net/podcast/8-sam-kinsley/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Machine suffering with Sam Hill</itunes:title><title>7. Machine suffering with Sam Hill</title><link>https://www.machine-ethics.net/podcast/7-sam-hill/</link><itunes:episode>7</itunes:episode><itunes:author>Ben Byford in conversation with Sam Hill</itunes:author><itunes:subtitle>Seventh episode of Machine Ethics podcast with Sam Hill</itunes:subtitle><itunes:summary><![CDATA[Interview with Sam Hill on Machine suffering, Social / emotional machines, What is AI? 
Smart machines with dumb jobs, games and simulation, this week&#039;s AI news and creating a computer narrative immersive theatre production.]]></itunes:summary><description><![CDATA[Interview with Sam Hill on Machine suffering, Social / emotional machines, What is AI? Smart machines with dumb jobs, games and simulation, this week&#039;s AI news and creating a computer narrative immersive theatre production.]]></description><pubDate>Mon, 17 Oct 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1062/sam-hill.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1063/sam-hill_machine-ethics-podcast.mp3" length="66253425" type="audio/mp3" /><itunes:duration>50:11</itunes:duration><guid>https://www.machine-ethics.net/podcast/7-sam-hill/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Insurance and automated cars with Matthew Channon</itunes:title><title>6. 
Insurance and automated cars with Matthew Channon</title><link>https://www.machine-ethics.net/podcast/6-matthew-channon/</link><itunes:episode>6</itunes:episode><itunes:author>Ben Byford in conversation with Matthew Channon</itunes:author><itunes:subtitle>Sixth episode of Machine Ethics podcast with Matthew Channon</itunes:subtitle><itunes:summary><![CDATA[Interview with Matthew Channon talking about EU and UK car legislation for automated cars, insurance industry and automated cars, central governmental insurance fund, car loop systems, cross border insurance, strict liability and much more.]]></itunes:summary><description><![CDATA[Interview with Matthew Channon talking about EU and UK car legislation for automated cars, insurance industry and automated cars, central governmental insurance fund, car loop systems, cross border insurance, strict liability and much more.]]></description><pubDate>Sun, 18 Sep 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1057/matthew-channon-1.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1058/matthew-channon_machine-ethics-podcast-1.mp3" length="66283131" type="audio/mp3" /><itunes:duration>46:01</itunes:duration><guid>https://www.machine-ethics.net/podcast/6-matthew-channon/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Law and autonomous cars with Lucy McCormick</itunes:title><title>5. 
Law and autonomous cars with Lucy McCormick</title><link>https://www.machine-ethics.net/podcast/5-lucy-mccormick/</link><itunes:episode>5</itunes:episode><itunes:author>Ben Byford in conversation with Lucy McCormick</itunes:author><itunes:subtitle>Fifth episode of Machine Ethics podcast with lawyer Lucy McCormick</itunes:subtitle><itunes:summary><![CDATA[Interview with Lucy McCormick talking about her book on the law of driverless cars, the Google and Tesla car crashes, autonomous car insurance legislation, the Queen&#039;s modern transport bill, and much more.]]></itunes:summary><description><![CDATA[Interview with Lucy McCormick talking about her book on the law of driverless cars, the Google and Tesla car crashes, autonomous car insurance legislation, the Queen&#039;s modern transport bill, and much more.]]></description><pubDate>Mon, 25 Jul 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1055/lucy.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1056/lucy-mccormick_machine-ethics-podcast_5.mp3" length="63948335" type="audio/mp3" /><itunes:duration>44:28</itunes:duration><guid>https://www.machine-ethics.net/podcast/5-lucy-mccormick/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Adaptive systems with Lydia Nicholas</itunes:title><title>4. Adaptive systems with Lydia Nicholas</title><link>https://www.machine-ethics.net/podcast/4-lydia-nicholas/</link><itunes:episode>4</itunes:episode><itunes:author>Ben Byford in conversation with Lydia Nicholas</itunes:author><itunes:subtitle>Talking with Lydia Nicholas on adaptive systems, storytelling, machine learning regulation, and managing data bias.</itunes:subtitle><itunes:summary><![CDATA[Podcast 4. Talking with Lydia Nicholas on adaptive systems, storytelling, machine learning regulation, and managing data bias.]]></itunes:summary><description><![CDATA[Podcast 4. 
Talking with Lydia Nicholas on adaptive systems, storytelling, machine learning regulation, and managing data bias.]]></description><pubDate>Thu, 16 Jun 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1050/lydia.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1054/lydia-nicholas_machine-ethics-podcast.mp3" length="53989319" type="audio/mp3" /><itunes:duration>37:29</itunes:duration><guid>https://www.machine-ethics.net/podcast/4-lydia-nicholas/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Technology and health with Cosima Gretton</itunes:title><title>3. Technology and health with Cosima Gretton</title><link>https://www.machine-ethics.net/podcast/3-cosima-gretton/</link><itunes:episode>3</itunes:episode><itunes:author>Ben Byford with Cosima Gretton</itunes:author><itunes:subtitle>Third episode of Machine Ethics podcast with Cosima Gretton</itunes:subtitle><itunes:summary><![CDATA[Chatting with Dr Cosima Gretton on AI, health care and its current cost structure and technologies, ethics in education and more...]]></itunes:summary><description><![CDATA[Chatting with Dr Cosima Gretton on AI, health care and its current cost structure and technologies, ethics in education and more...]]></description><pubDate>Tue, 10 May 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1049/cosima_sketch-fin.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1053/cosima-gretton_machine-ethics-podcast-1.mp3" length="36887011" type="audio/mp3" /><itunes:duration>43:46</itunes:duration><guid>https://www.machine-ethics.net/podcast/3-cosima-gretton/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>AI future scenarios with Calum Chace</itunes:title><title>2. 
AI future scenarios with Calum Chace</title><link>https://www.machine-ethics.net/podcast/2-calum-chace/</link><itunes:episode>2</itunes:episode><itunes:author>Ben Byford with Calum Chace</itunes:author><itunes:subtitle>Second episode of Machine Ethics podcast</itunes:subtitle><itunes:summary><![CDATA[We discuss differing types of AI and future scenarios, the end of Moore&#039;s Law, the possibilities of &#039;friendly&#039; AI, China&#039;s Sesame project and Calum&#039;s books.]]></itunes:summary><description><![CDATA[We discuss differing types of AI and future scenarios, the end of Moore&#039;s Law, the possibilities of &#039;friendly&#039; AI, China&#039;s Sesame project and Calum&#039;s books.]]></description><pubDate>Tue, 29 Mar 2016 00:00:00 +0100</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1048/calum-edit.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1052/calum-chace_machine-ethics-podcast.mp3" length="71569269" type="audio/mp3" /><itunes:duration>40:26</itunes:duration><guid>https://www.machine-ethics.net/podcast/2-calum-chace/</guid><itunes:explicit>false</itunes:explicit></item><item><itunes:episodeType>full</itunes:episodeType><itunes:title>Testing automated cars Nick Reed</itunes:title><title>1. 
Testing automated cars Nick Reed</title><link>https://www.machine-ethics.net/podcast/1-nick-reed/</link><itunes:episode>1</itunes:episode><itunes:author>Ben Byford with Nick Reed</itunes:author><itunes:subtitle>AI and Ethics chat with Nick Reed</itunes:subtitle><itunes:summary><![CDATA[Chat with Nick Reed of TRL - we ask hard questions about how the UK are testing automated cars today, discuss recent AI news, blue sky thinking, and chat neural networks and genetic algorithms.]]></itunes:summary><description><![CDATA[Chat with Nick Reed of TRL - we ask hard questions about how the UK are testing automated cars today, discuss recent AI news, blue sky thinking, and chat neural networks and genetic algorithms.]]></description><pubDate>Tue, 08 Mar 2016 00:00:00 +0000</pubDate><itunes:image href="https://www.machine-ethics.net/site/assets/files/1045/nick-reed.1400x1400.jpg" /><enclosure url="https://www.machine-ethics.net/site/assets/files/1051/nick-reed_machine-ethics-podcast.mp3" length="52566606" type="audio/mp3" /><itunes:duration>36:30</itunes:duration><guid>https://www.machine-ethics.net/podcast/1-nick-reed/</guid><itunes:explicit>false</itunes:explicit></item></channel></rss>