A collection of audio links on AI and robot ethics that you can listen to while on the move.
Ethics and technology
Future of Life Institute, June 2020
Suddenly, everybody is talking about Wisdom!
Podcast, Global Priorities, Existential Risk, and What Matters Most, Future of Life Institute, June 2020, 1:42:46 hours
Forecasting and the drivers of AI progress
80000 Hours, May 2020
Some excellent observations about prediction at a time when accurate prediction matters.
Podcast, Forecasting and the drivers of AI progress, 80000 Hours, May 2020, 2:11:36 hours
Artificial intelligence, ethics and education
November 2019, Future Tense – ABC RN
An excellent podcast featuring three speakers, all of whom make a lot of sense.
1. Simon Buckingham Shum – AI in education
2. Bret Greenstein – Why not using AI can be unethical
3. Roger Taylor – US, Chinese or other models of AI rollout
AI holds enormous potential for transforming the way we teach, but first we need to define what kind of education system we want.
Also, the head of the UK’s new Centre for Data Ethics and Innovation warns democratic governments that they urgently need an ethics and governance framework for emerging technologies.
Podcast, Artificial intelligence, ethics and education, Future Tense – ABC RN, November 2019, 29 minutes
The Future of our Economy – Andrew McAfee
October 2019, TED Interview on BBC Sounds
MIT research scientist Andrew McAfee looks at how AI and robots impact the economy.
BBC Sounds, The TED Interview: Andrew McAfee, TED, October 2019, 43:49 minutes
Make Me a Programme
October 2019, BBC Radio 4
State of the art in voice synthesis and chatbots for building practical applications. An experiment in creating an application to host a radio programme and advise on relationships.
BBC Radio 4, Make Me a Programme, BBC, October 2019, 28 minutes
AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell
October 2019, Future of Life Institute
2:10 Intentions and background on the book
4:30 Human intellectual tradition leading up to the problem of control
7:41 Summary of the structure of the book
8:28 The issue with the current formulation of building intelligent machine systems
10:57 Beginnings of a solution
12:54 Might tool AI be of any help here?
16:30 Core message of the book
20:36 How the book is useful for different audiences
26:30 Inferring the preferences of irrational agents
36:30 Why does this all matter?
39:50 What is really at stake?
45:10 Risks and challenges on the path to beneficial AI
54:55 We should consider laws and regulations around AI
01:03:54 How is this book differentiated from those like it?
Future of Life Institute Podcast, Artificial Intelligence and the Problem of Control with Stuart Russell, October 2019, 1:16:32 hours
From Data Governance to AI Governance
September 2019, The AI Element
Can you trust AI? What does trust really mean? What do justice and ethics look like in a world shaped by algorithms? One of a series of podcasts on AI and trust.
Tanya O’Carroll – Director of Amnesty Tech at Amnesty International
Alix Dunn – Founder and Director of Computer Says Maybe
Jesse McWaters – Financial Innovation Lead at World Economic Forum
Richard Zuroff – Director of AI Advisory and Enablement at Element AI
Podcast, From Data Governance to AI Governance, The AI Element, September 2019, 30 minutes
Can we trust artificial intelligence to be ethical in a war?
September 2019, BBC World Service
“As the abilities of artificial intelligence continue to advance the defence industry is facing the challenge of dealing with the ethical question of computers deciding to kill the enemy in a war. Patrice Caine, the chief executive of the defence giant Thales, tells us it is a major problem deciding to trust AI, because the technology is surrounded by secrecy. We also hear from the BBC’s Frank Gardner who has visited the oil facilities in Saudi Arabia, which were recently destroyed in a drone attack, which he describes as a powerful weapon. Other views on the ethical dilemma of using artificial intelligence in conflict zones are aired by Michael Clare, from Arms Control, an organisation which lobbies for restraint in the use of advanced AI in the military, Deepak Puri, at Dem Labs in Silicon Valley and Stephanie Hare, a technology and ethics researcher.”
BBC Sounds, World Business Report, Can we trust artificial intelligence to be ethical in a war?, September 2019, 28:30 minutes
AI Alignment Podcast: Synthesizing a human’s preferences into a utility function with Stuart Armstrong
September 2019, Future of Life Institute
3:24 A story of evolution (inspiring just-so story)
6:30 How does your “inspiring just-so story” help to inform this research agenda?
8:53 The two core parts to the research agenda
10:00 How this research agenda is contextualized in the AI alignment landscape
12:45 The fundamental ideas behind the research project
15:10 What are partial preferences?
17:50 Why reflexive self-consistency isn’t enough
20:05 How are humans contradictory and how does this affect the difficulty of the agenda?
25:30 Why human values being underdefined presents the greatest challenge
33:55 Expanding on the synthesis process
35:20 How to extract the partial preferences of the person
36:50 Why a utility function?
41:45 Are there alternative goal ordering or action producing methods for agents other than utility functions?
44:40 Extending and normalizing partial preferences and covering the rest of section 2
50:00 Moving into section 3, synthesizing the utility function in practice
52:00 Why this research agenda is helpful for other alignment methodologies
55:50 Limits of the agenda and other problems
58:40 Synthesizing a species wide utility function
1:01:20 Concerns over the alignment methodology containing leaky abstractions
1:06:10 Reflective equilibrium and the agenda not being a philosophical ideal
1:08:10 Can we check the result of the synthesis process?
1:09:55 How did the Mahatma Armstrong idealization process fail?
1:14:40 Any clarifications for the AI alignment community?
Future of Life Institute Podcast, Synthesizing a human’s preferences into a utility function with Stuart Armstrong, September 2019, 1:16:32 hours
Can computer profiles cut crime?
June 2019, BBC Radio 4
Can computer algorithms identify future victims of crime and tell us where and when crimes will happen? David Edmonds asks whether algorithms can make us safer.
BBC Radio 4, Analysis: Can Computer Profiles Cut Crime?, June 2019, 27:30 minutes
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
May 2019, Future of Life Institute
Topics discussed include:
- Hopes for the future of AI
- AI-human collaboration
- AI’s influence on art and creativity
- The UN AI for Good Summit
- Gaps in AI safety
- Preparing AI for uncertainty
- Holding AI accountable
Future of Life Institute Podcast, Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi, May 2019, 38:32 minutes
Beyond Today: Why would your mattress spy on you?
April 2019, BBC Radio 4 Podcast
“We all have our own conspiracy theories about who is listening to us through the internet. We probably have considered the idea that Facebook and Google control our lives – but these aren’t necessarily conspiracies. How might we have given the internet giants permission to spy on us? What connects a political scandal like Cambridge Analytica to Alexa and Google Maps? Matthew meets Shoshana Zuboff, who has been investigating this for years, to hear her theory that ties everything together. She calls it surveillance capitalism and she came to the studio to tell us why we should all be more aware of it.”
BBC Radio 4 Podcast, Beyond Today: Why would your mattress spy on you?, April 2019, 21:00 minutes
Ethical Dimensions of AI
Tinius Trust, April 2019, Podcast
“Is there something scary about the development of AI? Who is responsible for ethical algorithms? And what is most disturbing, the algorithms themselves or their creators? These are some of the questions Kjersti Løken Stavrum, CEO of the Tinius Trust, asked Anna Felländer, co-founder of the AI Sustainability Center in Stockholm, when she was invited to talk about AI and ethics in Tinius Talks.”
Tinius Trust Podcast, Ethical Dimensions of AI, April 2019, 14:00 minutes
FutureProofing: Opportunity
April 2019, BBC Radio 4
Leo Johnson and Timandra Harkness discuss how technology affects individuals’ opportunities in society. Do biases in algorithms constrain opportunity, or has the internet empowered people? Is technology accelerating polarisation in society in wealth, power and opportunity?
BBC Radio 4, FutureProofing, Opportunity, April 2019, 45:06 minutes
The Digital Human: Wisdom
March 2019, BBC Radio 4
Aleks Krotoski finds out if wisdom, or only data, can be shared in the digital world.
BBC Radio 4, The Digital Human: Wisdom, March 2019, 28:00 minutes
Keeping a secret from Google
February 2019, Wired Podcast
Is anything private? A Wired podcast presenter tries to conceal from the internet that he and his partner have had a child, but finds it impossible.
The story starts at time code 26:25 (-18:41)
Wired Podcast, Keeping a secret from Google: Podcast 405, February 2019, approx. 15:00 minutes
FLI Podcast: Artificial Intelligence: American Attitudes and Trends
January 24, 2019 by Ariel Conn
Topics discussed include:
- Demographic differences in perceptions of AI
- Discrepancies between expert and public opinions
- Public trust (or lack thereof) in AI developers
- The effect of information on public perceptions of scientific issues
The general public thinks that high-level machine intelligence will arrive sooner than the experts do.
Podcast, Future of Life Institute, Artificial Intelligence: American Attitudes and Trends with Baobao Zhang, January 2019, 32:06 minutes
Yuval Noah Harari – 21 lessons for the 21st century
Yuval Noah Harari offers his 21 lessons for the 21st century. In a wide-ranging discussion with Andrew Marr, Harari looks back to his best-selling history of the world, Sapiens, and forward to a possible post-human future.
Technological disruption, ecological cataclysms, fake news and threats of terrorism make the 21st century a frightening prospect. Harari argues against sheltering in nostalgic political fantasies. He calls for a clear-sighted view of the unprecedented challenges that lie ahead.
Radio programme, BBC Radio 4, Start The Week – Yuval Noah Harari, October 2018, 42 minutes
Morality in the 21st Century – Artificial Intelligence
Rabbi Jonathan Sacks speaks to some of the world’s leading thinkers about morality, together with voices from the next generation: groups of British 6th form students.
AI is already fundamentally transforming our world, and in the coming years will have an enormous impact on almost every aspect of our lives. So the ethical questions surrounding its development are urgent and important. Rabbi Sacks argues that we must always be able to choose our fate, in the full dignity of responsibility, never forgetting that machines were made to serve human beings, not the other way around.
Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind;
Nick Bostrom, Philosophy Professor at the University of Oxford;
Students from Queens’ School in Hertfordshire.
Radio programme, BBC Radio 4, Morality in the 21st Century – Artificial Intelligence, September 2018, 45 minutes
AI Alignment Podcast: The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. Topics discussed in this episode include:
- What metaethics is and how it ties into AI alignment or not
- Brian and David’s ethics and metaethics
- Moral realism vs antirealism
- Moral epistemology and motivation
- Different paths to and effects on AI alignment given different metaethics
- Moral status of hedonic tones vs preferences
- Can we make moral progress and what would this mean?
- Moving forward given moral uncertainty
AI Alignment Podcast: The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce, Future of Life Institute, August 2018, 1:45:56 hours
Prof Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics
“Nuclear energy is useful, but it wasn’t crucially useful, whereas AI seems like it’s on track to be like the new electricity or the new industrial revolution in the sense it’s a general-purpose technology that will completely transform and invigorate the economy in every sector of the economy. That’s, I guess, one problem or one difference is that its economic bounty and the gradient of incentives to develop it are so much more substantial than most other dual-use technologies we’re used to thinking about governing”
Podcast, Prof Allan Dafoe (Senior Research Fellow in the International Politics of AI at Oxford University) on trying to prepare the world for the possibility that AI will destabilise global politics, May 2018, 48 minutes
Moral Maze: The Morality of Artificial Intelligence
Driverless cars could be on UK roads within four years under government plans to invest in the sector. The Chancellor Philip Hammond said “We have to embrace these technologies if we want the UK to lead the next industrial revolution”. At the thick end of the wedge, Silicon Valley billionaire Elon Musk believes artificial intelligence is “a fundamental risk to the existence of human civilisation”. AI is changing our lives here and now, whether we like it or not. Computer algorithms decide our credit rating and the terms on which we can borrow money; they decide how political campaigns are run and what adverts we see; they have increased the power and prevalence of fake news; through dating apps they even decide who we might date and therefore who we’re likely to marry.
As the technology gathers pace, should we apply the brakes or trustingly freewheel into the future? For those inclined to worry, there’s a lot to worry about; not least the idea of letting robot weapons systems loose on the battlefield or the potential cost of mass automation on society. Should we let machines decide whether a child should be taken into care or empanel them to weigh the evidence in criminal trials? Robots may never be capable of empathy, but perhaps they could be fairer in certain decisions than humans; free of emotional baggage, they might thus be more ‘moral’. Even if machines were to make ‘moral’ decisions on our behalf, according to whose morality should they be programmed? Most aircraft are piloted by computers most of the time, but we still feel safer with a human in the cockpit. Do we really want to be a ‘driverless’ society?
BBC Radio 4, The Moral Maze: The Morality of Artificial Intelligence, November 2017, 43 minutes
Hardtalk: Professor of Robot Ethics Alan Winfield
As research and development into artificial intelligence intensifies is there any sphere of human activity that won’t be revolutionised by A.I. and robotics? Stephen Sackur speaks to Alan Winfield, a world renowned Professor of Robot Ethics. From driving, to education, to work and warfare are we unleashing machines which could turn the dark visions of science fiction into science fact?
Radio programme, BBC World Service, Hardtalk, Professor of Robot Ethics Alan Winfield, October 2017, 24 minutes
The Inquiry: Can We Teach Robots Ethics?
From driverless cars to “carebots”, machines are entering the realm of right and wrong. Should an autonomous vehicle prioritise the lives of its passengers over pedestrians? Should a robot caring for an elderly woman respect her right to life ahead of her right to make her own decisions? And who gets to decide? The challenges facing artificial intelligence are not just technical, but moral – and raise hard questions about what it means to be human.
BBC Radio 4, The Inquiry: Can We Teach Robots Ethics?, October 2017, 23 minutes
Free speech and non-human speakers
October 2017, Leverhulme Centre for the Future of Intelligence
We share our world with powerful and vocal non-human agents, such as corporations and, increasingly, AI-based systems of various kinds. Should such agents have a ‘right’ to free expression, analogous to that of human agents? If not, what restrictions are appropriate in a democratic society, and how should they be imposed?
Join philosophers Professor Rae Langton (Cambridge) and Professor Philip Pettit (Princeton), and political theorist Professor Toni Erskine (UNSW), for a conversation about these fascinating and timely questions.
YouTube video, Free speech and non-human speakers, Leverhulme Centre for the Future of Intelligence, October 2017, 1:26:02 hours
Moral Maze: Moral Philosophy for the Internet
June 2017, BBC Radio 4
Theresa May has been forced to ditch whole chunks of her party’s manifesto in the wake of the election, but one of the key non-Brexit policies to survive is the plan to crack down on tech companies that allow extremist and abusive material to be published on their networks. The recent terrorist attacks have strengthened the arguments of campaigners who’ve long said that it’s far too easy to access this kind of content, and who have accused internet companies of wilfully ignoring the problem. The promised “Digital Charter” will aim to force those companies to do more to protect users and improve online safety.
With the growing power of tech giants like Google, Facebook and Twitter, connecting billions of people around the globe, is it pie in the sky to promise that Britain will be the safest place to be online? On one level this is a moral argument which has been going on for centuries: what we should, and should not, be allowed to read and see, and who should make those decisions. But is this a bigger problem than freedom of speech? Have we reached a tipping point where the moral, legal, political and social principles that have guided us in this field have been made redundant by the technology?
Do we need to find a new kind of moral philosophy that can survive in a digital age and tame the power of the tech-corps? Or is the problem uncomfortably closer to home – a question that each and every one of us has to face up to? Tim Cook, the chief executive of Apple, recently said that he was concerned about new technologies making us think like computers “without values or compassion, without concern for consequence.” Witnesses are Nikita Malik, Tom Chatfield, Mike Harris and Mariarosaria Taddeo.
BBC Radio 4, The Moral Maze: Moral Philosophy for the Internet, June 2017, 43 minutes
Meet the Cyborgs
February 2017, BBC Radio 4
Frank Swain can hear Wi-Fi.
Diagnosed with early deafness aged 25, Frank decided to turn his misfortune to his advantage by modifying his hearing aids to create a new sense. He documented the start of his journey three years ago on Radio 4 in ‘Hack My Hearing’.
Since then, Frank has worked with sound artist Daniel Jones to detect and sonify Wi-Fi connections around him. He joins a community around the world who are extending their experience beyond human limitations.
In ‘Meet the Cyborgs’ Frank sets out to meet other people who are hacking their bodies. Neil Harbisson and Moon Rebus run The Cyborg Foundation in Barcelona, which welcomes like-minded body hackers from around the world. Their goal is not just to use or wear technology, but to re-engineer their bodies.
But should limits be placed on self-experimentation? And will cybernetic implants eventually become as ubiquitous as smart phones?
BBC Radio 4, Meet the Cyborgs, February 2017, 27:34 minutes
The Rise of the Robots: Where is my mind?
February 2017, The Rise of the Robots, Series 1 – Part 3
From Skynet and the Terminator franchise, through Wargames and Ava in Ex Machina, artificial intelligences pervade our cinematic experiences. But AIs are already in the real world, answering our questions on our phones and making diagnoses about our health. Adam Rutherford asks if we are ready for AI, when fiction becomes reality, and we create thinking machines.
BBC Radio 4, The Rise of the Robots – Part 3: Where is my mind?, February 2017, 27:45 minutes
The Rise of the Robots: Robots – More Human than Human?
May 2017, The Rise of the Robots, Series 1 – Part 2
Robots are increasingly present in our lives, as companions, carers and workers. Adam Rutherford explores our relationship with these machines. Have we made them to be merely more dextrous versions of us? Why do we want to make replicas of ourselves? Should we be worried that they could replace us at work? Is it a good idea that robots are becoming carers for the elderly?
Adam Rutherford meets some of the latest robots and their researchers and explores how the current reality has been influenced by fictional robots from films. He discusses the need for robots to be human like with Dr Ben Russell, curator of the current exhibition of robots at the Science Museum in London. In the Bristol Robotics Laboratory Adam meets Pepper, a robot that is being programmed to look after the elderly by Professor Praminda Caleb-Solly. He also interacts with Kaspar, a robot that Professor Kerstin Dautenhahn at the University of Hertfordshire has developed to help children with autism learn how to communicate better.
Cultural commentator Matthew Sweet considers the role of robots in films from Robbie in Forbidden Planet to the replicants in Blade Runner. Dr Kate Devlin of Goldsmiths, University of London, talks about sex robots, in the past and now. And Alan Winfield, Professor of Robot Ethics at the Bristol Robotics Laboratory, looks ahead to a future when robots may be taking jobs from us.
BBC Radio 4, The Rise of the Robots – Part 2: Robots – More Human than Human?, May 2017, 28 minutes
Algorithmic Accountability: Designing for safety through human-centered independent oversight
In this talk, Ben Shneiderman explores how some social strategies can play a powerful role in making systems more reliable and trustworthy.
YouTube video, Algorithmic Accountability, Ben Shneiderman – The Alan Turing Institute, May 2017, 1:21:32 hours
Ethics in the Age of Information – The Alan Turing Institute
The excitement of data science brings the need to consider the ethics of the information age. Likewise, a revolution in political science is taking place, as the internet, social media and real-time electronic monitoring have brought about increased mobilisation of political movements. The generation of huge amounts of data from such processes presents, on the one hand, opportunities to analyse and indeed predict political volatility, and on the other, ethical and technical challenges, which are explored here by two of the foremost philosophers and political scientists.
YouTube video, Professor Luciano Floridi: “Ethics in the Age of Information”, The Alan Turing Institute, April 2016, 1:14:46 hours
The Rise of the Robots: The history of things to come
February 2017, The Rise of the Robots, Series 1 – Part 1
The idea of robots goes back to the Ancient Greeks. In myth, Hephaestus, the god of fire, created robots to assist in his workshop. In the medieval period the wealthy showed off their automata. In France in the 15th century a Duke of Burgundy had his chateau filled with automata that played practical tricks on his guests, such as spraying water at them. By the 18th century craftsmen were making lifelike performing robots. In 1738 in Paris people queued to see the amazing flute-playing automaton, designed and built by Jacques Vaucanson.
With the industrial revolution the idea of automata became intertwined with that of human workers. The word robot first appears in a 1921 play, Rossum’s Universal Robots, by Czech author Karel Čapek.
Drawing on examples from fact and fiction, Adam Rutherford explores the role of robots in past societies and discovers they were nearly always made in our image, and inspired both fear and wonder in their audiences. He talks to Dr Elly Truitt of Bryn Mawr College in the US about ancient and medieval robots, to Simon Schaffer, Professor of History of Science at Cambridge University, and to Dr Andrew Nahum of the Science Museum about 18th century automata, and to Dr Ben Russell of the Science Museum about robots and workers in the 20th century. And Matthew Sweet provides the cultural context.
BBC Radio 4, The Rise of the Robots – Part 1: The history of things to come, February 2017, 27:52 minutes
The Rise of the Robots: Will robots take over?
February 2017, The Rise of the Robots – short
From giant bronze automatons to artificial intelligence capable of beating humans at their own games, Adam Rutherford asks whether or not humanity will be enslaved by robots.
BBC Radio 4, The Rise of the Robots – Short: Will robots take over?, February 2017, 3:00 minutes
The Life Scientific: Alan Winfield on robot ethics
February 2017, BBC Radio 4
Alan Winfield is the only Professor of Robot Ethics in the world. He is a voice of reason amid the growing sense of unease at the pace of progress in the field of artificial intelligence. He believes that robots aren’t going to take over the world – at least not any time soon. But that doesn’t mean we should be complacent.
Alan Winfield talks to Jim Al-Khalili about how, at a young age, he delighted in taking things apart. After his degree in microelectronics and a PhD in digital communication at Hull University, he set up a software company in the mid-80s, which he ran for the best part of a decade before returning to academia. In 1993, he co-founded the Bristol Robotics Laboratory at the University of the West of England, by far the largest centre of robotics in the UK. Today, he is a leading authority, not only on robot ethics, but on the idea of swarm robotics and biologically-inspired robotics. Alan explains to Jim that what drives many of his enquiries is the deeply profound question: how can ‘stuff’ become intelligent?
BBC Radio 4, The Life Scientific, Alan Winfield on robot ethics, February 2017, 28 minutes
The Human Zoo: The Lives of Things
The Human Zoo, Series 7, BBC Radio 4
Storms rage and floods take their toll – is this nature taking its revenge? Michael Blastland turns the lens of psychology on how we treat objects and other entities as if they are ‘alive’.
Not just the weather – we rail against a crashed laptop, dote on our cars and have conversations with our pets. Why do we anthropomorphise the things around us?
In fact, we tend to exaggerate what psychologists call ‘agency’ in all kinds of ways – as if there’s a mind behind what goes on in the world, with feelings and intentions. Does this mean we see conspiracy, blame, praise, and power where it doesn’t belong?
Michael Blastland investigates with resident Zoo psychologist Nick Chater, Professor of Behavioural Science at Warwick Business School, and roving reporter Timandra Harkness.
BBC Radio 4, The Human Zoo: The Lives of Things, December 2015, 27:15 minutes
The Human Zoo: Morals and Norms
The Human Zoo, Series 6 Episode 1 of 4, BBC Radio 4
The Human Zoo is a place to learn about the one subject that never fails to fascinate – ourselves. In this episode, morals and norms. Naked tourists on Malaysian mountains? Professional footballers sprawling on the streets of Tenerife? The team turns the lens of psychology on news of bad behaviour.
How do we know about the unwritten rules that govern us? And why does it cause such outrage when we get them wrong?
Michael Blastland investigates with resident Zoo psychologist Nick Chater, Professor of Behavioural Science at Warwick University, and roving reporter Timandra Harkness.
Special guests include Richard Holton, Professor of Philosophy at Cambridge University, Digital Human presenter Aleks Krotoski, psychologist Dr Kate Cross, as well as writer and broadcaster Simon Fanshawe on how his mother started a bread roll fight with a Lord.
BBC Radio 4, The Human Zoo: Morals and Norms, June 2015, 27:55 minutes
The Digital Human: Ethics
The Digital Human, Series 6 Episode 6 of 6, BBC Radio 4
If a driverless car has to choose between crashing you into a school bus or a wall, who do you want to be programming that decision? Aleks Krotoski explores ethics in technology.
Join Aleks as she finds out if it’s even possible for a device to ‘behave’ in a morally prescribed way, by looking at attempts to make a smartphone ‘kosher’. But nothing captures the conundrum quite like the ethical questions raised by driverless cars, and it’s the issues they raise that she explores with engineer turned philosopher Jason Millar and robot ethicist Kate Darling.
Professor of law and medicine Sheila MacLean offers a comparison with how codes of medical ethics were developed, before we hear the story of Gus, a 13-year-old whose world was transformed by Siri.
BBC Radio 4, The Digital Human: Ethics, October 2015, 31:15 minutes
Weapons of Math Destruction – Cathy O’Neil
Many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot’s processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
YouTube video, Cathy O’Neil | Weapons of Math Destruction, PdF, June 2015, 12:15 minutes
Analysis: Artificial Intelligence
Should we beware the machines? Professor Stephen Hawking has warned the rise of Artificial Intelligence could mean the end of the human race. He’s joined other renowned scientists urging computer programmers to focus not just on making machines smarter, but also ensuring they promote the good and not the bad. How seriously should we take the warnings that super-intelligent machines could turn on us? And what does AI teach us about what it means to be human? Helena Merriman examines the risks, the opportunities and how we might avoid being turned into paperclips.
BBC Radio 4, Analysis: Artificial Intelligence, March 2015, 27:50 minutes
FutureProofing: The Singularity
Rohan Silva and Timandra Harkness discover how close we are to The Singularity – the day when machines match human intelligence. And they find out why it’s so vital to understand the implications of such a momentous future event right now.
BBC Radio 4, FutureProofing: The Singularity, September 2014, 42:55 minutes
Dinosaur Torture: Robot Ethics with Kate Darling
November 14, 2013, Spark
Is it OK to torture a robot? To lock a Roomba in the closet all day? To kick a Furby or decapitate a Pleo? Researcher Kate Darling thinks we might consider legally protecting robots from abuse in the same way we protect animals.
CBC Radio, Spark Episode 300146220, Dinosaur Torture: Robot Ethics with Kate Darling, November 2013, 12:05 minutes
The Digital Human: Augment
The Digital Human, Series 2 Episode 6 of 7
Aleks Krotoski explores the digital world. In today’s programme: have we all become cyborgs without even knowing it?
We’ve always extended our human bodies, ever since we first picked up rocks or sticks as tools; it’s part of human nature. So are the digital tools of today any different? Aleks asks just how far we’ve come, and how far we’re willing to go, to become one with our technology and become cyborg.
Aleks hears from film-maker Rob Spence, better known as Eyeborg, about the reaction he gets to the camera he has where his right eye used to be. A different type of eye is used by artist and composer Neil Harbisson: born entirely colour-blind, Neil uses an electronic eye on an antenna attached to his skull to hear colours. It is now such a part of how Neil perceives the world that he hears colours in his dreams!
Brandy Ellis is a very different type of cyborg; having suffered from depression for years she opted to have electronics implanted in her brain to control her symptoms. Her feelings are literally regulated by a machine.
Ultimately Aleks finds out from anthropologist Amber Case how we’re all every bit as cyborg as Rob, Neil or Brandy in how we coexist symbiotically with our digital devices.
BBC Radio 4, The Digital Human: Augment, November 2012, 30:00 minutes
The Digital Human: Intent
The Digital Human, Series 2 Episode 4 of 7
Aleks Krotoski looks at whether we’ve all become techno-fundamentalists. Do we know what all our technology is for, or, more intriguingly, what it wants?
Aleks hears from Douglas Rushkoff about how the whole of the world around us has always been programmed by architects, religion, and politics. But it’s something we seem to have forgotten about technology itself.
Tom Chatfield discusses how the biases of technology (the things it naturally tends towards or is best at) interplay with human nature to turn much of our interaction with technology into some sort of perverse game.
But some of these biases, such as the end use of a technology, only emerge once people start to use it. Kevin Kelly, one of the world’s most respected commentators on technology, believes that the biases of all our technologies combine so that technology as a whole behaves very much like an organism. His provocative theories are detailed in his book What Technology Wants.
We explore these theories by discussing our biggest technology: the city, and whether the latest innovations aiming to make our cities smarter and more sustainable hint at a better future relationship with the world of technology.
BBC Radio 4, The Digital Human: Intent, October 2012, 30:00 minutes
Robots that Care – Part 1
In the first of a two-part series, Robots that Care, Jon Stewart charts the advances in robotics that are increasingly leading to direct one-to-one contact between humans and robots. Stewart visits roboticists and their collaborators in the USA and UK and asks how the robots will be used in the future. He examines the way cinema has shaped our ideas of robots and investigates the gulf between our expectations of what robots can do and the reality.
A fundamental question that scientists are posing is how we should regard the robots that, in the near future, will live alongside us in our homes. Should they be considered slaves, pets or friends? And Jon Stewart explores how Isaac Asimov’s idea that, above all, robots should do no harm has evolved over the decades.
BBC Radio 4, Robots that Care – Episode 1, September 2011, 28:00 minutes
Robots that Care – Part 2
In the second of a two-part series, Robots that Care, Jon Stewart visits research institutes in the USA and UK to explore the brave new ideas about how robots may be able to help humans on a one-to-one basis. He talks to key roboticists, their collaborators and volunteers about the practicalities and ethics of using robots to help people.
A number of studies have been done, and more are underway, on the use of robots for people wanting to lose weight and for children who are autistic. Roboticists are also conducting long-term projects with people who have suffered strokes. The robots are designed as personal instructors to help motivate and restore motor function. But they must be emotionally smart and coax rather than bark orders like a sergeant major. The roboticists are also examining how they might customise their robots to fit the personalities of the people whom they will serve.
We have put robots on the moon but it seems that it is more difficult to put them in homes. A visit to a robot house in the UK shows that there are many pitfalls still to overcome before robots will be useful in our living rooms and kitchens.
Finally, Robots that Care asks: what are the dangers of making the robots too human? Are there problems of dependency? What ethical and moral questions arise when robots socialise human beings?
BBC Radio 4, Robots that Care – Episode 2, September 2011, 28:00 minutes
The Digital Human: Altruism
The Digital Human, Series 4 Episode 1 of 6
Aleks Krotoski explores what technology tells us about ourselves and the age we live in. In this first programme: is the digital world allowing us to be more altruistic than ever?
So can true altruism exist online? With all the stories of cyber-bullying and trolling, it’s very easy to forget the random acts of kindness that the technology also allows. Aleks explores some amazing stories of online altruism. But when no good deed goes unpublished, and you can keep score of your goodness through ‘followers’, ‘likes’ and the accompanying boosts to ego and reputation, is truly selfless altruism online an impossibility? And in the end, if good gets done, does it matter?
BBC Radio 4, The Digital Human: Altruism, September 2015, 33:16 minutes
A History of Ideas: How can I tell right from wrong?
Neuropsychologist Paul Broks on Morality and the Brain
The eighteenth-century writer Jeremy Bentham thought that telling right from wrong was simple: morally right things were the ones that increased the total of human happiness. Wrong things were the ones that increased the stock of suffering. His principle is known as utilitarianism.
It sounds rational, but does it do justice to the way we actually think about morality? Some things seem wrong even when, according to utilitarianism, they are right.
Recently, philosophers and psychologists have started to apply experimental methods to moral philosophy. In this programme, neuropsychologist Paul Broks looks at the recent research. Some experimenters, such as Guy Kahane in Oxford, have been putting people in scanners to see which parts of the brain are most active when they struggle with moral dilemmas. Fiery Cushman at Harvard has been getting people to carry out simulated immoral acts (such as asking volunteers to fire a fake gun at the experimenter) to see how they react to unpleasant but essentially harmless tasks. And Mike Koenigs at the University of Wisconsin-Madison has been looking at how psychopathic criminals and people with brain damage deal with moral puzzles. One school of thought now suggests that utilitarianism, far from being the “rational” way to decide right from wrong, is actually most attractive to people who lack the normal empathic responses – people very like Jeremy Bentham, in fact.
BBC Radio 4, History of Ideas: Neuropsychologist Paul Broks on Morality and the Brain, November 2014, 12:14 minutes
Moral Maze: Science and morality
You wouldn’t have thought that a book on the latest discoveries in the science of human behaviour would be high on the reading lists of politicians, but think again. David Brooks’ The Social Animal is required reading for politicians on both sides of the Atlantic. When he visited the UK a couple of weeks ago he had meetings with both the Prime Minister and the leader of the Labour Party. Politicians, it seems, are increasingly turning to disciplines like neuroscience and evolutionary anthropology to understand why we do things, so they can better tailor and design policies that will work in the real world. That all sounds very sensible, but how far should we take this newfound enthusiasm for scientifically designed political policies? As science increasingly begins to explain our behaviour, it is also challenging our assumptions about moral and social values.
For millennia our moral reasoning has been guided by first principles – theology and philosophy. Should we embrace rather than fear the knowledge science brings as it helps unravel some of morality’s muddles that have so far defeated our greatest thinkers? We almost unquestioningly accept that science can be used to improve our physical wellbeing, but why shouldn’t it be used to make us better people? If neuroscience can change our understanding of human behaviour – and misbehaviour – why should it not be used to frame our laws, our ethics, our morality, to make the world a better place?
BBC Radio 4, Moral Maze: Science and morality, June 2011, 44:00 minutes
Mind Changers: The Heinz Dilemma
Claudia Hammond presents a series looking at the development of the science of psychology during the 20th century.
Lawrence Kohlberg designed the first experiment to quantify the human capacity for ethical reasoning. Fifty years on, aspects of the original experiment in Chicago are replicated with volunteers in the UK.
BBC Radio 4, Mind Changers: The Heinz Dilemma, September 2008, 27:32 minutes
Moral Maze: The Psychology of Morality
Go on – admit it. You like to feel you’re above average. Don’t worry. We all like to feel we’re somehow special – that our gifts make us stand out from – and above – the crowd. Psychologists refer to this phenomenon as positive illusion. It’s the sort of self-deception that helps maintain our self-esteem; a white lie we tell ourselves. The classic example is driving: the majority of people regard themselves as more skilful and less risky than the average driver. But research just published shows that this characteristic isn’t confined to skills like driving. Experiments carried out by psychologists at Royal Holloway, University of London found that most people strongly believe they are just, virtuous and moral, and yet regard the average person as – well, how shall we put it politely? Let’s just say – distinctly less so. Virtually all those taking part irrationally inflated their moral qualities.
Worse, the positive illusion of moral superiority is much stronger and more prevalent than any other form of positive illusion. Now, as a programme that’s been testing our nation’s moral fibre for more than 25 years, we feel this is something we’re uniquely qualified to talk about. Well, we would, wouldn’t we? So, if we can’t entirely rely on our own calibration to judge a person’s moral worth, how should we go about it? Is the answer better and clearer rules, a kind of updated list of commandments? There might need to be a lot more than ten, though. Does legal always mean moral? In a world that is becoming increasingly fractious, being less morally judgmental sounds attractive, but if we accept that morality is merely a matter of cognitive bias, do we take the first step on the road to moral relativism? The Moral Maze – making moral judgements so you don’t have to. Witnesses are David Oderberg, Michael Frohlich, Anne Atkins and Julian Savulescu.
BBC Radio 4, Moral Maze: The Psychology of Morality, November 2016, 42:46 minutes
More programmes on human ethics can be found at: