A collection of (mainly) YouTube videos that address issues in artificial intelligence and robot ethics. You can search YouTube directly using phrases like ‘artificial intelligence ethics’, ‘machine ethics’, ‘data privacy’ and so on. The list below is a ‘curated’ selection of the most relevant, best or most influential videos I’ve found, but the rate at which new material is being created means that these can be regarded as just tasters or starting points.
Ethics and Artificial Intelligence
Dr Michael Wilby, Anglia Ruskin University
Dr Michael Wilby, Lecturer in Philosophy, provides an overview of normative ethics and its implications for artificial intelligence in the short, medium and long term.
YouTube video, Ethics and Artificial Intelligence – Dr Michael Wilby, robotethics.co.uk, July 2018, 25:48 minutes
Elon Musk, July 2019
This video is mainly a recruitment pitch for Neuralink, but the whole concept raises many ethical issues.
Is this a good idea? For whom – just for the disabled, or for everybody? Are these developments inevitable? How would they change the individual and society? What regulations and controls need to be in place? Are current laws and AI development guidelines adequate for a development like this? What are the impacts on safety, wellbeing, transparency, accountability, affordability, organisational control and so on?
YouTube Video, Elon Musk’s Neuralink presentation, June 2019, CNET, 16:02 minutes (edited at start)
AI Now Institute’s Kate Crawford and Meredith Whittaker
Recode Decode, April 2019
Kara Swisher interviewed AI researchers and AI Now Institute co-founders Kate Crawford and Meredith Whittaker about their research into AI technology and why business leaders, politicians, and policymakers need to pay attention.
YouTube video, AI Now Institute’s Kate Crawford and Meredith Whittaker | Recode Decode Live | Full interview, Recode, April 2019, 59:14 minutes
Why We Should Ban Lethal Autonomous Weapons
Future of Life Institute, March 2019
AI experts talk about the dangers of Lethal Autonomous Weapons Systems (LAWS).
YouTube video, Why We Should Ban Lethal Autonomous Weapons, Future of Life Institute, March 2019, 5:43 minutes
Ethically Aligned Design: Prioritizing Wellbeing for AI and Autonomous Systems Webinar Replay – IEEE
IEEE, May 2018
The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems released the first version of its document, Ethically Aligned Design (EAD) on 13 December 2016. The document provides insights and recommendations from over one hundred global AI and ethics experts and is intended to provide a key reference for Artificial Intelligence and Autonomous Systems (AI/AS) technologists to help them prioritize values-driven, ethically aligned design in their work.
This webinar recording addresses the difference between Codes of Conduct and Applied Ethics/Values-Driven Design methodologies and how they work best together; how your organization can benefit by prioritizing end-user values for technology design in the algorithmic era (“Ethics is the new Green”); why proactively aligning AI/AS design to human values and ethical principles will redefine innovation; and how IEEE is prioritizing ethical considerations and how your organization can get involved.
YouTube video, Ethically Aligned Design: Prioritizing Wellbeing for AI and Autonomous Systems Webinar Replay, IEEE Standards Association, May 2018, 58:12 minutes
Regulating Artificial Intelligence: How to Control the Unexplainable
University of Chicago, May 2018
“The technologies we broadly call “AI” are changing industries, from finance to advertising, medicine and logistics. But the biggest hurdle to the adoption of artificial intelligence lies in how well this so-called black box technology can be governed and controlled. This talk will suggest a framework for how artificial intelligence can be created, tested, and deployed ethically, and how its benefits can be harnessed fully in medicine and beyond.”
YouTube video, Regulating Artificial Intelligence: How to Control the Unexplainable, University of Chicago, May 2018, 1:16:48 hours
Will Artificial Intelligence Mean the End of Social Interaction?
Prof. Justine Cassell, May 2018
This video describes some interesting experiments that collect data about how rapport forms between people, how this is translated into models, and how those models are then used to design algorithms that demonstrate AI with social skills. Despite the negative comments on this video, I found it convincing as a mechanism for building complex human social skills into AI systems.
YouTube video, Will Artificial Intelligence Mean the End of Social Interaction? – Prof. Justine Cassell, The Artificial Intelligence Channel, November 2018, 1:13:02 hours
Artificial intelligence: impact, opportunity, regulation – Dr Alan Finkel AO
A keynote address on the use of artificial intelligence in the workplace, examining its impact, opportunities and regulation. CEDA is the Committee for Economic Development of Australia.
Artificial intelligence: impact, opportunity, regulation – Dr Alan Finkel AO, CEDA News, May 2018, 22:57 minutes
What Can Machine Learning Do? Workforce Implications – Prof. Erik Brynjolfsson
This video sets the scene, mainly in the US but also globally. It is full of graphs and evidence to support the thesis that workers performing routine tasks have suffered badly as a result of automation. The distribution of wealth is rapidly becoming more unequal in a ‘winner takes all’ economy. Owners of capital are advantaged over people who only have their labour to sell. Robots are already affecting wage rates for human labour. We can counter this trend by using technology to augment human labour and by creating more jobs that capitalise on human capacities (empathetic health care, therapy).
YouTube video, What Can Machine Learning Do? Workforce Implications – Prof. Erik Brynjolfsson, The Artificial Intelligence Channel, May 2018, 56:27 minutes
The following videos are no longer available
Witnesses to the Committee on AI emphasise the importance of teaching the ethics of technology in schools
Digital Minister, Matt Hancock, says the UK can lead the world in understanding the ethical implications of AI
Matt Hancock described the role of the Centre for Data Ethics and Innovation
Full Video – UK Parliament – Artificial Intelligence Committee
Tuesday 12 December 2017 Meeting started at 3.34pm, ended 5.58pm
Witnesses: Professor Rosemary Luckin, Professor of Learner Centred Design, University College London, Mr Miles Berry, Principal Lecturer, School of Education, Roehampton University, Mr Graham Brown-Martin, Author and entrepreneur.
Witnesses: The Rt Hon. Matt Hancock MP, Minister of State, Department for Digital, Culture, Media and Sport, The Rt Hon. the Lord Henley, Parliamentary Under Secretary of State, Department for Business, Energy and Industrial Strategy
Video, UK Parliament, Artificial Intelligence Committee, parliamentlive.tv, 12 December 2017, part 1 followed by part 2, 2:23:48 hours
Robot Ethics in the 21st Century
with Alan Winfield and Raja Chatila
‘How can we teach robots to make moral judgements, and do they have to be sentient to behave ethically?’ Alan Winfield is a Professor of Robot Ethics at the University of the West of England (UWE) in the Bristol Robotics Lab. Raja Chatila is Professor at the University Pierre and Marie Curie, Paris. Ethical issues identified include: the effects of automation on jobs, emotional attachment of people to machines, anthropomorphism, responsibility for harm, and autonomous weapons.
YouTube video, Robot Ethics in the 21st Century – with Alan Winfield and Raja Chatila, The Royal Institution, November 2017, 35:25 minutes
Building Artificial Intelligence That is Provably Safe & Beneficial
Stuart Russell advocates probabilistic programming languages and demonstrates how they can tackle problems using a deeper understanding of the world than that created by stand-alone machine learning algorithms. He is impressed by the physical capabilities of robots and by current AI’s ability to solve some problems, but he critiques the current state of the art in AI as falling far short of human capabilities. Conceptual breakthroughs are unpredictable, however, and we need a formal framework to mitigate the risks posed by AI. He proposes that robots should always consult a human about whether to go ahead with an action, and switch off if they do not get the go-ahead. There is a strong commercial and moral incentive to integrate human values into AI: AI systems will not be accepted unless they have the capacity to learn and align with human values and objectives, and always put these first.
YouTube video, Prof. Stuart Russell – Building Artificial Intelligence That is Provably Safe & Beneficial, The Artificial Intelligence Channel, September 2017, 1:05:13 hours
Deep learning in The Brain
This lecture helps convince me that it will not be long (say 5–10 years) until we have the knowledge to build robot brains whose capacity to learn is comparable to people’s. It makes a detailed comparison between artificial neural networks and the way in which synapses in the brain change in response to experience. It’s a technical talk, and its detail suggests that at least the theory will be in place in the relatively near future, even if the processing power may not be sufficient to do this within a mobile robot brain (as opposed to a remote supercomputer).
If this timescale is realistic, there is not much time to address the ethical issues and to ensure that the basic operating systems of such robot brains cannot be ‘trained’ (or weaponised) to do harm, whether deliberately or inadvertently.
YouTube video, Blake Richards – Deep learning in The Brain, The Artificial Intelligence Channel, September 2017, 1:23:37 hours
Ethics of Artificial Intelligence
Yann LeCun, Nick Bostrom & Virginia Dignum
Virginia Dignum asks about bridging between abstract ethical principles and the operational decisions of AI. Nick Bostrom talks about AI and the paperclip thought experiment. Yann LeCun compares the relationship between the reptilian brain and the cortex, and the cortex and AI.
YouTube video, Ethics of Artificial Intelligence – Yann LeCun, Nick Bostrom & Virginia Dignum, The Artificial Intelligence Channel, September 2017, 29:37 minutes
3 principles for creating safer AI
‘How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.’
Russell proposes three principles:
1. The robot’s only objective is to maximise the realisation of human values (human preferences)
2. The robot is initially uncertain about what those values are
3. Human behaviour provides information about human values
YouTube video, Stuart Russell: 3 principles for creating safer AI, April 2017, 17:30 minutes
What Is It Like to Be a Robot?
Dr. Leila Takayama examines human encounters with new technologies. Having a person remotely operate a robot-type device that has some autonomous functions raises issues in human-robot interaction, both in how people respond to the robot and in how the operator and the robot’s autonomous operations interleave and negotiate control.
YouTube video, What Is It Like to Be a Robot? | Dr. Leila Takayama | TEDxPaloAlto, TEDx Talks, May 2017, 12:55 minutes
Sam Harris & Kate Darling
This discussion looks at some of the consequences of people’s tendency to anthropomorphize even inanimate, non-responsive objects like soft toys, and how people might react to robots as they become increasingly humanoid. It goes on to a sexually explicit discussion of ethics in relation to child sex robots.
YouTube Video, Sam Harris & Kate Darling … Conversation on Robot Ethics & AI, Cogent Canine, March 2017, 31:37 minutes
Do Robots Deserve Rights?
What if Machines Become Conscious?
We once justified slavery on the grounds that the slaves would benefit. Might we do the same with robots?
YouTube Video, Do Robots Deserve Rights? What if Machines Become Conscious?, Kurzgesagt – In a Nutshell, February 2017, 6:34 minutes
The Philosophy of Westworld
This sets out the plot of Westworld (the 1973 film and the 2016 HBO TV series) and identifies free will and suffering as preconditions for ethics.
YouTube video, The Philosophy of Westworld – Wisecrack Edition, Wisecrack, February 2017, 17:18 minutes
Don’t Fear Superintelligent AI
‘New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don’t need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we’ll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.’
YouTube video, Don’t fear superintelligent AI, TED@IBM, November 2016, 10:19 minutes
Can we build AI without losing control over it?
‘Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.’
YouTube video, Can we build AI without losing control over it?, TEDSummit, June 2016, 14:26 minutes
Kate Darling – Ethical issues in human-robot interaction
Kate Darling’s conference talk highlights the tendency for people to anthropomorphize. People project intent onto inanimate objects and will empathise with them. As robots become ubiquitous and more physical, mobile and humanoid, the anthropomorphic effect becomes more pronounced. Deceit, privacy, data security, covert selling and emotional attachment are all identified as issues, not just for human-robot interactions but also for human interactions with each other.
YouTube video, Kate Darling – Ethical issues in human-robot interaction, Media Evolution / The Conference, January 2016
Nick Bostrom – What happens when our computers get smarter than we are?
‘Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?’
YouTube video, Nick Bostrom – What happens when our computers get smarter than we are?, March 2015