A collection of reports, blogs, papers and other written material about artificial intelligence and robot ethics.
Montreal AI Ethics Institute, June 2020
Comprehensive report based on the weekly AI Ethics newsletter and other ongoing research at the institute.
Global Partnership on Artificial Intelligence (GPAI), UK Office for AI, June 2020
GPAI is an international, multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth. Its founding members are Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States of America, and the European Union.
Ada Lovelace Institute, April 2020
“As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.” This useful report clarifies terms and approaches.
AI Policy Exchange, February 2020
A report from India including sections on:
1. Defining AI
2. Adopting AI-Induced Business Automation
3. Global Collective Approach To AI Adoption
4. Regional Domestic Approach To AI Adoption
5. Developing And Deploying AI Ethically
6. Assigning Responsibility And Liability For AI-Induced Mishaps
7. Tackling Bias In AI
8. Making Explicable AI
9. Developing And Deploying AI-Based Weapons
10. Recognizing Rights Related To Artificial Creations
Dynamics of AI Principles: The Big Picture
AI Ethics Lab, February 2020
An excellent resource providing access to about 80 documents describing ethical principles and guidelines for AI.
The map-based user interface is intuitive and generally makes it easy to find what you are looking for.
It looks like the USA, UK and Europe lead the world in putting together ethical principles for AI. The number of publications appears to have peaked in 2018.
AI Ethics Guidelines: Interactive Map
The Windfall Clause – Distributing the Benefits of AI for the Common Good
Future of Humanity Institute, February 2020
A proposal that companies gaining windfall profits from AI should commit to ploughing these back for the benefit of humanity. Seems an excellent idea!
The Windfall Clause Report
Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI
Berkman Klein Center Research Publication No. 2020-1, January 2020
This is a useful meta-analysis of 36 sets of guidelines for the principled development of AI.
Principled Artificial Intelligence
AI Now 2019 Report
AI Now Institute, December 2019
Contains 12 well-researched recommendations related to potential harms of AI.
AI Now 2019 Report
Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: a Roadmap for Research
Nuffield Foundation and CFI, December 2019
“Even where there is a broad consensus on core issues, such as bias, transparency, ownership and consent, they can be subject to different meanings in different contexts – interpretation in technical applications differs to that in the judicial system, for example. Similarly, ethical values such as fairness can be subject to different definitions across different languages, cultures and political systems.”
Report on Ethical and Social Issues
DIVERSITY + ETHICS IN AI – 2020
Lighthouse3, December 2019
Report on Women in AI Ethics including 2019 progress, 100 women working in AI Ethics, a 2020 roadmap, a framework for AI ethics and more.
Report on Women in AI Ethics
UK Election Manifesto references to AI and Data
Ada Lovelace Institute, November 2019
The Ada Lovelace Institute has written an article on the manifesto pledges relating to artificial intelligence and data of all the main political parties.
How will data and AI work for people and society after the UK General Election 2019?
Links to EU Policy Initiative on AI 2015-2019
List Updated 15th October 2019
As part of the background research for the Agency’s project on ‘Artificial intelligence (AI), Big Data and Fundamental Rights’, FRA has collected information on AI-related policy initiatives in EU Member States in the period 2016-2019. The collection currently includes about 180 initiatives. Download the excellent spreadsheet from https://fra.europa.eu/en/project/2018/artificial-intelligence-big-data-and-fundamental-rights/ai-policy-initiatives or click on the link below.
List of EU Policy Initiatives
Policy Documents & Institutions – ethical, legal and socio-economic issues of robotics and artificial intelligence
A review of ethical guidelines for AI containing a reference list of 136 papers
Anna Jobin, Marcello Ienca & Effy Vayena (2019), The global landscape of AI ethics guidelines, Nature Machine Intelligence, volume 1, pages 389–399
Empowering AI Leadership in Ethics
World Economic Forum, September 2019
Guide to developing AI codes of ethics and professional conduct for technology companies, professional associations, government agencies, NGOs and academic groups.
Empowering AI Leadership
Snapshot papers on ethical issues in AI
Centre for Data Ethics and Innovation (CDEI), 19th September 2019
Three papers in this first series of snapshots:
A Framework for Developing a National Artificial Intelligence Strategy
World Economic Forum, August 2019
Sets out how to develop a national strategy in AI including Key dimension 1: Providing a set of standardized data-protection laws and addressing ethical concerns.
A Framework for Developing a National Artificial Intelligence Strategy
Code of conduct for data-driven health and care technology
Department of Health and Social Care, Updated July 2019
(First published 5 September 2018)
The code of conduct contains a set of principles that set out what we expect from suppliers and users of data-driven technologies.
The aim of the code is to make it easier for suppliers to understand what we need from them, and to help health and care providers choose safe, effective, secure technology to improve the services they provide.
AI Ethics Guidelines Global Inventory
AlgorithmWatch, June 2019
A list of well over 100 AI ethics guidelines from around the world.
AI Ethics Guidelines Global Inventory
Trust Technology and Tech: Building Ethical Data Policies for Public Good
UK All Party Parliamentary Group on Data Analytics, May 2019
Nine recommendations including:
Recommendation 1: To build public confidence and acceptability, providers of public services should address ethics as part of their ‘licence to operate’. A core principle should be that the public’s views on data exploitation are proactively built into an ethical assessment at the service design stage.
Recommendation 2: The citizen should be given access to simple and meaningful information, akin to the transparency principles underpinning Freedom of Information. This duty should apply to all those using data exploitation to deliver public services, as part of their ‘licence to provide public services’.
Recommendation 3: The citizen should have a ‘right to explanation’, via a duty on all those delivering public services to provide easy to understand information on the factors taken into account in algorithm-based ‘black-box’ decisions as they affect the individual.
Report, Trust Technology and Tech, All Party Parliamentary Group on Data Analytics, May 2019, 47 pages
Ethics guidelines for trustworthy AI
European Commission, April 2019
This report highlights that trustworthy AI should respect all applicable laws and regulations, as well as a series of specific requirements (human agency and oversight, robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, fairness, societal and environmental wellbeing, and accountability) that are set out in the report.
Report, Ethics guidelines for trustworthy AI, European Commission Single Digital Market Expert Group on AI, April 2019, 41 pages
Ethically Aligned Design Version 2
IEEE, March 2019
The ethical design, development, and implementation of these technologies should be guided by the following General Principles:
- Human Rights: Ensure they do not infringe on internationally recognized human rights
- Well-being: Prioritize metrics of well-being in their design and use
- Accountability: Ensure that their designers and operators are responsible and accountable
- Transparency: Ensure they operate in a transparent manner
- Awareness of misuse: Minimize the risks of their misuse
Overview of Ethically Aligned Design Version 2
Perspectives on Issues in AI Governance
Google, Undated – probably late 2018
This report highlights “five areas where government, in collaboration with wider civil society and AI practitioners, has a crucial role to play in clarifying expectations about AI’s application on a context-specific basis. These include explainability standards, approaches to appraising fairness, safety considerations, requirements for human-AI collaboration, and general liability frameworks.”
Report, Perspectives on Issues in AI Governance, Google, Late 2018, 30 pages
Statement to the United Nations on behalf of the LAWS open letter signatories
A statement was read on the floor of the United Nations during the August 2018 Convention on Certain Conventional Weapons (CCW) meeting, at which delegates discussed a possible ban on lethal autonomous weapons (LAWS).
The signatories are nearly 4,000 AI and robotics researchers and scientists from around the world who have called on the United Nations to move forward with negotiations towards a legally binding instrument on lethal autonomous weapons.
On September 12th 2018, the European Parliament passed (by an 82% majority) a resolution calling for an international ban on lethal autonomous weapons systems (LAWS).
Article, Statement to the United Nations on behalf of the LAWS open letter signatories, August 2018, 1 page
AI Governance: A Research Agenda
“This report outlines an agenda for this research, dividing the field into three research clusters. The first cluster, the technical landscape, seeks to understand the technical inputs, possibilities, and constraints for AI. The second cluster, AI politics, focuses on the political dynamics between firms, governments, publics, researchers, and other actors. The final research cluster of AI ideal governance envisions what structures and dynamics we would ideally create to govern the transition to advanced artificial intelligence.”
Report, AI Governance: A Research Agenda, Governance of AI Program Future of Humanity Institute University of Oxford, August 2018, 53 pages
An Overview of National AI Strategies
Tim Dutton, July 2018
Article summarising the key policies and goals of over 25 countries’ national AI strategies. (Includes: Australia, Canada, China, Denmark, EU Commission, Finland, France, Germany, India, Italy, Japan, Kenya, Malaysia, Mexico, New Zealand, Nordic-Baltic Region, Poland, Russia, Singapore, South Korea, Sweden, Taiwan, Tunisia, UAE, United Kingdom, United States)
Article, An Overview of National AI Strategies, Tim Dutton, Politics and AI, July 2018, 25 minute read
Report on Consultation on the Centre for Data Ethics and Innovation
UK Government, June 2018
The government launched a consultation outlining proposals for the Centre for Data Ethics and Innovation in June 2018. The consultation ran for 12 weeks and closed on 5 September 2018. They received 104 responses and feedback through a series of roundtable discussions.
All suggestions and comments were reviewed and assessed by the government as part of its response published in November 2018. The response focuses primarily on clarifying the Centre’s existing functions and strengthening its reporting and recommendation functions.
RSA – The Ethics of Ceding more Power to Machines
Brhmie Balaram, May 2018
The RSA’s report, launched 31st May 2018, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. There is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.
Visit this article to link to the full RSA report ‘Artificial Intelligence: Real Public Engagement’
IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Autonomous and Intelligent Systems
The first edition of the report is available at:
II. General Principles
The ethical and values-based design, development, and implementation of autonomous and intelligent systems should be guided by the following General Principles:
1. Human Rights
A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
2. Well-being
A/IS creators shall adopt increased human well-being as a primary success criterion for development.
3. Data Agency
A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
4. Effectiveness
A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
5. Transparency
The basis of a particular A/IS decision should always be discoverable.
6. Accountability
A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
7. Awareness of Misuse
A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
8. Competence
A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
HOUSE OF LORDS Select Committee on Artificial Intelligence
Report of Session 2017–19, AI in the UK: ready, willing and able?
Concludes that the UK can be a world leader in the development of artificial intelligence. Provides useful historical documentation of the UK government’s previous funding of AI. Includes many references to the setting up of the Centre for Data Ethics and Innovation.
Report, Report of Session 2017–19, AI in the UK: ready, willing and able?, HOUSE OF LORDS Select Committee on Artificial Intelligence, April 2018, about 180 pages
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, OpenAI (and more)
“Artificial intelligence (AI) and machine learning (ML) are altering the landscape of security risks for citizens, organizations, and states. Malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”
Report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Contributions from several organisations, February 2018, 99 pages
Trust and Technology Initiative – Short Articles
- Talking About Trust, Dr Laura James, Trust & Technology Initiative; Department of Computer Science and Technology
- Trust, Technology and Truth-Claims, Dr Ella McPherson, Trust & Technology Initiative; Department of Sociology
- Fundamentally more secure computer systems: the CHERI approach, Prof Simon Moore, Trust & Technology Initiative; Department of Computer Science and Technology
- The Political Economy of Trust, Prof John Naughton, Trust & Technology Initiative, CRASSH
- Compliant and Accountable Systems, Dr Jat Singh, Trust & Technology Initiative; Department of Computer Science and Technology
- AI Trust & Transparency with the Leverhulme Centre for the Future of Intelligence, Adrian Weller, Trust & Technology Initiative; Department of Engineering
- Why and How Email Communication Undermines Trust in Teams and Organizations, Prof David de Cremer, Judge Business School
- Digital Trust Dissonance: when you’ve got them by the app, their clicks and minds will follow, Richard Dent, Department of Sociology
- Digital Voice, Media and Power in Africa, Dr Stephanie Diepeveen, Department of Politics and International Studies (POLIS)
- Govtech Requires Many Relationships of Trust, Dr Tanya Filer, Bennett Institute for Public Policy
- TRVE Data: Secure and Resilient Collaborative Applications, Dr Martin Kleppmann, Department of Computer Science and Technology
- The Autonomous City, Dr Ian Lewis, Department of Computer Science and Technology
- Trust, Evidence and Local Democracy; how Cambridgeshire County Council has bridged the gap, Ian Manning, Cambridgeshire County Council
- Giving Voice to Digital Democracies, Dr Marcus Tomalin, CRASSH
Articles, Cambridge Perspectives on Trust and Technology, January 2018 (approx), 14 short articles
Growing the artificial intelligence industry in the UK
This independent review, carried out for the UK government (Business Secretary and Culture Secretary) by Professor Dame Wendy Hall (Professor of Computer Science at the University of Southampton, UK) and Jérôme Pesenti (CEO BenevolentTech), reports on how the Artificial Intelligence industry can be grown in the UK.
The recommendations cover:
- Improving data access
- Improving the supply of skills
- Maximising UK AI research
- Supporting the uptake of AI
Notably, a search of the recommendations for the words ‘ethics’, ‘morality’, ‘caution’, ‘safety’, ‘principle’ and ‘harm’ returns zero results for each. Transparency and accountability are mentioned in relation to supporting the uptake of AI (recommendation 14 in the main report). The report refers the reader to the Royal Society and the British Academy review on the needs of a 21st century data governance system (see below).
In the main report (page 14) it states:
‘Trust, ethics, governance and algorithmic accountability: Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.
However, building public confidence and trust will be vital to successful development of UK AI. Therefore this Review stresses the importance of industry and experts working together to secure and deserve public trust, address public perceptions, gain public confidence, and model how to deliver and demonstrate fair treatment. Fairness will be part of gaining economic benefits, and addressing ethical issues effectively to support wider use of AI could be a source of economic advantage for the UK.’
Page 66 of the main report:
‘As noted above, AI can also create new situations with new implications for fairness, transparency and accountability. AI could also change the nature of many areas of work.
AI in the UK will need to build trust and confidence in AI-enabled complex systems. There is already collective activity to work towards guidelines in ethics for automation, but we can expect this field to grow and change. A publicly visible expert group drawn from industry and academia, which engages with these issues would help to build that trust and confidence.’
Page 68 has a section on ‘Explainability of AI-enabled uses of data’ which concludes with the recommendation:
‘Recommendation 14: The Information Commissioner’s Office and the Alan Turing Institute should develop a framework for explaining processes, services and decisions delivered by AI, to improve transparency and accountability.’
‘Further on, it is possible that new applications of AI may hold solutions on transparency and explainability, using dedicated AIs to track and explain AI-driven decisions.’
Report, Growing the artificial intelligence industry in the UK, Professor Dame Wendy Hall and Jérôme Pesenti, October 2017
The Royal Society and the British Academy review on the needs of a 21st century data governance system.
‘The amount of data generated from the world around us has reached levels that were previously unimaginable. Meanwhile, uses of data-enabled technologies promise benefits, from improving healthcare and treatment discovery, to better managing critical infrastructure such as transport and energy.
These new applications can make a great contribution to human flourishing but to realise these benefits, societies must navigate significant choices and dilemmas: they must consider who reaps the most benefit from capturing, analysing and acting on different types of data, and who bears the most risk.’
Article and Report, The Royal Society, Data management and use: Governance in the 21st century – a British Academy and Royal Society project, October 2017
Report of COMEST on Robot Ethics
World Commission on the Ethics of Scientific Knowledge and Technology (COMEST)
United Nations Educational, Scientific and Cultural Organisation, September 2017
This 62-page report concludes with recommendations in the following areas:
VI.1. A technology-based ethical framework
VI.2. Relevant ethical principles and values
VI.2.1. Human Dignity
VI.2.2. Value of Autonomy
VI.2.3. Value of Privacy
VI.2.4. ‘Do not harm’ Principle
VI.2.5. Principle of Responsibility
VI.2.6. Value of Beneficence
VI.2.7. Value of Justice
VI.3. COMEST specific recommendations on robotics ethics
VI.3.1. Recommendation on the Development of the Codes of Ethics for Robotics and Roboticists
VI.3.2. Recommendation on Value Sensitive Design
VI.3.3. Recommendation on Experimentation
VI.3.4. Recommendation on Public Discussion
VI.3.5. Recommendation on Retraining and Retooling of the Workforce
VI.3.6. Recommendations related to Transportation and Autonomous Vehicles
VI.3.7. Recommendations on Armed Military Robotic Systems (‘Armed Drones’)
VI.3.8. Recommendations on Autonomous Weapons
VI.3.9. Recommendations on Surveillance and Policing
VI.3.10. Recommendation relating to Private and Commercial Use of Drones
VI.3.11. Recommendation on Gender Equality
VI.3.12. Recommendation on Environmental Impact Assessment
VI.3.13. Recommendations on Internet of Things
Report, Report of COMEST on Robot Ethics, COMEST, UN, September 2017
Prioritizing Human Well-being in the Age of Artificial Intelligence
On 11 April 2017, IEEE hosted a dinner debate at the European Parliament in Brussels called, Civil Law Rules on Robotics: Prioritizing Human Well-being in the Age of Artificial Intelligence. The event featured experts from The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (“The IEEE Global Initiative”) and was hosted by Member of European Parliament (MEP) Mady Delvaux (The Progressive Alliance of Socialists and Democrats, Luxembourg), who served as Rapporteur on the Parliament’s Civil Law Rules on Robotics report. Among other recommendations, the report proposed a system of registration for advanced robots managed by a potential EU Agency for Robotics and Artificial Intelligence (AI).
The report also suggested autonomous robots be granted the status of electronic personhood under a liability framework regarding the actions of these devices and their users. This idea has been largely misconstrued as a form of robot rights, although the way “personhood” is described in the report is similar to the legal notion of corporate personhood. The confusion and heightened interest surrounding this issue paved the way for an in-depth discussion on how to ascribe and measure value for technology and the well-being of the people who use it.
Report, Prioritizing Human Well-being in the Age of Artificial Intelligence, IEEE, April 2017
ASILOMAR AI Principles
A set of principles signed by 1,273 AI/robotics researchers and 2,541 others at BAI 2017, the Future of Life Institute’s second conference on the future of artificial intelligence, held in January 2017.
‘Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.’
Conference Report, ASILOMAR AI Principles, Future of Life Institute, January 2017
BS 8611:2016 – Robot Ethics Standard
Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems
BS 8611 gives guidelines for the identification of potential ethical harm arising from the growing number of robots and autonomous systems being used in everyday life. The standard also provides additional guidelines to eliminate or reduce the risks associated with these ethical hazards to an acceptable level. The standard covers safe design, protective measures and information for the design and application of robots.
This standard is intended for robot and robotic device designers and managers, and for the general public. It was written by scientists, academics, ethicists, philosophers and users to provide guidance on the ethical hazards specifically associated with robots and robotic systems, and on how to put protective measures in place. It recognizes that these potential ethical hazards have broader implications than physical hazards, so it is important that different ethical harms and remedial considerations are addressed. The new standard builds on existing safety requirements for different types of robots, covering industrial, personal care and medical applications.
Hardcopy and PDF document, BS 8611:2016 – Robots and robotic devices, British Standards Institute, April 2016
Available from British Standards Institute Shop
EPSRC Principles of Robotics – Regulating robots in the real world
Engineering and Physical Sciences Research Council (EPSRC) principles
Evidence submitted to the House of Lords AI Committee 6th September 2017
Doteveryone – Written evidence (AIC0148)
Evidence around inequality for APPG AI
by Laura James, Doteveryone, October 2017
A presentation given to the All Party Parliamentary Committee on AI, on 16th October 2017
The Quest for Roboethics – A Survey
Last Update April 2019
“The aim of this paper is to give a brief account of subjects, projects, groups and authors dealing with ethical aspects of robots. I first start with recent research on roboethics in two EU projects, namely ETHICBOTS (2005-2008) and ETICA (2009-2011). I report on the activities of Roboethics.org and particularly of the Technical Committee (TC) on Roboethics of the IEEE, and list some ethical issues and principles currently discussed. I also report briefly on the Machine Ethics Consortium.”
Article, The Quest for Roboethics – A Survey, Rafael Capurro, April 2019
How AI can be a force for good
Mariarosaria Taddeo, Luciano Floridi
24 Aug 2018
“Artificial intelligence (AI) is not just a new technology that requires regulation. It is a powerful force that is reshaping daily practices, personal and professional interactions, and environments. For the well-being of humanity it is crucial that this power is used as a force of good. Ethics plays a key role in this process by ensuring that regulations of AI harness its potential while mitigating its risks.”
Journal Article: How AI can be a force for good, Mariarosaria Taddeo, Luciano Floridi, Science, Vol. 361, Issue 6404, pp. 751-752. DOI: 10.1126/science.aat5991
What jobs will still be around in 20 years?
‘Jobs won’t entirely disappear; many will simply be redefined. But people will likely lack new skillsets required for new roles and be out of work anyway’
Newspaper Article, What jobs will still be around in 20 years? Read this to prepare your future, The Guardian, June 2017
The Doomsday Invention
Article, The Doomsday Invention, Raffi Khatchadourian – Review of Nick Bostrom’s warning about AI, The New Yorker, November 2015
Tech Ethics in Practice
Article, Tech Ethics in Practice, doteveryone, Laura James, March 2018
Microsoft’s AI Blog
Regulator Against the Machine
Artificial intelligence (AI) is part of today’s zeitgeist: whether it be parallels with ‘I, Robot’ dystopias or predictions about its impact on society. But for all the potential, the development of machines that learn as they go remains slow in health care.
Blog post, Regulator Against the Machine, British Journal of Health Computing, November 2017
What do Robots Believe? – Ways of Knowing
How do we know what we know? This article considers: (1) the ways we come to believe what we think we know; (2) the many issues with the validation of our beliefs; and (3) the implications for building artificial intelligence and robots based on the human operating system.
Blog Post, Human Operating System 4 – Ways of Knowing, Rod Rivers, September 2017
Cennydd Bowles (Author), September 2018
“Based on Cennydd’s years of research and consulting, Future Ethics transforms modern ethical theory into practical advice for designers, product managers, and software engineers alike. Cennydd uses the three lenses of modern ethics to focus on some of the tech industry’s biggest challenges: unintended consequences and algorithmic bias, the troubling power of persuasive technology, and the dystopias of surveillance, autonomous war, and a post-work future.”
Book, Future Ethics, ISBN-10: 1999601912 ISBN-13: 978-1999601911, Paperback, 242 pages, September 2018
Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence 1st Edition
Patrick Lin (Editor), Keith Abney (Editor), Ryan Jenkins (Editor), October 2017
Expanding discussions on robot ethics ‘means listening to new voices; robot ethics is no longer the concern of a handful of scholars. Experts from different academic disciplines and geographical areas are now playing vital roles in shaping ethical, legal, and policy discussions worldwide. So, for a more complete study, the editors of this volume look beyond the usual suspects for the latest thinking. Many of the views as represented in this cutting-edge volume are provocative–but also what we need to push forward in unfamiliar territory.’
Book, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence 1st Edition, ISBN-13: 978-0190652951 ISBN-10: 0190652950, 440 pages, October 2017
Robot Ethics: The Ethical and Social Implications of Robotics
(Intelligent Robotics and Autonomous Agents series)
Patrick Lin (Editor), Keith Abney (Editor), George A. Bekey (Editor)
‘Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The contributors then turn to human-robot emotional relationships, examining the ethical implications of robots as sexual partners, caregivers, and servants. Finally, they explore the possibility that robots, whether biological-computational hybrids or pure machines, should be given rights or moral consideration.’
Book, Robot Ethics: The Ethical and Social Implications of Robotics, January 2014
Moral Machines: Teaching Robots Right from Wrong
1st Edition by Wendell Wallach (Author), Colin Allen (Author)
‘Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don’t seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun.
Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.’
Book, Moral Machines: Teaching Robots Right from Wrong, ISBN-13: 9780195374049, first published January 2009
Meet the robot that helps you learn from afar
Mike Scialom, Cambridge Independent
PUBLISHED: 13:33 11 March 2018
The AV1 robot avatar – An innovative concept means an interactive education for students who can’t make it to lectures
AI: ‘It’s not just a playground’
Mike Scialom, Cambridge Independent
PUBLISHED: 11:04 16 November 2017
Is Prof Stephen Hawking right to say AI is a danger to humanity? Dr Stella Pachidi says machine intelligence needs closer scrutiny
Data is the fuel for AI
Newsletter Post, Data is the fuel for AI, so let’s ensure we get the ethics right, Birgitte Andersen, City AM Newsletter, December 2017
BBC News: MEPs vote on robots’ legal status – and if a kill switch is required
MEPs have called for the adoption of comprehensive rules for how humans will interact with artificial intelligence and robots.
The report makes it clear that it believes the world is on the cusp of a “new industrial” robot revolution.
It looks at whether to give robots legal status as “electronic persons”.
Designers should make sure any robots have a kill switch, which would allow functions to be shut down if necessary, the report recommends.
Meanwhile users should be able to use robots “without risk or fear of physical or psychological harm”, it states.
BBC News, MEPs vote on robots’ legal status – and if a kill switch is required, By Jane Wakefield, Technology reporter, January 2017
BBC News, Sex robots: Experts debate the rise of the love droids
Would you have sex with a robot? Would you marry one? Would a robot have the right to say no to such a union?
These were just a few of the questions asked at the second Love and Sex with Robots conference, hastily rearranged at Goldsmiths, University of London, after the government in Malaysia – the original location – banned it.
BBC News, Sex robots: Experts debate the rise of the love droids, By Jane Wakefield, Technology reporter, December 2016
No longer accessible
Index compiled by Vincent C. Müller (email@example.com)
A comprehensive list of policy documents published between 2006 and 2019.