
Tag Archives: AI


– UK white paper response

 

 

 

Response to the

Department for Science, Innovation and Technology

consultation on the white paper

‘A pro-innovation approach to AI regulation’

published in March 2023

Date of Response:  8th June 2023

 

Authors:  Citizens Discussion Group on Social Impacts of AI

contact email:  rod.rivers@mac.com

 

Download:   AIethics.AI Response to the Consultation

 

Also see related links to the White Paper etc. below the response

 

 

Introduction

Main Points

Proposed revision to emphasis of the white paper

Rationale and Discussion

Recommended changes to the white paper

Additional Comments

 

Introduction

Please accept and consider this response.  It has emerged from a small group of citizens who have had monthly discussions since November 2022 to consider the social impacts of AI.

Rather than respond to the individual consultation questions we have put our views forward in this free format.   This is so that we are not constrained by the assumptions made in the white paper and so that we can respond at a higher level than the detail of the proposals.

Main Points

Firstly, we are struck by the coincidence in time between the consultation white paper and the proposal by AI experts that there should be a moratorium on AI developments. There is clearly an anomaly between these two positions that needs to be fully understood and reconciled. On the one hand we have the UK Government proposing very light-touch regulation in order not to stifle innovation, and on the other hand a significant number of AI experts who are alarmed by the existential risk presented by AI systems. The way in which these two rather opposing positions can be reconciled is explored below.

Secondly, and related, AI is developing rapidly, and indeed much has happened in the three months since the publication of the white paper, which itself may have been drafted before the public were fully exposed to systems like ChatGPT. For example, since then Geoff Hinton (often called the ‘godfather’ of machine learning techniques) has expressed his concerns about AI and the uncertain potential for AI systems to develop a different but more effective form of intelligence, in which many interacting AIs communicate at electronic speed with access to the accumulation of human knowledge (both well validated and spurious). We are already seeing AIs that, at present with human help, generate computer code that can feed into the development of another, more powerful and effective, generation of AI systems. We can anticipate that it will not be too long before AI systems are critiquing and improving themselves in a general (not task-specific) way without much, if any, human intervention. Geoff Hinton himself believes that it is very difficult to predict how AI will develop beyond a five-year horizon. While the white paper acknowledges the pace of development, it offers no mitigation of a risk that might emerge from the fog within the lifetime of a single government. We cannot see how either a moratorium on developments of AI or its regulation is likely to prevent an AI ‘arms race’ amongst nations or between companies. This situation calls for a quite different strategy, and our proposals below address this.

Thirdly, we note that the UK cannot act alone with respect to development of AI or its regulation.  While the UK may aspire to being amongst the leading innovators in AI, it is likely to be dwarfed by the US, China, and the EU, both in its investment in AI and in its regulation.  With respect to development this means a potential ‘arms race’ that can only accelerate developments and heighten the existential risks.  With respect to regulation this means that the UK, while it might be able to influence the debate about regulation, could in the end be subsumed into a regulatory regime largely defined by other bigger players.

Fourthly, we note that on first reading the white paper comes across as a ‘do nothing’ approach (light touch regulation and minimal organisational change).  It creates the impression of lacking leadership, firm direction, and moral authority.  AI is an area of increasing expert and public concern.  It is dominated by multi-national commercial players who have already demonstrated an inability to adopt adequate safeguards in relation to harms caused by technology.  Only a state can take the moral lead necessary to control large, short-term commercial interests that often have little interest in minimising harms.

Fifth, the white paper fails to distinguish between the types of AI system that do harm and those that clearly confer benefit. Its laissez-faire approach fails to acknowledge the full impact of either past or future harms. The white paper refers to regulating ‘use’ rather than ‘technology’. We would argue for regulating with respect to ‘harms’. There is insufficient explanation as to the purpose of regulation as a mechanism for containing the risk of harms.

We support the principles generally, and especially the principle of contestability and redress. Compensation and redress with respect to ‘harms’ gets directly to the purpose of regulation and would encourage the identification, anticipation and mitigation of harms. Making contestability easy is essential to the fuller understanding of the fast-changing impacts of technology. Ensuring that redress in cases of transgression fully compensates individuals and society (i.e. compensates for wider social harms) is the only language big players are likely to understand. The burden of proof of no harm (or of benefit clearly and significantly outweighing harms) should be on the companies developing AI and their products, rather than on the individual or society. Contestability and redress are far less abstract than principles like fairness and are the route to operationalising what the more abstract principles mean in practice. If legislation is to have any teeth, it needs to do everything it can to facilitate contestability and to compensate fully for both individual and social harms.

Sixth, whilst we welcome the white paper and the opportunity it offers to address an important topic, we believe that, by not adequately taking the above into account, the white paper is flawed because it addresses the wrong question. This is highlighted by the huge gulf between the white paper’s position of light-touch regulation and the perception by experts in the industry that AI may constitute the world’s biggest existential threat. There is clearly a significant mismatch in the frames of reference behind these two apparently opposing positions.

We believe that rather than addressing the extent of regulation (whether it should be light-touch or not), the white paper should be addressing the question ‘what sorts of AI should the UK be developing?’. In answering this question, we point to an important role for the UK, for which there may be a gap in the market, and a vital role in the development of AI systems generally.

Proposed revision to emphasis of the white paper

We have a suggestion that we believe should be considered at the highest levels of government, and that could steer the UK onto a path that would not only be good for the UK but could also benefit all nations. It avoids the UK becoming implicitly complicit in a dangerous AI arms race, it capitalises on strengths that are deeply rooted in British culture, it provides an excellent opportunity for commercial exploitation of AI technology, and it enhances the brand and credibility of the UK as a leader in a matter of importance across the world.

So, what is the suggestion? In one sentence: the UK Government should set up mechanisms that support AI developments designed to prevent harms (and that capitalise on some strengths in ethical big data), and that discourage other kinds of development.

Mechanisms:  The mechanisms we propose are financial incentives (e.g. grants and tax-breaks in the case of favoured developments, and additional tax burdens and regulation in the case of discouraged developments). Such an approach provides a more focused, useful and effective regime than that proposed in the white paper. It operationally defines what is encouraged and discouraged without introducing heavy regulation that will quickly date or stifle innovation. It clearly indicates the areas where the UK can lead the world, and where other nations may have neither the capability nor the motivation to compete, while still seeing the benefits of, and supporting, the UK’s initiative.

Areas to Encourage:  While other nations focus on the mainstream development of AI systems, our suggested approach is to encourage focused specialisation on particular AI technologies and applications. These are ethical AI, AI risk mitigation and big data applications (e.g. health).

By coordinating and integrating the skills that UK universities have built up, particularly in the social sciences, computing and the creative industries, the UK can build world-class interdisciplinary teams designed to address some of the hard problems of AI. These include:

  • AI safety and the checking of AI decision-making
  • truth verification, anomaly detection and fraud/fake detection
  • accountability, responsibility and legal liability in relation to AI systems
  • identity verification, protection and management
  • collective intelligence and citizen participation in political decision-making
  • legal judgement and moral reasoning
  • traceability

Included in the above are: watermarking technologies, reflective and corrective algorithms, architectures to facilitate citizen participation in decision-making, anomaly detection, personal/user agents, cyber defence and neutralisation systems, open decision-making and transparency, blame logics (to help formalise the allocation of accountability, responsibility and redress for harms), explanation systems, scientific truth verification, supply chain traceability (e.g. food, energy, clothing), health and safety checking systems, checking for adherence to law and regulation, and many other specific applications that would help ensure the integrity of AI and other systems.

Rationale and Discussion

The approach has parallels with the way in which the UK has encouraged the development of green / carbon neutral technologies. The impacts and benefits accrue not just to the UK but to individuals everywhere and humanity as a whole. Hence the products and services are welcomed by governments, new businesses and citizens alike. The market for these AI developments is world-wide and wide open.

The UK also has the opportunity to capitalise on the use of high quality big data sets.  In particular, data held by the NHS needs to be both protected and exploited by AI.  The UK can use big datasets like health data without either compromising privacy or selling the crown jewels. How?  These big data sets can be used to train AI systems.  The algorithms produced by this training have large commercial value.  They do not expose the raw data itself so patient privacy is maintained and the ownership of the raw data is preserved.
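As a minimal, hedged sketch of this separation between data and model (an illustration only, not an NHS system, and not by itself a privacy guarantee): a model is fitted inside a secure data environment and only the learned parameters are exported. The dataset, the library (scikit-learn) and the export format are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: train inside a secure environment, export only the model.
import json
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_inside_secure_environment() -> dict:
    # Stand-in for protected health records that never leave the environment.
    X = np.random.rand(1000, 5)                  # anonymised clinical features (invented)
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # invented outcome label
    model = LogisticRegression().fit(X, y)
    # Only the fitted coefficients are released for licensing, not the records.
    return {"coef": model.coef_.tolist(), "intercept": model.intercept_.tolist()}

exported = train_inside_secure_environment()
print(json.dumps(exported))  # the shareable artefact; the raw data stays behind
```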

The UK has an opportunity to act on the world stage in a statesman-like way with respect to AI. It could act with its eye firmly on helping to mitigate the risks of AI systems by building principles, methods, skills and tools that draw on its already strong capabilities in providing legal frameworks, regulatory systems, democratic government and higher education. AI is not just another technology. It is a technology that will have a profound effect on humanity, and the UK should position itself as a forward thinker and actor. It need not join a knee-jerk competitive scrabble to develop short-term commercial AI systems – a race that it will surely lose to bigger international players.

Instead, it could develop a critical mass of capability that steers AI towards beneficial applications and provides the tools to mitigate AI risks. The risks have already materialised through the use of AI in marketing, manipulating news feeds, deep fakes, the dissemination of false and misleading information and so on. We already face the possibility of the use of AI by organised crime, rogue states and borderline industries like gambling. Look at the way the UK in particular has fallen behind in fraud detection. We need a concerted focus and resources to stay ahead of the threats made possible by AI, addressing both short-term threats like fraud and the longer-term emerging existential threats (and in this case ‘long term’ may be only five years). The UK could develop a commercial edge by leading in ethical AI, AI risk mitigation and the use of big data in training algorithms.

For each of the principles set out in the white paper (safety, transparency, explainability, fairness, accountability, contestability and redress) it is possible to identify and develop scenarios illustrating risk. For example, how might a well-resourced actor build AI systems that threatened safety, transparency etc., either malevolently or inadvertently? Such scenarios might drive the development of AI tools that can ‘police’ other AI systems and help counter the risks to these worthwhile principles.

All the example areas above require a multi-disciplinary approach. They need to be addressed from both a social and a technical perspective. They need to be developed rapidly to address the many threats and risks posed by the ready availability of AI platforms. As we have seen with the use of AI in social media, these new AI platforms can be used in many nefarious ways ranging from the criminal to unacceptable exploitation (both commercial and political). There are also many possible applications of AI that can help address currently intractable UK and world problems. These too can be encouraged.

Recommended changes to the white paper

In order to implement the above we make the following recommendations:

The white paper, while useful as it stands, should be re-oriented (and supplemented) to:

  • position the UK as the leading developer of products and services to support the development (and commercial exploitation) of ethical AI, AI risk mitigation and AI training data
  • identify ethics as a primary driver of the policy alongside innovation and commercial exploitation
  • explicitly set out the areas of AI development where harms and potential harms have already been identified and cite the types of developments and applications that potentially lead to harms
  • propose the development of mechanisms to identify, measure and allocate the accountability / responsibility for harms as a basis for determining appropriate redress
  • explicitly identify areas of AI development to be encouraged and only use examples that conform to the areas encouraged
  • commission the development of a test (that could potentially be implemented as an AI system trained on examples) that would score proposed developments for their conformity with the types of development to be encouraged (i.e. ethical and commercially promising) – a toy illustration follows this list
  • set out mechanisms by which these areas might be encouraged (e.g. grants; tax-breaks; technical, training and managerial support)
  • set out a strategy to develop a world-leading workforce able to develop products and services in the areas to be encouraged
  • develop scenarios that illustrate risks to the principles set out in the white paper (safety, transparency, explainability, fairness, accountability, contestability and redress) and use these to drive the development of AI systems to mitigate the risks
  • set out deterrents to discourage potentially criminal, harmful and otherwise unacceptable developments (e.g. heavy regulation and punitive taxation)
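To make the ‘conformity test’ recommendation concrete, the following is a toy, purely illustrative sketch of how such a scorer might be prototyped as a small text classifier trained on labelled example proposals. The library (scikit-learn), the example proposals and their labels are invented for illustration and are not part of the consultation response.

```python
# Illustrative sketch only: a toy "conformity scorer" trained on labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical proposal summaries labelled 1 (encouraged: ethical AI, risk
# mitigation, safe big-data use) or 0 (discouraged).
proposals = [
    "Watermarking tool to trace AI-generated media",
    "Anomaly detection service for scientific fraud",
    "Engagement-maximising recommender for gambling apps",
    "Scraping personal data to build targeted political ads",
]
labels = [1, 1, 0, 0]

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(proposals, labels)

new_proposal = ["Explanation system that audits loan-approval models"]
print(scorer.predict_proba(new_proposal)[0][1])  # conformity score in [0, 1]
```

In practice such a test would need far more than a keyword-level classifier, but the sketch shows the shape of the mechanism: score a proposal, then attach grants, tax-breaks or extra scrutiny to the result.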

Additional Comments

Otherwise, we like the iterative and sandbox approaches, and especially in light of the above comments we support centralised coordination and oversight.

We felt that case study 2.1, relating to the use of AI to determine insurance premiums, might send the wrong messages about useful applications. It is arguable that such an application may go against the principle of insurance as a mechanism for fairly spreading risk. In general, the applications used to illustrate the types of AI development that are to be encouraged need to be thought through more explicitly and illustrate how the principles play through into their selection. The case studies should clearly and explicitly exemplify the operation of the principles. Indeed, the AI applications that should be encouraged are those that would implement the principles – building AIs that aim to achieve greater safety, transparency, explainability, fairness, accountability and contestability, and that facilitate determining redress, are exactly the sort of applications where we should position the UK to become world leaders. Applications like these would have great value to society, would be commercially valuable in a world where business would benefit from the greater stability they might engender, and would put the UK at the forefront of AI safety.

Help us campaign for safer AI

Artificial Intelligence has been rapidly advancing in recent years, with its applications exploding across various industries. However, as its use becomes more and more ubiquitous, concerns have been raised about the ethical implications of AI. To address these concerns, AIethics.ai invites you to join our movement, which is urging a more robust ethical approach to AI regulation in the UK government’s white paper consultation.

To support our response and become part of a growing body pressuring the government, join here

Related Links

GOVERNMENT WHITE PAPER CONSULTATION ON REGULATION OF AI
The UK government published a white paper called ‘A Pro-Innovation Approach to AI Regulation’.  This is available at:
The white paper references a March 2023 report by Sir Patrick Vallance, the Government Chief Scientific Adviser, called ‘Pro-innovation Regulation of Technologies Review – Digital Technologies’ available at:
The white paper invites consultation by 21st June (see Appendix C). 
CALL FOR A PAUSE IN THE DEVELOPMENT OF AI
There has also been a call by AI Experts for a pause in development of advanced AI while we take stock of direction and regulation.  See the open letter and sign it if you want at:

– Robots & ToM 2

Do Robots Need Theory of Mind? Part 2

https://unsplash.com/@henkmul

Why Robots might need Theory of Mind (ToM)

Existential Risk and the AI Alignment Problem

Russell (2019) argues that we have been thinking about building artificial intelligence (AI) systems the wrong way. Since its inception, AI has attempted to build systems that can achieve ‘their own’ goals, albeit that we might give them those goals in the first instance. Instead, he says, we should be building AIs that understand ‘the preference structure’ that a person has and attempt to satisfy goals within the constraints of that preference structure.

In this way, the AI will be able to understand that acting to achieve one goal (e.g. getting a coffee) may interact or interfere with other preferences, goals or constraints (e.g. not knocking someone out of the way in the process) and thereby moderate its behaviour. An AI needs to understand that a goal is not there to be achieved ‘at all cost’. Instead it should be achieved taking into account many other preferences and priorities that might moderate it. Russell argues that if we think of building AIs in this way, we may be able to avoid the existential risk that superhuman AIs will eventually take over, and either deliberately or inadvertently wipe out humanity.

This is an example of what AI researchers have termed ‘the AI alignment problem’, which potentially creates an existential risk to humanity if we find ourselves, having built super-intelligent machines, unable to control them. Nick Bostrom (Bostrom 2014) has also characterised this threat using the example of setting an AI the goal of producing paperclips: the AI takes this so literally that, in the single-minded execution of the goal and with no appreciation of when to stop, it destroys humanity (for example, in its need for more raw materials). Several other researchers have addressed the AI alignment problem (mainly in terms of laws, regulations and social rules), including Taylor et al. (2017), Hadfield-Menell & Hadfield (2019), Vamplew et al. (2018), and Hadfield-Menell, Andrus & Hadfield (2019).

Russell (2019) goes on to describe how an AI should always have some level of uncertainty about what people want. Such uncertainty would put a check on the single-minded execution of a goal at all cost. It would drive a need for the AI to keep monitoring and maintaining its model of what a person might want at any point in time. It would require the AI to keep checking that what it was doing was ‘on-track’ or ‘aligned’ with a person’s whole preference structure. So, if, for example, you had instructed your self-driving car to take you to the airport and you received a message during the trip that your child had been in a road accident, the AI might recognise this as significant, and check whether you wanted to change your plans.

Russell arrives at this position from addressing the problem of existential risk. It is a proposed solution to the AI alignment problem. Working within this frame of reference, he proposes solutions like ‘Cooperative Inverse Reinforcement Learning’ (Malik et. al. 2018) whereby the Autonomous Intelligent System (AIS) attempts to infer the preference structure of a person from an observation of behaviour. This, indeed, seems to be a sensible approach.
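As a purely illustrative sketch of this idea (a toy, not Malik et al.’s algorithm), the code below infers a hidden preference weight from a handful of observed choices using a Boltzmann-rational model of behaviour. The preference grid, the options and the observed choices are all invented.

```python
# Illustrative sketch: Bayesian inference of a hidden preference weight from choices.
import numpy as np

# Candidate values of a hidden weight w in [0, 1]:
# w = importance of "speed" versus "politeness" when fetching coffee.
w_grid = np.linspace(0.0, 1.0, 101)
posterior = np.ones_like(w_grid) / len(w_grid)   # uniform prior over w

# Each option scores (speed, politeness); the person picks one option per round.
options = np.array([[0.9, 0.2], [0.4, 0.8]])
observed_choices = [1, 1, 0]   # mostly the polite-but-slower option

for choice in observed_choices:
    # Boltzmann-rational likelihood of each choice under every candidate w.
    utilities = w_grid[:, None] * options[:, 0] + (1 - w_grid[:, None]) * options[:, 1]
    probs = np.exp(utilities) / np.exp(utilities).sum(axis=1, keepdims=True)
    posterior *= probs[:, choice]
    posterior /= posterior.sum()

print("Expected weight on speed:", (w_grid * posterior).sum())
```

The more behaviour the system observes, the tighter its posterior over the person’s preferences becomes, while the residual uncertainty is exactly what should keep it checking rather than acting ‘at all cost’.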

However, the exact mechanism by which an AIS coordinates its actions with a person or people may well depend on it being able to accurately infer people’s mental states. Otherwise it might have to explicitly check (e.g. by asking), every few seconds, whether what it was doing was acceptable, and it would need to ‘read’ when a person found its behaviour unacceptable (e.g. by noting the frown when about to hit somebody on its mission to get the coffee).

The AI alignment problem is precisely the problem that every person has when interacting with another human being. When interacting with somebody else we are unable to directly observe their internal mental states. We cannot know their preference structure and we can only take on trust that their intentions are what they say they are. Their real intentions, beliefs, desires, values, and boundaries could, in principle, be anything. What we do is infer their intentions from their behaviours, including what they say (and what we understand from this). Intentions, beliefs, and preferences are all hidden variables that may be the underlying causes of behaviours but, because they are unobservable, can only be guessed at.

Russell takes this on board and understands that the alignment problem is one that exists between any two agents, human or artificial. He is saying that robots need to be equipped with similar mechanisms to those that people generally have. These are the mechanisms that can model human beliefs, preferences and intentions by making inferences from observations of behaviour. Fortunately, we are not discovering and inventing these mechanisms for the first time.

Alignment with What?

A potential problem with having an AIS infer, reason and act on its analysis of another person’s mental states is that it may not accurately predict the consequences of its own actions. An action designed to do good may, in fact, do harm. In addition to being mistaken about the direction of its effect on mental states (positive or negative) it may also be inaccurate about the extent. So, an act designed to please may have no effect, or an act that is not intended to cause either pleasure or displeasure may have an effect.

This is quite apart from all manner of other complications that we might describe as its ‘policies’. Should, for example, an AIS always act to minimise harm and maximise a person’s pleasure? How should an AIS react if a person consistently fails to take medication prescribed for their benefit? How should it trade off short and longer-term benefits? How does an AIS reconcile differences between two or more people, between a person’s legal obligations and their desires, or between the interests of a person and another organisation (a school, a company, their employer, the tax office and so on)?

In all these cases, the issue comes down to how the AIS evaluates its own choice of possible actions (or inaction) and which stakeholders it takes into account when performing this evaluation. Numerous guidelines have been produced in recent years to help guide developers of AI systems. The good news is that there is considerable agreement about the kinds of principles that apply – not contravening human rights, not doing harm, increasing wellbeing, transparency and explainability in how the AIS arrives at decisions, elimination of bias and discrimination, and clear accountability and responsibility for the AIS’s decisions. The main mechanism for putting these principles into practice is the training and control (guidelines, standards and legal requirements) of companies, designers and developers. Comparatively little has been proposed about the controls that might be embedded within the AIS itself, and even less about the principles and mechanisms that might be used to achieve this.
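By way of a hedged sketch only (not a proposal drawn from the guidelines cited above), the code below illustrates one simple form such an embedded control could take: candidate actions are scored against each stakeholder, any action whose predicted harm to some stakeholder crosses a threshold is vetoed, and the remainder are ranked by weighted benefit. The stakeholders, weights, threshold and scores are all invented.

```python
# Illustrative sketch of a harm-veto plus weighted-benefit control inside an AIS.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effects: dict[str, float]   # predicted effect on each stakeholder, in [-1, 1]

HARM_VETO = -0.5                # no stakeholder may be harmed more than this
WEIGHTS = {"user": 0.5, "bystanders": 0.3, "society": 0.2}

def choose(actions: list[Action]) -> Action | None:
    permitted = [a for a in actions
                 if all(v >= HARM_VETO for v in a.effects.values())]
    if not permitted:
        return None             # no acceptable action: do nothing, refer to a human
    # Rank the surviving actions by weighted benefit across stakeholders.
    return max(permitted, key=lambda a: sum(WEIGHTS[s] * v for s, v in a.effects.items()))

candidates = [
    Action("fetch coffee quickly", {"user": 0.8, "bystanders": -0.7, "society": 0.0}),
    Action("fetch coffee carefully", {"user": 0.6, "bystanders": 0.0, "society": 0.0}),
]
print(choose(candidates).name)  # -> "fetch coffee carefully"
```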

We could turn to economics for models of preference and choice, but these models are discredited by findings in the social sciences (e.g. prospect theory), and many would argue that the incentives encouraged by such models are precisely what has led to existential risks like nuclear arms races and climate change. We would therefore need to think very carefully before using these same models to drive the design of artificial intelligences, because of their potential to add yet another existential risk.

The existential risk discussed in relation to AISs has tended to focus on the fear that if an artificial intelligence is given autonomy to achieve its objectives without constraint, then it might do anything. Even simple systems can become unpredictable very quickly, and if a system is unpredictable it is out of control. In the anthropomorphic way characteristic of human beings, we project onto the AIS that it would be concerned about its own self-preservation, or that it would discover that self-preservation was a necessary pre-condition to attaining its goal(s). We further project that if it adopts the goal of self-preservation, then it might pursue this at all cost, putting its own self-preservation ahead of even that of its creators. There are some good reasons for these fears, because goals like self-preservation and the accumulation of resources are instrumental to the achievement of any other goal, and an AIS might easily reason that out (Bostrom 2012). There have been challenges to this line of reasoning, but that debate is not a central concern here. Rather, I am more concerned with whether an AIS can align with the goals of an individual using the same sorts of social cues that we all use in the informal ways in which we, in general, cooperate with each other.

If we are already concerned that the economic and political systems currently in place can have some undesirable consequences, like other existential risks and concentrations of wealth in the hands of a few, then the last thing we would want to do is build into AISs the same mechanisms for evaluating choices as those assumed by classical economic theory. In these posts, I look primarily to psychology (and sometimes philosophy) to provide evidence and analysis of how people make decisions in a social world, particularly one in which we are taking into account our beliefs about other people’s mental states. Whether this provides an answer to the alignment problem remains to be seen, but it is, at least, another perspective that may help us understand the types of control mechanisms we may need as the development of AIS proceeds at an ever increasing pace.

Cooperation and Collaboration

https://unsplash.com/@brett_jordan

The paradigm in which robots act as slaves to their human masters is gradually being replaced by one in which robots and humans work collaboratively together to achieve some goal (Sheridan 2016). This applies for individual human-robot interactions and for multi-robot teams (Rosenfeld et al. 2017). If robots and AISs generally could infer the mental states of the people around them when performing complex tasks, then this could potentially lead to more intuitive and efficient collaboration between the person and the machine. This requires trust on the part of the human that the robot will play its part in the interaction (Hancock et al. 2011).

As a step on the way, systems have been built where robots collaborate with each other without communication to perform complex tasks using only visual cues (Gassner et al. 2017). Collaboration is especially useful in situations like care giving (Miyachi, Iga & Furuhata 2017) where giving explicit verbal instructions might be difficult (e.g. in cases of Alzheimer’s or autism). Gray et al. (2005) proposed a system of action parsing and goal simulation whereby a robot might infer the goals and mental states of others in a collaborative task scenario.

Potential Benefits

Equipping AISs with the ability to recognise, infer and reason about the mental states of others could have some extraordinary advantages. Not only might we avoid existential risk to humanity (and could there be anything of greater significance) and make our interactions with robots and AISs generally easy and intuitive, but we could also be living alongside intelligent artefacts that have a robust capacity to carry out moral reasoning. Not only could they keep themselves in check, so that they made only justifiable moral decisions with respect to their own actions, but they might also help us adjudicate our own actions, offering fair, reasonable and justifiable remedies to human transgressions of the law and other social codes. They might become reliable and trustworthy helpers and companions, politely guiding us in solving currently intractable world problems, and protecting us from our own worst human biases, vices, and deficiencies. If they turned out to be better at moral reasoning than people, then, like wise philosophers, they could offer us considered advice to help us achieve our goals and deal with the dilemmas of everyday life.

However, there is much that stands in the way of achieving this utopian relationship with the intelligent artefacts we create, especially if we want an AIS to infer mental states in the same way a person might, by observation and perhaps asking questions. We are beginning to understand patterns of neuronal activity sufficiently well to infer some mental states. For example, Haynes et al. (2007) report being able to tell which of two choices a person is making by looking at neural activity. Elon Musk is creating ‘Neural Lace’ for such a purpose (Cuthbertson 2016), but could mental states be inferred using non-invasive approaches?

In particular, could we create AISs that could infer our mental states without inadvertently creating an even greater and more immediate existential risk? I will later argue that giving AISs theory of mind, without them having the same sort of controls on social behaviour that empathy gives people, could be a disaster that heightens existential risk in our very attempt to avoid it. In subsequent posts I first consider whether the artificial inference of human mental states is even a credible possibility.

References

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Bostrom, N., (2014). Superintelligence: Paths, Dangers, Strategies (1st. ed.). Oxford University Press, Inc., USA.

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Gassner, M., Cieslewski, T., & Scaramuzza, D. (2017). Dynamic collaboration without communication: Vision-based cable-suspended load transport with two quadrotors. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 5196-5202). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989609

Gray, J., Breazeal, C., Berlin, M., Brooks, A., & Lieberman, J. (2005). Action parsing and goal inference using self as simulator. In Proceedings – IEEE International Workshop on Robot and Human Interactive Communication (Vol. 2005, pp. 202–209). https://doi.org/10.1109/ROMAN.2005.1513780

Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 417–422). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314250

Hadfield-Menell, D., Andrus, M., & Hadfield, G. K. (2019). Legible normativity for AI alignment: The value of silly rules. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 115–121). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314258

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Malik D., Palaniappan M., Fisac J., Hadfield-Menell D., Russell S., and Dragan A., (2018) “An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning.” In Proc. ICML-18, Stockholm.

Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. In Procedia Computer Science (Vol. 112, pp. 1254-1262). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.08.078

Rosenfeld, A., Agmon, N., Maksimov, O., & Kraus, S. (2017). Intelligent agent supporting human-multi-robot team collaboration. Artificial Intelligence, 252, 211-231. https://doi.org/10.1016/j.artint.2017.08.005

Russell S., (2019), ‘Human Compatible Artificial Intelligence and the Problem of Control’, Allen Lane; 1st edition, ISBN-10: 0241335205, ISBN-13: 978-0241335208

Sheridan, T. B. (2016). Human-Robot Interaction: Status and Challenges. Human Factors, 58(4), 525-32. https://doi.org/10.1177/0018720816644364

Taylor, J., Yudkowsky, E., Lavictoire, P., & Critch, A. (2017). Alignment for Advanced Machine Learning Systems. Miri, 1–25. Retrieved from https://intelligence.org/files/AlignmentMachineLearning.pdf

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

– Robots & ToM

Do Robots Need Theory of Mind? – part 1

Robots, and Autonomous Intelligent Systems (AISs) generally, may need to model the mental states of the people they interact with. Russell (2019), for example, has argued that AISs need to understand the complex structures of preferences that people have in order to be able to trade off many human goals, and thereby avoid the problem of existential risk (Boyd & Wilson 2018) that might follow from an AIS with super-human intelligence doggedly pursuing a single goal. Others have pointed to the need for AISs to maintain models of people’s intentions, knowledge, beliefs and preferences, in order that people and machines can interact cooperatively and efficiently (e.g. Lemaignan et al. 2017, Ben Amor et al. 2014, Trafton et al. 2005).

However, in addition to risks already well documented (e.g. Müller & Bostrom 2016) there are many potential dangers in having artificial intelligence systems closely observe human behaviour, infer human mental states, and then act on those inferences. Some of the potential problems that come to mind include:


  • The risk that self-determining AISs will be built with a limited capability of understanding human mental states and preferences, and that humans will lose control of the technology (Meek et al. 2017, Russell 2019).
  • The risk that the AIS will exhibit biases in what it selects to observe, infer and act on that would be unfair (Osoba & Welser 2017).
  • The risk that the AIS will use misleading information, make inaccurate observations and inferences, and base its actions on these (McFarland & McFarland 2015, Rapp et al. 2014).
  • The risk that even if the AIS observes and infers accurately, its actions will not align with what a person might do, or may have unintended consequences (Vamplew et al. 2018).
  • The risk that an AIS will misuse its knowledge of a person’s hidden mental states, resulting in either deliberate or inadvertent harm or criminal acts (Portnoff & Soupizet 2018).
  • The risk that people’s human rights and rights to privacy will be infringed because of the ability of AISs to observe, infer, reason and record data that people have not given consent to and may not even know exists (OECD 2019).
  • The risk that if the AIS were making decisions based on unobservable mental states, any explanations of the AIS’s actions based on them would be difficult to validate (Future of Life Institute 2017, Weld & Bansal 2018).
  • The risk that the AIS would, in the interests of a global common good, correct for people’s foibles, biases and dubious (unethical) acts, thereby taking away their autonomy (Timmers 2019).
  • The risk that, using AISs, a few multi-national companies and countries will collect so much data about people’s explicit and inferred hidden preferences that power and wealth will become concentrated in even fewer hands (Zuboff 2019).
  • The risk that corporations will rush to be the first to produce AISs that can observe, infer and reason about people’s mental states and in so doing will neglect to take safety precautions (Armstrong et al. 2016).
  • The risk that, in acting out of some greater interest (i.e. the interests of society at large), an AIS will act to restrict the autonomy or dignity of the individual (Zardiashvili & Fosch-Villaronga 2020).
  • The risk that an AIS would itself take unacceptable risks based on inferred, uncertain mental states, and so cause a person or itself harm (Merritt et al. 2011).

Much has been written about the risks of AI, and in the last few years numerous ethical guidelines, principles and recommendations have been made, especially in relation to the regulation of the development of AISs (Floridi et. al. 2018). However, few of these have touched on the real risk that AISs may one day develop such that they can gain a good understanding of people’s unobservable mental states and act on them. We have already seen Facebook being used to target advertisements and persuasive messages on the basis of inferred political preferences (Isaak & Hanna 2018).

In future posts I look at the extent to which an AIS could potentially have the capability to infer other people’s mental states. I touch on some of the advantages and dangers and identify some of the issues that may need to be thought through.

I argue that AISs generally (not only robots) may need both to model people’s mental states, known in the psychology literature as Theory of Mind – ToM (Carlson et al. 2013), and to have some sort of emotional empathy. Neural nets have already been used to create algorithms that demonstrate some aspects of ToM (Rabinowitz et al. 2018). I explore the idea of building AISs with both ToM and some form of empathy, and the idea that unless we are able to equip AISs with a balance of control mechanisms we run the risk of creating AISs that have ‘personality disorders’ that we would want to avoid.

In making this case, I look at whether it is conceivable that we could build AISs that have both ToM and emotional empathy, and, if it were possible, how these two capacities would need to be integrated to provide an effective overall control mechanism. Such a control mechanism would involve both fast (but sometimes inaccurate) processes and slower (reflective and corrective) processes, similar to the distinction Kahneman (2011) makes between system 1 and system 2 thinking. The architecture has the potential for the fine-grained integration of moral reasoning into the decision making of an AIS.
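As a minimal sketch of this dual-process idea (only loosely analogous to Kahneman’s system 1 and system 2, and with invented policies, observations and thresholds), the code below lets a fast, cheap policy act by default while a slower, reflective check reviews and can override its choice when confidence is low.

```python
# Illustrative sketch of a fast-policy / slow-deliberation control loop.

def fast_policy(observation: str) -> tuple[str, float]:
    """System-1-like: a cheap heuristic returning (action, confidence)."""
    if "frown" in observation:
        return "pause and apologise", 0.4    # low confidence: needs review
    return "continue task", 0.9

def slow_deliberation(observation: str, proposed: str) -> str:
    """System-2-like: slower, reflective check of the proposed action."""
    # A real system might simulate consequences or consult the person here.
    if proposed == "pause and apologise":
        return "ask the person what they would prefer"
    return proposed

def decide(observation: str, confidence_threshold: float = 0.7) -> str:
    action, confidence = fast_policy(observation)
    if confidence < confidence_threshold:
        action = slow_deliberation(observation, action)   # corrective override
    return action

print(decide("person frowns as the robot reaches for the cup"))
print(decide("person nods"))
```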

What I hope to add to Russell’s (2019) analysis is a more detailed consideration of what is already known in the psychology literature about the general problem of inferring another agent’s intentions from their behaviour. This may help to join up some of the thinking in AI with some of the thinking in cognitive psychology in a very broad-brushed way such that the main structural relationships between the two might come more into focus.

Subscribe (top left) to follow future blog posts on this topic.

References

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human-robot cooperation tasks. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 2831–2837). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2014.6907265

Boyd, M., & Wilson, N. (2018). Existential Risks. Policy Quarterly, 14(3). https://doi.org/10.26686/pq.v14i3.5105

Carlson, S. M., Koenig, M. A., & Harms, M. B. (2013). Theory of mind. Wiley Interdisciplinary Reviews: Cognitive Science, 4(4), 391–402. https://doi.org/10.1002/wcs.1232

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Floridi, L., Cowls, J., Beltrametti, M. et al., (2018), AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 doi:10.1007/s11023-018-9482-5

Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. Future of Life, 1–23. Retrieved from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Lemaignan, S., Warnier, M., Sisbot, E. A., Clodic, A., & Alami, R. (2017). Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, 45–69. https://doi.org/10.1016/j.artint.2016.07.002

McFarland, D. A., & McFarland, H. R. (2015). Big Data and the danger of being precisely inaccurate. Big Data and Society. SAGE Publications Ltd. https://doi.org/10.1177/2053951715602495

Meek, T., Barham, H., Beltaif, N., Kaadoor, A., & Akhter, T. (2017). Managing the ethical and risk implications of rapid advances in artificial intelligence: A literature review. In PICMET 2016 – Portland International Conference on Management of Engineering and Technology: Technology Management For Social Innovation, Proceedings (pp. 682–693). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PICMET.2016.7806752

Merritt, T., Ong, C., Chuah, T. L., & McGee, K. (2011). Did you notice? Artificial team-mates take risks for players. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 338–349). https://doi.org/10.1007/978-3-642-23974-8_37

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Oecd/Legal/0449. Retrieved from http://acts.oecd.org/Instruments/ShowInstrumentView.aspx?InstrumentID=219&InstrumentPID=215&Lang=en&Book=False

Osoba, O., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. https://doi.org/10.7249/rr1744

Portnoff, A. Y., & Soupizet, J. F. (2018). Artificial intelligence: Opportunities and risks. Futuribles: Analyse et Prospective, 2018-September(426), 5–26.

Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., & Botvinick, M. (2018). Machine Theory of mind. In 35th International Conference on Machine Learning, ICML 2018 (Vol. 10, pp. 6723–6738). International Machine Learning Society (IMLS).

Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate information. Memory and Cognition, 42(1), 11–26. https://doi.org/10.3758/s13421-013-0339-0

Russell S., (2019), ‘Human Compatible Artificial Intelligence and the Problem of Control’, Allen Lane; 1st edition, ISBN-10: 0241335205, ISBN-13: 978-0241335208

Timmers, P., (2019), Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds & Machines 29, 635–645 doi:10.1007/s11023-019-09508-4

Trafton, J. G., Cassimatis, N. L., Bugajska, M. D., Brock, D. P., Mintz, F. E., & Schultz, A. C. (2005). Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 35(4), 460–470. https://doi.org/10.1109/TSMCA.2005.850592

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

Weld, D. S., & Bansal, G. (2018). Intelligible artificial intelligence. ArXiv, 62(6), 70–79. https://doi.org/10.1145/3282486

Zardiashvili, L., Fosch-Villaronga, E. “Oh, Dignity too?” Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds & Machines (2020) doi:10.1007/s11023-019-09514-6

Zuboff S., (2019), The Age of Surveillance Capitalism, Profile Books

– Autonomy now

Making sense of a changing world

It’s difficult to make sense of a fast changing world. But could ‘autonomy’ be at the centre of it? I’ll explain.

There are two main themes – people and technology. Ideas about autonomy are changing for both. For people, it is a matter of their relationship to employment, government and the many institutions of society. For technology, it is the introduction of autonomous intelligence in a wide range of systems including phone apps and all manner of automated decision making systems that affect every aspect of our lives. There is also the increasing inter-dependency between people and technology, both empowering and constraining. Questions of autonomy are at the heart of these changes.

There have been times in history when it has not occurred to people that they could be autonomous in the broad scope of their lives. They were born into a time and place where the control of their destiny was not their own concern. They were conditioned to know their place, accept it and stay in it. In first world democracies, autonomy is, perhaps, a luxury of the here and now. It may not necessarily stay that way.


My particular interest is the way in which we are giving autonomy to the things that we create – computer algorithms, artificial intelligence systems and robots. But it’s broader than that. We all want the freedom to pursue our own goals, to self-determine. We are told repeatedly by an industry concerned with self-development and achieving success, that we should ‘find our authentic self’ and pursue the values and goals that really matter to us.

However, we can only do this within an environment of constraints – physical constraints, resource constraints, psychological constraints and social constraints. It is the dynamic between the individual and their constraints that is in constant flux and that I am trying to examine.

What’s Trending in Autonomy?

There are two main trends – one towards decentralisation and one towards concentrations of wealth and power. This seems something of a contradiction. How can both be true and, if so, where is this going?

There is a long-term trend towards decentralization. First we rejected the ancient gods as the controllers of nature. Much more recently we have started to question other sources of authority – governments, doctors, the church, the law, the mainstream media and many other institutions in society. As we as individuals have become more informed and empowered by technology, we have started to see the flaws in these ‘authorities’.

I believe, along with many other commentators, that we are heading towards a world where autonomy is not just highly valued but is also more possible than it ever has been. As society becomes better educated, and as technology enables greater information sharing and flexibility, we can, and perhaps inevitably will, move towards a more decentralized society in which both human and artificial autonomous agents increasingly interact with each other. The interactions will become more frequent, more informed, more fine-grained and more local. The technological infrastructure for this continues to roll out at an ever-increasing pace. Soon we will have 5G communications facilitating almost instantaneous communication between ever more intelligent and powerful devices – smart phones, autonomous vehicles, information servers, and a host of smart devices.

On the other hand, there is clear evidence of increased concentrations of wealth and power. Although estimates may vary, it seems that a greater proportion of the world’s wealth is held by fewer and fewer people. Stories abound of fewer than eight people owning more than half the world’s wealth. Economists like Thomas Piketty have documented in detail the evidence for such a trend.

There is clearly a tension between these trends. As power and wealth become more concentrated, manifesting in the form of surveillance capitalism (not ignoring surveillance by the state) and fake news, there is a fight back by individuals and other institutions.

Individuals increasingly recognise the intrusions on their privacy, and this is picked up (often belatedly) in legislation like GDPR and other moves to regulate. The checks and balances do work to modulate the dynamics of the struggle, but when they don’t work, the accumulated frustration at the loss of human dignity can become political and violent. Let’s take a closer look at autonomy.

Why do we need Autonomy?

We each have a biological imperative to survive. While we can count on others to some extent, ‘the buck stops’ with each of us as individuals. The more robust and resilient solutions are self-sufficiency and self-determination. It’s not a fail-safe but it takes out the risk that others might not be there for us all the time. It also appears to be a route to greater wellbeing. Learning, developing competence and mastery, being able to predict and hence increase the possibility that we can control, being less subject to constraints on our actions – all contribute to satisfaction, ultimately in the service of survival.

In the hierarchy of needs, having enough food, shelter, sleep and security releases some of your own resources. It provides the freedom to climb. Somewhere near the top of the hierarchy is what Maslow called self-actualisation – the discovery and expression of your authentic self. But unless you are exceptionally lucky, and find that your circumstances align perfectly with your authentic self, then a pre-requisite is to have freedom from the constraints that prevent you from getting there.

Interactions between people and machines

This is all the human side of autonomy – the bit that applies to us all. This is a world in which both people and artificial agents – computer algorithms, smart devices, robots etc. interact with each other. Interactions between people and people, machines and machines and people and machines are accelerating in both speed and frequency in order that each autonomous agent can achieve its own rapidly changing priorities and goals. There is nothing static or certain in this world. It is changing, ambiguous, and unpredictable.

Different autonomous agents have different goals and different value systems that drive them. For people these are different cultures, social norms and roles. For machines they relate to their different functions and circumstances in which they operate. For interactions to work smoothly there needs to be some stability in the protocols that regulate them. Autonomy may be a way into defining accountability and responsibility. It may lead us towards mechanisms for the justification and explanation of action. Neither machines nor people are very good at this, but autonomy may provide the key that unlocks our understanding of effective communication and protocols.

Still, that’s for later. Right now, this article is just focused on the concept of autonomy.
I hope you are convinced that this is an important and interesting subject. It is at the foundation of our relationships with each other and between people and the increasingly autonomous and intelligent agents we are creating.

Questions that need to be addressed


  • What do we mean by autonomy?
  • How do agents (people and machines) differ in the amount of autonomy they have?
  • Can we measure autonomy?
  • What examples of people, societies and artefacts can we think of that might help us understand what is and what is not autonomous?
  • What do we mean by autonomy when we talk about artificial autonomous intelligence systems?
  • Are the computer algorithms and robotic systems that we have today truly autonomous?
  • What would it mean to build an artificial intelligence that was truly autonomous?
  • What is the relationship between autonomy and morality?
  • Can we be truly autonomous if we are constrained by ethical principles and social norms?
  • If we want our intelligent artefacts to behave ethically, then how would we approach the design of those systems?

That’s quite a chunk of questions to get through. But they are all on the critical path to understanding how our own human autonomy and the autonomy that we build into artefacts, can relate to and engage with each other. They take us to a point where we can better understand the trade-offs every intelligent agent, be it human or artificial, has to make between the freedom to pursue its own goals and the constraints of living in a society of other intelligent agents.

It also reveals how, in people, the constraints of society are internalised. By adulthood they have become part of our internal control mechanisms. These internal controls have no absolute morality but reflect the circumstances and society in which we grow up. As our artefacts become increasingly intelligent we may need to develop similar mechanisms for their socialisation.

Definitions of Autonomy

The following definitions are taken from the glossary of the IEEE publication called ‘Ethically Aligned Design’ (version 1). The glossary has several definitions from different perspectives:

Ordinary language: The ability of a person or artefact to govern itself including formation of intentions, goals, motivations, plans of action, and execution of those plans, with or without the assistance of other persons or systems.

Engineering: “Where an agent acts autonomously, it is not possible to hold any one else responsible for its actions. In so far as the agent’s actions were its own and stemmed from its own ends, others cannot be held responsible for them” (Sparrow 2007, 63).

Government: “we define local [government] autonomy conceptually as a system of local government in which local government units have an important role to play in the economy and the intergovernmental system, have discretion in determining what they will do without undue constraint from higher levels of government, and have the means or capacity to do so” (Wolman et al 2008, 4-5).

Ethics and Philosophy: “Put most simply, to be autonomous is to be one’s own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self” (Christman 2015).

Medical: “Two conditions are ordinarily required before a decision can be regarded as autonomous. The individual has to have the relevant internal capacities for self-government and has to be free from external constraints. In a medical context a decision is ordinarily regarded as autonomous where the individual has the capacity to make the relevant decision, has sufficient information to make the decision and does so voluntarily” (British Medical Association 2016).

More on autonomy later. Sign up to the blog if you want to be notified.

Meanwhile, here are a couple of videos.

The first has an interesting take on autonomy: autonomy is not a matter of what you want, but of what you want to want. The more reflective you are about what you want, the more autonomous you are.

Youtube Video, What is Autonomy? (Personal and Political), Carneades.org, December 2018, 6:50 minutes

https://www.youtube.com/watch?v=z0uylpfirfM

The second is from a relatively new Youtube channel called ‘Rebel Wisdom’. It starts with the breakdown of trust in traditional media and moves on to themes of decentralisation.

Youtube Video, The War on Sensemaking, Daniel Schmachtenberger, Rebel Wisdom, August 2019, 1:48:49 hours

https://www.youtube.com/watch?v=7LqaotiGWjQ&t=17s

– Sex Robots

A Brief Summary by Eleanor Hancock


Sex robots have been making the headlines recently. We have been told they have the power to endanger humans or fulfil our every sexual fantasy and desire. Despite the obvious media hype and sensationalism, there are many reasons for us to be concerned about sex robots in society.

Considering the huge impact that sexbots may have in the realms of philosophy, psychology and human intimacy, it is hard to pinpoint the primary ethical dilemmas surrounding the production and adoption of sex robots in society, and to identify who stands to be affected the most.

This article covers the main social and ethical deliberations that currently surround the use of sex robots and what we might expect in the next decade.

What companies are involved in the design and sale of sex robots?

One of the largest and most well-known retailers of sex dolls and sex robots is Realbotix in San Francisco. They designed and produced ‘Realdolls’ for years, but in 2016 they released their sex robot Harmony, which has a corresponding phone application that allows users to ‘customise’ their robotic companion. The Spanish developer Sergi Santos also released Samantha, a life-sized gynoid sexbot that can talk and interact with users. When sex robots become more sophisticated and can gather intimate and personal data from us, we may have more reason to be concerned about who is designing and manufacturing them – and what they are doing with our sexual data.

What will sex robots look like?

The current market for sex dolls and robots has largely commodified the human body, with the female form appearing to be the more popular among most sex robot and doll retailers. That said, male sex robots appear to be increasing in popularity, and two female journalists have documented their experiences with male sex dolls. There are also instances of look-alike sex dolls that replicate and mimic celebrities. In response, sex robot manufacturers have had to make online statements about their refusal to replicate a person without the explicit permission of that person or their estate. The industry is proving hard to regulate, and the issue of copyright in sex robots may become a real ethical and social dilemma for policy makers. However, there are also examples of sex robots and dolls that do not resemble the human form, such as anime- and alien-style dolls.

Will sex robots impact gender boundaries?

Sex robots will always be genderless artifice. However, allowing sex robots to enter the human sexual arena may allow humans to broaden their sexual fantasies. Sex robots may even be able to replicate both genders through customisation and add-on parts. As mentioned previously, the introduction of genderless artifice that does not resemble humans may positively impact human sexual relations by broadening sexual and intimate boundaries.

Who will use sex robots?

Research findings on whether people would use sex robots vary considerably, and this fluctuation makes it difficult to pinpoint exactly who would use a sex robot and why. More intensive research into the motivations for using sex robots has highlighted complexities behind such a choice that mirror those of our own human sexual relationships. However, most studies have been consistent when reporting which gender is most likely to have sex with a robot, suggesting that males are more likely than females both to have sex with a robot and to purchase one.

Can sex robots be used to help those with physical or mental challenges access sexual pleasure?

Sex robots may allow people to practice, or receive, sexual acts that they are otherwise unable to access because of serious disabilities. The ethics of such a practice have divided radical feminists, who deny that sex is a human right, from those who think it could be medically beneficial and therapeutic.

Will sex robots replace human lovers?

There has not been enough empirical research on the effects of sexual relations with robots, or on the extent to which robots can reciprocate the qualities of a human relationship. However, it is reasonable to infer that some humans will form genuine sexual and/or intimate relationships with sex robots, which may reduce their desire to pursue human relationships. The Youtube sensation ‘Davecat’ highlights how a man and his wife have been able to incorporate sex dolls into their married life comfortably. In a similar vein, Arran Lee Wright displayed his sexbot on British daytime television and was supportive of the use of sexbots between couples.

Will sex robots lead to social isolation and exclusion?

Many academics already warn of the isolating impact technology can have on our real-life relationships. Smartphones and social media have increased our awareness of online and virtual relationships, and some academics believe sex robots are a sad reflection on humanity. There is a risk that some people may become more isolated as they choose robotic lovers over humans, but there is not enough empirical research to deliver a conclusion at this stage.

Will sex robot prostitutes replace human sex workers?

Although there have been examples of robot and doll brothels and rent-a-doll escort agencies, it is difficult to tell whether sex robots will ever be able to replace human sex workers completely. Some believe there are benefits in adopting robots as sex workers, and a 2012 paper suggested that by 2050 the Red Light District in Amsterdam would only facilitate sex robot prostitution. Escort agency owners and brothel owners have spoken about the reduction in management and time costs that using dolls or robots would deliver. However, sociological research from the sex industry suggests sex robots will have a tough time replacing all sex workers, and particularly escorts, who need a wide range of cognitive skills to do their job and to navigate a highly saturated and competitive industry.

How could sex robots be dangerous?

At this stage there is not enough research about sex robots to jump to any conclusions. Nonetheless, most roboticists and ethicists consider how humans interact with and behave towards robots to be a key factor in assessing the dangers of sex robots. The question is more about how we will treat sex robots than about the dangers they pose to humans.

Is it wrong to hurt a sex robot?

Sex robots will allow humans to explore sexual boundaries and avenues that they may not previously have been able to practice with humans. However, this could also mean that people choose to use sex robots to enact violent acts, such as rape and assault. Although some would argue that robots cannot feel, and that violence towards them is therefore less morally corrupt than violence towards humans, the violent act may still have implications by reinforcing such behaviours in society. If we enact violence on a machine that looks human, we may still associate our human counterparts with such artifice. Will negative behaviour we practice on sex robots become more acceptable to reciprocate on humans? Will the fantasy of violence on robots make it commonplace in wider society? Roboticists and ethicists are concerned about these issues when considering sex robots, but there is simply not enough empirical research yet. Kate Darling, however, believes there is already enough reason to consider extending legal protection towards social robots (see the Darling reference below).



References

Jason Lee, Sex Robots and the Future of Desire

Campaign Against Sex Robots
https://campaignagainstsexrobots.org/about/

Robots, men and sex tourism, Ian Yeoman and Michelle Mars, Futures, Volume 44, Issue 4, May 2012, Pages 365-371
https://www.sciencedirect.com/science/article/pii/S0016328711002850?via%3Dihub

Kate Darling, Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, in Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016; We Robot Conference 2012, University of Miami
http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf

Attitudes on ‘Sex robots will liberate the next generation of women’, Kialo debate
https://www.kialo.com/will-sex-robots-liberate-the-next-generation-of-women-4214?path=4214.0~4214.1
