
– Sex Robots

A Brief Summary by Eleanor Hancock


Sex robots have been making the headlines recently. We have been told they have the power to endanger humans or fulfil our every sexual fantasy and desire. Despite the obvious media hype and sensationalism, there are many reasons for us to be concerned about sex robots in society.

Considering the huge impact that sexbots may have on philosophy, psychology and human intimacy, it is hard to pinpoint the primary ethical dilemmas surrounding their production and adoption in society, or to say who stands to be affected most.

This article covers the main social and ethical deliberations that currently surround the use of sex robots and what we might expect in the next decade.

What companies are involved in the design and sale of sex robots?

One of the largest and most well-known retailers of sex dolls and sex robots is Realbotix in San Francisco. They designed and produced ‘RealDolls’ for years, and in 2016 they released their sex robot Harmony, which has a corresponding phone application that allows you to ‘customise’ your robotic companion. Spanish developer Sergi Santos also released Samantha, a life-sized gynoid sexbot that can talk and interact with users. When sex robots become more sophisticated and can gather intimate and personal data from users, we may have more reason to be concerned about who designs and manufactures them – and what they do with our sexual data.

What will sex robots look like?

The current state of sex dolls and robots has largely commodified the human body, with the female form appearing to be the more popular in the consumer sphere amongst most sex robot and doll retailers. That said, male sex robots appear to be increasing in popularity, and two female journalists have documented their experiences with male sex dolls. There are also instances of lookalike sex dolls that replicate celebrities. In response, sex robot manufacturers have had to state publicly that they will not replicate a person without the explicit permission of that person or their estate. The industry is proving hard to regulate, and the issue of copyright in sex robots may be a real ethical and social dilemma for policymakers in the future. However, there are also examples of sex robots and dolls that do not resemble the human form, such as anime- and alien-style dolls.

Will sex robots impact gender boundaries?

Sex robots will always be genderless artifice. However, allowing them into the human sexual arena may allow humans to broaden their sexual fantasies, and sex robots may even be able to represent both genders through customisation and add-on parts. As mentioned previously, the introduction of robots that do not resemble humans at all may also positively impact human sexual relations by broadening sexual and intimate boundaries.

Who will use sex robots?

Research results on whether people would use sex robots vary considerably, making it difficult to pinpoint exactly who would use a sex robot and why. Intensive research into the motivations for using sex robots has highlighted complexities behind the choice that mirror those of human sexual relationships. However, studies have been consistent about which gender is most likely to have sex with a robot: most suggest that males are more likely than females both to have sex with a robot and to purchase one.

Can sex robots be used to help those with physical or mental challenges access sexual pleasure?

Sex robots may allow people to practise or receive sexual acts that they are otherwise unable to obtain due to serious disabilities. The ethics of such a practice divide radical feminists, who deny that sex is a human right, from commentators who think it could be medically beneficial and therapeutic.

Will sex robots replace human lovers?

There has not been enough empirical research on the effects of sexual relations with robots, or on the extent to which they can reciprocate the qualities of a human relationship. However, it is reasonable to infer that some humans will form genuine sexual and/or intimate relationships with sex robots, which may diminish their desire for human relationships. The YouTube sensation ‘Davecat’ highlights how a man and his wife have been able to incorporate sex dolls comfortably into their married life. In a similar episode, Arran Lee Wright displayed his sexbot on British daytime television and was supportive of the use of sexbots between couples.

Will sex robots lead to social isolation and exclusion?

Many academics already warn us about the isolating impact technology has on our real-life relationships. Smartphones and social media have increased our awareness of online and virtual relationships, and some academics believe sex robots are a sad reflection of humanity. There is a risk that some people may become more isolated as they choose robotic lovers over humans, but there is not enough empirical research to deliver a conclusion at this stage.

Will sex robot prostitutes replace human sex workers?

Although there have been examples of robot and doll brothels and rent-a-doll escort agencies, it is difficult to tell whether sex robots will ever completely replace human sex workers. Some believe there are benefits to adopting robots as sex workers, and a 2012 paper suggested that by 2050 the Red Light District in Amsterdam would facilitate only sex robot prostitution. Escort agency and brothel owners have spoken about the reduction in management and time costs that using dolls or robots would deliver. However, sociological research from the sex industry suggests sex robots will have a tough time replacing all sex workers, particularly escorts, who need a wide range of cognitive skills to do their job and to successfully navigate a highly saturated and competitive industry.

How could sex robots be dangerous?

At this stage, there is not enough research about sex robots to jump to any conclusions. Nonetheless, most roboticists and ethicists consider how humans interact with and behave towards robots to be a key factor in assessing the dangers of sex robots. The question is less about the dangers they pose to humans and more about how we will treat them.

Is it wrong to hurt a Sex Robot?

Sex robots will allow humans to explore sexual boundaries and avenues that they may not previously have been able to practise with humans. However, this could also mean that people choose to use sex robots to enact violent acts, such as rape and assault. Although some would argue that robots cannot feel, and that violence towards them is therefore less morally corrupt than violence towards humans, the violent act may still have implications by reinforcing such behaviours in society. If we enact violence on a machine that looks human, we may still associate our human counterparts with such artifice. Will negative behaviour we practise on sex robots become more acceptable to reciprocate on humans? Will the fantasy of violence on robots make it commonplace in wider society? Roboticists and ethicists have raised these concerns about sex robots, but there is simply not enough empirical research yet. Kate Darling nevertheless believes there is already enough reason to consider extending legal protection to social robots (see footnote).



References

Jason Lee – Sex Robots and the Future of Desire
https://campaignagainstsexrobots.org/about/

Robots, men and sex tourism, Ian Yeoman and Michelle Mars, Futures, Volume 44, Issue 4, May 2012, Pages 365-371
https://www.sciencedirect.com/science/article/pii/S0016328711002850?via%3Dihub

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami
http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf

Attitudes on ‘Sex robots will liberate the next generation of women’
https://www.kialo.com/will-sex-robots-liberate-the-next-generation-of-women-4214?path=4214.0~4214.1

Footnotes

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami

– Next Stop, Biological AI

This truly startling talk by Professor Michael Levin, from the Allen Discovery Center at Tufts University, has implications for everything – not just regenerative medicine.

It is no exaggeration to describe the work done in Levin’s lab as Frankensteinian. This is not a criticism, just an inevitable observation.

Levin describes biochemical interventions that can affect electrical transmission at the inter-cellular level in a range of organisms. These interventions change the parameters for regeneration of body parts and reveal that a non-neural regenerative memory can exist throughout an organism. Since the evolution of the earliest ‘primitive’ life forms, anatomical decision-making has been taking place in every cell and at every level of body structure.

Levin gives a highly informed factual account of findings in bioelectrical computation. Although he only touches on the implications, these techniques potentially lead to a technology that can design new life-forms and biologically-based computation devices.

It seems incredible that research results like these are possible now. It may be years or decades before this work translates into medical interventions for humans, or is applied to creating biologically-based artificial intelligence, but the vision is clear.

To me, more frightening than the content of this talk is the Facebook logo hanging over Levin’s head (no doubt just promotion, but still!).

YouTube Video, What Bodies Think About: Bioelectric Computation Outside the Nervous System – NeurIPS 2018, Artificial Intelligence Channel, December 2018, 52:06 minutes

– Ethics of Eavesdropping

It has recently been reported (e.g. by Bloomberg News) that the likes of Amazon, Google and Apple employ people to listen to sample recordings made by the Amazon Echo, Google Home and Siri respectively. They do this to improve the speech recognition capabilities of these devices.

Ethical Issues

What are the ethical issues here? The problem is not that these companies use people to assist in training machine-learning algorithms in order to improve the capabilities of the devices. However, there are issues with the following:


  • While information like names and addresses may not accompany the speech clips being listened to, it seems quite possible that other identifiers could enable tracing back to this information. This seems unnecessary for the purpose of training the speech recognition algorithms.

  • It has been reported that employees performing this function in some companies have been required to sign agreements not to disclose what they are doing. To my mind this seems wrong: if the function is necessary and innocent, then companies should be open about it.

  • These companies do not always make it clear to purchasers of these devices that they may be recorded, and listened to, by people. This should be made clear to users in all advertising and documentation.

  • The most contentious ethical issue is what to do if an employee of one of these companies hears a crime being committed or planned. Another situation arises if an employee overhears something that is clearly private, like bank details, or information that, although legal, could be used for blackmail. In the first situation, are these companies to be regarded as having the same status as a priest in the confessional, or as any other person who might hear sensitive information? A possible approach is that whatever law applies to human individuals should also apply to the employees and to companies like Amazon, Google and Apple. In the UK, for example, some workers (such as social workers and teachers) who are likely to occasionally hear sensitive information relating to potential harm to minors are required to report it. In the second case, companies could be made legally liable for losses arising from the information being revealed or used against the user.

It seems likely that companies are reluctant to admit publicly that interactions with these devices may be listened to by people because it might affect sales. That does not seem a good enough reason.

– Making Algorithms Trustworthy

Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live.  It’s kind of important we can trust them!

David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival, he gave a talk on 21 March 2019 on the subject of making algorithms trustworthy.

I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.



Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.

But more importantly, they can affect life and death situations. The results of an algorithmic assessment of what disease you might have could be highly influential, affecting your treatment, your well-being and your future behaviour.

David is a fan of Onora O’Neill who suggests that organisations should not be aiming to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.


The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes randomised controlled trials, and long-term surveillance after a drug comes onto the market to spot rare side-effects.

The same sorts of procedures should be applied to algorithms. Currently, however, only the first phase of testing on new data is common. Sometimes algorithms are tested against the decisions that human experts make. Rarely are randomised controlled trials conducted, or the algorithm in use subjected to long-term monitoring.


As an aside, David entertained us by reporting on how the machine learning community has become obsessed with training algorithms to predict who did or did not survive the Titanic. Unsurprisingly, being a woman or a child helped a lot. David used this example to present a statistically derived decision tree. The point he was making was that the decision tree could (at least sometimes) be used as an explanation, whereas machine learning algorithms are generally black boxes (i.e. you cannot inspect the algorithm itself).
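As a minimal sketch of that idea, the following assumes the commonly used Kaggle-style Titanic column names (Survived, Sex, Age, Pclass) and a local copy of the data; the tree David presented will have differed in detail.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical local copy of the passenger list; column names assumed.
df = pd.read_csv("titanic.csv").dropna(subset=["Age"])

X = pd.DataFrame({
    "is_female": (df["Sex"] == "female").astype(int),
    "is_child":  (df["Age"] < 13).astype(int),
    "pclass":    df["Pclass"],
})
y = df["Survived"]

# A shallow tree stays small enough to read as an explanation.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

The printed tree is a short chain of if/else rules that a human can read directly, which is exactly what makes it usable as an explanation.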

Algorithms should be transparent. They should be able to explain their decisions as well as provide them. But transparency is not enough. O’Neill uses the term ‘intelligent openness’ to describe what is required: explanations need to be accessible, intelligible, usable and assessable.

Algorithms need to be both globally and locally explainable. Global explainability relates to the validity of the algorithm in general, while local explainability relates to how the algorithm arrived at a particular decision. One important way of being able to test an algorithm, even when it’s a black box, is to be able to play with inputting different parameters and seeing the result.
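As a sketch of that kind of ‘what if’ probing, here is a toy example using entirely synthetic data and made-up feature names: the model is opaque, but sweeping one input while holding the others fixed still yields an informative response curve.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hidden rule the model learns

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Hold two features at their means and sweep the third: the resulting
# response curve is a local, 'what if' explanation of an opaque model.
probe = np.tile(X.mean(axis=0), (11, 1))
probe[:, 0] = np.linspace(-2, 2, 11)
for value, p in zip(probe[:, 0], black_box.predict_proba(probe)[:, 1]):
    print(f"feature_0 = {value:+.1f}  ->  P(positive) = {p:.2f}")
```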

DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.

Explanation can be provided at many levels. At the top level this might be a simple verbal summary. At the next level it might be access to a range of graphical and numerical representations with the ability to run ‘what if’ queries. At a deeper level, text and tables might show the procedures that the algorithm used. Deeper still would be the mathematics underlying the algorithm. Lastly, the code that runs the algorithm should be inspectable. I would say that a good explanation depends on understanding what the user wants to know: it is not just a function of the decision-making process but also of the user’s actual and desired state of knowledge.


Without these types of explanation, algorithms such as COMPAS, used in the US to predict rates of recidivism, are difficult to trust.

It is easy to feel that an algorithm is unfair or cannot be trusted. If it cannot provide sufficiently good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions.

Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.

See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html, and read his new book, “The Art of Statistics: Learning from Data”, available shortly.


– Ethical themes in artificial intelligence and robotics

Useful categorisation of ethical themes

I was at a seminar the other day where I was fortunate enough to encounter Josephine Young from www.methods.co.uk (who mainly do public sector work in the UK).


Josie recently carried out an analysis of the main themes relating to ethics and AI that she found in a variety of sources on this topic. I have reported these themes below with a few comments. Many thanks, Josie, for this really useful and interesting work.



THEMES

(Numbers in brackets identify the source documents in which the issue was found.)

Data

Data treatment

Data treatment, focus on bias identification (10)
Interrogate the data (9)

Data collection / Use of personal data

Keep data secure (3)
Personal privacy – access, manage and control of personal data (1, 5, 6)
Use data and tools which have the minimum intrusion necessary – privacy (3)
Transparency of data/meta data collection and usage (8)
Self-disclosure and changing the algorithm’s assumptions (10)

Data models

Awareness of bias in data and models (8)
Create robust data science models – quality, representation of demographics (3)
Practice understanding of accuracy – transparency (8)

robotethics.co.uk comment on data: Trying to structure this a little, the themes might be categorised into [1] data ownership and collection (who can collect what data, when and for what purpose), [2] data storage and security (how the data is securely stored and controlled, without loss or un-permitted access), [3] data processing (what operations on the data are permitted, and what unbiased / reasonable inferences or models can be derived from it) and [4] data usage (what applications and processes can use the data or any inferences made from it).
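Purely as an illustration of this four-way split (my own sketch, not part of Josie’s analysis), the categorisation could be captured as a simple mapping from themes to concerns:

```python
from enum import Enum

class DataConcern(Enum):
    OWNERSHIP_AND_COLLECTION = 1   # who collects what, when and why
    STORAGE_AND_SECURITY = 2       # loss-free storage, access control
    PROCESSING = 3                 # permitted operations and inferences
    USAGE = 4                      # applications allowed to consume data

# Example assignment of a few of the themes listed above (my reading).
theme_to_concern = {
    "Keep data secure": DataConcern.STORAGE_AND_SECURITY,
    "Interrogate the data": DataConcern.PROCESSING,
    "Transparency of data/meta data collection and usage": DataConcern.OWNERSHIP_AND_COLLECTION,
    "Use data and tools which have the minimum intrusion necessary": DataConcern.USAGE,
}
for theme, concern in theme_to_concern.items():
    print(f"[{concern.value}] {theme}")
```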


Impact

Safety – verifiable (1)
Anticipate the impacts that might arise – economic, social, environmental etc. (4)
Evaluate impact of algorithms in decision-making and publish the results (2)
Algorithms are rated on a risk scale based on impact on individual (2)
Act using these Responsible Innovation processes to influence the direction and trajectory of research (4)

robotethics.co.uk comment on impact: Impact is about assessing the positive and negative effects of AI in the future, whether in the short, medium or long term. There is also the question of who is impacted, as it is quite possible that any particular AI product or service might affect one group of people positively and another negatively. Therefore a framework of effect x timescale x affected persons/group might make a start on providing some structure for assessing impact.
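A minimal sketch of what one entry in such an effect x timescale x affected-group framework might look like (the field names and example entries are my own assumptions):

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    effect: str          # what happens, e.g. "job displacement"
    direction: str       # "positive" or "negative"
    timescale: str       # "short", "medium" or "long" term
    affected_group: str  # who feels the effect

# Illustrative entries: the same product can cut both ways.
register = [
    ImpactEntry("faster medical triage", "positive", "short", "patients"),
    ImpactEntry("job displacement", "negative", "medium", "call-centre staff"),
]
for e in register:
    print(f"{e.effect}: {e.direction}, {e.timescale} term, affects {e.affected_group}")
```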


Purpose

Non-subversion – power conferred to AI should respect and improve social and civic processes (1)
Reflect on the purpose, motivations, implications and uncertainties this research may bring (4)
Ensure augmented – not just artificial – AI (8)
Purpose and ecology for the AI system (10)
Human control – choose how and whether to delegate decisions to AI (1)
Backwards compatibility and versioning (8)

robotethics.co.uk comment on purpose: Clearly the intent behind any AI development should be to confer a net benefit on the individual and/or society generally. The intent should never be to cause harm – even drone warfare is, in principle, justified in terms of conferring a clear net benefit. But this again raises the questions of net benefit to whom exactly, how large that benefit is when compared to any downside, and how certain it is that the benefit will materialise (without unanticipated harmful consequences). It is a matter of how strong and certain the argument is for justifying the intent behind building or deploying a particular AI product or service.


Transparency

Transparency for how AI systems make decisions (7)
Be as open and accountable as possible – provide explanations, recourse, accountability (3)
Failure transparency (1)
Responsibility and accountability for explaining how AI systems work (7)
Awareness and plan for audit trail (8)
Publish details describing the data used to train AI, with assumptions and risk assessment – including bias (2)
A list of inputs used by an algorithm to make a decision should be published (2)
Every algorithm should be accompanied with a description of function, objective and intended impact (2)
Every algorithm should have an identical sandbox version for auditing (2)

robotethics.co.uk comment on transparency: Transparency and accountability are closely related but can be separated out. Transparency is about determining how or why (e.g. how or why an AI made a certain decision) whereas accountability is about determining who is responsible. Having transparency may well help in establishing accountability but they are different. The problem for AI is that, by normal human standards, responsibility resides with the autonomous decision-making agent so long as they are regarded as having ‘capacity’ (e.g. they are not a child or insane) and even then, there can be mitigating circumstances (provocation, self-defence etc.). We are a long way from regarding AIs as having ‘capacity’ in the sense of being able to make their own ethical judgements, so in the short to medium term, the accountability must be traceable to a human, or other corporate, agent. The issue of accountability is further complicated in cases where people and AIs are cooperatively engaged in the same task, since there is human involvement in both the design of the AI and its operational use.


Civic rights

A named member of staff is formally responsible for the algorithm’s actions and decisions (2)
Judicial transparency – auditable by humans (1)
3rd parties that run algorithms on behalf of public sector should be subject to same principles as government algorithms (2)
Intelligibility and fairness (6)
Dedicated insurance scheme, to provide compensation if negative impact (2)
Citizens must be informed when their treatment has been decided/informed by an algorithm (2)
Liberty and privacy – use of personal data should not, or not be perceived to, curtail personal liberties (1)
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems (7)

robotethics.co.uk comment on civic rights: It seems clear that an AI should have no more license to contravene a person’s civil liberties or human rights than another person or corporate entity would. Definitions of human rights are not always clear-cut and differ from place to place. In human society this is dealt with by defaulting to local laws and cultural norms. It seems likely that a care robot made in Japan but operating in, say, the UK would have to operate according to the local laws, as would apply to any other person, product or service.


Highest purpose of AI

Shared prosperity – economic prosperity shared broadly to benefit all of humanity (1)
Flourishing alongside AI (6)
Prioritise the maximum benefit to humanity and the natural environment (7)
Shared benefit – technology should benefit and empower as many people as possible (1)
Purpose of AI should be human flourishing (1)
AI should be developed for the common good (6)
Beneficial intelligence (1)
Compatible with human dignity, rights, freedoms and cultural diversity (1, 5)
Align values and goals with human values (1)
AI will prevent harm (5)
Start with clear user need and public benefit (3)
Embody highest ideals of human rights (7)

robotethics.co.uk comment on the higher purpose of AI: This seems to address themes of human flourishing, equality, values and again touches on rights. It focuses mainly on, and details, the potential benefits and how these are distributed. These can be slotted into the frameworks already set out above.


Negative consequences / Crossing the ‘line’

An AI arms race should be avoided (1)
Identify and address cybersecurity risks (8)
Confronting the power to destroy (6)

robotethics.co.uk comment on the negative consequences of AI: The main threats are set out to be in relation to weapons, cyber-security and the existential risks posed by AIs that cease to be controlled by human agency. There are also many more subtle and shorter term risks such as bias in models and decision making addressed elsewhere. As with benefits, these can be slotted into the frameworks already set out above.


User

Consider the marginal user (9)
Collaborate with humans – rather than displace them (5)
Marginal user and participation (10)
Address job displacement implications (8)

robotethics.co.uk comment on user: This is mainly about the social implications of AI and the risks to individuals in relation to jobs and marginalisation. These implications seem likely to arise in the short to medium term and, given their potential scale, comparatively little attention is being paid to them by governments, especially in the UK, where Brexit dominates the political agenda. Little attempt seems to be made to consider the significance of AI in relation to the more habitual political concerns of migration and trade.


AI Industry

AI researchers <-> policymakers (1)
Establish industry partnerships (9)
Responsibility of designers and builders for moral implications (1, 5)
Culture of trust and transparency between researchers and developers (1)
Resist the ‘race’ – no more ‘move fast and break things’ mentality (1)

robotethics.co.uk comment on AI industry: The industry players that are building AI products and services have a pivotal role to play in their ethical development and deployment. In addition to design and manufacture, this affects education and training, regulation and monitoring of the development of AI systems, their marketing and constraints on their use. AI is likely to be used throughout the supply chains of other products and services, and AI components will become increasingly integrated with each other into ever more powerful systems. Given the pace of technological development, the need to create policy for, regulate, certify, train and licence the industry creating AI products and services must be addressed urgently.


Public dialogue

Engage – opening up such work to broader deliberation in an inclusive way (4)
Education and awareness of public (7)
Be alert to public perceptions (3)

robotethics.co.uk comment on public dialogue: At present, public debate on AI is often focussed on the activities of the big players and their high profile products such as Amazon Echo, Google Home, and Apple’s Siri. These give clues as to some of the ethical issues that require public attention, but there is a lot more AI development going on in the background. Given the potentially large and fast pace of societal impacts of AI, there needs to be greater public awareness and debate, not least so that society can be prepared and adjust other systems (such as taxation, benefits, universal income etc.) to absorb the impacts.


Interface design

Representation of AI system, user interface design (10)

robotethics.co.uk comment on interface design: AIs capable of machine learning develop knowledge and skills in ways similar to people, and, just like people, they often cannot explain how they do things or how they arrive at a judgement or decision. The ways in which people and AIs will interface and interact is as complex a topic as how people interact with each other. Can we ever know what another person is really thinking, or whether the image they present of themselves is accurate? If AIs become even half as complex as people – able to integrate knowledge and skills from many different sources, to express (if not actually feel) emotions, to reason with super-human logic, and to communicate instantaneously with other AIs – there is no knowing how people and AIs will ‘interface’. Just as computers have become both tools for people to use and constraints on human activity (‘I’m sorry, but the computer will not let me do that’), the relationships will be complex, especially as computer components become implanted in the human body rather than just carried on the wrist. It seems more likely that the relationship will be cooperative, rather than competitive or one in which AIs come to dominate.


The original source material from Josie (who gave me permission to reference it) can be found at:

https://docs.google.com/document/d/1LrBk-LOEu4LwnyUg8i5oN3ZKjl55aDpL6l1BxVcHIi8/edit


See other work by Josie Young: https://methods.co.uk/blog/different-ai-terms-actually-mean/