– Autonomy now

Making sense of a changing world

It’s difficult to make sense of a fast-changing world. But could ‘autonomy’ be at the centre of it? I’ll explain.

There are two main themes – people and technology. Ideas about autonomy are changing for both. For people, it is a matter of their relationship to employment, government and the many institutions of society. For technology, it is the introduction of autonomous intelligence in a wide range of systems including phone apps and all manner of automated decision making systems that affect every aspect of our lives. There is also the increasing inter-dependency between people and technology, both empowering and constraining. Questions of autonomy are at the heart of these changes.

There have been times in history when it has not occurred to people that they could be autonomous in the broad scope of their lives. They were born into a time and place where the control of their destiny was not their own concern. They were conditioned to know their place, accept it and stay in it. In first-world democracies, autonomy is, perhaps, a luxury of the here and now. It may not necessarily stay that way.


My particular interest is the way in which we are giving autonomy to the things that we create – computer algorithms, artificial intelligence systems and robots. But it’s broader than that. We all want the freedom to pursue our own goals, to self-determine. We are told repeatedly by an industry concerned with self-development and achieving success that we should ‘find our authentic self’ and pursue the values and goals that really matter to us.

However, we can only do this within an environment of constraints – physical constraints, resource constraints, psychological constraints and social constraints. It is the dynamic between the individual and their constraints that is in constant flux and that I am trying to examine.

What’s Trending in Autonomy?

There are two main trends – one towards decentralisation and one towards concentrations of wealth and power. This seems something of a contradiction. How can both be true and, if so, where is this going?

There is a long-term trend towards decentralisation. First we rejected the ancient gods as the controllers of nature. Much more recently we have started to question other sources of authority – governments, doctors, the church, the law, the mainstream media and many other institutions in society. As we as individuals have become more informed and empowered by technology, we have started to see the flaws in these ‘authorities’.

I believe, along with many other commentators, that we are heading towards a world where autonomy is not just highly valued but is also more possible than it ever has been. As society becomes better educated, as technology enables greater information sharing and flexibility, we can, and perhaps inevitably will, move towards a more decentralised society in which both human and artificial autonomous agents increasingly interact with each other. The interactions will become more frequent, more informed, more fine-grained and more local. The technological infrastructure for this continues to roll out at ever-increasing pace. Soon we will have 5G communications facilitating almost instantaneous communication between ever more intelligent and powerful devices – smartphones, autonomous vehicles, information servers, and a host of smart devices.

On the other hand, there is clear evidence of increased concentrations of wealth and power. Although estimates vary, it seems that a greater proportion of the world’s wealth is held by fewer and fewer people. Stories abound of fewer than eight people owning more than half the world’s wealth. Economists like Thomas Piketty have documented in detail the evidence for such a trend.

There is clearly a tension between these trends. As power and wealth become more concentrated, manifesting in the form of surveillance capitalism (not ignoring surveillance by the state) and fake news, there is a fight back by individuals and other institutions.

Individuals increasingly recognise the intrusions on their privacy and this is picked up (often belatedly) in legislation like GDPR and other moves to regulate. The checks and balances do work to modulate the dynamics of the struggle, but when they don’t work, the accumulated frustration at the loss of human dignity can become political and violent. Let’s take a closer look at autonomy.

Why do we need Autonomy?

We each have a biological imperative to survive. While we can count on others to some extent, ‘the buck stops’ with each of us as individuals. The more robust and resilient solutions are self-sufficiency and self-determination. It’s not a fail-safe but it takes out the risk that others might not be there for us all the time. It also appears to be a route to greater wellbeing. Learning, developing competence and mastery, being able to predict and hence increase the possibility that we can control, being less subject to constraints on our actions – all contribute to satisfaction, ultimately in the service of survival.

In the hierarchy of needs, having enough food, shelter, sleep and security releases some of your own resources. It provides the freedom to climb. Somewhere near the top of the hierarchy is what Maslow called self-actualisation – the discovery and expression of your authentic self. But unless you are exceptionally lucky, and find that your circumstances align perfectly with your authentic self, a prerequisite is freedom from the constraints that prevent you from getting there.

Interactions between people and machines

This is all the human side of autonomy – the bit that applies to us all. This is a world in which both people and artificial agents – computer algorithms, smart devices, robots and so on – interact with each other. Interactions between people and people, machines and machines, and people and machines are accelerating in both speed and frequency so that each autonomous agent can achieve its own rapidly changing priorities and goals. There is nothing static or certain in this world. It is changing, ambiguous, and unpredictable.

Different autonomous agents have different goals and different value systems that drive them. For people these are different cultures, social norms and roles. For machines they relate to their different functions and circumstances in which they operate. For interactions to work smoothly there needs to be some stability in the protocols that regulate them. Autonomy may be a way into defining accountability and responsibility. It may lead us towards mechanisms for the justification and explanation of action. Neither machines nor people are very good at this, but autonomy may provide the key that unlocks our understanding of effective communication and protocols.

Still, that’s for later. Right now, this article is just focused on the concept of autonomy.
I hope you are convinced that this is an important and interesting subject. It is at the foundation of our relationships with each other and between people and the increasingly autonomous and intelligent agents we are creating.

Questions that need to be addressed


  • What do we mean by autonomy?
  • How do agents (people and machines) differ in the amount of autonomy they have?
  • Can we measure autonomy?
  • What examples of people, societies and artefacts can we think of that might help us understand what is and what is not autonomous?
  • What do we mean by autonomy when we talk about artificial autonomous intelligence systems?
  • Are the computer algorithms and robotic systems that we have today truly autonomous?
  • What would it mean to build an artificial intelligence that was truly autonomous?
  • What is the relationship between autonomy and morality?
  • Can we be truly autonomous if we are constrained by ethical principles and social norms?
  • If we want our intelligent artefacts to behave ethically, then how would we approach the design of those systems?

That’s quite a chunk of questions to get through. But they are all on the critical path to understanding how our own human autonomy and the autonomy that we build into artefacts, can relate to and engage with each other. They take us to a point where we can better understand the trade-offs every intelligent agent, be it human or artificial, has to make between the freedom to pursue its own goals and the constraints of living in a society of other intelligent agents.

It also reveals how, in people, the constraints of society are internalised. By adulthood, they have become part of our internal control mechanisms. These internal controls have no absolute morality but reflect the circumstances and society in which we grow up. As our artefacts become increasingly intelligent, we may need to develop similar mechanisms for their socialisation.

Definitions of Autonomy

The following definitions are taken from the glossary of the IEEE publication called ‘Ethically Aligned Design’ (version 1). The glossary has several definitions from different perspectives:

Ordinary language: The ability of a person or artefact to govern itself including formation of intentions, goals, motivations, plans of action, and execution of those plans, with or without the assistance of other persons or systems.

Engineering: “Where an agent acts autonomously, it is not possible to hold any one else responsible for its actions. In so far as the agent’s actions were its own and stemmed from its own ends, others cannot be held responsible for them” (Sparrow 2007, 63).

Government: “we define local [government] autonomy conceptually as a system of local government in which local government units have an important role to play in the economy and the intergovernmental system, have discretion in determining what they will do without undue constraint from higher levels of government, and have the means or capacity to do so” (Wolman et al 2008, 4-5).

Ethics and Philosophy: “Put most simply, to be autonomous is to be one’s own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self” (Christman 2015).

Medical: “Two conditions are ordinarily required before a decision can be regarded as autonomous. The individual has to have the relevant internal capacities for self-government and has to be free from external constraints. In a medical context a decision is ordinarily regarded as autonomous where the individual has the capacity to make the relevant decision, has sufficient information to make the decision and does so voluntarily” (British Medical Association 2016).

More on autonomy later. Sign up to the blog if you want to be notified.

Meanwhile a couple of videos

The first has an interesting take on autonomy. Autonomy is not a matter of what you want, but of what you want to want. The more reflective you are about what you want, the more autonomous you are.

YouTube Video, What is Autonomy? (Personal and Political), Carneades.org, December 2018, 6:50 minutes

https://www.youtube.com/watch?v=z0uylpfirfM

The second is from a relatively new YouTube channel called ‘Rebel Wisdom’. It starts with the breakdown of trust in traditional media and moves on to themes of decentralisation.

YouTube Video, The War on Sensemaking, Daniel Schmachtenberger, Rebel Wisdom, August 2019, 1:48:49 hours

https://www.youtube.com/watch?v=7LqaotiGWjQ&t=17s

– Sex Robots

A Brief Summary by Eleanor Hancock


Sex robots have been making the headlines recently. We have been told they have the power to endanger humans or fulfil our every sexual fantasy and desire. Despite the obvious media hype and sensationalism, there are many reasons for us to be concerned about sex robots in society.

Considering the huge impact that sexbots may have in the realms of philosophy, psychology and human intimacy, it is hard to pinpoint the primary ethical dilemmas surrounding the production and adoption of sex robots in society, or to say who stands to be affected the most.

This article covers the main social and ethical deliberations that currently surround the use of sex robots and what we might expect in the next decade.

What companies are involved in the design and sale of sex robots?

One of the largest and most well-known retailers of sex dolls and sex robots is Realbotix in San Francisco. They designed and produced ‘Realdolls’ for years, and in 2016 they released their sex robot Harmony, which also has a corresponding phone application that allows you to ‘customise’ your robotic companion. Spanish developer Sergi Santos also released Samantha the sexbot, a life-sized gynoid that can talk and interact with users. When sex robots become more sophisticated and can gather intimate and personal user data from us, we may have more reason to be concerned about who is designing and manufacturing sex robots – and what they are doing with our sexual data.

What will sex robots look like?

The current state of sex dolls and robots has largely commodified the human body, with the female form appearing to be more popular amongst most sex robot and doll retailers. That said, male sex robots appear to be increasing in popularity, and two female journalists have documented their experiences with male sex dolls. There are also instances of lookalike sex dolls that replicate and mimic celebrities. In response, sex robot manufacturers have had to make online statements about their refusal to replicate people without the explicit permission of that person or their estate. The industry is proving hard to regulate, and the issue of copyright in sex robots may be a real ethical and social dilemma for policy makers in the future. However, there have also been examples of sex robots and dolls that do not resemble the human form, such as anime and alien-style dolls.

Will sex robots impact gender boundaries?

Sex robots will always be genderless artifice. However, allowing sex robots to enter the human sexual arena may allow humans to broaden their sexual fantasies. Sex robots may even be able to replicate both genders through customisation and add-on parts. As mentioned previously, the introduction of genderless artifice that does not resemble humans may positively impact human sexual relations by broadening sexual and intimate boundaries.

Who will use sex robots?

Research results vary on whether people would use sex robots, making it difficult to pinpoint who exactly would use a sex robot and why. Intensive research on the motivations for using sex robots has highlighted complexities that mirror our own human sexual relationships. However, most studies have been consistent on which gender is most likely to have sex with a robot, suggesting that males are more likely than females both to have sex with a robot and to purchase one.

Can sex robots be used to help those with physical or mental challenges access sexual pleasure?

Sex robots may allow people to practice or receive sexual acts that they are otherwise unable to obtain due to serious disabilities. The ethics of such a practice divide radical feminists, who deny that sex is a human right, from those who think it could be medically beneficial and therapeutic.

Will sex robots replace human lovers?

There has not been enough empirical research on the effects of sexual relations with robots and the extent to which they can reciprocate the qualities of a human relationship. However, it is inferable that some humans will form genuine sexual and/or intimate relationships with sex robots, which may diminish their desire for human relationships. The YouTube sensation ‘Davecat’ shows how a man and his wife have comfortably incorporated sex dolls into their married life. In a similar episode, Arran Lee Wright displayed his sexbot on British daytime television and was supportive of the use of sexbots between couples.

Will sex robots lead to social isolation and exclusion?

There are many academics who already warn us against the isolating impact technology has on our real-life relationships. Smartphones and social media have increased our awareness of online and virtual relationships, and some academics believe sex robots signal a sad reflection of humanity. There is a risk that some people may become more isolated as they choose robotic lovers over humans, but there is not enough empirical research to deliver a conclusion at this stage.

Will sex robot prostitutes replace human sex workers?

Although there have been examples of robot and doll brothels and rent-a-doll escort agencies, it is difficult to tell whether sex robots will ever be able to replace human sex workers completely. Some believe there are benefits to adopting robots as sex workers, and a 2012 paper suggested that by 2050 the Red Light District in Amsterdam would only facilitate sex robot prostitution. Escort agency and brothel owners have spoken about the reduction in management and time costs that using dolls or robots would deliver. However, sociological research from the sex industry suggests sex robots will have a tough time replacing all sex workers, and specifically escorts, who need a wide range of cognitive skills to do their job and successfully navigate a highly saturated and competitive industry.

How could sex robots be dangerous?

At this stage there is not enough research about sex robots to draw any conclusions. Nonetheless, most roboticists and ethicists consider how humans interact with and behave towards robots to be a key factor in assessing the dangers of sex robots. It is more a question of how we will treat sex robots than of the dangers they pose to humans.

Is it wrong to hurt a Sex Robot?

Sex robots will allow humans to explore sexual boundaries and avenues that they may not have previously been able to practice with humans. However, this could also mean that people choose to use sex robots as a way to enact violent acts, such as rape and assault. Although some would argue that robots cannot feel, so violence towards them is less morally corrupt than violence towards humans, the violent act may still have implications through the reinforcement of such behaviours in society. If we enact violence on a machine that looks human, we may still associate our human counterparts with such artifice. Will negative behaviour we practice on sex robots become more acceptable to reciprocate on humans? Will the fantasy of violence on robots make it commonplace in wider society? Roboticists and ethicists have been concerned about these issues when considering sex robots, but there is simply not enough empirical research yet. Kate Darling, however, believes there is already enough reason to consider extending legal protection to social robots (see footnote).



References

Jason Lee – Sex Robots and the Future of Desire
https://campaignagainstsexrobots.org/about/

Robots, men and sex tourism, Ian Yeoman and Michelle Mars, Futures, Volume 44, Issue 4, May 2012, Pages 365-371
https://www.sciencedirect.com/science/article/pii/S0016328711002850?via%3Dihub

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami
http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf

Attitudes on ‘Sex robots will liberate the next generation of women’
https://www.kialo.com/will-sex-robots-liberate-the-next-generation-of-women-4214?path=4214.0~4214.1

Footnotes

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami

– Next Stop, Biological AI

This truly startling talk by Professor Michael Levin, from the Allen Discovery Center at Tufts University, has implications for everything – not just regenerative medicine.

It is no exaggeration to describe the work done in Levin’s lab as Frankensteinian. This is not a criticism, just an inevitable observation.

Levin describes biochemical interventions that can affect electrical transmission at the inter-cellular level in a range of organisms. These change the parameters for regeneration of body parts and reveal that a non-neural regenerative memory can exist throughout an organism. From the start of the evolution of ‘primitive’ life forms, anatomical decision-making has been taking place in every cell, and at every level of body structure.

Levin gives a highly informed factual account of findings in bioelectrical computation. Although he only touches on the implications, these techniques potentially lead to a technology that can design new life-forms and biologically-based computation devices.

It seems incredible that research results like these are possible now. It may be years or decades before they translate into medical interventions for humans, or are applied to creating biologically-based artificial intelligence, but the vision is clear.

To me, more frightening than the content of this talk is the Facebook logo hanging over Levin’s head (no doubt just promotion, but still!).

YouTube Video, What Bodies Think About: Bioelectric Computation Outside the Nervous System – NeurIPS 2018, Artificial Intelligence Channel, December 2018, 52:06 minutes

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium have developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the users, control.
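To make the idea concrete, here is a minimal sketch of what user-controlled data licensing might look like in code. This is purely illustrative – it is not the actual HATLAB API, and all the names (`PersonalDataStore`, `Licence`, `grant`, `read`) are hypothetical:

```python
# Illustrative sketch only: a user stores data, grants narrow licences,
# and third parties can read only what a licence explicitly covers.
from dataclasses import dataclass, field


@dataclass
class Licence:
    grantee: str    # who may read the data
    fields: set     # which parts of the record are licensed
    purpose: str    # e.g. "registration", "identity-verification"


@dataclass
class PersonalDataStore:
    data: dict
    licences: list = field(default_factory=list)

    def grant(self, grantee, fields, purpose):
        """The user issues a licence covering selected fields for one purpose."""
        self.licences.append(Licence(grantee, set(fields), purpose))

    def read(self, grantee, requested_fields, purpose):
        """A third party's request succeeds only if some licence covers it."""
        for lic in self.licences:
            if (lic.grantee == grantee and lic.purpose == purpose
                    and set(requested_fields) <= lic.fields):
                return {f: self.data[f] for f in requested_fields}
        raise PermissionError("no licence covers this request")


store = PersonalDataStore({"email": "a@example.org", "dob": "1980-01-01"})
store.grant("news-site", ["email"], "registration")
store.read("news-site", ["email"], "registration")    # allowed
# store.read("news-site", ["dob"], "registration")    # raises PermissionError
```

The point of the design is that the default is denial: the organisation sees nothing unless the user has issued a licence naming it, the fields, and the purpose – the reverse of the usual blanket terms-of-service consent.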

The Digital Person
This turns the tables on organisations like Facebook and Google, who have given users little choice about the rights over their own data, or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that aligns closely with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job of teasing out the various issues and finding ways of putting users back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately, I don’t have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers’ slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– Ethics of Eavesdropping

It has recently been reported (see, for example, Bloomberg News) that the likes of Amazon, Google and Apple employ people to listen to sample recordings made by the Amazon Echo, Google Home and Siri respectively. They do this to improve the speech recognition capabilities of these devices.

Ethical Issues

What are the ethical issues here? The problem is not that these companies use people to assist in the training of machine-learning algorithms in order to improve the capabilities of the devices. However, there are issues with the following:


  • While information like names and addresses may not accompany the speech clips being listened to, it seems quite possible that other identifiers would enable tracing back to this information. This seems unnecessary for the purpose of training the speech recognition algorithms.

  • It has been reported that employees performing this function in some companies have been required to sign agreements that they will not disclose what they are doing. To my mind this seems wrong. If the function is necessary and innocent, then companies should be open about it.

  • These companies do not always make it clear to purchasers of devices that they may be recorded, and listened to, by people. This should be clear to users in all advertising and documentation.

  • The most contentious ethical issue is what to do if any employee of one of these companies hears a crime being committed or planned. Another situation arises if an employee overhears something that is clearly private, like bank details, or information that, although legal, could be used to blackmail. In the first situation, are these companies to be regarded as having the same status as a priest in a confessional or any other person that might hear sensitive information? A possible approach is that whatever law applies to human individuals, should also apply to the employees and the companies like Amazon, Google and Apple. So in the UK for example, some workers (such as social workers and teachers) who are likely to occasionally hear sensitive information relating to potential harm to minors, are required to report it. In the second case, companies could be legally liable for losses arising from the information being revealed or used against the user.

It seems likely that companies are reluctant to admit publicly that interactions with these devices may be listened to by people because it might affect sales. That does not seem a good enough reason.