Tag Archives: Trust

– Sex Robots

A Brief Summary by Eleanor Hancock


Sex robots have been making the headlines recently. We have been told they have the power to endanger humans or fulfil our every sexual fantasy and desire. Despite the obvious media hype and sensationalism, there are many reasons for us to be concerned about sex robots in society.

Considering the huge impact that sexbots may have in the realms of philosophy, psychology and human intimacy, it is hard to pinpoint the primary ethical dilemmas surrounding the production and adoption of sex robots in society, or to say who stands to be affected most.

This article covers the main social and ethical deliberations that currently surround the use of sex robots and what we might expect in the next decade.

What companies are involved in the design and sale of sex robots?

One of the largest and most well-known retailers of sex dolls and sex robots is Realbotix in San Francisco. They designed and produced ‘Realdolls’ for years, and in 2016 they released their sex robot Harmony, which has a corresponding phone application that allows you to ‘customise’ your robotic companion. Spanish developer Sergi Santos also released Samantha, a life-sized gynoid sexbot that can talk and interact with users. When sex robots become more sophisticated and can gather intimate and personal data from us, we may have more reason to be concerned about who is designing and manufacturing them – and what they are doing with our sexual data.

What will sex robots look like?

The current market in sex dolls and robots largely commodifies the human body, with the female form proving most popular amongst retailers and consumers. That said, male sex robots appear to be increasing in popularity, and two female journalists have documented their experiences with male sex dolls. There are also instances of lookalike sex dolls that replicate and mimic celebrities, to the point where manufacturers have had to make online statements about their refusal to replicate a person without the explicit permission of that person or their estate. The industry is proving hard to regulate, and the issue of copyright in sex robots may become a real ethical and social dilemma for policy makers. However, there are also examples of sex robots and dolls that do not resemble the human form, such as anime- and alien-style dolls.

Will sex robots impact gender boundaries?

Sex robots are, strictly, genderless artifice. However, allowing sex robots into the human sexual arena may allow humans to broaden their sexual fantasies, and sex robots may even be able to take on either gender through customisation and add-on parts. As mentioned previously, the introduction of genderless artifice that does not resemble the human form may positively impact human sexual relations by broadening sexual and intimate boundaries.

Who will use sex robots?

Research results on whether people would use sex robots vary considerably, making it difficult to pinpoint exactly who would use a sex robot and why. Detailed research into the motivations for using sex robots has highlighted complexities that mirror those of our own human sexual relationships. Most studies are consistent on one point, however: males are more likely than females to say they would have sex with a robot or purchase one.

Can sex robots be used to help those with physical or mental challenges access sexual pleasure?

Sex robots may allow people to practise or receive sexual acts that they are otherwise unable to obtain because of serious disabilities. The ethics of such a practice divide radical feminists, who deny that sex is a human right, from critics who think it could be medically beneficial and therapeutic.

Will sex robots replace human lovers?

There has not been enough empirical research on the effects of sexual relations with robots, or on the extent to which they can reciprocate the qualities of a human relationship. It is reasonable to infer, however, that some humans will form genuine sexual and/or intimate relationships with sex robots, which may dampen their desire to pursue human relationships. The YouTube sensation ‘Davecat’ shows how a man and his wife have comfortably incorporated sex dolls into their married life. In a similar episode, Arran Lee Wright displayed his sexbot on British daytime television and was supportive of the use of sexbots between couples.

Will sex robots lead to social isolation and exclusion?

Many academics already warn us about the isolating impact of technology on our real-life relationships. Smartphones and social media have increased our awareness of online and virtual relationships, and some academics believe sex robots are a sad reflection of humanity. There is a risk that some people may become more isolated as they choose robotic lovers over humans, but there is not enough empirical research to draw a conclusion at this stage.

Will sex robot prostitutes replace human sex workers?

Although there have been examples of robot and doll brothels and rent-a-doll escort agencies, it is difficult to tell whether sex robots will ever replace human sex workers completely. Some believe there are benefits to adopting robots as sex workers, and a 2012 paper suggested that by 2050 the Red Light District in Amsterdam would facilitate only sex robot prostitution. Escort agency and brothel owners have spoken of the reduction in management and time costs that using dolls or robots would deliver. However, sociological research from the sex industry suggests sex robots will have a tough time replacing all sex workers – especially escorts, who need a wide range of cognitive skills to do their job and to navigate a highly saturated and competitive industry.

How could sex robots be dangerous?

At this stage there is not enough research about sex robots to jump to any conclusions. Nonetheless, most roboticists and ethicists consider how humans interact with and behave towards robots to be a key factor in assessing the dangers of sex robots: the question is less about the dangers robots pose to humans than about how we will treat them.

Is it wrong to hurt a Sex Robot?

Sex robots will allow humans to explore sexual boundaries and avenues that they may not previously have been able to practise with humans. However, this also means that some people may use sex robots to enact violent acts, such as rape and assault. Although some would argue that robots cannot feel, so violence towards them is less morally corrupt than violence towards humans, the violent act may still have implications by reinforcing such behaviours in society. If we enact violence on a machine that looks human, we may still associate our human counterparts with such artifice. Will negative behaviour we practise on sex robots become more acceptable to reciprocate on humans? Will the fantasy of violence on robots make it commonplace in wider society? Roboticists and ethicists are concerned about these issues, but there is simply not enough empirical research yet. Kate Darling, however, believes there is already enough reason to consider extending legal protection towards social robots (see footnote).



References

Jason Lee – Sex Robots and the Future of Desire
https://campaignagainstsexrobots.org/about/

Robots, men and sex tourism, Ian Yeoman and Michelle Mars, Futures, Volume 44, Issue 4, May 2012, Pages 365-371
https://www.sciencedirect.com/science/article/pii/S0016328711002850?via%3Dihub

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami
http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf

Attitudes on ‘Sex robots will liberate the next generation of women’
https://www.kialo.com/will-sex-robots-liberate-the-next-generation-of-women-4214?path=4214.0~4214.1

Footnotes

Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We Robot Conference 2012, University of Miami

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium have developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the users, control.
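To make the idea concrete, here is a minimal sketch, in Python, of what a user-controlled data licence might look like. Everything here – the class, field names and checks – is a hypothetical illustration of the concept, not the actual HAT API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataLicence:
    # Hypothetical structure for illustration only; not the real HAT API.
    licensee: str            # who may use the data
    fields_granted: list     # which parts of the personal data store are shared
    purpose: str             # what the licensee may use them for
    expires: datetime        # licences are time-limited
    revoked: bool = False    # the user can withdraw consent at any time

    def permits(self, requester: str, field_name: str) -> bool:
        """Check whether a request falls within the licence terms."""
        return (not self.revoked
                and requester == self.licensee
                and field_name in self.fields_granted
                and datetime.utcnow() < self.expires)

# The user, not the platform, creates and controls the licence.
licence = DataLicence(
    licensee="example-website.com",
    fields_granted=["email", "display_name"],
    purpose="website registration",
    expires=datetime.utcnow() + timedelta(days=365),
)
print(licence.permits("example-website.com", "email"))  # True
print(licence.permits("advertiser.com", "email"))       # False
```

The point of the sketch is simply that the permission check lives with the user’s record, not inside the consuming organisation’s systems.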

The Digital Person
This turns the tables on organisations like Facebook and Google, which have given users little choice about the rights over their own data or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that aligns with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job of teasing out the various issues and finding ways of putting the user back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately I don’t have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers’ slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– Making Algorithms Trustworthy

Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live.  It’s kind of important we can trust them!

David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival he was talking (21st of March 2019) on the subject of making algorithms trustworthy.

I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.



Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.

But more importantly, they can affect life and death situations. The result of an algorithmic assessment of what disease you might have can be highly influential, affecting your treatment, your well-being and your future behaviour.

David is a fan of Onora O’Neill who suggests that organisations should not be aiming to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.


The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes randomised controlled trials, and long-term surveillance after a drug comes onto the market to spot rare side-effects.

The same sorts of procedures should be applied to algorithms. Currently, however, only the first phase – testing on new data – is common. Sometimes algorithms are tested against the decisions that human experts make. Rarely are randomised controlled trials conducted, or the algorithm in use subjected to long-term monitoring.


As an aside, David entertained us by reporting on how the machine learning community have become obsessed with training algorithms to assess the characteristics of who did or did not survive the Titanic.   Unsurprisingly, being a woman or a child helped a lot. David used this example to present a statistically derived decision tree.  The point he was making was that the decision tree could (at least sometimes) be used as an explanation, whereas machine learning algorithms are generally black boxes (i.e. you can't inspect the algorithm itself).  
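For the curious, here is a minimal sketch of that kind of analysis in Python. It uses the publicly available Titanic passenger data (here via seaborn’s bundled copy, fetched on first use); the feature choice and tree depth are my own illustrative assumptions, not David’s actual model.

```python
# Fit a small, inspectable decision tree on the Titanic data and
# print it as human-readable rules - the kind of model that can
# double as an explanation, unlike a black box.
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier, export_text

titanic = sns.load_dataset("titanic").dropna(subset=["age"])
X = titanic[["age", "fare"]].assign(
    is_female=(titanic["sex"] == "female").astype(int),
    pclass=titanic["pclass"],
)
y = titanic["survived"]

# Keep the tree shallow so it stays readable as an explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

The printed rules (splits on sex, class, age and fare) can be read directly, which is exactly what makes a shallow tree usable as an explanation.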

Algorithms should be transparent. They should be able to explain their decisions as well as provide them. But transparency is not enough. O’Neill uses the term ‘intelligent openness’ to describe what is required: explanations need to be accessible, intelligible, usable, and assessable.

Algorithms need to be both globally and locally explainable. Global explainability relates to the validity of the algorithm in general, while local explainability relates to how the algorithm arrived at a particular decision. One important way of being able to test an algorithm, even when it’s a black box, is to be able to play with inputting different parameters and seeing the result.
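As a concrete illustration of that last point, here is a small Python sketch that probes a black box by sweeping one input at a time and watching the decision change. The ‘loan’ model and its weights are entirely made up for the example; the `probe` helper would work with any opaque scoring function.

```python
# Local explanation by probing: re-score the same case while one
# feature is swept over a range, and watch where the decision flips.
import numpy as np

def probe(predict, baseline, feature_idx, values):
    """Return (value, decision) pairs for one feature swept over a range."""
    results = []
    for v in values:
        x = baseline.copy()
        x[feature_idx] = v
        results.append((v, predict(x)))
    return results

# Toy black box: approves a 'loan' based on a hidden weighted score.
# The weights and threshold are invented purely for illustration.
def toy_predict(x):
    income, debt, age = x
    return 1 if 0.5 * income - 0.8 * debt + 0.1 * age > 20 else 0

applicant = np.array([50.0, 20.0, 35.0])  # income (k), debt (k), age (years)
for debt, decision in probe(toy_predict, applicant, 1, range(0, 45, 5)):
    print(f"debt={debt:2d}k -> {'approve' if decision else 'decline'}")
```

Running it shows the approval flipping to a decline as debt rises, which tells this applicant what mattered locally, even though the scoring function itself stays hidden.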

DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.

Explanation can be provided at many levels. At the top level this might be a simple verbal summary. At the next level it might be access to a range of graphical and numerical representations with the ability to run ‘what if’ queries. At a deeper level, text and tables might show the procedures that the algorithm used. Deeper still would be the mathematics underlying the algorithm. Lastly, the code that runs the algorithm should be inspectable. I would say that a good explanation depends on understanding what the user wants to know – in other words, it is not just a function of the decision-making process but also a function of the user’s actual and desired state of knowledge.


Without these types of explanation, algorithms such as COMPAS, used in the US to predict rates of recidivism, are difficult to trust.

It is easy to feel that an algorithm is unfair or can’t be trusted. If it cannot provide sufficiently good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions.

Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.

See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html, and read his new book “The Art of Statistics: Learning from Data”, available shortly.

David Spiegelhalter

– Can we trust blockchain in an era of post truth?

Post Truth and Trust

The term ‘post truth’ implies that there was once a time when the ‘truth’ was apparent or easy to establish. We can question whether such a time ever existed; indeed the ‘truth’, even in science, is constantly changing as new discoveries are made. ‘Truth’, ‘Reality’ and ‘History’, it seems, are constantly being re-constructed to meet the needs of the moment. Philosophers have written extensively about the nature of truth: there is an entire branch of philosophy, ‘epistemology’, devoted to it. Indeed my own series of blogs starts with a posting called ‘It’s Like This’ that considers the foundation of our beliefs.

Nevertheless there is something behind the notion of ‘post truth’. It arises out of the large-scale manufacture and distribution of false news and information made possible by the internet and facilitated by the widespread use of social media. This combines with a disillusionment with almost all types of authority: politicians, the media, doctors, pharmaceutical companies, lawyers and the operation of law generally, global corporations, and almost any other centralised institution you care to think of. In a volatile, uncertain, changing and ambiguous world, who or what is left that we can trust?

YouTube Video, Astroturf and manipulation of media messages | Sharyl Attkisson | TEDxUniversityofNevada, TEDx Talks, February 2015, 10:26 minutes

All this may have contributed to the populism that led to Brexit and Trump, and can be said to threaten our systems of democracy. However, to paraphrase Churchill’s famous remark, ‘democracy is the worst form of Government, except for all the others’. But does the new generation of distributed and decentralising technologies provide a new model in which any citizen can transact with any other citizen, on any terms of their choosing, bypassing all systems of state regulation, whether democratic or not? Will democracy become redundant once power is fully devolved to the individual and individuals become fully accountable for their every action?

Trust is the crucial notion that underlies belief. We believe who we trust and we put our trust in the things we believe in. However, in a world where we experience so many differing and conflicting viewpoints, and we no longer unquestioningly accept any one authority, it becomes increasingly difficult to know what to trust and what to believe.

To trust something is to put your faith in it without necessarily having good evidence that it is worthy of trust. If I could be sure that you could deliver on a promise then I would not need to trust you. In religion, you put your trust in God on faith alone. You forsake the need for evidence altogether, or at least, your appeal is not to the sort of evidence that would stand up to scientific scrutiny or in a court of law.

Blockchain to the rescue

Blockchain is a decentralised technology for recording and validating transactions. It relies on computer networks to widely duplicate and cross-validate records, which are visible to everybody, providing total transparency. Like the internet it is highly distributed and resilient. It is a disruptive technology with the potential to decentralise almost every transactional aspect of everyday life, replacing third parties and central authorities.

YouTube Video, Block chain technology, GO-Science, January 2016, 5:14 minutes

Blockchain is often described as a ‘technology of trust’, but its relationship to trust is more subtle than it first appears. Whilst blockchain promises to solve the problem of trust, in a twist of irony it does so by creating a kind of guarantee: you no longer have to worry about trusting the other party to a transaction, because what you can trust is the blockchain record of what you agreed. You can trust this record because, once you understand how it works, it becomes apparent that the record is secure and cannot be changed, corrupted, denied or misrepresented.

YouTube Video, Blockchain 101 – A Visual Demo, Anders Brownworth, November 2016, 17:49 minutes
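For readers who want to see the mechanics, here is a minimal Python sketch of the hash-chaining idea that makes the record tamper-evident – the same idea the video above demonstrates. It is a toy: real blockchains add peer-to-peer replication, consensus and mining on top of this.

```python
# Each block stores the hash of its predecessor, so editing any
# earlier record invalidates every hash after it.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64   # "0" * 64 is the genesis placeholder
    for record in records:
        block = {"record": record, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["A pays B 5", "B pays C 3"])
print(verify(chain))                 # True
chain[0]["record"] = "A pays B 500"  # tamper with history...
print(verify(chain))                 # False - the chain no longer validates
```

Once copies of the chain are widely replicated and cross-checked, quietly rewriting history becomes practically impossible, which is the sense in which the record can be ‘trusted’.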

It has been argued that blockchain is the next revolution in the internet, and indeed is what the internet should have been based on all along. If, for example, we could trace the provenance of every posting on Facebook, then in principle we would be able to determine its true source. There would no longer be doubt about whether or not the Russians hacked into the Democratic Party computer systems, because all access would be held in a publicly available, widely distributed, indelible record.

However, the words ‘in principle’ are crucial, and gloss over the reality that blockchain is just one of many building-blocks towards the guarantee of trustworthiness. What if the Russians paid a third party in untraceable cash to hack into records or to create false news stories? What if A and B carry out a transaction but, unknown to A, B has stolen C’s identity? What if some transactions are off the blockchain record (e.g. the subsequent sale of an asset) – how do they get reconciled with what is on the record? What if somebody one day creates a method of bringing all computers to a halt or erasing all electronic records? What if somebody creates a method by which the provenance captured in a blockchain record is so convoluted, complex and circular that it is impossible to resolve however much computing power is thrown at it?

I am not saying that Blockchain is no good. It seems to be an essential underlying component in the complicated world of trusting relationships. It can form the basis on which almost every aspect of life from communication, to finance, to law and to production can be distributed, potentially creating a fairer and more equitable world.

YouTube Video, The four pillars of a decentralized society | Johann Gevers | TEDxZug, TEDx Talks, July 2014, 16:12 minutes

Also, many organisations are working hard to try and validate what politicians and others say in public. These are worthy organisations and deserve our support. Here are just a couple:

Full Fact is an independent charity that, for example, checks the facts behind what politicians and others say on TV programmes like BBC Question Time. See: https://fullfact.org. You can donate to the charity at: https://fullfact.org/donate/

More or Less is a BBC Radio programme (over 300 episodes) that checks behind purported facts of all sorts (from political claims to ‘facts’ that we all take for granted without questioning them). http://www.bbc.co.uk/programmes/p00msxfl/episodes/player

However, even if ‘the facts’ can be reasonably established, there are two perspectives that undermine what may seem like a definitive answer to the question of trust. These are the perspectives of constructivism and intent.

Constructivism, intent, and the question of trust

From a constructivist perspective it is impossible to put a definitive meaning on any data. Meaning will always be an interpretation. You only need to look at what happens in a court of law to understand this. Whatever the evidence, however robust it is, it is always possible to argue that it can be interpreted in a different way. There is always another ‘take’ on it. The prosecution and the defence may present entirely different interpretations of much the same evidence. As Tony Benn once said, ‘one man’s terrorist is another man’s freedom fighter’. It all depends on the perspective you take. Even a financial transaction can be read in different ways: while its existence may not be in dispute, it may be claimed that it took place as a result of coercion or error rather than being freely entered into. The meaning of the data is not an attribute of the data itself. It is, at least in part, an attribute of the perceiver.

Furthermore, whatever is recorded in the data, it is impossible to be sure of the intent of the parties. Intent is subjective. It is sealed in the minds of the actors and inevitably has to be taken on trust. I may transfer the ownership of something to you knowing that it will harm you (for example a house or a car that, unknown to you, is unsafe or has unsustainable running costs). On the face of it the act may look benevolent whereas, in fact, the intent is to do harm (or vice versa).

Whilst for the most part we can take transactions at their face value, and it hardly makes sense to do anything else, the trust between the parties extends beyond the raw existence of the record of the transaction, and always will. This is not necessarily any different when an authority or intermediary is involved, although the presence of a third-party may have subtle effects on the nature of the trust between the parties.

Lastly, there is the pragmatic matter of adjudication and enforcement in the case of breaches of a contract. For instantaneous financial transactions there may be little possibility of breach in terms of delivery (i.e. the electronic payments are effected immediately and irrevocably). For other forms of contract, though, the situation is not very different from non-blockchain transactions. Although we may be able to put anything we like in a blockchain contract – we could, for example, appoint a mutual friend as the adjudicator of a relationship contract and empower family members to enforce it – we will still need a system of appeals and an enforcer of last resort.

I am not saying that blockchain is unnecessary or unworkable, but I am saying that it is not the whole story, and that we need to maintain a healthy scepticism about everything. Nothing is certain.


Further Viewing

Psychological experiments in trust. Trust is more situational than we normally think. Whether we trust somebody often depends on situational cues such as appearance and mannerisms. Some cues are to do with how similar one person feels to another. Cues can be used to ascribe moral intent to robots and other artificial agents.

YouTube Video, David DeSteno: “The Truth About Trust” | Talks at Google, Talks at Google, February 2014, 54:36 minutes


Trust is a dynamic process involving vulnerability and forgiveness and sometimes needs to be re-built.

YouTube Video, The Psychology of Trust | Anne Böckler-Raettig | TEDxFrankfurt, TEDx Talks, January 2017, 14:26 minutes


More than half the world lives in societies that document identity, financial transactions and asset ownership, but about 3 billion people do not have the advantages that the ability to prove identity and asset ownership confers. Blockchain and other distributed technologies can provide mechanisms that can directly service the documentation, reputational, transactional and contractual needs of everybody, without the intervention of nation states or other third parties.

YouTube Video, The future will be decentralized | Charles Hoskinson | TEDxBermuda, TEDx Talks, December 2014, 13:35 minutes

– It’s like this

We are all deluded. And for the most part we don’t know it. We often feel as though we have control over our own decisions and destiny, but how true is that? It’s a bit like what US Secretary of Defense Donald Rumsfeld famously said in February 2002 about the ‘known knowns’, the ‘known unknowns’ and the ‘unknown unknowns’.

YouTube video, Donald Rumsfeld Unknown Unknowns!, Ali, August 2009, 34 seconds


 

The significance for ROBOT ETHICS: If people can only act on the basis of what they know, then it is easy to see the implications for artificial Autonomous Intelligent Systems (A/ISs) like robots, which ‘know’ so much less. They may act with the same confidence as people, who are biased towards thinking that what they know, and their interpretation of the world, is the only way to see it. Understanding the ‘goggles’ through which people see the world – how they learn, how they classify, how they form concepts and how they validate and communicate knowledge – is fundamental to embedding ethical self-regulation into A/ISs.

 


How can a brain that is deluded even get an inkling that it is? For the most part, the individual finds it very difficult. Interestingly, it is often those who are most confident that they are right who are most wrong (and, dangerously, who we most trust). The 2002 Nobel Prize winner Daniel Kahneman has spent a lifetime studying the systematic biases in our thinking. Here is what he says about confidence:

YouTube video, Daniel Kahneman: The Trouble with Confidence, Big Think, February 2012, 2:56 minutes

The fact is that, when it comes to our own interpretations of the world, there is very little that either you or I can absolutely know, as demonstrated by René Descartes in 1637. It has long been known that we have deficiencies in our abilities to understand and interpret the world, and indeed it can be argued that the whole system of education is motivated by the need to help individuals make more informed and more rational decisions (although it can equally be argued that education, and training in particular, is a sausage factory in the service of employers whose interests may not align with those of the individual).


 

The significance for ROBOT ETHICS: Whilst people may have some idea that there are things they do not know, this is generally not true of computer programs. Young children start to develop ethical ideas (e.g. a sense of fairness) from an early age. It then takes years of schooling and good parenting to get to the point where, as an adult, the law assumes you have full responsibility for your actions. This highlights the huge gap between an adult human’s understanding of ethics and what A/ISs are likely to understand for the foreseeable future.

 


First Principles

The debate about whether we should act by reason or by our intuitions and emotions is not new. The classic work on this is Kant’s ‘Critique of Pure Reason’ published in 1781. This is a masterpiece of epistemological analysis covering science, mathematics, the psychology of mind and belief based on faith and emotion. Kant distinguishes between truth by definition, truth by inference and truth by faith, setting out the main strands of debate for centuries to come. Here is a short, clear presentation of this work.

Introduction to Kant’s Critique of Pure Reason (Part 1 of 4), teach philosophy, September 2013, 4:52 minutes


Beliefs

From an individual’s point of view, by a process of cross validation between different sources of evidence (people we trust,  the media and society generally, our own reasoned thinking, sometimes scientific research and our feelings), we are continuously challenged to construct a consistent view about the world and about ourselves. We feel a need to create at least some kind of semi-coherent account. It’s a primary mechanism of reducing anxiety.  It keeps us orientated and safe. We need to account for it personally, and in this sense we are all ‘personal’ scientists, sifting the evidence and coming to our own conclusions.  We also need to account for it as a society, which is why we engage in science and research to build a robust body of knowledge to guide us.

George Kelly, in 1955, set out ‘personal construct theory’ to describe this from the perspective of the individual – see, for example, this straightforward account of constructivism (which also, interestingly, proposes how to reconcile it with Christianity – a belief system based on an entirely different premise, methodology and pedigree):

 

But for the most part there are inconsistencies – between what we thought would happen and what actually did happen, between how we felt and how we thought, between how we thought and what we did, between how we thought somebody would react and how they did react, between our theories about the world and the evidence. Some of the time things are pretty well what we expect, but almost as frequently things don’t hang together; they just don’t add up. This drives us on a continuous search for patterns and consistency. We need to make sense of it all:

YouTube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes

 

But it turns out that really, as Kahneman demonstrates, we are not particularly good scientists after all.  Yes, we have to grapple with the problems of interpreting evidence.  Yes, we have to try and understand the world in order to reduce our own anxieties and make it a safer place.  But, no, we do not do this particularly systematically or rationally.  We are lazy and we are also as much artists as we are scientists. In fact, what we are is ‘story tellers’. We make up stories about how the world works – for ourselves and for others.


 

The significance for ROBOT ETHICS: The implication for A/ISs is that they must learn to see the world in a manner that is similar (or at least understandable) to the people around them. They must also have mechanisms for dealing with ambiguous inputs and uncertain knowledge, because not much is straightforward when it comes to processing at the abstract level of ethics. Dealing with contradictory evidence by denial, forgetting and ignoring, as people often do, may not be the way we would like A/ISs to deal with ethical issues.

 


Stories

Sifting evidence is not the only way that we come to ‘know’. There is another method that, in many ways, is a lot more efficient and used just as often. This is to believe what somebody else says. So instead of having to understand and reconcile all the evidence yourself you can, as it were, delegate the responsibility to somebody you trust. This could be an expert, or a friend, or a God. After all, what does it matter whether what you (or anybody else) believe is true or not, so long as your needs are being met. If somebody (or something) repeatedly comes up with the goods, you learn to trust them and when you trust, you can breathe a sigh of relief – you no longer have to make the effort to evaluate the evidence yourself. The source of information is often just as important as the information itself. Despite the inconsistencies we believe the stories of those we trust, and if others trust us, they believe our stories.

Stories provide the explanations for what has happened, and stories help us understand and predict what will happen. Our anxiety is most relieved by ‘a good story’. And while the story needs to have some resemblance to the evidence, and, as in court, can be challenged and cross-examined, what seems to matter most is that it is a ‘good’ story. To be a ‘good’ story it must be interesting, revealing, surprising and challenging; its consistency is just one factor. In fact, there can be many different stories, or accounts, of precisely the same incident or event – each account from a different perspective, interpreting, weighing and presenting the evidence from a different viewpoint or through a different value system. The ‘truth’ is not just how well the story accounts for the evidence; it also depends on a correspondence between the interpretive framework of the listener and that of the teller:

YouTube Video, The danger of a single story | Chimamanda Ngozi Adichie, TED, October 2009, 19:16 minutes

Both as individuals and as societies, we often deny, gloss over and suppress the inconsistencies, sometimes for the sake of a ‘better’ story (often one that better reflects the biases in our own value system). They can be conveniently forgotten or repressed long enough for something else to demand our attention and preoccupy us. But the inconsistencies, and the evidence about ourselves and the human condition, fight back. They can re-emerge to create nagging doubts, and over time we start to wonder – is our story really true?


 

The significance for ROBOT ETHICS: Just like people, A/ISs will have to learn who to trust, how to identify and resolve inconsistencies in belief, and how to construct a variety of accounts of the world and of their own decision-making processes, in order to explain themselves and generally communicate in forms that are understandable to people. As in human dialogue, these accounts will need to bring out certain facets of the A/IS’s own beliefs, and afford certain interpretations, depending on its intent and taking into account a knowledge of the person or people it is in dialogue with. Unlike in human dialogue, the intent of the A/IS must be to enhance the wellbeing of the people it serves (except when their intent is malicious with respect to other people), and to communicate transparently with this intent in mind.

 


Some Epistemological Assumptions

In these blog postings, I try not to take for granted any particular story about how we are and how we relate to each other. What really lies behind our motivations, decisions and choices? Is it the story that classical economists tell us about rational people in a world of perfect information? Is it the story neuroscientists tell us about how the brain works? Is it the story about the constant struggle between the id and the super-ego told to us by Freud? Is it the story that the advertising industry tells us about what we need for a more fulfilled life? Or is it the story that cognitive psychologists tell us about how we process information? Which account tells the best story? Can these different accounts be reconciled?

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make, become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms, that themselves develop over time and can change in response to circumstances.

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, from our feelings, from our reasoning and so on. These surprise us as they conflict with our default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or, in the case of establishing consistency with others, ‘agreeing’. We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current intentions and future intentions. These in turn affect how we act on the world.


 

The significance for ROBOT ETHICS: To embed ethical self-regulation in artificial Autonomous Intelligent Systems (A/ISs) will require an understanding of how people learn, interpret, reflect and act on the world, and may require a similar decision-making architecture. This is partly for the A/IS’s own ‘operating system’, but also so that it can model how the people around it operate, and so engage with them ethically and effectively.

 


This blog post, ‘It’s Like This’, sets the epistemological framework for what follows in later posts: the underlying assumptions about how we know, justify and explain what we know – both as individuals and in society.