
Tag Archives: Validation


– UK white paper response

 

 

 

Response to the

Department for Science, Innovation and Technology

consultation on the white paper

‘A pro-innovation approach to AI regulation’

published in March 2023

Date of Response:  8th June 2023

 

Authors:  Citizens Discussion Group on Social Impacts of AI

contact email:  rod.rivers@mac.com

 

Download:   AIethics.AI Response to the Consultation

 

Also see related links to the White Paper etc. below the response

 

 

Introduction

Main Points

Proposed revision to emphasis of the white paper

Rationale and Discussion

Recommended changes to the white paper

Additional Comments

 

Introduction

Please accept and consider this response.  It has emerged from a small group of citizens who have had monthly discussions since November 2022 to consider the social impacts of AI.

Rather than respond to the individual consultation questions we have put our views forward in this free format.   This is so that we are not constrained by the assumptions made in the white paper and so that we can respond at a higher level than the detail of the proposals.

Main Points

Firstly, we are struck by the coincidence in time between the consultation white paper and the proposal by AI experts that there should be a moratorium on AI development.   There is clearly a tension between these two positions that needs to be fully understood and reconciled. On the one hand we have the UK Government proposing very light-touch regulation in order not to stifle innovation, and on the other hand a significant number of AI experts who are alarmed by the existential risk presented by AI systems.  The way in which these two rather opposing positions can be reconciled is explored below.

Secondly, and relatedly, AI is developing rapidly, and indeed much has happened in the three months since the publication of the white paper, which itself may have been drafted before the public were fully exposed to systems like ChatGPT.  For example, since then Geoff Hinton (widely described as the ‘godfather’ of machine learning) has expressed his concerns about AI and the uncertain potential for AI systems to develop a different but more effective form of intelligence, in which many interacting AIs communicate at electronic speed with access to the accumulation of human knowledge (both well validated and spurious).   We are already seeing AIs that, at present with human help, generate computer code that can feed into the development of another, more powerful and effective, generation of AI systems.  We can anticipate that it will not be too long before AI systems are critiquing and improving themselves in a general (not task-specific) way without much, if any, human intervention.  Geoff Hinton himself believes that it is very difficult to predict how AI will develop beyond a five-year horizon.   While the white paper acknowledges the pace of development, it offers no mitigation of a risk that might emerge from the fog within the lifetime of a single government.  We cannot see how either a moratorium on the development of AI or its regulation is likely to prevent an AI ‘arms race’ amongst nations or between companies.  This situation calls for a quite different strategy and our proposals below address this.

Thirdly, we note that the UK cannot act alone with respect to development of AI or its regulation.  While the UK may aspire to being amongst the leading innovators in AI, it is likely to be dwarfed by the US, China, and the EU, both in its investment in AI and in its regulation.  With respect to development this means a potential ‘arms race’ that can only accelerate developments and heighten the existential risks.  With respect to regulation this means that the UK, while it might be able to influence the debate about regulation, could in the end be subsumed into a regulatory regime largely defined by other bigger players.

Fourthly, we note that on first reading the white paper comes across as a ‘do nothing’ approach (light touch regulation and minimal organisational change).  It creates the impression of lacking leadership, firm direction, and moral authority.  AI is an area of increasing expert and public concern.  It is dominated by multi-national commercial players who have already demonstrated an inability to adopt adequate safeguards in relation to harms caused by technology.  Only a state can take the moral lead necessary to control large, short-term commercial interests that often have little interest in minimising harms.

Fifth, the white paper fails to distinguish between the types of AI system that do harm and those that clearly confer benefit.  Its laissez-faire approach fails to acknowledge the full impact of either past or future harms.  The white paper refers to regulating ‘use’ rather than ‘technology’.  We would argue for regulating with respect to ‘harms’.  There is insufficient explanation of the purpose of regulation as a mechanism for containing the risk of harms.

We support the principles generally, and especially the principle of contestability and redress.  Compensation and redress with respect to ‘harms’ get directly to the purpose of regulation and would encourage the identification, anticipation and mitigation of harms. Making contestability easy is essential to the fuller understanding of the fast-changing impacts of technology.  Ensuring that redress in cases of transgression fully compensates individuals and society (i.e. compensating for wider social harms as well) is the only language the big players are likely to understand.  The burden of proving no harm (or benefit clearly and significantly outweighing harms) should be on the companies developing AI and their products rather than on the individual or society.  Contestability and redress are far less abstract than principles like fairness and are the route to operationalising what the more abstract principles mean in practice.  If legislation is to have any teeth it needs to do everything it can to facilitate contestation and to compensate fully for both individual and social harms.

Sixth, whilst we welcome the white paper and the opportunity it offers to address an important topic, we believe that by not adequately taking into account the above, the white paper is flawed because it addresses the wrong question.  This is highlighted by the huge gulf between the white paper’s position of light-touch regulation and the perception by experts in the industry that AI may constitute the world’s biggest existential threat. There is clearly a significant mismatch in the frames of reference behind these two apparently opposing positions.

We believe that rather than addressing the extent of regulation – whether it should be light-touch or not – the white paper should be addressing the question ‘what sorts of AI should the UK be developing?’. In answering this question, we point to an important role for the UK, one for which there may be a gap in the market, and one that is vital to the development of AI systems generally.

Proposed revision to emphasis of the white paper

We have a suggestion that we believe should be considered at the highest levels of government, and that could steer the UK onto a path that would not only be good for the UK but could also benefit all nations.   It avoids the UK becoming implicitly complicit in a dangerous AI arms race, it capitalises on strengths that are deeply rooted in British culture, it provides an excellent opportunity for commercial exploitation of AI technology, and it enhances the brand and credibility of the UK as a leader in a matter of importance across the world.

So, what is the suggestion?   In one sentence – the UK Government should set up mechanisms that support AI developments designed to prevent harms (and to capitalise on some strengths in ethical big data) and discourage others.

Mechanisms:  The mechanisms we propose are financial incentives (e.g. grants and tax-breaks in the case of favoured developments, and additional tax burdens and regulation in the case of discouraged developments).  Such an approach provides a more focused, useful and effective regime than that proposed in the white paper.   It operationally defines what is encouraged and discouraged without introducing heavy regulation that will quickly date or stifle innovation.  It clearly indicates the areas where the UK can lead the world, and where other nations may have neither the capability nor the motivation to compete while still seeing the benefits of, and supporting, the UK’s initiative.

Areas to Encourage:  While other nations focus on the mainstream development of AI systems, our suggested approach is to encourage focused specialisation on particular AI technologies and applications. These are ethical AI, AI risk mitigation and big data applications (e.g. health).

By coordinating and integrating the skills that UK universities have built up, particularly in the social sciences, computing and the creative industries, the UK can build world-class interdisciplinary teams designed to address some of the hard problems of AI.  These include:

  • AI safety and the checking of AI decision-making
  • truth verification, anomaly detection and fraud/fake detection
  • accountability, responsibility and legal liability in relation to AI systems
  • identity verification, protection and management
  • collective intelligence and citizen participation in political decision-making
  • legal judgement and moral reasoning
  • traceability

Included in the above are: watermarking technologies, reflective and corrective algorithms, architectures to facilitate citizen participation in decision-making, anomaly detection, personal/user agents, cyber defence and neutralisation systems, open decision-making and transparency, blame logics (to help formalise the allocation of accountability, responsibility and redress for harms), explanation systems, scientific truth verification, supply chain traceability (e.g. food, energy, clothing), health and safety checking systems, checking for adherence to law and regulation, and many other specific applications that would help ensure the integrity of AI and other systems.

Rationale and Discussion

The approach has parallels with the way in which the UK has encouraged the development of green / carbon neutral technologies. The impacts and benefits accrue not just to the UK but to individuals everywhere and humanity as a whole. Hence the products and services are welcomed by governments, new businesses and citizens alike. The market for these AI developments is world-wide and wide open.

The UK also has the opportunity to capitalise on the use of high quality big data sets.  In particular, data held by the NHS needs to be both protected and exploited by AI.  The UK can use big datasets like health data without either compromising privacy or selling the crown jewels. How?  These big data sets can be used to train AI systems.  The algorithms produced by this training have large commercial value.  They do not expose the raw data itself so patient privacy is maintained and the ownership of the raw data is preserved.

The UK has an opportunity to act on the world stage in a statesman-like way with respect to AI, with its eye firmly on helping mitigate the risks of AI systems by building principles, methods, skills and tools that draw on its already strong capabilities in providing legal frameworks, regulatory systems, democratic government and higher education.   AI is not just another technology.  It is a technology that will have a profound effect on humanity and the UK should position itself as a forward thinker and actor.  It need not join a knee-jerk competitive scrabble to develop short-term commercial AI systems – a race that it will surely lose to bigger international players.

Instead, it could develop a critical mass of capability that steers AI towards beneficial applications and provides the tools to mitigate AI risks.   The risks have already materialised through the use of AI in marketing, manipulating news feeds, deep fakes, the dissemination of false and misleading information and so on.  We already face the possibility of the use of AI by organised crime, rogue states and borderline industries like gambling.  Look at the way the UK in particular has fallen behind in fraud detection.  We need a concerted focus, and resources, for staying ahead of the threats made possible by AI – addressing both short-term threats like fraud and the longer-term emerging existential threats (and in this case ‘long term’ may be only five years).  The UK could develop a commercial edge by leading in ethical AI, AI risk mitigation and the use of big data in training algorithms.

For each of the principles set out in the white paper (safety, transparency, explainability, fairness, accountability, contestability and redress) it is possible to identify and develop scenarios illustrating risk.  For example, how might a well-resourced actor build AI systems that threatened safety, transparency etc., either malevolently or inadvertently?  Such scenarios might drive the development of AI tools that can ‘police’ other AI systems and help counter the risks to these worthwhile principles.

All the example areas above require a multi-disciplinary approach.  They need to be addressed from both a social and a technical perspective. They need to be developed rapidly to address the many threats and risks posed by the ready availability of AI platforms. As we have seen with the use of AI in social media, these new AI platforms can be used in many nefarious ways, ranging from the criminal to unacceptable exploitation (both commercial and political).   There are also many possible applications of AI that can help address currently intractable UK and world problems.  These too can be encouraged.

Recommended changes to the white paper

In order to implement the above we make the following recommendations:

The white paper, while useful as it stands, should be re-oriented (and supplemented) to:

  • position the UK as the leading developer of products and services to support the development (and commercial exploitation) of ethical AI, AI risks mitigation and AI training data
  • identify ethics as a primary driver of the policy alongside innovation and commercial exploitation
  • explicitly set out the areas of AI development where harms and potential harms have already been identified and cite the types of developments and applications that potentially lead to harms
  • propose the development of mechanisms to identify, measure and allocate the accountability / responsibility for harms as a basis for determining appropriate redress
  • explicitly identify areas of AI development to be encouraged and only use examples that conform to the areas encouraged
  • commission the development of a test (that could potentially be implemented as an AI system through training on examples) that would score proposed developments for their conformity with the types of development to be encouraged (i.e. ethical and commercially promising) – an illustrative sketch of such a scoring approach follows this list
  • set out mechanisms by which these areas might be encouraged (e.g. grants; tax-breaks; technical, training and managerial support)
  • set out a strategy to develop a world leading workforce able to develop products and services in the areas to be encouraged
  • develop scenarios that illustrate risks to the principles set out in the white paper (safety, transparency, explainability, fairness, accountability, contestability and redress) and use these to drive the development of AI systems to mitigate the risks
  • set out deterrents to discourage potentially criminal, harmful and otherwise unacceptable developments (e.g. heavy regulation and punitive taxation)
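
To make the idea of a conformity-scoring test more concrete, here is a minimal sketch, not a recommendation of any particular technique: a simple text classifier, trained on hypothetical examples of encouraged and discouraged proposals, that scores a new submission. The example data, and the use of the scikit-learn library, are assumptions for illustration only; a real test would need a carefully curated and audited training set.

```python
# Minimal, illustrative sketch only: a text classifier that scores proposal
# descriptions for conformity with the kinds of AI development to be encouraged.
# The example data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = encouraged, 0 = discouraged
proposals = [
    "watermarking tool to trace the provenance of AI-generated images",
    "explanation system that audits automated loan decisions",
    "engagement-maximising recommender for a gambling platform",
    "tool that generates personalised political misinformation",
]
labels = [1, 1, 0, 0]

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(proposals, labels)

new_proposal = ["anomaly detection service for verifying supply chain claims"]
score = scorer.predict_proba(new_proposal)[0][1]
print(f"Conformity score (0 to 1): {score:.2f}")
```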

Additional Comments

Otherwise, we like the iterative and sandbox approaches, and especially in light of the above comments we support centralised coordination and oversight.

We felt that case study 2.1, relating to the use of AI to determine insurance premiums, might send the wrong messages about useful applications. It is arguable that such an application may go against the principle of insurance as a mechanism for fairly spreading risk. In general, the applications used to illustrate the types of AI development that are to be encouraged need to be thought through more explicitly and should illustrate how the principles play through into their selection.  The case studies should clearly and explicitly exemplify the operation of the principles.  Indeed, the AI applications that should be encouraged are those that would implement the principles – building AIs that aim to achieve greater safety, transparency, explainability, fairness, accountability and contestability, and that facilitate determining redress, are exactly the sort of applications where we should position the UK to become world-leaders.  Applications like these would have great value to society, would be commercially valuable in a world where business would benefit from the greater stability they might engender, and would put the UK at the forefront of AI safety.

Help us campaign for safer AI

Artificial Intelligence has been rapidly advancing in recent years, with its applications exploding across various industries. However, as its use becomes more and more ubiquitous, concerns have been raised about the ethical implications of AI. To address these concerns, AIethics.ai invites you to join our movement, which is urging a more robust ethical approach to AI regulation in the UK government’s white paper consultation.

To support our response and become part of a growing body pressuring the government, join here.

Related Links

GOVERNMENT WHITE PAPER CONSULTATION ON REGULATION OF AI
The UK government published a white paper called ‘A Pro-Innovation Approach to AI Regulation’.  This is available at:
The white paper references a March 2023 report by Sir Patrick Vallance, the Government Chief Scientific Adviser, called ‘Pro-innovation Regulation of Technologies Review – Digital Technologies’ available at:
The white paper invites consultation by 21st June (see Appendix C). 
CALL FOR A PAUSE IN THE DEVELOPMENT OF AI
There has also been a call by AI Experts for a pause in development of advanced AI while we take stock of direction and regulation.  See the open letter and sign it if you want at:

– AI: Measures, Maps and Taxonomies

Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s science festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.

José opened by pointing out that artificial intelligence was not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and in how little training it needs to learn concepts.

José Hernández-Orallo

José‘s main message was how, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which we can compare artificial and human intelligence or to determine the pace of progress in artificial intelligence. We have no maps that enable us to navigate around the space of artificial intelligence offerings (for example, which offerings might be ethical and which might be potentially harmful). And lastly, we have no taxonomies to classify approaches or examples of artificial intelligence.

Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or more generally reinforcement learning), there is no overall, widely used classification scheme.

Intelligence not included

My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time they might score at superhuman levels with respect to some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity - the point at which artificial intelligence might overtake human intelligence.

Another take on this would be to look at skills. Interestingly, systems like Amazon’s Alexa describe the applications or modules that developers offer as 'skills'. So for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to effectively perform some task. However, by any standard, the skill offered by a typical Alexa 'skill', Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production, side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.

When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.

But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.

– Can we trust blockchain in an era of post truth?

Post Truth and Trust

The term ‘post truth’ implies that there was once a time when the ‘truth’ was apparent or easy to establish. We can question whether such a time ever existed, and indeed the ‘truth’, even in science, is constantly changing as new discoveries are made. ‘Truth’, ‘Reality’ and ‘History’, it seems, are constantly being re-constructed to meet the needs of the moment. Philosophers have written extensively about the nature of truth, and there is an entire branch of philosophy, called ‘epistemology’, devoted to it. Indeed my own series of blogs starts with a posting called ‘It’s Like This’ that considers the foundation of our beliefs.

Nevertheless there is something behind the notion of ‘post truth’. It arises out of the large-scale manufacture and distribution of false news and information made possible by the internet and facilitated by the widespread use of social media. This combines with a disillusionment in relation to almost all types of authority including politicians, media, doctors, pharmaceutical companies, lawyers and the operation of law generally, global corporations, and almost any other centralised institution you care to think of. In a volatile, uncertain, changing and ambiguous world who or what is left that we can trust?

YouTube Video, Astroturf and manipulation of media messages | Sharyl Attkisson | TEDxUniversityofNevada, TEDx Talks, February 2015, 10:26 minutes

All this may have contributed to the populism that has led to Brexit and Trump and can be said to threaten our systems of democracy. However, to paraphrase Churchill’s famous remark, ‘democracy is the worst form of Government, except for all the others’. But does the new generation of distributed and decentralising technologies provide a new model in which any citizen can transact with any other citizen, on any terms of their choosing, bypassing all systems of state regulation, whether they be democratic or not? Will democracy become redundant once power is fully devolved to the individual and individuals become fully accountable for their every action?

Trust is the crucial notion that underlies belief. We believe who we trust and we put our trust in the things we believe in. However, in a world where we experience so many differing and conflicting viewpoints, and we no longer unquestioningly accept any one authority, it becomes increasingly difficult to know what to trust and what to believe.

To trust something is to put your faith in it without necessarily having good evidence that it is worthy of trust. If I could be sure that you could deliver on a promise then I would not need to trust you. In religion, you put your trust in God on faith alone. You forsake the need for evidence altogether, or at least, your appeal is not to the sort of evidence that would stand up to scientific scrutiny or in a court of law.

Blockchain to the rescue

Blockchain is a decentralised technology for recording and validating transactions. It relies on computer networks to widely duplicate and cross-validate records. Records are visible to everybody, providing total transparency. Like the internet it is highly distributed and resilient. It is a disruptive technology that has the potential to decentralise almost every transactional aspect of everyday life and replace third parties and central authorities.

YouTube Video, Block chain technology, GO-Science, January 2016, 5:14 minutes

Blockchain is often described as a ‘technology of trust’, but its relationship to trust is more subtle than first appears. Whilst Blockchain promises to solve the problem of trust, in a twist of irony, it does this by creating a kind of guarantee; with that guarantee in place you no longer have to be concerned about trusting the other party to a transaction, because what you can trust is the Blockchain record of what you agreed. You can trust this record because, once you understand how it works, it becomes apparent that the record is secure and cannot be changed, corrupted, denied or misrepresented.
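
To make the point about an unchangeable record more concrete, here is a minimal sketch, illustrative only and far simpler than any real blockchain (which adds distribution, consensus and digital signatures), of how chaining each record to the hash of its predecessor makes tampering detectable:

```python
# Minimal sketch of a hash chain: each block stores the hash of the previous
# block, so altering any earlier record invalidates every later hash.
# Illustrative only; real blockchains add consensus, signatures and much more.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transaction):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transaction": transaction, "prev_hash": prev_hash})

def chain_is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 10")
add_block(chain, "Bob pays Carol 4")
print(chain_is_valid(chain))   # True

chain[0]["transaction"] = "Alice pays Bob 1000"   # attempt to rewrite history
print(chain_is_valid(chain))   # False: the tampering is detectable
```

Because every block embeds the hash of the one before it, rewriting an early transaction breaks the chain of hashes that everyone holds a copy of.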

Youtube Video, Blockchain 101 – A Visual Demo, Anders Brownworth, November 2016, 17:49 minutes

It has been argued that Blockchain is the next revolution in the internet, and indeed, is what the internet should have been based on all along. If, for example, we could trace the provenance of every posting on Facebook, then, in principle, we would be able to determine its true source. There would no longer be doubt about whether or not the Russians hacked into the Democratic party computer systems because all access would be held in a publicly available, widely distributed, indelible record.

However, the words ‘in principle’ are crucial and gloss over the reality that Blockchain is just one of many building-blocks towards the guarantee of trustworthiness. What if the Russians paid a third party in untraceable cash to hack into records or to create false news stories? What if A and B carry out a transaction but, unknown to A, B has stolen C’s identity? What if there are some transactions that are off the Blockchain record (e.g. the subsequent sale of an asset) – how do they get reconciled with what is on the record? What if somebody one day creates a method of bringing all computers to a halt or erasing all electronic records? What if somebody creates a method by which the provenance captured in a Blockchain record were so convoluted, complex and circular that it was impossible to resolve however much computing power was thrown at it?

I am not saying that Blockchain is no good. It seems to be an essential underlying component in the complicated world of trusting relationships. It can form the basis on which almost every aspect of life from communication, to finance, to law and to production can be distributed, potentially creating a fairer and more equitable world.

YouTube Video, The four pillars of a decentralized society | Johann Gevers | TEDxZug, TEDx Talks, July 2014, 16:12 minutes

Also, many organisations are working hard to try and validate what politicians and others say in public. These are worthy organisations and deserve our support. Here are just a couple:

Full Fact is an independent charity that, for example, checks the facts behind what politicians and others say on TV programmes like BBC Question Time. See: https://fullfact.org. You can donate to the charity at: https://fullfact.org/donate/

More or Less is a BBC Radio programme (over 300 episodes) that checks behind purported facts of all sorts (from political claims to ‘facts’ that we all take for granted without questioning them). http://www.bbc.co.uk/programmes/p00msxfl/episodes/player

However, even if ‘the facts’ can be reasonably established, there are two perspectives that undermine what may seem like a definitive answer to the question of trust. These are the perspectives of constructivism and intent.

Constructivism, intent, and the question of trust

From a constructivist perspective it is impossible to put a definitive meaning on any data. Meaning will always be an interpretation. You only need to look at what happens in a court of law to understand this. Whatever the evidence, however robust it is, it is always possible to argue that it can be interpreted in a different way. There is always another ‘take’ on it. The prosecution and the defence may present an entirely different interpretation of much the same evidence. As Tony Benn once said, ‘one man’s terrorist is another man’s freedom fighter’. It all depends on the perspective you take. Even a financial transaction can be read in different ways. While its existence may not be in dispute, it may be claimed that it took place as a result of coercion or error rather than being freely entered into. The meaning of the data is not an attribute of the data itself. It is, at least in part, an attribute of the perceiver.

Furthermore, whatever is recorded in the data, it is impossible to be sure of the intent of the parties. Intent is subjective. It is sealed in the minds of the actors and inevitably has to be taken on trust. I may transfer the ownership of something to you knowing that it will harm you (for example a house or a car that, unknown to you, is unsafe or has unsustainable running costs). On the face of it the act may look benevolent whereas, in fact, the intent is to do harm (or vice versa).

Whilst for the most part we can take transactions at their face value, and it hardly makes sense to do anything else, the trust between the parties extends beyond the raw existence of the record of the transaction, and always will. This is not necessarily any different when an authority or intermediary is involved, although the presence of a third-party may have subtle effects on the nature of the trust between the parties.

Lastly, there is the pragmatic matter of adjudication and enforcement in the case of breaches of a contract. For instantaneous financial transactions there may be little possibility of breach in terms of delivery (i.e. the electronic payments are effected immediately and irrevocably). For other forms of contract, though, the situation is not very different from non-Blockchain transactions. Although we may be able to put anything we like in a Blockchain contract (we could, for example, appoint a mutual friend as the adjudicator over a relationship contract and empower family members to enforce it), we will still need a system of appeals and an enforcer of last resort.

I am not saying that Blockchain is unnecessary or unworkable, but I am saying that it is not the whole story and we need to maintain a healthy scepticism about everything. Nothing is certain.


Further Viewing

Psychological experiments in Trust. Trust is more situational than we normally think. Whether we trust somebody often depends on situational cues such as appearance and mannerisms. Some cues are to do with how similar one person feels to another. Cues can be used to ascribe moral intent to robots and other artificial agents.

YouTube Video, David DeSteno: “The Truth About Trust” | Talks at Google, Talks at Google, February 2014, 54:36 minutes


Trust is a dynamic process involving vulnerability and forgiveness and sometimes needs to be re-built.

YouTube Video, The Psychology of Trust | Anne Böckler-Raettig | TEDxFrankfurt, TEDx Talks, January 2017, 14:26 minutes


More than half the world lives in societies that document identity, financial transactions and asset ownership, but about 3 billion people do not have the advantages that the ability to prove identity and asset ownership confers. Blockchain and other distributed technologies can provide mechanisms that can directly service the documentation, reputational, transactional and contractual needs of everybody, without the intervention of nation states or other third parties.

YouTube Video, The future will be decentralized | Charles Hoskinson | TEDxBermuda, TEDx Talks, December 2014, 13:35 minutes

– Ways of knowing (HOS 4)

How do we know what we know?

This article considers:

(1) the ways we come to believe what we think we know

(2) the many issues with the validation of our beliefs

(3) the implications for building artificial intelligence and robots based on the human operating system.


I recently came across a video (on the site http://www.theoryofknowledge.net) that identified the following ‘ways of knowing’:

  • Sensory perception
  • Memory
  • Intuition
  • Reason
  • Emotion
  • Imagination
  • Faith
  • Language

This list is mainly about mechanisms or processes by which an individual acquires knowledge. It could be supplemented by other processes, for example ‘meditation’, ‘science’ or ‘history’, each of which provides its own set of approaches to generating new knowledge for both the individual and society as a whole. There are many different ways in which we come to formulate beliefs and understand the world.

Youtube Video, TOK Ways of Knowing EXPLAINED | Theory of Knowledge Advice, Ivy Lilia, October 2018, 6:16 minutes


In the spirit of working towards a description of the ‘human operating system’, it is interesting to consider how a robot or other Artificial Intelligence (AI), that was ‘running’ the human operating system, would draw on its knowledge and beliefs in order to solve a problem (e.g. resolve some inconsistency in its beliefs). This forces us to operationalize the process and define the control mechanism more precisely. I will work through the above list of ‘ways of knowing’ and illustrate how each might be used.


Let’s say that the robot is about to go and do some work outside and, for a variety of reasons, needs to know what the weather is like (e.g. in deciding whether to wear protective clothing, or how suitable the ground is for sowing seeds or digging up for some construction work etc.).

First it might consult its senses. It might attend to its visual input and note the patterns of light and dark, comparing this to known states and conclude that it was sunny. The absence of the familiar sound patterns (and smell) of rain might provide confirmation. The whole process of matching the pattern of data it is receiving through its multiple senses with its store of known patterns can be regarded as ‘intuitive’ because it is not a reasoning process as such. In the Kahneman sense of ‘system 1’ thinking, the robot just knows without having to perform any reasoning task.

Youtube Video, System 1 and System 2, Stoic Academy, February 2017, 1:26 minutes

The knowledge obtained from matching perception to memory can nevertheless be supplemented by reasoning, or other forms of knowledge that confirm or question the intuitively-reached conclusion. If we introduce some conflicting knowledge, e.g. that the robot thinks it is the middle of the night in its current location, we then create a circumstance in which there is dissonance between two sources of knowledge – the perception of sunlight and the time of day. This assumes the robot has elaborated knowledge about where and when the sun is above the horizon and can potentially shine (e.g. through language – see below).

In people the dissonance triggers the emotional state of ‘surprise’ and the accompanying motivation to account for the contradiction.

Youtube Video, Cognitive Dissonance, B2Bwhiteboard, February 2012, 1:37 minutes

Likewise, we might label the process that causes the search for an explanation in the robot as ‘surprise’. An attempt may be made to resolve this dissonance through Kahneman’s slower, more reasoned, system 2 thinking. Either the perception is somehow faulty, or the knowledge about the time of day is inaccurate. Maybe the robot has mistaken the visual and audio input as coming from its local senses when in fact the input has originated from the other side of the world. (Fortunately, people do not have to confront the contradictions caused by having distributed sensory systems).
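
As a very rough illustration of the control loop just described – a fast ‘system 1’ pattern match followed by a slower ‘system 2’ consistency check – the following hypothetical sketch uses the weather example above; the sensor values, thresholds and rules are invented purely for illustration:

```python
# Hypothetical, hugely simplified sketch of 'system 1' pattern matching
# followed by a 'system 2' consistency check, for the weather example above.

def intuitive_weather(sensors):
    # System 1: match sensory input against stored patterns.
    if sensors["light_level"] > 0.8 and not sensors["rain_sound"]:
        return "sunny"
    return "not sunny"

def consistent(belief, clock_hour):
    # System 2: check the intuitive belief against other knowledge,
    # e.g. that the sun cannot be shining locally in the middle of the night.
    if belief == "sunny" and (clock_hour < 6 or clock_hour > 21):
        return False
    return True

sensors = {"light_level": 0.9, "rain_sound": False}
belief = intuitive_weather(sensors)

if not consistent(belief, clock_hour=2):
    # 'Surprise': the contradiction triggers slower reasoning, e.g. checking
    # whether the input really came from the robot's local senses.
    print("Dissonance detected: re-examine sensor provenance or the clock.")
else:
    print(f"Belief accepted: {belief}")
```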

Probably in the course of reasoning about how to reconcile the conflicting inputs, the robot will have had to run through some alternative possible scenarios that could account for the discrepancy. These may have been generated by working through other memories associated with either the perceptual inputs or other factors that have frequently led to misinterpretations in the past. Sometimes it may be necessary to construct unique possible explanations out of component explanations. Sometimes an explanation may emerge through the effect of numerous ideas being ‘primed’ through the spreading activation of associated memories. Under these circumstances, you might easily say that the robot was using its imagination in searching for a solution that had not previously been encountered.
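
Spreading activation itself can be sketched very simply. The association graph below is hypothetical and purely illustrative: activation starts at the surprising observation and spreads, with decay, to associated memories, ‘priming’ candidate explanations such as a remote sensor feed or a clock error:

```python
# Toy spreading-activation sketch over a hypothetical association graph.
associations = {
    "sunlight at night": ["remote sensor feed", "clock error"],
    "remote sensor feed": ["input from other side of the world"],
    "clock error": ["time zone not updated"],
}

def spread(start_nodes, decay=0.5, steps=2):
    # Activation spreads outwards from the starting memories, weakening at
    # each step; strongly activated nodes are the 'primed' explanations.
    activation = {node: 1.0 for node in start_nodes}
    frontier = dict(activation)
    for _ in range(steps):
        next_frontier = {}
        for node, level in frontier.items():
            for neighbour in associations.get(node, []):
                boost = level * decay
                activation[neighbour] = activation.get(neighbour, 0.0) + boost
                next_frontier[neighbour] = boost
        frontier = next_frontier
    return activation

primed = spread(["sunlight at night"])
for memory, level in sorted(primed.items(), key=lambda kv: -kv[1]):
    print(f"{level:.2f}  {memory}")
```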

Youtube Video, TEDxCarletonU 2010 – Jim Davies – The Science of Imagination, TEDx Talks, September 2010, 12:56 minutes

Lastly, to faith and language as sources of knowledge. Faith is different because, unlike all the other sources, it does not rely on evidence or proof. If the robot believed, on faith, that the sun was shining, any contradictory evidence would be discounted, perhaps either as being in error or as being irrelevant. Faith is often sustained by the shared belief of others, and this could be regarded as a form of evidence, but in general, when you have faith in or trust something, faith is filling the gap between the belief and the direct evidence for it.

Here is a religious account of faith that identifies it with trust in the reliability of God to deliver, where the main delivery is eternal life.

Youtube video, What is Faith – Matt Morton – The Essence of Faith – Grace 360 conference 2015,Grace Bible Church, September 2015, 12:15 minutes

Language as a source of evidence is a catch-all for the knowledge that comes second-hand from the teachings and reports of others. This is indirect knowledge, much of which we take on trust (i.e. faith), and some of which is validated by direct evidence or other indirect evidence. Most of us take on trust that the solar system exists, that the sun is at the centre, and that earth is in the third orbit. We have gained this knowledge through teachers, friends, family, TV, radio, books and other sources that in their turn may have relied on astronomers and other scientists who have arrived at these conclusions through observation and reason. Few of us have made the necessary direct observations and reasoned inferences to have arrived at the conclusion directly. If our robot were to consult databases of known ‘facts’, put together by people and other robots, then it would be relying on knowledge through this source.

Pitfalls

People like to think that their own beliefs are ‘true’ and that these beliefs provide a solid basis for their behaviour. However, the more we find out about the psychology of human belief systems, the more we discover the difficulties in constructing consistent and coherent beliefs, and the shortcomings in our abilities to construct accurate models of ‘reality’. This creates all kinds of difficulties for people in agreeing about which beliefs are true and therefore how we should relate to each other in peaceful and productive ways.


If we are now going on to construct artificial intelligences and robots that we interact with, and that have behaviours that impact the world, we want to be pretty sure that the beliefs a robot develops still provide a basis for understanding its behaviour.


Unfortunately, every one of the ‘ways of knowing’ is subject to error. We can again go through them one by one and look at the pitfalls.

Sensory perception: We only have to look at the vast body of research on visual illusion (e.g. see ‘Representations of Reality – Part 1’) to appreciate that our senses are often fooled. Here are some examples related to colour vision:

Youtube Video, Optical illusions show how we see | Beau Lotto,TED, October 2009, 18:59 minutes

Furthermore, our perceptions are heavily guided by what we pay attention to, meaning that we can miss all sorts of significant and even life-threatening information in our environment. Would a robot be similarly misled by its sensory inputs? It’s difficult to predict whether a robot would be subject to sensory illusions, and this might depend on the precise engineering of the input devices, but almost certainly a robot would have to be selective in what input it attended to. Like people, there could be a massive volume of raw sensory input and every stage of processing from there on would contain an element of selection and interpretation. Even differences in what input devices are available (for vision, sound, touch or even super-human senses like perception of non-visual parts of the electromagnetic spectrum) will create a sensory environment (referred to as the ‘umwelt’ or ‘merkwelt’ in ethology) that could be quite at variance with human perceptions of the world.

YouTube Video, What is MERKWELT? What does MERKWELT mean? MERKWELT meaning, definition & explanation, The Audiopedia, July 2017, 1:38 minutes


Memory: The fallibility of human memory is well documented. See, for example, ‘The Story of Your Life’, especially the work done by Elizabeth Loftus on the reliability of memory. A robot, however, could in principle, given sufficient storage capacity, maintain a perfect and stable record of all its inputs. This is at variance with the human experience but could potentially mean that memory per se was more accurate, albeit that it would be subject to variance in what input was stored and the mechanisms of retrieval and processing.


Intuition and reason: This is the area where some of the greatest gains (and surprises) in understanding have been made in recent years. Much of this progress is reported in the work of Daniel Kahneman that is cited many times in these writings. Errors and biases in both intuition (system 1 thinking) and reason (system 2 thinking) are now very well documented. A long list of cognitive biases can be found at:

https://en.wikipedia.org/wiki/List_of_cognitive_biases

Would a robot be subject to the same type of biases? It is already established that many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot’s processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
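
A toy illustration of how this happens, using invented data and plain Python rather than any real machine-learning library: a ‘model’ that simply learns acceptance rates from hypothetical historical hiring records will reproduce whatever bias those records contain.

```python
# Toy illustration: a 'model' that learns hire rates from hypothetical
# historical records will reproduce whatever bias those records contain.
from collections import defaultdict

historical_records = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
]

counts = defaultdict(lambda: {"hired": 0, "total": 0})
for record in historical_records:
    counts[record["group"]]["total"] += 1
    counts[record["group"]]["hired"] += int(record["hired"])

# The learned 'policy' simply mirrors the historical rates (75% vs 25%).
for group, c in sorted(counts.items()):
    print(f"Predicted hire rate for group {group}: {c['hired'] / c['total']:.0%}")
```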

Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes

As for reasoning, there is some hope that the robustness of proofs that can be achieved computationally may save the artificial intelligence or robot from at least some of the biases of system 2 thinking.


Emotion: Biases in people due to emotional reactions are commonplace. See, for example:

Youtube Video, Unconscious Emotional Influences on Decision Making, The Rational Channel, February 2017, 8:56 minutes

However, it is also the case that emotions are crucial in decision-making. Emotions often provide the criteria and motivation on which decisions are made and without them, people can be severely impaired in effective decision-making. Also, emotions provide at least one mechanism for approaching the subject of ethics in decision-making.

Youtube Video, When Emotions Make Better Decisions – Antonio Damasio, FORA.tv, August 2009, 3:22 minutes

Can robots have emotions? Will robots need emotions to make effective decisions? Will emotions bias or impair a robot’s decision-making? These are big questions and are only touched on here, but briefly, there is no reason why emotions cannot be simulated computationally, although we can never know if an artificial computational device will have the subjective experience of emotion (or thought). Probably some simulation of emotion will be necessary for robot decision-making to align with human values (e.g. empathy) and, yes, a side-effect of this may well be to introduce bias into decision-making.

For a selection of BBC programmes on emotions see:
http://www.bbc.co.uk/programmes/topics/Emotions?page=1


Imagination: While it doesn’t make much sense to talk about ‘error’ when it comes to imagination, we might easily make value-judgments about what types of imagination might be encouraged and what might be discouraged. Leaving aside debates about how, say, excessive exposure to violent video games might affect imagination in people, we can at least speculate as to what might or should go on in the imagination of a robot as it searches through or creates new models to help predict the impacts of its own and others’ behaviours.

A big issue has arisen as to how an artificial intelligence can explain its decision-making to people. While AI based on symbolic reasoning can potentially offer a trace describing the steps it took to arrive at a conclusion, AIs based on machine learning would be able to say little more than ‘I recognised the pattern as corresponding to so and so’, which to a person is not very explanatory. It turns out that even human experts are often unable to provide coherent accounts of their decision-making, even when they are accurate.

Having an AI or robot account for its decision-making in a way understandable to people is a problem that I will address in later analysis of the human operating system and, I hope, provide a mechanism that bridges between machine learning and more symbolic approaches.


Faith: It is often said that discussing faith and religion is one of the easiest ways to lose friends. Any belief based on faith is regarded as true by definition, and any attempt to bring evidence to refute it stands a good chance of being regarded as an insult. Yet people have different beliefs based on faith and they cannot all be right. This not only creates a problem for people, who will fight wars over it, but it is also a significant problem for the design of AIs and robots. Do we plug in the Muslim or the Christian ethics module, or leave it out altogether? How do we build values and ethical principles into robots anyway, or will they be an emergent property of their deep learning algorithms? Whatever the answer, it is apparent that quite a lot can go badly wrong if we do not understand how to endow computational devices with this ‘way of knowing’.


Language: As observed above, this is a catch-all for all indirect ‘ways of knowing’ communicated to people through media, teaching, books or any other form of communication. We only have to consider world wars and other genocides to appreciate that not everything communicated by other people is believable or ethical. People (and organizations) communicate erroneous information and can deliberately lie, mislead and deceive.

We strongly tend to believe information that comes from the people around us, our friends and associates, those people that form part of our sub-culture or in-group. We trust these sources for no other reason than we are familiar with them. These social systems often form a mutually supporting belief system, whether or not it is grounded in any direct evidence.

Youtube Video, The Psychology of Facts: How Do Humans (mis)Trust Information?, YaleCampus, January 2017

Taking on trust the beliefs of others that form part of our mutually supporting social bubble is a ‘way of knowing’ that is highly error-prone. This is especially the case when combined with other ‘ways of knowing’, such as faith, that in their nature cannot be validated. Will robot communities, able to talk to each other instantaneously and ‘telepathically’ over wireless connections, also be prone to the bias of groupthink?


The validation of beliefs

So, there are multiple ways in which we come to know or believe things. As Descartes argued, no knowledge is certain (see ‘It’s Like This’). There are only beliefs, albeit that we can be more sure of some than others, normally by virtue of their consistency with other beliefs. Also, we note that our beliefs are highly vulnerable to error. Any robot operating system that mimics humans will also need to draw on the many different ‘ways of knowing’, including a basic set of assumptions that it takes to be true without necessarily any supporting evidence (its ‘faith’, if you like). There will also need to be many precautions against AIs and robots developing erroneous or otherwise unacceptable beliefs and basing their behaviours on these.

There is a mechanism by which we try to reconcile differences between knowledge coming from different sources, or contradictory knowledge coming from the same source. Most people seem to be able to tolerate a fair degree of contradiction or ambiguity about all sorts of things, including the fundamental questions of life.

Youtube Video, Defining Ambiguity, Corey Anton, October 2009, 9:52 minutes

We can hold and work with knowledge that is inconsistent for long periods of time, but nevertheless there is a drive to seek consistency.

In the description of the human operating system, it would seem that there are many ways in which we establish what we believe and what beliefs we will recruit to the solving of any particular problem. Also, the many sources of knowledge may be inconsistent or contradictory. When we see inconsistencies in others we take this as evidence that we should doubt them and trust them less.

Youtube Video, Why Everyone (Else) is a Hypocrite, The RSA, April 2011, 17:13 minutes

However, there is, at least, a strong tendency in most people to establish consistency between beliefs (or between beliefs and behaviours), and to account for inconsistencies. The only problem is that we are often prone to achieve consistency by abandoning sound evidence-based beliefs in favour of strongly held beliefs based on faith or our need to protect our sense of self-worth.

Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes

From this analysis we can see that building AIs and robots is fraught with problems. The human operating system has evolved to survive, not to be rational or hold high ethical values. If we just blunder into building AIs and robots based on the human operating system we can potentially make all sorts of mistakes and give artificial agents power and autonomy without understanding how their beliefs will develop and the consequences that might have for people.

Fortunately there are some precautions we can take. There are ways of thinking that have been developed to counter the many biases that people have by default. Science is one method that aims to establish the best explanations based on current knowledge and the principle of simplicity. Also, critical thinking has been taught since Aristotle and fortunately many courses have been developed to spread knowledge about how to assess claims and their supporting arguments.

Youtube Video, Critical Thinking: Issues, Claims, Arguments, fayettevillestatenc, January 2011

Implications

To summarise:

Sensory perception – The robot’s ‘umwelt’ (what it can sense) may well differ from that of people, even to the extent that the robot can have super-human senses such as infra-red / x-ray vision, super-sensitive hearing and smell etc. We may not even know what its perceptual world is like. It may perceive things we cannot and miss things we find obvious.

Memory – human memory is remarkably fallible. It is not so much a recording, as a reconstruction based on clues, and influenced by previously encountered patterns and current intentions. Given sufficient storage capacity, robots may be able to maintain memories as accurate recording of the states of their sensory inputs. However, they may be subject to similar constraints and biases as people in the way that memories are retrieved and used to drive decision-making and behaviour.

Intuition – if the robot’s pattern-matching capabilities are based on machine learning from historical training sets then bias will be built into its basic processes. Alternatively, if the robot is left to develop from its own experience then, as with people, great care has to be taken to ensure its early experience does not lead to maladaptive behaviours (i.e. behaviours not acceptable to the people around it).

Reason – through the use of mathematical and logical proofs, robots may well have the capacity to reason with far greater ability than people. They can potentially spot (and resolve) inconsistencies arising out of different ‘ways of knowing’ with far greater adeptness than people. This may create a quite different balance between how robots make decisions and how people do using emotion and reason in tandem.

Emotion – human emotions are general states that arise in response to both internal and external events and provide both the motivation and the criteria on which decisions are made. In a robot, emerging global states could also potentially act to control decision-making. Both people, and potentially robots, can develop the capacity to explicitly recognise and control these global states (e.g. as when suppressing anger). This ability to reflect, and to cause changes in perspective and behaviour, is a kind of feedback loop that is inherently unpredictable. Not having sufficient understanding to predict how either people or robots will react under particular circumstances creates significant uncertainty.

Imagination – much the same argument about predictability can be made about imagination. Who knows where either a person’s or a robot’s imagination may take them? Chess computers out-performed human players because of their capacity to reason in depth about the outcomes of every move, not because they used pattern-matching based on machine learning (although it seems likely that this approach will have been tried and succeeded by now). Robots can far exceed human capacities to reason through and model future states. A combination of brute force computing and heuristics to guide search, may have far-reaching consequences for a robot’s ability to model the world and predict future outcomes, and may far exceed that of people.

Faith – faith is axiomatic for people and might also be for robots. People can change their faith (especially in a religious, political or ethical sense) but more likely, when confronted with contradictory evidence or sufficient need (i.e. to align with a partner’s faith), people will either ignore the evidence or find reasons to discount it. This can lead to multiple interpretations of the same basic axioms, in the same way as there are many religious denominations and many interpretations of key texts within these. In robots, Asimov’s three laws of robotics would equate to their faith. However, if robots used similar mechanisms to people (e.g. cognitive dissonance) to resolve conflicting beliefs, then in the same way as God’s will can be used to justify any behaviour, a robot may be able to construct a rationale for any behaviour whatever its axioms. There would be no guarantee that a robot would obey its own axiomatic laws.

Communication – The term ‘language’ is better labelled ‘communication’ in order to make it more apparent that it extends to all methods by which we ‘come to know’ from sources outside ourselves. Since communication of knowledge from others is not direct experience, it is effectively taken on trust. In one sense it is a matter of faith. However, the degree of consistency across external sources, between what is communicated (i.e. that a teacher or TV will reinforce what a parent has said, etc.), and between what is communicated and what is directly observed (for example, that a person does what he says he will do), will reveal some sources as more believable than others. Also we appeal to motive as a method of assessing degree of trust. People are notoriously influenced by the norms, opinions and behaviours of their own reference groups. Robots, with their potential for high-bandwidth communication, could in principle behave with the same psychology of the crowd as humans, only much more rapidly and ‘single-mindedly’. It is not difficult to see how the Star Trek image of the Borg, acting as one consciousness, could come about.

Other Ways of Knowing

It is worth considering just a few of the many other ‘ways of knowing’ not covered above, partly because some of them might help mitigate some of the risks of human ‘ways of knowing’.

Science – Science has evolved methods that are deliberately designed to create impartial, robust and consistent models and explanations of the world. If we want robots to create accurate models, then an appeal to scientific method is one approach. In science, patterns are observed, hypotheses are formulated to account for these patterns, and the hypotheses are then tested as impartially as possible. Science also seeks consistency by reconciling disparate findings into coherent overall theories. While we may want robots to use scientific methods in their reasoning, we may want to ensure that robots do not perform experiments in the real world simply for the sake of making their own discoveries. An image of concentration camp scientists comes to mind. Nevertheless, in many small ways robots will need to be empirical rather than theoretical in order to operate at all.

Argument – Just like people, robots of any complexity will encounter ambiguity and inconsistencies. These will be inconsistencies between expectation and actuality, between data from one way of knowing and another (e.g. between reason and faith, or between perception and imagination etc.), or between a current state and a goal state. The mechanisms by which these inconsistencies are resolved will be crucial. The formulation of claims; the identification, gathering and marshalling of evidence; the assessment of the relevance of evidence; and the weighing of the evidence, are all processes akin to science but can cut across many ‘ways of knowing’ as an aid to decision making. Also, this approach may help provide explanations of a robot’s behaviour that would be understandable to people and thereby help bridge the gap between opaque mechanisms, such as pattern matching, and what people will accept as valid explanations.
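One illustrative way of representing this (the data structure and weights below are assumptions made for the example, not part of the original argument) is to record each piece of evidence with its relevance and the reliability of its source, and then weigh the evidence for and against a claim:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str
    supports: bool        # does it support or undermine the claim?
    relevance: float      # 0..1 – how relevant is it to the claim?
    reliability: float    # 0..1 – how trustworthy is its source?

@dataclass
class Claim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)

    def weighed_support(self) -> float:
        """Net support: positive favours the claim, negative counts against it."""
        return sum((1 if e.supports else -1) * e.relevance * e.reliability
                   for e in self.evidence)

claim = Claim("The corridor is blocked")
claim.evidence.append(Evidence("camera shows an obstacle", True, 0.9, 0.8))
claim.evidence.append(Evidence("map says corridor is clear", False, 0.7, 0.5))
print(round(claim.weighed_support(), 2))   # 0.37 – on balance, the evidence supports the claim

A trace of such weighings is also the kind of record that could later be turned into the human-readable explanations mentioned above.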

Meditation – Meditation is a place-holder for the many ways in which altered states of consciousness can lead to new knowledge. Dreaming, for example, is another altered state that may lead to new hypotheses and models based on novel combinations of elements that would not otherwise have been brought together. People certainly have these altered states of consciousness. Could there be an equivalent in the robot, and would we want robots to indulge in such extreme imaginative states where we would have no idea what they might consist of? This is not necessarily to attribute consciousness to robots, which is a separate, and probably metaphysical, question.

Theory of mind – For any autonomous agent with its own beliefs and intentions, including a robot, it is crucial to its survival to have some notion of the intentions of other autonomous agents, especially when they might be a direct threat to that survival. People have sophisticated but highly biased and error-prone mechanisms for modelling the intentions of others. These mechanisms are particularly alert for any sign of threat and, as a proven survival strategy, tend to assume threat even when none is present. The people that did not do this died out. Work in robotics already recognizes that, to be useful, robots have to cooperate with people, and this requires some modelling of their intentions. As the video below illustrates, the modelling of others’ intentions is inherently complex because it is recursive.

YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes
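To show what ‘recursive’ means here, the following toy sketch (an illustration assumed for this post, not something taken from the video) nests one agent’s model of another agent inside the other’s model of the first, giving the familiar ‘I think that you think that I think…’ structure:

from typing import Optional

class AgentModel:
    """A (possibly nested) model of what an agent believes about another agent."""
    def __init__(self, name: str, model_of_other: Optional["AgentModel"] = None):
        self.name = name
        self.model_of_other = model_of_other   # "what I think you think..."

    def describe(self) -> str:
        if self.model_of_other is None:
            return self.name
        return f"{self.name} believes that ({self.model_of_other.describe()})"

# Three orders of intentionality: the robot models the person modelling the robot.
robot_in_persons_head = AgentModel("the robot")
person = AgentModel("the person", model_of_other=robot_in_persons_head)
robot = AgentModel("the robot", model_of_other=person)
print(robot.describe())
# the robot believes that (the person believes that (the robot))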

If there is a conclusion to this analysis of ‘ways of knowing’ it is that creating intelligent, autonomous mechanisms, such as robots and AIs, will have inherently unpredictable consequences, and that, because the human operating system is so highly error-prone and subject to bias, we should not necessarily build them in our own image.

– It’s like this

We are all deluded. And for the most part we don’t know it. We often feel as though we have control over our own decisions and destiny, but how true is that? It’s a bit like what the US Secretary of Defense, Donald Rumsfeld, famously said in February 2002 about the ‘known knowns’, the ‘known unknowns’ and the ‘unknown unknowns’.

YouTube Video, Donald Rumsfeld Unknown Unknowns !, Ali, August 2009, 34 seconds


 

The significance for ROBOT ETHICS: If people can only act on the basis of what they know, then it is easy to see the implications for artificial Autonomous Intelligent Systems (A/ISs) like robots, which ‘know’ so much less. They may act with the same confidence as people, who have a bias towards thinking that what they know, and their interpretation of the world, is the only way to see it. Understanding the ‘goggles’ through which people see the world, how they learn, how they classify, how they form concepts and how they validate and communicate knowledge, is fundamental to embedding ethical self-regulation into A/ISs.

 


How can a brain that is deluded even get an inkling that it is? For the most part, the individual finds it very difficult. Interestingly, it is often those who are most confident that they are right who are most wrong (and, dangerously, whom we most trust). The 2002 Nobel Prize winner Daniel Kahneman has spent a lifetime studying the systematic biases in our thinking. Here is what he says about confidence:

YouTube Video, Daniel Kahneman: The Trouble with Confidence, Big Think, February 2012, 2:56 minutes

The fact is that, when it comes to our own interpretations of the world, there is very little that either you or I can absolutely know, as demonstrated by René Descartes in 1637. It has long been known that we have deficiencies in our ability to understand and interpret the world, and indeed, it can be argued that the whole system of education is motivated by the need to help individuals make more informed and more rational decisions (although it can equally be argued that education, and training in particular, is a sausage factory in the service of employers whose interests may not align with those of the individual).


 

The significance for ROBOT ETHICS: Whilst people may have some idea that there are things they do not know, this is generally untrue of most computer programs. Young children start to develop ethical ideas (e.g. a sense of fairness) from an early age. Then it takes years of schooling and good parenting to get to the point where, as an adult, the law assumes you have full responsibility for your actions. This highlights the huge gap between an adult human’s understanding of ethics and what A/ISs are likely to understand for the foreseeable future.

 


First Principles

The debate about whether we should act by reason or by our intuitions and emotions is not new. The classic work on this is Kant’s ‘Critique of Pure Reason’ published in 1781. This is a masterpiece of epistemological analysis covering science, mathematics, the psychology of mind and belief based on faith and emotion. Kant distinguishes between truth by definition, truth by inference and truth by faith, setting out the main strands of debate for centuries to come. Here is a short, clear presentation of this work.

Introduction to Kant’s Critique of Pure Reason (Part 1 of 4), teach philosophy, September 2013, 4:52 minutes


Beliefs

From an individual’s point of view, by a process of cross-validation between different sources of evidence (people we trust, the media and society generally, our own reasoned thinking, sometimes scientific research, and our feelings), we are continuously challenged to construct a consistent view about the world and about ourselves. We feel a need to create at least some kind of semi-coherent account. It is a primary mechanism for reducing anxiety. It keeps us orientated and safe. We need to account for the world personally, and in this sense we are all ‘personal’ scientists, sifting the evidence and coming to our own conclusions. We also need to account for it as a society, which is why we engage in science and research to build a robust body of knowledge to guide us.

George Kelly, in 1955, set out ‘personal construct theory’ to describe this from the perspective of the individual – see, for example, this straightforward account of constructivism, which also, interestingly, proposes how to reconcile it with Christianity (a belief system based on an entirely different premise, methodology and pedigree):

 

But for the most part there are inconsistencies – between what we thought would happen and what actually did happen, between how we felt and how we thought, between how we thought and what we did, between how we thought somebody would react and how they did react, between our theories about the world and the evidence. Some of the time things are pretty much what we expect, but almost as frequently things don’t hang together, they just don’t add up. This drives us on a continuous search for patterns and consistency. We need to make sense of it all:

YouTube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes

 

But it turns out that really, as Kahneman demonstrates, we are not particularly good scientists after all. Yes, we have to grapple with the problems of interpreting evidence. Yes, we have to try to understand the world in order to reduce our own anxieties and make it a safer place. But, no, we do not do this particularly systematically or rationally. We are lazy, and we are also as much artists as we are scientists. In fact, what we are is ‘storytellers’. We make up stories about how the world works – for ourselves and for others.


 

The significance for ROBOT ETHICS: The implication for A/ISs is that they must learn to see the world in a manner that is similar to (or at least understandable by) the people around them. They must also have mechanisms to deal with ambiguous inputs and uncertain knowledge, because not much is straightforward when it comes to processing at the abstract level of ethics. Dealing with contradictory evidence by denial, forgetting and ignoring, as people often do, may not be the way we would like A/ISs to deal with ethical issues.

 


Stories

Sifting evidence is not the only way that we come to ‘know’. There is another method that, in many ways, is a lot more efficient and used just as often. This is to believe what somebody else says. So instead of having to understand and reconcile all the evidence yourself, you can, as it were, delegate the responsibility to somebody you trust. This could be an expert, or a friend, or a God. After all, what does it matter whether what you (or anybody else) believe is true or not, so long as your needs are being met? If somebody (or something) repeatedly comes up with the goods, you learn to trust them, and when you trust, you can breathe a sigh of relief – you no longer have to make the effort to evaluate the evidence yourself. The source of information is often just as important as the information itself. Despite the inconsistencies, we believe the stories of those we trust, and if others trust us, they believe our stories.

Stories provide the explanations for what has happened, and stories help us understand and predict what will happen. Our anxiety is most relieved by ‘a good story’. And while the story needs to have some resemblance to the evidence, and, as in court, can be challenged and cross-examined, what seems to matter most is that it is a ‘good’ story. And to be a ‘good’ story it must be interesting, revealing, surprising and challenging. Its consistency is just one factor. In fact, there can be many different stories, or accounts, of precisely the same incident or event – each account from a different perspective; interpreting, weighing and presenting the evidence from a different viewpoint or through a different value system. The ‘truth’ is not just how well the story accounts for the evidence but also lies in a correspondence between the interpretive framework of the listener and that of the teller:

YouTube Video, The danger of a single story | Chimamanda Ngozi Adichie, TED, October 2009, 19:16 minutes

Both as individuals and as societies, we often deny, gloss over and suppress the inconsistencies. They can be conveniently forgotten or repressed long enough for something else to demand our attention and preoccupy us. But sometimes, despite being suppressed for the sake of a ‘better’ story (often one that better reflects the biases in our own value system), the inconsistencies and the evidence about ourselves and the human condition fight back. Inconsistencies can re-emerge to create nagging doubts, and over time we start to wonder – is our story really true?


 

The significance for ROBOT ETHICS: Just like people, A/ISs will have to learn whom to trust, how to identify and resolve inconsistencies in belief, and how to construct a variety of accounts of the world and of their own decision-making processes, in order to explain themselves and generally communicate in forms that are understandable to people. As in human dialogue, these accounts will need to bring out certain facets of the A/IS’s own beliefs, and afford certain interpretations, depending on the intent of the A/IS and taking into account a knowledge of the person or people it is in dialogue with. Unlike in human dialogue, the intent of the A/IS must be to enhance the wellbeing of the people it serves (except where those people’s intent is malicious with respect to other people), and to communicate transparently with this intent in mind.

 


Some Epistemological Assumptions

In these blog postings, I try not to take for granted any particular story about how we are and how we relate to each other. What really lies behind our motivations, decisions and choices? Is it the story that classical economists tell us about rational people in a world of perfect information? Is it the story neuroscientists tell us about how the brain works? Is it the story about the constant struggle between the id and the super-ego told to us by Freud? Is it the story that the advertising industry tells us about what we need for a more fulfilled life? Or is it the story that cognitive psychologists tell us about how we process information? Which account tells the best story? Can these different accounts be reconciled?

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us as they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.
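As a highly simplified sketch (the thresholds and strategies are assumptions made for illustration, not a model proposed here), the loop described above – notice a contradiction, decide whether it matters, and pick a way of resolving it – could look something like this:

import random

def handle_contradiction(expectation: str, observation: str, importance: float) -> str:
    """Return how a mismatch between expectation and observation gets dealt with."""
    if expectation == observation:
        return "no conflict"
    if importance < 0.3:
        return "ignore"     # the contradiction does not matter enough to act on
    # When it matters, pick one of the resolution mechanisms mentioned above.
    return random.choice(["revise belief", "discount the evidence", "gather more evidence"])

# Example: an expectation bound up with a current intention conflicts with what is seen.
print(handle_contradiction("corridor is clear", "obstacle ahead", importance=0.8))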


 

The significance for ROBOT ETHICS: To embed ethical self-regulation in artificial Autonomous, Intelligent Systems (A/ISs) will require an understanding of how people learn, interpret, reflect and act on the world, and may require a similar decision-making architecture. This is partly for the A/IS’s own ‘operating system’, but also so that it can model how the people around it operate and so engage with them ethically and effectively.

 


This blog post, ‘It’s Like This’, sets the epistemological framework for what follows in later posts. It sets out the underlying assumptions about how we know, justify and explain what we know – both as individuals and in society.