Category Archives: Human Operating System (HOS)
– Robots & ToM 2
March 18, 2020
Do Robots Need Theory of Mind? Part 2
Why Robots might need Theory of Mind (ToM)
Existential Risk and the AI Alignment Problem
Russell (2019) argues that we have been thinking about building artificial intelligence (AI) systems the wrong way. Since its inception, AI has attempted to build systems that can achieve ‘their own’ goals, albeit that we might give them those goals in the first instance. Instead, he says, we should be building AIs that understand ‘the preference structure’ that a person has and attempt to satisfy goals within the constraints of that preference structure.
In this way, the AI will be able to understand that acting to achieve one goal (e.g. getting a coffee) may interact or interfere with other preferences, goals or constraints (e.g. not knocking someone out of the way in the process) and thereby moderate its behaviour. An AI needs to understand that a goal is not there to be achieved ‘at all cost’. Instead it should be achieved taking into account many other preferences and priorities that might moderate it. Russell argues that if we think of building AIs in this way, we may be able to avoid the existential risk that superhuman AIs will eventually take over, and either deliberately or inadvertently wipe out humanity.
This is an example of what AI researchers have termed 'the AI alignment problem': the risk that, having built super-intelligent machines, we find ourselves unable to control them, with potentially existential consequences for humanity. Nick Bostrom (2014) characterises the threat with the example of an AI given the goal of producing paperclips that takes the goal so literally that, in its single-minded pursuit of more raw materials and with no appreciation of when to stop, it destroys humanity. Several other researchers have addressed the AI alignment problem (mainly in terms of laws, regulations and social rules), including Taylor et al. (2017), Hadfield-Menell & Hadfield (2019), Vamplew et al. (2018), and Hadfield-Menell, Andrus & Hadfield (2019).
Russell (2019) goes on to describe how an AI should always have some level of uncertainty about what people want. Such uncertainty would put a check on the single-minded execution of a goal at all cost. It would drive a need for the AI to keep monitoring and maintaining its model of what a person might want at any point in time. It would require the AI to keep checking that what it was doing was ‘on-track’ or ‘aligned’ with a person’s whole preference structure. So, if, for example, you had instructed your self-driving car to take you to the airport and you received a message during the trip that your child had been in a road accident, the AI might recognise this as significant, and check whether you wanted to change your plans.
Russell arrives at this position from addressing the problem of existential risk; it is a proposed solution to the AI alignment problem. Working within this frame of reference, he proposes solutions like 'Cooperative Inverse Reinforcement Learning' (Malik et al. 2018), whereby the Autonomous Intelligent System (AIS) attempts to infer the preference structure of a person from observations of their behaviour. This, indeed, seems a sensible approach.
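To make this concrete, here is a minimal, hypothetical sketch of the underlying idea (much simpler than the CIRL formulation in Malik et al. 2018): maintain a posterior over candidate preference profiles, update it from observed choices using a Boltzmann-rational choice model, and ask for clarification rather than act while the posterior is still too uncertain. The profiles, actions and thresholds are all invented for illustration.

```python
import math

# Toy Bayesian sketch, loosely inspired by (but far simpler than) CIRL:
# infer which candidate preference profile a person holds from the
# actions they are observed to choose.

ACTIONS = ["fetch_coffee_fast", "fetch_coffee_politely", "do_nothing"]

# Candidate preference profiles: utility of each action to the person.
PROFILES = {
    "speed_matters":    {"fetch_coffee_fast": 1.0, "fetch_coffee_politely": 0.6, "do_nothing": 0.0},
    "courtesy_matters": {"fetch_coffee_fast": 0.2, "fetch_coffee_politely": 1.0, "do_nothing": 0.0},
}

def likelihood(action, profile, rationality=3.0):
    """Probability the person chooses `action` under `profile`
    (softmax / Boltzmann-rational choice model)."""
    utils = PROFILES[profile]
    z = sum(math.exp(rationality * u) for u in utils.values())
    return math.exp(rationality * utils[action]) / z

def update(posterior, observed_action):
    """One Bayesian update of the posterior over profiles."""
    new = {p: posterior[p] * likelihood(observed_action, p) for p in posterior}
    total = sum(new.values())
    return {p: v / total for p, v in new.items()}

def entropy(posterior):
    return -sum(p * math.log(p) for p in posterior.values() if p > 0)

posterior = {"speed_matters": 0.5, "courtesy_matters": 0.5}

# Observe the person choosing politely twice; update beliefs each time.
for observed in ["fetch_coffee_politely", "fetch_coffee_politely"]:
    posterior = update(posterior, observed)
    print(posterior)

# Act only when reasonably sure; otherwise ask - the 'uncertainty as a
# check on single-minded goal pursuit' idea.
if entropy(posterior) > 0.4:
    print("Not sure what you prefer - may I check before acting?")
else:
    best = max(posterior, key=posterior.get)
    print("Acting under profile:", best)
```

The key design point is the final check: residual uncertainty about the person's preferences is what keeps the system deferential rather than single-minded.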
However, the exact mechanism by which an AIS coordinates its actions with a person or people may well depend on it being able to accurately infer people's mental states. Otherwise it might have to explicitly check (e.g. by asking) every few seconds whether what it was doing was acceptable, and it would need to 'read' when a person found its behaviour unacceptable (e.g. by noting the frown when it is about to hit somebody on its mission to get the coffee).
The AI alignment problem is precisely the problem that every person has when interacting with another human being. When interacting with somebody else we are unable to directly observe their internal mental states. We cannot know their preference structure, and we can only take on trust that their intentions are what they say they are. Their real intentions, beliefs, desires, values, and boundaries could, in principle, be anything. What we do is infer their intentions from their behaviours, including what they say (and what we understand from it). Intentions, beliefs, and preferences are all hidden variables that may be the underlying causes of behaviour, but because they are unobservable they can only be guessed at.
Russell takes this on board and understands that the alignment problem is one that exists between any two agents, human or artificial. He is saying that robots need to be equipped with similar mechanisms to those that people generally have. These are the mechanisms that can model human beliefs, preferences and intentions by making inferences from observations of behaviour. Fortunately, we are not discovering and inventing these mechanisms for the first time.
Alignment with What?
A potential problem with having an AIS infer, reason and act on its analysis of another person’s mental states is that it may not accurately predict the consequences of its own actions. An action designed to do good may, in fact, do harm. In addition to being mistaken about the direction of its effect on mental states (positive or negative) it may also be inaccurate about the extent. So, an act designed to please may have no effect, or an act that is not intended to cause either pleasure or displeasure may have an effect.
This is quite apart from all manner of other complications that we might describe as its 'policies'. Should, for example, an AIS always act to minimise harm and maximise a person's pleasure? How should an AIS react if a person consistently fails to take medication prescribed for their benefit? How should it trade off short- and longer-term benefits? How does an AIS reconcile differences between two or more people, between a person's legal obligations and their desires, or between the interests of a person and another organisation (a school, a company, their employer, the tax office and so on)?
In all these cases, the issue comes down to how the AIS evaluates its own choice of possible actions (or inaction) and which stakeholders it takes into account when performing this evaluation. Numerous guidelines have been produced in recent years to help guide developers of AI systems. The good news is that there is considerable agreement about the kinds of principles that apply – not contravening human rights, not doing harm, increasing wellbeing, transparency and explainability in how the AIS arrives at decisions, elimination of bias and discrimination, and clear accountability and responsibility for the AIS's decisions. The main mechanism for putting these principles into practice is the training and control (through guidelines, standards and law) of companies, designers and developers. Comparatively little has been proposed for the controls that might be embedded within the AIS itself, and even less about the principles and mechanisms that might be used to achieve this.
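As a rough illustration of what 'controls embedded within the AIS itself' might look like, here is a hedged sketch in which hard constraints filter out inadmissible actions before a weighted trade-off is made across stakeholders. All stakeholder weights, actions, scores and flags are invented for illustration; a real system would need far richer models, and justification for the weights themselves.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effects: dict  # benefit/harm estimates in [-1, 1] per stakeholder

STAKEHOLDER_WEIGHTS = {
    "patient": 0.5,        # the person being cared for
    "family": 0.2,
    "care_provider": 0.2,
    "society": 0.1,        # e.g. legal obligations, public health
}

HARD_CONSTRAINTS = {"violates_rights", "causes_serious_harm"}

candidates = [
    Action("remind_about_medication",
           {"patient": 0.6, "family": 0.3, "care_provider": 0.2, "society": 0.1}),
    Action("report_to_doctor_without_consent",
           {"patient": 0.4, "family": 0.4, "care_provider": 0.3, "society": 0.2}),
    Action("do_nothing",
           {"patient": -0.2, "family": -0.1, "care_provider": 0.0, "society": 0.0}),
]

# Flags a policy layer might attach to actions (here: a privacy concern).
flags = {"report_to_doctor_without_consent": {"violates_rights"}}

def admissible(action):
    """Hard constraints filter actions out before any trade-off is made."""
    return not (flags.get(action.name, set()) & HARD_CONSTRAINTS)

def score(action):
    """Weighted sum over stakeholders - one simple way to trade off interests."""
    return sum(STAKEHOLDER_WEIGHTS[s] * v for s, v in action.effects.items())

viable = [a for a in candidates if admissible(a)]
best = max(viable, key=score)
print("Chosen action:", best.name, "score:", round(score(best), 2))
```

Even this toy version makes the design questions visible: who sets the weights, which constraints are absolute, and who is accountable for both.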
We could turn to economics for models of preference and choice, but these models have been discredited by findings in the social sciences (e.g. prospect theory), and many would argue that the incentives encouraged by such models are precisely what has led to existential risks like nuclear arms races and climate change. We would therefore need to think very carefully before using these same models to drive the design of artificial intelligences, given their potential for adding yet another existential risk.
The existential risk discussed in relation to AISs has tended to focus on the fear that if an artificial intelligence is given autonomy to achieve its objectives without constraint, then it might do anything. Even simple systems can become unpredictable very quickly, and if a system is unpredictable it is out of control. In the anthropomorphic way characteristic of human beings, we project onto the AIS that it would be concerned about its own self-preservation, or that it would discover that self-preservation was a necessary pre-condition to attaining its goal(s). We further project that if it adopts the goal of self-preservation, then it might pursue this at all cost, putting its own self-preservation ahead of even that of its creators. There are some good reasons for these fears, because goals like self-preservation and the accumulation of resources are instrumental to the achievement of any other goal, and an AIS might easily reason that out (Bostrom 2012). There have been challenges to this line of reasoning, but that debate is not a central concern here. Rather, I am more concerned with whether an AIS can align with the goals of an individual using the same sorts of social cues that we all use in the informal ways in which we generally cooperate with each other.
If we are already concerned that the economic and political systems currently in place can have some undesirable consequences, like other existential risks and concentrations of wealth in the hands of a few, then the last thing we would want to do is build into AISs the same mechanisms for evaluating choices as those assumed by classical economic theory. In these posts, I look primarily to psychology (and sometimes philosophy) to provide evidence and analysis of how people make decisions in a social world, particularly one in which we are taking into account our beliefs about other people’s mental states. Whether this provides an answer to the alignment problem remains to be seen, but it is, at least, another perspective that may help us understand the types of control mechanisms we may need as the development of AIS proceeds at an ever increasing pace.
Cooperation and Collaboration
The paradigm in which robots act as slaves to their human masters is gradually being replaced by one in which robots and humans work collaboratively together to achieve some goal (Sheridan 2016). This applies both to individual human-robot interactions and to multi-robot teams (Rosenfeld et al. 2017). If robots, and AISs generally, could infer the mental states of the people around them when performing complex tasks, then this could potentially lead to more intuitive and efficient collaboration between the person and the machine. This requires trust on the part of the human that the robot will play its part in the interaction (Hancock et al. 2011).
As a step on the way, systems have been built in which robots collaborate with each other, without communication, to perform complex tasks using only visual cues (Gassner et al. 2017). Collaboration is especially useful in situations like care giving (Miyachi, Iga & Furuhata 2017) where giving explicit verbal instructions might be difficult (e.g. in cases of Alzheimer's or autism). Gray et al. (2005) proposed a system of action parsing and goal simulation whereby a robot might infer the goals and mental states of others in a collaborative task scenario.
Potential Benefits
Equipping AISs with the ability to recognise, infer and reason about the mental states of others could have some extraordinary advantages. Not only might we avoid existential risk to humanity (and could there be anything of greater significance) and make our interactions with robots and AISs generally easy and intuitive, but we could also be living alongside intelligent artefacts that have a robust capacity to carry out moral reasoning. Not only could they keep themselves in check, so that they made only justifiable moral decisions with respect to their own actions, but they might also help us adjudicate our own actions, offering fair, reasonable and justifiable remedies to human transgressions of the law and other social codes. They might become reliable and trustworthy helpers and companions, politely guiding us in solving currently intractable world problems, and protecting us from our own worst human biases, vices, and deficiencies. If they turned out to be better at moral reasoning than people, then, like wise philosophers, they could offer us considered advice to help us achieve our goals and deal with the dilemmas of everyday life.
However, there is much that stands in the way of achieving this utopian relationship with the intelligent artefacts we create, especially if we want an AIS to infer mental states in the same way a person might, by observation and perhaps asking questions. We are beginning to understand patterns of neuronal activity sufficiently well to infer some mental states. For example, Haynes et al. (2007) report being able to tell which of two choices a person is making by looking at neural activity. Elon Musk is creating 'Neural Lace' for such a purpose (Cuthbertson 2016), but could mental states be inferred using non-invasive approaches?
In particular, could we create AISs that could infer our mental states without inadvertently creating an even greater and more immediate existential risk? I will later argue that giving AISs theory of mind, without them having the same sort of controls on social behaviour that empathy gives people, could be a disaster that heightens existential risk in our very attempt to avoid it. In subsequent posts I first consider whether the artificial inferencing of human mental states is even a credible possibility.
References
Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3
Bostrom, N., (2014). Superintelligence: Paths, Dangers, Strategies (1st. ed.). Oxford University Press, Inc., USA.
Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu
Gassner, M., Cieslewski, T., & Scaramuzza, D. (2017). Dynamic collaboration without communication: Vision-based cable-suspended load transport with two quadrotors. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 5196-5202). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989609
Gray, J., Breazeal, C., Berlin, M., Brooks, A., & Lieberman, J. (2005). Action parsing and goal inference using self as simulator. In Proceedings – IEEE International Workshop on Robot and Human Interactive Communication (Vol. 2005, pp. 202–209). https://doi.org/10.1109/ROMAN.2005.1513780
Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 417–422). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314250
Hadfield-Menell, D., Andrus, M., & Hadfield, G. K. (2019). Legible normativity for AI alignment: The value of silly rules. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 115–121). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314258
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072
Malik, D., Palaniappan, M., Fisac, J., Hadfield-Menell, D., Russell, S., & Dragan, A. (2018). An efficient, generalized Bellman update for cooperative inverse reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm.
Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. In Procedia Computer Science (Vol. 112, pp. 1254–1262). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.08.078
Rosenfeld, A., Agmon, N., Maksimov, O., & Kraus, S. (2017). Intelligent agent supporting human-multi-robot team collaboration. Artificial Intelligence, 252, 211-231. https://doi.org/10.1016/j.artint.2017.08.005
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Allen Lane. ISBN-13: 978-0241335208.
Sheridan, T. B. (2016). Human-Robot Interaction: Status and Challenges. Human Factors, 58(4), 525-32. https://doi.org/10.1177/0018720816644364
Taylor, J., Yudkowsky, E., Lavictoire, P., & Critch, A. (2017). Alignment for Advanced Machine Learning Systems. Miri, 1–25. Retrieved from https://intelligence.org/files/AlignmentMachineLearning.pdf
Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6
– Robots & ToM
February 8, 2020
Do Robots Need Theory of Mind? – part 1
Robots, and Autonomous Intelligent Systems (AISs) generally, may need to model the mental states of the people they interact with. Russell (2019), for example, has argued that AISs need to understand the complex structures of preferences that people have in order to be able to trade off many human goals, and thereby avoid the problem of existential risk (Boyd & Wilson 2018) that might follow from an AIS with super-human intelligence doggedly pursuing a single goal. Others have pointed to the need for AISs to maintain models of people's intentions, knowledge, beliefs and preferences, in order that people and machines can interact cooperatively and efficiently (e.g. Lemaignan et al. 2017, Ben Amor et al. 2014, Trafton et al. 2005).
However, in addition to risks already well documented (e.g. Müller & Bostrom 2016) there are many potential dangers in having artificial intelligence systems closely observe human behaviour, infer human mental states, and then act on those inferences. Some of the potential problems that come to mind include:
- The risk that self-determining AISs will be built with only a limited capability to understand human mental states and preferences, and that humans will lose control of the technology (Meek et al. 2017, Russell 2019).
- The risk that the AIS will exhibit unfair biases in what it selects to observe, infer and act on (Osoba & Welser 2017).
- The risk that the AIS will use misleading information, make inaccurate observations and inferences, and base its actions on these (McFarland & McFarland 2015, Rapp et al. 2014).
- The risk that, even if the AIS observes and infers accurately, its actions will not align with what a person might do, or may have unintended consequences (Vamplew et al. 2018).
- The risk that an AIS will misuse its knowledge of a person's hidden mental states, resulting in either deliberate or inadvertent harm or criminal acts (Portnoff & Soupizet 2018).
- The risk that people's human rights and rights to privacy will be infringed because of the ability of AISs to observe, infer, reason about and record data that people have not given consent to and may not even know exists (OECD 2019).
- The risk that, if the AIS makes decisions based on unobservable mental states, any explanations of its actions based on them will be difficult to validate (Future of Life Institute 2017, Weld & Bansal 2018).
- The risk that the AIS will, in the interests of a global common good, correct for people's foibles, biases and dubious (unethical) acts, thereby taking away their autonomy (Timmers 2019).
- The risk that, using AISs, a few multinational companies and countries will collect so much data about people's explicit and inferred hidden preferences that power and wealth will become concentrated in even fewer hands (Zuboff 2019).
- The risk that corporations will rush to be the first to produce AISs that can observe, infer and reason about people's mental states, and in so doing will neglect to take safety precautions (Armstrong et al. 2016).
- The risk that, in acting out of some greater interest (i.e. the interests of society at large), an AIS will act to restrict the autonomy or dignity of the individual (Zardiashvili & Fosch-Villaronga 2020).
- The risk that an AIS would itself take unacceptable risks, based on inferred and uncertain mental states, that may cause a person or itself harm (Merritt et al. 2011).
Much has been written about the risks of AI, and in the last few years numerous ethical guidelines, principles and recommendations have been made, especially in relation to the regulation of the development of AISs (Floridi et. al. 2018). However, few of these have touched on the real risk that AISs may one day develop such that they can gain a good understanding of people’s unobservable mental states and act on them. We have already seen Facebook being used to target advertisements and persuasive messages on the basis of inferred political preferences (Isaak & Hanna 2018).
In future posts I look at the extent to which an AIS could potentially have the capability to infer other people's mental states. I touch on some of the advantages and dangers and identify some of the issues that may need to be thought through.
I argue that AISs generally (not only robots) may need not only to model people's mental states – known in the psychology literature as Theory of Mind, or ToM (Carlson et al. 2013) – but also to have some sort of emotional empathy. Neural nets have already been used to create algorithms that demonstrate some aspects of ToM (Rabinowitz et al. 2018). I explore the idea of building AISs with both ToM and some form of empathy, and the idea that unless we are able to equip AISs with a balance of control mechanisms we run the risk of creating AISs that have 'personality disorders' that we would want to avoid.
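Purely to illustrate the structure of the problem that such work addresses, here is a toy, non-neural sketch of the 'observer' idea: watch another agent act, fit a simple behavioural model from observation alone, and use it to predict the agent's next action. The situations and actions are invented, and conditional frequencies stand in for the learned network of Rabinowitz et al.'s approach.

```python
from collections import Counter, defaultdict

# Observed (situation, action) pairs from past episodes of another agent.
history = [
    ("door_closed", "open_door"),
    ("door_closed", "open_door"),
    ("door_open", "walk_through"),
    ("object_on_floor", "pick_up"),
    ("door_closed", "knock"),
]

# Build P(action | situation) from observation alone - the observer never
# sees the other agent's internal goals, only its behaviour.
model = defaultdict(Counter)
for situation, action in history:
    model[situation][action] += 1

def predict(situation):
    counts = model.get(situation)
    if not counts:
        return None, 0.0  # no basis for prediction - a cue to ask or observe more
    action, n = counts.most_common(1)[0]
    return action, n / sum(counts.values())

action, confidence = predict("door_closed")
print(f"Predicted action: {action} (confidence {confidence:.2f})")
```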
In making this case, I look at whether it is conceivable that we could build AISs that have both ToM and emotional empathy, and that if it were possible, how these two capacities would need to be integrated to provide an effective overall control mechanism. Such a control mechanism would involve both fast (but sometimes inaccurate) processes and slower (reflective and corrective) processes, similar to the distinctions Kahneman (Kahneman 2011) makes between system 1 and system 2 thinking. The architecture has the potential for the fine-grained integration of moral reasoning into the decision making of an AIS.
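A minimal sketch of what such a dual-process control loop might look like is given below; the observations, heuristics and thresholds are placeholders rather than a worked-out architecture.

```python
# Hedged sketch of a dual-process controller: a fast, cheap 'system 1'
# policy proposes an action, and a slower 'system 2' check can veto or
# revise it when the stakes are high.

def system1_propose(observation):
    """Fast pattern-matched response (cheap, sometimes wrong)."""
    lookup = {"person_frowning": "pause_and_check", "path_clear": "continue_task"}
    return lookup.get(observation, "continue_task")

def stakes(observation, action):
    """Rough estimate of how much could go wrong - placeholder heuristic."""
    return 0.9 if observation == "person_frowning" else 0.1

def system2_review(observation, proposed):
    """Slow deliberative check: simulate consequences, consult constraints.
    In a real system this might run a model of the person's mental state,
    check moral or legal constraints, or ask the person directly."""
    if proposed == "continue_task" and observation == "person_frowning":
        return "pause_and_check"
    return proposed

def decide(observation, review_threshold=0.5):
    proposed = system1_propose(observation)
    if stakes(observation, proposed) > review_threshold:
        return system2_review(observation, proposed)
    return proposed

print(decide("path_clear"))        # fast path: continue_task
print(decide("person_frowning"))   # escalated to the slow, reflective path
```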
What I hope to add to Russell’s (2019) analysis is a more detailed consideration of what is already known in the psychology literature about the general problem of inferring another agent’s intentions from their behaviour. This may help to join up some of the thinking in AI with some of the thinking in cognitive psychology in a very broad-brushed way such that the main structural relationships between the two might come more into focus.
Subscribe (top left) to follow future blog posts on this topic.
References
Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y
Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human-robot cooperation tasks. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 2831–2837). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2014.6907265
Boyd, M., & Wilson, N. (2018). Existential Risks. Policy Quarterly, 14(3). https://doi.org/10.26686/pq.v14i3.5105
Carlson, S. M., Koenig, M. A., & Harms, M. B. (2013). Theory of mind. Wiley Interdisciplinary Reviews: Cognitive Science, 4(4), 391–402. https://doi.org/10.1002/wcs.1232
Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu
Floridi, L., Cowls, J., Beltrametti, M. et al., (2018), AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 doi:10.1007/s11023-018-9482-5
Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. Future of Life, 1–23. Retrieved from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072
Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Lemaignan, S., Warnier, M., Sisbot, E. A., Clodic, A., & Alami, R. (2017). Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, 45–69. https://doi.org/10.1016/j.artint.2016.07.002
McFarland, D. A., & McFarland, H. R. (2015). Big Data and the danger of being precisely inaccurate. Big Data and Society. SAGE Publications Ltd. https://doi.org/10.1177/2053951715602495
Meek, T., Barham, H., Beltaif, N., Kaadoor, A., & Akhter, T. (2017). Managing the ethical and risk implications of rapid advances in artificial intelligence: A literature review. In PICMET 2016 – Portland International Conference on Management of Engineering and Technology: Technology Management For Social Innovation, Proceedings (pp. 682–693). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PICMET.2016.7806752
Merritt, T., Ong, C., Chuah, T. L., & McGee, K. (2011). Did you notice? Artificial team-mates take risks for players. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 338–349). https://doi.org/10.1007/978-3-642-23974-8_37
Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33
OECD. (2019). Recommendation of the Council on Artificial Intelligence. Oecd/Legal/0449. Retrieved from http://acts.oecd.org/Instruments/ShowInstrumentView.aspx?InstrumentID=219&InstrumentPID=215&Lang=en&Book=False
Osoba, O., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. https://doi.org/10.7249/rr1744
Portnoff, A. Y., & Soupizet, J. F. (2018). Artificial intelligence: Opportunities and risks. Futuribles: Analyse et Prospective, 2018-September(426), 5–26.
Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., & Botvinick, M. (2018). Machine Theory of mind. In 35th International Conference on Machine Learning, ICML 2018 (Vol. 10, pp. 6723–6738). International Machine Learning Society (IMLS).
Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate information. Memory and Cognition, 42(1), 11–26. https://doi.org/10.3758/s13421-013-0339-0
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Allen Lane. ISBN-13: 978-0241335208.
Timmers, P., (2019), Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds & Machines 29, 635–645 doi:10.1007/s11023-019-09508-4
Trafton, J. G., Cassimatis, N. L., Bugajska, M. D., Brock, D. P., Mintz, F. E., & Schultz, A. C. (2005). Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 35(4), 460–470. https://doi.org/10.1109/TSMCA.2005.850592
Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6
Weld, D. S., & Bansal, G. (2018). Intelligible artificial intelligence. ArXiv, 62(6), 70–79. https://doi.org/10.1145/3282486
Zardiashvili, L., & Fosch-Villaronga, E. (2020). "Oh, Dignity too?" Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds & Machines. https://doi.org/10.1007/s11023-019-09514-6
Zuboff, S. (2019). The Age of Surveillance Capitalism. Profile Books.
– Autonomy now
September 17, 2019
Making sense of a changing world
It's difficult to make sense of a fast-changing world. But could 'autonomy' be at the centre of it? I'll explain.
There are two main themes – people and technology. Ideas about autonomy are changing for both. For people, it is a matter of their relationship to employment, government and the many institutions of society. For technology, it is the introduction of autonomous intelligence in a wide range of systems including phone apps and all manner of automated decision making systems that affect every aspect of our lives. There is also the increasing inter-dependency between people and technology, both empowering and constraining. Questions of autonomy are at the heart of these changes.
There have been times in history when it has not occurred to people that they could be autonomous in the broad scope of their lives. They were born into a time and place where the control of their destiny was not their own concern. They were conditioned to know their place, accept it and stay in it. In first world democracies, autonomy is, perhaps, a luxury of the here and now. It may not necessarily stay that way.
My particular interest is the way in which we are giving autonomy to the things that we create – computer algorithms, artificial intelligence systems and robots. But it’s broader than that. We all want the freedom to pursue our own goals, to self-determine. We are told repeatedly by an industry concerned with self-development and achieving success, that we should ‘find our authentic self’ and pursue the values and goals that really matter to us.
However, we can only do this within an environment of constraints – physical constraints, resource constraints, psychological constraints and social constraints. It is the dynamic between the individual and their constraints that is in constant flux and that I am trying to examine.
What’s Trending in Autonomy?
There are two main trends – one towards decentralisation and one towards concentrations of wealth and power. This seems something of a contradiction. How can both be true and, if so, where is this going?
There is a long-term trend towards decentralisation. First we rejected the ancient gods as the controllers of nature. Much more recently we have started to question other sources of authority – governments, doctors, the church, the law, the mainstream media and many other institutions in society. As we as individuals have become more informed and empowered by technology, we have started to see the flaws in these 'authorities'.
I believe, along with many other commentators, that we are heading towards a world where autonomy is not just highly valued but is also more possible than it ever has been. As society becomes better educated, and as technology enables greater information sharing and flexibility, we can, and perhaps inevitably will, move towards a more decentralised society in which both human and artificial autonomous agents increasingly interact with each other. The interactions will become more frequent, more informed, more fine-grained and more local. The technological infrastructure for this continues to roll out at an ever-increasing pace. Soon we will have 5G communications facilitating almost instantaneous communication between ever more intelligent and powerful devices – smart phones, autonomous vehicles, information servers, and a host of smart devices.
On the other hand, there is clear evidence of increased concentrations of wealth and power. Although estimates vary, it seems that a greater proportion of the world's wealth is held by fewer and fewer people. Stories abound of fewer than eight people owning more than half the world's wealth. Economists like Thomas Piketty have documented the evidence for such a trend in detail.
There is clearly a tension between these trends. As power and wealth become more concentrated, manifesting in the form of surveillance capitalism (not ignoring surveillance by the state) and fake news, there is a fight back by individuals and other institutions.
Individuals increasingly recognise the intrusions on their privacy, and this is picked up (often belatedly) in legislation like GDPR and other moves to regulate. The checks and balances do work to modulate the dynamics of the struggle, but when they don't work, the accumulated frustration at the loss of human dignity can become political and violent. Let's take a closer look at autonomy.
Why do we need Autonomy?
We each have a biological imperative to survive. While we can count on others to some extent, ‘the buck stops’ with each of us as individuals. The more robust and resilient solutions are self-sufficiency and self-determination. It’s not a fail-safe but it takes out the risk that others might not be there for us all the time. It also appears to be a route to greater wellbeing. Learning, developing competence and mastery, being able to predict and hence increase the possibility that we can control, being less subject to constraints on our actions – all contribute to satisfaction, ultimately in the service of survival.
In the hierarchy of needs, having enough food, shelter, sleep and security releases some of your own resources. It provides the freedom to climb. Somewhere near the top of the hierarchy is what Maslow called self-actualisation – the discovery and expression of your authentic self. But unless you are exceptionally lucky, and find that your circumstances align perfectly with your authentic self, then a pre-requisite is to have freedom from the constraints that prevent you from getting there.
Interactions between people and machines
This is all the human side of autonomy – the bit that applies to us all. This is a world in which both people and artificial agents – computer algorithms, smart devices, robots etc. interact with each other. Interactions between people and people, machines and machines and people and machines are accelerating in both speed and frequency in order that each autonomous agent can achieve its own rapidly changing priorities and goals. There is nothing static or certain in this world. It is changing, ambiguous, and unpredictable.
Different autonomous agents have different goals and different value systems that drive them. For people these are different cultures, social norms and roles. For machines they relate to their different functions and circumstances in which they operate. For interactions to work smoothly there needs to be some stability in the protocols that regulate them. Autonomy may be a way into defining accountability and responsibility. It may lead us towards mechanisms for the justification and explanation of action. Neither machines nor people are very good at this, but autonomy may provide the key that unlocks our understanding of effective communication and protocols.
Still, that’s for later. Right now, this article is just focused on the concept of autonomy.
I hope you are convinced that this is an important and interesting subject. It is at the foundation of our relationships with each other and between people and the increasingly autonomous and intelligent agents we are creating.
Questions that need to be addressed
- What do we mean by autonomy?
- How do agents (people and machines) differ in the amount of autonomy they have?
- Can we measure autonomy?
- What examples of people, societies and artefacts can we think of that might help us understand what is and what is not autonomous?
- What do we mean by autonomy when we talk about artificial autonomous intelligence systems?
- Are the computer algorithms and robotic systems that we have today truly autonomous?
- What would it mean to build an artificial intelligence that was truly autonomous?
- What is the relationship between autonomy and morality?
- Can we be truly autonomous if we are constrained by ethical principles and social norms?
- If we want our intelligent artefacts to behave ethically, then how would we approach the design of those systems?
That’s quite a chunk of questions to get through. But they are all on the critical path to understanding how our own human autonomy and the autonomy that we build into artefacts, can relate to and engage with each other. They take us to a point where we can better understand the trade-offs every intelligent agent, be it human or artificial, has to make between the freedom to pursue its own goals and the constraints of living in a society of other intelligent agents.
It also reveals how, in people, the constraints of society are internalised: by adulthood they have become part of our internal control mechanisms. These internal controls have no absolute morality but reflect the circumstances and society in which we grow up. As our artefacts become increasingly intelligent, we may need to develop similar mechanisms for their socialisation.
Definitions of Autonomy
The following definitions are taken from the glossary of the IEEE publication called ‘Ethically Aligned Design’ (version 1). The glossary has several definitions from different perspectives:
Ordinary language: The ability of a person or artefact to govern itself including formation of intentions, goals, motivations, plans of action, and execution of those plans, with or without the assistance of other persons or systems.
Engineering: “Where an agent acts autonomously, it is not possible to hold any one else responsible for its actions. In so far as the agent’s actions were its own and stemmed from its own ends, others cannot be held responsible for them” (Sparrow 2007, 63).
Government: “we define local [government] autonomy conceptually as a system of local government in which local government units have an important role to play in the economy and the intergovernmental system, have discretion in determining what they will do without undue constraint from higher levels of government, and have the means or capacity to do so” (Wolman et al 2008, 4-5).
Ethics and Philosophy: “Put most simply, to be autonomous is to be one’s own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self” (Christman 2015).
Medical: “Two conditions are ordinarily required before a decision can be regarded as autonomous. The individual has to have the relevant internal capacities for self-government and has to be free from external constraints. In a medical context a decision is ordinarily regarded as autonomous where the individual has the capacity to make the relevant decision, has sufficient information to make the decision and does so voluntarily” (British Medical Association 2016).
More on autonomy later. Sign up to the blog if you want to be notified.
Meanwhile, a couple of videos.
The first has an interesting take on autonomy. Autonomy is not a matter of what you want, but what you want to want. The more reflective you are about what you want the more autonomous you are.
Youtube Video, What is Autonomy? (Personal and Political), Carneades.org, December 2018, 6:50 minutes
https://www.youtube.com/watch?v=z0uylpfirfM
The second is from a relatively new Youtube channel called ‘Rebel Wisdom’. It starts with the breakdown of trust in traditional media and moves on to themes of decentralisation.
Youtube Video, The War on Sensemaking, Daniel Schmachtenberger, Rebel Wisdom, August 2019, 1:48:49 hours
– It's All Broken, but we can fix it
March 20, 2019
Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it's all broken but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'Futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and Robot Ethics.
Moral Machines?
This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was that we would be saved by Artificial Intelligence and Robots. His argument: AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with this first premise (although I agree with pretty much everything else). First, assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an auto-pilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request, and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact; rather, it's that of the human designers. But I argue (see the right-hand panel of some blog pages on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.
The Unpredictable...
Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage". "Trains will never succeed because women were not designed to travel at more than 50 miles per hour".
We are so bad at prediction because we each grow up in our own unique situations and it's very difficult to see the world from outside our own box - when delayed on the M11 don't think you are in a traffic jam, you are the traffic jam! Prediction is partly difficult because technology is changing at an exponential rate. Once it took hundreds of years for a technology (say carpets) to be generally adopted. The internet only took a handful of years.
...But Possible
Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service') where all the pains of ownership are taken care of by somebody else.
AI and Robots will take all the boring, cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one too, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems.) Education is broken because it was designed for a 1950s world. It should be detached from politics, because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.
Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by the pharmaceutical industry, which now primarily exists to make money. It currently spends twice as much on marketing as it does on research and development. They are marketing companies, not drug companies.
While every company espouses innovation as one of its key values, for the most part it's just a platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.
Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).
We can fix it - yes we can
On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.
For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.
Some technological solutions include 'Blockchain', which Mark describes as 'double-entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
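For readers unfamiliar with why a blockchain can stand in for a trusted third party, here is a toy, self-contained illustration of the tamper-evidence property of a hash chain. It omits everything that makes a real blockchain distributed (consensus, signatures, networking), and the records are invented.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a record with the previous block's hash and hash the bundle."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
prev = "0" * 64
for record in ["Alice pays Bob 10", "Bob pays Carol 4"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                 # True
chain[0]["record"] = "Alice pays Bob 1000"
print(verify(chain))                 # False - tampering is detected
```

Anyone holding a copy of the chain can run the same check, which is why no single intermediary needs to be trusted to vouch for the record.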
Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.
I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org

– Ways of knowing (HOS 4)
February 25, 2018
How do we know what we know?
This article considers:
(1) the ways we come to believe what we think we know
(2) the many issues with the validation of our beliefs
(3) the implications for building artificial intelligence and robots based on the human operating system.
I recently came across a video (on the site http://www.theoryofknowledge.net) that identified the following ‘ways of knowing’:
- Sensory perception
- Memory
- Intuition
- Reason
- Emotion
- Imagination
- Faith
- Language
This list is mainly about the mechanisms or processes by which an individual acquires knowledge. It could be supplemented by other processes, for example 'meditation', 'science' or 'history', each of which provides its own set of approaches to generating new knowledge for both the individual and society as a whole. There are many different ways in which we come to formulate beliefs and understand the world.
Youtube Video, TOK Ways of Knowing EXPLAINED | Theory of Knowledge Advice, Ivy Lilia, October 2018, 6:16 minutes
In the spirit of working towards a description of the ‘human operating system’, it is interesting to consider how a robot or other Artificial Intelligence (AI), that was ‘running’ the human operating system, would draw on its knowledge and beliefs in order to solve a problem (e.g. resolve some inconsistency in its beliefs). This forces us to operationalize the process and define the control mechanism more precisely. I will work through the above list of ‘ways of knowing’ and illustrate how each might be used.
Let’s say that the robot is about to go and do some work outside and, for a variety of reasons, needs to know what the weather is like (e.g. in deciding whether to wear protective clothing, or how suitable the ground is for sowing seeds or digging up for some construction work, etc.).
First it might consult its senses. It might attend to its visual input and note the patterns of light and dark, compare these to known states, and conclude that it is sunny. The absence of the familiar sound patterns (and smell) of rain might provide confirmation. The whole process of matching the pattern of data it is receiving through its multiple senses with its store of known patterns can be regarded as 'intuitive', because it is not a reasoning process as such. In Kahneman's sense of 'system 1' thinking, the robot just knows, without having to perform any reasoning task.
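A minimal sketch of this intuitive step might look like the following; the sensor names, readings and stored prototypes are all invented for illustration.

```python
# System 1 as nearest-prototype matching: adopt the stored pattern closest
# to the current bundle of sensor readings as a belief, with no explicit
# reasoning involved.

PROTOTYPES = {
    "sunny":   {"brightness": 0.90, "rain_sound": 0.0, "humidity": 0.3},
    "raining": {"brightness": 0.40, "rain_sound": 0.8, "humidity": 0.9},
    "night":   {"brightness": 0.05, "rain_sound": 0.0, "humidity": 0.5},
}

def distance(a, b):
    """Euclidean distance between two sensor bundles with the same keys."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def intuit_weather(sensors):
    """Return the stored pattern closest to the current sensor bundle."""
    return min(PROTOTYPES, key=lambda label: distance(sensors, PROTOTYPES[label]))

current = {"brightness": 0.85, "rain_sound": 0.05, "humidity": 0.35}
print(intuit_weather(current))   # 'sunny'
```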
Youtube Video, System 1 and System 2, Stoic Academy, February 2017, 1:26 minutes
The knowledge obtained from matching perception to memory can nevertheless be supplemented by reasoning, or by other forms of knowledge that confirm or question the intuitively reached conclusion. If we introduce some conflicting knowledge, e.g. that the robot thinks it is the middle of the night in its current location, we then create a circumstance in which there is dissonance between two sources of knowledge – the perception of sunlight and the time of day. This assumes the robot has elaborated knowledge about where and when the sun is above the horizon and can potentially shine (e.g. acquired through language – see below).
In people the dissonance triggers the emotional state of ‘surprise’ and the accompanying motivation to account for the contradiction.
Youtube Video, Cognitive Dissonance, B2Bwhiteboard, February 2012, 1:37 minutes
Likewise, we might label the process that causes the search for an explanation in the robot as ‘surprise’. An attempt may be made to resolve this dissonance through Kahneman’s slower, more reasoned, system 2 thinking. Either the perception is somehow faulty, or the knowledge about the time of day is inaccurate. Maybe the robot has mistaken the visual and audio input as coming from its local senses when in fact the input has originated from the other side of the world. (Fortunately, people do not have to confront the contradictions caused by having distributed sensory systems).
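Continuing the toy example, the dissonance check and the hand-off to slower deliberation might be sketched like this; the candidate explanations are hard-coded purely to show the control flow.

```python
# Compare the intuitive perceptual conclusion ('sunny') against another
# belief (local time), flag 'surprise' when they conflict, and enumerate
# candidate explanations for slower, system 2 style examination.

def consistent(perceived_weather, local_hour):
    night = local_hour < 6 or local_hour > 20
    return not (perceived_weather == "sunny" and night)

def explain_conflict():
    # Candidate hypotheses the robot might go on to weigh up or test.
    return [
        "clock or location estimate is wrong",
        "sensor input is from a remote camera, not local senses",
        "light source is artificial, not the sun",
    ]

belief_weather = "sunny"
belief_hour = 2   # believed local time: 2 a.m.

if not consistent(belief_weather, belief_hour):
    print("Surprise: beliefs conflict. Candidate explanations:")
    for hypothesis in explain_conflict():
        print(" -", hypothesis)
```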
Probably, in the course of reasoning about how to reconcile the conflicting inputs, the robot will have had to run through some alternative possible scenarios that could account for the discrepancy. These may have been generated by working through other memories associated with either the perceptual inputs or other factors that have frequently led to misinterpretations in the past. Sometimes it may be necessary to construct unique possible explanations out of component part-explanations. Sometimes an explanation may emerge through the effect of numerous ideas being 'primed' through the spreading activation of associated memories. Under these circumstances, you might easily say that the robot was using its imagination in searching for a solution that had not previously been encountered.
Youtube Video, TEDxCarletonU 2010 – Jim Davies – The Science of Imagination, TEDx Talks, September 2010, 12:56 minutes
Lastly, to faith and language as sources of knowledge. Faith is different because, unlike all the other sources, it does not rely on evidence or proof. If the robot believed, on faith, that the sun was shining, any contradictory evidence would be discounted, perhaps either as being in error or as being irrelevant. Faith is often maintained by others, and this could be regarded as a form of evidence, but in general if you have faith in or trust something, it is at least filling the gap between the belief and the direct evidence for it.
Here is a religious account of faith that identifies it with trust in the reliability of God to deliver, where the main delivery is eternal life.
Youtube video, What is Faith – Matt Morton – The Essence of Faith – Grace 360 conference 2015,Grace Bible Church, September 2015, 12:15 minutes
Language as a source of evidence is a catch-all for the knowledge that comes second-hand from the teachings and reports of others. This is indirect knowledge, much of which we take on trust (i.e. faith), and some of which is validated by direct evidence or other indirect evidence. Most of us take on trust that the solar system exists, that the sun is at its centre, and that the earth is in the third orbit. We have gained this knowledge through teachers, friends, family, TV, radio, books and other sources that in their turn may have relied on astronomers and other scientists who have arrived at these conclusions through observation and reason. Few of us have made the necessary direct observations and reasoned inferences to arrive at the conclusion directly. If our robot were to consult databases of known 'facts', put together by people and other robots, then it would be relying on knowledge from this source.
Pitfalls
People like to think that their own beliefs are ‘true’ and that these beliefs provide a solid basis for their behaviour. However, the more we find out about the psychology of human belief systems the more we discover the difficulties in constructing consistent and coherent beliefs, and the shortcomings in our abilities to construct accurate models of ‘reality’. This creates all kinds of difficulties amongst people in their agreements about what beliefs are true and therefore how we should relate to each other in peaceful and productive ways.
If we are now going on to construct artificial intelligences and robots that we interact with and whose behaviours impact the world, we want to be pretty sure that the beliefs a robot develops still provide a basis for understanding its behaviour.
Unfortunately, every one of the ‘ways of knowing’ is subject to error. We can again go through them one by one and look at the pitfalls.
Sensory perception: We only have to look at the vast body of research on visual illusion (e.g. see ‘Representations of Reality – Part 1’) to appreciate that our senses are often fooled. Here are some examples related to colour vision:
Youtube Video, Optical illusions show how we see | Beau Lotto,TED, October 2009, 18:59 minutes
Furthermore, our perceptions are heavily guided by what we pay attention to, meaning that we can miss all sorts of significant and even life-threatening information in our environment. Would a robot be similarly misled by its sensory inputs? It's difficult to predict whether a robot would be subject to sensory illusions, and this might depend on the precise engineering of the input devices, but almost certainly a robot would have to be selective in what input it attended to. Like people, it could face a massive volume of raw sensory input, and every stage of processing from there on would contain an element of selection and interpretation. Even differences in what input devices are available (for vision, sound, touch or even super-human senses like perception of non-visual parts of the electromagnetic spectrum) will create a sensory environment (referred to as the 'umwelt' or 'merkwelt' in ethology) that could be quite at variance with human perceptions of the world.
YouTube Video, What is MERKWELT? What does MERKWELT mean? MERKWELT meaning, definition & explanation, The Audiopedia, July 2017, 1:38 minutes
Memory: The fallibility of human memory is well documented. See, for example, ‘The Story of Your Life’, especially the work done by Elizabeth Loftus on the reliability of memory. A robot, however, could in principle, given sufficient storage capacity, maintain a perfect and stable record of all its inputs. This is at variance with the human experience but could potentially mean that memory per se was more accurate, albeit that it would be subject to variance in what input was stored and the mechanisms of retrieval and processing.
Intuition and reason: This is the area where some of the greatest gains (and surprises) in understanding have been made in recent years. Much of this progress is reported in the work of Daniel Kahneman that is cited many times in these writings. Errors and biases in both intuition (system 1 thinking) and reason (system 2 thinking) are now very well documented. A long list of cognitive biases can be found at:
https://en.wikipedia.org/wiki/List_of_cognitive_biases
Would a robot be subject to the same types of bias? It is already established that many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot's processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
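The point can be made with a very small, self-contained example: a predictor fitted to biased historical decisions simply reproduces the bias. The dataset and groups below are invented for illustration.

```python
from collections import defaultdict

# Historical decisions (group, approved) reflecting past discrimination,
# not actual merit.
history = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
        + [("group_b", 1)] * 40 + [("group_b", 0)] * 60

rates = defaultdict(lambda: [0, 0])   # group -> [approvals, total]
for group, approved in history:
    rates[group][0] += approved
    rates[group][1] += 1

def predict_approval(group):
    """'Learned' model: just the historical approval rate for the group."""
    approvals, total = rates[group]
    return approvals / total

# Otherwise identical applicants from different groups get different
# predictions, because the model has learned the historical pattern,
# bias included.
print(predict_approval("group_a"))   # 0.8
print(predict_approval("group_b"))   # 0.4
```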
YouTube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes
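As a toy illustration of how historical bias gets ‘baked in’, the sketch below trains nothing more sophisticated than a nearest-neighbour rule on an invented hiring history in which one group was systematically favoured; the learned rule then reproduces that pattern for new cases. The data, feature names and distance penalty are fabricated purely for illustration.

```python
# Invented 'historical' decisions: (years_experience, group, hired)
# Group "A" was favoured historically; the learner has no idea this is unfair.
history = [
    (5, "A", 1), (3, "A", 1), (2, "A", 1), (1, "A", 0),
    (5, "B", 0), (4, "B", 0), (3, "B", 1), (2, "B", 0),
]

def predict(years: int, group: str, k: int = 3) -> int:
    """k-nearest-neighbour on (experience, group): distance is the experience gap,
    plus a penalty when the group differs, so the historical group bias carries over."""
    def distance(record):
        y, g, _ = record
        return abs(y - years) + (0 if g == group else 2)
    nearest = sorted(history, key=distance)[:k]
    votes = sum(label for _, _, label in nearest)
    return 1 if votes * 2 > k else 0

if __name__ == "__main__":
    # Same experience, different group: the 'learned' decision differs.
    print("Group A, 4 years:", predict(4, "A"))  # 1 (hire)
    print("Group B, 4 years:", predict(4, "B"))  # 0 (reject)
```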
As for reasoning, there is some hope that the robustness of proofs that can be achieved computationally may save the artificial intelligence or robot from at least some of the biases of system 2 thinking.
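One small, hedged example of what such computational robustness might look like: a brute-force propositional consistency check that tries every truth assignment and reports whether a set of beliefs can all hold at once. The ‘beliefs’ encoded here are invented for the example; a real system would use a proper theorem prover or SAT solver rather than exhaustive search.

```python
from itertools import product

def consistent(constraints, variables):
    """Return a satisfying truth assignment if one exists, else None.
    Each constraint is a function from an assignment dict to bool."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

if __name__ == "__main__":
    variables = ["door_locked", "user_inside", "should_open_door"]
    constraints = [
        lambda a: (not a["user_inside"]) or a["door_locked"],            # inside => locked
        lambda a: (not a["should_open_door"]) or (not a["door_locked"]), # open => unlocked
        lambda a: a["user_inside"] and a["should_open_door"],            # both required
    ]
    print(consistent(constraints, variables))  # None: this belief set is inconsistent
```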
Emotion: Biases in people due to emotional reactions are commonplace. See, for example:
YouTube Video, Unconscious Emotional Influences on Decision Making, The Rational Channel, February 2017, 8:56 minutes
However, it is also the case that emotions are crucial in decision-making. Emotions often provide the criteria and motivation on which decisions are made, and without them people can be severely impaired in effective decision-making. Emotions also provide at least one mechanism for approaching the subject of ethics in decision-making.
YouTube Video, When Emotions Make Better Decisions – Antonio Damasio, FORA.tv, August 2009, 3:22 minutes
Can robots have emotions? Will robots need emotions to make effective decisions? Will emotions bias or impair a robot’s decision-making? These are big questions that are only touched on here. Briefly, there is no reason why emotions cannot be simulated computationally, although we can never know whether an artificial computational device will have the subjective experience of emotion (or of thought). Some simulation of emotion will probably be necessary for robot decision-making to align with human values (e.g. empathy) and, yes, a side-effect of this may well be to introduce bias into decision-making.
For a selection of BBC programmes on emotions see:
http://www.bbc.co.uk/programmes/topics/Emotions?page=1
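To make the idea of computationally simulated emotion slightly more concrete, here is a minimal sketch in which a global ‘emotional’ state re-weights the scoring of candidate actions rather than choosing an action directly. The action names, emotion variables and numbers are invented and carry no claim about how affect actually works in people or in any existing robot.

```python
# Candidate actions with a base utility and a rough 'risk' level (invented numbers).
ACTIONS = {
    "hand_over_coffee": {"utility": 1.0, "risk": 0.1},
    "push_through_crowd": {"utility": 1.5, "risk": 0.5},
    "wait_politely": {"utility": 0.6, "risk": 0.0},
}

def choose(action_table, fear=0.0, urgency=0.0):
    """Global 'emotional' state modulates scoring: fear penalises risky actions,
    urgency boosts high-utility ones. The emotion does not pick the action directly;
    it shifts the criteria on which the choice is made."""
    def score(spec):
        return spec["utility"] * (1 + urgency) - spec["risk"] * (1 + 4 * fear)
    return max(action_table, key=lambda name: score(action_table[name]))

if __name__ == "__main__":
    print(choose(ACTIONS, fear=0.0, urgency=0.9))  # favours the risky, high-utility push
    print(choose(ACTIONS, fear=0.9, urgency=0.9))  # fear tips the choice to the safer hand-over
```

Note that the same mechanism that makes the choice sensitive to context is also the mechanism that can introduce bias.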
Imagination: While it does not make much sense to talk about ‘error’ when it comes to imagination, we might easily make value judgments about which types of imagination should be encouraged and which discouraged. Leaving aside debates about how, say, extensive exposure to violent video games might affect imagination in people, we can at least speculate about what might, or should, go on in the imagination of a robot as it searches through or creates new models to help predict the impact of its own and others’ behaviours.
A big issue has arisen over how an artificial intelligence can explain its decision-making to people. While an AI based on symbolic reasoning can potentially offer a trace describing the steps it took to arrive at a conclusion, an AI based on machine learning can say little more than ‘I recognized the pattern as corresponding to so-and-so’, which to a person is not very explanatory. It turns out that even human experts are often unable to provide coherent accounts of their decision-making, even when those decisions are accurate.
Having an AI or robot account for its decision-making in a way understandable to people is a problem that I will address in a later analysis of the human operating system, where I hope to provide a mechanism that bridges machine learning and more symbolic approaches.
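As a small sketch of the symbolic end of that bridge, the toy forward-chaining system below records which rules fired, so that it can report a human-readable trace of how it reached its conclusion. The rules and facts are hypothetical, and nothing here solves the harder problem of explaining a learned pattern-matcher.

```python
# Each rule: (name, condition over a set of facts, conclusion added to the facts).
RULES = [
    ("cup_is_full",   lambda f: "coffee_poured" in f,              "cup_full"),
    ("path_is_clear", lambda f: "no_person_ahead" in f,            "safe_to_move"),
    ("deliver",       lambda f: {"cup_full", "safe_to_move"} <= f, "deliver_coffee"),
]

def infer(facts):
    """Forward-chain over RULES, recording every rule that fires as the explanation."""
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts.add(conclusion)
                trace.append(f"{name}: concluded '{conclusion}'")
                changed = True
    return facts, trace

if __name__ == "__main__":
    facts, trace = infer({"coffee_poured", "no_person_ahead"})
    print("deliver_coffee" in facts)
    print("\n".join(trace))  # a human-readable account of the steps taken
```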
Faith: It is often said that discussing faith and religion is one of the easiest ways to lose friends. Any belief based on faith is regarded as true by definition, and any attempt to bring evidence to refute it stands a good chance of being regarded as an insult. Yet people hold different beliefs based on faith, and they cannot all be right. This not only creates a problem for people, who will fight wars over it, but is also a significant problem for the design of AIs and robots. Do we plug in the Muslim or the Christian ethics module, or leave it out altogether? How do we build values and ethical principles into robots anyway, or will they be an emergent property of their deep learning algorithms? Whatever the answer, it is apparent that quite a lot can go badly wrong if we do not understand how to endow computational devices with this ‘way of knowing’.
Language: As observed above, this is a catch-all for all indirect ‘ways of knowing’ communicated to people through media, teaching, books or any other form of communication. We only have to consider world wars and other genocides to appreciate that not everything communicated by other people is believable or ethical. People (and organizations) communicate erroneous information and can deliberately lie, mislead and deceive.
We strongly tend to believe information that comes from the people around us: our friends and associates, the people who form part of our sub-culture or in-group. We trust these sources for no other reason than that we are familiar with them. These social systems often form a mutually supporting belief system, whether or not it is grounded in any direct evidence.
YouTube Video, The Psychology of Facts: How Do Humans (mis)Trust Information?, YaleCampus, January 2017
Taking on trust the beliefs of others that form part of our mutually supporting social bubble is a ‘way of knowing’ that is highly error-prone. This is especially the case when it is combined with other ‘ways of knowing’, such as faith, that by their nature cannot be validated. Will robot communities that can talk to each other instantaneously and ‘telepathically’ over wireless connections also be prone to groupthink?
The validation of beliefs
So, there are multiple ways in which we come to know or believe things. As Descartes argued, no knowledge is certain (see ‘It’s Like This’). There are only beliefs, albeit that we can be more sure of some than others, normally by virtue of their consistency with other beliefs. We also note that our beliefs are highly vulnerable to error. Any robot operating system that mimics humans will also need to draw on the many different ‘ways of knowing’, including a basic set of assumptions that it takes to be true without necessarily any supporting evidence (its ‘faith’, if you like). There will also need to be many precautions against AIs and robots developing erroneous or otherwise unacceptable beliefs and basing their behaviours on them.
There is a mechanism by which we try to reconcile differences between knowledge coming from different sources, or contradictory knowledge coming from the same source. Most people seem to be able to tolerate a fair degree of contradiction or ambiguity about all sorts of things, including the fundamental questions of life.
YouTube Video, Defining Ambiguity, Corey Anton, October 2009, 9:52 minutes
We can hold and work with knowledge that is inconsistent for long periods of time, but nevertheless there is a drive to seek consistency.
In this description of the human operating system, it would seem that there are many ways in which we establish what we believe and which beliefs we recruit to solve any particular problem. Moreover, the many sources of knowledge may be inconsistent or contradictory. When we see inconsistencies in others, we take this as evidence that we should doubt them and trust them less.
YouTube Video, Why Everyone (Else) is a Hypocrite, The RSA, April 2011, 17:13 minutes
However, there is at least a strong tendency in most people to establish consistency between beliefs (or between beliefs and behaviours), and to account for inconsistencies. The problem is that we are often prone to achieve consistency by abandoning sound, evidence-based beliefs rather than strongly held beliefs that rest on faith or on the need to protect our sense of self-worth.
YouTube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes
From this analysis we can see that building AIs and robots is fraught with problems. The human operating system has evolved to survive, not to be rational or hold high ethical values. If we just blunder into building AIs and robots based on the human operating system we can potentially make all sorts of mistakes and give artificial agents power and autonomy without understanding how their beliefs will develop and the consequences that might have for people.
Fortunately, there are some precautions we can take. There are ways of thinking that have been developed to counter the many biases that people have by default. Science is one method that aims to establish the best explanations based on current knowledge and the principle of simplicity. Critical thinking, too, has been taught since Aristotle, and many courses have been developed to spread knowledge about how to assess claims and their supporting arguments.
YouTube Video, Critical Thinking: Issues, Claims, Arguments, fayettevillestatenc, January 2011
Implications
To summarise:
Sensory perception – The robot’s ‘umwelt’ (what it can sense) may well differ from that of people, even to the extent that the robot has super-human senses such as infra-red or x-ray vision, super-sensitive hearing, smell, etc. We may not even know what its perceptual world is like. It may perceive things we cannot, and miss things we find obvious.
Memory – Human memory is remarkably fallible. It is not so much a recording as a reconstruction based on clues, influenced by previously encountered patterns and current intentions. Given sufficient storage capacity, robots may be able to maintain memories as accurate recordings of the states of their sensory inputs. However, they may be subject to similar constraints and biases as people in the way that memories are retrieved and used to drive decision-making and behaviour.
Intuition – If the robot’s pattern-matching capabilities are based on machine learning from historical training sets, then bias will be built into its basic processes. Alternatively, if the robot is left to develop from its own experience then, as with people, great care has to be taken to ensure its early experience does not lead to maladaptive behaviours (i.e. behaviours not acceptable to the people around it).
Reason – Through the use of mathematical and logical proofs, robots may well have the capacity to reason with far greater ability than people. They can potentially spot (and resolve) inconsistencies arising out of different ‘ways of knowing’ with far greater adeptness than people. This may create a quite different balance between how robots make decisions and how people do, using emotion and reason in tandem.
Emotion – Human emotions are general states that arise in response to both internal and external events and provide both the motivation and the criteria on which decisions are made. In a robot, emergent global states could also potentially act to control decision-making. Both people and, potentially, robots can develop the capacity to explicitly recognize and control these global states (e.g. when suppressing anger). This ability to reflect, and to cause changes in perspective and behaviour, is a kind of feedback loop that is inherently unpredictable. Not having sufficient understanding to predict how either people or robots will react under particular circumstances creates significant uncertainty.
Imagination – Much the same argument about predictability can be made about imagination. Who knows where either a person’s or a robot’s imagination may take them? Chess computers originally out-performed human players through their capacity to reason in depth about the outcomes of every move, rather than through pattern-matching based on machine learning (although more recent systems have combined learned evaluation with deep search to great effect). Robots can far exceed human capacities to reason through and model future states. A combination of brute-force computing and heuristics to guide search may give a robot an ability to model the world and predict future outcomes that far exceeds that of people.
Faith – Faith is axiomatic for people and might also be for robots. People can change their faith (especially in a religious, political or ethical sense) but, more likely, when confronted with contradictory evidence or sufficient need (e.g. to align with a partner’s faith), people will either ignore the evidence or find reasons to discount it. This can lead to multiple interpretations of the same basic axioms, in the same way as there are many religious denominations and many interpretations of key texts within them. In robots, Asimov’s three laws of robotics would equate to their faith. However, if robots used similar mechanisms to people (e.g. cognitive dissonance) to resolve conflicting beliefs then, in the same way as God’s will can be used to justify any behaviour, a robot may be able to construct a rationale for any behaviour whatever its axioms. There would be no guarantee that a robot would obey its own axiomatic laws.
Communication – The term ‘language’ is better labelled ‘communication’, to make it more apparent that it extends to all the methods by which we ‘come to know’ from sources outside ourselves. Since knowledge communicated by others is not direct experience, it is effectively taken on trust; in one sense it is a matter of faith. However, the degree of consistency across external sources (i.e. that a teacher or the TV will reinforce what a parent has said), and between what is communicated and what is directly observed (for example, that a person does what he says he will do), will reveal some sources as more believable than others. We also appeal to motive as a method of assessing degree of trust. People are notoriously influenced by the norms, opinions and behaviours of their own reference groups. Robots, with their potential for high-bandwidth communication, could in principle behave with the same crowd psychology as humans, only much more rapidly and ‘single-mindedly’. It is not difficult to see how the Star Trek image of the Borg, acting as one consciousness, could come about.
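A hedged sketch of that consistency-based assessment of sources: score each source by how often its checkable claims match direct observation, then weight its unverifiable claims by that score. The sources, claims and the neutral prior of 0.5 are all illustrative assumptions.

```python
from collections import defaultdict

def trust_scores(reports, observations):
    """reports: {source: {claim: value}}; observations: {claim: value} directly sensed.
    A source's score is the fraction of its checkable claims that match observation."""
    scores = {}
    for source, claims in reports.items():
        checkable = [c for c in claims if c in observations]
        if not checkable:
            scores[source] = 0.5  # no evidence either way: a neutral prior (an assumption)
        else:
            agree = sum(1 for c in checkable if claims[c] == observations[c])
            scores[source] = agree / len(checkable)
    return scores

def weighted_belief(reports, scores, claim):
    """Combine what the sources say about an unobserved claim, weighted by trust."""
    votes = defaultdict(float)
    for source, claims in reports.items():
        if claim in claims:
            votes[claims[claim]] += scores[source]
    return max(votes, key=votes.get) if votes else None

if __name__ == "__main__":
    reports = {
        "teacher": {"door_open": True, "meeting_at_3": True},
        "stranger": {"door_open": False, "meeting_at_3": False},
    }
    observations = {"door_open": True}  # directly sensed
    scores = trust_scores(reports, observations)
    print(scores)
    print(weighted_belief(reports, scores, "meeting_at_3"))  # the teacher's claim wins
```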
Other Ways of Knowing
It is worth considering just a few of the many other ‘ways of knowing’ not considered above, partly because some of them might help mitigate the risks of the human ‘ways of knowing’.
Science – Science has evolved methods that are deliberately designed to create impartial, robust and consistent models and explanations of the world. If we want robots to create accurate models, then an appeal to scientific method is one approach. In science, patterns are observed, hypotheses are formulated to account for these patterns, and the hypotheses are then tested as impartially as possible. Science also seeks consistency by reconciling disparate findings into coherent overall theories. While we may want robots to use scientific methods in their reasoning, we may want to ensure that robots do not perform experiments in the real world simply for the sake of making their own discoveries. An image of concentration camp scientists comes to mind. Nevertheless, in many small ways robots will need to be empirical rather than theoretical in order to operate at all.
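As a minimal sketch of that observe-hypothesise-test loop, the code below scores competing hypotheses about an entirely invented domain (a kettle that sometimes fails to boil) by how well they fit the observations, keeping the best-supported one only provisionally.

```python
# Each observation: (socket, boiled) -- invented data for illustration.
observations = [
    ("socket_A", True), ("socket_A", True), ("socket_A", True),
    ("socket_B", False), ("socket_B", False), ("socket_B", True),
]

def support(hypothesis, data):
    """Fraction of observations consistent with the hypothesis (a crude test)."""
    consistent = sum(1 for obs in data if hypothesis(obs))
    return consistent / len(data)

# Competing hypotheses, stated as predicates over an observation.
hypotheses = {
    "kettle always boils":           lambda o: o[1],
    "kettle fails only on socket_B": lambda o: o[1] == (o[0] != "socket_B"),
    "kettle never boils":            lambda o: not o[1],
}

if __name__ == "__main__":
    for name, h in hypotheses.items():
        print(f"{name}: support {support(h, observations):.2f}")
    # The best-supported hypothesis is kept provisionally, ready to be revised
    # if new observations reduce its support.
```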
Argument – Just like people, robots of any complexity will encounter ambiguity and inconsistencies. These will be inconsistencies between expectation and actuality, between data from one way of knowing and another (e.g. between reason and faith, or between perception and imagination etc.), or between a current state and a goal state. The mechanisms by which these inconsistencies are resolved will be crucial. The formulation of claims; the identification, gathering and marshalling of evidence; the assessment of the relevance of evidence; and the weighing of the evidence, are all processes akin to science but can cut across many ‘ways of knowing’ as an aid to decision making. Also, this approach may help provide explanations of a robot’s behaviour that would be understandable to people and thereby help bridge the gap between opaque mechanisms, such as pattern matching, and what people will accept as valid explanations.
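That weighing process can be sketched computationally, if only crudely: each piece of evidence carries a relevance and a reliability, reflecting the ‘way of knowing’ that supplied it, and the claim is accepted only if the weighted balance clears a threshold. The claim, evidence items and numbers below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    source: str        # which 'way of knowing' supplied it, e.g. "perception", "testimony"
    supports: bool     # does it support or undermine the claim?
    relevance: float   # 0.0 - 1.0, judged fit to the claim
    reliability: float # 0.0 - 1.0, how much the source is trusted

def weigh(claim: str, evidence, threshold=0.3):
    """Sum signed, weighted evidence; accept the claim only if the balance is clear."""
    balance = sum((1 if e.supports else -1) * e.relevance * e.reliability for e in evidence)
    verdict = "accept" if balance > threshold else "reject" if balance < -threshold else "undecided"
    return verdict, round(balance, 2)

if __name__ == "__main__":
    claim = "the corridor is blocked"
    evidence = [
        Evidence("camera shows an obstacle", "perception", True, 0.9, 0.8),
        Evidence("map says corridor is clear", "memory", False, 0.7, 0.5),
        Evidence("colleague reports a delivery trolley", "testimony", True, 0.6, 0.6),
    ]
    print(claim, "->", weigh(claim, evidence))
```

A record of which evidence was weighed, and how, is also the raw material for the kind of explanation people are likely to accept.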
Meditation – Meditation is a placeholder for the many ways in which altered states of consciousness can lead to new knowledge. Dreaming, for example, is another altered state that may lead to new hypotheses and models based on novel combinations of elements that would not otherwise have been brought together. People certainly have these altered states of consciousness. Could there be an equivalent in a robot, and would we want robots to indulge in such extreme imaginative states when we would have no idea what they might consist of? This is not necessarily to attribute consciousness to robots, which is a separate, and probably metaphysical, question.
Theory of mind – For any autonomous agent with its own beliefs and intentions, including a robot, it is crucial to survival to have some notion of the intentions of other autonomous agents, especially when they might be a direct threat. People have sophisticated but highly biased and error-prone mechanisms for modelling the intentions of others. These mechanisms are particularly alert to any sign of threat and, as a proven survival mechanism, tend to assume threat even when none is present; the people that did not do this died out. Work in robotics already recognizes that, to be useful, robots have to cooperate with people, and this requires some modelling of their intentions. As the video below illustrates, the modelling of others’ intentions is inherently complex because it is recursive.
YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes
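A minimal sketch of how such recursion can be represented: nested belief structures, in which an agent holds a model of another agent's model, and so on. The agents and propositions are hypothetical, and nothing here addresses how such models would be learned or kept accurate, which is exactly where the biases discussed above come in.

```python
def nested_belief(believer_chain, proposition):
    """Build 'A believes that B believes that ... P', illustrating how each extra
    order of intentionality adds another layer that must be modelled and updated."""
    clause = proposition
    for agent in reversed(believer_chain):
        clause = f"{agent} believes that {clause}"
    return clause

if __name__ == "__main__":
    print(nested_belief(["robot"], "the coffee is wanted"))
    print(nested_belief(["robot", "person"], "the coffee is wanted"))
    print(nested_belief(["robot", "person", "robot"], "the corridor is blocked"))

    # The same nesting as data: the robot's model of the person's model of the robot.
    robot_model = {
        "coffee_wanted": True,                     # first-order belief
        "person": {                                # robot's model of the person
            "coffee_wanted": True,
            "robot": {"will_fetch_coffee": True},  # person's presumed model of the robot
        },
    }
    print(robot_model["person"]["robot"]["will_fetch_coffee"])
```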
If there is a conclusion to this analysis of ‘ways of knowing’ it is that creating intelligent, autonomous mechanisms, such as robots and AIs, will have inherently unpredictable consequences, and that, because the human operating system is so highly error-prone and subject to bias, we should not necessarily build them in our own image.