
Tag Archives: Mental Models



Do Robots Need Theory of Mind? Part 2

Image credit: https://unsplash.com/@henkmul

Why Robots might need Theory of Mind (ToM)

Existential Risk and the AI Alignment Problem

Russell (2019) argues that we have been thinking about building artificial intelligence (AI) systems the wrong way. Since its inception, AI has attempted to build systems that achieve ‘their own’ goals, albeit goals that we may have given them in the first instance. Instead, he says, we should be building AIs that understand ‘the preference structure’ a person has and that attempt to satisfy goals within the constraints of that preference structure.

In this way, the AI will be able to understand that acting to achieve one goal (e.g. getting a coffee) may interact or interfere with other preferences, goals or constraints (e.g. not knocking someone out of the way in the process) and thereby moderate its behaviour. An AI needs to understand that a goal is not there to be achieved ‘at all costs’. Instead, it should be achieved while taking into account the many other preferences and priorities that might moderate it. Russell argues that if we think of building AIs in this way, we may be able to avoid the existential risk that superhuman AIs will eventually take over and, either deliberately or inadvertently, wipe out humanity.

This is an example of what AI researchers have termed ‘the AI alignment problem’, which potentially creates an existential risk to humanity if we find ourselves, having built super-intelligent machines, unable to control them. Nick Bostrom (2014) has also characterised this threat with the example of an AI set the goal of producing paperclips: it takes the goal so literally that, in its single-minded execution (for example, in its need for ever more raw materials) and with no appreciation of when to stop, it destroys humanity. Several other researchers have addressed the AI alignment problem (mainly in terms of laws, regulations and social rules), including Taylor et al. (2017), Hadfield-Menell & Hadfield (2019), Vamplew et al. (2018), and Hadfield-Menell, Andrus & Hadfield (2019).

Russell (2019) goes on to describe how an AI should always have some level of uncertainty about what people want. Such uncertainty would put a check on the single-minded execution of a goal at all cost. It would drive a need for the AI to keep monitoring and maintaining its model of what a person might want at any point in time. It would require the AI to keep checking that what it was doing was ‘on-track’ or ‘aligned’ with a person’s whole preference structure. So, if, for example, you had instructed your self-driving car to take you to the airport and you received a message during the trip that your child had been in a road accident, the AI might recognise this as significant, and check whether you wanted to change your plans.

Russell arrives at this position from addressing the problem of existential risk. It is a proposed solution to the AI alignment problem. Working within this frame of reference, he proposes solutions like ‘Cooperative Inverse Reinforcement Learning’ (Malik et al. 2018), whereby the Autonomous Intelligent System (AIS) attempts to infer the preference structure of a person from observations of their behaviour. This, indeed, seems to be a sensible approach.
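
To make Russell’s idea concrete, here is a minimal sketch of Bayesian preference inference from observed choices. It is not Malik et al.’s algorithm, just a toy illustration of the underlying idea; the candidate preference structures, the softmax choice model and all the numbers are invented for the example.

```python
import math

# Two invented candidate "preference structures" the person might hold:
# utilities they would assign to outcomes of the robot's actions.
candidate_preferences = {
    "values_speed":    {"fast_but_rude": 1.0, "slow_but_polite": 0.2},
    "values_courtesy": {"fast_but_rude": 0.1, "slow_but_polite": 1.0},
}

# Start uncertain: a uniform prior over which structure is the true one.
belief = {name: 1.0 / len(candidate_preferences) for name in candidate_preferences}

def likelihood(prefs, chosen, options, rationality=3.0):
    """Softmax (noisily rational) model of how likely the person is to pick
    `chosen` from `options` if their true utilities are `prefs`."""
    exps = {o: math.exp(rationality * prefs[o]) for o in options}
    return exps[chosen] / sum(exps.values())

def update_belief(belief, chosen, options):
    """Bayesian update of the belief after observing one choice."""
    posterior = {
        name: p * likelihood(candidate_preferences[name], chosen, options)
        for name, p in belief.items()
    }
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Observe the person pick the polite option: belief shifts towards
# "values_courtesy" but never collapses to complete certainty.
belief = update_belief(belief, "slow_but_polite", ["fast_but_rude", "slow_but_polite"])
print(belief)
```

Because the posterior never reaches certainty, an agent built along these lines always has a reason to keep watching and, when the stakes are high, to ask.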

However, the exact mechanism by which an AIS coordinates its actions with a person or people may well depend on it being able to accurately infer people’s mental states. Otherwise it might have to explicitly check (e.g. by asking), every few seconds, whether what it was doing was acceptable, and it would need to ‘read’ when a person found its behaviour unacceptable (e.g. by noting the frown when it is about to hit somebody on its mission to get the coffee).

The AI alignment problem is precisely the problem that every person has when interacting with another human being. When interacting with somebody else we are unable to directly observe their internal mental states. We cannot know their preference structure, and we can only take on trust that their intentions are what they say they are. Their real intentions, beliefs, desires, values and boundaries could, in principle, be anything. What we do is infer their intentions from their behaviour, including what they say (and what we understand from it). Intentions, beliefs and preferences are all hidden variables that may be the underlying causes of behaviour but, because they are unobservable, can only be guessed at.

Russell takes this on board and understands that the alignment problem is one that exists between any two agents, human or artificial. He is saying that robots need to be equipped with similar mechanisms to those that people generally have. These are the mechanisms that can model human beliefs, preferences and intentions by making inferences from observations of behaviour. Fortunately, we are not discovering and inventing these mechanisms for the first time.

Alignment with What?

A potential problem with having an AIS infer, reason and act on its analysis of another person’s mental states is that it may not accurately predict the consequences of its own actions. An action designed to do good may, in fact, do harm. In addition to being mistaken about the direction of its effect on mental states (positive or negative), it may also be inaccurate about the extent. So an act designed to please may have no effect, or an act not intended to cause either pleasure or displeasure may have one.

This is quite apart from all manner of other complications that we might describe as its ‘policies’. Should, for example, an AIS always act to minimise harm and maximise a person’s pleasure? How should an AIS react if a person consistently fails to take medication prescribed for their benefit? How should it trade off short-term and longer-term benefits? How does an AIS reconcile differences between two or more people, between a person’s legal obligations and their desires, or between the interests of a person and another organisation (a school, a company, their employer, the tax office and so on)?

In all these cases, the issue comes down to how the AIS evaluates its own choice of possible actions (or inaction) and which stakeholders it takes into account when performing this evaluation. Numerous guidelines have been produced in recent years to help guide developers of AI systems. The good news is that there is considerable agreement about the kinds of principles that apply – not contravening human rights, not doing harm, increasing wellbeing, transparency and explainability in how the AIS arrives at decisions, elimination of bias and discrimination, and clear accountability and responsibility for the AIS’s decisions. The main mechanism for putting these principles into practice is the training of, and the controls (guidelines, standards and laws) applied to, companies, designers and developers. Comparatively little has been proposed for the controls that might be embedded within the AIS itself, and even less about the principles and mechanisms that might be used to achieve this.
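
One way to picture ‘controls embedded within the AIS itself’ is as an explicit multi-objective evaluation step, broadly in the spirit of Vamplew et al. (2018). The sketch below is illustrative only: the objectives, weights, scores and the hard ‘veto’ threshold on harm are all assumptions made up for the example, not a proposal for how such values should be set.

```python
# Candidate actions scored against several objectives (all values invented).
ACTIONS = {
    "fetch_coffee_quickly":   {"harm": 0.4, "wellbeing": 0.6, "autonomy": 0.9},
    "fetch_coffee_carefully": {"harm": 0.0, "wellbeing": 0.5, "autonomy": 0.9},
    "do_nothing":             {"harm": 0.0, "wellbeing": 0.0, "autonomy": 1.0},
}

WEIGHTS = {"harm": -2.0, "wellbeing": 1.0, "autonomy": 0.5}
HARM_VETO = 0.3  # hard constraint: reject anything above this expected harm

def evaluate(actions, weights, harm_veto):
    """Filter out actions that cross the harm threshold, then rank the rest
    by a weighted sum across the remaining objectives."""
    permissible = {a: s for a, s in actions.items() if s["harm"] <= harm_veto}
    return sorted(
        permissible.items(),
        key=lambda item: sum(weights[k] * v for k, v in item[1].items()),
        reverse=True,
    )

for action, scores in evaluate(ACTIONS, WEIGHTS, HARM_VETO):
    print(action, scores)
```

The interesting design questions are exactly the ones raised above: who chooses the objectives and weights, which stakeholders get an objective at all, and when a hard veto should override everything else.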

We could turn to economics for models of preference and choice, but these models have been discredited by findings in the social sciences (e.g. prospect theory), and many would argue that the incentives encouraged by such models are precisely what has led to existential risks like nuclear arms races and climate change. We would therefore need to think very carefully before using these same models to drive the design of artificial intelligences, given their potential to add yet another existential risk.

The existential risk discussed in relation to AISs has tended to focus on the fear that if an artificial intelligence is given autonomy to achieve its objectives without constraint, then it might do anything. Even simple systems can become unpredictable very quickly, and if a system is unpredictable it is out of control. In the anthropomorphic way characteristic of human beings, we project onto the AIS that it would be concerned about its own self-preservation, or that it would discover that self-preservation was a necessary pre-condition to attaining its goal(s). We further project that if it adopts the goal of self-preservation, then it might pursue this at all costs, putting its own self-preservation ahead of even that of its creators. There are some good reasons for these fears, because goals like self-preservation and the accumulation of resources are instrumental to the achievement of any other goal, and an AIS might easily reason that out (Bostrom 2012). There have been challenges to this line of reasoning, but this debate is not a central concern here. Rather, I am more concerned with whether an AIS can align with the goals of an individual using the same sorts of social cues that we all use in the informal ways in which we, in general, cooperate with each other.

If we are already concerned that the economic and political systems currently in place can have some undesirable consequences, like other existential risks and concentrations of wealth in the hands of a few, then the last thing we would want to do is build into AISs the same mechanisms for evaluating choices as those assumed by classical economic theory. In these posts, I look primarily to psychology (and sometimes philosophy) to provide evidence and analysis of how people make decisions in a social world, particularly one in which we are taking into account our beliefs about other people’s mental states. Whether this provides an answer to the alignment problem remains to be seen, but it is, at least, another perspective that may help us understand the types of control mechanisms we may need as the development of AIS proceeds at an ever increasing pace.

Cooperation and Collaboration

Image credit: https://unsplash.com/@brett_jordan

The paradigm in which robots act as slaves to their human masters is gradually being replaced by one in which robots and humans work collaboratively together to achieve some goal (Sheridan 2016). This applies both to individual human-robot interactions and to multi-robot teams (Rosenfeld et al. 2017). If robots, and AISs generally, could infer the mental states of the people around them when performing complex tasks, then this could potentially lead to more intuitive and efficient collaboration between the person and the machine. This requires trust on the part of the human that the robot will play its part in the interaction (Hancock et al. 2011).

As a step on the way, systems have been built where robots collaborate with each other, without communication, to perform complex tasks using only visual cues (Gassner et al. 2017). Collaboration is especially useful in situations like care giving (Miyachi, Iga & Furuhata 2017) where giving explicit verbal instructions might be difficult (e.g. in cases of Alzheimer’s or autism). Gray et al. (2005) proposed a system of action parsing and goal simulation whereby a robot might infer the goals and mental states of others in a collaborative task scenario.

Potential Benefits

Equipping AISs with the ability to recognise, infer and reason about the mental states of others could have some extraordinary advantages. Not only might we avoid existential risk to humanity (and could there be anything of greater significance?) and make our interactions with robots and AISs generally easy and intuitive, but we could also be living alongside intelligent artefacts that have a robust capacity for moral reasoning. Not only could they keep themselves in check, so that they made only justifiable moral decisions with respect to their own actions, but they might also help us adjudicate our own actions, offering fair, reasonable and justifiable remedies to human transgressions of the law and other social codes. They might become reliable and trustworthy helpers and companions, politely guiding us in solving currently intractable world problems, and protecting us from our own worst human biases, vices and deficiencies. If they turned out to be better at moral reasoning than people, then like wise philosophers they could offer us considered advice to help us achieve our goals and deal with the dilemmas of everyday life.

However, there is much that stands in the way of achieving this utopian relationship with the intelligent artefacts we create, especially if we want an AIS to infer mental states in the same way a person might, by observation and perhaps by asking questions. We are beginning to understand patterns of neuronal activity sufficiently well to infer some mental states. For example, Haynes et al. (2007) report being able to tell which of two choices a person is making by looking at neural activity. Elon Musk is creating ‘Neural Lace’ for such a purpose (Cuthbertson 2016), but could mental states be inferred using non-invasive approaches?

In particular, could we create AISs that could infer our mental states without inadvertently creating an even greater and more immediate existential risk? I will later argue that giving AISs theory of mind, without them having the same sort of controls on social behaviour that empathy gives people, could be a disaster that heightens existential risk in our very attempt to avoid it. In subsequent posts I first consider whether the artificial inference of human mental states is even a credible possibility.

References

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Bostrom, N., (2014). Superintelligence: Paths, Dangers, Strategies (1st. ed.). Oxford University Press, Inc., USA.

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Gassner, M., Cieslewski, T., & Scaramuzza, D. (2017). Dynamic collaboration without communication: Vision-based cable-suspended load transport with two quadrotors. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 5196-5202). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989609

Gray, J., Breazeal, C., Berlin, M., Brooks, A., & Lieberman, J. (2005). Action parsing and goal inference using self as simulator. In Proceedings – IEEE International Workshop on Robot and Human Interactive Communication (Vol. 2005, pp. 202–209). https://doi.org/10.1109/ROMAN.2005.1513780

Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 417–422). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314250

Hadfield-Menell, D., Andrus, M., & Hadfield, G. K. (2019). Legible normativity for AI alignment: The value of silly rules. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 115–121). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314258

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Malik D., Palaniappan M., Fisac J., Hadfield-Menell D., Russell S., and Dragan A., (2018) “An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning.” In Proc. ICML-18, Stockholm.

Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. In Procedia Computer Science (Vol. 112, pp. 1254-1262). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.08.078

Rosenfeld, A., Agmon, N., Maksimov, O., & Kraus, S. (2017). Intelligent agent supporting human-multi-robot team collaboration. Artificial Intelligence, 252, 211-231. https://doi.org/10.1016/j.artint.2017.08.005

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Allen Lane, 1st edition. ISBN-10: 0241335205, ISBN-13: 978-0241335208

Sheridan, T. B. (2016). Human-Robot Interaction: Status and Challenges. Human Factors, 58(4), 525-32. https://doi.org/10.1177/0018720816644364

Taylor, J., Yudkowsky, E., Lavictoire, P., & Critch, A. (2017). Alignment for Advanced Machine Learning Systems. Miri, 1–25. Retrieved from https://intelligence.org/files/AlignmentMachineLearning.pdf

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6


Do Robots Need Theory of Mind? – part 1

Robots, and Autonomous Intelligent Systems (AISs) generally, may need to model the mental states of the people they interact with. Russell (2019), for example, has argued that AISs need to understand the complex structures of preferences that people have in order to be able to trade off many human goals, and thereby avoid the problem of existential risk (Boyd & Wilson 2018) that might follow from an AIS with super-human intelligence doggedly pursuing a single goal. Others have pointed to the need for AISs to maintain models of people’s intentions, knowledge, beliefs and preferences, so that people and machines can interact cooperatively and efficiently (e.g. Lemaignan et al. 2017, Ben Amor et al. 2014, Trafton et al. 2005).

However, in addition to risks already well documented (e.g. Müller & Bostrom 2016) there are many potential dangers in having artificial intelligence systems closely observe human behaviour, infer human mental states, and then act on those inferences. Some of the potential problems that come to mind include:


  • The risk that self-determining AISs will be built with only a limited capability for understanding human mental states and preferences, and that humans will lose control of the technology (Meek et al. 2017, Russell 2019).

  • The risk that the AIS will exhibit unfair biases in what it selects to observe, infer and act on (Osoba & Welser 2017).

  • The risk that the AIS will use misleading information, make inaccurate observations and inferences, and base its actions on these (McFarland & McFarland 2015, Rapp et al. 2014).

  • The risk that, even if the AIS observes and infers accurately, its actions will not align with what a person might do, or may have unintended consequences (Vamplew et al. 2018).

  • The risk that an AIS will misuse its knowledge of a person’s hidden mental states, resulting in either deliberate or inadvertent harm or criminal acts (Portnoff & Soupizet 2018).

  • The risk that people’s human rights and rights to privacy will be infringed, because of the ability of AISs to observe, infer, reason about and record data that people have not consented to and may not even know exists (OECD 2019).

  • The risk that, if the AIS were making decisions based on unobservable mental states, any explanations of the AIS’s actions based on them would be difficult to validate (Future of Life Institute 2017, Weld & Bansal 2018).

  • The risk that the AIS would, in the interests of a global common good, correct for people’s foibles, biases and dubious (unethical) acts, thereby taking away their autonomy (Timmers 2019).

  • The risk that, using AISs, a few multinational companies and countries will collect so much data about people’s explicit and inferred hidden preferences that power and wealth become concentrated in even fewer hands (Zuboff 2019).

  • The risk that corporations will rush to be the first to produce AISs that can observe, infer and reason about people’s mental states, and in so doing will neglect to take safety precautions (Armstrong et al. 2016).

  • The risk that, in acting out of some greater interest (i.e. the interests of society at large), an AIS will act to restrict the autonomy or dignity of the individual (Zardiashvili & Fosch-Villaronga 2020).

  • The risk that an AIS would itself take unacceptable risks, based on inferred and uncertain mental states, that may cause a person or itself harm (Merritt et al. 2011).

Much has been written about the risks of AI, and in the last few years numerous ethical guidelines, principles and recommendations have been made, especially in relation to the regulation of the development of AISs (Floridi et al. 2018). However, few of these have touched on the real risk that AISs may one day develop to the point where they can gain a good understanding of people’s unobservable mental states and act on it. We have already seen Facebook being used to target advertisements and persuasive messages on the basis of inferred political preferences (Isaak & Hanna 2018).

In future posts I look at the extent to which an AIS could potentially have the capability to infer other people’s mental states. I touch on some of the advantages and dangers, and identify some of the issues that may need to be thought through.

I argue that AISs generally (not only robots) may need not only to model people’s mental states – a capacity known in the psychology literature as Theory of Mind, or ToM (Carlson et al. 2013) – but also to have some sort of emotional empathy. Neural nets have already been used to create algorithms that demonstrate some aspects of ToM (Rabinowitz et al. 2018). I explore the idea of building AISs with both ToM and some form of empathy, and the idea that unless we are able to equip AISs with a balance of control mechanisms we run the risk of creating AISs that have ‘personality disorders’ we would want to avoid.

In making this case, I look at whether it is conceivable that we could build AISs that have both ToM and emotional empathy, and, if it were possible, how these two capacities would need to be integrated to provide an effective overall control mechanism. Such a control mechanism would involve both fast (but sometimes inaccurate) processes and slower (reflective and corrective) processes, similar to the distinction Kahneman (2011) makes between system 1 and system 2 thinking. The architecture has the potential for the fine-grained integration of moral reasoning into the decision making of an AIS.

What I hope to add to Russell’s (2019) analysis is a more detailed consideration of what is already known in the psychology literature about the general problem of inferring another agent’s intentions from their behaviour. This may help to join up some of the thinking in AI with some of the thinking in cognitive psychology in a very broad-brushed way such that the main structural relationships between the two might come more into focus.


References

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human-robot cooperation tasks. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 2831–2837). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2014.6907265

Boyd, M., & Wilson, N. (2018). Existential Risks. Policy Quarterly, 14(3). https://doi.org/10.26686/pq.v14i3.5105

Carlson, S. M., Koenig, M. A., & Harms, M. B. (2013). Theory of mind. Wiley Interdisciplinary Reviews: Cognitive Science, 4(4), 391–402. https://doi.org/10.1002/wcs.1232

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Floridi, L., Cowls, J., Beltrametti, M. et al., (2018), AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 doi:10.1007/s11023-018-9482-5

Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. Future of Life, 1–23. Retrieved from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Lemaignan, S., Warnier, M., Sisbot, E. A., Clodic, A., & Alami, R. (2017). Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, 45–69. https://doi.org/10.1016/j.artint.2016.07.002

McFarland, D. A., & McFarland, H. R. (2015). Big Data and the danger of being precisely inaccurate. Big Data and Society. SAGE Publications Ltd. https://doi.org/10.1177/2053951715602495

Meek, T., Barham, H., Beltaif, N., Kaadoor, A., & Akhter, T. (2017). Managing the ethical and risk implications of rapid advances in artificial intelligence: A literature review. In PICMET 2016 – Portland International Conference on Management of Engineering and Technology: Technology Management For Social Innovation, Proceedings (pp. 682–693). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PICMET.2016.7806752

Merritt, T., Ong, C., Chuah, T. L., & McGee, K. (2011). Did you notice? Artificial team-mates take risks for players. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 338–349). https://doi.org/10.1007/978-3-642-23974-8_37

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Oecd/Legal/0449. Retrieved from http://acts.oecd.org/Instruments/ShowInstrumentView.aspx?InstrumentID=219&InstrumentPID=215&Lang=en&Book=False

Osoba, O., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. https://doi.org/10.7249/rr1744

Portnoff, A. Y., & Soupizet, J. F. (2018). Artificial intelligence: Opportunities and risks. Futuribles: Analyse et Prospective, 2018-September(426), 5–26.

Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., & Botvinick, M. (2018). Machine Theory of mind. In 35th International Conference on Machine Learning, ICML 2018 (Vol. 10, pp. 6723–6738). International Machine Learning Society (IMLS).

Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate information. Memory and Cognition, 42(1), 11–26. https://doi.org/10.3758/s13421-013-0339-0

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Allen Lane, 1st edition. ISBN-10: 0241335205, ISBN-13: 978-0241335208

Timmers, P., (2019), Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds & Machines 29, 635–645 doi:10.1007/s11023-019-09508-4

Trafton, J. G., Cassimatis, N. L., Bugajska, M. D., Brock, D. P., Mintz, F. E., & Schultz, A. C. (2005). Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 35(4), 460–470. https://doi.org/10.1109/TSMCA.2005.850592

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

Weld, D. S., & Bansal, G. (2018). Intelligible artificial intelligence. ArXiv, 62(6), 70–79. https://doi.org/10.1145/3282486

Zardiashvili, L., & Fosch-Villaronga, E. (2020). “Oh, Dignity too?” Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds & Machines. https://doi.org/10.1007/s11023-019-09514-6

Zuboff, S. (2019). The Age of Surveillance Capitalism. Profile Books.

The Representation of Reality Enables Control – Part 1

What is the relationship between the world and our mental representation of it? What is the representation that we use to model the world, and run through in our minds alternative futures enabling us to anticipate and predict what might happen? How do we ‘mind the gap’ between our expectations and our experience and work out how to fill our unmet needs? Things are not always what we expect.

YouTube Video, 10 Amazing Illusions – Richard Wiseman, Quirkology, November 2012, 2:36 minutes

Previous blogs considered how being oriented, and having purpose, form the basis for having control, and how, when needs are unmet and we lack control, wellbeing suffers. Orientation was seen as a mental map or model that allows us to navigate around our knowledge and thoughts, to know where we are going and to plan the necessary steps on the way.

Representation is Crucial

I want to know whether it is shorter to go from B to D via A or via C. I am told that A is 80 miles west of B. B is 33 miles south of C. C is 95 miles south east of D. D is 83 miles north of A. A is 103 miles south west of C. What’s the answer?

It is very difficult to figure this out without drawing a map or diagram. With a map the answer is visually obvious. Even knowing that A is Swindon, B is London, C is Stevenage, and D is Birmingham doesn’t help much unless you have a good knowledge of UK geography and can see the problem in your ‘mind’s eye’.
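
For readers who want to see the ‘map’ made explicit, here is a small sketch that turns the verbal relations into coordinates and compares the two routes. The stated distances are approximate and over-determined (they cannot all hold exactly at once), so the sketch uses only a consistent subset of them.

```python
import math

# Place B at the origin and derive the other towns from a consistent subset
# of the statements above (x grows eastwards, y grows northwards).
B = (0.0, 0.0)
A = (B[0] - 80, B[1])   # A is 80 miles west of B
C = (B[0], B[1] + 33)   # B is 33 miles south of C
D = (A[0], A[1] + 83)   # D is 83 miles north of A

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

via_A = dist(B, A) + dist(A, D)
via_C = dist(B, C) + dist(C, D)
print(f"B -> A -> D: {via_A:.0f} miles;  B -> C -> D: {via_C:.0f} miles")
# Once the points are laid out, the comparison is trivial; the hard part
# was never the arithmetic, it was the representation.
```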

But even problems like ‘will I be happier taking a boring but highly paid job at the bank or a more challenging teaching job?’ are difficult to think about without employing some spatial reasoning, perhaps because they can involve some degree of quantitative comparison (across several dimensions – happiness, financial reward, degree of challenge etc.).

How you represent a problem is crucial to whether it is easy or difficult to solve.

The ‘framing’ of a problem, and the mindset you bring to it, considerably influence which kinds of solutions are easy to find and which are near impossible. If we think the sun goes round the earth, then we will have considerably more difficulty predicting the positions of the planets than if we think the earth goes round the sun. If we think somebody is driven by a depressive disease when in fact their circumstances are appalling, we may give them medication rather than practical help. Having a suitable representation and mindset is crucial to enabling control.

The wonderful thing is that people can re-invent representations and make difficult problems easy. However, this often takes effort and because we are lazy, for the most part we do not bother and continue to do things in the same old way – until, that is, we get a surprise or shock that makes us think again.

Language and Thought

So familiar and ingrained is the notion of orientation and navigation that spatial metaphors are rife in language – ‘I don’t know which way to turn’, ‘she’s a distant relative but a close friend’, ‘house prices are climbing’, ‘I take a different position’ etc. However, language may only be a symptom or product of our thoughts and not the mental representation itself.

Philosophers and linguists have long speculated on the relationship between language and thought. Is it possible to think about certain things without the aid of linguistic hooks to hang the thoughts on?

Steven Pinker considers language a window on how we think. Our choice and use of different linguistic constructions reveals much of the subtlety and nuance of our thoughts and intentions. How we phrase a sentence is as much to do with allowing space for interpretation, negotiation and the management of social roles as it is to do with communicating information at ‘face value’.

TED Video, Steven Pinker: What our Language Habits reveal, TED, September 2007, 17:41 minutes

Pinker also differentiates thought and language, demonstrating that it is possible to have thought without language and that we think first and then put language to the thoughts in order to communicate. For example, babies and animals are able to make sense of the world without being able to put it into language. We translate between different languages by reference to underlying meaning. Pinker uses the term ‘mentalese’ for the ‘language’ of thought. We often think with our senses, in images, sounds and probably also our other senses. We can also think non-linguistically in terms of propositions and abstract notions. This is not to say that language and thought are not intimately bound up – what one person says influences what another person thinks. However, the fact that words can be invented to convey new concepts suggests that the thought can come first and the language is created as a tool to capture and convey it.

TED Video, Steven Pinker: Language and Consciousness, Part 1 Complete: Thinking Allowed w/ J. Mishlove, ThinkingAllowedTV, October 2012, 27:17 minutes

But just as language reflects and may constrain thought, it also facilitates it and allows us to see things from different perspectives without very much effort. In general, metaphor allows us to think of one concept in terms of another. In so doing it provides an opportunity to compare the metaphor to the characteristics of the thing we are referring to – ‘shall I compare thee to a summer’s day?’. A summer’s day is bright, care-free, timeless and so forth. Metaphor opens up the possibility of attributing new characteristics that were not at first considered. It releases us from literal thought and takes us into the realm of possibility and new perspectives.

TED Video, James Geary, Metaphorically Speaking, TED, December 2009, 10:44 minutes


Mental Models

Despite the importance of language as a mechanism for both capturing and shaping thought, it is not the only way that thought is represented. In fact it is a comparatively high-level and symbolic form of representation. Thoughts, for example, can be driven by perception, and to illustrate this it is useful to think about perceptual illusions. The following video shows a strong visual illusion that people would describe in language one way when, in fact, it can be revealed to be something else.

YouTube Video, Illusion and Mental Models, What are the odds, March 2014, 2:36 minutes

This video also illustrates the interaction between prior knowledge and the interpretation of what you perceive. It also mentions the tendency to ignore information that is ambiguous or difficult to deal with, or to settle for the easiest (most available) explanation of it.

Mental representations are often referred to as mental models. Here’s one take on what they are:

Youtube Video, Mental Models, kfw., March 2011, 3:59 minutes

It turns out that much of the most advanced work on mental models has been in the applied area of user interface design. Understanding how a user thinks about or models some aspect of the world is the key difference between producing a slick, usable design and one that is unfathomable, frustrating and leads to slips and mistakes.

YouTube Video, Lecture 4.2: Mental Models, OpenCourseOnline, June 2012, 15:28 minutes

Mental models apply to people’s behaviour (output) in much the same way as they apply to sensory input.

Youtube Video, Visualization – A Mental Skill to learn, Wally Kozak, May 2010, 4:05 minutes

In the same way that an expert learns to ‘see’ patterns quickly and easily (e.g. in recognising a disease), they also learn skilled behaviours (e.g. how to perform an examination or play a game of tennis) by developing an appropriate mental representation. It is possible to apply expert knowledge in, for example, diagnosis or decision making without either language or deliberate reasoning. Once we have attained a high degree of expertise in some subject, much ‘problem solving’ becomes recognition rather than reasoning.

YouTube Video, How do Medical Experts Think?, MjSylvesterMD, June 2013, 4:44 minutes

So mental representations apply at the level of senses and behaviours as well as at the higher levels of problem solving. We can distinguish between ‘automatic’, relatively effort-free thinking (system 1 thinking in Kahneman’s terms) and conscious problem solving thought (system 2 thinking).

System 1 thinking is intuitive and can be the product of sustained practice and mastery. Most perceptual and motor skills are learned in infancy and practised to the point of mastery without our explicitly realising it. In language, a child’s intuitive understanding of grammar (e.g. that you add an s to make a plural) is automatic. System 1 thinking can apply to a seemingly simple skill, like catching a ball, or to something seemingly complex, like diagnosing a patient’s illness. A skilled general practitioner often does not have to think about a diagnosis. It is so familiar that it is a kind of pattern recognition. With the automated mechanisms of system 1 thinking you just know how to do it, or just see it. It requires no effort.

System 2 thinking, by contrast, requires effort and resources. It is the type of thinking that requires conscious navigation across the territory of one’s knowledge and beliefs. Because this consumes limited resources, it involves avoiding the pitfalls, locating the easier downhill slopes and climbing only when absolutely necessary on the way to the destination. It is as if it needs some sort of central cognitive control to allocate attention to the most productive paths.

Computational Approaches

Although, to my knowledge, Daniel Kahneman does not reference it, the mechanism whereby system 2 problem solving type thinking becomes system 1 type automated thinking was described and then thoroughly modelled back in the 1970s and 80s. It is a process called ‘universal sub-goaling and chunking’ and accounts well for empirical data on how skills are learned and improve with practice.

http://www.springer.com/computer/ai/book/978-0-89838-213-6

This theoretical model gave rise to the development of Artificial Intelligence (AI) software called ‘Soar’ to model a general problem solving mechanism.

http://en.wikipedia.org/wiki/Soar_(cognitive_architecture)

According to this mechanism, when confronted with a problem, a search is performed of the ‘problem space’ for a solution. If a solution is not found then the problem is broken down into sub-tasks and a variety of standard methods are used to manage the search for solutions to these. If solutions to sub-goals cannot be found then deeper level sub-goals can be spawned. Once a solution, or path to a solution, is found (at any level in the goal hierarchy) it is stored (or chunked) so that when confronted with the same problem next time it is available without the need for further problem solving or search.

In this way, novel problems can be tackled, and as solutions are found they effectively become automated and easy to access using minimal resource.
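
As a rough illustration of the subgoal-and-chunk idea (not the Soar implementation itself), the sketch below recursively decomposes a problem, and caches – ‘chunks’ – any solution it finds, so that the same problem is solved by direct lookup the next time it appears. The example problems, decompositions and primitive solutions are invented.

```python
# Toy illustration of universal subgoaling and chunking.
chunks = {}  # learned solutions: once found, retrieved without search

PRIMITIVE_SOLUTIONS = {
    "boil_water": "use kettle",
    "add_coffee": "one scoop in mug",
    "pour_water": "fill mug",
}

DECOMPOSITIONS = {
    "make_coffee": ["boil_water", "add_coffee", "pour_water"],
}

def solve(problem):
    # 1. Recognition: a chunked solution is retrieved immediately (system 1-like).
    if problem in chunks:
        return chunks[problem]
    # 2. Direct knowledge: a primitive operator solves it outright.
    if problem in PRIMITIVE_SOLUTIONS:
        solution = PRIMITIVE_SOLUTIONS[problem]
    # 3. Impasse: spawn subgoals, solve each, combine the results (system 2-like search).
    elif problem in DECOMPOSITIONS:
        solution = [solve(sub) for sub in DECOMPOSITIONS[problem]]
    else:
        raise ValueError(f"no way to solve {problem!r}")
    # 4. Chunking: store the result so future encounters need no further search.
    chunks[problem] = solution
    return solution

print(solve("make_coffee"))  # first call: solved by decomposition and search
print(solve("make_coffee"))  # second call: retrieved directly from the chunk store
```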

The ambitions of the Soar project, which continues at the University of Michigan, are to ‘support all the capabilities of an intelligent agent’. Project funding comes from a variety of sources, including the US Department of Defense (DARPA).

http://soar.eecs.umich.edu

The Soar architecture is covered in the following Open Courseware Module from MIT.

Youtube Video, 19. Architectures: GPS, SOAR, Subsumption, Society of Mind, MIT OpenCourseWare, January 2014, 40:05 minutes

Whatever the state of the implementation, the Soar cognitive architecture is in close alignment with much else described here. It provides insight into the following:


  • How system 1 and system 2 type thinking can be integrated into a single framework
  • How ‘navigation’ around what is currently believed or known might be managed
  • How learning occurs, and an explanation for the ‘power law of practice’ (the well-established and consistent relationship between practice and skill development over a wide range of tasks; see the note after this list)
  • How it is possible to create solutions out of fragmentary and incomplete knowledge
  • How the ‘availability’ heuristic described by Kahneman can operate to perform quick fixes and conserve resources
  • What a top-down central cognitive control mechanism might look like
  • The possible ways in which disruption to the normal operation of this high-level control mechanism might help explain conditions such as autism and dementia
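
The ‘power law of practice’ mentioned in the list above is usually written as a simple power function: the time taken to perform a task falls off as a power of the number of practice trials, improving quickly at first and then ever more slowly. The notation below is the common textbook form rather than that of any single paper.

```latex
% Power law of practice: time T to perform a task after N practice trials.
% A is the asymptotic (best achievable) time, B the improvable component,
% and alpha > 0 the learning rate.
T(N) = A + B \, N^{-\alpha}
```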


This blog post, ‘The Representation of Reality Enables Control – Part 1’, has looked at language and thought, mental models, and computational approaches to how the mind represents what it knows about the world (and itself).

Part 2 contrasts thinking in words with thinking in pictures, looking first at how evidence from brain studies informs the debate, and then concluding that all these approaches (linguistic, psychological, computational, neurophysiological and phenomenological) address much the same set of phenomena from different perspectives. Can freedom be defined in terms of our ability to reflect on our own perceptions and thoughts?