
Tag Archives: Intent


– Robots & ToM 2

Do Robots Need Theory of Mind? Part 2


Why Robots might need Theory of Mind (ToM)

Existential Risk and the AI Alignment Problem

Russell (2019) argues that we have been thinking about building artificial intelligence (AI) systems the wrong way. Since its inception, AI has attempted to build systems that can achieve ‘their own’ goals, albeit goals that we give them in the first instance. Instead, he says, we should be building AIs that understand ‘the preference structure’ that a person has, and that attempt to satisfy goals within the constraints of that preference structure.

In this way, the AI will be able to understand that acting to achieve one goal (e.g. getting a coffee) may interact or interfere with other preferences, goals or constraints (e.g. not knocking someone out of the way in the process) and thereby moderate its behaviour. An AI needs to understand that a goal is not there to be achieved ‘at all cost’. Instead it should be achieved taking into account many other preferences and priorities that might moderate it. Russell argues that if we think of building AIs in this way, we may be able to avoid the existential risk that superhuman AIs will eventually take over, and either deliberately or inadvertently wipe out humanity.

This is an example of what AI researchers have termed ‘the AI alignment problem’, which potentially creates an existential risk to humanity if we find ourselves, having built super-intelligent machines, unable to control them. Nick Bostrom (2014) has also characterised this threat, using the example of setting an AI the goal of producing paperclips: the AI takes this so literally that, in the single-minded execution of its goal and with no appreciation of when to stop, it destroys humanity (for example, in its need for more raw materials). Several other researchers have addressed the AI alignment problem (mainly in terms of laws, regulations and social rules), including Taylor et al. (2017), Hadfield-Menell & Hadfield (2019), Vamplew et al. (2018), and Hadfield-Menell, Andrus & Hadfield (2019).

Russell (2019) goes on to describe how an AI should always have some level of uncertainty about what people want. Such uncertainty would put a check on the single-minded execution of a goal at all cost. It would drive a need for the AI to keep monitoring and maintaining its model of what a person might want at any point in time. It would require the AI to keep checking that what it was doing was ‘on-track’ or ‘aligned’ with a person’s whole preference structure. So, if, for example, you had instructed your self-driving car to take you to the airport and you received a message during the trip that your child had been in a road accident, the AI might recognise this as significant, and check whether you wanted to change your plans.

Russell arrives at this position from addressing the problem of existential risk. It is a proposed solution to the AI alignment problem. Working within this frame of reference, he proposes solutions like ‘Cooperative Inverse Reinforcement Learning’ (Malik et al. 2018), whereby the Autonomous Intelligent System (AIS) attempts to infer the preference structure of a person from observations of behaviour. This, indeed, seems a sensible approach.
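As a toy illustration of this inference-from-behaviour idea (a generic Bayesian sketch, not the actual algorithm of Malik et al.; the two candidate preference structures and the softmax choice model are invented for illustration), an AIS could maintain a probability distribution over hypotheses about what a person values and update it as it watches their choices:

```python
import math

# Hypotheses about what the person values (invented for illustration).
preference_hypotheses = {
    "values_speed":   {"fast_route": 1.0, "scenic_route": 0.2},
    "values_scenery": {"fast_route": 0.2, "scenic_route": 1.0},
}

def likelihood(choice, utilities, beta=3.0):
    """Softmax ('noisily rational') choice model: the person usually,
    but not always, picks the higher-utility option."""
    exp_u = {a: math.exp(beta * u) for a, u in utilities.items()}
    return exp_u[choice] / sum(exp_u.values())

# Start with a uniform prior over the hypotheses.
posterior = {h: 1.0 / len(preference_hypotheses) for h in preference_hypotheses}

# Observe the person mostly choosing the scenic route, and update.
for observed_choice in ["scenic_route", "scenic_route", "fast_route"]:
    for h, utilities in preference_hypotheses.items():
        posterior[h] *= likelihood(observed_choice, utilities)
    norm = sum(posterior.values())
    posterior = {h: p / norm for h, p in posterior.items()}

print(posterior)  # most of the probability mass shifts to 'values_scenery'
```

Crucially, the posterior never reaches certainty, which is exactly the residual uncertainty Russell argues should keep the system monitoring and checking back with the person.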

However, the exact mechanism by which an AIS coordinates its actions with a person or people may well depend on its being able to accurately infer people’s mental states. Otherwise it might have to check explicitly (e.g. by asking), every few seconds, whether what it was doing was acceptable, and it would need to ‘read’ when a person found its behaviour unacceptable (e.g. by noting the frown when it is about to hit somebody on its mission to get the coffee).

The AI alignment problem is precisely the problem that every person has when interacting with another human being. When interacting with somebody else we are unable to directly observe their internal mental states. We cannot know their preference structure, and we can only take on trust that their intentions are what they say they are. Their real intentions, beliefs, desires, values, and boundaries could, in principle, be anything. What we do is infer their intentions from their behaviours, including what they say (and what we understand from it). Intentions, beliefs, and preferences are all hidden variables: they may be the underlying causes of behaviour, but because they are unobservable they can only be inferred.

Russell takes this on board and understands that the alignment problem is one that exists between any two agents, human or artificial. He is saying that robots need to be equipped with similar mechanisms to those that people generally have. These are the mechanisms that can model human beliefs, preferences and intentions by making inferences from observations of behaviour. Fortunately, we are not discovering and inventing these mechanisms for the first time.

Alignment with What?

A potential problem with having an AIS infer, reason and act on its analysis of another person’s mental states is that it may not accurately predict the consequences of its own actions. An action designed to do good may, in fact, do harm. In addition to being mistaken about the direction of its effect on mental states (positive or negative), it may also be inaccurate about the extent. So an act designed to please may have no effect, and an act intended to be neutral may nonetheless cause pleasure or displeasure.

This is quite apart from all manner of other complications that we might describe as its ‘policies’. Should, for example, an AIS always act to minimise harm and maximise a person’s pleasure? How should an AIS react if a person consistently fails to take medication prescribed for their benefit? How should it trade off short- and longer-term benefits? How does an AIS reconcile differences between two or more people, between a person’s legal obligations and their desires, or between the interests of a person and those of an organisation (a school, a company, their employer, the tax office and so on)?
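One crude way to make such trade-offs concrete is to treat the decision as a weighted multi-objective choice subject to hard constraints. This is a sketch only: the actions, objective scores, weights, and safety threshold below are all invented, and choosing them is precisely the hard, contested part.

```python
# Candidate actions scored against several of the person's objectives
# (all scores are invented for illustration).
actions = {
    "fetch_coffee_quickly":  {"pleasure": 0.9, "safety": 0.3, "long_term_health": 0.5},
    "fetch_coffee_carefully": {"pleasure": 0.7, "safety": 0.9, "long_term_health": 0.5},
    "suggest_water_instead": {"pleasure": 0.4, "safety": 0.9, "long_term_health": 0.8},
}

# Relative importance of each objective (an assumption, not a given).
weights = {"pleasure": 0.5, "safety": 0.3, "long_term_health": 0.2}
SAFETY_FLOOR = 0.5  # hard constraint: never pick an action below this

def score(objective_scores):
    """Weighted sum across objectives."""
    return sum(weights[k] * v for k, v in objective_scores.items())

# Filter out actions that violate the hard constraint, then pick the best.
admissible = {a: s for a, s in actions.items() if s["safety"] >= SAFETY_FLOOR}
best = max(admissible, key=lambda a: score(admissible[a]))
print(best)  # the fast-but-unsafe option is excluded outright
```

Even this toy version surfaces the policy questions above: who sets the weights, who sets the floor, and which stakeholders’ objectives appear in the table at all.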

In all these cases, the issue comes down to how the AIS evaluates its own choice of possible actions (or inaction), and which stakeholders it takes into account when performing this evaluation. Numerous guidelines have been produced in recent years to help guide developers of AI systems. The good news is that there is considerable agreement about the kinds of principles that apply: not contravening human rights, not doing harm, increasing wellbeing, transparency and explainability in how the AIS arrives at decisions, elimination of bias and discrimination, and clear accountability and responsibility for the AIS’s decisions. The main mechanism for putting these principles into practice is the training and control (through guidelines, standards and laws) of companies, designers and developers. Comparatively little has been proposed about the controls that might be embedded within the AIS itself, and even less about the principles and mechanisms that might be used to achieve this.

We could turn to economics for models of preference and choice, but these models have been discredited by findings in the social sciences (e.g. prospect theory), and many would argue that the incentives encouraged by such models are precisely what has led to existential risks like nuclear arms races and climate change. We would therefore need to think very carefully before using these same models to drive the design of artificial intelligences, given their potential to add yet another existential risk.

The existential risk discussed in relation to AISs has tended to focus on the fear that if an artificial intelligence is given autonomy to achieve its objectives without constraint, then it might do anything. Even simple systems can become unpredictable very quickly, and a system that is unpredictable is out of control. In the anthropomorphic way characteristic of human beings, we project onto the AIS that it would be concerned about its own self-preservation, or that it would discover that self-preservation was a necessary pre-condition to attaining its goal(s). We further project that if it adopts the goal of self-preservation, then it might pursue this at all cost, putting its own self-preservation ahead of even that of its creators. There are some good reasons for these fears, because goals like self-preservation and the accumulation of resources are instrumental to the achievement of almost any other goal, and an AIS might easily reason that out (Bostrom 2012). There have been challenges to this line of reasoning, but that debate is not a central concern here. Rather, I am more concerned with whether an AIS can align with the goals of an individual using the same sorts of social cues that we all use in the informal ways in which we generally cooperate with each other.

If we are already concerned that the economic and political systems currently in place can have some undesirable consequences, like other existential risks and concentrations of wealth in the hands of a few, then the last thing we would want to do is build into AISs the same mechanisms for evaluating choices as those assumed by classical economic theory. In these posts, I look primarily to psychology (and sometimes philosophy) to provide evidence and analysis of how people make decisions in a social world, particularly one in which we are taking into account our beliefs about other people’s mental states. Whether this provides an answer to the alignment problem remains to be seen, but it is, at least, another perspective that may help us understand the types of control mechanisms we may need as the development of AIS proceeds at an ever increasing pace.

Cooperation and Collaboration


The paradigm in which robots act as slaves to their human masters is gradually being replaced by one in which robots and humans work collaboratively to achieve some goal (Sheridan 2016). This applies both to individual human-robot interactions and to multi-robot teams (Rosenfeld et al. 2017). If robots, and AISs generally, could infer the mental states of the people around them when performing complex tasks, this could potentially lead to more intuitive and efficient collaboration between person and machine. It requires trust on the part of the human that the robot will play its part in the interaction (Hancock et al. 2011).

As a step on the way, systems have been built in which robots collaborate with each other, without communication, to perform complex tasks using only visual cues (Gassner et al. 2017). Collaboration is especially useful in situations like care-giving (Miyachi, Iga & Furuhata 2017), where giving explicit verbal instructions might be difficult (e.g. in cases of Alzheimer’s or autism). Gray et al. (2005) proposed a system of action parsing and goal simulation whereby a robot might infer the goals and mental states of others in a collaborative task scenario.

Potential Benefits

Equipping AISs with the ability to recognise, infer and reason about the mental states of others could have some extraordinary advantages. Not only might we avoid existential risk to humanity (and could there be anything of greater significance) and make our interactions with robots and AISs easy and intuitive, but we could also be living alongside intelligent artefacts that have a robust capacity for moral reasoning. Not only could they keep themselves in check, so that they made only justifiable moral decisions with respect to their own actions, but they might also help us adjudicate our own actions, offering fair, reasonable and justifiable remedies to human transgressions of the law and other social codes. They might become reliable and trustworthy helpers and companions, politely guiding us in solving currently intractable world problems, and protecting us from our own worst human biases, vices and deficiencies. If they turned out to be better at moral reasoning than people, then, like wise philosophers, they could offer us considered advice to help us achieve our goals and deal with the dilemmas of everyday life.

However, much stands in the way of achieving this utopian relationship with the intelligent artefacts we create, especially if we want an AIS to infer mental states in the same way a person might: by observation and perhaps by asking questions. We are beginning to understand patterns of neuronal activity sufficiently well to infer some mental states. For example, Haynes et al. (2007) report being able to tell which of two choices a person is making by looking at neural activity. Elon Musk is creating ‘Neural Lace’ for such a purpose (Cuthbertson 2016), but could mental states be inferred using non-invasive approaches?

In particular, could we create AISs that could infer our mental states without inadvertently creating an even greater and more immediate existential risk? I will later argue that giving AISs theory of mind, without the same sort of controls on social behaviour that empathy gives people, could be a disaster that heightens existential risk in our very attempt to avoid it. In subsequent posts I first consider whether the artificial inference of human mental states is even a credible possibility.


Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Bostrom, N., (2014). Superintelligence: Paths, Dangers, Strategies (1st. ed.). Oxford University Press, Inc., USA.

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Gassner, M., Cieslewski, T., & Scaramuzza, D. (2017). Dynamic collaboration without communication: Vision-based cable-suspended load transport with two quadrotors. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 5196-5202). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989609

Gray, J., Breazeal, C., Berlin, M., Brooks, A., & Lieberman, J. (2005). Action parsing and goal inference using self as simulator. In Proceedings – IEEE International Workshop on Robot and Human Interactive Communication (Vol. 2005, pp. 202–209). https://doi.org/10.1109/ROMAN.2005.1513780

Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 417–422). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314250

Hadfield-Menell, D., Andrus, M., & Hadfield, G. K. (2019). Legible normativity for AI alignment: The value of silly rules. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 115–121). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314258

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Malik D., Palaniappan M., Fisac J., Hadfield-Menell D., Russell S., and Dragan A., (2018) “An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning.” In Proc. ICML-18, Stockholm.

Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. In Procedia Computer Science (Vol. 112, pp. 1254–1262). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.08.078

Rosenfeld, A., Agmon, N., Maksimov, O., & Kraus, S. (2017). Intelligent agent supporting human-multi-robot team collaboration. Artificial Intelligence, 252, 211-231. https://doi.org/10.1016/j.artint.2017.08.005

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control (1st ed.). Allen Lane. ISBN-13: 978-0241335208

Sheridan, T. B. (2016). Human-Robot Interaction: Status and Challenges. Human Factors, 58(4), 525-32. https://doi.org/10.1177/0018720816644364

Taylor, J., Yudkowsky, E., Lavictoire, P., & Critch, A. (2017). Alignment for Advanced Machine Learning Systems. Miri, 1–25. Retrieved from https://intelligence.org/files/AlignmentMachineLearning.pdf

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

– Robots & ToM

Do Robots Need Theory of Mind? – part 1

Robots, and Autonomous Intelligent Systems (AISs) generally, may need to model the mental states of the people they interact with. Russell (2019), for example, has argued that AISs need to understand the complex structures of preferences that people have, in order to trade off many human goals and thereby avoid the problem of existential risk (Boyd & Wilson 2018) that might follow from an AIS with super-human intelligence doggedly pursuing a single goal. Others have pointed to the need for AISs to maintain models of people’s intentions, knowledge, beliefs and preferences, so that people and machines can interact cooperatively and efficiently (e.g. Lemaignan et al. 2017, Ben Amor et al. 2014, Trafton et al. 2005).

However, in addition to risks already well documented (e.g. Müller & Bostrom 2016) there are many potential dangers in having artificial intelligence systems closely observe human behaviour, infer human mental states, and then act on those inferences. Some of the potential problems that come to mind include:

  • The risk that self-determining AISs will be built with only a limited capability of understanding human mental states and preferences, and that humans will lose control of the technology (Meek et al. 2017, Russell 2019).
  • The risk that the AIS will exhibit unfair biases in what it selects to observe, infer and act on (Osoba & Welser 2017).
  • The risk that the AIS will use misleading information, make inaccurate observations and inferences, and base its actions on these (McFarland & McFarland 2015, Rapp et al. 2014).
  • The risk that even if the AIS observes and infers accurately, its actions will not align with what a person might do, or may have unintended consequences (Vamplew et al. 2018).
  • The risk that an AIS will misuse its knowledge of a person’s hidden mental states, resulting in deliberate or inadvertent harm or criminal acts (Portnoff & Soupizet 2018).
  • The risk that people’s human rights and rights to privacy will be infringed, because AISs can observe, infer, reason about and record data that people have not consented to and may not even know exists (OECD 2019).
  • The risk that if the AIS makes decisions based on unobservable mental states, any explanations of its actions based on them will be difficult to validate (Future of Life Institute 2017, Weld & Bansal 2018).
  • The risk that the AIS will, in the interests of a global common good, correct for people’s foibles, biases and dubious (unethical) acts, thereby taking away their autonomy (Timmers 2019).
  • The risk that, using AISs, a few multi-national companies and countries will collect so much data about people’s explicit and inferred hidden preferences that power and wealth will become concentrated in even fewer hands (Zuboff 2019).
  • The risk that corporations will rush to be the first to produce AISs that can observe, infer and reason about people’s mental states, and in so doing will neglect to take safety precautions (Armstrong et al. 2016).
  • The risk that, in acting out of some greater interest (i.e. the interests of society at large), an AIS will restrict the autonomy or dignity of the individual (Zardiashvili & Fosch-Villaronga 2020).
  • The risk that an AIS will itself take unacceptable risks, based on inferred but uncertain mental states, that may cause a person or itself harm (Merritt et al. 2011).

Much has been written about the risks of AI, and in the last few years numerous ethical guidelines, principles and recommendations have been made, especially in relation to regulating the development of AISs (Floridi et al. 2018). However, few of these have touched on the real risk that AISs may one day develop to the point where they can gain a good understanding of people’s unobservable mental states and act on that understanding. We have already seen Facebook being used to target advertisements and persuasive messages on the basis of inferred political preferences (Isaak & Hanna 2018).

In future posts I look at the extent to which an AIS could have the capability to infer other people’s mental states. I touch on some of the advantages and dangers, and identify some of the issues that may need to be thought through.

I argue that AISs generally (not only robots) may need not only to model people’s mental states, a capacity known in the psychology literature as Theory of Mind, or ToM (Carlson et al. 2013), but also to have some sort of emotional empathy. Neural nets have already been used to create algorithms that demonstrate some aspects of ToM (Rabinowitz et al. 2018). I explore the idea of building AISs with both ToM and some form of empathy, and the idea that unless we can equip AISs with a balance of control mechanisms, we run the risk of creating AISs with ‘personality disorders’ that we would want to avoid.

In making this case, I look at whether it is conceivable that we could build AISs that have both ToM and emotional empathy, and that if it were possible, how these two capacities would need to be integrated to provide an effective overall control mechanism. Such a control mechanism would involve both fast (but sometimes inaccurate) processes and slower (reflective and corrective) processes, similar to the distinctions Kahneman (Kahneman 2011) makes between system 1 and system 2 thinking. The architecture has the potential for the fine-grained integration of moral reasoning into the decision making of an AIS.
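A minimal sketch of what such a two-process control architecture might look like follows. It is entirely hypothetical: the observations, contexts, and confidence threshold are invented, and a real system would use learned models in place of these lookup tables. The point is only the shape of the control flow, with a fast guess that a slower, context-sensitive process can override.

```python
def fast_system1(observation):
    """Quick, pattern-matched guess at the person's mental state, with a
    confidence score (a stand-in for a learned classifier)."""
    guesses = {"frown": ("displeased", 0.6), "smile": ("pleased", 0.9)}
    return guesses.get(observation, ("unknown", 0.0))

def slow_system2(observation, context):
    """Deliberate re-evaluation using context; slower but more reliable.
    Returns a corrected state, or None if the fast guess stands."""
    if observation == "frown" and context.get("bright_sunlight"):
        return "squinting, not displeased"
    return None

def infer_mental_state(observation, context, confidence_threshold=0.7):
    """Accept the fast guess when confident; otherwise invoke the slow,
    corrective process, echoing Kahneman's system 1 / system 2 split."""
    state, confidence = fast_system1(observation)
    if confidence < confidence_threshold:
        corrected = slow_system2(observation, context)
        if corrected is not None:
            return corrected
    return state

print(infer_mental_state("frown", {"bright_sunlight": True}))
```

Here the low-confidence fast guess triggers the slow system, which uses context to overturn the initial reading; a high-confidence guess (the smile) would pass through unexamined.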

What I hope to add to Russell’s (2019) analysis is a more detailed consideration of what is already known in the psychology literature about the general problem of inferring another agent’s intentions from their behaviour. This may help to join up some of the thinking in AI with some of the thinking in cognitive psychology in a very broad-brushed way such that the main structural relationships between the two might come more into focus.

Subscribe (top left) to follow future blog posts on this topic.


Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human-robot cooperation tasks. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 2831–2837). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2014.6907265

Boyd, M., & Wilson, N. (2018). Existential Risks. Policy Quarterly, 14(3). https://doi.org/10.26686/pq.v14i3.5105

Carlson, S. M., Koenig, M. A., & Harms, M. B. (2013). Theory of mind. Wiley Interdisciplinary Reviews: Cognitive Science, 4(4), 391–402. https://doi.org/10.1002/wcs.1232

Floridi, L., Cowls, J., Beltrametti, M. et al., (2018), AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 doi:10.1007/s11023-018-9482-5

Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. Future of Life, 1–23. Retrieved from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Lemaignan, S., Warnier, M., Sisbot, E. A., Clodic, A., & Alami, R. (2017). Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, 45–69. https://doi.org/10.1016/j.artint.2016.07.002

McFarland, D. A., & McFarland, H. R. (2015). Big Data and the danger of being precisely inaccurate. Big Data and Society. SAGE Publications Ltd. https://doi.org/10.1177/2053951715602495

Meek, T., Barham, H., Beltaif, N., Kaadoor, A., & Akhter, T. (2017). Managing the ethical and risk implications of rapid advances in artificial intelligence: A literature review. In PICMET 2016 – Portland International Conference on Management of Engineering and Technology: Technology Management For Social Innovation, Proceedings (pp. 682–693). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PICMET.2016.7806752

Merritt, T., Ong, C., Chuah, T. L., & McGee, K. (2011). Did you notice? Artificial team-mates take risks for players. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 338–349). https://doi.org/10.1007/978-3-642-23974-8_37

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Oecd/Legal/0449. Retrieved from http://acts.oecd.org/Instruments/ShowInstrumentView.aspx?InstrumentID=219&InstrumentPID=215&Lang=en&Book=False

Osoba, O., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. https://doi.org/10.7249/rr1744

Portnoff, A. Y., & Soupizet, J. F. (2018). Artificial intelligence: Opportunities and risks. Futuribles: Analyse et Prospective, 2018-September(426), 5–26.

Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., & Botvinick, M. (2018). Machine Theory of Mind. In 35th International Conference on Machine Learning, ICML 2018 (Vol. 10, pp. 6723–6738). International Machine Learning Society (IMLS).

Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate information. Memory and Cognition, 42(1), 11–26. https://doi.org/10.3758/s13421-013-0339-0

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control (1st ed.). Allen Lane. ISBN-13: 978-0241335208

Timmers, P., (2019), Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds & Machines 29, 635–645 doi:10.1007/s11023-019-09508-4

Trafton, J. G., Cassimatis, N. L., Bugajska, M. D., Brock, D. P., Mintz, F. E., & Schultz, A. C. (2005). Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 35(4), 460–470. https://doi.org/10.1109/TSMCA.2005.850592

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

Weld, D. S., & Bansal, G. (2018). Intelligible artificial intelligence. ArXiv, 62(6), 70–79. https://doi.org/10.1145/3282486

Zardiashvili, L., & Fosch-Villaronga, E. (2020). “Oh, Dignity too?” Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds and Machines. https://doi.org/10.1007/s11023-019-09514-6

Zuboff, S. (2019). The Age of Surveillance Capitalism. Profile Books.

– What does it mean to be human?

John Wyatt is a doctor, author and research scientist. His concern is the ethical challenges that arise with technologies like artificial intelligence and robotics. On Tuesday this week (11th March 2019) he gave a talk called ‘What does it mean to be human?’ at the Wesley Methodist Church in Cambridge.

To a packed audience, he pointed out how interactions with artificial intelligence and robots will never be the same as the type of ‘I – you’ relationships that occur between people. He emphasised the important distinction between ‘beings that are born’ and ‘beings that are made’ and how this distinction will become increasingly blurred as our interactions with artificial intelligence become commonplace. We must be ever vigilant against the use of technology to dehumanise and manipulate.

I can see where this is going. The tendency for people to anthropomorphise is remarkably strong: ‘the computer won’t let me do that’, ‘the car has decided not to start this morning’. Research shows that we can attribute intentions even to animated geometrical shapes ‘chasing’ each other around a computer screen, let alone to cartoons. Just how difficult is it going to be not to attribute the ‘human condition’ to a chatbot with an indistinguishably human voice, or to a realistically human robot? Children are already being taught to say ‘please’ and ‘thank you’ to devices like Alexa, Siri and Google Home – maybe a good thing in some ways, but …

One message I took away from this talk was a suggestion for a number of new human rights in this technological age: (1) the right to cognitive liberty (to think whatever you want), (2) the right to mental privacy (without others knowing), (3) the right to mental integrity, and (4) the right to psychological continuity – the last two concerning the preservation of ‘self’ and ‘identity’.

A second message was to consider which country was most likely to make advances in the ethics of artificial intelligence and robotics. His conclusion – the UK. That reassures me that I’m in the right place.

See more of John’s work, such as his essay ‘God, neuroscience and human identity’ at his website johnwyatt.com


– Managing demands (HOS 2)

How we manage the demands on us has been a preoccupation of mine since the day I came to the realisation that a lot of what runs through my own mind can be explained in terms of what psychologists call the management of ‘cognitive load’ or ‘mental workload’. We all, to some extent, ‘manage’ what we think about, but we rarely reflect on exactly how we do it.

Sometimes there are so many things that need to be thought about (and acted upon) that it is overwhelming, and some management of attention is needed just to get through the day and maintain performance. If you need convincing that workload can affect performance, consider the research on distractions when driving. (A more comprehensive analysis of ‘the distracted mind’ can be found at the end of this posting.)

YouTube Video, The distracted mind, TEDPartners, December 2013, 1:39 minutes

At other times you find yourself twiddling your thumbs, as if waiting for something to happen, or a thought to occur that will trigger action. We sometimes cease to be in the grip of circumstances and our minds can run free.

If you keep asking the question ‘why?’ about anything that you do, you eventually arrive at a small number of answers. If we leave aside metaphysical answers like ‘because it is the will of God’ for the moment, these are generally ‘to keep safe’ or ‘to be efficient’. On the way to these fundamental answers, and intimately related to them, is ‘to optimize cognitive load’. Not to do so compromises both safety and efficiency.

Being overwhelmed by the need to act, and by the thinking this necessitates in evaluating the choices that precede action, leads to anxiety, and anxiety interferes with the capacity to make good choices. Being underwhelmed leads to boredom and lethargy, a lack of care about choices, and a tendency to procrastinate.

It seems that to perform well we need an optimal level of arousal or stimulation.

YouTube Video, Performance and Arousal – Part 1 of 3: Inverted U Hypothesis, HumberEDU, January 2015, 5:05 minutes
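This inverted-U relationship can be given a toy quantitative sketch. The Gaussian shape and all the numbers below are my own illustrative assumptions, not something from the video:

```python
from math import exp

def performance(arousal, optimum=0.5, width=0.25):
    """Toy inverted-U curve: performance peaks at a moderate
    level of arousal (on a 0..1 scale) and falls off when we
    are under- or over-stimulated."""
    return exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# A bored (0.05) or panicked (0.95) state underperforms an engaged (0.5) one.
bored, engaged, panicked = performance(0.05), performance(0.5), performance(0.95)
```

The exact curve does not matter; the point is only that performance is not monotonic in stimulation.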

In the longer term, to be ‘psychologically healthy’ we need optimal levels of arousal ‘on average’ over a period of time.

Being constantly overwhelmed leads from stress to anxiety and onwards to depression. It can even lead to an early death.

TED Video, ‘The science of cells that never get old’ – Elizabeth Blackburn, TED, April 2017, 18:46 minutes

Being constantly underwhelmed also leads to depression, via a different route.

How much load we can take depends on our resources – both cognitive and otherwise. We can draw on reserves of energy and a stock of strategies, such as prioritizing, for managing mental workload. If the demands on us are too great and we have some external resources, like somebody else who can provide advice or direction, or the money to pay for it, then we can use those to lessen the load. Whenever we draw on our own capacities and resources we can both enhance and deplete them: like exercising a muscle, regular and moderate use strengthens them, but prolonged and heavy use tires or depletes them. When we draw on external resources, like money or favours, use tends simply to deplete them.
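The muscle analogy can be sketched as a toy model. The class, thresholds and rates below are all illustrative assumptions, not a psychological claim:

```python
class Resource:
    """Toy model of a mental capacity: exertion depletes the
    current level, rest restores it, and regular moderate use
    slowly raises the ceiling, like exercising a muscle."""

    def __init__(self, capacity=100.0):
        self.capacity = capacity   # the ceiling ('muscle strength')
        self.level = capacity      # what is available right now

    def exert(self, effort):
        self.level = max(0.0, self.level - effort)
        if effort <= 0.2 * self.capacity:   # moderate use strengthens
            self.capacity += 0.01 * effort

    def rest(self, hours):
        self.level = min(self.capacity, self.level + 10.0 * hours)
```

A heavy day (`exert(50)`) halves what is available; a night’s rest (`rest(8)`) restores it, while repeated moderate exertion nudges the ceiling upwards.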

Measurement of Load

So how can we measure the amount of load a person is carrying? This is tricky, because some people have more resource and capacity (both internal and external) than others, so observing their activity may not be a very accurate measure of load. If you are very practiced or skilled at something, it is much easier to do than if you are learning it for the first time. Also, some people are simply less bothered than others about whether they achieve what they have to do (or want to do). Even the same person can ‘re-calibrate’: if pressure of work is causing stress, for example, they can re-assess how much it matters that the job gets done. Some capacities replenish with rest, so something may be easy at one time of day but harder at another, say at the end of a long day.

In fact, there are so many factors, some interacting, that any measure – say of stress, through the chemicals in the blood or the amount of sweating on the skin – is difficult to attribute to a particular cause.

The capacity of our thinking processes is limited. We can really only focus on one difficult task at a time. We even stop whatever we were doing (even an automatic task like walking) while formulating the response to a difficult question.

BBC Radio 4, The Human Zoo, Series 1 Episode 1, First Broadcast about 2014, 28 minutes

We can use our thinking capacity to further our intentions but we so often get caught up in the distractions of everyday life that none is left for addressing the important issues.

The Personal Agenda

Another way of looking at it is from the point of view of each person’s agenda and how they deal with it. It is as if you asked a person to write down a ‘to do’ list with everything they could think of on it. We all do this from time to time, especially when there is too much going on in our heads and we need to set everything out and see what is important.
I will construct such a list for myself now:

  • Continue with what I am writing
  • Get ready for my friend who is coming for coffee
  • Figure out how to pay my bills this month
  • Check with my son that he has chosen his GCSE options
  • Tell my other friends what time I will meet them tonight
  • Check that everything is OK with my house at home (as I am away at the moment)

Each of these agenda items is a demand on my attention. It is as if each intention competes with the others to get my focus. They each shout their demands, and whichever is shouting loudest at the time, wins. Maybe not for long. If I realise that I can put something off until later, it can quickly be dismissed and slip back down the agenda.

But the above list is a particular type that is just concerned with a few short-term goals – it’s the things that are on my mind today. I could add:

  • Progress my project to landscape the garden
  • Think through how to handle a difficult relationship

Or some even longer-term, more aspirational and less defined goals:

  • Work out how I will help starving children in Africa
  • Maintain and enhance my wellbeing

The extended agenda still misses out a whole host of things that ‘go without saying’ such as looking after my children, activities that are defined during the course of going to work, making sure I eat and sleep regularly, and all tasks that are performed on ‘autopilot’ such as changing gear when driving. It also misses out things that I would do ‘if the opportunity arose’ but which I do not explicitly set out to do.

What characterizes the items that form the agenda? They are all intentions of one sort or another but they can be classified in various ways. Many concern obligations – either to family, friends, employers or society more generally. Some are entirely self-motivated. Some have significant consequences if they are not acted upon, especially the obligations, whereas others matter less. Some need immediate attention while others are not so time critical. Some are easy to implement while others require some considerable training, preparation or the sustained execution of a detailed plan. Some are one-offs while others are recurring, either regularly or in response to circumstances. This variation tends to mask the common characteristic that they are all drivers of thought and behaviour.

Intentions bridge between, and include, both motives and goals. Generally we can think of motives as the inputs and goals as the outputs (although each can play either role). Both the motives and the goals of an intention can be vague. In fact, an intention can exist without you knowing either why it exists or what it is to achieve. You can copy somebody else’s intention in ignorance of both motive and goal. In the sense of intention as merely a pre-disposition to act, you need not even be aware of an intention. Often you don’t know how you will act until the occasion demands.

Agenda Management

Given that there are perhaps hundreds or even thousands of intentions, large and small, all subsisting in the same individual, what determines what a person does at any particular point in time? It all depends on priority and circumstance. Priority pushes items towards the top of the list, and circumstance often determines when and how they drive thought and behaviour.


Priority itself is not a simple idea. There are many factors affecting priority, including emotion, certainty of outcome and timing, and these factors tend to interact. I may feel strongly that I must help starving children in Africa, and although I know that every moment I delay may mean a life lost, I cannot be certain that my actions will make a difference, and I may think of a more effective plan at a later time. When I have just seen a programme on TV about Africa I may be highly motivated, but a day later I may have become distracted by the need to deal with what now appear to be more urgent issues, where I can be more certain of the outcome.

Priority and Emotion

It is as if my emotional reaction to the current content of my experience is constantly jiggling the priorities on my agenda of intentions. As the time approaches for my friend to arrive I start to feel increasingly uncomfortable that I have not cleared up. Events may occur that ‘grab’ my attention and shoot priorities to the top of the agenda. If I am surprised by something I turn my attention to it. Similarly, if I feel threatened. Whereas, when I am relaxed my mind can wander to matters inside my head – perhaps my personal agenda. If I am depressed my overall capacity to attend to and progress intentions is reduced.

Emotions steer our attention. They determine priority, and attention is focused on the highest priority item. Emotion, priority and attention are intimately related. Changing emotions continuously wash across intentions, reordering their priority; they modulate the priorities of the intentions of the now.
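This ‘jiggling’ of priorities can be sketched as a toy computation. The names are drawn loosely from the agenda above, and the numbers are purely illustrative assumptions:

```python
def top_intention(agenda, emotional_boost):
    """Toy sketch of an emotionally modulated agenda: each intention
    has a base priority, and the current emotional state adds a boost
    that can reorder the list and change what wins attention."""
    def score(item):
        name, base = item
        return base + emotional_boost.get(name, 0.0)
    return max(agenda.items(), key=score)[0]

agenda = {
    "continue writing": 5.0,
    "tidy up for my friend": 3.0,
    "pay the bills": 4.0,
}
```

On a calm morning, writing wins; as my friend’s arrival approaches, growing discomfort boosts tidying to the top of the agenda.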

Emotion provides the motive force that drives attention to whatever it is that you are attending to. If you are working out something complicated in your head, it is the emotion associated with wanting to know the answer that provides the motive force to turn the cogs. This applies even when the intention is to think through something rationally. When in the flow of rational thought (say in doing a mental arithmetic problem) it is emotion that motivates it.

There is a host of literature on emotional memory (i.e. how emotions, especially traumatic ones, are laid down in memory). There is also a large literature on how memories may be re-constructed, often inaccurately, rather than retrieved. The following illustrates both emotional memory of traumatic events and the frequent inaccuracies of re-construction:

TED Video, Emotional Memory: Shawn Hayes at TEDxSacramento 2012, TEDx Talks, March 2013, 8:10 minutes

It is well established that the context in which a memory is laid down affects the circumstances in which the memory is retrieved. For example, being in a particular place, or experiencing a particular smell or taste, may trigger the retrieval of memories specific to that place or smell. The context supplies the cue or key to ‘unlocking’ the memory. However, there is comparatively little literature on how emotions trigger memories, although there has been research on ‘mood-dependent memory’ (MDM), e.g.

Eich, E., Macaulay, D., & Ryan, L. (1994). Mood dependent memory for events of the personal past. Journal of Experimental Psychology: General, 123(2), 201–215


It seems plausible that emotions act as keys or triggers that prime particular memories, thoughts and intentions. In fact, the research indicates that mood dependent memory is more salient in relation to internal phenomena (e.g. thoughts) than external ones (such as place). Sadness steers my attention to sad things and the intentions I associate with the object(s) of my sadness. Indifference will steer my attention away from whatever I am indifferent about and release attention for something more emotionally charged. Love and hate might equally steer attention to its objects. Injustice will steer attention to ascertaining blame. The task of identifying who or what to blame can be as much an intention as any other.

Priority and Time – The Significance of Now

Intentions formulated and executable in ‘the now’ assume greater priority than those formulated in the past, or those that may only have consequences in the future.

The now is of special significance because that is where attention is focused. Past intentions slip down the list like old messages in an email inbox. You focus on the latest delivery – the now.

The special significance of ‘the now’ is increasingly recognised, not just as a fact of life but as something to become more conscious of, and to savour.

YouTube Video, The Enjoyment of Being with Eckhart Tolle author of THE POWER OF NOW, New World Library, July 2013, 4:35 minutes

Indeed the whole movement of mindfulness, with its focus on ‘the now’ and conscious experience, has grown up as an approach to the management of stress and the development of mental strategies.

YouTube Video, The Science of Mindfulness, Professor Mark Williams, OxfordMindfulness, December 2011, 3:34 minutes

Priority and Time in Agenda Management

If I am angry now then my propensity will be high to act on that anger now if I am able to. Tomorrow I will have cooled off and other intentions will have assumed priority. Tomorrow I may not have ready access to the object of my anger. On the other hand, if tomorrow an opportunity arises by chance (without me having created it), then perhaps I will seize it and act on the anger then. As in crime, we are driven by the motive, the means and the opportunity.

Many intentions recur – the intentions to eat, drink, sleep and seek social interaction all have a cyclical pattern and act to maintain a steady state, or a state that falls within certain boundaries (homeostasis). You may also need to revive an old intention whether or not it is cyclically recurring. Reviving an intention pushes it back up the list (towards the now), and when some homeostatic system (like hunger and eating) gets out of balance, the corresponding recurring intention is pushed back up the list.

Intentions that impact the near future also take priority over intentions that affect the far future. So it is easier to make a cup of tea than to sit down and write your will (except when death is in the near future). We exponentially discount the future: subjectively, 1 minute, 1 hour, 1 day, 1 week, 1 month, 1 season and 1 year feel equally far apart. What happens in the next minute is as important to us as what will happen over the next year.
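Exponential discounting can be sketched in a few lines. The decay rate `k` and the payoff values are arbitrary assumptions, there only to show the shape of the effect:

```python
from math import exp

def present_value(value, delay_days, k=0.05):
    """Exponential discounting sketch: the subjective worth of a
    future payoff decays with how far away it feels."""
    return value * exp(-k * delay_days)

# A small payoff now can outweigh a large payoff far in the future:
tea_now = present_value(1.0, 0)            # making a cup of tea today
will_later = present_value(100.0, 365.0)   # writing a will, felt a year away
```

With these numbers the cup of tea wins, which is exactly the will-writing asymmetry described above.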

Intentions Interact

However, from the point of view of establishing the principles of a control mechanism that determines our actions at any point there are other complications and considerations. Often our intentions are incompatible or compete with each other. I cannot vent my anger and fulfil an intention not to hurt anybody. I cannot eat and stay thin. I cannot both go to work tomorrow and stay home to look after my sick child. Therefore, some intentions inhibit others leading to a further jiggling of the priorities.

Prioritising what is Easy

A major determinant of what we actually do is what is easiest to do. So actions that are well learned or matters of habit get done without a second thought but intentions that are complicated or difficult to achieve are constantly pushed down the stack, however important they are. Easy actions consume less resource. If they are sufficiently difficult and also sufficiently important we become pre-occupied by thinking about them but are unable to act.
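One way to caricature this in code: weight each intention’s importance by how easy it is to act on. The weighting scheme, task names and numbers are purely illustrative assumptions:

```python
def what_gets_done(tasks):
    """Toy sketch of the 'easy wins' effect: the action actually
    taken is the one whose importance, weighted by ease, comes out
    on top - so a trivial habit can out-compete an important but
    difficult project."""
    return max(tasks, key=lambda t: t["importance"] * t["ease"])

tasks = [
    {"name": "landscape the garden", "importance": 8.0, "ease": 0.1},
    {"name": "check emails",         "importance": 1.0, "ease": 1.0},
]
```

Here the garden project loses to checking emails despite being eight times more important, because it is ten times harder to start.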

Daniel Kahneman, in his book ‘Thinking, Fast and Slow’, sets out much of the experimental evidence showing how in thought we tend towards the easy options.

Youtube Video, Cognitive ease, confirmation bias, endowment effect – Thinking, Fast and Slow (Part 2), Fight Mediocrity, June 2015, 5:50 minutes

How often do we get up in the morning with the firm resolve to do a particular thing and then become distracted during the day by what seem to be more immediate demands or attractive alternatives? It is as if our intentions are being constantly pushed around by circumstances and our reactions to them and all that gets done are the easy things – where by chance the motive, the means, and the opportunity all fortuitously concur in time.

Staying on task is difficult. It requires a single-minded focus of attention and a resistance to distraction. It is sometimes said that ‘focus’ is what differentiates successful people from others, and while that may be true in the achievement of a particular goal, it is at the expense of paying attention to other competing intentions.

Implications for The Human Operating System

The above account demonstrates how as people we interleave multiple tasks in real time, partly in response to what is going on around us and partly in response to our internal agenda items. We do this with limited resources, depleting and restoring capacities as we go. What differentiates us from computers is the way in which priorities are continuously and globally changing, such that attention is re-directed in real time to high priority items (such as threats and the unexpected). Part of this is in response to our ability to retrieve relevant memories cued by the external world and our internal states, to reflect on (and inhibit) our own thinking processes, and to run through and evaluate mental simulations of possible futures.


  • In order to perform effectively we need to manage the demands on us
  • Having too much, or too little, to do and think about can lead to stress in the short term and, if it goes on for too long, to depression.
  • We have limited resources and capacities which can become depleted but that can also be restored (e.g. with rest)
  • Measuring the amount of load a person is under is not simple as people have different resources, abilities and capacities
  • Whether or not we write it down or say it, we all have an implicit list of intentions
  • We prioritise the items on this list in a variety of ways
  • Circumstances, our emotional reactions and timing are all crucial factors in determining priority
  • We also tend to prioritise things that are easy to do (i.e. do not use up effort, time or other resources)
  • Being able to manage priorities and interleaving our intentions in response to circumstances and opportunity, is a key aspect of the human operating system

This Blog Post: ‘Human Operating System 2 – Managing Demands’ introduces how we deal with the complex web of intentions (our own and those externally imposed) that forms part of our daily lives

Next Up: ‘Policy Regulates Behaviour’ shows that not all intentions are equal. Some intentions regulate others, in both the individual and society.

Further Information

Youtube Video, The Distracted Mind, UCI Open, April 2013, 1:12:37 hours

– Are we free?

Consciousness, freedom and moral responsibility

Answering the question ‘Are we free?’ at the level of society suggests that the big multi-nationals have assumed control over many aspects of our lives (see: “What is control?”). Answering the same question at the level of the individual is much more difficult. It raises some profound philosophical questions to do with consciousness, freewill and moral responsibility. In considering issues of moral responsibility it is worth first examining ideas about the nature of consciousness and freewill.

There are legal definitions of responsibility and culpability that can vary from one legislative system to another. There are definitions within moral philosophy (e.g. Kant’s Categorical Imperative). There are mental health definitions that aim to ascertain whether a person has ‘mental capacity’. However, it is generally accepted that to have moral responsibility people need to consciously exercise freewill over the choices they make. Moral responsibility entails having freewill, and for people, freewill entails a self-determined and deliberate conscious decision.


John Searle regards consciousness as an emergent property of biological processes. There are no contradictions between materialistic, mentalistic and spiritual accounts. They are just different levels of description of the same phenomena. Consciousness is to neuroscience as liquid is to the chemistry of H2O. There is no mind / body problem – mind and body are again just different levels of description. It’s linguistic usage that confuses us. Consciousness does confer meaning onto things but that does not imply that subjective reality cannot be studied using objective methods.

YouTube Video, John Searle: Our shared condition — consciousness, TED, July 2013, 14:59 minutes

David Chalmers addresses head-on the question of why we have conscious subjective experience and how reductionist explanations fail to provide answers. He suggests that consciousness could be one of the fundamental building blocks of the universe, like space, time and mass. He raises the possibility that all information processing systems, whether they are ‘alive’ or not, may have some degree of consciousness.

TED Video, David Chalmers: How do you explain consciousness?, Big Think, March 2014, 18:37 minutes

The view that degree of consciousness might correlate with how much a system is able to process information is set out in more detail in the following:

YouTube Video, Michio Kaku: Consciousness Can be Quantified, Big Think, March 2014, 4:45 minutes

Building on the idea that consciousness involves feedback, John Dunne from the Center for Investigating Healthy Minds, looks at self-reflexivity as practiced in many religions, and in mindfulness. Consciousness confers the capacity to report on the object of experience (Is ‘I think therefore I am’ a reported reflection on something we all take for granted?).

YouTube Video, WPT University Place: Consciousness, Reflexivity and Subjectivity, Wisconsin Public Television, March 2016, 40:17 minutes

The BBC have put together a short documentary on consciousness as part of its series called ‘The Story of Now’. Progress has been made in consciousness research over the last 20 years, including in the measurement of consciousness and in understanding some mental conditions as disorders of consciousness.

BBC, The Story of Now – Consciousness, February 2015, About 15 minutes

Susan Greenfield, addressing an audience of neuroscientists, says consciousness cannot be defined but suggests a working definition: the ‘first person subjective world as it seems to you’. She distinguishes between consciousness, self-consciousness, unconsciousness and sub-consciousness. She considers boundary questions such as ‘when does a baby become conscious?’, ‘are animals conscious?’ and ‘what happens between being asleep and awake?’. Having ‘degrees of consciousness’ seems to make sense, and she locates consciousness in transient (sub-second duration), variable ‘neural assemblies’ that have epicentres – like a stone creating ripples when thrown into a pond. The stone might be a strong stimulus (like an alarm clock) which interacts with learned connections in the brain laid down during your life experience, modulated by chemical ‘fountains’ that affect neural transmission. Depression involves a disruption to the chemical fountains, and the experience of pain depends on the size of the active neuronal assembly. Consciousness is manifested when the activation of the neural assemblies is communicated to the rest of the brain and body. Sub-consciousness arises out of assemblies that are, in some sense, too small.

YouTube Video, The Neuroscience of Consciousness – Susan Greenfield, The University of Melbourne, November 2012, 1:34:17 hours

Some of the latest research on where in the brain consciousness seems to manifest can be found at:

Big Think Article, Harvard Researchers Have Found the Source of Human Consciousness, Phil Perry, January 2017

Prof. Raymond Tallis, however, has some issues with reductionist theories that seek to explain humankind in biological terms, and attacks the trend towards what he calls ‘neuromania’. He also rejects mystical and theological explanations and, while not embracing dualism, argues that we have to use the language of mind and society if we are to further our understanding.

YouTube Video, Prof. Raymond Tallis – “Aping Mankind? Neuromania, Darwinitis and the Misrepresentation of Humanity”, IanRamseyCentre, December 2012, 18:16 minutes


YouTube Video, David Eagleman: Brain over mind?, pop tech, April 2013, 22:25 minutes

Here is a radio introduction:

BBC Radio 4, Neuroscientist Paul Broks on Freewill and the Brain, November 2014, 11 minutes

Pinker thinks that our freewill arises out of the complexity of the brain and that there is no reason to postulate any non-mechanical entity such as the soul. He distinguishes automatic responses (such as pupil dilation) from those based on mental models that can anticipate possible consequences, which he argues are sufficient to account for freewill.

YouTube Video, Steven Pinker: On Free Will, Big Think, June 2011, 2:17 minutes

Alfred Mele speculates on there being different grades of freewill and throws doubt on experiments which claim to show that decisions are made prior to our becoming consciously aware of them.

YouTube Video, Does Free Will Exist – Alfred Mele, Big Think, April 2012, 15:10 minutes

Is consciousness necessary for freewill? Do we make decisions while we are not consciously aware of them? If we do, then does that mean that we are not exercising freewill? If we are not exercising freewill then does that mean we have no moral responsibility for our decisions?

According to Dennett, consciousness is nothing special; we only think it is special because we associate it with freewill. However, the only freewill that matters is the responsibility for our actions that biology has given us through mental competence. The competence to reflect on our own thoughts and those of others, and to anticipate and evaluate the consequences of our actions, gives us both freewill and responsibility for our actions.

YouTube Video, Daniel Dennett Explains Consciousness and Free Will, Big Think, April 2012, 6:33 minutes

Freewill and moral responsibility

Where do we draw the line between behaviour that we explain in neurological or neuro-chemical terms and behaviour that we explain in psychological, disease or demonic terms? Professor Robert Sapolsky shows how behaviours that were once explained as demonic are now explained neurologically. This parallels a shift in where we locate the control of peoples’ (unusual and other) behaviour: from demons and gods, to people, to disease, to brain structures and chemistry. What does this say about our sense of autonomy, individuality and ability to create moral positions?

Youtube video, 25. Individual Differences, Stanford, February 2011, 53:53 minutes

Assuming that we do have choice, this brings with it moral responsibility for our actions. But moral responsibility according to which system of values? Sam Harris argues that we take an odd stance when considering moral questions. In general we are willing to accept that different people are entitled to take different stands on moral questions and that there are no right or wrong answers. We tend to leave moral judgements to religions and are prepared to accept that, in principle, any moral value system could be right, and therefore that none can be criticised. However, Harris points out that we do not do this in other domains. In health, for example, we are prepared to say that good health is better than bad health, and that certain things lead to good health and should be encouraged while others don’t and should be discouraged. By the same token, if we accept that certain moral choices lead towards enhanced wellbeing (in ourselves and others) while other choices lead to pain and suffering, then the normal application of scientific method can inform moral decisions (and we can abandon religious dogma).

TED Video, Sam Harris: Science can answer moral questions, TED, March 2010, 23:34 minutes

Peter Millican discusses the relationship between freewill, determinism and moral responsibility. He describes Hume’s notion of responsibility, how ideas of right and wrong arise out of our feelings, and how this is independent of whether an act was determined or not. However, lower-order desires (such as the desire to smoke) can constrain higher-order ones (such as wanting to give up smoking), so our higher-order freewill can itself be constrained, giving us ‘degrees of freewill’ in relation to particular circumstances.

YouTube Video, 7.4 Making Sense of Free Will and Moral Responsibility – Peter Millican, Oxford, April 2011, 9:48 minutes

Corey Anton sets out a philosophical position – there is ‘motion without motivation’ and ‘motion with motivation’. We call ‘motion with motivation’ ‘action’. Some motivations result from being pushed along by the past (x did y because of some past event or experience) and some motivations are driven by the future (x did y in order to). Freewill is more typically associated with actions motivated by the intention to bring about future states.

YouTube Video, The Motives of Questioning Free Will, Corey Anton, 8:12 minutes

Intentionality and Theory of Mind

If it is our ability to reflect on our own perceptions and thoughts that gives us the capacity to make decisions, then, in the social world, we must also consider our capacity to reflect on other people’s perceptions and thoughts. This creates a whole new order of complexity and opportunity for misunderstanding and feeling misunderstood (whether we are or not). Watch the video below or get the full paper.

YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes

How do our ideas about other people’s intentions affect our moral judgements about them, and what is going on in the brain when we make moral judgements? Liane Young highlights the extent to which our view of a person’s intentions influences our judgements of the outcomes of their actions, and goes on to describe the brain area in which these moral evaluations appear to take place.

TED Video, TEDxHogeschoolUtrecht – Liane Young – The Brain on Intention, TEDx Talks, January 2012, 14:34 minutes


Even though we may not have a definitive answer to the question ‘Are we free?’, we can say some things about it that may affect the way we think.

  • We cannot say definitively whether the world is pre-determined in the sense that every state of the universe at any one time could not have been otherwise. This partly arises out of our ignorance about physics and whether in some sense there is an inherent lack of causality.
  • If the universe does obey causal laws then that does not mean that the state of the universe would be necessarily knowable.
  • Whether or not the universe is knowably pre-determined is independent of our subjective feelings of consciousness and freewill. We behave as if we have freewill, and we assume others are conscious sentient beings with freewill and the moral responsibility that arises out of this.
  • However, within this framework there are acknowledged limitations on freewill, degrees of consciousness and consequently degrees of moral responsibility.
  • These limitations and degrees arise in numerous ways, including our own resources, imagination and capacity for reflection (self-consciousness), cognitive biases, and controlling factors (including our own genetics, families, cultures, organisations and governments) that either subconsciously or consciously constrain our options and freedom to make choices.
  • There could be a correlation between degree of consciousness and the integrated information processing capacity of a system, perhaps even regardless of whether that system is regarded as ‘alive’.
  • Wellbeing seems to be enhanced by the feeling that we have the freedom to control our own destiny whether or not this freedom is an illusion.
  • The more we find out about psychology, the mind and the brain, the more it looks as if we can explain and predict our actions and choices more accurately by an appeal to science than an appeal to our own intuitions.
  • Our intuitions seem largely based on the pragmatic need to survive and deal effectively with threat within our limited resources. They are not inherently geared to finding the ‘truth’ or accurately modelling reality unless it has payoff in terms of survival.
  • Some of our behaviour is ‘automatic’, either driven by physiology or by learning. Other behaviour is mediated by consulting internal states such as our interpretations and models of reality, and testing possible outcomes against these models as opposed to against reality itself.
  • Our internal models can include models of our own states (e.g. when we anticipate how we might feel given a future set of circumstances and thereby re-evaluate our options).
  • Our internal models can include speculations on the models and motivations of other people, organisations, other sentient beings and even inanimate objects (e.g. I’ll pretend I do not know that he is thinking that I will deceive him). Anything, in fact, can be the content of our models.
  • We associate freedom with our capacity to have higher levels of reflection, and we attribute greater moral responsibility to those who we perceive to have greater freedom.
  • We evaluate the moral culpability of others in terms of their intentions and have specialised areas in the brain where these evaluations are made.
  • We evaluate the morality of a choice against some value system. Science offers a value system that we are already prepared to accept in other domains, such as health. As in health, there are clearly some actions that enhance wellbeing and others that do not. If we accept science as a method for assessing the effects of particular moral choices on wellbeing, rather than relying on our fallible intuitions or on religious dogma, then we can make progress towards greater wellbeing.
  • Even if we could ascertain whether and how we are conscious and free, the ultimate question of ‘why?’ looks impossible to resolve.
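The distinction drawn above between ‘automatic’ behaviour and behaviour mediated by internal models is, in AI terms, the difference between a reactive agent and a model-based one: the latter tests candidate actions against its own model of reality rather than against reality itself. A minimal sketch in Python (all names and values are hypothetical, for illustration only):

```python
def reactive_agent(stimulus):
    """'Automatic' behaviour: a fixed stimulus-response mapping, no model consulted."""
    return "flinch" if stimulus == "threat" else "ignore"


def model_based_agent(options, world_model):
    """Mediated behaviour: evaluate each option against an internal model
    and pick the one whose *predicted* outcome scores best, without ever
    trying the options out in the real world."""
    def predicted_wellbeing(action):
        # An imagined outcome looked up in the agent's beliefs, not a real trial.
        return world_model.get(action, 0)
    return max(options, key=predicted_wellbeing)


# The agent's (greatly simplified) beliefs about what each action would lead to.
world_model = {
    "take shortcut": -5,   # the model predicts a risk the agent never has to suffer
    "take long route": 3,
}

print(reactive_agent("threat"))
print(model_based_agent(["take shortcut", "take long route"], world_model))
```

The point of the sketch is only that the second agent can ‘anticipate how it might feel’ about an outcome and re-evaluate its options internally, which is the capacity the bullets above associate with higher levels of reflection.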

Given the multitude of factors from physiology to society that control or at least constrain our decisions (and our speculations about them), it is no wonder that human behaviour appears so unpredictable. However, there are also many regularities, as will become apparent later.


Another take, by a physicist, on consciousness as an emergent property of the integrated processing of information.

YouTube Video, Consciousness is a mathematical pattern, June 2014, 16:36 minutes

Corey Anton illustrates how language contains within it its own reflexivity: we can talk about how we talk about something, as well as about the thing itself.

YouTube Video, Talk-Reflexive Consciousness, Corey Anton, April 2010, 9:58 minutes