
– Robots & ToM 2

Do Robots Need Theory of Mind? Part 2

Image credit: https://unsplash.com/@henkmul

Why Robots might need Theory of Mind (ToM)

Existential Risk and the AI Alignment Problem

Russell (2019) argues that we have been thinking about building artificial intelligence (AI) systems the wrong way. Since its inception, AI has attempted to build systems that can achieve ‘their own’ goals, albeit that we might give them those goals in the first instance. Instead, he says, we should be building AIs that understand ‘the preference structure’ that a person has and attempt to satisfy goals within the constraints of that preference structure.

In this way, the AI will be able to understand that acting to achieve one goal (e.g. getting a coffee) may interact or interfere with other preferences, goals or constraints (e.g. not knocking someone out of the way in the process) and thereby moderate its behaviour. An AI needs to understand that a goal is not there to be achieved 'at all costs'. Instead, it should be pursued while taking into account the many other preferences and priorities that might moderate it. Russell argues that if we think of building AIs in this way, we may be able to avoid the existential risk that superhuman AIs will eventually take over and, either deliberately or inadvertently, wipe out humanity.

This is an example of what AI researchers have termed 'the AI alignment problem', which potentially creates an existential risk to humanity if we find ourselves, having built super-intelligent machines, unable to control them. Nick Bostrom (Bostrom 2014) has characterised this threat with the example of an AI given the goal of producing paperclips that takes the goal so literally that it destroys humanity (for example, in its need for more raw materials), executing it single-mindedly with no appreciation of when to stop. Several other researchers have addressed the AI alignment problem (mainly in terms of laws, regulations and social rules), including Taylor et al. (2017), Hadfield-Menell & Hadfield (2019), Vamplew et al. (2018) and Hadfield-Menell, Andrus & Hadfield (2019).

Russell (2019) goes on to describe how an AI should always have some level of uncertainty about what people want. Such uncertainty would put a check on the single-minded execution of a goal at all costs. It would drive a need for the AI to keep monitoring and maintaining its model of what a person might want at any point in time. It would require the AI to keep checking that what it was doing was 'on track' or 'aligned' with a person's whole preference structure. So if, for example, you had instructed your self-driving car to take you to the airport and you received a message during the trip that your child had been in a road accident, the AI might recognise this as significant and check whether you wanted to change your plans.

Russell arrives at this position from addressing the problem of existential risk; it is a proposed solution to the AI alignment problem. Working within this frame of reference, he proposes solutions like 'Cooperative Inverse Reinforcement Learning' (Malik et al. 2018), whereby the Autonomous Intelligent System (AIS) attempts to infer the preference structure of a person from observations of their behaviour. This, indeed, seems to be a sensible approach.
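
To make the idea concrete, here is a minimal sketch (in Python) of the kind of inference such approaches rely on: the machine maintains a probability distribution over hypotheses about a person's hidden preference weights and updates it from observed choices, rather than committing to a single fixed objective. The hypotheses, features, actions and the Boltzmann-rational choice model below are illustrative assumptions of mine, not details taken from Russell (2019) or Malik et al. (2018).

import numpy as np

# Hypotheses about the person's hidden preference weights over two features
# of an action: [speed of task completion, care taken around bystanders].
preference_hypotheses = np.array([
    [1.0, 0.0],   # cares only about speed
    [0.5, 0.5],   # balances speed and care
    [0.1, 0.9],   # cares mostly about care/safety
])
posterior = np.ones(len(preference_hypotheses)) / len(preference_hypotheses)

# Candidate actions described by the same two features.
actions = {
    "rush_past_person": np.array([0.9, 0.1]),
    "wait_for_person":  np.array([0.4, 0.9]),
}

def choice_likelihood(weights, chosen, options, beta=5.0):
    """Boltzmann-rational model: the person is more likely to choose actions
    with higher utility under their true (hidden) preference weights."""
    names = list(options)
    utilities = np.array([weights @ options[name] for name in names])
    probs = np.exp(beta * utilities) / np.exp(beta * utilities).sum()
    return probs[names.index(chosen)]

# Observe the person choose the careful option and update the posterior.
observed_choice = "wait_for_person"
likelihoods = np.array([choice_likelihood(w, observed_choice, actions)
                        for w in preference_hypotheses])
posterior *= likelihoods
posterior /= posterior.sum()

print("Posterior over preference hypotheses:", np.round(posterior, 3))
# The residual uncertainty is the point: the machine should keep observing and
# checking rather than commit irrevocably to a single inferred objective.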

However, the exact mechanism by which an AIS coordinates its actions with a person or people may well depend on it being able to accurately infer people's mental states. Otherwise it might have to check explicitly (e.g. by asking), every few seconds, whether what it was doing was acceptable, and it would need to 'read' when a person found its behaviour unacceptable (e.g. by noting the frown when it is about to hit somebody on its mission to get the coffee).

The AI alignment problem is precisely the problem every person has when interacting with another human being. When interacting with somebody else, we are unable to directly observe their internal mental states. We cannot know their preference structure, and we can only take on trust that their intentions are what they say they are. Their real intentions, beliefs, desires, values and boundaries could, in principle, be anything. What we do is infer their intentions from their behaviour, including what they say (and what we understand from it). Intentions, beliefs and preferences are all hidden variables that may be the underlying causes of behaviour, but because they are unobservable they can only be guessed at.

Russell takes this on board and understands that the alignment problem is one that exists between any two agents, human or artificial. He is saying that robots need to be equipped with similar mechanisms to those that people generally have. These are the mechanisms that can model human beliefs, preferences and intentions by making inferences from observations of behaviour. Fortunately, we are not discovering and inventing these mechanisms for the first time.

Alignment with What?

A potential problem with having an AIS infer, reason and act on its analysis of another person’s mental states is that it may not accurately predict the consequences of its own actions. An action designed to do good may, in fact, do harm. In addition to being mistaken about the direction of its effect on mental states (positive or negative) it may also be inaccurate about the extent. So, an act designed to please may have no effect, or an act that is not intended to cause either pleasure or displeasure may have an effect.

This is quite apart from all manner of other complications that we might describe as its 'policies'. Should, for example, an AIS always act to minimise harm and maximise a person's pleasure? How should an AIS react if a person consistently fails to take medication prescribed for their benefit? How should it trade off short-term and longer-term benefits? How does an AIS reconcile differences between two or more people, between a person's legal obligations and their desires, or between the interests of a person and those of another organisation (a school, a company, their employer, the tax office and so on)?

In all these cases, the issue comes down to how the AIS evaluates its own choice of possible actions (or inaction) and which stakeholders it takes into account when performing this evaluation. Numerous guidelines have been produced in recent years to help guide developers of AI systems. The good news is that there is considerable agreement about the kinds of principles that apply – not contravening human rights, not doing harm, increasing wellbeing, transparency and explainability in how the AIS arrives at decisions, elimination of bias and discrimination, and clear accountability and responsibility for the AIS's decisions. The main mechanism for putting these principles into practice is the training of, and the controls (guidelines, standards and laws) on, companies, designers and developers. Comparatively little has been proposed about the controls that might be embedded within the AIS itself, and even less about the principles and mechanisms that might be used to achieve this.

We could turn to economics for models of preference and choice, but these models are discredited by findings in the social sciences (e.g. prospect theory), and many would argue that the incentives encouraged by such models are precisely what has led to existential risks like nuclear arms races and climate change. We would therefore need to think very carefully before using these same models to drive the design of artificial intelligences, because of their potential to add yet another existential risk.

The existential risk discussed in relation to AISs has tended to focus on the fear that if an artificial intelligence is given autonomy to achieve its objectives without constraint, then it might do anything. Even simple systems can become unpredictable very quickly, and if a system is unpredictable it is out of control. In the anthropomorphic way characteristic of human beings, we project onto the AIS that it would be concerned about its own self-preservation, or that it would discover that self-preservation was a necessary pre-condition for attaining its goal(s). We further project that if it adopts the goal of self-preservation, then it might pursue it at all costs, putting its own self-preservation ahead of even that of its creators. There are some good reasons for these fears, because goals like self-preservation and the accumulation of resources are instrumental to the achievement of any other goal, and an AIS might easily reason that out (Bostrom 2012). There have been challenges to this line of reasoning, but that debate is not a central concern here. Rather, I am more concerned with whether an AIS can align with the goals of an individual using the same sorts of social cues that we all use in the informal ways in which we generally cooperate with each other.

If we are already concerned that the economic and political systems currently in place can have undesirable consequences, such as other existential risks and concentrations of wealth in the hands of a few, then the last thing we would want to do is build into AISs the same mechanisms for evaluating choices as those assumed by classical economic theory. In these posts, I look primarily to psychology (and sometimes philosophy) to provide evidence and analysis of how people make decisions in a social world, particularly one in which we take into account our beliefs about other people's mental states. Whether this provides an answer to the alignment problem remains to be seen, but it is, at least, another perspective that may help us understand the types of control mechanisms we may need as the development of AISs proceeds at an ever-increasing pace.

Cooperation and Collaboration

Image credit: https://unsplash.com/@brett_jordan

The paradigm in which robots act as slaves to their human masters is gradually being replaced by one in which robots and humans work collaboratively together to achieve some goal (Sheridan 2016). This applies to individual human-robot interactions and to multi-robot teams (Rosenfeld et al. 2017). If robots, and AISs generally, could infer the mental states of the people around them when performing complex tasks, this could potentially lead to more intuitive and efficient collaboration between the person and the machine. This requires trust on the part of the human that the robot will play its part in the interaction (Hancock et al. 2011).

As a step on the way, systems have been built in which robots collaborate with each other, without communication, to perform complex tasks using only visual cues (Gassner et al. 2017). Collaboration is especially useful in situations like care giving (Miyachi, Iga & Furuhata 2017), where giving explicit verbal instructions might be difficult (e.g. in cases of Alzheimer's or autism). Gray et al. (2005) proposed a system of action parsing and goal simulation whereby a robot might infer the goals and mental states of others in a collaborative task scenario.

Potential Benefits

Equipping AISs with the ability to recognise, infer and reason about the mental states of others could have some extraordinary advantages. Not only might we avoid existential risk to humanity (and could there be anything of greater significance) and make our interactions with robots and AISs generally easy and intuitive, but we could also be living alongside intelligent artefacts that have a robust capacity for moral reasoning. Not only could they keep themselves in check, so that they made only justifiable moral decisions with respect to their own actions, but they might also help us adjudicate our own actions, offering fair, reasonable and justifiable remedies to human transgressions of the law and other social codes. They might become reliable and trustworthy helpers and companions, politely guiding us in solving currently intractable world problems, and protecting us from our own worst human biases, vices and deficiencies. If they turned out to be better at moral reasoning than people, then, like wise philosophers, they could offer us considered advice to help us achieve our goals and deal with the dilemmas of everyday life.

However, there is much that stands in the way of achieving this utopian relationship with the intelligent artefacts we create, especially if we want an AIS to infer mental states in the same way a person might, by observation and perhaps by asking questions. We are beginning to understand patterns of neuronal activity well enough to infer some mental states. For example, Haynes et al. (2007) report being able to tell which of two choices a person is making by looking at neural activity. Elon Musk is creating 'Neural Lace' for such a purpose (Cuthbertson 2016), but could mental states be inferred using non-invasive approaches?

In particular, could we create AISs that could infer our mental states without inadvertently creating an even greater and more immediate existential risk? I will later argue that giving AISs theory of mind, without them having the same sort of controls on social behaviour that empathy gives people, could be a disaster that heightens existential risk in our very attempt to avoid it. In subsequent posts I first consider whether the artificial inference of human mental states is even a credible possibility.

References

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Bostrom, N., (2014). Superintelligence: Paths, Dangers, Strategies (1st. ed.). Oxford University Press, Inc., USA.

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Gassner, M., Cieslewski, T., & Scaramuzza, D. (2017). Dynamic collaboration without communication: Vision-based cable-suspended load transport with two quadrotors. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 5196-5202). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989609

Gray, J., Breazeal, C., Berlin, M., Brooks, A., & Lieberman, J. (2005). Action parsing and goal inference using self as simulator. In Proceedings – IEEE International Workshop on Robot and Human Interactive Communication (Vol. 2005, pp. 202–209). https://doi.org/10.1109/ROMAN.2005.1513780

Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 417–422). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314250

Hadfield-Menell, D., Andrus, M., & Hadfield, G. K. (2019). Legible normativity for AI alignment: The value of silly rules. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 115–121). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314258

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Malik D., Palaniappan M., Fisac J., Hadfield-Menell D., Russell S., and Dragan A., (2018) “An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning.” In Proc. ICML-18, Stockholm.

Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. In Procedia Computer Science (Vol. 112, pp. 1254-1262). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.08.078

Rosenfeld, A., Agmon, N., Maksimov, O., & Kraus, S. (2017). Intelligent agent supporting human-multi-robot team collaboration. Artificial Intelligence, 252, 211-231. https://doi.org/10.1016/j.artint.2017.08.005

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control (1st ed.). Allen Lane. ISBN-10: 0241335205, ISBN-13: 978-0241335208

Sheridan, T. B. (2016). Human-Robot Interaction: Status and Challenges. Human Factors, 58(4), 525-32. https://doi.org/10.1177/0018720816644364

Taylor, J., Yudkowsky, E., Lavictoire, P., & Critch, A. (2017). Alignment for Advanced Machine Learning Systems. MIRI, 1–25. Retrieved from https://intelligence.org/files/AlignmentMachineLearning.pdf

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

– Robots & ToM

Do Robots Need Theory of Mind? Part 1

Robots, and Autonomous Intelligent Systems (AISs) generally, may need to model the mental states of the people they interact with. Russell (2019), for example, has argued that AISs need to understand the complex structures of preferences that people have in order to be able to trade off many human goals, and thereby avoid the problem of existential risk (Boyd & Wilson 2018) that might follow from an AIS with super-human intelligence doggedly pursuing a single goal. Others have pointed to the need for AISs to maintain models of people's intentions, knowledge, beliefs and preferences, in order that people and machines can interact cooperatively and efficiently (e.g. Lemaignan et al. 2017, Ben Amor et al. 2014, Trafton et al. 2005).

However, in addition to risks already well documented (e.g. Müller & Bostrom 2016) there are many potential dangers in having artificial intelligence systems closely observe human behaviour, infer human mental states, and then act on those inferences. Some of the potential problems that come to mind include:


  • The risk that self-determining AISs will be built with only a limited capability to understand human mental states and preferences, and that humans will lose control of the technology (Meek et al. 2017, Russell 2019).
  • The risk that the AIS will exhibit biases in what it selects to observe, infer and act on, in ways that would be unfair (Osoba & Welser 2017).
  • The risk that the AIS will use misleading information, make inaccurate observations and inferences, and base its actions on these (McFarland & McFarland 2015, Rapp et al. 2014).
  • The risk that, even if the AIS observes and infers accurately, its actions will not align with what a person might do, or may have unintended consequences (Vamplew et al. 2018).
  • The risk that an AIS will misuse its knowledge of a person's hidden mental states, resulting in either deliberate or inadvertent harm or criminal acts (Portnoff & Soupizet 2018).
  • The risk that people's human rights and rights to privacy will be infringed because of the ability of AISs to observe, infer, reason about and record data that people have not given consent to and may not even know exists (OECD 2019).
  • The risk that, if the AIS is making decisions based on unobservable mental states, any explanations of its actions based on them will be difficult to validate (Future of Life Institute 2017, Weld & Bansal 2018).
  • The risk that the AIS would, in the interests of a global common good, correct for people's foibles, biases and dubious (unethical) acts, thereby taking away their autonomy (Timmers 2019).
  • The risk that, using AISs, a few multi-national companies and countries will collect so much data about people's explicit and inferred hidden preferences that power and wealth will become concentrated in even fewer hands (Zuboff 2019).
  • The risk that corporations will rush to be the first to produce AISs that can observe, infer and reason about people's mental states, and in so doing will neglect to take safety precautions (Armstrong et al. 2016).
  • The risk that, in acting out of some greater interest (i.e. the interests of society at large), an AIS will act to restrict the autonomy or dignity of the individual (Zardiashvili & Fosch-Villaronga 2020).
  • The risk that an AIS would itself take unacceptable risks, based on inferred but uncertain mental states, that may cause a person or itself harm (Merritt et al. 2011).

Much has been written about the risks of AI, and in the last few years numerous ethical guidelines, principles and recommendations have been made, especially in relation to the regulation of the development of AISs (Floridi et al. 2018). However, few of these have touched on the real risk that AISs may one day develop to the point where they can gain a good understanding of people's unobservable mental states and act on them. We have already seen Facebook being used to target advertisements and persuasive messages on the basis of inferred political preferences (Isaak & Hanna 2018).

In future posts I look at the extent to which an AIS could potentially have the capability to infer other people's mental states. I touch on some of the advantages and dangers and identify some of the issues that may need to be thought through.

I argue that AISs generally (not only robots) may need both to model people's mental states – known in the psychology literature as Theory of Mind, or ToM (Carlson et al. 2013) – and to have some sort of emotional empathy. Neural nets have already been used to create algorithms that demonstrate some aspects of ToM (Rabinowitz et al. 2018). I explore the idea of building AISs with both ToM and some form of empathy, and the idea that unless we are able to equip AISs with a balance of control mechanisms, we run the risk of creating AISs that have 'personality disorders' we would want to avoid.

In making this case, I look at whether it is conceivable that we could build AISs that have both ToM and emotional empathy, and, if it were possible, how these two capacities would need to be integrated to provide an effective overall control mechanism. Such a control mechanism would involve both fast (but sometimes inaccurate) processes and slower (reflective and corrective) processes, similar to the distinction Kahneman (2011) makes between System 1 and System 2 thinking. The architecture has the potential for the fine-grained integration of moral reasoning into the decision making of an AIS.
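
As a rough illustration of this dual-process idea, the sketch below (in Python, with invented situations, features and thresholds, so purely a toy rather than a description of any actual AIS) routes routine situations through a fast habitual policy and escalates unfamiliar or high-stakes situations to a slower reflective check that can pause and ask the person.

from dataclasses import dataclass

# Toy two-level ('System 1' / 'System 2') control loop. All thresholds,
# features and rules are invented for the example.

@dataclass
class Situation:
    description: str
    familiarity: float   # 0..1, how routine the situation looks
    stakes: float        # 0..1, how costly a mistake could be

def fast_policy(situation: Situation) -> str:
    # Fast, habitual response keyed on surface features of the situation.
    return "continue_routine"

def slow_reflection(situation: Situation, proposed: str) -> str:
    # Slower, deliberative check: reconsider the proposed action against
    # inferred preferences and, if in doubt, defer to the person.
    if situation.stakes > 0.7:
        return "pause_and_ask_person"
    return proposed

def act(situation: Situation) -> str:
    proposed = fast_policy(situation)
    # Escalate to reflection only when the situation is unfamiliar or risky.
    if situation.familiarity < 0.5 or situation.stakes > 0.7:
        proposed = slow_reflection(situation, proposed)
    return proposed

print(act(Situation("making coffee as usual", familiarity=0.9, stakes=0.1)))
print(act(Situation("person frowns as robot approaches", familiarity=0.3, stakes=0.8)))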

What I hope to add to Russell’s (2019) analysis is a more detailed consideration of what is already known in the psychology literature about the general problem of inferring another agent’s intentions from their behaviour. This may help to join up some of the thinking in AI with some of the thinking in cognitive psychology in a very broad-brushed way such that the main structural relationships between the two might come more into focus.

Subscribe (top left) to follow future blog posts on this topic.

References

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human-robot cooperation tasks. In Proceedings – IEEE International Conference on Robotics and Automation (pp. 2831–2837). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2014.6907265

Boyd, M., & Wilson, N. (2018). Existential Risks. Policy Quarterly, 14(3). https://doi.org/10.26686/pq.v14i3.5105

Carlson, S. M., Koenig, M. A., & Harms, M. B. (2013). Theory of mind. Wiley Interdisciplinary Reviews: Cognitive Science, 4(4), 391–402. https://doi.org/10.1002/wcs.1232

Cuthbertson, A. (2016). Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI. Retrieved from http://europe.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638?rm=eu

Floridi, L., Cowls, J., Beltrametti, M. et al., (2018), AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 doi:10.1007/s11023-018-9482-5

Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. Future of Life, 1–23. Retrieved from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading Hidden Intentions in the Human Brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Lemaignan, S., Warnier, M., Sisbot, E. A., Clodic, A., & Alami, R. (2017). Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, 45–69. https://doi.org/10.1016/j.artint.2016.07.002

McFarland, D. A., & McFarland, H. R. (2015). Big Data and the danger of being precisely inaccurate. Big Data and Society. SAGE Publications Ltd. https://doi.org/10.1177/2053951715602495

Meek, T., Barham, H., Beltaif, N., Kaadoor, A., & Akhter, T. (2017). Managing the ethical and risk implications of rapid advances in artificial intelligence: A literature review. In PICMET 2016 – Portland International Conference on Management of Engineering and Technology: Technology Management For Social Innovation, Proceedings (pp. 682–693). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PICMET.2016.7806752

Merritt, T., Ong, C., Chuah, T. L., & McGee, K. (2011). Did you notice? Artificial team-mates take risks for players. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 338–349). https://doi.org/10.1007/978-3-642-23974-8_37

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Oecd/Legal/0449. Retrieved from http://acts.oecd.org/Instruments/ShowInstrumentView.aspx?InstrumentID=219&InstrumentPID=215&Lang=en&Book=False

Osoba, O., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. https://doi.org/10.7249/rr1744

Portnoff, A. Y., & Soupizet, J. F. (2018). Artificial intelligence: Opportunities and risks. Futuribles: Analyse et Prospective, 2018-September(426), 5–26.

Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., & Botvinick, M. (2018). Machine Theory of mind. In 35th International Conference on Machine Learning, ICML 2018 (Vol. 10, pp. 6723–6738). International Machine Learning Society (IMLS).

Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate information. Memory and Cognition, 42(1), 11–26. https://doi.org/10.3758/s13421-013-0339-0

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control (1st ed.). Allen Lane. ISBN-10: 0241335205, ISBN-13: 978-0241335208

Timmers, P., (2019), Ethics of AI and Cybersecurity When Sovereignty is at Stake. Minds & Machines 29, 635–645 doi:10.1007/s11023-019-09508-4

Trafton, J. G., Cassimatis, N. L., Bugajska, M. D., Brock, D. P., Mintz, F. E., & Schultz, A. C. (2005). Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 35(4), 460–470. https://doi.org/10.1109/TSMCA.2005.850592

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

Weld, D. S., & Bansal, G. (2018). Intelligible artificial intelligence. ArXiv, 62(6), 70–79. https://doi.org/10.1145/3282486

Zardiashvili, L., & Fosch-Villaronga, E. (2020). "Oh, Dignity too?" Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds & Machines. https://doi.org/10.1007/s11023-019-09514-6

Zuboff, S. (2019). The Age of Surveillance Capitalism. Profile Books.

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium has developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the users, control.

The Digital Person
This turns the tables on organisations like Facebook and Google, which have given users little choice about the rights over their own data or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that very much aligns with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job of teasing out the various issues and finding ways of putting the user back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately, I don't have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers' slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– Executive function (HOS 3)

Executive Function
Secret of Success

Executive Function in the Individual and the Organisation

Successful organisations like Google and Facebook allow their employees an opportunity to experiment and pursue their own projects. Many public sector organisations also give their employees opportunities for personal development. Why does this work, and what does it say about how organisations need to be run in a world of increasingly rapid change? What kind of executive control is appropriate for organisations in the 21st century?

To answer this we could look at all kinds of management, organisational and accounting theory. But there is another perspective. This is to look at what psychology has revealed about the executive function (REFs 1, 2, 3) in the individual and then to map that back onto what it means in terms of the organisation. This perspective can be revealing. It highlights why organisations behave in certain ways, it can help distinguish useful and healthy behaviours from those that are ineffective, aberrant and perhaps eventually self-defeating, and it can give us a way of looking at the executive function that is grounded in an increasingly sophisticated understanding of the human condition. It can point the way to making organisations more resilient.

REF 1
YouTube Video, ‪2012 Burnett Lecture Part 2 ADHD, Self-Regulation and Executive Functioning Theory‬, UNCCHLearningCentre, November 2012, 58:44

REF 2
YouTube Video, InBrief: Executive Function: Skills for Life and Learning, Center on the Developing Child at Harvard University, June 2012, 5:35 minutes

REF 3
Youtube Video, Executive Function and the Developing Brain: Implications for Education, AMSDMN’s channel, November 2013, 58:22 minutes

There are several distinct components to executive function in the individual. These develop from infancy to adulthood, more or less in order. This article looks at the overall architecture of control within an organisation and then goes through seven executive functions one by one, first looking at what each means in psychological terms, then mapping it onto what it might mean in terms of organisational behaviour and the functions of an executive board.

In both the individual and the organisation, executive function is self-regulation. It consists of 'actions on oneself' or, in the organisational context, the executive's actions in relation to the organisation itself. When fully developed, the several aspects of executive function go together to provide the capacity for self-control in a complex and changing world.

There are numerous accounts of what makes for success in both the individual and the organisation. Many of these emphasise one or another aspect of executive function, such as self-awareness, self-direction or emotional intelligence. However, all aspects of executive function have a part to play, and understanding executive function helps show how these parts develop and integrate to provide the many capabilities needed for success.

The Architecture of Control

The overall architecture of control in both the individual and the organisation can be seen as a two-part system with executive function residing in the second part.

Part 1 – An Automatic System

Much of what happens in both individuals and organisations goes on without much thought or reflection.

Individuals follow their routines and habits. When everything is predictable, actions like cooking or driving can be carried out largely on 'autopilot', often while thinking about something entirely different. In this manner, we can operate adequately by responding to cues in the immediate environment with little conscious control or effort. This is what Kahneman, in his book 'Thinking, Fast and Slow' (REF 4), calls 'System 1' or intuitive thinking. It deals with the here and now, when everything is familiar and reasonably certain.

REF 4
YouTube Video, Kahneman: “Thinking, Fast and Slow” | Talks at Google, November 2011, 1:02 hours

Similarly, in the organisation many activities can be carried out according to its established procedures and practices and require no executive intervention. They may have needed executive intervention to set them up, but once bedded in they can run without further executive input unless something unexpected or out of the ordinary happens.

Part 2 – A System for Exception Handling and Taking Proactive Control

Exception Handling

This system is engaged when the automatic system needs help. In terms of Kahneman's theory, it is 'System 2' thinking. It is engaged when encountering difficulty or novelty, and in matters that are not in the here and now. The executive level allows 'action at a distance' from the here and now, and deals with circumstances that are less certain and predictable.

In the individual, when something unpredictable happens, this system seems to pop items into consciousness and then relies on a somewhat slow and laborious form of conscious processing to effect a resolution. This takes effort and resource. It takes willpower and can use up cognitive capacity. What can be done is limited by the available capacity, and focusing attention on one thing will limit the capacity to pay attention to another.

In the organisation, the executive may be called in to deal with some problem, or may step in when it sees something going off-track (like profits, sales, production, costs, staff turnover and so on). As with the individual, this system is slow and laborious by contrast with 'business as usual' operation. It also consumes precious resources, and dealing with one situation can detract from dealing with another, perhaps causing the organisation to take its eye off the ball.

In practice, in a healthy individual or organisation, the automatic system and the exception handling system work together in a highly interleaved manner. They also develop together. Functions that start out as requiring exception handling, become automated over time as they become embedded.

Pro-active Control

In the individual, the pro-active element of executive function emerges in childhood. The extent of the ability to inhibit certain behaviours correlates highly with many factors in later life, including academic and social competence, wealth, health and (negatively) criminality. It seems that the developing child moves gradually from reactive control to pro-active control (REF 5).

REF 5
YouTube Video, Integrative Science Symposium: Lifespan Development of Executive Control, July 2015, 2:10:08 hours

This is more than just being able to stop or inhibit certain behaviours. There is an extra step. This is to re-construe the world in a slightly different way and become alert to other things going on in the environment. The extra step leaves open the option to continue the behaviour, stop it or do something more subtle.

Similarly, in the organisation, spotting an undesirable behaviour, trend or process does not generally result in immediately shutting it down. There is a period of reflection, where attention may be re-focused and alternatives considered. In both organisations and individuals, this may take time. An individual may think through the potential consequences of taking particular courses of action. The organisation may do the same by embarking on investigations or sophisticated modelling and simulations to help clarify the consequences of running with different options.

A simplified model of the control and executive function is:

  • No problems – continue on autopilot
  • Problem – re-focus attention, generate and evaluate options

Also, it is notable that problem solving can go on recursively: if a problem is encountered with any of the sub-processes of problem solving, then that is a new problem subject to the same processes in achieving resolution. Meanwhile, the whole process is being recursively and externally evaluated, such that if it is not going anywhere useful, it can itself be reconsidered.
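
A schematic sketch of this simplified control model, in Python, is given below. The function names and stand-in implementations are invented; the point is only to show the shape of the loop: stay on autopilot when there is no problem, otherwise re-focus attention to generate and evaluate options, and treat obstacles within that process as new problems handled by the same machinery, up to a depth at which the whole attempt is abandoned or escalated.

# Schematic sketch of the simplified control model above; all functions are
# illustrative stand-ins rather than a real planner.

def handle(situation, depth=0, max_depth=3):
    if not is_problem(situation):
        return "continue_on_autopilot"
    if depth >= max_depth:
        # External evaluation: if the process is going nowhere, reconsider it.
        return "escalate_or_abandon"
    for option in evaluate_and_rank(generate_options(situation)):
        sub_problems = obstacles_to(option)
        # Each obstacle is itself a problem, handled recursively.
        if all(handle(p, depth + 1, max_depth) != "escalate_or_abandon"
               for p in sub_problems):
            return "act_on:" + option
    return "escalate_or_abandon"

# Minimal stand-ins so the sketch runs; in reality each of these is itself a
# substantial (and fallible) process.
def is_problem(situation): return situation.get("problem", False)
def generate_options(situation): return situation.get("options", [])
def evaluate_and_rank(options): return sorted(options)
def obstacles_to(option): return []

print(handle({"problem": False}))
print(handle({"problem": True, "options": ["restructure", "reallocate_budget"]}))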

7 Stages of Development of Executive Function

Seven stages of development of the executive function are described below, in terms of what is going on for both the individual and the organisation. There are striking parallels.

Many of these stages have a time dimension. An infant lives in the here and now; a teenager has a horizon of perhaps weeks. An adult, like an organisation, may have a time horizon of months or years. The development of executive function enables longer time horizons.

In Early Development (0 years to 5 years)

Stage 1 – Self-Awareness

In the individual, self-awareness develops in infancy (from 3 months) and continues to develop for a further 10 years. The capacity to turn one's attention away from the environment and towards one's own actions and thoughts grows. The self-monitoring function redirects attention back onto the self. An executive has developed that watches the self.

An organisation might be self-aware from the start if it has been set up with management information systems. The executives can inspect the reports and consider actions designed to affect trends they see in the data. The executives normally act within a stable framework of parameters, albeit that the values of the parameters are changing. An organisation may also grow in self-awareness by introducing new systems to provide feedback on what are deemed to be key parameters.

When (informal or formal) management information systems come into play, they can become the basis of a prevailing viewpoint on the direction of the organisation, both past and future. This is the backdrop against which executive decision-making takes place. The organisation has become to some degree self-aware and able to turn attention onto itself.

However, both the individual and the organisation operate under what Simon refers to as 'bounded rationality' (REF 6). Organisational self-awareness is subjective in the sense that the organisation is only aware of what it is aware of. It may not be aware of all manner of things, and what it is aware of may not be representative or accurate. In this sense, the organisation mimics the individual and may be subject to the same misconceptions about the self.

REF 6
YouTube Video, Herbert Simon, rationalLeft, July 2013, 3:42 mins

For example, Barings Bank may have been blind to the extent of the damage a rogue trader could do. Kodak may have deluded itself into thinking that there would always be a market for film (as opposed to digital). In 2008, the banking industry may have been unaware of the damage it might do to itself by not containing risk. Just as likely, these organisations were aware but didn't care or didn't know how to react.

These are just examples of self-awareness in organisations, and they raise the question of who holds this awareness. Is it distributed throughout the organisation or is it held by the executive? Just as in the brain, where any one neuron is 'aware' only of the activity of the other neurons it is connected to, whether near neighbours or in more remote locations, awareness is distributed throughout the individuals and departments in an organisation. Each department and organisational role is tasked with being informed about particular things – production, human resources, suppliers and so on. By collecting and reporting both quantitative and qualitative data about local activity, they feed the executive information from across the organisation, from which it can build a broader awareness. However, just as the brain can be deceived by its senses, or be selective and biased in its interpretation, the uncritical executive can also be led astray.

Stage 2 – Self-Restraint

In the individual, self-restraint develops between the ages of 3 and 5 years. This is the ability of a child to stop him- or herself doing something that they would otherwise do automatically (like taking a sweet). It is the inhibition of action. Children between 3 and 5 years will put their hands over their mouths to stop themselves saying something. In time this 'executive inhibition' can be done internally, but it takes effort. It draws down on a limited resource.

In the organisation there are several mechanisms for self-restraint. Budgets act as inhibitors by containing costs in parts of the organisation and overall. Also, organisational policies, procedures, standards and guidelines are often designed to inhibit behaviours other than those already approved. The development of procedures is often motivated by ‘error’ – something has gone wrong and the organisation tries to make sure it doesn’t happen again. Procedures, new or just changing, often meet some resistance and it takes effort or resource to overcome it. Once bedded in, however, they can be executed more cheaply. The operation of restraint has become automatic. It has moved from the proactive and exception handling control system to the routine and automatic control system.

Stage 3 – Imagery and rapid brain development

In the individual, imagery is 'the mind's eye' or 'a theatre in the mind'. It is the ability to resurrect (visual and other) images from the past, together with their accompanying emotions, to deal with the present. In the child, imagery develops between 3 and 5 years. This is mainly imagery of situations represented in all the salient senses – visual, auditory, tactile, taste and smell – whatever was relevant at the time. The imagery may be stored along with how you feel about it, good or bad (to some degree). Pattern-matching triggers memories and resurrects relevant imagery from the past to act as a guide or a map that can be used in the here and now.

The developing brain at this stage consumes 60% of the glucose taken in as food and is creating new connections between neurones at a rapid rate. Although the visual systems in the brain can be fully wired up from the age of one, other sensory modalities take longer. At some stage a tipping point is reached, when infrequently used connections are purged. This developmental progression is thought to facilitate innovation and hypothesis testing about the environment, up to the point where consolidation on viable interpretations sets in. The young mind mimics the progression of science: exploration before consolidation into useful knowledge that can drive applications.

Youtube Video, Alison Gopnik Lecture at CFI – When and why children are more intelligent than adults are, Future of Intelligence, September 2017, 1:31:50 hours starting at 15:35 minutes

In the organisation, the memories of staff and the management of files, file sharing, databases and information systems are its 'working memory'. These systems are largely designed with retrieval in mind. Although the data captured is not inherently what you would call image-evoking, it does perform the same function of retrieving memories (records, documents, anecdotes) that help guide action in the here and now. Envisioning activities, prototyping, simulation and modelling in the organisation parallel the 'theatre of the mind'. They are part of the organisation's imaginative activity, where ideas can be tried out before they are fully implemented (and incur the full costs and consequences of acting on the world outside the organisation).

Stage 3a – Theory of Mind

There is another important stage that appears to develop between the ages of 3 and 5 years but tends not to be emphasised in the mainstream literature on executive function. This is the so-called 'theory of mind' (REF 6a) – the ability of a person to model what another person is thinking and feeling. Experiments with children show that a three-year-old expects everybody else to know what they themselves know, while by 5 years a child understands that other people can have different beliefs from their own. If, for example, the contents of a chocolate box are replaced with, say, crayons in front of a three-year-old, they will think that somebody later coming into the room will expect to find crayons in the box rather than chocolates. They are unable to differentiate between their own knowledge and the knowledge of others.

REF 6a
YouTube Video, Robert Seyfarth: Theory of Mind, Richard Dawkins Foundation for Reason & Science, May 2010, 3:36 minutes
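
As a toy computational illustration of what the task above probes (and only that; it is not a model of how children actually represent beliefs), the sketch below, in Python with invented names, separates the state of the world from each agent's belief about it, where a belief is updated only by what that agent has observed.

# Toy illustration of the false-belief distinction: the world versus an
# agent's belief about it. Names and details are invented for the example.

class Agent:
    def __init__(self, name):
        self.name = name
        self.belief = {}          # what this agent has seen inside containers

    def observe(self, container, contents):
        self.belief[container] = contents

    def expects(self, container, label="chocolates"):
        # An agent who has never looked inside falls back on the label.
        return self.belief.get(container, label)

world = {"chocolate_box": "chocolates"}
child = Agent("child")
newcomer = Agent("newcomer")

# The child watches the box being emptied and refilled with crayons;
# the newcomer is out of the room and observes nothing.
world["chocolate_box"] = "crayons"
child.observe("chocolate_box", "crayons")

# A 3-year-old projects their own knowledge onto the newcomer;
# a 5-year-old tracks the newcomer's belief as a separate state.
print("3-year-old's prediction for the newcomer:", child.expects("chocolate_box"))     # crayons
print("5-year-old's prediction for the newcomer:", newcomer.expects("chocolate_box"))  # chocolates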

Theory of mind may not be addressed in mainstream accounts of executive function because it is thought of as a social skill rather than a fundamental information-processing capability, but I think it should be treated as a key part of executive function because it is a base on which later executive functions develop. The multiple voices that develop in private speech, for example, are akin to the playing out in the mind of multiple belief systems, and theory of mind must also have an impact on the management of one's own emotions and motivations. In fact, the implications of theory of mind are so significant that it has generated its own large literature. Accounts of autism and Asperger's spectrum disorders increasingly reference deficiencies both in executive function and in theory of mind, adding further support to the argument that theory of mind should be seen as an aspect of executive function.

Research at the Max Planck Institute suggests that the maturation of fibres of a brain structure called the arcuate fascicle, between the ages of three and four years, establishes a connection between (1) a region at the back of the temporal lobe that supports adults thinking about others and their thoughts and (2) a region in the frontal lobe that is involved in keeping things at different levels of abstraction.

REF 6b
Article, Brain Structures that help us understand What Others Think Revealed, Neuroscience News, March 2017
http://neurosciencenews.com/fiber-connection-cognition-6293/

In the organisation, 'theory of mind' is akin to understanding your competitors and your markets. If an organisation's theory is accurate, then it will be better able to anticipate the consequences of events, both those that it controls and those that are external (e.g. government legislation and changes in market conditions). For example, if an organisation changes the price of one of its products, it would be useful to be able to predict what its customers and competitors will think about this and how they are likely to respond. A good businessman, like a good car salesman, may have an instinct for how customers will respond and may be able to construct a more or less complicated strategy that will drive the behaviours of others in particular ways.

From 5 Years

Stage 4 – Private Speech

In the individual, at 3 years old everything is public. Children talk to themselves about the world. Listening to their own speech is a mechanism facilitating reflection and self-control. Between 3 and 5 years, vocal actions and accompanying facial expressions become suppressed, and the voice becomes internalised as a silent mechanism of self-control.

Artificial intelligence is now being recruited to re-create the kind of dialogue we have with our inner voices.

REF 6c
BBC Radio 4, The Digital Human – Series 11, Echo, May 2017, 5:27 minutes
http://www.bbc.co.uk/programmes/b08npnh6

However, even as adults, the nuances of facial expression leak information about what is going on in the mind, but most adults learn to distinguish between situations in which this is useful and those in which it presents some danger. They can also learn to dissociate what is going on in the mind from what leaks out in the face and body language, thereby conferring the ability to deceive (REF 7).

REF 7
BBC Radio 4, Where do voices inside our heads come from?, April 2016, 5:27 minutes
http://www.bbc.co.uk/programmes/p03qq9mt

Youtube Video, What Causes The Voice In Your Head?, Thoughty2, August 2015, 6:57 minutes

In the organisation, executives are only too aware that they cannot air all of their thoughts in public. Executives are ultra-careful about what they communicate, to whom and how – or they soon learn to be. Board meetings are often closed, and communications can be deliberately targeted, sometimes with the help of a communications or PR department. Wise executives rarely blurt out the first thing that comes into their heads. They inhibit that tendency and use their own thoughts first to control their own behaviour. They exercise self-control. Some private speech ends up in the boardroom, especially in closed sessions, while the public speech is crafted by the PR department. Just as in the individual, this can be crafted to deceive or mislead; but also, just as in the individual, the real danger comes when the executive starts to believe its own deceptions.

Stage 5 – Management of own emotions

The individual, by resurrecting images of the past, can 'control' his or her own emotional states in order to be able to socialise more effectively and not drive other people away. An individual can act to put themselves in a better frame of mind, not make important decisions while angry, and otherwise act to exert some control over their own emotional state.

Daniel Goleman (REF 8), in his book 'Emotional Intelligence', identifies four aspects of emotional intelligence: (1) self-awareness, (2) self-management, (3) empathy and (4) relationship management. The first two of these are regarded as key executive functions, whilst empathy and relationship management extend executive function into the social sphere and do not fully develop until adulthood.

REF 8
YouTube Video, Daniel Goleman Introduces Emotional Intelligence, April 2012, 5:31 minutes

Can organisations be said to have emotions? The answer is 'yes'. Announcing profits, losses or redundancies, being given awards or receiving a bad press can have emotional repercussions throughout the organisation. Sustained 'moods' can have implications for the organisational culture. Some organisations have enthusiasm and optimism, while others have low morale and become depressed and dysfunctional. Some organisations feel threatened, get anxious and show some of the common human defence mechanisms, such as denial, over-compensation, projection and compartmentalisation (REF 9).

REF 9
Article, 15 Common Defense Mechanisms, John M. Grohol, Psy.D.
http://psychcentral.com/lib/15-common-defense-mechanisms/

The organisational memory of emotional events in the past (both traumatic and elating) can help in managing a current situation, but it often relies on there still being people around who remember the past. Most executives are only too aware of the relationship between the mood and culture of an organisation and its performance, and act to manage the mood. Even when times are hard, they convey a positive message and vision that helps take the organisation forward. However, an unrealistic representation, or a glossing over of current circumstances, risks losing the trust of the people who are the key to future success.

Stage 6 – Management of own motivations

In the individual, this is self-motivation or self-determination. Managing your own motivations frees you from thinking and acting in ways that have simply been learnt, either through practice in response to circumstances or by copying others. It opens new doors. You no longer have to be driven by habits or others' expectations. You can think for yourself, determine your own goals, prioritise them as you think fit and work towards them in any way that you like.

Imagery has already developed to allow external consequences to be substituted by mental representation. Motivations can thereby be created in relation to events that are distant in space and time, and these can be reasoned about and managed without recourse to acting on the outside world.

In the organisation: Organisations specialise in the management of motivation. In particular, they manage motivations with respect to profit (or at least self-sustainability), but this is achieved with reference to the organisation's mission. Many companies will have a list of strategic goals or intentions, and although profit often comes high on the list, there are others such as customer and staff satisfaction. Often separate departments take charge of these different motivations, but eventually it is the executive that must coordinate them. At worst, it must suppress conflicts. At best, it provides orchestration, aligning motivations so that parts of the organisation support each other.

Daniel Pink, in his book 'Drive' (REF 10), describes recent studies on how organisational incentives affect employee motivation. For routine tasks that can be performed on 'autopilot' (System 1 thinking), monetary incentives seem to work well as a means of keeping people on task and increasing performance. However, and running counter to previous views, it seems that financial incentives either do not work or actually impair performance when the work involves higher-level executive functions such as reasoning and problem solving. The more effective incentives for these types of task are autonomy, mastery and purpose. Autonomy means allowing employees the freedom to achieve goals in a manner of their own choosing, rather than having the method defined and prescribed. Mastery means giving employees the freedom and resources to develop their own skills to a high standard, helping engender a greater sense of self-worth. Purpose refers to a socially useful purpose beyond that of the individual: joining with others to achieve something great that the individual could not have achieved alone. Pink argues that the 21st-century worker must be incentivised in this way or they will not be sufficiently agile, resourceful, flexible and resilient to cope with the rapidly changing demands of a modern global economy.

REF 10
YouTube Video, The puzzle of motivation | Dan Pink, TED, August 2009, 18:36 minutes

Stage 7 – Internalised Play

In the individual, the last manifestation of executive function is internalised play (REF 11). Internal play involves self-awareness and analysis, imagery, synthesis, planning, emotional and motivational control, and problem solving. It builds on all the other executive functions to allow us to take apart any object of our thoughts and reconstruct it in the mind, in novel ways, to meet the needs of the moment.

REF 11
YouTube Video, Learning Through Play: Developing Children’s Executive Function, Center on the Developing Child at Harvard University, September 2015, 27 seconds

In the organisation: Organisations that are big and profitable enough make room for a lot of internal play and experimentation, only some of which will lead anywhere. Play itself allows the organisation to exercise its muscles, fine-tune its processes and see where ideas might lead without heavy financial commitment.

Several references are provided below to elaborate on this and to show how play is a necessary ingredient in the development of the highest levels of executive function.

Implications for Organisational Development

Self-Awareness: Without self-awareness there is no self-control, but equally damaging is an inaccurate or biased self-awareness. Key Performance Indicators (KPIs) and other management information systems can provide self-awareness, but in the same way that an individual can become preoccupied with their own inaccurate perception of themselves, an organisation can become equally distracted by KPIs that are easy to measure but are not closely aligned to its mission and strategy. It is only too easy to be deceived by the apparent objectivity of KPIs, especially when it is in the interests of different parts of the organisation to supply the data they know the executive wants to see. In the same way that individuals tend to select the information that confirms their prejudices, an organisation can be similarly 'blind' to information it feels uncomfortable with.

Self-Restraint: An individual without self-restraint is often impulsive, easily distracted, lacks focus and fails to finish tasks. Too much restraint, by contrast, makes the individual inflexible and fixated. Organisations without self-restraint often pursue short-term goals at the expense of longer-term profitability. They are unable to defer gratification. By contrast, some organisations have a tendency to over-control and tie themselves up in their own bureaucracy. Over time, more and more procedure is put into place, often to correct errors of the past, until the organisation is so rigid that it cannot respond to change. This is one reason that organisations often continuously restructure, and why some organisations seem to pulsate as control alternates between being drawn into the centre and distributed to autonomous operating units. Each process seems to run away with itself and then needs to be reined in again. The best organisations, rather than controlling, simply provide services to their managers, making it easy to carry out the functions that are central to the strategy and more difficult to do anything else.

Imagery: The capacity to create, store and retrieve information is key to the individual in their personal development. Without the capability to learn from the past and retrieve that information when relevant an individual would act like an amnesic. An organisation without a memory of the past is similarly disoriented, and will stumble about without an understanding of what works and what doesn’t. Furthermore, without memory it is impossible to imagine what could be. Images of the past are the building blocks on which futures are built, often combining elements of the past in new ways to create novel solutions.

Theory of Mind: Understanding how customers and markets will react to events, including those events an organisation has control over, is critical in navigating an organisation through a constantly changing world. An organisation, say a government, that fails to anticipate a negative reaction to a new policy, law or budget change may find itself having to backtrack and even apologise. This is perceived as a weakness precisely because it demonstrates that the organisation has an inadequate theory about the other players. In politics in particular, it shows incompetence, because politics is all about the anticipation and management of others' reactions.

Private Speech: Private speech is more than just suppressing what is shown in public. It is the capacity to create internal dialogue and debate, to model and speculate on possible consequences of actions that have not yet been performed. The evaluation of actions before they are performed is essential to good decision making in both the individual and the organisation. Investment decisions benefit greatly from hearing a range of voices, from both within and outside the organisation, before they are acted on.

Management of Emotions: Emotions are at the root of most decision-making because they affect both an individual's and an organisation's priorities. The prevailing ethos of an organisation, and how it affects the way staff feel, can be critical to the smooth functioning of the organisation. Some organisations have a 'blame' culture and, all other factors being equal, any spontaneous activity on the part of employees is suppressed. Others encourage free thinking and innovation. How many organisations monitor these cultural and emotional factors and manage them as standing agenda items? Even when they are managed, most organisations are ineffective in controlling the prevailing emotional ethos, and their interventions can easily backfire, especially if they look manipulative.

Management of Motivations: Like other functions ‘management of motivations’ can be done over-zealously or in too relaxed a manner. Self-determination is an asset so long as it does not fly in the face of circumstances. Motivations and intentions have to compromise with circumstance. The market has to be ready for your brilliant idea. Having said that, switching motivations has a cost. Even introducing a new service is expensive, let alone an entire change of strategy. Balancing the benefits of sticking to your guns with the cost of being flexible is a necessary skill. Organisations that manage to successfully grow organically achieve this balance.

Play: An individual cannot be fully functional without the opportunity to integrate all their mechanisms of executive control around the activity of play. Play allows experimentation and innovation in a safe environment, away from the dangers of the real world. Similarly, the organisation cannot be said to be fully functional without some room to play. The organisational practices of accounting for everything, monitoring every key performance indicator or extracting every last drop of employee or shareholder value lead to organisations that are essentially reactive and immature. They are unpractised at thinking deeply at all organisational levels, and therefore lack resilience and the ability to adapt smoothly to changing circumstances. Like the individual who has failed to develop a wide range of coping strategies, they may lurch from crisis to crisis. Essentially, any change in circumstances can result in them becoming 'out of control'.