– AI: Measures, Maps and Taxonomies

Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s science festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.

José opened by pointing out that artificial intelligence is not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and in how little training it needs to learn new concepts.

José Hernández-Orallo

José‘s main message was how, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which we can compare artificial and human intelligence or to determine the pace of progress in artificial intelligence. We have no maps that enable us to navigate around the space of artificial intelligence offerings (for example, which offerings might be ethical and which might be potentially harmful). And lastly, we have no taxonomies to classify approaches or examples of artificial intelligence.

Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or more generally reinforcement learning), there is no overall, widely used classification scheme.

Intelligence not included

My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See, for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time they might score at superhuman levels with respect to some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity, the point at which artificial intelligence might overtake human intelligence.

Another take on this would be to look at skills. Interestingly, systems like Amazon’s Alexa describe the applications or modules that developers offer as ‘skills’. So, for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to perform some task effectively. However, by any standard, the skill offered by a typical Alexa ‘skill’, Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production, side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
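To make the idea concrete, a skills-based map might start as nothing more than a structure relating named skills to broader capability domains, with a rough competence level for each. The sketch below is purely illustrative: the `Skill` and `Agent` names, the domains and the example levels are my own invention, not an established taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    domain: str    # hypothetical capability domain, e.g. "perception"
    level: float   # 0.0 (absent) .. 1.0 (rough human parity)

@dataclass
class Agent:
    name: str
    skills: list = field(default_factory=list)

    def profile(self):
        """Average skill level per domain -- a crude capability 'map'."""
        domains = {}
        for s in self.skills:
            domains.setdefault(s.domain, []).append(s.level)
        return {d: sum(levels) / len(levels) for d, levels in domains.items()}

# A voice assistant scores highly on perception and expression,
# but poorly on actual domain knowledge -- mirroring the point above.
alexa_like = Agent("voice-assistant", [
    Skill("speech recognition", "perception", 0.9),
    Skill("speech production", "expression", 0.8),
    Skill("hotel booking", "domain knowledge", 0.2),
])
print(alexa_like.profile())
```

Even a toy profile like this makes visible the imbalance the paragraph describes: superhuman narrow perception alongside negligible domain competence.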

When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.

But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.

– Ways of knowing (HOS 4)

How do we know what we know?

This article considers:

(1) the ways we come to believe what we think we know

(2) the many issues with the validation of our beliefs

(3) the implications for building artificial intelligence and robots based on the human operating system.


I recently came across a video (on the site http://www.theoryofknowledge.net) that identified the following ‘ways of knowing’:

  • Sensory perception
  • Memory
  • Intuition
  • Reason
  • Emotion
  • Imagination
  • Faith
  • Language

This list is mainly about mechanisms or processes by which an individual acquires knowledge. It could be supplemented by other processes, for example, ‘meditation’, ‘science’ or ‘history’, each of which provides its own set of approaches to generating new knowledge for both the individual and society as a whole. There are many different ways in which we come to formulate beliefs and understand the world.

Youtube Video, TOK Ways of Knowing EXPLAINED | Theory of Knowledge Advice, Ivy Lilia, October 2018, 6:16 minutes


In the spirit of working towards a description of the ‘human operating system’, it is interesting to consider how a robot or other artificial intelligence (AI) that was ‘running’ the human operating system would draw on its knowledge and beliefs in order to solve a problem (e.g. resolve some inconsistency in its beliefs). This forces us to operationalize the process and define the control mechanism more precisely. I will work through the above list of ‘ways of knowing’ and illustrate how each might be used.


Let’s say that the robot is about to go and do some work outside and, for a variety of reasons, needs to know what the weather is like (e.g. in deciding whether to wear protective clothing, or how suitable the ground is for sowing seeds or digging up for some construction work etc.).

First it might consult its senses. It might attend to its visual input and note the patterns of light and dark, comparing this to known states and conclude that it was sunny. The absence of the familiar sound patterns (and smell) of rain might provide confirmation. The whole process of matching the pattern of data it is receiving through its multiple senses with its store of known patterns can be regarded as ‘intuitive’ because it is not a reasoning process as such. In the Kahneman sense of ‘system 1’ thinking, the robot just knows without having to perform any reasoning task.
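This ‘system 1’ matching step could be sketched as a nearest-prototype lookup: the robot compares its current sensory reading against stored patterns and simply returns the closest one, with no reasoning involved. The prototypes, feature dimensions and values below are hypothetical.

```python
import math

# Hypothetical stored 'known states': sensory prototypes the robot
# has previously learned, as (brightness, rain-sound, rain-smell).
PROTOTYPES = {
    "sunny":   (0.9, 0.1, 0.0),
    "raining": (0.3, 0.8, 0.9),
    "night":   (0.05, 0.2, 0.0),
}

def intuit(reading):
    """System-1 style judgement: return the nearest stored pattern.
    No inference is performed -- just a distance comparison."""
    return min(PROTOTYPES, key=lambda k: math.dist(PROTOTYPES[k], reading))

# Bright, quiet, no smell of rain: matches the 'sunny' prototype.
print(intuit((0.85, 0.15, 0.0)))
```

The point of the sketch is that the output arrives without any chain of reasoning, which is exactly what makes such judgements fast but also hard to explain or interrogate.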

Youtube Video, System 1 and System 2, Stoic Academy, February 2017, 1:26 minutes

The knowledge obtained from matching perception to memory can nevertheless be supplemented by reasoning, or other forms of knowledge that confirm or question the intuitively-reached conclusion. If we introduce some conflicting knowledge, e.g. that the robot thinks it’s the middle of the night in its current location, we then create a circumstance in which there is dissonance between two sources of knowledge – the perception of sunlight and the time of day. This assumes the robot has elaborated knowledge about where and when the sun is above the horizon and can potentially shine (e.g. through language – see below).

In people the dissonance triggers the emotional state of ‘surprise’ and the accompanying motivation to account for the contradiction.

Youtube Video, Cognitive Dissonance, B2Bwhiteboard, February 2012, 1:37 minutes

Likewise, we might label the process that causes the search for an explanation in the robot as ‘surprise’. An attempt may be made to resolve this dissonance through Kahneman’s slower, more reasoned, system 2 thinking. Either the perception is somehow faulty, or the knowledge about the time of day is inaccurate. Maybe the robot has mistaken the visual and audio input as coming from its local senses when in fact the input has originated from the other side of the world. (Fortunately, people do not have to confront the contradictions caused by having distributed sensory systems).

Probably in the course of reasoning about how to reconcile the conflicting inputs, the robot will have had to run through some alternative possible scenarios that could account for the discrepancy. These may have been generated by working through other memories associated with either the perceptual inputs or other factors that have frequently led to misinterpretations in the past. Sometimes it may be necessary to construct unique possible explanations out of component explanations. Sometimes an explanation may emerge through the effect of numerous ideas being ‘primed’ through the spreading activation of associated memories. Under these circumstances, you might easily say that the robot was using its imagination in searching for a solution that had not previously been encountered.
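The priming effect described here can be illustrated with a toy spreading-activation model: activation flows outward from a seed concept along weighted associative links, decaying as it travels, so that related ideas become available as candidate explanations. The link structure, weights and parameters below are invented purely for illustration.

```python
# Toy associative memory: weighted links between concepts.
LINKS = {
    "sunlight": [("daytime", 0.9), ("camera-feed", 0.4)],
    "camera-feed": [("remote-sensor", 0.8)],
    "daytime": [("local-time", 0.7)],
    "remote-sensor": [("other-side-of-world", 0.9)],
}

def spread(seed, decay=0.5, threshold=0.1):
    """Propagate activation from a seed concept. Concepts whose
    activation exceeds the threshold become 'primed' -- available
    as ingredients for a candidate explanation."""
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in LINKS.get(node, []):
            a = activation[node] * weight * decay
            if a > activation.get(neighbour, 0.0) and a > threshold:
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

print(spread("sunlight"))
```

With these particular weights, weakly linked ideas (like the remote-sensor explanation) fall below the threshold and never get primed, which mirrors how an explanation can fail to come to mind at all.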

Youtube Video, TEDxCarletonU 2010 – Jim Davies – The Science of Imagination, TEDx Talks, September 2010, 12:56 minutes

Lastly, to faith and language as sources of knowledge. Faith is different because, unlike all the other sources, it does not rely on evidence or proof. If the robot believed, on faith, that the sun was shining, any contradictory evidence would be discounted, perhaps either as being in error or as being irrelevant. Faith is often maintained by the faith of others, and this could be regarded as a form of evidence, but in general, if you have faith in or trust something, that faith fills the gap between the belief and the direct evidence for it.

Here is a religious account of faith that identifies it with trust in the reliability of God to deliver, where the main delivery is eternal life.

Youtube video, What is Faith – Matt Morton – The Essence of Faith – Grace 360 conference 2015,Grace Bible Church, September 2015, 12:15 minutes

Language as a source of evidence is a catch-all for the knowledge that comes second hand from the teachings and reports of others. This is indirect knowledge, much of which we take on trust (i.e. faith), and some of which is validated by direct evidence or other indirect evidence. Most of us take on trust that the solar system exists, that the sun is at the centre, and that earth is in the third orbit. We have gained this knowledge through teachers, friends, family, TV, radio, books and other sources that in their turn may have relied on astronomers and other scientists who have arrived at these conclusions through observation and reason. Few of us have made the necessary direct observations and reasoned inferences to have arrived at the conclusion directly. If our robot were to consult databases of known ‘facts’, put together by people and other robots, then it would be relying on knowledge through this source.

Pitfalls

People like to think that their own beliefs are ‘true’ and that these beliefs provide a solid basis for their behaviour. However, the more we find out about the psychology of human belief systems, the more we discover the difficulties in constructing consistent and coherent beliefs, and the shortcomings in our abilities to construct accurate models of ‘reality’. This creates all kinds of difficulties for people in agreeing about which beliefs are true and therefore how we should relate to each other in peaceful and productive ways.


If we are now going on to construct artificial intelligences and robots that we interact with, and whose behaviours impact the world, we want to be pretty sure that the beliefs a robot develops still provide a basis for understanding its behaviour.


Unfortunately, every one of the ‘ways of knowing’ is subject to error. We can again go through them one by one and look at the pitfalls.

Sensory perception: We only have to look at the vast body of research on visual illusion (e.g. see ‘Representations of Reality – Part 1’) to appreciate that our senses are often fooled. Here are some examples related to colour vision:

Youtube Video, Optical illusions show how we see | Beau Lotto,TED, October 2009, 18:59 minutes

Furthermore, our perceptions are heavily guided by what we pay attention to, meaning that we can miss all sorts of significant and even life-threatening information in our environment. Would a robot be similarly misled by its sensory inputs? It’s difficult to predict whether a robot would be subject to sensory illusions, and this might depend on the precise engineering of the input devices, but almost certainly a robot would have to be selective in what input it attended to. Like people, there could be a massive volume of raw sensory input, and every stage of processing from there on would contain an element of selection and interpretation. Even differences in what input devices are available (for vision, sound, touch or even super-human senses like perception of non-visual parts of the electromagnetic spectrum) will create a sensory environment (referred to as the ‘umwelt’ or ‘merkwelt’ in ethology) that could be quite at variance with human perceptions of the world.

YouTube Video, What is MERKWELT? What does MERKWELT mean? MERKWELT meaning, definition & explanation, The Audiopedia, July 2017, 1:38 minutes


Memory: The fallibility of human memory is well documented. See, for example, ‘The Story of Your Life’, especially the work done by Elizabeth Loftus on the reliability of memory. A robot, however, could in principle, given sufficient storage capacity, maintain a perfect and stable record of all its inputs. This is at variance with the human experience but could potentially mean that memory per se was more accurate, albeit that it would be subject to variance in what input was stored and the mechanisms of retrieval and processing.


Intuition and reason: This is the area where some of the greatest gains (and surprises) in understanding have been made in recent years. Much of this progress is reported in the work of Daniel Kahneman that is cited many times in these writings. Errors and biases in both intuition (system 1 thinking) and reason (system 2 thinking) are now very well documented. A long list of cognitive biases can be found at:

https://en.wikipedia.org/wiki/List_of_cognitive_biases

Would a robot be subject to the same types of biases? It is already established that many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot’s processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
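A minimal illustration of how bias gets baked in: if the ‘model’ is nothing more than the historical outcome rate for each group, then any skew in the historical process is reproduced verbatim at prediction time. The dataset, groups and decision rule below are fabricated for the sake of the example.

```python
# Hypothetical historical records: (qualification_score, group, hired?).
# The historical process favoured group 'A' regardless of qualification.
history = [
    (0.9, "A", 1), (0.4, "A", 1), (0.7, "A", 1),
    (0.9, "B", 0), (0.8, "B", 0), (0.6, "B", 0),
]

def train(data):
    """'Learn' the historical hire rate per group --
    the simplest possible model of the data."""
    outcomes = {}
    for _score, group, hired in data:
        outcomes.setdefault(group, []).append(hired)
    return {g: sum(h) / len(h) for g, h in outcomes.items()}

def predict(model, group):
    # The decision inherits the historical skew wholesale.
    return model[group] >= 0.5

model = train(history)
# Equally qualified candidates receive different outcomes:
print(predict(model, "A"), predict(model, "B"))
```

Real machine-learning models are far more elaborate, but the underlying mechanism is the same: the training signal is the historical record, so the model optimises towards reproducing it, skew included.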

Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes

As for reasoning, there is some hope that the robustness of proofs that can be achieved computationally may save the artificial intelligence or robot from at least some of the biases of system 2 thinking.


Emotion: Biases in people due to emotional reactions are commonplace. See, for example:

Youtube Video, Unconscious Emotional Influences on Decision Making, The Rational Channel, February 2017, 8:56 minutes

However, it is also the case that emotions are crucial in decision-making. Emotions often provide the criteria and motivation on which decisions are made, and without them, people can be severely impaired in effective decision-making. Also, emotions provide at least one mechanism for approaching the subject of ethics in decision-making.

Youtube Video, When Emotions Make Better Decisions – Antonio Damasio, FORA.tv, August 2009, 3:22 minutes

Can robots have emotions? Will robots need emotions to make effective decisions? Will emotions bias or impair a robot’s decision-making? These are big questions and are only touched on here. But, briefly, there is no reason why emotions cannot be simulated computationally, although we can never know whether an artificial computational device will have the subjective experience of emotion (or thought). Probably some simulation of emotion will be necessary for robot decision-making to align with human values (e.g. empathy) and, yes, a side-effect of this may well be to introduce bias into decision-making.
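As a sketch of what ‘simulated emotion’ might mean computationally, consider a single global state variable that events raise or lower, and that re-weights an otherwise fixed decision rule. This is a toy appraisal model of my own devising, not a claim about how such a system should be built.

```python
class Robot:
    """A decision-maker with one crude global 'emotional' state."""

    def __init__(self):
        self.fear = 0.0   # global state, clamped to [0, 1]

    def appraise(self, event_threat):
        """Events raise or lower the global state (crude appraisal)."""
        self.fear = min(1.0, max(0.0, self.fear + event_threat))

    def act(self, expected_gain, risk):
        """The same inputs yield different choices under different
        states: fear inflates the effective weight given to risk."""
        return expected_gain > risk * (1.0 + 2.0 * self.fear)

r = Robot()
print(r.act(expected_gain=1.0, risk=0.8))   # calm: the gain wins
r.appraise(0.5)                             # a threatening event occurs
print(r.act(expected_gain=1.0, risk=0.8))   # fearful: the same option is refused
```

Note how this makes the trade-off explicit: the global state supplies motivation and criteria (as the paragraph above argues emotions do), and in doing so it also biases the outcome.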

For a selection of BBC programmes on emotions see:
http://www.bbc.co.uk/programmes/topics/Emotions?page=1


Imagination: While it doesn’t make much sense to talk about ‘error’ when it comes to imagination, we might easily make value-judgments about what types of imagination might be encouraged and what might be discouraged. Leaving aside debates about how, say, excessive exposure to violent video games might affect imagination in people, we can at least speculate as to what might or should go on in the imagination of a robot as it searches through or creates new models to help predict the impacts of its own and others’ behaviours.

A big issue has arisen as to how an artificial intelligence can explain its decision-making to people. While AI based on symbolic reasoning can potentially offer a trace describing the steps it took to arrive at a conclusion, AIs based on machine learning would be able to say little more than ‘I recognized the pattern as corresponding to so and so’, which to a person is not very explanatory. It turns out that even human experts are often unable to provide coherent accounts of their decision-making, even when they are accurate.

Having an AI or robot account for its decision-making in a way understandable to people is a problem that I will address in later analysis of the human operating system and, I hope, provide a mechanism that bridges between machine learning and more symbolic approaches.


Faith: It is often said that discussing faith and religion is one of the easiest ways to lose friends. Any belief based on faith is regarded as true by definition, and any attempt to bring evidence to refute it stands a good chance of being regarded as an insult. Yet people have different beliefs based on faith and they cannot all be right. This not only creates a problem for people, who will fight wars over it, but it is also a significant problem for the design of AIs and robots. Do we plug in the Muslim or the Christian ethics module, or leave it out altogether? How do we build values and ethical principles into robots anyway, or will they be an emergent property of their deep learning algorithms? Whatever the answer, it is apparent that quite a lot can go badly wrong if we do not understand how to endow computational devices with this ‘way of knowing’.


Language: As observed above, this is a catch-all for all indirect ‘ways of knowing’ communicated to people through media, teaching, books or any other form of communication. We only have to consider world wars and other genocides to appreciate that not everything communicated by other people is believable or ethical. People (and organizations) communicate erroneous information and can deliberately lie, mislead and deceive.

We strongly tend to believe information that comes from the people around us, our friends and associates, those people that form part of our sub-culture or in-group. We trust these sources for no other reason than we are familiar with them. These social systems often form a mutually supporting belief system, whether or not it is grounded in any direct evidence.

Youtube Video, The Psychology of Facts: How Do Humans (mis)Trust Information?, YaleCampus, January 2017

Taking on trust the beliefs of others that form part of our mutually supporting social bubble is a ‘way of knowing’ that is highly error-prone. This is especially the case when combined with other ‘ways of knowing’, such as faith, that by their nature cannot be validated. Will robot communities, which could talk to each other instantaneously and ‘telepathically’ over wireless connections, also be prone to the bias of groupthink?


The validation of beliefs

So, there are multiple ways in which we come to know or believe things. As Descartes argued, no knowledge is certain (see ‘It’s Like This’). There are only beliefs, albeit that we can be more sure of some than others, normally by virtue of their consistency with other beliefs. Also, we note that our beliefs are highly vulnerable to error. Any robot operating system that mimics humans will also need to draw on the many different ‘ways of knowing’, including a basic set of assumptions that it takes to be true without necessarily any supporting evidence (its ‘faith’, if you like). There will also need to be many precautions against AIs and robots developing erroneous or otherwise unacceptable beliefs and basing their behaviours on these.

There is a mechanism by which we try to reconcile differences between knowledge coming from different sources, or contradictory knowledge coming from the same source. Most people seem to be able to tolerate a fair degree of contradiction or ambiguity about all sorts of things, including the fundamental questions of life.

Youtube Video, Defining Ambiguity, Corey Anton, October 2009, 9:52 minutes

We can hold and work with knowledge that is inconsistent for long periods of time, but nevertheless there is a drive to seek consistency.

In the description of the human operating system, it would seem that there are many ways in which we establish what we believe and what beliefs we will recruit to the solving of any particular problem. Also, the many sources of knowledge may be inconsistent or contradictory. When we see inconsistencies in others we take this as evidence that we should doubt them and trust them less.

Youtube Video, Why Everyone (Else) is a Hypocrite, The RSA, April 2011, 17:13 minutes

However, there is, at least, a strong tendency in most people to establish consistency between beliefs (or between beliefs and behaviours), and to account for inconsistencies. The only problem is that we often achieve consistency by abandoning sound, evidence-based beliefs in favour of strongly held beliefs based on faith or the need to protect our sense of self-worth.

Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011. 4:31 minutes

From this analysis we can see that building AIs and robots is fraught with problems. The human operating system has evolved to survive, not to be rational or hold high ethical values. If we just blunder into building AIs and robots based on the human operating system we can potentially make all sorts of mistakes and give artificial agents power and autonomy without understanding how their beliefs will develop and the consequences that might have for people.

Fortunately there are some precautions we can take. There are ways of thinking that have been developed to counter the many biases that people have by default. Science is one method that aims to establish the best explanations based on current knowledge and the principle of simplicity. Also, critical thinking has been taught since Aristotle and fortunately many courses have been developed to spread knowledge about how to assess claims and their supporting arguments.

Youtube Video, Critical Thinking: Issues, Claims, Arguments, fayettevillestatenc, January 2011

Implications

To summarise:

Sensory perception – The robot’s ‘umwelt’ (what it can sense) may well differ from that of people, even to the extent that the robot can have super-human senses such as infra-red / x-ray vision, super-sensitive hearing and smell etc. We may not even know what its perceptual world is like. It may perceive things we cannot and miss things we find obvious.

Memory – human memory is remarkably fallible. It is not so much a recording as a reconstruction based on clues, influenced by previously encountered patterns and current intentions. Given sufficient storage capacity, robots may be able to maintain memories as accurate recordings of the states of their sensory inputs. However, they may be subject to similar constraints and biases as people in the way that memories are retrieved and used to drive decision-making and behaviour.

Intuition – if the robot’s pattern-matching capabilities are based on machine learning from historical training sets, then bias will be built into its basic processes. Alternatively, if the robot is left to develop from its own experience then, as with people, great care has to be taken to ensure its early experience does not lead to maladaptive behaviours (i.e. behaviours not acceptable to the people around it).

Reason – through the use of mathematical and logical proofs, robots may well have the capacity to reason with far greater ability than people. They can potentially spot (and resolve) inconsistencies arising out of different ‘ways of knowing’ with far greater adeptness than people. This may create a quite different balance between how robots make decisions and how people do using emotion and reason in tandem.
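Spotting inconsistencies between ‘ways of knowing’ can, in the simplest propositional case, be done by brute-force satisfiability checking: is there any possible world in which all of the robot’s beliefs hold at once? The sketch below encodes the earlier sunlight-at-night example; the encoding and variable names are mine.

```python
from itertools import product

ATOMS = ["daylight", "sun_up"]

# Beliefs expressed as boolean functions over a truth assignment.
beliefs = [
    lambda v: (not v["daylight"]) or v["sun_up"],  # daylight implies the sun is up
    lambda v: v["daylight"],                       # perception: it looks like daylight
    lambda v: not v["sun_up"],                     # clock: it is the middle of the night
]

def consistent(constraints):
    """Brute-force SAT check: try every assignment of truth values
    and ask whether any world satisfies all beliefs at once."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(c(v) for c in constraints):
            return True
    return False

print(consistent(beliefs))      # the full belief set is contradictory
print(consistent(beliefs[:2]))  # dropping the clock belief resolves it
```

Brute force only works for tiny belief sets, but the same question scales up via SAT solvers and theorem provers, which is the sense in which computational proof might outdo human system-2 reasoning.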

Emotion – human emotions are general states that arise in response to both internal and external events and provide both the motivation and the criteria on which decisions are made. In a robot, emerging global states could also potentially act to control decision-making. Both people, and potentially robots, can develop the capacity to explicitly recognize and control these global states (e.g. as when suppressing anger). This ability to reflect, and to cause changes in perspective and behaviour, is a kind of feedback loop that is inherently unpredictable. Not having sufficient understanding to predict how either people or robots will react under particular circumstances creates significant uncertainty.

Imagination – much the same argument about predictability can be made about imagination. Who knows where either a person’s or a robot’s imagination may take them? Chess computers out-performed human players because of their capacity to reason in depth about the outcomes of every move, not because they used pattern-matching based on machine learning (although it seems likely that this approach will have been tried and succeeded by now). Robots can far exceed human capacities to reason through and model future states. A combination of brute force computing and heuristics to guide search, may have far-reaching consequences for a robot’s ability to model the world and predict future outcomes, and may far exceed that of people.

Faith – faith is axiomatic for people and might also be for robots. People can change their faith (especially in a religious, political or ethical sense) but more likely, when confronted with contradictory evidence or sufficient need (i.e. to align with a partner’s faith), people will either ignore the evidence or find reasons to discount it. This can lead to multiple interpretations of the same basic axioms, in the same way as there are many religious denominations and many interpretations of key texts within these. In robots, Asimov’s three laws of robotics would equate to their faith. However, if robots used similar mechanisms as people (e.g. cognitive dissonance) to resolve conflicting beliefs, then in the same way as God’s will can be used to justify any behaviour, a robot may be able to construct a rationale for any behaviour whatever its axioms. There would be no guarantee that a robot would obey its own axiomatic laws.

Communication – The term language is better labeled ‘communication’ in order to make it more apparent that it extends to all methods by which we ‘come to know’ from sources outside ourselves. Since communication of knowledge from others is not direct experience, it is effectively taken on trust. In one sense it is a matter of faith. However, the degree of consistency across external sources (i.e. that a teacher or TV will reinforce what a parent has said etc.) and between what is communicated and what is directly observed (for example, that a person does what he says he will do) will reveal some sources as more believable than others. Also, we appeal to motive as a method of assessing degree of trust. People are notoriously influenced by the norms, opinions and behaviours of their own reference groups. Robots with their potential for high bandwidth communication could, in principle, behave with the same crowd psychology as humans, only much more rapidly and ‘single-mindedly’. It is not difficult to see how the Star Trek image of the Borg, acting as one consciousness, could come about.

Other Ways of Knowing

It is worth considering just a few of the many other ‘ways of knowing’ not considered above, partly because some of these might help mitigate some of the risks of human ‘ways of knowing’.

Science – Science has evolved methods that are deliberately designed to create impartial, robust and consistent models and explanations of the world. If we want robots to create accurate models, then an appeal to scientific method is one approach. In science, patterns are observed, hypotheses are formulated to account for these patterns, and the hypotheses are then tested as impartially as possible. Science also seeks consistency by reconciling disparate findings into coherent overall theories. While we may want robots to use scientific methods in their reasoning, we may want to ensure that robots do not perform experiments in the real world simply for the sake of making their own discoveries. An image of concentration camp scientists comes to mind. Nevertheless, in many small ways robots will need to be empirical rather than theoretical in order to operate at all.

Argument – Just like people, robots of any complexity will encounter ambiguity and inconsistencies. These will be inconsistencies between expectation and actuality, between data from one way of knowing and another (e.g. between reason and faith, or between perception and imagination etc.), or between a current state and a goal state. The mechanisms by which these inconsistencies are resolved will be crucial. The formulation of claims; the identification, gathering and marshalling of evidence; the assessment of the relevance of evidence; and the weighing of the evidence, are all processes akin to science but can cut across many ‘ways of knowing’ as an aid to decision making. Also, this approach may help provide explanations of a robot’s behaviour that would be understandable to people and thereby help bridge the gap between opaque mechanisms, such as pattern matching, and what people will accept as valid explanations.

Meditation – Meditation is a place-holder for the many ways in which altered states of consciousness can lead to new knowledge. Dreaming, for example, is another altered state that may lead to new hypotheses and models based on novel combinations of elements that would not otherwise have been brought together. People certainly have these altered states of consciousness. Could there be an equivalent in the robot, and would we want robots to indulge in such extreme imaginative states when we would have no idea what they might consist of? This is not necessarily to attribute consciousness to robots, which is a separate, and probably metaphysical, question.

Theory of mind – For any autonomous agent with its own beliefs and intentions, including a robot, it is crucial to its survival to have some notion of the intentions of other autonomous agents, especially when they might be a direct threat to survival. People have sophisticated but highly biased and error-prone mechanisms for modelling the intentions of others. These mechanisms are particularly alert for any sign of threat and, as a proven mechanism, tend to assume threat even when none is present. The people that did not do this died out. Work in robotics already recognizes that, to be useful, robots have to cooperate with people, and this requires some modelling of their intentions. As this last video illustrates, the modelling of others’ intentions is inherently complex because it is recursive.

YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes

If there is a conclusion to this analysis of ‘ways of knowing’ it is that creating intelligent, autonomous mechanisms, such as robots and AIs, will have inherently unpredictable consequences, and that, because the human operating system is so highly error-prone and subject to bias, we should not necessarily build them in our own image.

– Representations of reality 2

Part 1 looked at language and thought, mental models and computational approaches to how the mind represents what it knows about the world (and itself). Part 2 contrasts thinking in words with thinking in pictures, looking first at how evidence from brain studies informs the debate, and then concluding that all these approaches – linguistic, psychological, computational, neurophysiological and phenomenological – address much the same set of phenomena from different perspectives. Can freedom be defined in terms of our ability to reflect on our own perceptions and thoughts?

The Flexibility of Thought

Although we often seek order, certainty and clarity, and think that the world can be put in neat conceptual boxes, nothing could be further from the truth. Our thoughts and our language are full of ambiguity, flexibility and room for interpretation. And this is of great benefit. Just like a building or a bridge that cannot flex will be brittle and break, our thinking (and our social interaction) is made less vulnerable and more robust by the flexibility of language and thought.

Wittgenstein realised that categories do not really exist in any absolute sense. A particular concept, such as ‘furniture’, does not have necessary and sufficient defining features that let us say definitively whether any one object, say a piano or a picture, is furniture or not. Rather, pieces of furniture have a ‘family resemblance’ that makes them similar, but without any hard boundaries on what is inside or outside the category. Steven Pinker describes a man who was unable to categorise but nevertheless performed amazing feats of memory.
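The idea can be illustrated with a small, purely hypothetical Python sketch. The feature sets and the overlap threshold are invented for illustration, but they show how a category can hang together without any single defining feature:

```python
# Toy sketch of 'family resemblance': no single feature is shared by all
# members of the category, yet the members overlap with one another.
# The features and threshold below are illustrative assumptions.

furniture = {
    "chair": {"legs", "sit_on", "wooden"},
    "table": {"legs", "flat_top", "wooden"},
    "sofa":  {"sit_on", "upholstered", "flat_top"},
}

# No necessary-and-sufficient feature: the intersection of all members is empty.
common = set.intersection(*furniture.values())
print(common)  # set()

def resembles(features, category, threshold=1):
    """Judge membership by overlap with exemplars, not by defining features."""
    return any(len(features & exemplar) >= threshold
               for exemplar in category.values())

print(resembles({"sit_on", "metal"}, furniture))   # True: overlaps chair and sofa
print(resembles({"edible", "sweet"}, furniture))   # False: no overlap at all
```

The category is distinct enough to be useful, yet has no hard boundary – exactly the ‘fuzziness’ Wittgenstein described.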

YouTube Video, Professor Steven Pinker – Concepts & Reasoning, NCHumanities, First published October 2014, 1:10:40 hours

Pinker also considers reasoning – both deductive and inductive. Deductive reasoning is where a conclusion necessarily follows from a set of premises or assumptions – ‘all men are mortal’ and ‘Socrates is a man’ lead inevitably to the conclusion that Socrates is mortal. Inductive reasoning is where we generalise from the particular – we encounter five white swans and this leads us to the generalisation that ‘all swans are white’, even though this does not necessarily follow. He concludes that people can do deductive reasoning so long as they are dealing with concrete and familiar content, but easily go awry when the content is abstract. As for inductive reasoning, people are generally not very good at it, and thinking is subject to all manner of biases (as described by Kahneman).
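The contrast can be made concrete in a few lines of Python (the sets of men and swans are, of course, illustrative):

```python
# Deduction: the conclusion is guaranteed by the premises.
men = {"socrates", "plato"}
mortals = men | {"hypatia"}            # premise: all men are mortal
assert "socrates" in men               # premise: Socrates is a man
print("socrates" in mortals)           # True, and necessarily so

# Induction: a generalisation from observed cases, which may fail.
observed_swans = ["white"] * 5
print(all(c == "white" for c in observed_swans))   # True from the sample...
observed_swans.append("black")                     # ...until a black swan turns up
print(all(c == "white" for c in observed_swans))   # False
```

The deductive conclusion cannot be overturned by new data; the inductive one can be, with a single counterexample.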


Representation of Concepts in the Brain

Since technology has become available to scan brain activity, there has been a spate of studies that look at what is happening in the brain as people perform various mental tasks.

TED Video, Nancy Kanwisher: A neural portrait of the human mind, TED, March 2014, 17:40 minutes

Control Systems in the Brain

As well as looking at individual functional components it is possible to identify some of the gross anatomical parts of the brain with different forms of control.

http://totalbraintotalmind.co.uk/architecture

  • Cerebrum – Control mediated through conscious abstract thought and reflection
  • Cerebellum – Learned control and unconscious/subconscious processes
  • Brain stem – Innate level control

These ideas and a more fully elaborated nine-level brain architecture can be found in a free downloadable ebook available from:

http://totalbraintotalmind.co.uk


For more on the imaging techniques see:

YouTube Video, Magnetic Resonance Imaging Explained, ominhs, October 2011, 5:30 minutes

If you want to find out more about magnetic imaging techniques, there are several videos in the following YouTube playlist:

Using functional Magnetic Resonance Imaging (fMRI) techniques on people as they look at pictures of different objects (faces, road signs etc.) reveals not only something about object recognition in the brain’s visual system but also something about how we may form categories and concepts. Interestingly, it appears to validate the more armchair philosophical speculations about the ‘fuzziness’ of concepts (e.g. Wittgenstein’s notion of ‘family resemblance’). For example, in his research, Nikolaus Kriegeskorte investigates patterns of neural activity in humans and monkeys. The neural activity suggests conceptual clusters such as animate ‘bodies’ (e.g. a human or animal body) and inanimate objects, despite visual similarities between the members of each group. If we consider the complexity of these patterns of activity and the way in which the patterns overlap, it is possible to see how concepts can, at one and the same time, be both ‘fuzzy’ (i.e. have no necessary and defining features) and yet distinct (i.e. given separate linguistic labels such as animate or inanimate).

TSN Video, Representational similarity analysis of inferior temporal object population codes – Nikolaus Kriegeskorte, The Science Network, August 2010, 23:11 minutes

In fact, brain and cognitive scientists have made considerable progress in bridging between our understanding of brain activity and more symbolic representation in language.

TSN Video, Emergence of Semantic Structure from Experience – James McClelland, The Science Network, August 2010, 1:16 hours


The eventual direction of this type of work will be to integrate what we know about the brain into a simulation of how it works.

https://www.humanbrainproject.eu/brain-simulation-platform


Goals, Tasks and Mental Representation

Whilst both language and patterns of neural activity can be considered as mental representation, somehow neither really capture the level of representation that we intuitively feel underlie the performance of tasks and the ‘navigation’ towards goals.

When people perform tasks they have a model in their mind that guides their behaviour. To illustrate this, imagine going from one room to another in your house at night with the lights turned off. In your mind’s eye you have a mental map of the layout of the house and you use this to help guide you.

As you stumble about in the dark you will come across walls, pictures, doorways, stairways, shelves, tables and so on. Each of these will help reinforce your mental image and help validate your hypotheses about where you are. If you come across objects you do not recognise you will start to suspect your model. Any inconsistencies between your model and your experience will cause tension and a search for a way of reconciling the two, either by changing your model or by re-interpreting your experience.

It is often the case that mental representations are vague and fragmentary, needing reinforcement from the environment to maintain and validate them. Even so, conceptual models create expectations which guide the interpretation of experience and tension is created when the internal representation and the external experience are out of step.

In this example, by turning out the lights, we remove a major element of external feedback from the environment. All that is left is the conceptual or mental model supported by far less informative sensory mechanisms. Because you know your house well, the mental model acts as a strong source of information to guide behaviour. Even if you are in a strange house, your knowledge about how houses are typically designed and furnished will provide considerable guidance.
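One way to picture this reconciliation of model and experience is as a simple belief update over possible locations. The following Python sketch is purely illustrative – the rooms, objects and weightings are invented – but it shows how fragmentary evidence can either reinforce or undermine a mental model of the house:

```python
# A toy sketch of reconciling a mental model with fragmentary evidence:
# belief over rooms is re-weighted by whether an encountered object fits
# the model of each room. Rooms, objects and weights are illustrative.

house_model = {
    "kitchen": {"table", "shelves"},
    "hallway": {"pictures", "stairs"},
    "lounge":  {"sofa", "shelves"},
}

def update_belief(belief, observed):
    """Boost rooms whose model predicts the observed object, then normalise."""
    weighted = {room: p * (0.9 if observed in house_model[room] else 0.1)
                for room, p in belief.items()}
    total = sum(weighted.values())
    return {room: w / total for room, w in weighted.items()}

belief = {room: 1 / 3 for room in house_model}   # lights off: no idea where we are
belief = update_belief(belief, "shelves")        # consistent with kitchen or lounge
belief = update_belief(belief, "table")          # now the kitchen dominates
print(max(belief, key=belief.get))               # kitchen
```

Each bump in the dark either reinforces the current hypothesis or shifts belief towards an alternative – the ‘tension’ described above is just a belief distribution being pulled in two directions.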

Now consider an example where there is still a strong mental model that drives task performance, except it is less obvious because it does not involve the disabling of any sensory feedback from the environment.

Imagine performing the task of putting photographs in a physical album. You are driven by a view of what the finished product will look like. You may imagine the photographs organised by date, by place, or by who or what is shown in them. Alternatively, you may organise the album to tell a story, to be a random collection of pictures in no particular order, or to have all the better shots at the front and the worse ones at the back. Perhaps you have some constraints on the photos you must include or leave out. All these factors and visualisations form the conceptual model that stands behind the performance of the task. The activity of conceptual modelling is to capture this ‘mind’s eye’ view.

The mental model is not the task itself. The task of putting photographs in the album might be done in many different ways. For example, the behaviour would be quite different if the album were on a computer, involving mouse clicks and key presses rather than physical manipulation of the photographs. The task behaviour would also be different if you were instructing somebody else to put the photographs in the album for you.

The model is the internal mental representation that guides the task behaviour. It can be seen to be different from the behaviour itself, because the behaviour can be changed while keeping the model the same. If instructing somebody to put photographs in the album a particular way is not working effectively, you can take over the job yourself. You have the same image of the end product even though you achieve it in a different way.
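A minimal sketch of this distinction, using an invented photo-album example in Python: the ‘model’ is the desired ordering of the finished album, and two quite different behaviours realise the same model:

```python
# Toy sketch: the mental model (a desired ordering) is distinct from the
# behaviour that realises it. Photo data and action names are invented.

model = lambda photo: photo["date"]          # organise by date: the model

def arrange_physical(photos):
    """Behaviour 1: sort, then stick each photo into the album by hand."""
    return [f"stick {p['name']}" for p in sorted(photos, key=model)]

def arrange_digital(photos):
    """Behaviour 2: same model, but realised with drags and clicks."""
    return [f"drag {p['name']}" for p in sorted(photos, key=model)]

photos = [{"name": "beach", "date": 2}, {"name": "party", "date": 1}]
print(arrange_physical(photos))   # ['stick party', 'stick beach']
print(arrange_digital(photos))    # ['drag party', 'drag beach']
```

Swapping `arrange_physical` for `arrange_digital` changes the behaviour entirely while the model – the image of the end product – stays the same.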

A mental model need not necessarily be a goal. The model of the house was simply a representation that allows many different tasks to be performed and many different goals to be achieved. The goal may be to get out of the house, to get to the fuse box, or to check that somebody else in the house is safe. The same mental representation may support the achievement of many different goals.


Imagination, Envisioning and Visualisation

From the above it will be clear that although the mind can respond in an immediate and simple way to what is going on around it, for example by pulling back a hand when it touches something hot, it is also capable of sophisticated modelling of what might happen in the future. This is imagination or envisioning.

http://plato.stanford.edu/entries/aristotle-psychology/suppl4.html

Francis Galton, in 1880, published a classic paper in the journal Mind called ‘Statistics of Mental Imagery’, in which he set out some of the main characteristics of the ‘mind’s eye‘, in particular how people vary in the vividness of their mental images.

http://psychclassics.yorku.ca/Galton/imagery.htm

Jean Paul Sartre, in ‘The Psychology of Imagination’, distinguishes between perception, conceptual thinking and imagination.

The following playlist from the Open University looks at imagination and envisioning from perspectives from art through to neurophysiology.

https://www.youtube.com/playlist?list=PLBFE8D91E196C83B5

Stephen Kosslyn has been researching mental imagery since the 1970s and argues that people have, and can inspect, internal mental images when performing tasks. These images form a model or representation of reality in addition to propositional representations.

Youtube Video, 12. The Imagery Debate: The Role of the Brain, MIT OpenCourseWare, August 2012, 55:11 minutes (Embedded under policy of Fair Use)

However, the psychology of imagination is somewhat out of fashion at the moment as neurological approaches come to the fore. Yet talking about the mind in terms of mentalistic concepts like imagination remains under-exploited, both as a means of understanding mental representation and as a therapeutic tool.

Youtube Video, Interview Ben Furman 2 – Imagination in modern psychology, MentalesStärken, October 2014, 6:43 minutes


Phenomenology

One approach to understanding how we think is phenomenology (Edmund Husserl). This focuses on subjective experience. It is looking inside our own heads rather than trying to construct an objective and theoretical account. Philosophers (Heidegger, Jean Paul Sartre, Simone de Beauvoir) and psychologists (Amedeo Giorgi) have taken this approach. The focus of phenomenology is on being, existence, consciousness, meaning, and intuition. This, in some sense, comes before the great philosophical questions like what is truth and why are we here. It is the sheer realisation that we exist at all and concerns fundamental ideas like the nature of the self and the relationship of self to reality – what we perceive and how we interpret it, before we start to analyse it, put linguistic labels on it or think about it in any logical sense.

BBC Radio 4, In Our Time, Phenomenology, January 2015, 43 Minutes
http://www.bbc.co.uk/programmes/b04ykk4m

An idea that comes out of phenomenology is the notion of the gap between what we perceive and our reflections on our perceptions. So, we see a glass of water, but the content of our thought can be about our perception of the glass of water as well as the perception itself. We can reflect upon what we are seeing as well as simply seeing it. So much is obvious. Indeed, when I ask you to pass the glass of water I am making a reference to my perception of it and the assumption that you can perceive it too. If I ask, “where is the glass of water?” I am making a reference to a belief that the glass of water exists for both you and me even though I cannot currently perceive it.

The interesting idea is that the notion of freedom derives from this ability to not just perceive but to be able to reflect on the perception. This removes us from responding to the world in a purely mechanical way. Instead, there are intermediary states that we can consult when making decisions.

It turns out that what the phenomenologists referred to as the gap between perception and reflection, what the psychoanalysts described as the distinction between the id, ego and super-ego, what the psychologists developed into the notion of mental models, what Kahneman calls system 1 and system 2 thinking, what linguists think of in terms of semantic structure, and what the neurophysiologists have associated with higher layers of the brain such as the cortex, are all pretty much the same thing!

Mind the Gap

How the mind represents reality can be described at different levels from patterns of neural activity through to mentalistic concepts like imagination.

In reading the following very general and abstract account of mental processes, it is useful to think of an example, like driving a car. For an experienced driver it is almost automatic and requires little conscious thought or effort (until a child unexpectedly runs into the road). For a new driver it is a whole series of problems to be solved.

We can think of a person experiencing the world as a sensory ‘device’ attuned to monitoring our state of internal need and the gap between expectations and experience (our orientation). If all our needs are met, by default we coast along on automatic pilot simply monitoring the environment and noting any differences with our expectations (maintaining orientation). Expectations tune our sensory inputs and the inputs themselves activate neural pathways and may elicit or pre-dispose to certain outputs (behaviours or changes to internal states). Where we have needs, but know how to satisfy them (i.e. we have mastery), we engage appropriate solutions without effort or thought. The outputs can be behaviours that act on the world or changes to internal states (e.g. the states in our internal models). Some circumstances (either internal or external) may trigger a higher level control mechanism to over-ride default responses. When needs are met and experience and expectation are more or less aligned, our autonomic and well-learned responses flow easily. This, in Kahneman’s terms is relatively effort free, automatic and more or less subconscious, system 1 thinking.

Dissonance occurs when there is an unmet need or a difference between expectation and experience e.g. when there is a need to deal with something novel or some internal state is triggered to activate some higher level control mechanism (e.g. to inhibit an otherwise automatic reaction). If sufficient mental resources are available the mind is triggered to construct a propositional, linguistic or quasi-spatial/temporal representation that can then be internally inspected or consulted by the ‘mind’s eye’ in order to envisage future states and simulate the consequence of different outputs/behaviours before making a decision about the output (e.g. whether to act on the outside world or an internal state, and if so how). This is what Kahneman refers to as system 2 thinking. When we have done some system 2 thinking we sometimes go over it and consolidate it in our minds. These are the stories we construct to explain how we met a need or managed the difference between expectation and experience. The stories can then act as a shortcut to retrieving the solution in similar circumstances.
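A toy sketch of this escalation, with invented thresholds and situations: small gaps between expectation and experience are handled by cached (system 1) responses, while larger gaps trigger effortful deliberation (system 2), whose solution is then consolidated for reuse:

```python
# Toy sketch of system 1 / system 2 escalation as described above.
# The gap measure, threshold and 'deliberation' are all illustrative.

cache = {}                       # well-learned responses (system 1)

def deliberate(situation):
    """System 2: slow, effortful construction of a new solution."""
    return f"plan-for-{situation}"

def respond(situation, expectation, experience, threshold=0.5):
    gap = abs(expectation - experience)
    if gap < threshold and situation in cache:
        return cache[situation], "system 1"   # automatic and effortless
    solution = deliberate(situation)          # surprise: deliberate instead
    cache[situation] = solution               # consolidate the 'story' for reuse
    return solution, "system 2"

print(respond("child in road", 0.0, 1.0))  # first encounter: system 2
print(respond("child in road", 0.0, 0.1))  # familiar now: system 1
```

The cache is doing the work of the consolidated ‘stories’ described above: an expensive system 2 episode leaves behind a cheap shortcut for the next similar occasion.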

In a very simple system there is a direct mapping between input and output – flick the switch and the light comes on. In a highly complex system like the human brain, the mapping between input and output can be of extraordinary complexity. At its most complex, an input might trigger an internal state that creates an ‘on the fly’ (imaginary) model of the world which is then used to mentally ‘test’ different possible response scenarios before deciding which response, if any, to make.

As we experience the world (through learning and maturation) we adjust our expectations in line with our experience. Our brains and expectations become a progressively more refined model of our experience. When we are ‘surprised’, and recruit system 2 problem-solving thinking, we produce solutions. Solutions are outputs – either behaviours that act on the world or changes to internal states. Problem solving takes effort and resource but results in solutions that can potentially be re-used in similar circumstances in the future. This type of learning is going on at all levels of experience, from the development of sensory-motor skills like walking or driving a car through to high-level cognitive skills such as making difficult decisions and judgements in situations of uncertainty (e.g. a surgeon’s decision to operate on a life-threatening condition). System 1 and system 2 thinking are really just extremes of a spectrum. In practice, any task involves thousands of separate sub-processes, some of which are highly learned and automatic and some of which require a degree of problem solving. To an outside observer these processes often appear to mesh seamlessly together.

The learning we do and the models we construct in our minds are very dependent on our own experiences of the world (and this accounts for many of the biases in the way we think). Although our models can be influenced by other people’s stories about how the world works, e.g. through our education, peers, family, media etc. (or by observing what happens to others), the deepest learning takes place through our own direct experience, and because our experiences are all just different samples of a larger reality, we are all different from each other. Each one of us has sampled only an infinitesimally small fraction of that larger reality, but because of the consistencies in the underlying reality (for example, we all experience the same laws of physics) there are sufficient commonalities in our models that we understand each other to a greater or lesser extent.

Need and maintaining orientation drive us all, and when these are broadly (but not totally) under control, we have wellbeing. However, we must always have some manageable gap, so that the system is at least ticking over. This is easily achieved, because as lower-level needs are satisfied we can always move to others further up the hierarchy, and constant change in the world is usually enough to drive the maintenance of our orientation.


Radio programme links

An index of BBC Radio programmes on cognitive science can be found at:
http://www.bbc.co.uk/programmes/topics/Cognitive_science

An index of BBC Radio programmes on Mental processes can be found at:
http://www.bbc.co.uk/programmes/topics/Mental_processes


This Blog Post: ‘Representations of Reality Enable Control’ shows how different levels of description can be used to represent the knowledge that enables us to meet our needs and deal with the unexpected.

Next Up: ‘Are we free?’ delves deeper into freewill, consciousness and moral responsibility. If we are free, then in what sense is this true?