Tag Archives: Concepts
Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s science festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.
José opened by pointing out that artificial intelligence was not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and the amount of training needed to learn concepts.
José’s main message was that, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which we can compare artificial and human intelligence or determine the pace of progress in artificial intelligence. We have no maps that enable us to navigate the space of artificial intelligence offerings (for example, which offerings might be ethical and which potentially harmful). And lastly, we have no taxonomies by which to classify approaches to, or examples of, artificial intelligence.
Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or more generally reinforcement learning), there is no overall, widely used classification scheme.
My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See, for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time, they might score at superhuman levels on some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity – the point at which artificial intelligence might overtake human intelligence.
Another take on this would be to look at skills. Interestingly, systems like Amazon's Alexa describe the applications or modules that developers offer as 'skills'. So, for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to perform some task effectively. However, by any standard, the skill offered by a typical Alexa 'skill', Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production, side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
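As a toy illustration of what a skills-based map might look like (entirely my own sketch, with invented systems and made-up scores, not anything proposed in the talk), one could profile systems against a handful of narrow skills and compare the breadth of their competence:

```python
# Hypothetical skill profiles for imaginary AI systems (scores are invented).
systems = {
    "quiz_bot": {"speech": 0.2, "game_play": 0.0, "qa": 0.9, "empathy": 0.0},
    "voice_assistant": {"speech": 0.9, "game_play": 0.0, "qa": 0.3, "empathy": 0.1},
    "go_engine": {"speech": 0.0, "game_play": 1.0, "qa": 0.0, "empathy": 0.0},
}

def breadth(profile, threshold=0.5):
    """Breadth of intelligence: how many skills exceed a competence threshold."""
    return sum(score >= threshold for score in profile.values())

# Every system turns out to be narrow: superhuman on one skill, hopeless on the rest.
for name, profile in systems.items():
    print(name, "breadth:", breadth(profile))
```

Even this crude measure makes the narrowness point visible: each system clears the threshold on exactly one skill.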
When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.
But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.
Part 1 looked at language and thought, mental models and computational approaches to how the mind represents what it knows about the world (and itself). Part 2 contrasts thinking in words with thinking in pictures, looking first at how evidence from brain studies informs the debate, and then concluding that all these approaches – linguistic, psychological, computational, neurophysiological and phenomenological – are addressing much the same set of phenomena from different perspectives. Can freedom be defined in terms of our ability to reflect on our own perceptions and thoughts?
The Flexibility of Thought
Although we often seek order, certainty and clarity, and think that the world can be put in neat conceptual boxes, nothing could be further from the truth. Our thoughts and our language are full of ambiguity, flexibility and room for interpretation. And this is of great benefit. Just as a building or a bridge that cannot flex will be brittle and break, so our thinking (and our social interaction) is made less vulnerable and more robust by the flexibility of language and thought.
Wittgenstein realised that categories do not really exist in any absolute sense. A particular concept, such as ‘furniture’, does not have necessary and sufficient defining features such that we can say definitively that any one object, say a piano or a picture, is or is not furniture. Rather, pieces of furniture have a ‘family resemblance’ that makes them similar, but without any hard boundaries on what is inside or outside the category. Steven Pinker describes a man who was unable to categorise but was nevertheless capable of amazing feats of memory.
YouTube Video, Professor Steven Pinker – Concepts & Reasoning, NCHumanities, First published October 2014, 1:10:40 hours
Pinker also considers reasoning – both deductive and inductive. Deductive reasoning is where a conclusion necessarily follows from a set of premises or assumptions – all men are mortal and Socrates is a man, leads inevitably to the conclusion that Socrates is mortal. Inductive reasoning is where we generalise from the particular – so we encounter five white swans and this leads us to the generalisation that ‘all swans are white’ even though this may not necessarily follow. He concludes that people can do deductive reasoning so long as they are dealing with concrete and familiar content, but easily go awry when the content is abstract. As for inductive reasoning, people are generally not very good, and thinking is subject to all manner of biases (as described by Kahneman).
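The contrast between the two forms of reasoning can be caricatured in a few lines of code (a toy sketch of my own, not Pinker's): deduction applies a general rule to a particular case and cannot fail if the premises hold, while induction generalises from a finite sample and can always be refuted by the next observation:

```python
def deduce_mortal(is_man):
    """Deduction: all men are mortal; if Socrates is a man, he is mortal.
    The conclusion follows necessarily, but only when the premise holds."""
    return True if is_man else None

def induce_colour(observed_swans):
    """Induction: generalise 'all swans are X' from a finite sample.
    The generalisation is provisional and can be refuted by one counterexample."""
    colours = set(observed_swans)
    return colours.pop() if len(colours) == 1 else None

print(deduce_mortal(True))                          # True: Socrates is mortal
print(induce_colour(["white"] * 5))                 # white - until a black swan turns up
print(induce_colour(["white", "white", "black"]))   # None: the generalisation fails
```

The asymmetry is the point: the deductive function is guaranteed correct given its premises, whereas the inductive one silently depends on the sample being representative.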
Representation of Concepts in the Brain
Since technology has become available to scan brain activity, there has been a spate of studies that look at what is happening in the brain as people perform various mental tasks.
TED Video, Nancy Kanwisher: A neural portrait of the human mind, TED, March 2014, 17:40 minutes
Control Systems in the Brain
As well as looking at individual functional components it is possible to identify some of the gross anatomical parts of the brain with different forms of control.
- Cerebrum – Control mediated through conscious abstract thought and reflection
- Cerebellum – Learned control and unconscious/subconscious processes
- Brain stem – Innate level control
These ideas and a more fully elaborated nine-level brain architecture can be found in a free downloadable ebook available from:
For more on the imaging techniques see:
YouTube Video, Magnetic Resonance Imaging Explained, ominhs, October 2011, 5:30 minutes
If you want to find out more about magnetic imaging techniques then there are several videos in the following YouTube playlist:
Using functional magnetic resonance imaging (fMRI) techniques on people as they look at pictures of different objects (faces, road signs etc.) reveals not only something about object recognition in the brain’s visual system but also something about how we may form categories and concepts. Interestingly, it appears to validate the more armchair philosophical speculations about the ‘fuzziness’ of concepts (e.g. Wittgenstein’s notion of ‘family resemblance’). For example, in his research, Nikolaus Kriegeskorte investigates patterns of neural activity in humans and monkeys. The neural activity suggests conceptual clusters such as animate ‘bodies’ (e.g. a human or animal body) and inanimate objects, despite visual similarities between members of the two groups. If we consider the complexity of these patterns of activity and the way in which the patterns overlap, it is possible to see how concepts can, at one and the same time, be both ‘fuzzy’ (i.e. have no necessary and defining features) and yet distinct (i.e. be given separate linguistic labels such as animate or inanimate).
TSN Video, Representational similarity analysis of inferior temporal object population codes – Nikolaus Kriegeskorte, The Science Network, August 2010, 23:11 minutes
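The core of Kriegeskorte's method, representational similarity analysis, is simple enough to sketch. The sketch below uses invented 'voxel' patterns rather than real data: stimuli from the same category share a base pattern plus noise, and the resulting dissimilarity matrix shows the within-category clustering described above:

```python
# Minimal sketch of representational similarity analysis (RSA).
# All activity patterns here are simulated, not real recordings.
import numpy as np

rng = np.random.default_rng(0)
base_animate = rng.normal(size=50)
base_inanimate = rng.normal(size=50)

# Members of a category share a base pattern plus independent noise.
patterns = {
    "human": base_animate + 0.3 * rng.normal(size=50),
    "dog": base_animate + 0.3 * rng.normal(size=50),
    "chair": base_inanimate + 0.3 * rng.normal(size=50),
    "sign": base_inanimate + 0.3 * rng.normal(size=50),
}

def dissimilarity(a, b):
    """1 minus Pearson correlation: a standard RSA distance measure."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

names = list(patterns)
rdm = np.array([[dissimilarity(patterns[i], patterns[j]) for j in names]
                for i in names])

# Within-category distances come out much smaller than between-category
# ones: the 'fuzzy' clusters are visible in the geometry of the patterns.
print(np.round(rdm, 2))
```

No single feature separates animate from inanimate here; the clusters emerge from graded similarity across the whole pattern, which is exactly the family-resemblance picture.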
In fact, brain and cognitive scientists have made considerable progress in bridging between our understanding of brain activity and more symbolic representation in language.
TSN Video, Emergence of Semantic Structure from Experience – James McClelland, The Science Network, August 2010, 1:16 hours
The eventual direction of this type of work will be to integrate what we know about the brain into a simulation of how it works.
Goals, Tasks and Mental Representation
Whilst both language and patterns of neural activity can be considered as mental representation, somehow neither really capture the level of representation that we intuitively feel underlie the performance of tasks and the ‘navigation’ towards goals.
When people perform tasks they have a model in their mind that guides their behaviour. To illustrate this, imagine going from one room to another in your house at night with the lights turned off. In your mind’s eye you have a mental map of the layout of the house and you use this to help guide you.
As you stumble about in the dark you will come across walls, pictures, doorways, stairways, shelves, tables and so on. Each of these will help reinforce your mental image and help validate your hypotheses about where you are. If you come across objects you do not recognise you will start to suspect your model. Any inconsistencies between your model and your experience will cause tension and a search for a way of reconciling the two, either by changing your model or by re-interpreting your experience.
It is often the case that mental representations are vague and fragmentary, needing reinforcement from the environment to maintain and validate them. Even so, conceptual models create expectations which guide the interpretation of experience and tension is created when the internal representation and the external experience are out of step.
In this example, by turning out the lights, we remove a major element of external feedback from the environment. All that is left is the conceptual or mental model supported by far less informative sensory mechanisms. Because you know your house well, the mental model acts as a strong source of information to guide behaviour. Even if you are in a strange house, your knowledge about how houses are typically designed and furnished will provide considerable guidance.
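The dark-house example can be given a toy formalisation (my own sketch, not the author's): treat each room as a hypothesis, and let each landmark you bump into reweight the hypotheses according to how well your mental model of that room predicted it:

```python
# Toy Bayesian reading of the dark-house example. Rooms, landmarks and
# probabilities are all invented for illustration.
rooms = ["hall", "kitchen", "study"]
belief = {r: 1 / 3 for r in rooms}  # prior: no idea which room we are in

# Our 'mental model': how likely each landmark is in each room.
model = {
    "hall": {"stairs": 0.8, "table": 0.1, "shelf": 0.1},
    "kitchen": {"stairs": 0.05, "table": 0.8, "shelf": 0.15},
    "study": {"stairs": 0.05, "table": 0.15, "shelf": 0.8},
}

def update(belief, landmark):
    """Reconcile model and experience: reweight each hypothesis by how well
    it predicted what we just bumped into, then renormalise."""
    new = {r: belief[r] * model[r][landmark] for r in belief}
    total = sum(new.values())
    return {r: p / total for r, p in new.items()}

belief = update(belief, "table")  # bumping into a table favours the kitchen
belief = update(belief, "table")
best = max(belief, key=belief.get)
print(best, round(belief[best], 2))  # kitchen 0.95
```

The tension the text describes is the reweighting step: a landmark the current hypothesis did not predict drains probability from it, forcing either a change of model or a reinterpretation of the experience.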
Now consider an example where there is still a strong mental model that drives task performance, except it is less obvious because it does not involve the disabling of any sensory feedback from the environment.
Imagine performing the task of putting photographs in a physical album. You are driven by a view of what the finished product will look like. You may imagine the photographs organised by date, by place, or by who or what is shown in them. Alternatively, you may organise the album to tell a story, to be a random collection of pictures in no particular order, or to have all the better shots at the front and the worse ones at the back. Perhaps you have some constraints on the photos you must include or leave out. All these factors and visualisations form the conceptual model that stands behind the performance of the task. The activity of conceptual modelling is to capture this ‘mind’s eye’ view.
The mental model is not the task itself. The task of putting photographs in the album might be done in many different ways. For example, the behaviour would be quite different if the album were on a computer, involving mouse clicks and key presses rather than physical manipulation of the photographs. The task behaviour would also be different if you were instructing somebody else to put the photographs in the album for you.
The model is the internal mental representation that guides the task behaviour. It can be seen to be different from the behaviour itself, because the behaviour can be changed while keeping the model the same. If instructing somebody to put photographs in the album a particular way is not working effectively, you can take over the job yourself. You have the same image of the end product even though you achieve it in a different way.
A mental model need not necessarily be a goal. The model of the house is simply a representation that allows many different tasks to be performed and many different goals to be achieved. The goal may be to get out of the house, to get to the fuse box, or to check that somebody else in the house is safe. The same mental representation may support the achievement of many different goals.
Imagination, Envisioning and Visualisation
From the above it will be clear that although the mind can respond in an immediate and simple way to what is going on around it, for example by pulling back a hand when it touches something hot, it is also capable of sophisticated modelling of what might happen in the future. This is imagination or envisioning.
Francis Galton in 1880 published a classic paper in the journal Mind, called ‘Statistics of Mental Imagery’, in which he set out some of the main characteristics of the ‘mind’s eye’, in particular how people vary in the vividness of their mental images.
Jean-Paul Sartre in ‘The Psychology of Imagination’ distinguishes between perception, conceptual thinking and imagination.
The following playlist from the Open University looks at imagination and envisioning from perspectives from art through to neurophysiology.
Stephen Kosslyn has been researching mental imagery since the 1970s and argues that people have, and can inspect, internal mental images when performing tasks. These images form a model or representation of reality in addition to propositional representations.
YouTube Video, 12. The Imagery Debate: The Role of the Brain, MIT OpenCourseWare, August 2012, 55:11 minutes (Embedded under policy of Fair Use)
However, the psychology of imagination is somewhat out of fashion at the moment as neurological approaches come to the fore. Yet talking about the mind in terms of mentalistic concepts like imagination remains under-exploited, both as a means of understanding mental representation and as a therapeutic tool.
YouTube Video, Interview Ben Furman 2 – Imagination in modern psychology, MentalesStärken, October 2014, 6:43 minutes
One approach to understanding how we think is phenomenology (Edmund Husserl). This focuses on subjective experience. It is looking inside our own heads rather than trying to construct an objective and theoretical account. Philosophers (Heidegger, Jean-Paul Sartre, Simone de Beauvoir) and psychologists (Amedeo Giorgi) have taken this approach. The focus of phenomenology is on being, existence, consciousness, meaning, and intuition. This, in some sense, comes before the great philosophical questions like what is truth and why are we here. It is the sheer realisation that we exist at all, and it concerns fundamental ideas like the nature of the self and the relationship of self to reality – what we perceive and how we interpret it, before we start to analyse it, put linguistic labels on it or think about it in any logical sense.
BBC Radio 4, In Our Time, Phenomenology, January 2015, 43 Minutes
An idea that comes out of phenomenology is the notion of the gap between what we perceive and our reflections on our perceptions. So, we see a glass of water, but the content of our thought can be about our perception of the glass of water as well as the perception itself. We can reflect upon what we are seeing as well as simply seeing it. So much is obvious. Indeed, when I ask you to pass the glass of water I am making a reference to my perception of it and the assumption that you can perceive it too. If I ask, “where is the glass of water?”, I am making a reference to a belief that the glass of water exists for both you and me even though, at that moment, I am unable to perceive it.
The interesting idea is that the notion of freedom derives from this ability to not just perceive but to be able to reflect on the perception. This removes us from responding to the world in a purely mechanical way. Instead, there are intermediary states that we can consult when making decisions.
It turns out that the gap the phenomenologists identified between perception and reflection, the distinction the psychoanalysts drew between the id, the ego and the super-ego, the mental models developed by psychologists, what Kahneman calls system 1 and system 2 thinking, the semantic structure studied by linguists, and the higher layers of the brain (such as the cortex) identified by the neurophysiologists, are all pretty much descriptions of the same thing!
Mind the Gap
How the mind represents reality can be described at different levels from patterns of neural activity through to mentalistic concepts like imagination.
In reading the following very general and abstract account of mental processes, it is useful to think of an example, like driving a car. For an experienced driver it is almost automatic and requires little conscious thought or effort (until a child unexpectedly runs into the road). For a new driver it is a whole series of problems to be solved.
We can think of a person experiencing the world as a sensory ‘device’ attuned to monitoring our state of internal need and the gap between expectations and experience (our orientation). If all our needs are met, by default we coast along on automatic pilot, simply monitoring the environment and noting any differences from our expectations (maintaining orientation). Expectations tune our sensory inputs, and the inputs themselves activate neural pathways and may elicit, or predispose us to, certain outputs (behaviours or changes to internal states). Where we have needs, but know how to satisfy them (i.e. we have mastery), we engage appropriate solutions without effort or thought. The outputs can be behaviours that act on the world or changes to internal states (e.g. the states in our internal models). Some circumstances (either internal or external) may trigger a higher-level control mechanism to over-ride default responses. When needs are met and experience and expectation are more or less aligned, our autonomic and well-learned responses flow easily. This, in Kahneman’s terms, is relatively effort-free, automatic and more or less subconscious system 1 thinking.
Dissonance occurs when there is an unmet need or a difference between expectation and experience e.g. when there is a need to deal with something novel or some internal state is triggered to activate some higher level control mechanism (e.g. to inhibit an otherwise automatic reaction). If sufficient mental resources are available the mind is triggered to construct a propositional, linguistic or quasi-spatial/temporal representation that can then be internally inspected or consulted by the ‘mind’s eye’ in order to envisage future states and simulate the consequence of different outputs/behaviours before making a decision about the output (e.g. whether to act on the outside world or an internal state, and if so how). This is what Kahneman refers to as system 2 thinking. When we have done some system 2 thinking we sometimes go over it and consolidate it in our minds. These are the stories we construct to explain how we met a need or managed the difference between expectation and experience. The stories can then act as a shortcut to retrieving the solution in similar circumstances.
In a very simple system there is a direct mapping between input and output – flick the switch and the light comes on. In a highly complex system like the human brain, the mapping between input and output can be of extraordinary complexity. At its most complex, an input might trigger an internal state that creates an ‘on the fly’ (imaginary) model of the world, which is then used to mentally ‘test’ different possible response scenarios before deciding which response, if any, to make.
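The two ends of this spectrum can be sketched in code (a toy of my own devising, not the author's formulation): a reflex is a direct input-to-output mapping, while a model-based response 'imagines' each candidate action's outcome using an internal world model and picks the best before acting:

```python
def reflex(input_signal):
    """Direct mapping: flick the switch and the light comes on."""
    return "light_on" if input_signal == "switch_flicked" else "no_action"

def model_based_response(state, candidate_actions, world_model, value):
    """Mentally 'test' each action against an internal model of the world,
    then act on whichever imagined outcome scores best."""
    return max(candidate_actions, key=lambda a: value(world_model(state, a)))

# Toy world model: walking in the dark towards a door at position 5.
world_model = lambda pos, step: pos + step   # imagined effect of a step
value = lambda pos: -abs(5 - pos)            # closer to the door is better

print(reflex("switch_flicked"))                                  # light_on
print(model_based_response(0, [-1, 1, 2], world_model, value))   # 2
```

The second function never touches the world while deliberating; it only consults the model. That separation is what the paragraph above means by testing response scenarios before making any of them.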
As we experience the world (through learning and maturation) we adjust our expectations in line with our experience. Our brains and expectations become a progressively more refined model of our experience. When we are ‘surprised’, and recruit system 2 problem-solving thinking, we produce solutions. Solutions are outputs – either behaviours that act on the world or changes to internal states. Problem solving takes effort and resource but results in solutions that can potentially be re-used in similar circumstances in the future. This type of learning is going on at all levels of experience, from the development of sensory-motor skills like walking or driving a car through to high-level cognitive skills such as making difficult decisions and judgements in situations of uncertainty (e.g. a surgeon’s decision to operate on a life-threatening condition). System 1 and system 2 thinking are really just extremes of a spectrum. In practice, any task involves thousands of separate sub-processes, some of which are highly learned and automatic and some of which require a degree of problem solving. To an outside observer these processes often appear to mesh seamlessly together.
The learning we do and the models we construct in our minds are very dependent on our own experiences of the world (and this accounts for many of the biases in the way we think). Although our models can be influenced by other people’s stories about how the world works, e.g. through our education, peers, family, media etc. (or by observing what happens to others), the deepest learning takes place through our own direct experience, and because our experiences are all just different samples of a larger reality, we are all different from each other. Each one of us has sampled merely an infinitesimally small fraction of a vast reality but, because of the consistencies in the underlying reality (for example, we all experience the same laws of physics), there are sufficient commonalities in our models that we understand each other to a greater or lesser extent.
Need and the maintenance of orientation drive us all, and when these are under normal (but not total) control, we have wellbeing. However, we must always have some manageable gap, so that the system is at least ticking over. This is easily achieved because, as lower-level needs are satisfied, we can always move to others further up the hierarchy, and constant change in the world is usually enough to drive the maintenance of our orientation.
Radio programme links
An index of BBC Radio programmes on cognitive science can be found at:
An index of BBC Radio programmes on Mental processes can be found at:
This Blog Post: ‘Representations of Reality Enable Control’ shows how different levels of description can be used to represent the knowledge that enables us to meet our needs and deal with the unexpected.
Next Up: ‘Are we free?’ delves deeper into freewill, consciousness and moral responsibility. If we are free, then in what sense is this true?