Artificial intelligence has learnt a lot from neuroscience. It was the move away from symbolic to neural net (machine learning) approaches that led to the current surge of interest in AI. Neural net approaches have enabled AI systems to do humanlike things such as object recognition and categorisation that had eluded the symbolic approaches.
So it was with great interest that I attended Dr. Tim Kietzmann's talk at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge, UK, earlier this month (March 2019), on what artificial intelligence (AI) and neuroscience can learn from each other.
Tim is a researcher and graduate supervisor at the MRC CBU and investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.
Both AI and neuroscience aim to understand information processing and decision making - neuroscience primarily through empirical studies and AI primarily through computational modelling. The talk had symmetry. The first half was 'how can neuroscience benefit from artificial intelligence', and the second half was 'how artificial intelligence benefits from neuroscience'.
Types of AI
It is important to distinguish between 'narrow', 'general' and 'super' AI. Narrow AI is what we have now. In this context, it is the ability of a machine learning algorithm to recognise or classify particular things. This is often something visual like a cat or a face, but it could be a sound (as when an algorithm is used to identify a piece of music or in speech recognition).
General AI is akin to what people have. When or if this will happen is speculative. Ray Kurzweil, Google’s Director of Engineering, predicts 2029 as the date when an AI will pass the Turing test (i.e. a human will not be able to tell the difference between a person and an AI when performing tasks). The singularity (the point when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created), he predicts should happen by about 2045. Super AIs exceed human intelligence. Right now, they only appear in fiction and films.
It is impossible to predict how this will unfold. After all, you could argue that the desktop calculator several decades ago exceeded human capability in the very narrow domain of performing mathematical calculations. It is possible to imagine many very narrow and deep skills like this becoming fully integrated within an overall control architecture capable of passing results between them. That might look quite different from human intelligence.
One Way or Another
Research in machine learning, a sub-discipline of AI, has given neuroscience researchers pattern recognition techniques that can be used to understand high-dimensional neural data. Moreover, the deep learning algorithms that have been so successful in creating a new range of applications and interest in AI offer an exciting new framework for researchers like Tim and his colleagues to advance knowledge of the computational principles at play in the brain. AI allows researchers to test different theories of brain computation and cognitive function by implementing and testing them. 'Today's computational neuroscience needs machine learning techniques from artificial intelligence'.
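As a toy illustration of what pattern recognition on high-dimensional neural data can mean, the sketch below (not from the talk; the data and the decoder are entirely simulated) trains a simple nearest-centroid decoder to tell two experimental conditions apart from noisy 'voxel' patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "voxel" patterns: 40 trials x 50 voxels per condition,
# each condition having a distinct mean activation pattern plus noise.
n_trials, n_voxels = 40, 50
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = rng.normal(0.0, 1.0, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 1.5, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 1.5, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Split trials into training and test halves.
idx = rng.permutation(len(y))
train, test = idx[:40], idx[40:]

# Nearest-centroid decoder: classify each test trial by which
# condition's mean training pattern it lies closer to.
c0 = X[train][y[train] == 0].mean(axis=0)
c1 = X[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(X[test] - c0, axis=1)
d1 = np.linalg.norm(X[test] - c1, axis=1)
pred = (d1 < d0).astype(int)

accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Real decoding analyses use the same logic with recorded fMRI or MEG data and cross-validation in place of a single split.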
AI benefits from neuroscience by informing the development of a wide variety of AI applications from care robots to medical diagnosis and self-driving cars. Some principles that commonly apply in human learning (such as building on previous knowledge and unsupervised learning) are not yet integrated into AI systems.
For example, a child can quickly learn to recognise certain types of objects, even those such as a mythical 'Tufa' that they have never seen before. A machine learning algorithm, by contrast, would require tens of thousands of training instances to perform the same task reliably. AI systems can also be fooled in ways that a person never would be. Adding specially crafted 'noise' to an image of a dog can lead an AI to misclassify it as an ostrich; a person would still see a dog and would not make this sort of mistake. Having said that, children do over-generalise from exposure to a small number of instances, and so also make mistakes.
It could be that the column structures found in the cortex have some parallels to the multi-layered networks used in machine learning and might inform how they are designed. It is also worth noting that the idea of reinforcement learning used to train artificial neural nets, originally came out of behavioural psychology - in particular Pavlov and Skinner. This illustrates the 'intertwined' nature of all these disciplines.
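The Pavlovian side of this lineage can be captured in a few lines. Below is a minimal Rescorla-Wagner-style delta rule (the numbers are illustrative): the associative strength of a conditioned stimulus is repeatedly nudged toward the reward it predicts, the same prediction-error idea that reappears in reinforcement learning for artificial neural nets:

```python
# Rescorla-Wagner-style delta rule: the associative strength V of a
# stimulus is nudged toward the reward it predicts, scaled by a
# learning rate. Temporal-difference methods in machine learning
# generalise exactly this prediction-error signal.
alpha = 0.2    # learning rate
reward = 1.0   # the "unconditioned stimulus" (food)
V = 0.0        # initial associative strength of the bell

for trial in range(20):
    prediction_error = reward - V
    V += alpha * prediction_error

print(f"associative strength after 20 pairings: {V:.3f}")  # approaches 1.0
```

The learning curve this produces is the familiar negatively accelerated acquisition curve of classical conditioning.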
The Neuroscience of Ethics
Although this was not covered in the talk, when it comes to ethics, neuroscience may have much to offer AI, especially as we move from narrow AI into artificial general intelligence (AGI) and beyond. Evidence is growing as to how brain structures, such as the pre-frontal cortex, are involved in inhibiting thought and action. Certain drugs affect neuronal transmission and can disrupt these inhibitory signals. Brain lesions and the effects of strokes can also interfere with moral judgements. The relationship of neurological mechanisms to notions of criminal responsibility may also reveal findings relevant to AI. It seems likely that one day the understanding of the relationship between neuroscience, moral reasoning and the high-level control of behaviours will have an impact on the design of, and architectures for, artificial autonomous intelligent systems (see Neuroethics: Challenges for the 21st Century, Neil Levy, Cambridge University Press, 2007, or A Neuro-Philosophy of Human Nature: Emotional Amoral Egoism and the Five Motivators of Humankind, April 2019).
Understanding the Brain
The reality of the comparison between human and artificial intelligence comes home when you consider the power requirements of the human brain and of computer processors performing similar tasks. While the brain runs on about 15 watts, just a single graphics processing unit requires up to 250 watts.
It has often been said that you cannot understand something until you can build it. That provides a benchmark against which we can measure our understanding of neuroscience. Building machines that perform as well as humans is a necessary step in that understanding, although that still does not imply that the mechanisms are the same.
Part 1 looked at language and thought, mental models and computational approaches to how the mind represents what it knows about the world (and itself). Part 2 contrasts thinking in words with thinking in pictures, looking first at how evidence from brain studies inform the debate, and then concludes how all these approaches – linguistic, psychological, computational, neurophysiological and phenomenological are addressing much the same set of phenomena from different perspectives. Can freedom be defined in terms of our ability to reflect on our own perceptions and thoughts?
The Flexibility of Thought
Although we often seek order, certainty and clarity, and think that the world can be put in neat conceptual boxes, nothing could be further from the truth. Our thoughts and our language are full of ambiguity, flexibility and room for interpretation. And this is of great benefit. Just like a building or a bridge that cannot flex will be brittle and break, our thinking (and our social interaction) is made less vulnerable and more robust by the flexibility of language and thought.
Wittgenstein realised that categories do not really exist in any absolute sense. A particular concept, such as ‘furniture’, does not have necessary and sufficient defining features so that we can say definitively that any one object, say a piano or a picture, is furniture or not. Rather, pieces of furniture have a ‘family resemblance’ that makes them similar, but without any hard boundaries on what is inside or outside the category. Steven Pinker describes a man who was unable to categorise but was nevertheless capable of amazing feats of memory.
YouTube Video, Professor Steven Pinker – Concepts & Reasoning, NCHumanities, First published October 2014, 1:10:40 hours
Pinker also considers reasoning – both deductive and inductive. Deductive reasoning is where a conclusion necessarily follows from a set of premises or assumptions – all men are mortal and Socrates is a man, leads inevitably to the conclusion that Socrates is mortal. Inductive reasoning is where we generalise from the particular – so we encounter five white swans and this leads us to the generalisation that ‘all swans are white’ even though this may not necessarily follow. He concludes that people can do deductive reasoning so long as they are dealing with concrete and familiar content, but easily go awry when the content is abstract. As for inductive reasoning, people are generally not very good, and thinking is subject to all manner of biases (as described by Kahneman).
Representation of Concepts in the Brain
Since technology has become available to scan brain activity, there has been a spate of studies that look at what is happening in the brain as people perform various mental tasks.
TED Video, Nancy Kanwisher: A neural portrait of the human mind, TED, March 2014, 17:40 minutes
Control Systems in the Brain
As well as looking at individual functional components it is possible to identify some of the gross anatomical parts of the brain with different forms of control.
- Cerebrum – Control mediated through conscious abstract thought and reflection
- Cerebellum – Learned control and un/sub-conscious processes
- Brain stem – Innate level control
These ideas and a more fully elaborated nine-level brain architecture can be found in a free downloadable ebook available from:
For more on the imaging techniques see:
YouTube Video, Magnetic Resonance Imaging Explained, ominhs, October 2011, 5:30 minutes
If you want to find out more about magnetic imaging techniques then there are several videos in the following Youtube playlist:
Using functional magnetic resonance imaging (fMRI) techniques on people as they look at pictures of different objects (faces, road signs etc.) reveals not only something about object recognition in the brain’s visual system but also says something about how we may form categories and concepts. Interestingly, it appears to validate the more armchair philosophical speculations about the ‘fuzziness’ of concepts (e.g. Wittgenstein’s notion of ‘family resemblance’). For example, in his research, Nikolaus Kriegeskorte investigates patterns of neural activity in humans and monkeys. The neural activity suggests conceptual clusters such as animate ‘bodies’ (e.g. a human or animal body) and inanimate objects, despite visual similarities between the members in each group. If we consider the complexity of these patterns of activity and the way in which the patterns overlap, it is possible to see how concepts can, at the same time, be both ‘fuzzy’ (i.e. have no necessary and defining features) and yet distinct (i.e. given separate linguistic labels such as animate or inanimate).
TSN Video, Representational similarity analysis of inferior temporal object population codes – Nikolaus Kriegeskorte, The Science Network, August 2010, 23:11 minutes
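The core of Kriegeskorte's representational similarity analysis can be sketched with simulated data (the patterns below are invented for illustration): compute a dissimilarity matrix over pairwise pattern correlations and look for the animate/inanimate clustering:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated response patterns (conditions x measurement channels) for
# four stimuli: two "animate" and two "inanimate". The animate stimuli
# share a pattern component, so they should cluster together.
animate_axis = rng.normal(0, 1, 30)
patterns = np.vstack([
    animate_axis + rng.normal(0, 0.5, 30),   # human body
    animate_axis + rng.normal(0, 0.5, 30),   # animal body
    -animate_axis + rng.normal(0, 0.5, 30),  # tool
    -animate_axis + rng.normal(0, 0.5, 30),  # chair
])

# Representational dissimilarity matrix (RDM): 1 - correlation between
# the response patterns for every pair of conditions.
rdm = 1.0 - np.corrcoef(patterns)

labels = ["human", "animal", "tool", "chair"]
for i, row in enumerate(rdm):
    print(labels[i], np.round(row, 2))

# Within-category dissimilarity should be much smaller than between.
within = (rdm[0, 1] + rdm[2, 3]) / 2
between = rdm[:2, 2:].mean()
print(f"within {within:.2f} vs between {between:.2f}")
```

In the real method, an RDM computed from brain recordings is compared against RDMs from computational models or from other species, which is what makes it such a useful bridge between disciplines.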
In fact, brain and cognitive scientists have made considerable progress in bridging between our understanding of brain activity and more symbolic representation in language.
TSN Video, Emergence of Semantic Structure from Experience – James McClelland, The Science Network, August 2010, 1:16 hours
The eventual direction of this type of work will be to integrate what we know about the brain into a simulation of how it works.
Goals, Tasks and Mental Representation
Whilst both language and patterns of neural activity can be considered forms of mental representation, somehow neither really captures the level of representation that we intuitively feel underlies the performance of tasks and the ‘navigation’ towards goals.
When people perform tasks they have a model in their mind that guides their behaviour. To illustrate this, imagine going from one room to another in your house at night with the lights turned off. In your mind’s eye you have a mental map of the layout of the house and you use this to help guide you.
As you stumble about in the dark you will come across walls, pictures, doorways, stairways, shelves, tables and so on. Each of these will help reinforce your mental image and help validate your hypotheses about where you are. If you come across objects you do not recognise you will start to suspect your model. Any inconsistencies between your model and your experience will cause tension and a search for a way of reconciling the two, either by changing your model or by re-interpreting your experience.
It is often the case that mental representations are vague and fragmentary, needing reinforcement from the environment to maintain and validate them. Even so, conceptual models create expectations which guide the interpretation of experience and tension is created when the internal representation and the external experience are out of step.
In this example, by turning out the lights, we remove a major element of external feedback from the environment. All that is left is the conceptual or mental model supported by far less informative sensory mechanisms. Because you know your house well, the mental model acts as a strong source of information to guide behaviour. Even if you are in a strange house, your knowledge about how houses are typically designed and furnished will provide considerable guidance.
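One minimal way to formalise this picture is Bayesian belief updating. In the sketch below (the rooms, landmarks and probabilities are all invented for illustration), the mental map acts as a prior over locations, and each object we bump into in the dark re-weights our belief about where we are:

```python
# "Navigating in the dark" as Bayesian belief updating: the mental map
# assigns landmarks to rooms, and each landmark we bump into re-weights
# our belief about which room we are in.
mental_map = {
    "hall":    {"stairs", "picture"},
    "kitchen": {"table", "shelves"},
    "lounge":  {"table", "picture"},
}

# Start maximally uncertain: equal belief in every room.
belief = {room: 1 / len(mental_map) for room in mental_map}

def update(belief, bumped_into, p_hit=0.9, p_miss=0.1):
    """Re-weight belief towards rooms containing the landmark we felt."""
    new = {}
    for room, landmarks in mental_map.items():
        likelihood = p_hit if bumped_into in landmarks else p_miss
        new[room] = belief[room] * likelihood
    total = sum(new.values())
    return {room: p / total for room, p in new.items()}

belief = update(belief, "table")    # kitchen and lounge now more likely
belief = update(belief, "picture")  # lounge becomes the best explanation
print(max(belief, key=belief.get))  # -> lounge
```

The model behaves just as described in the text: each new piece of evidence either reinforces the current hypothesis or creates tension that shifts belief to a better explanation.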
Now consider an example where there is still a strong mental model that drives task performance, except it is less obvious because it does not involve the disabling of any sensory feedback from the environment.
Imagine performing the task of putting photographs in a physical album. You are driven by a view of what the finished product will look like. You may imagine the photographs organised by date, by place, or by who or what is shown in them. Alternatively, you may organise the album to tell a story, to be a random collection of pictures in no particular order, or to have all the better shots at the front and the worse ones at the back. Perhaps you have some constraints on the photos you must include or leave out. All these factors and visualisations form the conceptual model that stands behind the performance of the task. The activity of conceptual modelling is to capture this ‘mind’s eye’ view.
The mental model is not the task itself. The task of putting photographs in the album might be done in many different ways. For example, the behaviour would be quite different if the album were on a computer, involving mouse clicks and key presses rather than physical manipulation of the photographs. The task behaviour would also be different if you were instructing somebody else to put the photographs in the album for you.
The model is the internal mental representation that guides the task behaviour. It can be seen to be different from the behaviour itself, because the behaviour can be changed while keeping the model the same. If instructing somebody to put photographs in the album a particular way is not working effectively, you can take over the job yourself. You have the same image of the end product even though you achieve it in a different way.
A mental model need not necessarily be a goal. The model of the house was simply a representation that allows many different tasks to be performed and many different goals to be achieved. The goal may be to get out of the house, to get to the fuse box, or to check that somebody else in the house is safe. The same mental representation may support the achievement of many different goals.
Imagination, Envisioning and Visualisation
From the above it will be clear that although the mind can respond in an immediate and simple way to what is going on around it, for example by pulling back a hand when it touches something hot, it is also capable of sophisticated modelling of what might happen in the future. This is imagination or envisioning.
Francis Galton published a classic paper in the journal Mind in 1880, called ‘Statistics of Mental Imagery’, in which he set out some of the main characteristics of the ‘mind’s eye’, in particular how people vary in the vividness of their mental images.
Jean-Paul Sartre, in ‘The Psychology of Imagination’, distinguishes between perception, conceptual thinking and imagination.
The following playlist from the Open University looks at imagination and envisioning from perspectives from art through to neurophysiology.
Stephen Kosslyn has been researching mental imagery since the 1970s and argues that people have, and can inspect, internal mental images when performing tasks. These images form a model or representation of reality in addition to propositional representations.
Youtube Video, 12. The Imagery Debate: The Role of the Brain, MIT OpenCourseWare, August 2012, 55:11 minutes (Embedded under policy of Fair Use)
However, the psychology of imagination is somewhat out of fashion at the moment as neurological approaches come to the fore. Yet talking about the mind in terms of mentalistic concepts like imagination remains under-exploited, both as a means of understanding mental representation and as a therapeutic tool.
Youtube Video, Interview Ben Furman 2 – Imagination in modern psychology, MentalesStärken, October 2014, 6:43 minutes
One approach to understanding how we think is phenomenology (Edmund Husserl). This focuses on subjective experience. It is looking inside our own heads rather than trying to construct an objective and theoretical account. Philosophers (Heidegger, Jean-Paul Sartre, Simone de Beauvoir) and psychologists (Amedeo Giorgi) have taken this approach. The focus of phenomenology is on being, existence, consciousness, meaning, and intuition. This, in some sense, comes before the great philosophical questions like what is truth and why are we here. It is the sheer realisation that we exist at all and concerns fundamental ideas like the nature of the self and the relationship of self to reality – what we perceive and how we interpret it, before we start to analyse it, put linguistic labels on it or think about it in any logical sense.
BBC Radio 4, In Our Time, Phenomenology, January 2015, 43 Minutes
An idea that comes out of phenomenology is the notion of a gap between what we perceive and our reflections on our perceptions. So, we see a glass of water, but the content of our thought can be about our perception of the glass of water as well as the perception itself. We can reflect upon what we are seeing, as well as simply seeing it. So much is obvious. Indeed, when I ask you to pass the glass of water I am making a reference to my perception of it and the assumption that you can perceive it too. If I ask, “where is the glass of water?”, I am making a reference to a belief that the glass of water exists for both you and me, even though I cannot currently perceive it.
The interesting idea is that the notion of freedom derives from this ability to not just perceive but to be able to reflect on the perception. This removes us from responding to the world in a purely mechanical way. Instead, there are intermediary states that we can consult when making decisions.
It turns out that the gap the phenomenologists identified between perception and reflection, the distinction the psychoanalysts drew between the id, the ego and the super-ego, the notion of mental models developed by psychologists, what Kahneman calls system 1 and system 2 thinking, what linguists think of in terms of semantic structure, and what the neurophysiologists associate with higher structures of the brain such as the cortex, are all pretty much the same thing!
Mind the Gap
How the mind represents reality can be described at different levels from patterns of neural activity through to mentalistic concepts like imagination.
In reading the following very general and abstract account of mental processes, it is useful to think of an example, like driving a car. For an experienced driver it is almost automatic and requires little conscious thought or effort (until a child unexpectedly runs into the road). For a new driver it is a whole series of problems to be solved.
We can think of a person experiencing the world as a sensory ‘device’ attuned to monitoring our state of internal need and the gap between expectations and experience (our orientation). If all our needs are met, by default we coast along on automatic pilot, simply monitoring the environment and noting any differences with our expectations (maintaining orientation). Expectations tune our sensory inputs, and the inputs themselves activate neural pathways and may elicit or pre-dispose us to certain outputs (behaviours or changes to internal states). Where we have needs, but know how to satisfy them (i.e. we have mastery), we engage appropriate solutions without effort or thought. The outputs can be behaviours that act on the world or changes to internal states (e.g. the states in our internal models). Some circumstances (either internal or external) may trigger a higher-level control mechanism to override default responses. When needs are met and experience and expectation are more or less aligned, our autonomic and well-learned responses flow easily. This, in Kahneman’s terms, is relatively effort-free, automatic and more or less subconscious system 1 thinking.
Dissonance occurs when there is an unmet need or a difference between expectation and experience e.g. when there is a need to deal with something novel or some internal state is triggered to activate some higher level control mechanism (e.g. to inhibit an otherwise automatic reaction). If sufficient mental resources are available the mind is triggered to construct a propositional, linguistic or quasi-spatial/temporal representation that can then be internally inspected or consulted by the ‘mind’s eye’ in order to envisage future states and simulate the consequence of different outputs/behaviours before making a decision about the output (e.g. whether to act on the outside world or an internal state, and if so how). This is what Kahneman refers to as system 2 thinking. When we have done some system 2 thinking we sometimes go over it and consolidate it in our minds. These are the stories we construct to explain how we met a need or managed the difference between expectation and experience. The stories can then act as a shortcut to retrieving the solution in similar circumstances.
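The system 1/system 2 split, and the consolidation of solutions into reusable 'stories', can be caricatured as a response cache sitting in front of a slow problem solver (all the names and situations below are illustrative):

```python
# A cartoon of the system 1 / system 2 split described above: familiar
# situations are handled by a cache of automatic responses; surprises
# trigger effortful "deliberation", whose solution is then stored so
# the same situation is handled automatically next time.
automatic_responses = {"green light": "drive on", "red light": "stop"}

def deliberate(situation):
    """Stand-in for slow, effortful system 2 problem solving."""
    return f"considered response to {situation!r}"

def respond(situation):
    if situation in automatic_responses:       # system 1: fast, cached
        return automatic_responses[situation], "system 1"
    solution = deliberate(situation)           # system 2: slow, effortful
    automatic_responses[situation] = solution  # consolidate the "story"
    return solution, "system 2"

print(respond("red light"))             # ('stop', 'system 1')
print(respond("child runs into road"))  # novel, so handled by system 2
print(respond("child runs into road"))  # now cached: system 1
```

The cache is, of course, a gross simplification: in the text's terms the two systems are extremes of a spectrum, with thousands of sub-processes falling somewhere in between.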
In a very simple system there is a direct mapping between input and output – flick the switch and the light comes on. In a highly complex system like the human brain the mapping between input and output can be of extra-ordinary complexity. At its more complex, an input might trigger an internal state that creates an ‘on the fly’ (imaginary) model of the world which is then used to mentally ‘test’ different possible response scenarios before deciding which response, if any, to make.
As we experience the world (through learning and maturation) we adjust our expectations in line with our experience. Our brains and expectations become a progressively more refined model of our experience. When we are ‘surprised’, and recruit system 2 problem-solving thinking, we produce solutions. Solutions are outputs – either behaviours that act on the world or changes to internal states. Problem solving takes effort and resource but results in solutions that can potentially be re-used in similar circumstances in the future. This type of learning is going on at all levels of experience, from the development of sensory-motor skills like walking or driving a car through to high-level cognitive skills such as making difficult decisions and judgements in situations of uncertainty (e.g. a surgeon’s decision to operate on a life-threatening condition). System 1 and system 2 thinking are really just extremes of a spectrum. In practice, any task involves thousands of separate sub-processes, some of which are highly learned and automatic and some of which require a degree of problem solving. To an outside observer these processes often appear to mesh seamlessly together.
The learning we do and the models we construct in our minds are very dependent on our own experiences of the world (and this accounts for many of the biases in the way we think). Although our models can be influenced by other people’s stories about how the world works, e.g. through our education, peers, family, media etc. (or by observing what happens to others), the deepest learning takes place through our own direct experience, and because our experiences are all just different samples of a larger reality, we are all different from each other. Each one of us has merely sampled an infinitesimally small fraction of the whole of reality, but because of the consistencies in the underlying reality (for example, we all experience the same laws of physics) there are sufficient commonalities in our models that we understand each other to a greater or lesser extent.
Need and maintaining orientation drive us all, and when under normal (but not total) control, we have wellbeing. However, we must always have some manageable gap, so that the system is at least ticking over. This is easily achieved because as lower level needs are satisfied we can always move to others further up the hierarchy, and constant change in the world is usually enough to drive the maintenance of our orientation.
Radio programme links
An index of BBC Radio programmes on cognitive science can be found at:
An index of BBC Radio programmes on Mental processes can be found at:
This Blog Post: ‘Representations of Reality Enable Control’ shows how different levels of description can be used to represent the knowledge that enables us to meet our needs and deal with the unexpected.
Next Up: ‘Are we free?’ delves deeper into freewill, consciousness and moral responsibility. If we are free, then in what sense is this true?