

AI and Neuroscience Intertwined

Artificial intelligence has learnt a lot from neuroscience. It was the move away from symbolic approaches towards neural network (machine learning) approaches that led to the current surge of interest in AI. Neural networks have enabled AI systems to do humanlike things, such as object recognition and categorisation, that had eluded the symbolic approaches.

So it was with great interest that I attended Dr. Tim Kietzmann's talk at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge, UK, earlier this month (March 2019), on what artificial intelligence (AI) and neuroscience can learn from each other.

Tim is a researcher and graduate supervisor at the MRC CBU and investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

Both AI and neuroscience aim to understand information processing and decision making - neuroscience primarily through empirical studies, and AI primarily through computational modelling. The talk was symmetrical: the first half covered how neuroscience can benefit from artificial intelligence, and the second half how artificial intelligence benefits from neuroscience.

Types of AI

It is important to distinguish between 'narrow', 'general' and 'super' AI. Narrow AI is what we have now. In this context, it is the ability of a machine learning algorithm to recognise or classify particular things. This is often something visual like a cat or a face, but it could be a sound (as when an algorithm is used to identify a piece of music or in speech recognition).

General AI is akin to what people have. When or if this will happen is speculative. Ray Kurzweil, Google’s Director of Engineering, predicts 2029 as the date when an AI will pass the Turing test (i.e. a human judge will not be able to tell whether they are interacting with a person or an AI). The singularity (the point at which we multiply our effective intelligence a billion fold by merging with the intelligence we have created) he predicts will happen by about 2045. Super AIs exceed human intelligence; right now, they appear only in fiction and films.

It is impossible to predict how this will unfold. After all, you could argue that the desktop calculator, decades ago, already exceeded human capability in the very narrow domain of performing mathematical calculations. It is possible to imagine many narrow, deep skills like this becoming fully integrated within an overall control architecture capable of passing results between them. That might look quite different from human intelligence.

One Way or Another

Research in machine learning, a sub-discipline of AI, has given neuroscience researchers pattern recognition techniques that can be used to understand high-dimensional neural data. Moreover, the deep learning algorithms that have been so successful in creating a new range of applications and interest in AI offer an exciting new framework for researchers like Tim and his colleagues to advance knowledge of the computational principles at play in the brain. AI allows researchers to test different theories of brain computation and cognitive function by implementing them as models and testing their predictions. 'Today's computational neuroscience needs machine learning techniques from artificial intelligence'.
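
To make the first direction concrete, 'decoding' studies ask whether a classifier can read out a stimulus or cognitive state from high-dimensional activity patterns. The sketch below is purely illustrative - it uses simulated data and scikit-learn, not any pipeline from the talk:

```python
# A minimal decoding sketch: can a linear classifier tell two
# stimulus conditions apart from high-dimensional activity patterns?
# The data here are simulated; a real study would use fMRI voxels
# or MEG/EEG sensor values recorded on each trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 500              # e.g. trials x voxels

# Two conditions whose patterns differ by a weak signal buried in noise.
signal = rng.normal(0.0, 0.5, n_features)
X = rng.normal(0.0, 1.0, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)             # 0 = condition A, 1 = condition B
X[y == 1] += signal

# Cross-validated decoding accuracy: above chance (0.5) means the
# patterns carry information about the condition.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

In real experiments, above-chance accuracy on held-out trials is taken as evidence that the measured brain activity carries information about the stimulus or state being decoded.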

AI, in turn, benefits from neuroscience, which can inform the development of a wide variety of applications, from care robots to medical diagnosis and self-driving cars. Some principles that commonly apply in human learning (such as building on previous knowledge and unsupervised learning) are not yet well integrated into AI systems.

For example, a child can quickly learn to recognise certain types of objects, even those, such as a mythical 'Tufa', that they have never seen before. A machine learning algorithm, by contrast, would require tens of thousands of training instances to perform the same task reliably. AI systems can also easily be fooled in ways that a person never would be. Adding specially crafted 'noise' to an image of a dog can lead an AI to misclassify it as an ostrich, while a person would still see a dog and not make this sort of mistake. Having said that, children do over-generalise from exposure to a small number of instances, and so also make mistakes.
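
The dog-to-ostrich result came from perturbing a deep network's input along the gradient of its loss. As a toy illustration of the mechanism (not the original experiment, which used a deep network on real images), a simple linear classifier in high dimensions shows how a perturbation far smaller than ordinary pixel noise can flip a confident decision:

```python
# A toy adversarial perturbation in the spirit of the fast gradient
# sign method. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                                 # number of "pixels"
w = rng.normal(0.0, 1.0, n)                # linear classifier weights

def predict(img):
    return "dog" if img @ w < 0 else "ostrich"

# Construct an input the model classifies as "dog" with a clear margin.
x = rng.normal(0.0, 1.0, n)
x -= ((x @ w + 10.0) / (w @ w)) * w        # now x @ w == -10.0

# Perturb every pixel by at most 0.01, aligned with the gradient.
eps = 0.01
x_adv = x + eps * np.sign(w)

print(predict(x), "->", predict(x_adv))    # dog -> ostrich
print("max pixel change:", np.abs(x_adv - x).max())
```

The trick is dimensionality: a 0.01 change per pixel, aligned with the gradient, accumulates across thousands of pixels into a large change in the classifier's score - one reason high-dimensional image classifiers are so vulnerable.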

It could be that the column structures found in the cortex have some parallels to the multi-layered networks used in machine learning, and might inform how those networks are designed. It is also worth noting that the idea of reinforcement learning used to train artificial neural nets originally came out of behavioural psychology, in particular the work of Pavlov and Skinner. This illustrates the 'intertwined' nature of all these disciplines.
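
That lineage is easy to see in the Rescorla-Wagner rule, a classic model of Pavlovian conditioning whose error-driven update is, in generalised form, the heart of the temporal-difference methods used in modern reinforcement learning. A minimal sketch:

```python
# Rescorla-Wagner learning: expectation is updated in proportion
# to the prediction error, i.e. the difference between what
# happened and what was predicted.
alpha = 0.1      # learning rate (salience of the stimulus)
V = 0.0          # the bell initially predicts nothing

for trial in range(1, 21):
    reward = 1.0              # the bell is always followed by food
    error = reward - V        # prediction error
    V += alpha * error        # error-driven update
    if trial % 5 == 0:
        print(f"trial {trial:2d}: expectation V = {V:.3f}")
```

Over trials, V converges on the delivered reward, just as Pavlov's dogs came to expect food at the bell; temporal-difference learning extends the same error signal to sequences of states.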

The Neuroscience of Ethics

Although this was not covered in the talk, when it comes to ethics, neuroscience may have much to offer AI, especially as we move from narrow AI towards artificial general intelligence (AGI) and beyond. Evidence is growing as to how brain structures, such as the prefrontal cortex, are involved in inhibiting thought and action. Certain drugs affect neuronal transmission and can disrupt these inhibitory signals. Brain lesions and the effects of strokes can also interfere with moral judgements. The relationship of neurological mechanisms to notions of criminal responsibility may also reveal findings relevant to AI. It seems likely that one day the understanding of the relationship between neuroscience, moral reasoning and the high-level control of behaviour will have an impact on the design of, and architectures for, artificial autonomous intelligent systems (see, for example, Neil Levy, Neuroethics: Challenges for the 21st Century, Cambridge University Press, 2007, or A Neuro-Philosophy of Human Nature: Emotional Amoral Egoism and the Five Motivators of Humankind, April 2019).

Understanding the Brain

The reality of the comparison between human and artificial intelligence comes home when you consider the energy requirements of the human brain and of computer processors performing similar tasks. While the brain runs on about 15 watts, a single graphics processing unit can require up to 250 watts.

It has often been said that you cannot understand something until you can build it. That provides a benchmark against which we can measure our understanding of neuroscience. Building machines that perform as well as humans is a necessary step in that understanding, although that still does not imply that the mechanisms are the same.

Read more on this subject in an article from Stanford University. Find out more about Tim's work on his website at http://www.timkietzmann.de or follow him on Twitter (@TimKietzmann).

Tim Kietzmann


How do we embed ethical self-regulation into artificially intelligent systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us when they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies, including testing hypotheses, reasoning, intuition and emotion, and ignoring or denying.

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the human operating system. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
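
Purely as an illustration of the kind of control loop this paragraph gestures at (none of the names or structures below come from the HOS itself; they are invented for the sketch), the ingredients might be arranged as follows:

```python
# A highly simplified, hypothetical skeleton of an agent that keeps
# explicit expectations and intentions, notices anomalies between
# expectation and observation, and re-prioritises its goals in response.
from dataclasses import dataclass, field

@dataclass
class HOSAgent:
    expectations: dict = field(default_factory=dict)   # what the agent predicts
    intentions: list = field(default_factory=list)     # prioritised goals

    def perceive(self, observation: dict) -> dict:
        """Compare observations with expectations; return the anomalies."""
        return {k: v for k, v in observation.items()
                if self.expectations.get(k) != v}

    def update_intentions(self, anomalies: dict) -> None:
        """Violated expectations pre-empt the current goal stack."""
        for key in anomalies:
            self.intentions.insert(0, f"resolve:{key}")

    def act(self) -> str:
        """Pursue the highest-priority intention, if any."""
        return self.intentions.pop(0) if self.intentions else "idle"

agent = HOSAgent(expectations={"door": "closed"}, intentions=["make tea"])
anomalies = agent.perceive({"door": "open"})
agent.update_intentions(anomalies)
print(agent.act())   # -> resolve:door  (the anomaly pre-empts the tea)
```

Even this caricature makes the gap visible: everything a machine learning model would normally learn implicitly - expectations, priorities, what counts as an anomaly - here has to be represented and managed explicitly.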
