Tag Archives: machine learning

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium has developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the user, control.

The Digital Person
This turns the tables on organisations like Facebook and Google, who have given users little choice about the rights over their own data, or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that very much aligns with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job in teasing out the various issues and finding ways of putting the user back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately I don’t have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers’ slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– AI and Neuroscience Intertwined

Artificial intelligence has learnt a lot from neuroscience. It was the move away from symbolic to neural net (machine learning) approaches that led to the current surge of interest in AI. Neural net approaches have enabled AI systems to do humanlike things such as object recognition and categorisation that had eluded the symbolic approaches.

So it was with great interest that I attended Dr. Tim Kietzmann's talk at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge, UK, earlier this month (March 2019), on what artificial intelligence (AI) and neuroscience can learn from each other.

Tim is a researcher and graduate supervisor at the MRC CBU and investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

Both AI and neuroscience aim to understand information processing and decision making - neuroscience primarily through empirical studies and AI primarily through computational modelling. The talk was symmetrical: the first half addressed how neuroscience can benefit from artificial intelligence, and the second half how artificial intelligence can benefit from neuroscience.

Types of AI

It is important to distinguish between 'narrow', 'general' and 'super' AI. Narrow AI is what we have now. In this context, it is the ability of a machine learning algorithm to recognise or classify particular things. This is often something visual like a cat or a face, but it could be a sound (as when an algorithm is used to identify a piece of music or in speech recognition).

General AI is akin to what people have. When or if this will happen is speculative. Ray Kurzweil, Google’s Director of Engineering, predicts 2029 as the date when an AI will pass the Turing test (i.e. a human will not be able to tell the difference between a person and an AI when performing tasks). The singularity (the point when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created), he predicts should happen by about 2045. Super AIs exceed human intelligence. Right now, they only appear in fiction and films.

It is impossible to predict how this will unfold. After all, you could argue that the desktop calculator exceeded human capability several decades ago in the very narrow domain of performing mathematical calculations. It is possible to imagine many very narrow and deep skills like this becoming fully integrated within an overall control architecture capable of passing results between them. That might look quite different from human intelligence.

One Way or Another

Research in machine learning, a sub-discipline of AI, has given neuroscience researchers pattern recognition techniques that can be used to understand high-dimensional neural data. Moreover, the deep learning algorithms that have been so successful in creating a new range of applications and interest in AI offer an exciting new framework for researchers like Tim and his colleagues to advance knowledge of the computational principles at play in the brain. AI allows researchers to test different theories of brain computation and cognitive function by implementing and running them. 'Today's computational neuroscience needs machine learning techniques from artificial intelligence'.
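For readers curious what such a 'decoding' analysis looks like in practice, here is a minimal sketch (my own illustration, not from the talk): a cross-validated linear classifier is trained to predict the stimulus category from high-dimensional recordings. The data are simulated and all the numbers (trial and channel counts, category labels) are made up.

```python
# A minimal, hypothetical sketch of neural 'decoding' with scikit-learn:
# predict the stimulus category from high-dimensional recordings (simulated here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels = 200, 306          # e.g. MEG sensors (made-up numbers)
labels = rng.integers(0, 2, n_trials)    # stimulus category per trial (0 = face, 1 = house)

# Simulated sensor data: noise plus a weak category-dependent signal.
signal = np.outer(labels, rng.normal(0, 0.5, n_channels))
data = rng.normal(0, 1.0, (n_trials, n_channels)) + signal

# Cross-validated decoding accuracy: above-chance performance implies the
# recorded channels carry information about the stimulus category.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, data, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```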

AI, in turn, benefits from neuroscience, which can inform the development of a wide variety of AI applications from care robots to medical diagnosis and self-driving cars. Some principles that commonly apply in human learning (such as building on previous knowledge and unsupervised learning) are not yet integrated into AI systems.

For example, a child can quickly learn to recognise certain types of objects, even those, such as a mythical 'Tufa', that they have never seen before. A machine learning algorithm, by contrast, would require tens of thousands of training instances to perform the same task reliably. AI systems can also be fooled in ways that a person never would be. Adding specially crafted 'noise' to an image of a dog can lead an AI to misclassify it as an ostrich; a person would still see a dog and would not make this sort of mistake. Having said that, children do over-generalise from exposure to a small number of instances, and so also make mistakes.
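The dog-to-ostrich result comes from the adversarial examples literature. A rough sketch of the gradient-based attack behind it (often called the fast gradient sign method) might look like the following; the network and image here are random stand-ins rather than the trained model from the published work, with a trained classifier the perturbed image is typically misclassified even though it looks unchanged to a person.

```python
# A hypothetical sketch of a fast-gradient-sign style adversarial perturbation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier and input; a real attack would use a trained network and a photo.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32, requires_grad=True)
true_label = torch.tensor([3])           # pretend class index for 'dog'

# Gradient of the classification loss with respect to the image pixels.
loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

# Step each pixel slightly in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).detach()

print("original prediction:   ", model(image).argmax().item())
print("adversarial prediction:", model(adversarial).argmax().item())
```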

It could be that the column structures found in the cortex have some parallels to the multi-layered networks used in machine learning and might inform how they are designed. It is also worth noting that the idea of reinforcement learning used to train artificial neural nets originally came out of behavioural psychology, in particular the work of Pavlov and Skinner. This illustrates the 'intertwined' nature of all these disciplines.
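As a toy illustration of that link (my own sketch, not something presented in the talk), the Rescorla-Wagner / temporal-difference style update below nudges a learned value towards the received reward in proportion to the prediction error, the same error-driven idea that underlies reinforcement learning in AI.

```python
# A toy prediction-error learning rule linking conditioning to reinforcement learning.
value = 0.0          # learned association between cue and reward
alpha = 0.1          # learning rate
reward = 1.0         # reward delivered after the cue

for trial in range(30):
    prediction_error = reward - value      # 'surprise' on this trial
    value += alpha * prediction_error      # update the estimate towards the reward
    print(f"trial {trial + 1:2d}: value = {value:.3f}")
```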

The Neuroscience of Ethics

Although this was not covered in the talk, when it comes to ethics, neuroscience may have much to offer AI, especially as we move from narrow AI into artificial general intelligence (AGI) and beyond. Evidence is growing as to how brain structures, such as the prefrontal cortex, are involved in inhibiting thought and action. Certain drugs affect neuronal transmission and can disrupt these inhibitory signals. Brain lesions and the effects of strokes can also interfere with moral judgements. The relationship of neurological mechanisms to notions of criminal responsibility may also reveal findings relevant to AI. It seems likely that one day the understanding of the relationship between neuroscience, moral reasoning and the high-level control of behaviours will have an impact on the design of, and architectures for, artificial autonomous intelligent systems (see, for example, Neuroethics: Challenges for the 21st Century, Neil Levy, Cambridge University Press, 2007, or A Neuro-Philosophy of Human Nature: Emotional Amoral Egoism and the Five Motivators of Humankind, April 2019).

Understanding the Brain

The reality of the comparison between human and artificial intelligence comes home when you consider the energy requirements of the human brain and of computer processors performing similar tasks. While the brain runs on about 15 watts, a single graphics processing unit can require up to 250 watts.

It has often been said that you cannot understand something until you can build it. That provides a benchmark against which we can measure our understanding of neuroscience. Building machines that perform as well as humans is a necessary step in that understanding, although that still does not imply that the mechanisms are the same.

Read more on this subject in an article from Stanford University. Find out more about Tim's work on his website at http://www.timkietzmann.de or follow him on Twitter (@TimKietzmann).

Tim Kietzmann

– Making Algorithms Trustworthy

Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live. It’s kind of important that we can trust them!

David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival he was talking (21st of March 2019) on the subject of making algorithms trustworthy.

I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.



Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.

But more importantly, they can affect life and death situations. The results of an algorithmic assessment of what disease you might have could be highly influential, affecting your treatment, your well-being and your future behaviour.

David is a fan of Onora O’Neill who suggests that organisations should not be aiming to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.


The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes the use of randomised controlled trials, and long-term surveillance after a drug comes onto the market to spot rare side-effects.

The same sorts of procedures should be applied to algorithms. However, currently only the first phase of testing on new data is common. Sometimes algorithms are tested against the decisions that human experts make. Randomised controlled trials are rarely conducted, and algorithms in use are rarely subjected to long-term monitoring.


As an aside, David entertained us by reporting on how the machine learning community has become obsessed with training algorithms to predict who did or did not survive the Titanic. Unsurprisingly, being a woman or a child helped a lot. David used this example to present a statistically derived decision tree. The point he was making was that the decision tree could (at least sometimes) be used as an explanation, whereas machine learning algorithms are generally black boxes (i.e. you can't inspect the algorithm itself).
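For the curious, a sketch of that Titanic exercise (not David's actual analysis) might look like the following. The CSV file name is an assumption, and the column names follow the commonly used public version of the dataset; the printed tree is exactly the kind of human-readable rule set that can serve as an explanation.

```python
# A hedged sketch: fit a small decision tree to Titanic survival data so that
# the learned rules ('is the passenger female?', 'is the passenger a child?')
# can be read off directly.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("titanic.csv")          # assumed local copy of the public Titanic dataset

X = df[["Pclass", "Sex", "Age"]].copy()
X["Sex"] = X["Sex"].map({"male": 0, "female": 1})
X["Age"] = X["Age"].fillna(X["Age"].median())
y = df["Survived"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["Pclass", "Sex", "Age"]))   # the tree is the explanation
```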

Algorithms should be transparent. They should be able to explain their decisions as well as provide them. But transparency is not enough. O’Neill uses the term 'intelligent openness' to describe what is required. Explanations need to be accessible, intelligible, usable, and assessable.

Algorithms need to be both globally and locally explainable. Global explainability relates to the validity of the algorithm in general, while local explainability relates to how the algorithm arrived at a particular decision. One important way of testing an algorithm, even when it is a black box, is to experiment with different inputs and see how the result changes.
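A crude version of that probing, sketched below with an entirely hypothetical scoring function, is to perturb one input at a time around a particular case and watch how the output moves; the feature names and the model are illustrative assumptions, not any real credit-scoring system.

```python
# Probe a black-box scorer locally by nudging one input at a time (hypothetical model).
import numpy as np

def black_box(income, age, debt):
    """Stand-in for an opaque scoring model we cannot inspect directly."""
    return 1 / (1 + np.exp(-(0.00005 * income - 0.01 * age - 0.002 * debt)))

case = {"income": 30000, "age": 45, "debt": 5000}
baseline = black_box(**case)
print(f"baseline score: {baseline:.3f}")

# Report how the score shifts when each input is nudged: a crude local explanation.
for feature, step in [("income", 1000), ("age", 5), ("debt", 1000)]:
    probe = dict(case, **{feature: case[feature] + step})
    delta = black_box(**probe) - baseline
    print(f"{feature} +{step}: score changes by {delta:+.4f}")
```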

DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.

Explanation can be provided at many levels. At the top level this might be a simple verbal summary. At the next level it might be access to a range of graphical and numerical representations, with the ability to run 'what if' queries. At a deeper level, text and tables might show the procedures that the algorithm used. Deeper still would be the mathematics underlying the algorithm. Lastly, the code that runs the algorithm should be inspectable. I would say that a good explanation depends on understanding what the user wants to know - in other words, it is not just a function of the decision-making process but also of the user’s actual and desired state of knowledge.


Without these types of explanation, algorithms such as COMPAS, used in the US to predict rates of recidivism, are difficult to trust.

It is easy to feel that an algorithm is unfair or can’t be trusted. If it cannot provide sufficiently good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions.

Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.

See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html, and read his new book “The Art of Statistics: Learning from Data”, available shortly.

David Spiegelhalter