Category Archives: Talks

– Next Stop, Biological AI

This truly startling talk by Professor Michael Levin, from the Allen Discovery Center at Tufts University, has implications for everything – not just regenerative medicine.

It is no exaggeration to describe the work done in Levin’s lab as Frankensteinian. This is not a criticism, just an inevitable observation.

Levin describes biochemical interventions that can affect electrical transmission at the inter-cellular level in a range of organisms. These interventions change the parameters for regeneration of body parts and reveal that a non-neural regenerative memory can exist throughout an organism. From the earliest 'primitive' life forms onwards, anatomical decision-making has been taking place in every cell, and at every level of body structure.

Levin gives a highly informed factual account of findings in bioelectrical computation. Although he only touches on the implications, these techniques potentially lead to a technology that can design new life-forms and biologically-based computation devices.

It seems incredible that research results like these are possible now. It may be years or decades before they translate into medical interventions for humans, or are applied to creating biologically-based artificial intelligence, but the vision is clear.

To me, more frightening than the content of this talk is the Facebook logo hanging over Levin’s head (no doubt just promotion, but still!).

YouTube Video, What Bodies Think About: Bioelectric Computation Outside the Nervous System – NeurIPS 2018, Artificial Intelligence Channel, December 2018, 52:06 minutes

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium has developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the users, control.

The Digital Person
This turns the tables on organisations like Facebook and Google, who have given users little choice about the rights over their own data, or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that aligns closely with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job of teasing out the various issues and finding ways of putting the user back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately I don’t have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers’ slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– AI and Neuroscience Intertwined

Artificial intelligence has learnt a lot from neuroscience. It was the move away from symbolic to neural net (machine learning) approaches that led to the current surge of interest in AI. Neural net approaches have enabled AI systems to do humanlike things such as object recognition and categorisation that had eluded the symbolic approaches.

So it was with great interest that I attended Dr. Tim Kietzmann's talk at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge UK, earlier this month (March 2019), on what artificial intelligence (AI) and neuroscience can learn from each other.

Tim is a researcher and graduate supervisor at the MRC CBU and investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

Both AI and neuroscience aim to understand information processing and decision making - neuroscience primarily through empirical studies and AI primarily through computational modelling. The talk was symmetrical: the first half asked 'how can neuroscience benefit from artificial intelligence?', and the second half asked 'how can artificial intelligence benefit from neuroscience?'.

Types of AI

It is important to distinguish between 'narrow', 'general' and 'super' AI. Narrow AI is what we have now. In this context, it is the ability of a machine learning algorithm to recognise or classify particular things. This is often something visual like a cat or a face, but it could be a sound (as when an algorithm is used to identify a piece of music or in speech recognition).

General AI is akin to what people have. When or if this will happen is speculative. Ray Kurzweil, Google’s Director of Engineering, predicts 2029 as the date when an AI will pass the Turing test (i.e. a human will not be able to tell the difference between a person and an AI when performing tasks). The singularity (the point at which we multiply our effective intelligence a billionfold by merging with the intelligence we have created) he predicts will happen by about 2045. Super AIs exceed human intelligence. Right now, they only appear in fiction and films.

It is impossible to predict how this will unfold. After all, you could argue that the desktop calculator several decades ago exceeded human capability in the very narrow domain of performing mathematical calculations. It is possible to imagine many very narrow and deep skills like this becoming fully integrated within an overall control architecture capable of passing results between them. That might look quite different from human intelligence.

One Way or Another

Research in machine learning, a sub-discipline of AI, has given neuroscience researchers pattern recognition techniques that can be used to understand high-dimensional neural data. Moreover, the deep learning algorithms that have been so successful in creating a new range of applications and interest in AI offer an exciting new framework for researchers like Tim and colleagues to advance knowledge of the computational principles at play in the brain. AI allows researchers to test different theories of brain computations and cognitive function by implementing and testing them. 'Today's computational neuroscience needs machine learning techniques from artificial intelligence'.
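
To give a flavour of what this looks like in practice, here is a minimal sketch of the kind of pattern-recognition 'decoding' analysis applied to high-dimensional neural data. The data are synthetic placeholders (simulated trials and features), not Tim's actual neuroimaging recordings, and the numbers are arbitrary.

```python
# A minimal sketch of a pattern-recognition "decoding" analysis on
# high-dimensional neural data. The data are synthetic placeholders
# (simulated trials x features), not real neuroimaging recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 500              # e.g. trials x voxels/sensors
labels = rng.integers(0, 2, n_trials)        # two experimental conditions

# Inject a weak condition-dependent signal into otherwise random "activity".
pattern = rng.normal(size=n_features)
data = np.outer(labels - 0.5, pattern) * 0.3 + rng.normal(size=(n_trials, n_features))

# Above-chance cross-validated accuracy means the condition can be decoded
# from the measured activity pattern.
scores = cross_val_score(LogisticRegression(max_iter=1000), data, labels, cv=5)
print("decoding accuracy:", round(scores.mean(), 2))
```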

AI benefits from neuroscience, which can inform the development of a wide variety of AI applications, from care robots to medical diagnosis and self-driving cars. Some principles that commonly apply in human learning (such as building on previous knowledge and unsupervised learning) are not yet integrated into AI systems.

For example, a child can quickly learn to recognise certain types of objects, even those such as a mythical 'Tufa' that they have never seen before. A machine learning algorithm, by contrast, would require tens of thousands of training instances in order to reliably perform the same task. Also, AI systems can easily be fooled in ways that a person never would be. Adding specially crafted 'noise' to an image of a dog can lead an AI to misclassify it as an ostrich. A person would still see a dog and not make this sort of mistake. Having said that, children will over-generalise from exposure to a small number of instances, and so also make mistakes.
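
As an illustration of how such adversarial 'noise' can be constructed, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM). The toy model, random input and label are my own placeholders for illustration; the dog-to-ostrich example involved a real image classifier.

```python
# A minimal sketch of an adversarial perturbation in the spirit of the
# fast gradient sign method (FGSM). The toy model, random "image" and
# label are placeholders; the dog/ostrich example used a real classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a photo
label = torch.tensor([3])                              # stand-in true class, e.g. "dog"

loss = F.cross_entropy(model(image), label)
loss.backward()                                        # gradient of loss w.r.t. the pixels

epsilon = 0.05                                         # small, near-imperceptible budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The two images look the same to a person, but the model's prediction can flip.
print(model(image).argmax(dim=1).item(), model(adversarial).argmax(dim=1).item())
```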

It could be that the column structures found in the cortex have some parallels to the multi-layered networks used in machine learning and might inform how they are designed. It is also worth noting that the idea of reinforcement learning used to train artificial neural nets originally came out of behavioural psychology - in particular the work of Pavlov and Skinner. This illustrates the 'intertwined' nature of all these disciplines.

The Neuroscience of Ethics

Although this was not covered in the talk, when it comes to ethics, neuroscience may have much to offer AI, especially as we move from narrow AI into artificial general intelligence (AGI) and beyond. Evidence is growing as to how brain structures, such as the pre-frontal cortex, are involved in inhibiting thought and action. Certain drugs affect neuronal transmission and can disrupt these inhibitory signals. Brain lesions and the effects of strokes can also interfere with moral judgements. The relationship of neurological mechanisms to notions of criminal responsibility may also reveal findings relevant to AI. It seems likely that one day the understanding of the relationship between neuroscience, moral reasoning and the high-level control of behaviours will have an impact on the design of, and architectures for, artificial autonomous intelligent systems (see, for example, Neuroethics: Challenges for the 21st Century, Neil Levy, Cambridge University Press, 2007, or A Neuro-Philosophy of Human Nature: Emotional Amoral Egoism and the Five Motivators of Humankind, April 2019).

Understanding the Brain

The reality of the comparison between human and artificial intelligence comes home when you consider the power requirements of the human brain and of computer processors performing similar tasks. While the brain uses about 15 watts, a single graphics processing unit alone requires up to 250 watts.

It has often been said that you cannot understand something until you can build it. That provides a benchmark against which we can measure our understanding of neuroscience. Building machines that perform as well as humans is a necessary step in that understanding, although that still does not imply that the mechanisms are the same.

Read more on this subject in an article from Stanford University. Find out more about Tim's work on his website at http://www.timkietzmann.de or follow him on Twitter (@TimKietzmann).

Tim Kietzmann

– Making Algorithms Trustworthy

Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live.  It’s kind of important we can trust them!

David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival he was talking (21st of March 2019) on the subject of making algorithms trustworthy.

I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.



Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.

But more importantly, they can affect life and death situations. The results of an algorithmic assessment of what disease you might have could be highly influential, affecting your treatment, your well-being and your future behaviour.

David is a fan of Onora O’Neill who suggests that organisations should not be aiming to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.


The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes the use of randomised controlled trials, and long-term surveillance after a drug comes onto the market, to spot rare side-effects.

The same sorts of procedures should be applied to algorithms. However, currently only the first phase of testing on new data is common. Sometimes algorithms are tested against the decisions that human experts make. Rarely are randomised controlled trials conducted, or the algorithm in use subjected to long-term monitoring.


As an aside, David entertained us by reporting on how the machine learning community has become obsessed with training algorithms to predict who did or did not survive the Titanic. Unsurprisingly, being a woman or a child helped a lot. David used this example to present a statistically derived decision tree. The point he was making was that the decision tree could (at least sometimes) be used as an explanation, whereas machine learning algorithms are generally black boxes (i.e. you can't inspect the algorithm itself).
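
For readers who want to see what such an inspectable model looks like, here is a minimal sketch of fitting a small decision tree to the openly available Titanic passenger data. The seaborn sample dataset, the chosen features and the tree depth are my own assumptions for illustration; this is not the exact tree David presented.

```python
# A minimal sketch of fitting a small, inspectable decision tree to the
# Titanic passenger data. The seaborn sample dataset, chosen features and
# tree depth are assumptions for illustration, not David's actual tree.
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier, export_text

titanic = sns.load_dataset("titanic").dropna(subset=["age"])
X = titanic[["pclass", "age"]].assign(is_female=(titanic["sex"] == "female").astype(int))
y = titanic["survived"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black box, the fitted tree prints as a set of human-readable rules
# (splits on is_female, pclass and age), which can serve as an explanation.
print(export_text(tree, feature_names=list(X.columns)))
```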

Algorithms should be transparent. They should be able to explain their decisions as well as to provide them. But transparency is not enough. O’Neill uses the term 'intelligent openness' to describe what is required. Explanations need to be accessible, intelligible, usable, and assessable.

Algorithms need to be both globally and locally explainable. Global explainability relates to the validity of the algorithm in general, while local explainability relates to how the algorithm arrived at a particular decision. One important way of being able to test an algorithm, even when it’s a black box, is to be able to play with inputting different parameters and seeing the result.
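
A simple way to do this, even without access to the algorithm's internals, is to vary one input at a time while holding the others fixed and watch how the output moves. The sketch below uses a made-up scoring function as a stand-in for the black box; in practice you would call the real system's prediction interface instead.

```python
# A minimal sketch of probing a black box by varying one input while holding
# the others fixed (a simple "what if" query). The scoring function below is
# a made-up stand-in; in practice you would call the real system instead.
import numpy as np

def black_box(age, income, debt):
    # Placeholder for an opaque scoring algorithm (e.g. a loan decision).
    return 1.0 / (1.0 + np.exp(-(0.02 * income - 0.05 * debt - 0.01 * age)))

baseline = {"age": 40, "income": 30, "debt": 10}
for income in range(10, 60, 10):
    score = black_box(baseline["age"], income, baseline["debt"])
    print(f"income={income:3d} -> score={score:.2f}")
```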

DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.

Explanation can be provided at many levels. At the top level this might be a simple verbal summary. At the next level it might be having access to a range of graphical and numerical representations with the ability to run 'what if' queries. At a deeper level, text and tables might show the procedures that the algorithm used. Deeper still would be the mathematics underlying the algorithm. Lastly, the code that runs the algorithm should be inspectable. I would say that a good explanation is dependent on understanding what the user wants to know - in other words, it is not just a function of the decision-making process but also a function of the user’s actual and desired state of knowledge.


Without these types of explanation, algorithms such as COMPAS, used in the US to predict rates of recidivism, are difficult to trust.

It is easy to feel that an algorithm is unfair or can’t be trusted. If it cannot provide sufficiently good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions.

Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.

See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html, and read his new book “The Art of Statistics: Learning from Data”, available shortly.

David Spiegelhalter

– It’s All Broken, but we can fix it

Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it’s all broken but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'Futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and Robot Ethics.

Moral Machines?

This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was that we would be saved by Artificial Intelligence and Robots. His argument - AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with the first premise (although I agree with pretty much everything else). First, even assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an auto-pilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact; rather it's that of the human designers. But I argue (see the right-hand panel of any blog page on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.

The Unpredictable...

Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage". "Trains will never succeed because women were not designed to travel at more than 50 miles per hour".
We are so bad at prediction because we each grow up in our own unique situations and it's very difficult to see the world from outside our own box - when delayed on the M11, don't think you are in a traffic jam; you are the traffic jam! Prediction is partly difficult because technology is changing at an exponential rate. Once it took hundreds of years for a technology (say carpets) to be generally adopted. The internet only took a handful of years.

...But Possible

Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service') where all the pains of ownership are taken care of by somebody else.

AI and Robots will take all the boring, cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one too, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems.) Education is broken because it was designed for a 1950s world. It should be detached from politics, because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.

Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by the pharmaceutical industry, which now primarily exists to make money. It currently spends twice as much on marketing as it does on research and development. They are marketing companies, not drug companies.

While every company espouses innovation as one of its key values, for the most part it's just a platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.

Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).

We can fix it - yes we can

On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.

For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.

Some technological solutions include 'Blockchain', which Mark describes as 'double entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
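
The core trick that makes this possible is simple to sketch: each entry in the ledger commits to a cryptographic hash of the previous entry, so any later tampering breaks the chain and can be detected by anyone, without a trusted intermediary. The toy ledger below is a bare illustration of that hash-chaining idea (real blockchains add digital signatures, distributed consensus and much more).

```python
# A toy illustration of the hash-chaining idea behind a blockchain ledger:
# each entry commits to the previous entry's hash, so any tampering breaks
# the chain. Real blockchains add signatures, consensus and much more.
import hashlib
import json

def entry_hash(record, prev_hash):
    return hashlib.sha256(json.dumps([record, prev_hash]).encode()).hexdigest()

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": entry_hash(record, prev_hash)})

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")

# Anyone can verify the ledger by recomputing the hashes and checking the links.
intact = all(
    block["hash"] == entry_hash(block["record"], block["prev_hash"])
    and (i == 0 or block["prev_hash"] == ledger[i - 1]["hash"])
    for i, block in enumerate(ledger)
)
print("ledger intact:", intact)
```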

Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.

I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org

Mark Stevenson