Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live. It’s kind of important we can trust them!
David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival, he gave a talk (21st March 2019) on making algorithms trustworthy.
I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.
Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.
But more importantly they can affect life and death situations. The results of an algorithmic assessment of what disease you might have could be highly influential, affecting your treatment, your well-being and your future behaviour.
David is a fan of Onora O’Neill, who suggests that organisations should not aim to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.

The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes the use of randomised controlled trials, and long-term surveillance after a drug comes onto the market to spot rare side-effects.
The same sorts of procedures should be applied to algorithms. However, currently only the first phase, testing on new data, is common. Sometimes algorithms are tested against the decisions that human experts make. Rarely are randomised controlled trials conducted, or the algorithm in use subjected to long-term monitoring.
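As a rough illustration of what even that first phase means in practice, here is a minimal sketch in Python: an algorithm is evaluated on held-out data it never trained on, and its decisions are also compared with those of human experts. Everything here is assumed for illustration: the data is synthetic, the model is a plain logistic regression, and the 90% expert accuracy is an invented figure, not one from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "patients": two risk factors and a true disease label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Phase one at minimum: hold out data the algorithm never trains on.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
algo_decisions = model.predict(X_test)

# Simulated expert decisions: correct 90% of the time (an assumption).
experts = np.where(rng.random(len(y_test)) < 0.9, y_test, 1 - y_test)

print(f"algorithm vs outcome: {accuracy_score(y_test, algo_decisions):.2f}")
print(f"experts   vs outcome: {accuracy_score(y_test, experts):.2f}")
print(f"algorithm vs experts: {accuracy_score(experts, algo_decisions):.2f}")
```

The point of the third comparison is that agreement with experts is a weaker standard than agreement with outcomes; the later "phases" (trials, long-term monitoring) have no analogue in this sketch at all.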


Algorithms should be transparent. They should be able to explain their decisions as well as provide them. But transparency is not enough. O’Neill uses the term 'intelligent openness' to describe what is required. Explanations need to be accessible, intelligible, usable, and assessable.


DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.
Explanation can be provided at many levels:

- At the top level, a simple verbal summary.
- Next, access to a range of graphical and numerical representations, with the ability to run 'what if' queries.
- Deeper, text and tables showing the procedures that the algorithm used.
- Deeper still, the mathematics underlying the algorithm.
- Lastly, the code that runs the algorithm should be inspectable.

I would say that a good explanation depends on understanding what the user wants to know - in other words, it is not just a function of the decision-making process but also a function of the user’s actual and desired state of knowledge.
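To make two of those levels concrete, here is a minimal sketch for a hypothetical logistic-regression risk score: a numerical explanation (each feature's contribution to the log-odds) and an interactive-style 'what if' query. The feature names and weights are invented for illustration and stand in for whatever a real system would use.

```python
import numpy as np

# Assumed model: log-odds = intercept + w . x (weights are illustrative).
feature_names = ["age", "blood_pressure", "cholesterol"]
weights = np.array([0.04, 0.02, 0.01])
intercept = -6.0

def risk(x):
    """Predicted probability from the logistic model."""
    return 1.0 / (1.0 + np.exp(-(intercept + weights @ x)))

def explain(x):
    """Numerical level: each feature's contribution to the log-odds."""
    return dict(zip(feature_names, weights * x))

patient = np.array([60.0, 140.0, 220.0])
print(f"risk: {risk(patient):.2f}")
print("contributions to log-odds:", explain(patient))

# 'What if' query: how does the risk change if blood pressure drops to 120?
counterfactual = patient.copy()
counterfactual[1] = 120.0
print(f"risk if blood_pressure were 120: {risk(counterfactual):.2f}")
```

For a linear model these contributions are exact; for the deep models DeepMind works with, the equivalent numbers have to be approximated from intermediate stages of the computation, which is precisely the hard part.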


It is easy to feel that an algorithm is unfair or can’t be trusted. If it cannot provide sufficiently good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions.
Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.
See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html. And read his new book, “The Art of Statistics: Learning from Data”, available shortly.



David Spiegelhalter
