
Category Archives: Talks

– AI: Measures, Maps and Taxonomies

Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s Science Festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.

José opened by pointing out that artificial intelligence is not a subset of human intelligence. Rather, the two overlap. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and in how little training it needs to learn new concepts.

José Hernández-Orallo

José’s main message was that, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which to compare artificial and human intelligence or to determine the pace of progress in artificial intelligence. We have no maps to help us navigate the space of artificial intelligence offerings (for example, which offerings might be ethical and which potentially harmful). And lastly, we have no taxonomies with which to classify approaches to, and examples of, artificial intelligence.

Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or, more generally, reinforcement learning), there is no overall, widely used classification scheme.

Intelligence not included

My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See, for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time, they might score at superhuman levels on some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity - the point at which artificial intelligence might overtake human intelligence.

Another take on this would be to look at skills. Interestingly, systems like Amazon’s Alexa describe the applications or modules that developers offer as ‘skills’. So, for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to perform some task effectively. However, by any standard, the skill offered by a typical Alexa ‘skill’, Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is almost all in the speech recognition, and to some extent the speech production, side; very little of it concerns the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
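
To make that concrete, here is a minimal sketch of what a skills-based measure might look like. It is purely illustrative: the SkillsProfile class, the skill names and the scores are all invented for this post, and nothing here corresponds to an established benchmark or to anything José proposed.

```python
# Purely illustrative sketch: a toy 'skills profile' for comparing AI
# systems. Skill names and scores are invented; this is not an
# established benchmark or taxonomy.

from dataclasses import dataclass, field


@dataclass
class SkillsProfile:
    name: str
    # Each skill maps to a proficiency score in [0, 1].
    skills: dict[str, float] = field(default_factory=dict)

    def breadth(self) -> int:
        """A crude 'measure': how many distinct skills are present at all."""
        return sum(1 for score in self.skills.values() if score > 0)


def overlap(a: SkillsProfile, b: SkillsProfile) -> float:
    """A crude 'map': similarity via shared skills, weighted by the lower score."""
    shared = set(a.skills) & set(b.skills)
    if not shared:
        return 0.0
    return sum(min(a.skills[s], b.skills[s]) for s in shared) / len(shared)


voice_assistant = SkillsProfile("hotel-booking voice skill", {
    "speech recognition": 0.9,   # where most of the real AI lives
    "speech production": 0.8,
    "hotel domain knowledge": 0.1,
})
go_engine = SkillsProfile("Go engine", {"playing Go": 1.0})

print(voice_assistant.breadth())            # 3
print(go_engine.breadth())                  # 1
print(overlap(voice_assistant, go_engine))  # 0.0 - no shared skills
```

Even this toy version shows the superhuman-but-narrow pattern described above: the Go engine scores perfectly on its single skill yet shares no skills with anything else on the map.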

When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.

But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.

– What does it mean to be human?

John Wyatt is a doctor, author and research scientist. His concern is the ethical challenges that arise with technologies like artificial intelligence and robotics. On Tuesday this week (11th March 2019) he gave a talk called ‘What does it mean to be human?’ at the Wesley Methodist Church in Cambridge.

To a packed audience, he pointed out how interactions with artificial intelligence and robots will never be the same as the type of ‘I – you’ relationships that occur between people. He emphasised the important distinction between ‘beings that are born’ and ‘beings that are made’ and how this distinction will become increasingly blurred as our interactions with artificial intelligence become commonplace. We must be ever vigilant against the use of technology to dehumanise and manipulate.

I can see where this is going. The tendency for people to anthropomorphise is remarkably strong - ‘the computer won’t let me do that’, ‘the car has decided not to start this morning’. Research shows that we can even attribute intentions to animated geometrical shapes ‘chasing’ each other around a computer screen, let alone cartoons. Just how difficult is it going to be not to attribute the ‘human condition’ to a chatbot with an indistinguishably human voice, or to a realistically human robot? Children are already being taught to say ‘please’ and ‘thank you’ to devices like Alexa, Siri and Google Home – maybe a good thing in some ways, but …

One message I took away from this talk was a suggestion for a number of new human rights in this technological age. These are: (1) the right to cognitive liberty (to think whatever you want), (2) the right to mental privacy (without others knowing), (3) the right to mental integrity, and (4) the right to psychological continuity - the last two concerning the preservation of ‘self’ and ‘identity’.

A second message was to consider which country was most likely to make advances in the ethics of artificial intelligence and robotics. His conclusion – the UK. That reassures me that I’m in the right place.

See more of John’s work, such as his essay ‘God, neuroscience and human identity’, at his website johnwyatt.com.

John Wyatt

– Ethical AI

Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post, for example, covered two talks in Cambridge: one mentioning satellite monitoring and swarms of drones, and the other going more deeply into surveillance capitalism, where big companies (you know who) collect data about you and sell predictions of your behaviour on the behavioural futures market.

So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side of artificial intelligence ethics. Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma. Using statistical techniques and machine learning approaches, she has been able to differentiate between five more or less distinct conditions that are all labelled ‘asthma’. Just as with cancer, there may be a whole host of underlying conditions that are all given the same name but in fact have different underlying causes and environmental triggers.

This is important because treating a set of conditions that bear only a family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others, and actually do harm to some people. Where this is leading is towards personalised medicine, in which each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.

The statistical techniques that Danielle uses attempt to identify underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact arising from quite different functional mechanisms.
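
As a purely illustrative sketch (this is not Danielle’s actual model, which is far more sophisticated), the shape of the problem can be seen with something as simple as a Gaussian mixture model: synthetic ‘phenotype’ measurements are generated from two hidden ‘endotypes’, pooled without labels, and the model tries to recover the sub-types.

```python
# Illustrative sketch only: recovering latent sub-types ('endotypes')
# from observable measurements ('phenotypes') with a Gaussian mixture
# model. The data are synthetic and the model is far simpler than
# anything used in the actual asthma research.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two hidden endotypes producing similar-looking phenotype measurements
# (imagine, say, lung function and symptom frequency).
endotype_a = rng.normal(loc=[1.0, 3.0], scale=0.5, size=(100, 2))
endotype_b = rng.normal(loc=[2.5, 2.0], scale=0.5, size=(100, 2))
phenotypes = np.vstack([endotype_a, endotype_b])  # pooled, unlabelled

# Fit a two-component mixture to the pooled data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(phenotypes)

# Each individual gets a probability of belonging to each sub-type,
# rather than a hard label - useful when the sub-types overlap.
membership = gmm.predict_proba(phenotypes)
print(membership[:3].round(2))
```

In practice, of course, the number of sub-types is itself unknown and has to be selected by model comparison, which is part of what makes the real problem so much harder.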

Appearances can be deceptive and, while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact, the only thing they have in common is how they appear.

It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice, in asthma, cancer, mental illness and many other conditions, of getting the medication right by trying a few things until you find something that works.

But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.

We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practices. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective ‘on average’, so to speak. That is, if you apply them to large populations there is significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era in which it becomes commonplace to personalise medicine to targeted sub-groups and individuals.

It’s not yet routine and easy, but with dedication, skill and persistence, together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave, who have devoted their careers to making this progress. I think most people would agree that teasing out the distinction between appearance and underlying mechanisms is both a generic and an uncontroversially ethical application of artificial intelligence.

Danielle Belgrave

– A Changing World: so, what’s to worry about?

A World that can change – before your eyes!

I’ve been to a couple of good talks in Cambridge (UK) this week. First, futurist Sophie Hackford (formerly of Singularity University and Wired magazine) gave a fast-paced talk about a wide range of technologies that are shaping the future. If you don’t know about swarms of drones, low-orbit satellite monitoring, neural implants, face recognition for payments, high-speed trains and rocket transportation, then you need to, fast. I haven’t found a video of this very recent talk yet, but the one below, from a year ago, gives a pretty good indication of why we need to think through the ethical issues.

YouTube Video, Tech Round-up of 2017 | Sophie Hackford | CTW 2017, January 2018, 26:36 minutes

The Age of Surveillance Capitalism

The second talk is, in some ways, even more scary. We are already aware that the likes of Google, Facebook and Amazon are closely watching our every move (and hearing our every breath), and now almost every other company that is afraid of being left behind is doing the same thing. But what data are they collecting, and how are they using it? They use the data to predict our behaviour and sell those predictions on the behavioural futures market. And they are not just tracking our online behaviour; they are also influencing us in the real world. For example, apparently Pokémon Go was an experiment originally dreamed up by Google to see if retailers would pay to host ‘monsters’ to increase footfall past their stores. The talk, by Shoshana Zuboff, was at the Cambridge University Law Faculty. Here is an interview she did on radio the same day.

BBC Radio 4, Start the Week, Who is Watching You?, Monday 4th February 2019, 42:00 minutes
https://www.bbc.co.uk/programmes/m0002b8l