Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post, for example, covered two talks in Cambridge: one mentioning satellite monitoring and swarms of drones, the other going more deeply into surveillance capitalism, where big companies (you know who) collect data about you and sell it on the behavioural futures market.
So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side of artificial intelligence ethics. Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma. Using statistical techniques and machine learning approaches, she has been able to differentiate between five more or less distinct conditions that are all labelled ‘asthma’. Just as with cancer, there may be a whole host of underlying conditions that are all given the same name but in fact have different underlying causes and environmental triggers.
This is important because treating a set of conditions that bear only a family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others, and actually do harm to some people. Where this is leading is towards personalised medicine, where each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.
The statistical techniques that Danielle uses attempt to identify the underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact arising from quite different functional mechanisms.
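The flavour of this can be sketched with a toy clustering example. To be clear, this is not Danielle’s actual method (her work uses far more sophisticated probabilistic models); the two ‘endotypes’, the two phenotype measurements and all the numbers below are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical illustration: two latent "endotypes" that both get labelled
# "asthma".  Each patient is summarised by two invented phenotype
# measurements (say, an allergy score and a lung-function score).
def make_patient(endotype):
    if endotype == 0:   # mechanism A: allergy-driven
        return (random.gauss(8.0, 1.0), random.gauss(3.0, 1.0))
    else:               # mechanism B: infection-driven
        return (random.gauss(2.0, 1.0), random.gauss(7.0, 1.0))

patients = [make_patient(i % 2) for i in range(200)]
truth = [i % 2 for i in range(200)]

# A minimal k-means (k = 2): alternately assign each patient to the
# nearest centroid, then recompute the centroids.
def kmeans2(points, iters=20):
    c = [points[0], points[1]]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
            groups[d.index(min(d))].append(p)
        c = [tuple(sum(x) / len(g) for x in zip(*g)) for g in groups if g]
    return [min(range(len(c)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, c[i])))
            for p in points]

labels = kmeans2(patients)
# Agreement between recovered clusters and generating endotype
# (up to an arbitrary swap of the cluster labels).
agree = sum(l == t for l, t in zip(labels, truth)) / len(truth)
agree = max(agree, 1 - agree)
print(f"cluster/endotype agreement: {agree:.2f}")
```

With well-separated synthetic data the clusters recover the generating mechanism almost perfectly; the real research problem, of course, is that genuine phenotype data is noisy, high-dimensional and overlapping, which is why serious probabilistic modelling is needed.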
Appearances can be deceptive and while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact the only thing they have in common is how they appear.
It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice, in asthma, cancer, mental illness and many other conditions, of getting the medication right by trying a few things until you find something that works.
But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.
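This kind of inversion, reasoning backwards from a visible symptom to a hidden condition, can be sketched with Bayes’ rule. All the probabilities below are invented purely for illustration, echoing the ‘beast 1’/‘beast 2’ labels above:

```python
# Two hidden conditions can both produce the same observable symptom.
# Bayes' rule inverts P(symptom | condition) into P(condition | symptom).
prior = {"beast1": 0.7, "beast2": 0.3}        # assumed prevalence
p_symptom = {"beast1": 0.6, "beast2": 0.9}    # P(symptom | condition)

# P(symptom) = sum over conditions of P(symptom | c) * P(c)
evidence = sum(prior[c] * p_symptom[c] for c in prior)

# P(c | symptom) = P(symptom | c) * P(c) / P(symptom)
posterior = {c: prior[c] * p_symptom[c] / evidence for c in prior}
print(posterior)
```

Even in this toy version the point survives: the symptom alone never tells you which beast you are facing, only how to shift your odds, and with realistic numbers those odds can remain uncomfortably close.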
We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practice. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective ‘on average’, so to speak. That is, if you apply them to large populations there is significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era where it becomes commonplace to personalise medicine to targeted sub-groups and individuals.
It’s not yet routine and easy, but with dedication, skill and persistence together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave who have devoted their careers to making this progress. I think most people would agree that teasing out the distinction between appearance and underlying mechanisms is both a generic and an uncontroversially ethical application of artificial intelligence.
We are all deluded. And for the most part we don’t know it. We often feel as though we have control over our own decisions and destiny, but how true is that? It’s a bit like what US Secretary of Defense Donald Rumsfeld famously said in February 2002 about the ‘known knowns’, the ‘known unknowns’ and the ‘unknown unknowns’.
YouTube video, Donald Rumsfeld Unknown Unknowns!, Ali, August 2009, 34 seconds
The significance for ROBOT ETHICS: If people can only act on the basis of what they know, then it is easy to see the implications for artificial Autonomous Intelligent Systems (A/ISs), like robots, that ‘know’ so much less. They may act with the same confidence as people, who are biased towards thinking that what they know, and their interpretation of the world, is the only way to see it. Understanding the ‘goggles’ through which people see the world – how they learn, how they classify, how they form concepts and how they validate and communicate knowledge – is fundamental to embedding ethical self-regulation in A/ISs.
How can a brain that is deluded even get an inkling that it is? For the most part, the individual finds it very difficult. Interestingly, it is often those who are most confident that they are right who are most wrong (and, dangerously, who we most trust). Daniel Kahneman, winner of the 2002 Nobel Prize, has spent a lifetime studying the systematic biases in our thinking. Here is what he says about confidence:
YouTube video, Daniel Kahneman: The Trouble with Confidence, Big Think, February 2012, 2:56 minutes
The fact is that when it comes to our own interpretations of the world, there is very little that either you or I can absolutely know, as demonstrated by René Descartes in 1637. It has long been known that we have deficiencies in our abilities to understand and interpret the world, and indeed it can be argued that the whole system of education is motivated by the need to help individuals make more informed and more rational decisions (although it can equally be argued that education, and training in particular, is a sausage factory in the service of employers whose interests may not align with those of the individual).
The significance for ROBOT ETHICS: Whilst people may have some idea that there are things they do not know, this is generally not true of computer programs. Young children start to develop ethical ideas (e.g. a sense of fairness) from an early age, and it then takes years of schooling and good parenting to get to the point where, as an adult, the law assumes you have full responsibility for your actions. This highlights the huge gap between an adult human’s understanding of ethics and what A/ISs are likely to understand for the foreseeable future.
The debate about whether we should act by reason or by our intuitions and emotions is not new. The classic work on this is Kant’s ‘Critique of Pure Reason’ published in 1781. This is a masterpiece of epistemological analysis covering science, mathematics, the psychology of mind and belief based on faith and emotion. Kant distinguishes between truth by definition, truth by inference and truth by faith, setting out the main strands of debate for centuries to come. Here is a short, clear presentation of this work.
YouTube video, Introduction to Kant’s Critique of Pure Reason (Part 1 of 4), teach philosophy, September 2013, 4:52 minutes
From an individual’s point of view, by a process of cross validation between different sources of evidence (people we trust, the media and society generally, our own reasoned thinking, sometimes scientific research and our feelings), we are continuously challenged to construct a consistent view about the world and about ourselves. We feel a need to create at least some kind of semi-coherent account. It’s a primary mechanism of reducing anxiety. It keeps us orientated and safe. We need to account for it personally, and in this sense we are all ‘personal’ scientists, sifting the evidence and coming to our own conclusions. We also need to account for it as a society, which is why we engage in science and research to build a robust body of knowledge to guide us.
George Kelly, in 1955, set out ‘personal construct theory’ to describe this from the perspective of the individual – see, for example, this straightforward account of constructivism, which also, interestingly, proposes how to reconcile it with Christianity (a belief system based on an entirely different premise, methodology and pedigree):
But for the most part there are inconsistencies – between what we thought would happen and what actually did happen, between how we felt and how we thought, between how we thought and what we did, between how we thought somebody would react and how they did react, between our theories about the world and the evidence. Some of the time things are pretty well what we expect but almost as frequently, things don’t hang together, they just don’t add up. This drives us on a continuous search for patterns and consistency. We need to make sense of it all:
YouTube video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes
But it turns out that really, as Kahneman demonstrates, we are not particularly good scientists after all. Yes, we have to grapple with the problems of interpreting evidence. Yes, we have to try and understand the world in order to reduce our own anxieties and make it a safer place. But, no, we do not do this particularly systematically or rationally. We are lazy, and we are also as much artists as we are scientists. In fact, what we are is ‘storytellers’. We make up stories about how the world works – for ourselves and for others.
The significance for ROBOT ETHICS: The implication for A/ISs is that they must learn to see the world in a manner that is similar (or at least understandable) to the people around them. They must also have mechanisms to deal with ambiguous inputs and uncertain knowledge, because not much is straightforward when it comes to processing at the abstract level of ethics. Dealing with contradictory evidence by denial, forgetting and ignoring, as people often do, may not be the way we would like A/ISs to deal with ethical issues.
Sifting evidence is not the only way that we come to ‘know’. There is another method that, in many ways, is a lot more efficient and used just as often. This is to believe what somebody else says. So instead of having to understand and reconcile all the evidence yourself you can, as it were, delegate the responsibility to somebody you trust. This could be an expert, or a friend, or a God. After all, what does it matter whether what you (or anybody else) believe is true or not, so long as your needs are being met? If somebody (or something) repeatedly comes up with the goods, you learn to trust them, and when you trust, you can breathe a sigh of relief – you no longer have to make the effort to evaluate the evidence yourself. The source of information is often just as important as the information itself. Despite the inconsistencies, we believe the stories of those we trust, and if others trust us, they believe our stories.
Stories provide the explanations for what has happened and stories help us understand and predict what will happen. Our anxiety is most relieved by ‘a good story’. And while the story needs to have some resemblance to the evidence, and like in court can be challenged and cross-examined, what seems to matter most is that it is a ‘good’ story. And to be a ‘good’ story it must be interesting, revealing, surprising and challenging. Its consistency is just one factor. In fact, there can be many different stories, or accounts, of precisely the same incident or event – each account from a different perspective; interpreting, weighing and presenting the evidence from a different viewpoint or through a different value system. The ‘truth’ is not just how well the story accounts for the evidence but is also to do with a correspondence between the interpretive framework of the listener and that of the teller:
YouTube video, The danger of a single story | Chimamanda Ngozi Adichie, TED, October 2009, 19:16 minutes
Both as individuals and as societies, we often deny, gloss over and suppress the inconsistencies. They can be conveniently forgotten or repressed long enough for something else to demand our attention and pre-occupy us. But sometimes, even when we have smoothed them over for the sake of a ‘better’ story (often one that better reflects the biases in our own value system), the inconsistencies and the evidence about ourselves and the human condition fight back. Inconsistencies can re-emerge to create nagging doubts, and over time we start to wonder – is our story really true?
The significance for ROBOT ETHICS: Just like people, A/ISs will have to learn who to trust, how to identify and resolve inconsistencies in belief, and how to construct a variety of accounts of the world and of their own decision-making processes in order to explain themselves and generally communicate in forms that are understandable to people. As in human dialogue, these accounts will need to bring out certain facets of the A/IS’s own beliefs, and afford certain interpretations, depending on the intent of the A/IS and taking into account a knowledge of the person or people it is in dialogue with. Unlike in human dialogue, however, the intent of the A/IS must be to enhance the wellbeing of the people it serves (except when their intent is malicious with respect to other people), and to communicate transparently with this intent in mind.
Some Epistemological Assumptions
In these blog postings, I try not to take for granted any particular story about how we are and how we relate to each other. What really lies behind our motivations, decisions and choices? Is it the story that classical economists tell us about rational people in a world of perfect information? Is it the story neuroscientists tell us about how the brain works? Is it the story about the constant struggle between the id and the super-ego told to us by Freud? Is it the story that the advertising industry tells us about what we need for a more fulfilled life? Or is it the story that cognitive psychologists tell us about how we process information? Which account tells the best story? Can these different accounts be reconciled?
The epistemological view taken in this blog is eclectic, constructivist and pragmatic. As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.
We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us as they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.
In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.
The significance for ROBOT ETHICS: To embed ethical self-regulation in artificial Autonomous Intelligent Systems (A/ISs) will require an understanding of how people learn, interpret, reflect and act on the world, and may require a similar decision-making architecture. This is partly for the A/IS’s own ‘operating system’, but also so that it can model how the people around it operate, and so engage with them ethically and effectively.
This blog post, ‘It’s Like This’, sets the epistemological framework for what follows in later posts: the underlying assumptions about how we know, justify and explain what we know – both as individuals and in society.