Ethical AI

Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post, for example, covered two talks in Cambridge – one mentioned satellite monitoring and swarms of drones, and the other went more deeply into surveillance capitalism, where big companies (you know who) collect data about you and sell it on the behavioural futures market.

So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side to artificial intelligence ethics. Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma. Using statistical techniques and machine learning approaches, she has been able to differentiate between five more or less distinct conditions that are all labelled asthma. Just as with cancer, there may be a whole host of underlying conditions that are all given the same name but may in fact have different underlying causes and environmental triggers.

This is important because treating a set of conditions that bear only a family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others, and actually do harm to some people. Where this is leading is towards personalised medicine, where each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.

The statistical techniques that Danielle uses attempt to identify the underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact arising from quite different functional mechanisms.
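
To make that concrete, here is a toy sketch (my own illustration, not Danielle’s actual model: the feature names, the synthetic data and the choice of five components are all assumptions) of how a mixture model can assign each patient a probability of belonging to each latent endotype rather than a single ‘asthma’ label:

```python
# Illustrative sketch only: clustering observable phenotype features into
# latent sub-groups with a Gaussian mixture model. The features and the
# choice of five components are assumptions for this example, not a
# description of Danielle Belgrave's published work.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fake phenotype data: rows are patients, columns are observable measurements
# (e.g. wheeze frequency, lung-function score, allergen sensitisation).
phenotypes = rng.normal(size=(500, 3))

# Fit a mixture model with five latent components, one per hypothesised endotype.
model = GaussianMixture(n_components=5, random_state=0).fit(phenotypes)

# Each patient gets a probability of belonging to each latent endotype,
# rather than a single hard label of "asthma".
endotype_probabilities = model.predict_proba(phenotypes)
print(endotype_probabilities[:3].round(2))
```

The point of the sketch is the shape of the output: soft probabilities over latent sub-groups, not the hard diagnostic label we started with.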

Appearances can be deceptive, and while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact, the only thing they have in common is how they appear.

It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice whereby getting the medication right in asthma, cancer, mental illness and many other conditions means trying a few things until you find something that works.

But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.

We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practices. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective ‘on average’, so to speak. That is, if you apply them to large populations there is a significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era where it becomes commonplace to personalise medicine for targeted sub-groups and individuals.

It’s not yet routine and easy, but with dedication, skill and persistence, together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave who have devoted their careers to making this progress. I think most people would agree that teasing out the distinction between appearance and underlying mechanism is both a generic and an uncontroversially ethical application of artificial intelligence.

Danielle Belgrave



How do we embed ethical self-regulation into Artificially Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us as they conflict with our default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies, including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the human operating system. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
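
As a purely illustrative sketch of that loop (my own reading of the paragraph above, with invented class and method names, not a published HOS design), one might organise it roughly as follows:

```python
# Purely illustrative sketch of the "human operating system" loop described
# above; the classes and methods are my own invention for this post, not a
# published architecture.
from dataclasses import dataclass, field


@dataclass
class Intention:
    description: str
    priority: int


@dataclass
class HOSAgent:
    expectations: dict = field(default_factory=dict)   # what we predict will happen
    intentions: list = field(default_factory=list)     # prioritised goals

    def observe(self, observations: dict) -> dict:
        # Compare current interpretations with expectations to find anomalies.
        return {k: v for k, v in observations.items()
                if self.expectations.get(k) != v}

    def update_intentions(self, anomalies: dict) -> None:
        # Anomalies that matter generate or re-prioritise intentions.
        for name, value in anomalies.items():
            self.intentions.append(Intention(f"resolve {name}={value}", priority=1))
        self.intentions.sort(key=lambda i: i.priority, reverse=True)

    def act(self) -> str:
        # Act on the highest-priority intention, if any.
        return self.intentions.pop(0).description if self.intentions else "idle"


agent = HOSAgent(expectations={"weather": "dry"})
anomalies = agent.observe({"weather": "rain"})
agent.update_intentions(anomalies)
print(agent.act())  # -> "resolve weather=rain"
```

Even this toy version shows how expectations, anomaly detection and intention management fit together, before any learning, emotion or social negotiation is layered on top.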
