Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post for example covered two talks in Cambridge – one mentioning satellite monitoring and swarms of drones and the other going more deeply into surveillance capitalism where big companies (you know who) collect data about you and sell it on the behavioural futures market.
So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side of artificial intelligence ethics. Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma. Using statistical techniques and machine learning approaches, she has been able to differentiate between five more or less distinct conditions that are all labelled asthma. Just as with cancer, there may be a whole host of underlying conditions that are all given the same name but may in fact have different underlying causes and environmental triggers.
This is important because treating a set of conditions that may have family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others, and actually do harm to some people. Where this is leading is towards personalised medicine, where each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.
The statistical techniques that Danielle uses attempt to identify the underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact arising from quite different functional mechanisms.
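To make the idea concrete, here is a minimal sketch of that kind of inference: a two-component Gaussian mixture fitted by expectation-maximisation, recovering two hidden "endotypes" from a single observed "severity" measurement. The data are simulated and the model is deliberately toy-sized; this is an illustration of the general technique, not Danielle Belgrave's actual models.

```python
import math
import random

random.seed(0)

# Simulate two hidden endotypes that both present as one observed
# symptom score, but with different underlying means (2.0 vs 4.0).
n = 500
severity = [random.gauss(2.0, 0.5) if random.random() < 0.5
            else random.gauss(4.0, 0.5)
            for _ in range(n)]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_gaussians(xs, iters=50):
    """Simple EM for a two-component 1-D Gaussian mixture."""
    mu = [min(xs), max(xs)]                      # crude initialisation
    mean = sum(xs) / len(xs)
    s = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    sigma = [s, s]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: how responsible is each component for each point?
        resp = []
        for x in xs:
            d = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            total = d[0] + d[1]
            resp.append([d[0] / total, d[1] / total])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            wk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / wk
            sigma[k] = (sum(r[k] * (x - mu[k]) ** 2
                            for r, x in zip(resp, xs)) / wk) ** 0.5
            pi[k] = wk / len(xs)
    return mu, sigma, pi

mu, sigma, pi = fit_two_gaussians(severity)
print(sorted(round(m, 1) for m in mu))  # recovered group means
```

Even with every patient reduced to one number, the fitted means land close to the true 2.0 and 4.0, separating two groups that a single diagnostic label would lump together. Real endotype work uses richer phenotype data and far more sophisticated models, but the underlying move is the same: treat the diagnosis as latent and infer it from what can be observed.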
Appearances can be deceptive, and while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact, the only thing they have in common is how they appear.
It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice, in asthma, cancer, mental illness and many other conditions, of getting the medication right by trying a few things until you find something that works.
But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.
We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practices. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective 'on average', so to speak. That is, if you apply them to large populations there is significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era where it becomes commonplace to personalise medicine to targeted sub-groups and individuals.
It’s not yet routine and easy, but with dedication, skill and persistence together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave who have devoted their careers to making this progress. I think most people would agree that teasing out the distinction between appearance and underlying mechanisms is both a generic and an uncontroversially ethical application of artificial intelligence.