It's All Broken, but we can fix it

Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it's all broken but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'Futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and Robot Ethics.

Moral Machines?

This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was that we would be saved by Artificial Intelligence and Robots. His argument: AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with this argument (although I agree with pretty much everything else). First, even assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an auto-pilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact; rather it's that of the human designers. But I argue (see the right-hand panel of any blog page on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.
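To make the 'designed-in morals' point concrete, here is a toy sketch - my own illustration in Python, not real avionics code and not an example Mark gave. A hypothetical autopilot-style controller clamps requested commands to a safety envelope fixed by its human designers, so the value judgement is baked into the artefact at design time rather than reasoned about by the machine in real time.

```python
from dataclasses import dataclass

@dataclass
class PitchCommand:
    degrees: float  # requested change in pitch (hypothetical units)

def apply_pitch(command: PitchCommand, max_safe_pitch: float = 15.0) -> float:
    """Clamp any requested pitch to a safety envelope chosen by the designers,
    not decided by the machine itself at run time."""
    return max(-max_safe_pitch, min(max_safe_pitch, command.degrees))

print(apply_pitch(PitchCommand(40.0)))  # -> 15.0: the designers' value judgement, not the machine's
```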

The Unpredictable...

Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage". "Trains will never succeed because women were not designed to travel at more than 50 miles per hour".
We are so bad at prediction because we each grow up in our own unique situations and it's very difficult to see the world from outside our own box - when delayed on the M11, don't think you are in a traffic jam: you are the traffic jam! Prediction is difficult partly because technology is changing at an exponential rate. Once it took hundreds of years for a technology (say carpets) to be generally adopted. The internet only took a handful of years.

...But Possible

Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power, will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service'), where all the pains of ownership are taken care of by somebody else.

AI and Robots will take all the boring, cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one also, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems.) Education is broken because it was designed for a 1950s world. It should be detached from politics because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.

Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by the pharmaceutical industry, which now primarily exists to make money. It currently spends twice as much on marketing as it does on research and development. They are marketing companies, not drug companies.

While every company espouses innovation as one of its key values, for the most part it's just a platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.

Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).

We can fix it - yes we can

On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom-up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.

For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.

Some technological solutions include 'Blockchain', which Mark describes as 'double-entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
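As a rough illustration of why that 'proof' can be so cheap, here is a minimal hash-chained ledger sketch - my own toy Python example, not any real blockchain implementation. Each entry is hashed together with the hash of the previous entry, so anyone can detect tampering with history without needing a bank, accountant or solicitor to vouch for the records.

```python
import hashlib
import json

def entry_hash(entry: dict, previous_hash: str) -> str:
    """Hash a ledger entry together with the hash of the previous entry."""
    payload = json.dumps(entry, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class MiniLedger:
    def __init__(self) -> None:
        self.chain = []  # list of (entry, hash) pairs

    def append(self, entry: dict) -> None:
        previous_hash = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((entry, entry_hash(entry, previous_hash)))

    def verify(self) -> bool:
        """Re-compute every hash; any edited entry breaks the whole chain."""
        previous_hash = "genesis"
        for entry, stored in self.chain:
            if entry_hash(entry, previous_hash) != stored:
                return False
            previous_hash = stored
        return True

ledger = MiniLedger()
ledger.append({"from": "alice", "to": "bob", "amount": 10})
ledger.append({"from": "bob", "to": "carol", "amount": 4})
print(ledger.verify())               # -> True
ledger.chain[0][0]["amount"] = 1000  # tamper with history
print(ledger.verify())               # -> False
```

Real blockchains add distributed consensus on top of this hash-chaining, which is what removes the need for a trusted middleman altogether.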

Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.

I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org


How do we embed ethical self-regulation into Artificial Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us as they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the human operating system. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
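To give a flavour of what such an architecture might look like computationally, here is a deliberately minimal sketch - all names and structure are my own invention for illustration, not a published HOS design. The agent forms expectations from its beliefs, flags anomalies when observations contradict them, and re-prioritises its intentions accordingly.

```python
from dataclasses import dataclass, field

@dataclass
class Intention:
    goal: str
    priority: float

@dataclass
class HOSAgent:
    beliefs: dict = field(default_factory=dict)     # learned distinctions -> expected values
    intentions: list = field(default_factory=list)  # current intentions, ordered by priority

    def expect(self, situation: str) -> str:
        """Predict what we expect to observe, based on current beliefs."""
        return self.beliefs.get(situation, "unknown")

    def observe(self, situation: str, observation: str) -> None:
        """Compare an observation with the expectation; on an anomaly,
        revise the belief and raise an intention to make sense of it."""
        if self.expect(situation) != observation:
            self.beliefs[situation] = observation
            self.intentions.append(Intention(goal=f"make sense of {situation}", priority=1.0))
        self.intentions.sort(key=lambda i: i.priority, reverse=True)

    def act(self) -> str:
        """Pursue the highest-priority intention, if any."""
        return self.intentions[0].goal if self.intentions else "idle"

agent = HOSAgent(beliefs={"M11": "clear"})
agent.observe("M11", "traffic jam")  # expectation violated: an anomaly
print(agent.act())                   # -> "make sense of M11"
```

Everything that makes the real problem hard - integrating intuition, emotion, imagination and knowledge from other people, and managing limited attention - is, of course, exactly what a sketch like this leaves out.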
