Tag Archives: Ethics
– Autonomy now

Making sense of a changing world

It’s difficult to make sense of a fast-changing world. But could ‘autonomy’ be at the centre of it? I’ll explain.

There are two main themes – people and technology. Ideas about autonomy are changing for both. For people, it is a matter of their relationship to employment, government and the many institutions of society. For technology, it is the introduction of autonomous intelligence into a wide range of systems, from phone apps to the automated decision-making systems that affect every aspect of our lives. There is also an increasing interdependency between people and technology, one that both empowers and constrains. Questions of autonomy are at the heart of these changes.

There have been times in history when it has not occurred to people that they could be autonomous in the broad scope of their lives. They were born into a time and place where the control of their destiny was not their own concern. They were conditioned to know their place, accept it and stay in it. In first world democracies, autonomy is, perhaps, a luxury of the here and now. It may not necessarily stay that way.


My particular interest is the way in which we are giving autonomy to the things that we create – computer algorithms, artificial intelligence systems and robots. But it’s broader than that. We all want the freedom to pursue our own goals, to self-determine. We are told repeatedly, by an industry concerned with self-development and achieving success, that we should ‘find our authentic self’ and pursue the values and goals that really matter to us.

However, we can only do this within an environment of constraints – physical constraints, resource constraints, psychological constraints and social constraints. It is the dynamic between the individual and their constraints that is in constant flux and that I am trying to examine.

What’s Trending in Autonomy?

There are two main trends – one towards decentralisation and one towards concentrations of wealth and power. This seems something of a contradiction. How can both be true and, if so, where is this going?

There is a long-term trend towards decentralisation. First we rejected the ancient gods as the controllers of nature. Much more recently we have started to question other sources of authority – governments, doctors, the church, the law, the mainstream media and many other institutions in society. As we as individuals have become more informed and empowered by technology, we have started to see the flaws in these ‘authorities’.

I believe, along with many other commentators, that we are heading towards a world where autonomy is not just highly valued but is also more possible than it has ever been. As society becomes better educated, and as technology enables greater information sharing and flexibility, we can, and perhaps inevitably will, move towards a more decentralised society in which both human and artificial autonomous agents increasingly interact with each other. The interactions will become more frequent, more informed, more fine-grained and more local. The technological infrastructure for this continues to roll out at an ever-increasing pace. Soon we will have 5G communications facilitating almost instantaneous communication between ever more intelligent and powerful devices – smartphones, autonomous vehicles, information servers, and a host of smart devices.

On the other hand, there is clear evidence of increasing concentrations of wealth and power. Although estimates vary, it seems that a greater proportion of the world’s wealth is held by fewer and fewer people. Stories abound of fewer than eight people owning more than half the world’s wealth. Economists like Thomas Piketty have documented the evidence for such a trend in detail.

There is clearly a tension between these trends. As power and wealth become more concentrated, manifesting in the form of surveillance capitalism (not ignoring surveillance by the state) and fake news, there is a fight back by individuals and other institutions.

Individuals increasingly recognise the intrusions on their privacy, and this is picked up (often belatedly) in legislation like GDPR and other moves to regulate. The checks and balances do work to modulate the dynamics of the struggle, but when they don’t, the accumulated frustration at the loss of human dignity can turn political and violent. Let’s take a closer look at autonomy.

Why do we need Autonomy?

We each have a biological imperative to survive. While we can count on others to some extent, ‘the buck stops’ with each of us as individuals. The more robust and resilient solutions are self-sufficiency and self-determination. It’s not a fail-safe but it takes out the risk that others might not be there for us all the time. It also appears to be a route to greater wellbeing. Learning, developing competence and mastery, being able to predict and hence increase the possibility that we can control, being less subject to constraints on our actions – all contribute to satisfaction, ultimately in the service of survival.

In the hierarchy of needs, having enough food, shelter, sleep and security releases some of your own resources. It provides the freedom to climb. Somewhere near the top of the hierarchy is what Maslow called self-actualisation – the discovery and expression of your authentic self. But unless you are exceptionally lucky, and find that your circumstances align perfectly with your authentic self, a prerequisite is freedom from the constraints that prevent you from getting there.

Interactions between people and machines

So far this is the human side of autonomy – the bit that applies to us all. But this is a world in which both people and artificial agents – computer algorithms, smart devices, robots and the like – interact with each other. Interactions between people and people, machines and machines, and people and machines are accelerating in both speed and frequency as each autonomous agent pursues its own rapidly changing priorities and goals. There is nothing static or certain in this world. It is changing, ambiguous and unpredictable.

Different autonomous agents have different goals and different value systems that drive them. For people these are different cultures, social norms and roles. For machines they relate to their different functions and circumstances in which they operate. For interactions to work smoothly there needs to be some stability in the protocols that regulate them. Autonomy may be a way into defining accountability and responsibility. It may lead us towards mechanisms for the justification and explanation of action. Neither machines nor people are very good at this, but autonomy may provide the key that unlocks our understanding of effective communication and protocols.

Still, that’s for later. Right now, this article is just focused on the concept of autonomy.
I hope you are convinced that this is an important and interesting subject. It is at the foundation of our relationships with each other and between people and the increasingly autonomous and intelligent agents we are creating.

Questions that need to be addressed


  • What do we mean by autonomy?
  • How do agents (people and machines) differ in the amount of autonomy they have?
  • Can we measure autonomy?
  • What examples of people, societies and artefacts can we think of that might help us understand what is and what is not autonomous?
  • What do we mean by autonomy when we talk about artificial autonomous intelligence systems?
  • Are the computer algorithms and robotic systems that we have today truly autonomous?
  • What would it mean to build an artificial intelligence that was truly autonomous?
  • What is the relationship between autonomy and morality?
  • Can we be truly autonomous if we are constrained by ethical principles and social norms?
  • If we want our intelligent artefacts to behave ethically, then how would we approach the design of those systems?

That’s quite a chunk of questions to get through. But they are all on the critical path to understanding how our own human autonomy and the autonomy that we build into artefacts can relate to and engage with each other. They take us to a point where we can better understand the trade-offs every intelligent agent, be it human or artificial, has to make between the freedom to pursue its own goals and the constraints of living in a society of other intelligent agents.

It also reveals how, in people, the constraints of society are internalised. By adulthood they have become part of our internal control mechanisms. These internal controls carry no absolute morality but reflect the circumstances and society in which we grow up. As our artefacts become increasingly intelligent, we may need to develop similar mechanisms for their socialisation.

Definitions of Autonomy

The following definitions are taken from the glossary of the IEEE publication called ‘Ethically Aligned Design’ (version 1). The glossary has several definitions from different perspectives:

Ordinary language: The ability of a person or artefact to govern itself including formation of intentions, goals, motivations, plans of action, and execution of those plans, with or without the assistance of other persons or systems.

Engineering: “Where an agent acts autonomously, it is not possible to hold any one else responsible for its actions. In so far as the agent’s actions were its own and stemmed from its own ends, others cannot be held responsible for them” (Sparrow 2007, 63).

Government: “we define local [government] autonomy conceptually as a system of local government in which local government units have an important role to play in the economy and the intergovernmental system, have discretion in determining what they will do without undue constraint from higher levels of government, and have the means or capacity to do so” (Wolman et al 2008, 4-5).

Ethics and Philosophy: “Put most simply, to be autonomous is to be one’s own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self” (Christman 2015).

Medical: “Two conditions are ordinarily required before a decision can be regarded as autonomous. The individual has to have the relevant internal capacities for self-government and has to be free from external constraints. In a medical context a decision is ordinarily regarded as autonomous where the individual has the capacity to make the relevant decision, has sufficient information to make the decision and does so voluntarily” (British Medical Association 2016).

More on autonomy later. Sign up to the blog if you want to be notified.

Meanwhile a couple of videos

The first has an interesting take on autonomy: autonomy is not a matter of what you want, but of what you want to want. The more reflective you are about what you want, the more autonomous you are.

Youtube Video, What is Autonomy? (Personal and Political), Carneades.org, December 2018, 6:50 minutes

https://www.youtube.com/watch?v=z0uylpfirfM

The second is from a relatively new Youtube channel called ‘Rebel Wisdom’. It starts with the breakdown of trust in traditional media and moves on to themes of decentralisation.

Youtube Video, The War on Sensemaking, Daniel Schmachtenberger, Rebel Wisdom, August 2019, 1:48:49 hours

https://www.youtube.com/watch?v=7LqaotiGWjQ&t=17s

– Sex Robots

A Brief Summary by Eleanor Hancock


Sex robots have been making the headlines recently. We have been told they have the power to endanger humans or fulfil our every sexual fantasy and desire. Despite the obvious media hype and sensationalism, there are many reasons for us to be concerned about sex robots in society.

Considering the huge impact that sexbots may have in the realms of philosophy, psychology and human intimacy, it is hard to pinpoint the primary ethical dilemmas surrounding the production and adoption of sex robots in society, as well as considering who stands to be affected the most.

This article covers the main social and ethical deliberations that currently surround the use of sex robots and what we might expect in the next decade.

What companies are involved in the design and sale of sex robots?

One of the largest and most well-known retailers of sex dolls and sex robots is Realbotix in San Francisco. They designed and produced ‘Realdolls’ for years, and in 2016 they released their sex robot Harmony, which has a corresponding phone application that allows you to ‘customise’ your robotic companion. Spanish developer Sergi also released Samantha, a life-sized gynoid sexbot that can talk and interact with users. As sex robots become more sophisticated and able to gather intimate and personal data from us, we may have more reason to be concerned about who is designing and manufacturing them – and what they are doing with our sexual data.

What will sex robots look like?

The current state of sex dolls and robots has largely commodified the human body, with the female body appearing to be more popular in the consumer sphere amongst most sex robot and doll retailers. That said, male sex robots appear to be increasing in popularity, and two female journalists have documented their experiences with male sex dolls. Furthermore, there are also instances of lookalike sex dolls that replicate and mimic celebrities. In response, sex robot manufacturers have had to make online statements about their refusal to replicate people without the explicit permission of that person or their estate. The industry is proving hard to regulate, and the issue of copyright in sex robots may become a real ethical and social dilemma for policy makers. However, there are also examples of sex robots and dolls that do not resemble the human form, such as anime- and alien-style dolls.

Will sex robots impact gender boundaries?

Sex robots will always be genderless artifice. However, allowing them into the human sexual arena may allow humans to broaden their sexual fantasies. Sex robots may even be able to represent both genders through customisation and add-on parts. As mentioned previously, the introduction of genderless artifice that does not resemble humans may positively impact human sexual relations by broadening sexual and intimate boundaries.

Who will use sex robots?

Research results on whether people would use sex robots vary considerably, making it difficult to pinpoint who exactly would use one and why. Intensive research into the motivations for using sex robots has highlighted complexities behind the choice that mirror our own human sexual relationships. However, most studies have been consistent on one point: males are reported as more likely than females to have sex with, or purchase, a sex robot.

Can sex robots be used to help those with physical or mental challenges access sexual pleasure?

Sex robots may allow people to practise or receive sexual acts that they are otherwise unable to obtain due to serious disabilities. The ethics of such a practice divide radical feminists, who deny that sex is a human right, from critics who think it could be medically beneficial and therapeutic.

Will sex robots replace human lovers?

There has not been enough empirical research on the effects of sexual relations with robots, nor on the extent to which robots can reciprocate the qualities of a human relationship. However, it is likely that some humans will form genuine sexual and/or intimate relationships with sex robots, which may reduce their desire to pursue human relationships. The YouTube sensation ‘Davecat’ shows how a man and his wife have comfortably incorporated sex dolls into their married life. In a similar episode, Arran Lee Wright displayed his sexbot on British daytime television and was supportive of the use of sexbots between couples.

Will sex robots lead to social isolation and exclusion?

Many academics already warn us about the isolating impact technology has on our real-life relationships. Smartphones and social media have increased our awareness of online and virtual relationships, and some academics believe sex robots signal a sad reflection of humanity. There is a risk that some people may become more isolated as they choose robotic lovers over humans, but there is not enough empirical research to draw a conclusion at this stage.

Will sex robot prostitutes replace human sex workers?

Although there have been examples of robot and doll brothels and rent-a-doll escort agencies, it is difficult to tell whether sex robots will ever replace human sex workers completely. Some believe there are benefits to adopting robots as sex workers, and a 2012 paper suggested that by 2050 the Red Light District in Amsterdam would facilitate only sex robot prostitution. Escort agency and brothel owners have spoken about the reduction in management and time costs that using dolls or robots would deliver. However, sociological research from the sex industry suggests sex robots will have a tough time replacing all sex workers – specifically escorts, who need a wide range of cognitive skills to do their job and successfully navigate a highly saturated and competitive industry.

How could sex robots be dangerous?

At this stage, there is not enough research about sex robots to jump to any conclusions. Nonetheless, most roboticists and ethicists consider how humans interact with and behave towards robots to be a key factor in assessing the dangers of sex robots. The concern is more about how we will treat sex robots than about the dangers they pose to humans.

Is it wrong to hurt a Sex Robot?

Sex robots will allow humans to explore sexual boundaries and avenues that they may not previously have been able to practise with humans. However, this could also mean that people choose to use sex robots to enact violent acts, such as rape and assault. Although some would argue that robots cannot feel, and that violence towards them is therefore less morally corrupt than violence towards humans, the violent act may still have implications through the reinforcement of such behaviours in society. If we enact violence on a machine that looks human, we may still associate our human counterparts with such artifice. Will negative behaviour practised on sex robots become more acceptable to reciprocate on humans? Will the fantasy of violence on robots make it commonplace in wider society? Roboticists and ethicists are concerned about these issues, but there is simply not enough empirical research yet. Kate Darling, though, believes there is already enough reason to consider extending legal protection towards social robots (see footnote).



References

Jason Lee, Sex Robots and the Future of Desire
https://campaignagainstsexrobots.org/about/

Ian Yeoman and Michelle Mars, ‘Robots, men and sex tourism’, Futures, Volume 44, Issue 4, May 2012, pages 365–371
https://www.sciencedirect.com/science/article/pii/S0016328711002850?via%3Dihub

Kate Darling, ‘Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects’, in Robot Law, Calo, Froomkin, Kerr (eds.), Edward Elgar 2016; We Robot Conference 2012, University of Miami
http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf

Attitudes on ‘Sex robots will liberate the next generation of women’
https://www.kialo.com/will-sex-robots-liberate-the-next-generation-of-women-4214?path=4214.0~4214.1

Footnotes

Kate Darling, ‘Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects’, in Robot Law, Calo, Froomkin, Kerr (eds.), Edward Elgar 2016; We Robot Conference 2012, University of Miami

– It’s All Too Creepy

As concern about privacy and use of personal data grows, solutions are starting to emerge.

This week I attended an excellent symposium on ‘The Digital Person’ at Wolfson College Cambridge, organised by HATLAB.

The HATLAB consortium has developed a platform where users can store their personal data securely. They can then license others to use selected parts of it (e.g. for website registration, identity verification or social media) on terms that they, the users, control.

This turns the tables on organisations like Facebook and Google, which have given users little choice about the rights over their own data, or how it might be used or passed on to third parties. GDPR is changing this through regulation. HATLAB promises to change it by giving users full legal rights to their data – an approach that aligns closely with the trend towards decentralisation and the empowerment of individuals. The HATLAB consortium, led by Irene Ng, is doing a brilliant job of teasing out the various issues and finding ways of putting users back in control of their own data.

Highlights

Every talk at this symposium was interesting and informative. Some highlights include:


  • Misinformation and Business Models: Professor Jon Crowcroft
  • Taking back control of Personal Data: Professor Max van Kleek
  • Ethics-Theatre in Machine Learning: Professor John Naughton
  • Stop being creepy: Getting Personalisation and Recommendation right: Irene Ng

There was also some excellent discussion amongst the delegates who were well informed about the issues.

See the Slides

Fortunately I don’t have to go into great detail about these talks because, thanks to the good organisation of the event, the speakers’ slide sets are all available at:

https://www.hat-lab.org/wolfsonhat-symposium-2019

I would highly recommend taking a look at them and supporting the HATLAB project in any way you can.

– AI and Neuroscience Intertwined

Artificial intelligence has learnt a lot from neuroscience. It was the move away from symbolic to neural net (machine learning) approaches that led to the current surge of interest in AI. Neural net approaches have enabled AI systems to do humanlike things such as object recognition and categorisation that had eluded the symbolic approaches.

So it was with great interest that I attended Dr. Tim Kietzmann's talk at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge, UK, earlier this month (March 2019), on what artificial intelligence (AI) and neuroscience can learn from each other.

Tim is a researcher and graduate supervisor at the MRC CBU and investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

Both AI and neuroscience aim to understand information processing and decision making - neuroscience primarily through empirical studies and AI primarily through computational modelling. The talk had symmetry. The first half was 'how can neuroscience benefit from artificial intelligence', and the second half was 'how artificial intelligence benefits from neuroscience'.

Types of AI

It is important to distinguish between 'narrow', 'general' and 'super' AI. Narrow AI is what we have now. In this context, it is the ability of a machine learning algorithm to recognise or classify particular things. This is often something visual like a cat or a face, but it could be a sound (as when an algorithm is used to identify a piece of music or in speech recognition).

General AI is akin to what people have. When or if this will happen is speculative. Ray Kurzweil, Google’s Director of Engineering, predicts 2029 as the date when an AI will pass the Turing test (i.e. a human will not be able to tell the difference between a person and an AI when performing tasks). The singularity (the point when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created), he predicts should happen by about 2045. Super AIs exceed human intelligence. Right now, they only appear in fiction and films.

It is impossible to predict how this will unfold. After all, you could argue that the desktop calculator of several decades ago exceeded human capability in the very narrow domain of performing mathematical calculations. It is possible to imagine many very narrow and deep skills like this becoming fully integrated within an overall control architecture capable of passing results between them. That might look quite different from human intelligence.

One Way or Another

Research in machine learning, a sub-discipline of AI, has given neuroscience researchers pattern recognition techniques that can be used to understand high-dimensional neural data. Moreover, the deep learning algorithms that have been so successful in creating a new range of applications and interest in AI offer an exciting framework for researchers like Tim and his colleagues to advance knowledge of the computational principles at play in the brain. AI allows researchers to test different theories of brain computation and cognitive function by implementing and testing them. 'Today's computational neuroscience needs machine learning techniques from artificial intelligence'.

AI benefits from neuroscience by informing the development of a wide variety of AI applications from care robots to medical diagnosis and self-driving cars. Some principles that commonly apply in human learning (such as building on previous knowledge and unsupervised learning) are not yet integrated into AI systems.

For example, a child can quickly learn to recognise certain types of objects, even those, such as a mythical 'Tufa', that they have never seen before. A machine learning algorithm, by contrast, may require tens of thousands of training instances to perform the same task reliably. AI systems can also be fooled in ways that a person never would be. Adding specially crafted 'noise' to an image of a dog can lead an AI to misclassify it as an ostrich; a person would still see a dog and not make this sort of mistake. Having said that, children do over-generalise from exposure to a small number of instances, and so also make mistakes.
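The 'dog to ostrich' effect can be made concrete with a toy sketch (my own illustration, not from the talk; the weights and input are made up): for a simple linear classifier, nudging every input component by a tiny amount in the worst-case direction is enough to push the score across the decision boundary, even though the input barely changes.

```python
import numpy as np

# A toy linear classifier: class = sign(w.x + b). Weights are random
# stand-ins for "trained" weights, purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = 0.0
x = rng.normal(size=100)        # an input ("image") the model classifies

score = w @ x + b
clean_class = np.sign(score)

# Fast-gradient-sign-style perturbation: move each component by +/-eps in
# the direction that pushes the score towards the other class. Choosing
# eps just above |score| / sum(|w|) guarantees the boundary is crossed.
eps = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - clean_class * eps * np.sign(w)

adv_class = np.sign(w @ x_adv + b)
print(clean_class != adv_class)   # True: the predicted class flips
print(eps)                        # yet no component moved by more than eps
```

The point of the sketch is that the perturbation is bounded by a small eps per component, invisible to a human, while the classification changes completely.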

It could be that the column structures found in the cortex have some parallels to the multi-layered networks used in machine learning and might inform how they are designed. It is also worth noting that the idea of reinforcement learning used to train artificial neural nets, originally came out of behavioural psychology - in particular Pavlov and Skinner. This illustrates the 'intertwined' nature of all these disciplines.
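The borrowed idea of reinforcement can be sketched in a few lines (my own illustration, not from the talk): the Rescorla-Wagner rule from conditioning research is essentially the same delta rule used to train artificial networks – on each trial the value estimate moves a fraction of the way towards the observed reward, driven by the prediction error.

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Track a value estimate across trials with the delta rule:
    V <- V + alpha * (reward - V)."""
    v = v0
    history = []
    for r in rewards:
        v += alpha * (r - v)    # prediction error drives learning
        history.append(v)
    return history

# Pavlovian-style acquisition: a tone repeatedly followed by food (reward = 1).
values = rescorla_wagner([1.0] * 50)
print(round(values[-1], 3))     # -> 0.995: the estimate converges towards 1
```

The same error-driven update, scaled up and backpropagated through many layers, is the core of how artificial neural networks learn.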

The Neuroscience of Ethics

Although this was not covered in the talk, when it comes to ethics, neuroscience may have much to offer AI, especially as we move from narrow AI towards artificial general intelligence (AGI) and beyond. Evidence is growing as to how brain structures, such as the pre-frontal cortex, are involved in inhibiting thought and action. Certain drugs affect neuronal transmission and can disrupt these inhibitory signals. Brain lesions and the effects of strokes can also interfere with moral judgements. The relationship of neurological mechanisms to notions of criminal responsibility may also reveal findings relevant to AI. It seems likely that one day the understanding of the relationship between neuroscience, moral reasoning and the high-level control of behaviour will have an impact on the design of, and architectures for, artificial autonomous intelligent systems (see, for example, Neil Levy, Neuroethics: Challenges for the 21st Century, Cambridge University Press, 2007, or A Neuro-Philosophy of Human Nature: Emotional Amoral Egoism and the Five Motivators of Humankind, April 2019).

Understanding the Brain

The reality of the comparison between human and artificial intelligence comes home when you consider the energy requirements of the human brain and computer processors performing similar tasks. While the brain uses about 15 watts of energy, just a single graphics processing unit requires up to 250 watts.

It has often been said that you cannot understand something until you can build it. That provides a benchmark against which we can measure our understanding of neuroscience. Building machines that perform as well as humans is a necessary step in that understanding, although that still does not imply that the mechanisms are the same.

Read more on this subject in an article from Stanford University. Find out more about Tim's work on his website at http://www.timkietzmann.de or follow him on Twitter (@TimKietzmann).


– It’s All Broken, but we can fix it

Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it’s all broken, but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and robot ethics.

Moral Machines?

This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was the idea that we will be saved by artificial intelligence and robots. His argument: AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with this first premise (although I agree with pretty much everything else). Even assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an autopilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request, and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact; rather, it is that of the human designers. But I argue (see the right-hand panel of the blog pages on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.

The Unpredictable...

Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage." "Trains will never succeed because women were not designed to travel at more than 50 miles per hour."
We are so bad at prediction because we each grow up in our own unique situation, and it is very difficult to see the world from outside our own box - when delayed on the M11, don't think you are in a traffic jam; you are the traffic jam! Prediction is also difficult because technology is changing at an exponential rate. Once, it took hundreds of years for a technology (say carpets) to be generally adopted. The internet took only a handful of years.

...But Possible

Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power, will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service'), where all the pains of ownership are taken care of by somebody else.

AI and robots will take the boring, cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one too, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems.) Education is broken because it was designed for a 1950s world. It should be detached from politics, because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.

Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by a pharmaceutical industry that now exists primarily to make money. It spends twice as much on marketing as it does on research and development. These are marketing companies, not drug companies.

While every company espouses innovation as one of its key values, for the most part it's just platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.

Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).

We can fix it - yes we can

On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.

For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.

Some technological solutions include 'blockchain', which Mark describes as 'double entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
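The 'bookkeeping on steroids' idea can be sketched in a few lines (a minimal toy of my own, not any real blockchain implementation): each block commits to the hash of its predecessor, so tampering with any earlier record breaks every link after it, and anyone can verify the whole ledger cheaply.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block commits to its payload and to the previous block's hash."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    """Recompute every hash; any edit to an earlier block breaks the links."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"data": block["data"], "prev": block["prev"]},
                          sort_keys=True)
        if (block["prev"] != prev or
                block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = block["hash"]
    return True

# Build a three-entry ledger, then tamper with the middle record.
chain = []
prev = "0" * 64
for entry in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
    block = make_block(entry, prev)
    chain.append(block)
    prev = block["hash"]

print(valid(chain))            # True: the untampered ledger verifies
chain[1]["data"] = "bob pays carol 200"
print(valid(chain))            # False: the edit breaks the hash chain
```

Real blockchains add consensus, proof-of-work and signatures on top, but the tamper-evidence shown here is the core of the 'proof at minuscule cost' claim.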

Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.

I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org
