Tag Archives: Capitalism
– Autonomy now

Making sense of a changing world

It’s difficult to make sense of a fast-changing world. But could ‘autonomy’ be at the centre of it? I’ll explain.

There are two main themes – people and technology. Ideas about autonomy are changing for both. For people, it is a matter of their relationship to employment, government and the many institutions of society. For technology, it is the introduction of autonomous intelligence in a wide range of systems including phone apps and all manner of automated decision making systems that affect every aspect of our lives. There is also the increasing inter-dependency between people and technology, both empowering and constraining. Questions of autonomy are at the heart of these changes.

There have been times in history when it has not occurred to people that they could be autonomous in the broad scope of their lives. They were born into a time and place where the control of their destiny was not their own concern. They were conditioned to know their place, accept it and stay in it. In first world democracies, autonomy is, perhaps, a luxury of the here and now. It may not necessarily stay that way.


My particular interest is the way in which we are giving autonomy to the things that we create – computer algorithms, artificial intelligence systems and robots. But it’s broader than that. We all want the freedom to pursue our own goals, to self-determine. We are told repeatedly, by an industry devoted to self-development and achieving success, that we should ‘find our authentic self’ and pursue the values and goals that really matter to us.

However, we can only do this within an environment of constraints – physical constraints, resource constraints, psychological constraints and social constraints. It is the dynamic between the individual and their constraints that is in constant flux and that I am trying to examine.

What’s Trending in Autonomy?

There are two main trends – one towards decentralisation and one towards concentrations of wealth and power. This seems something of a contradiction. How can both be true and, if so, where is this going?

There is a long-term trend towards decentralisation. First we rejected the ancient gods as the controllers of nature. Much more recently we have started to question other sources of authority – governments, doctors, the church, the law, the mainstream media and many other institutions of society. As individuals, better informed and empowered by technology, we have started to see the flaws in these ‘authorities’.

I believe, along with many other commentators, that we are heading towards a world where autonomy is not just highly valued but is also more possible than it has ever been. As society becomes better educated, and as technology enables greater information sharing and flexibility, we can, and perhaps inevitably will, move towards a more decentralised society in which human and artificial autonomous agents increasingly interact with each other. The interactions will become more frequent, more informed, more fine-grained and more local. The technological infrastructure for this continues to roll out at an ever-increasing pace. Soon 5G will facilitate almost instantaneous communication between ever more intelligent and powerful devices – smartphones, autonomous vehicles, information servers and a host of smart devices.

On the other hand, there is clear evidence of increased concentrations of wealth and power. Although estimates vary, a growing proportion of the world’s wealth is held by fewer and fewer people. Stories abound of fewer than eight people owning more than half the world’s wealth. Economists like Thomas Piketty have documented the evidence for this trend in detail.

There is clearly a tension between these trends. As power and wealth become more concentrated, manifesting in the form of surveillance capitalism (not ignoring surveillance by the state) and fake news, there is a fight back by individuals and other institutions.

Individuals increasingly recognise the intrusions on their privacy, and this is picked up (often belatedly) in legislation like GDPR and other moves to regulate. When the checks and balances work, they modulate the dynamics of the struggle; when they don’t, the accumulated frustration at the loss of human dignity can turn political and violent. So let’s take a closer look at autonomy.

Why do we need Autonomy?

We each have a biological imperative to survive. While we can count on others to some extent, ‘the buck stops’ with each of us as individuals. The more robust and resilient solutions are self-sufficiency and self-determination. It’s not a fail-safe but it takes out the risk that others might not be there for us all the time. It also appears to be a route to greater wellbeing. Learning, developing competence and mastery, being able to predict and hence increase the possibility that we can control, being less subject to constraints on our actions – all contribute to satisfaction, ultimately in the service of survival.

In the hierarchy of needs, having enough food, shelter, sleep and security releases some of your own resources. It provides the freedom to climb. Somewhere near the top of the hierarchy is what Maslow called self-actualisation – the discovery and expression of your authentic self. But unless you are exceptionally lucky, and find that your circumstances align perfectly with your authentic self, then a pre-requisite is to have freedom from the constraints that prevent you from getting there.

Interactions between people and machines

So far this has been the human side of autonomy – the part that applies to us all. But we now live in a world in which both people and artificial agents – computer algorithms, smart devices, robots and so on – interact with each other. Interactions between people and people, machines and machines, and people and machines are accelerating in both speed and frequency as each autonomous agent pursues its own rapidly changing priorities and goals. There is nothing static or certain in this world. It is changing, ambiguous and unpredictable.

Different autonomous agents have different goals and different value systems that drive them. For people these are different cultures, social norms and roles. For machines they relate to their different functions and circumstances in which they operate. For interactions to work smoothly there needs to be some stability in the protocols that regulate them. Autonomy may be a way into defining accountability and responsibility. It may lead us towards mechanisms for the justification and explanation of action. Neither machines nor people are very good at this, but autonomy may provide the key that unlocks our understanding of effective communication and protocols.
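
To make the idea of stable protocols slightly more concrete, here is a minimal sketch in Python. Everything in it – the message fields, the agent names – is invented for illustration; it is not any real standard, just one picture of how a protocol might carry goals and justifications so that actions can later be explained and accounted for.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ActionMessage:
    """One agent telling another what it intends to do, and why.

    All fields are invented for illustration; the point is that a stable
    protocol could carry goals and justifications, making accountability
    and explanation possible after the fact.
    """
    sender: str         # identity of the acting agent
    action: str         # what the agent intends to do
    goal: str           # the goal the action serves
    justification: str  # why the agent believes the action is acceptable
    timestamp: float = field(default_factory=time.time)

def explain(log: list, sender: str) -> list:
    """Reconstruct an agent's own account of itself from the message log."""
    return [f"{m.action}: pursued '{m.goal}' because {m.justification}"
            for m in log if m.sender == sender]

# Example: a (hypothetical) delivery drone announcing a re-route.
log = [ActionMessage("drone-7", "re-route via the park",
                     "deliver parcel by 17:00",
                     "main road closed; detour adds two minutes")]
print(explain(log, "drone-7"))
```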

Still, that’s for later. Right now, this article is just focused on the concept of autonomy.
I hope you are convinced that this is an important and interesting subject. It is at the foundation of our relationships with each other and between people and the increasingly autonomous and intelligent agents we are creating.

Questions that need to be addressed


  • What do we mean by autonomy?
  • How do agents (people and machines) differ in the amount of autonomy they have?
  • Can we measure autonomy? (a toy sketch follows this list)
  • What examples of people, societies and artefacts can we think of that might help us understand what is and what is not autonomous?
  • What do we mean by autonomy when we talk about artificial autonomous intelligence systems?
  • Are the computer algorithms and robotic systems that we have today truly autonomous?
  • What would it mean to build an artificial intelligence that was truly autonomous?
  • What is the relationship between autonomy and morality?
  • Can we be truly autonomous if we are constrained by ethical principles and social norms?
  • If we want our intelligent artefacts to behave ethically, then how would we approach the design of those systems?
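
Most of these questions need whole articles. But the measurement question can at least be given a toy illustration now. The sketch below is deliberately crude – the two dimensions and their equal weighting are my own inventions, not a serious metric – and simply scores an agent by how much of its decision-making is its own:

```python
def autonomy_score(decisions: list) -> float:
    """A deliberately crude autonomy measure (illustration only).

    Each decision is scored on two invented dimensions:
      'chose_action' - did the agent select the action itself?
      'set_goal'     - did the agent also set the goal the action serves?
    """
    if not decisions:
        return 0.0
    earned = sum(d["chose_action"] + d["set_goal"] for d in decisions)
    return earned / (2 * len(decisions))  # 2 = maximum credit per decision

# A thermostat acts by itself, but towards a goal someone else set:
thermostat = [{"chose_action": True, "set_goal": False}]
# A person choosing both ends and means:
person = [{"chose_action": True, "set_goal": True}]
print(autonomy_score(thermostat), autonomy_score(person))  # 0.5 1.0
```

On this toy measure a thermostat, which selects its own actions but never its own goals, scores half as well as an agent that chooses both its ends and its means.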

That’s quite a chunk of questions to get through. But they are all on the critical path to understanding how our own human autonomy and the autonomy that we build into artefacts, can relate to and engage with each other. They take us to a point where we can better understand the trade-offs every intelligent agent, be it human or artificial, has to make between the freedom to pursue its own goals and the constraints of living in a society of other intelligent agents.

It also reveals how, in people, the constraints of society are internalised. By adulthood they have become part of our internal control mechanisms. These internal controls have no absolute morality but reflect the circumstances and society in which we grow up. As our artefacts become increasingly intelligent, we may need to develop similar mechanisms for their socialisation.

Definitions of Autonomy

The following definitions are taken from the glossary of the IEEE publication called ‘Ethically Aligned Design’ (version 1). The glossary has several definitions from different perspectives:

Ordinary language: The ability of a person or artefact to govern itself including formation of intentions, goals, motivations, plans of action, and execution of those plans, with or without the assistance of other persons or systems.

Engineering: “Where an agent acts autonomously, it is not possible to hold any one else responsible for its actions. In so far as the agent’s actions were its own and stemmed from its own ends, others cannot be held responsible for them” (Sparrow 2007, 63).

Government: “we define local [government] autonomy conceptually as a system of local government in which local government units have an important role to play in the economy and the intergovernmental system, have discretion in determining what they will do without undue constraint from higher levels of government, and have the means or capacity to do so” (Wolman et al 2008, 4-5).

Ethics and Philosophy: “Put most simply, to be autonomous is to be one’s own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self” (Christman 2015).

Medical: “Two conditions are ordinarily required before a decision can be regarded as autonomous. The individual has to have the relevant internal capacities for self-government and has to be free from external constraints. In a medical context a decision is ordinarily regarded as autonomous where the individual has the capacity to make the relevant decision, has sufficient information to make the decision and does so voluntarily” (British Medical Association 2016).

More on autonomy later. Sign up to the blog if you want to be notified.

Meanwhile, here are a couple of videos.

The first has an interesting take on autonomy: autonomy is not a matter of what you want, but of what you want to want. The more reflective you are about what you want, the more autonomous you are.

YouTube Video, What is Autonomy? (Personal and Political), Carneades.org, December 2018, 6:50 minutes

https://www.youtube.com/watch?v=z0uylpfirfM

The second is from a relatively new YouTube channel called ‘Rebel Wisdom’. It starts with the breakdown of trust in traditional media and moves on to themes of decentralisation.

YouTube Video, The War on Sensemaking, Daniel Schmachtenberger, Rebel Wisdom, August 2019, 1:48:49 hours

https://www.youtube.com/watch?v=7LqaotiGWjQ&t=17s

– It's All Broken, but we can fix it

Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it's all broken but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'Futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and Robot Ethics.

Moral Machines?

This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was that we would be saved by Artificial Intelligence and Robots. His argument - AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with the first premise (although I agree with pretty much everything else). Even assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an auto-pilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request, and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact; rather it's that of the human designers. But I argue (see the right-hand panel of the blog pages on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.
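
To illustrate what 'designed-in' morals might look like computationally, here is a toy sketch (my own illustration, not anyone's actual safety architecture; the constraint and action names are invented). The designers' ethics act as a hard veto over whatever the system's optimiser would otherwise prefer:

```python
# A toy picture of 'designed-in' morality: the designers' constraints veto
# candidate actions before the system's optimiser chooses among them.
# Constraint and action names are invented for illustration.

HARD_CONSTRAINTS = [
    lambda a: not a.get("endangers_humans", False),
    lambda a: a.get("within_authorised_limits", True),
]

def permissible(action: dict) -> bool:
    """An action survives only if every designed-in constraint allows it."""
    return all(check(action) for check in HARD_CONSTRAINTS)

def choose(candidates: list):
    """Pick the highest-scoring action among those the constraints permit."""
    allowed = [a for a in candidates if permissible(a)]
    return max(allowed, key=lambda a: a["score"]) if allowed else None

candidates = [
    {"name": "steep descent", "score": 0.9, "endangers_humans": True},
    {"name": "gentle descent", "score": 0.7},
]
print(choose(candidates))  # the riskier option is vetoed despite its higher score
```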

The Unpredictable...

Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage". "Trains will never succeed because women were not designed to travel at more than 50 miles per hour".
We are so bad at prediction because we each grow up in our own unique situation and it's very difficult to see the world from outside our own box - when delayed on the M11, don't think you are in a traffic jam; you are the traffic jam! Prediction is also difficult because technology is changing at an exponential rate. Once it took hundreds of years for a technology (say carpets) to be generally adopted. The internet only took a handful of years.

...But Possible

Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power, will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service'), where all the pains of ownership are taken care of by somebody else.
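
The economics behind that prediction are easy to sketch. Every figure below is a made-up round number (none of them come from the talk), but the shape of the argument is clear: a pooled car spreads its fixed costs over vastly more trips than a privately owned one.

```python
# Toy break-even comparison: owning a car vs hiring from a shared pool.
# Every figure is a made-up round number, not data from the talk.

annual_ownership = 3000 + 1200 + 800  # depreciation + insurance + maintenance (GBP)
trips_per_year = 600                  # roughly a dozen trips a week
cost_per_owned_trip = annual_ownership / trips_per_year

pool_car_annual_cost = 8000           # higher-spec vehicle, but shared
pool_trips_per_year = 15000           # in service all day, many users
cost_per_pooled_trip = pool_car_annual_cost / pool_trips_per_year

print(f"own:  {cost_per_owned_trip:.2f} per trip")   # ~8.33
print(f"pool: {cost_per_pooled_trip:.2f} per trip")  # ~0.53
```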

AI and Robots will take all the boring, cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one too, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems.) Education is broken because it was designed for a 1950s world. It should be detached from politics, because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.

Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by a pharmaceutical industry that now primarily exists to make money. It currently spends twice as much on marketing as it does on research and development. These are marketing companies, not drug companies.

While every company espouses innovation as one of its key values, for the most part that's just a platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.

Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).

We can fix it - yes we can

On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.

For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.

Some technological solutions include 'Blockchain', which Mark describes as 'double-entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
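
'Double-entry bookkeeping on steroids' can be made concrete in a few lines. The sketch below shows only the core trick - each record commits to its predecessor via a hash, so tampering anywhere is detectable without a trusted intermediary - and leaves out everything (consensus, signatures, distribution) that makes a real blockchain work at scale.

```python
import hashlib, json

def _digest(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain: list, record: dict) -> None:
    """Append a record that commits to the previous block via its hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": _digest(record, prev_hash)})

def verify(chain: list) -> bool:
    """Recompute every hash; tampering anywhere breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash or \
           block["hash"] != _digest(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
add_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))                 # True
ledger[0]["record"]["amount"] = 1000  # tamper with history...
print(verify(ledger))                 # ...and verification fails
```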

Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.

I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org

Mark Stevenson

– A Changing World: so, what’s to worry about?

A World that can change – before your eyes!

I’ve been to a couple of good talks in Cambridge (UK) this week. First, futurist Sophie Hackford (formerly of Singularity University and Wired magazine) gave a fast-paced talk about a wide range of technologies that are shaping the future. If you don’t know about swarms of drones, low-orbit satellite monitoring, neural implants, face recognition for payments, high-speed trains and rocket transportation, then you need to, fast. I haven’t found a video of this very recent talk yet, but the one below from a year ago gives a pretty good indication of why we need to think through the ethical issues.

YouTube Video, Tech Round-up of 2017 | Sophie Hackford | CTW 2017, January 2018, 26:36 minutes

The Age of Surveillance Capitalism

The second talk is, in some ways, even scarier. We are already aware that the likes of Google, Facebook and Amazon are closely watching our every move (and hearing our every breath). And now almost every other company that is afraid of being left behind is doing the same thing. But what data are they collecting, and how are they using it? They use the data to predict our behaviour and sell it on the behavioural futures market. And it’s not just our online behaviour: they are also influencing us in the real world. For example, Pokémon Go was apparently an experiment originally dreamed up by Google to see if retailers would pay to host ‘monsters’ to increase footfall past their stores. The talk, by Shoshana Zuboff, was at the Cambridge University Law Faculty. Here is an interview she did on radio the same day.

BBC Radio 4, Start the Week, Who is Watching You?, Monday 4th February 2019, 42:00 minutes
https://www.bbc.co.uk/programmes/m0002b8l