

IEEE Consultation on Ethically Aligned Design

A Response Submitted for robotethics.co.uk

A summary of the IEEE document Ethically Aligned Design (Version 2) can be found below. Responses to this document were invited by 7th May 2018.


Response to Ethically Aligned Design Version 2 (EADv2)
Rod Rivers, Socio-Technical Systems, Cambridge, UK
March 2018 (rod.rivers@ieee.org)

I take a perspective from philosophy, phenomenology and psychology, and attempt to bring insights from these disciplines into EAD.

Social Sciences: EADv2 would benefit from more input from the social sciences. Many of the concepts discussed (e.g. norms, rights, obligations, wellbeing, values, affect, responsibility) have been extensively investigated and analysed within the social sciences (psychology, social psychology, sociology, anthropology, economics etc.). This knowledge could be more fully integrated into EAD. For example, the glossary does not cover ‘development’ in the sense of ‘child development’ or ‘moral development’.

Human Operating System: The first sentence in EADv2 establishes a perspective looking forward from the present, as use and impact of A/ISs ‘become pervasive’. An additional tack would be to look in more depth at human capability and human ethical self-regulation, and then ‘work backwards’ to fill the gap between current artificial A/IS capability and that of people. I refer to this as the ‘Human Operating System’ (HOS) approach, and suggest that EAD makes explicit, and endorses, exploration of the HOS approach to better appreciate the complexity (and deficiencies) of human cognitive, emotional, physiological and behavioural functions.

Phenomenology: A/ISs can be distinguished from other artefacts because they have the potential to reflect and reason, not just on their own computational processes, but also on the behaviours, and cognitive processes of people. This is what psychologists refer to as ‘theory of mind’ – the capability to reason and speculate on the states of knowledge and intentions of others. Theory of mind can be addressed using a phenomenological approach that attempts to describe, understand and explain from the fully integrated subjective perspective of the agent. Traditional engineering and scientific approaches tend to objectify, separate out elements into component parts, and understand parts in isolation before addressing their integration. I suggest that EAD includes and endorses exploration of a phenomenological approach to complement the engineering approach.
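As a minimal sketch of what machine-readable ‘theory of mind’ might involve, the Python fragment below represents nested mental states: one agent’s belief can take another agent’s intention as its content. All names and structures here are illustrative assumptions, not anything proposed in EADv2.

```python
# A minimal sketch (all names hypothetical) of nested mental states:
# an agent's belief can take another agent's mental state as content.
from dataclasses import dataclass
from typing import Union

@dataclass
class Intention:
    agent: str    # who holds the intention
    goal: str     # e.g. "fetch the keys"

@dataclass
class Belief:
    agent: str                                 # who holds the belief
    content: Union[str, "Belief", Intention]   # a proposition or another mental state

# "The robot believes that Alice intends to fetch the keys."
nested = Belief(agent="robot",
                content=Intention(agent="Alice", goal="fetch the keys"))
```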

Ontology, epistemology and belief: EADv2 includes the statement “We can assume that lying and deception will be prohibited actions in many contexts” (EADv2 p.45). This example may indicate the danger of slipping into an absolutist approach to the concept of ‘truth’. For example, it is easy to assume that there is only one truth and that the sensory representations, data and results of information processing by an A/IS necessarily constitute an objective ‘truth’. Post-modern constructivist thinking sees ‘truth’ as an attribute of the agent (albeit constrained by an objective reality) rather than as an attribute of states of the world. The validity of a proposition is often re-defined in real time as the intentions of agents change. It is important to establish some clarity over these types of epistemological issues, not least in the realm of ethical judgements. I suggest that EAD note and encourage greater consideration of these epistemological issues.
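As a toy illustration of this agent-relative view of truth (everything here is hypothetical), each agent below assigns its own revisable validity to the same proposition, rather than the proposition carrying a single objective truth value:

```python
# Truth as an attribute of the agent, not of the world: each agent
# holds its own valuation of a proposition, revisable as intentions
# and evidence change. Names and values are illustrative only.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.valuations: dict[str, float] = {}   # proposition -> degree of belief

    def assess(self, proposition: str, degree: float) -> None:
        self.valuations[proposition] = degree    # constrained by, not equal to, reality

alice, robot = Agent("Alice"), Agent("robot")
alice.assess("the path is safe", 0.9)
robot.assess("the path is safe", 0.4)            # same world, different 'truth'
```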

Embodiment, empathy and vulnerability: It has been argued that ethical judgements are rooted in physiological states (e.g. emotional reactions to events), empathy and the experience of vulnerability (i.e. exposure to pain and suffering). EADv2 does not currently explicitly set out how ethical judgements can be made by an A/IS in the absence of these human subjective states. Although EAD mentions emotions and affective computing (and an affective computing committee) this is almost always in relation to human emotions. The more philosophical question of judgement without physical embodiment, physiological states, emotions, and a subjective understanding of vulnerability is not addressed.

Terminology / Language / Glossary: In considering ethics we are moving from an amoral, mechanistic understanding of cause and effect to value-laden, intention-driven notions of causality. This requires the inclusion of more mentalistic terminology. The glossary should reflect this and could form the basis of a language for the expression of ideas that transcends both artificial and human intelligent systems (i.e. that is substrate independent). In a fuller response, I discuss terms already used in EADv2 (e.g. autonomous, intelligent, system, ethics, intention formation, independent reasoning, learning, decision-making, principles, norms etc.), and terms that are either not used or might be elaborated (e.g. umwelt, ontology, epistemology, similarity, truth-value, belief, decision, intention, justification, mind, power, the will).



IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems

For Public Discussion – By 7th May 2018 (consultation now closed)

Version 2 of this report is available by registering at:
http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.

Version 2 presents the following principles/recommendations:

Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and to provide traceability, contributing to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.

Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.) should be developed to help create norms (which can mature into best practices and laws) where norms do not yet exist because A/IS-oriented technologies and their impacts are too new.
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including the following (a data-structure sketch follows the list):

• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function
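By way of illustration only, a registration record covering the parameters above might be sketched as follows; the field names, example system, and registry structure are my assumptions, not part of the IEEE recommendation.

```python
# A hypothetical registration record for an A/IS, covering the listed
# parameters. Field names and the example entry are illustrative.
from dataclasses import dataclass

@dataclass
class AISRegistration:
    responsible_party: str        # who is legally responsible
    intended_use: str
    training_data: str            # or training environment, if applicable
    sensors: list[str]            # real-world data sources
    algorithms: list[str]
    process_graphs: list[str]
    model_features: list[str]     # at various levels
    user_interfaces: list[str]
    actuators: list[str]          # outputs
    optimization_goal: str        # loss function / reward function

registry: dict[str, AISRegistration] = {}   # keyed by system identifier

registry["delivery-drone-0042"] = AISRegistration(
    responsible_party="Acme Robotics Ltd (hypothetical)",
    intended_use="parcel delivery in urban areas",
    training_data="simulated flight environment",
    sensors=["GPS", "lidar", "camera"],
    algorithms=["path planning", "obstacle avoidance"],
    process_graphs=["perception -> planning -> actuation"],
    model_features=["terrain map", "no-fly zones"],
    user_interfaces=["operator dashboard"],
    actuators=["rotors", "package release"],
    optimization_goal="minimise delivery time subject to safety constraints",
)
```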

Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, December 2017, 136 pages. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html



How do we embed ethical self-regulation into Artificial Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us when they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.
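The following schematic sketch shows one way this reconciliation loop might be expressed; the single ‘trust the newer report’ strategy stands in, very crudely, for the full repertoire of mechanisms (hypothesis testing, reasoning, intuition, emotion, denial) described above.

```python
# A crude sketch of reconciling contradictory evidence with default
# interpretations. All names and the resolution strategy are toy
# stand-ins for the mechanisms described in the text.
from typing import Callable

Proposition = str

def reconcile(beliefs: dict[Proposition, bool],
              reports: list[tuple[str, Proposition, bool]],
              matters: Callable[[Proposition], bool]) -> dict[Proposition, bool]:
    for source, proposition, claim in reports:
        if beliefs.get(proposition) == claim:
            continue               # no contradiction with the default interpretation
        if not matters(proposition):
            continue               # ignore/deny: the contradiction doesn't matter
        # Stand-in for hypothesis testing, reasoning, intuition, emotion:
        # simply trust the newer report and revise the default.
        beliefs[proposition] = claim
    return beliefs

beliefs = {"path is clear": True}
reports = [("camera", "path is clear", False)]   # sensory evidence conflicts
beliefs = reconcile(beliefs, reports, matters=lambda p: "path" in p)
print(beliefs)   # {'path is clear': False}
```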

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain, to ourselves and to others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the human operating system. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
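As a closing caricature, and under many simplifying assumptions, these functions might be arranged into a single agent loop along the following lines; every component is a hypothetical placeholder, not a design claim.

```python
# A caricature of the 'Human Operating System' loop: expectations,
# anomaly detection, intention management, and a crude world model.
# Every component is a hypothetical placeholder.
import random

class HOSAgent:
    def __init__(self) -> None:
        self.intentions = ["pursue current goal"]   # prioritised intentions
        self.expectation = 0.5                      # expected next observation

    def sense(self) -> float:
        return random.random()                      # stands in for perception

    def step(self) -> str:
        observation = self.sense()
        surprise = abs(observation - self.expectation)
        if surprise > 0.3:                          # anomaly: expectation vs interpretation
            self.intentions.insert(0, "explain anomaly")   # attention is captured
        self.expectation += 0.2 * (observation - self.expectation)  # update the model
        current = self.intentions[0]                # act on the top-priority intention
        if current == "explain anomaly":
            self.intentions.pop(0)                  # assume one step resolves it (toy)
        return current

agent = HOSAgent()
for _ in range(5):
    print(agent.step())
```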
