
AI: Measures, Maps and Taxonomies

Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s Science Festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.

José opened by pointing out that artificial intelligence is not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and in how little training it needs to learn new concepts.

José Hernández-Orallo

José’s main message was that, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which to compare artificial and human intelligence or to determine the pace of progress in artificial intelligence. We have no maps that enable us to navigate the space of artificial intelligence offerings (for example, which offerings might be ethical and which might be potentially harmful). And lastly, we have no taxonomies with which to classify approaches to, or examples of, artificial intelligence.

Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or, more generally, reinforcement learning), there is no overall, widely used classification scheme.

Intelligence not included

My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See, for example, Wikipedia or the British Psychological Society’s list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time they might score at superhuman levels with respect to some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity, the point at which artificial intelligence might overtake human intelligence.
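One purely illustrative way to picture this comparison is as a capability profile: score a system on broad, psychometric-style dimensions as well as narrow task benchmarks and compare it with a notional human baseline. In the Python sketch below, every dimension and every score is invented for the example; none is taken from a real test.

```python
# Illustrative only: the dimensions and scores below are invented to show the
# idea of a capability profile; they are not from any real psychometric test.
from typing import Dict

def profile_gap(human: Dict[str, float], system: Dict[str, float]) -> Dict[str, float]:
    """System score minus a notional human baseline for each shared dimension."""
    return {dim: system[dim] - human[dim] for dim in human if dim in system}

human_baseline = {
    "verbal_reasoning": 100, "social_intelligence": 100,
    "emotional_intelligence": 100, "go_playing": 100, "face_matching": 100,
}

# A hypothetical narrow AI: superhuman on narrow tasks, far below on broad ones.
narrow_ai = {
    "verbal_reasoning": 20, "social_intelligence": 5,
    "emotional_intelligence": 5, "go_playing": 300, "face_matching": 180,
}

for dimension, gap in profile_gap(human_baseline, narrow_ai).items():
    print(f"{dimension:>22}: {gap:+.0f}")
```

Even such a toy profile makes the asymmetry visible at a glance: a large positive gap on one or two narrow dimensions and large negative gaps everywhere else.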

Another take on this would be to look at skills. Interestingly, systems like Amazon's Alexa describe the applications or modules that developers offer as 'skills'. So, for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to perform some task effectively. However, by any standard, the skill offered by a typical Alexa 'skill', Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production, side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
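To show what a skills-based map or taxonomy might look like in practice, here is a minimal, hypothetical sketch: a registry that classifies offerings by the skills they expose and the domains those skills touch. The class names and example entries are my own inventions for illustration, not any vendor's API.

```python
# Hypothetical sketch of a skills-based registry for classifying AI offerings;
# the classes and example entries are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Skill:
    name: str                                  # e.g. "book_hotel"
    domain: str                                # e.g. "travel"
    capabilities: List[str] = field(default_factory=list)  # e.g. ["speech_recognition"]

@dataclass
class Offering:
    name: str
    skills: List[Skill] = field(default_factory=list)

def taxonomy_by_domain(offerings: List[Offering]) -> Dict[str, List[str]]:
    """Group offering:skill pairs under the domains their skills cover."""
    index: Dict[str, List[str]] = {}
    for offering in offerings:
        for skill in offering.skills:
            index.setdefault(skill.domain, []).append(f"{offering.name}:{skill.name}")
    return index

assistant = Offering("voice_assistant", [
    Skill("book_hotel", "travel", ["speech_recognition", "slot_filling"]),
    Skill("play_jazz", "music", ["speech_recognition", "catalogue_search"]),
])
print(taxonomy_by_domain([assistant]))
```

A registry of this kind says nothing about how intelligent each skill is, but it would at least give us a map of where offerings sit and a vocabulary for comparing them.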

When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.

But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.



How do we embed ethical self-regulation into Artificially Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us as they conflict with our default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.
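An artificial system attempting anything comparable would at least need to notice when evidence from different sources conflicts, and to decide whether the conflict matters to what it is currently trying to do. The toy Python sketch below is my own illustration of that idea, built on assumptions of mine rather than anything proposed here.

```python
# Toy illustration (own assumptions): flag propositions that different evidence
# sources assert with opposite truth values, and escalate only those that
# touch a current intention.
from typing import List, Tuple

Evidence = Tuple[str, str, bool]   # (source, proposition, asserted truth value)

def contradictions(evidence: List[Evidence]) -> List[str]:
    """Return propositions asserted both true and false across sources."""
    first_seen: dict = {}
    conflicting = set()
    for source, proposition, value in evidence:
        if proposition in first_seen and first_seen[proposition] != value:
            conflicting.add(proposition)
        first_seen.setdefault(proposition, value)
    return sorted(conflicting)

def matters(proposition: str, current_intentions: List[str]) -> bool:
    """A contradiction 'matters' here only if it mentions a current intention."""
    return any(keyword in proposition for keyword in current_intentions)

evidence = [
    ("vision",     "door_is_open", True),
    ("map_memory", "door_is_open", False),
    ("hearsay",    "it_will_rain", True),
]
for proposition in contradictions(evidence):
    action = "resolve" if matters(proposition, ["door"]) else "ignore for now"
    print(f"{proposition}: {action}")
```

The interesting design question is exactly the one raised above: which mechanism (hypothesis testing, reasoning, deferring to another source, or simply ignoring) the system should use once a contradiction is judged to matter.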

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the human operating system. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
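As a very rough, hedged sketch of what a control loop built around these ideas might look like, the Python below strings together expectation forming, anomaly detection and intention re-prioritisation. Every name and the control flow itself are assumptions made purely for the example; it is not a specification of the human operating system.

```python
# Rough sketch (assumed design, for illustration only) of an agent loop that
# forms expectations, detects anomalies and re-prioritises intentions.
import random
from typing import List

class SketchAgent:
    def __init__(self, intentions: List[str]):
        self.intentions = intentions        # prioritised list; head = current focus
        self.expectation = ""               # what we expect to observe next

    def form_expectation(self) -> None:
        # Placeholder model: expect the world to show progress on the current intention.
        self.expectation = f"progress_on_{self.intentions[0]}"

    def observe(self) -> str:
        # Stand-in for perception; random here purely for demonstration.
        return random.choice([self.expectation, "unexpected_obstacle"])

    def step(self) -> None:
        if not self.intentions:
            self.intentions.append("idle")  # never run without some intention
        self.form_expectation()
        observation = self.observe()
        if observation != self.expectation:
            # Anomaly: attend to it by pushing an investigative intention to the front.
            self.intentions.insert(0, f"investigate_{observation}")
        else:
            self.intentions.pop(0)          # current intention satisfied for this step

agent = SketchAgent(["book_hotel", "plan_route"])
for _ in range(3):
    agent.step()
    print(agent.intentions)
```

Even this caricature shows where the hard problems sit: the expectation model, the decision about which anomalies deserve attention, and the policy for re-ordering intentions are exactly the pieces that current machine learning does not supply.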
