
Google’s Political Ad Targeting, Democracy and Randomness

Google announced on Wednesday (20/11/19) that it was no longer going to allow political candidates to target their advertisements to individuals. Instead, Google will restrict targeting to age, gender and location.

But does this go far enough? During the 1990s I worked on software for modelling user characteristics and using those models to determine the content delivered to each user. This was primarily for computer-assisted learning: the models would identify the level of knowledge a student had in a particular subject and then deliver material appropriate to that level. Some of these programs were even considered by the space industry as a way to guide and train astronauts on the space station.

However, while there might be arguments that these systems are useful in education, it is far more questionable whether they contribute anything to the political process. It seems to me that in a democratic society everybody should have access to exactly the same information, and that it should not be targeted by age, gender or even location. What are the arguments for this?

If information is targeted on the basis of my demographic characteristics, then I am leaving it to the politicians to decide what I should hear and not my own judgement about what is important to attend to. All targeting takes away my right to see what other people might be concerned about. It confines me to my own information bubble. It restricts my knowledge of what is going on in other parts of the community, and puts me at a disadvantage in being able to interact with a diversity of opinion. It compromises my autonomy.

I believe that we all have the right not only to consider our own situations but to make political decisions that relate to others. I may be well heeled but still concerned for the plight of the poor, or I may be at a disadvantage in society and still concerned about how the better advantaged manage their affairs. I may be female but still have opinions about men. I may be a child but still have opinions about the way adults are dealing with climate change, for example. I may live in London and still be concerned about what is happening in the north.

As somebody once said, ‘in a world where everything is connected to everything else, it is difficult to see what matters’. Targeting restricts my view of a complicated information ecosystem and restricts me to acting locally on the basis of limited information. It takes away my capacity to see the bigger picture.

Much the same argument can be applied to the way that Parliament is currently constituted. The so-called ‘representative democracy’ is a system in which only certain people have access to the political decision-making process. These people are either self-selected or selected by their parties. One way or another, they are in no way representative of the population at large. A truly representative system would select 650 MPs at random from the entire UK population. These people would then truly reflect the widely varying circumstances and concerns of the people at large. I am not necessarily saying that this is a good idea. There are many snags, and high levels of training and support (e.g. in the issues of the day and in the political process) would be necessary for such a system to be effective. What I am saying is that it would be more representative and more democratic.
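The selection-by-lot procedure described above (sortition) is mechanically trivial. As a purely illustrative sketch, with an invented stand-in for the electoral register:

```python
import random

# Hypothetical sketch of sortition: draw a 650-seat chamber at random.
# 'electoral_roll' is an invented stand-in for the UK electoral register.
electoral_roll = [f"citizen-{i}" for i in range(100_000)]

SEATS = 650

# random.sample draws without replacement, so nobody is selected twice.
parliament = random.sample(electoral_roll, SEATS)

assert len(parliament) == SEATS
assert len(set(parliament)) == SEATS  # all members are distinct
```

Because every name on the roll has the same chance of being drawn, the expected composition of the chamber mirrors the composition of the population.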

In fact, the more I think about it, the more I tend to have faith in random processes. Would it not be fairer for all kinds of selection to be random? In job selection, for example, is it really necessary to do any vetting beyond checking the qualifications for the job? If, having established that baseline capability, all vacancies were filled at random, there would be far greater equality of opportunity, diversity and social mobility. It would also save a lot of time in managing the selection process and then correcting it with processes to avoid discrimination.
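The two-step procedure suggested here – vet for the baseline qualification, then choose at random – can be sketched as follows. The applicant names and the single ‘qualified’ flag are invented for illustration:

```python
import random

# Hypothetical sketch: fill a vacancy by vetting for the baseline
# qualification only, then drawing at random from the qualified pool.
applicants = {
    "applicant-A": {"qualified": True},
    "applicant-B": {"qualified": False},
    "applicant-C": {"qualified": True},
    "applicant-D": {"qualified": True},
}

# Step 1: vetting goes no further than the baseline capability.
pool = [name for name, info in applicants.items() if info["qualified"]]

# Step 2: the vacancy is filled at random from the qualified pool.
hired = random.choice(pool)

assert hired in pool
assert "applicant-B" not in pool  # unqualified applicants never reach the draw
```

Note that once the pool is formed, no applicant characteristic other than the qualification can influence the outcome, which is the source of the equality-of-opportunity claim.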

Selection is partly a process in which new power relations and obligations are created between those who select and those who are selected. However, one somehow doubts that the individuals who currently hold the power would be prepared to give it up.

I tend to discount arguments that those who currently hold power are there because they deserve it. Some do, there is no doubt. But many don’t, and that is divisive for the whole system. We are all aware that opportunity favours the circumstances of birth. I suppose we could argue that this itself is something of a random process, and perhaps that is why we do not question the status quo. However, it is only random at a single point in time, and from that point a person’s birth circumstances have an overwhelming influence.

So, I would argue against the power to target messages to anybody in particular. If there has to be any basis for selection at all (e.g. on the grounds of costs), I would argue for the random scattering of identical messages amongst the population.

A policy not to ‘fact-check’ what politicians say in their advertisements is another matter. I am not sure why Advertising Standards Authority principles cannot apply to politicians in the same way as they apply to the advertising of products and services. Google’s policies are, at least, going in the right direction. They say they will identify ‘clear violations’ by putting in place checks on blatantly fake news and ‘deep fakes’. They also promise transparency with respect to who is placing and seeing advertisements. Really, we might be better off continuing our scrutiny of Facebook, which, until very recently, has not thought it necessary to put significant controls on either targeting or the fact-checking of content.

I do have some sympathy for the argument that we should not leave it to commercial companies to make editorial decisions. Indeed, these are political decisions that need to be taken at the level of society generally. However, while society and its regulatory controls are so slow to act, we are dependent on these companies to exercise controls that we hope will, in retrospect, withstand public scrutiny.

Google ad targeting policy
Facebook ad targeting policy



How do we embed ethical self-regulation into Artificially Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us when they conflict with default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or, in the case of establishing consistency with others, ‘agreeing’. We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the Human Operating System. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
