The trouble with algorithms

May 20th, 2014 — 11:03am

Today, algorithms are at the heart of our relationship with the Web and so-called ‘big data’. But what are they, how do they work, and how useful are they?

At the most basic level, an algorithm is a set of instructions for a series of calculations. Algorithms allow software, for example, to query large databases and find results that match a defined set of terms. Google’s algorithms calculate how many connections each website has to other websites, assign a weighting to those connections to generate ‘PageRank’, and then order your search results accordingly. Netflix’s algorithms find films that match your previous viewing history, based on data about how many other users have watched and rated the same films. Amazon’s algorithms do the same for books, DVDs, and so on.
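
To make the idea concrete, here is a toy sketch of the intuition behind PageRank. The link graph is invented, and this is nothing like Google’s production system; only the damping factor (0.85, from the original PageRank paper) is standard:

```python
# Toy sketch of the idea behind PageRank (not Google's actual
# implementation): a page's rank is fed by the ranks of the pages
# linking to it, iterated until the scores settle.

# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

damping = 0.85  # standard damping factor from the original paper
ranks = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # power iteration until the scores converge
    new_ranks = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        # Each page shares its rank equally among the pages it links to.
        share = damping * ranks[page] / len(outlinks)
        for target in outlinks:
            new_ranks[target] += share
    ranks = new_ranks

print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```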

In a much more important and sinister way, algorithms also determine the makeup of the President’s ‘Disposition Matrix’ (or ‘Kill List’ in non-Orwellian English) based on the recorded contacts between suspects and members of known terrorist networks. Google Flu Trends also uses algorithms to mine data from search terms to produce a predictive map of where and when flu outbreaks might strike.

As algorithms and ‘big data’ play an ever more important role in our lives, shaping our taste for films and books, suggesting what we eat and determining what we see online, shouldn’t we ask how good they are and what they hide from us?

The important thing to realise about algorithmic prediction is that past behaviour does not determine future behaviour. In many cases there is a strong correlation between the two, but it is only a correlation. Just because a man has eaten a Big Mac every Tuesday for 10 years without fail does not mean that he will eat one next Tuesday. And, crucially, it does not in itself increase the probability that he will eat a Big Mac next Tuesday. Nevertheless, if I were a betting man, given the evidence of his previous eating habits, I would probably stake quite a lot on him doing exactly that.
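
One standard way to formalise that bet is Laplace’s rule of succession, which (assuming each Tuesday is an independent trial) estimates the chance of another success after s successes in n trials as (s + 1)/(n + 2). A minimal sketch, with the ten years of Tuesdays as hypothetical data:

```python
# The betting man's estimate, formalised with Laplace's rule of
# succession (assuming each Tuesday is an independent trial with an
# unknown fixed probability): after s successes in n trials, estimate
# the chance of another success as (s + 1) / (n + 2).

n = 10 * 52  # ten years of Tuesdays (hypothetical)
s = n        # a Big Mac on every one of them

estimate = (s + 1) / (n + 2)
print(f"Estimated chance of a Big Mac next Tuesday: {estimate:.3f}")
# ~0.998: a strong bet, but still an estimate built from past
# frequency, not a guarantee about next Tuesday.
```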

This is a close relative of the gambler’s fallacy, one of the cognitive biases that Kahneman and Tversky studied. It’s a very hard habit to shake, precisely because it is often so useful: the correlation frequently seems to confirm our intuition that there is a causal relationship, even though there isn’t.

The problem with murdering people based on their past contacts with terrorists (quite apart from the moral repulsiveness of invisibly assassinating people with robots in the sky) is that those past contacts do not establish that a suspect is a terrorist. And again, crucially, they do not increase the probability (absent any other information) that that person is a terrorist. They just lead us to believe that the probability is high.
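
The numbers below are entirely invented, but a back-of-the-envelope Bayes calculation shows why such contacts, on their own, cannot establish guilt: when the base rate of actual terrorists is tiny, most people with suspicious contacts are innocent.

```python
# A back-of-the-envelope Bayes calculation with invented numbers,
# showing why "had contact with terrorists" does not establish
# "is a terrorist": the base rate dominates.

p_terrorist = 0.0001        # hypothetical prior: 1 in 10,000 of those monitored
p_contact_given_t = 0.9     # most actual terrorists have such contacts
p_contact_given_not = 0.01  # but so do some journalists, relatives, drivers...

# Bayes' theorem: P(terrorist | contact)
p_contact = (p_contact_given_t * p_terrorist
             + p_contact_given_not * (1 - p_terrorist))
posterior = p_contact_given_t * p_terrorist / p_contact

print(f"P(terrorist | contact) = {posterior:.1%}")  # ~0.9%, i.e. under 1%
```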

Algorithms don’t make this kind of distinction. They just do as they are told. And this points to a much bigger problem with algorithms: their composition, and the assumptions made by their programmers. Algorithms are only as good as those assumptions. So, if the assumption is that anyone who has had regular meetings with terrorists (and who is to say what the relevant threshold is?) must be a terrorist himself, the algorithm will function according to this erroneous idea.
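
Here is a caricature, in code, of how such an assumption gets baked in. The threshold, the function and the scenarios are all invented for illustration; the point is that the algorithm applies the programmer’s guess without judgement:

```python
# A caricature of an erroneous assumption baked into code: the
# threshold is the programmer's guess, and the algorithm applies it
# without judgement. All names and numbers here are invented.

MEETING_THRESHOLD = 3  # who is to say 3 is the relevant number?

def flag_as_suspect(meetings_with_known_terrorists: int) -> bool:
    # The algorithm does exactly as it is told, no more and no less.
    return meetings_with_known_terrorists >= MEETING_THRESHOLD

# A taxi driver with four terrorist fares is flagged; a careful
# plotter who kept to two meetings is not.
print(flag_as_suspect(4))  # True
print(flag_as_suspect(2))  # False
```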

In other words, algorithms are hardly disinterested bits of code. They encode the biases of their creators, with all the subjective heuristics that entails, along with assumptions absorbed from the environment in which they were designed and the pressures bearing on their production. And they are only as good as the data sets and metadata they interrogate, as this article about Netflix demonstrates.
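
To see how much the output depends on the metadata rather than on the films themselves, here is a minimal sketch using Jaccard similarity over human-applied tags; the titles and tags are invented:

```python
# The same similarity code, two different sets of human-applied tags:
# the recommendations change with the metadata, not with the films.
# Titles and tags are invented for illustration.

def similarity(tags_a: set, tags_b: set) -> float:
    # Jaccard similarity: shared tags over total tags.
    return len(tags_a & tags_b) / len(tags_a | tags_b)

careful_tags = {"film1": {"noir", "thriller"}, "film2": {"noir", "crime"}}
sloppy_tags  = {"film1": {"drama"},            "film2": {"noir", "crime"}}

print(similarity(careful_tags["film1"], careful_tags["film2"]))  # 0.33...
print(similarity(sloppy_tags["film1"],  sloppy_tags["film2"]))   # 0.0
```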

The crux of the problem is that an algorithm can only interpret the world through a selective and highly simplified representation contained within a set of data, of whatever size. If you believe that the world can be accurately represented as a series of binary data, this poses no philosophical problem, only one of scale. But if you believe that there are things which cannot possibly be represented in this way (for example, the moral problems implicit in deciding who should live and who should be blown up, and what rate of collateral damage is acceptable), then you might conclude that an algorithm is neither a capable nor a suitable tool for making such decisions.

Of course, the consequences of letting an algorithm decide which song you might want to hear next on your iPod are much less important, and you may be perfectly happy for the software to make those choices. But we should still be aware of the problem presented by deferring to the machine: every time it makes a choice for us, we are missing something, ceding something, of our own unique and unpredictable judgement. For the sake of convenience, and for the sake of profit, algorithms are becoming ubiquitous. But before they completely take over our lives, we should recognise their limitations.

Tim Harford recently wrote a much-cited piece in the Financial Times about the problems with ‘big data’, and there is a feeling that the backlash against data-driven decision making is about to gather real momentum. But as in all popular debate, there is a danger that the pendulum swings too far back, just as it swung too far forward, and little progress is made. I wouldn’t want to suggest that algorithms can’t be useful, even extremely helpful, in many cases. But it’s important to understand what they are, how they work and what they are not very good at, so that we are not tempted to give them too much responsibility.
