I was recently asked to write a blog post by the CCCB Lab – Centre de Cultura Contemporània Barcelona on the subject of digital transformation in cultural organisations. You can find the full post (in English, Catalan and Spanish) here:
In the lobby of the ACE Hotel in Shoreditch is a long, low table with lots of plug sockets and lamps on it. Around the table there’s a more or less permanent nimbus of young people working on Apple laptops. A lot of them are drinking flat whites or cappuccinos, or eating pastries. None of them is wearing a suit, or anything which could credibly be described as ‘smart’. None of them seems to be doing anything very serious. You see the same type everywhere nowadays in the cafes and bars and co-working spaces of East London.
It’s easy to be fooled into thinking that they are just well-off dossers, or trustafarians lounging around the ether, generously sponsored by their parents who own large, re-mortgaged homes in Godalming or Sevenoaks and holiday in the South of France.
But that’s not what they are. These are the workers of the future. The unevenly distributed future. Or rather the present. You didn’t even know the jobs they do existed. That’s how out of date you are. And they’re already gone. Filled by these neophytes. These are the kids buying targeted ads on Facebook, using Twitter Cards to add ‘rich media’ to brand communications or running their own channels on YouTube and Vimeo. They are the new moguls of online video. These are the ‘community gardeners’ pushing up the follower count for Nike and Adidas. These are the geeks mining the analytics for key words and key trends, optimising titles and links. Or coding up their own apps in Swift and versioning them in Git. These are the bloggers getting paid to churn out ‘before the jump’ copy for Net A Porter or ASOS.
They are the trailing indicators of an economic shift that has been going on for at least ten years now and is so well established that in many parts of the city and pockets around the country things like going to an office, wearing a suit, scheduling meetings, desktop PCs, IT departments, HR and annual leave seem about as quaint as Penny Farthings.
It’s easy to take the piss, to dismiss these people as idiots – the froth on top of a syllabub of silly activity, a faddish avant-garde housed in a glittery bubble of nonsense that will soon pop – leaving the ‘real economy’, which is built out of reassuringly physical things like factories and shops and warehouses and transit vans and laybys and petrol stations – just where it was before: getting along very nicely, thank you. But doing that is the same as bemoaning the fact that the Mohawk is out and the Topknot is in, saying football is a contact sport and defenders should be allowed to tackle and it’s all FIFA’s fault, learning Latin was good because it helped you to pick up other European languages, and lecturing people younger than yourself on why governments lose elections, oppositions never win them, and politics is just one big cycle so nothing ever really changes.
It’s living in the past and thinking it’s the present. It’s a grand failure of imagination.
This change isn’t a small thing. This is the activity that is going to power the UK through the 21st century – *whether you like it or not*. And the change isn’t going to slow down. It’s going to get faster and faster.
And if the kids with the stupid trousers and incomprehensible facial hair look as if they’re not serious, be warned: they play to win. The whole culture of this new generation of businesses is ruthless, stripped-down, bare-knuckled Randianism. They may talk about social enterprise but you can already see, in the examples of Uber, AirBnB, Twitter and countless others that when it comes to abiding by national laws, rules and regulations, or even when it comes to the idea of public goods or human rights, nothing shall ever interfere with the bottom line.
The big companies take off and grow so quickly that national governments and regulatory bodies can’t keep up with them, and by the time they do it’s too late. They impose a small fine, which the now gargantuan enterprise can simply write off as an ex post facto cost. Startups know this, and exploit this bureaucratic inertia to go global at frighteningly high speed. They also tend to disclaim any responsibility towards the societies in which they operate. For example, AirBnB takes no responsibility for ascertaining whether the person letting a flat is its owner. In this way they shield themselves from future legal proceedings. And they encourage large numbers of tenants to sub-let flats that don’t belong to them. It’s a huge money-spinner.
But if like me you often find yourself, two years later, arguing about the rights and wrongs of this sort of business practice, you are missing the point – spectacularly so. The world has changed, and continues to change, and arguing that it should go into reverse because it’s unfair and you don’t like it is a waste of time and energy. Rather than lamenting the fact that our city is increasingly populated by people who look like Nathan Barley we should be celebrating it. Because the cities without these hipster honchos are going to be the ones that decline and grow poorer over the next few decades.
In other words, as Clay Shirky recently exhorted print journalists in the US – you can’t beat them, so you’d better stop pointing and laughing at them and join them before it’s too late and you are shining their shoes on a street corner or serving them a nice refreshing glass of Dalston Cola.
Art proceeds from experience; it is born out of the wider culture. Its materials are perception, sensation, memory, thought and attention. Its landscapes are those of the imagination, the conscious and unconscious mind, the psychic, the spiritual and the emotional. All of which are partly formed by our daily contact with the world around us.
Drawing on this experience, artists create an aesthetic language (literal, symbolic, and metaphorical) in which to express themselves.
It should come as no surprise, then, that when the world becomes increasingly digital, when our technologies – and our communications through them – are based on computers and mobile devices, artists’ work begins to reflect these experiences, and the concerns that arise out of them.
In fact, for anyone remotely observant of their environment, it would be shocking if art had not already begun to be deeply impressed by these surroundings and preoccupations.
The art of today ought to be about digital technology. It ought to be about our relationships with this technology – how it affects us and what it means and how we experience it and what is wrong with it. Because that is the world in which we are now living.
The medium and the terminology of this art ought to be those of the network itself – the terminals and servers and ‘clouds’ and devices and ‘wearables’ that together make up the digital element that surrounds and envelops us.
Of course art oughtn’t to be about anything in particular. It has no obligations. But an art that tells us nothing, that illuminates nothing about the world we inhabit has little relevance, and less power. The power of great art is eternal. But it cannot be great unless it responds to some specific condition in the culture and the society – and more importantly the era – in which it is made.
What surprises me is how much of our current artistic establishment reveres an art that ignores this environment, and ignores the art which is actually engaging with it. When the world changes, art changes.
And so it should, otherwise it would die.
If the old adage about transatlantic epidemiology is true, then in terms of the digital start-up economy, England, or at least Shoreditch, has not so much caught a cold as deliberately inhaled an entire chestful of virions straight from Silicon Valley. And this infection, rather than being resisted or quarantined by the public health authorities, has been welcomed and actively encouraged, in the hope that the economic gains it will bring mirror those which have accrued to its original US hosts.
But of course Silicon Roundabout* and its environs and satellites all around Hackney, Tower Hamlets, Bow, Mile End and Stepney, have also given the virus their own mutations, in the form of a language, fashion and culture that is distinctively English. While apparently aping much of the jargon and some of the appurtenances of their Californian forebears, startups in London have added their own twists – hence the prevalence of full-length beards and duffel coats, mankles and ‘staches as opposed to the t-shirts and flip-flops preferred by Googlers in San Francisco and Mountain View. Hence also the spiritual pre-eminence of native computer scholars Alan Turing and Tim Berners-Lee over Alan Kay and Vint Cerf and other American luminaries.
The language of VC funding, investment rounds, tweet-ups, and so on has been borrowed, wholesale, from across the pond, but even though these mantras are repeated with shamanistic fervour, they have not been, thankfully, internalised in the way they are in Stanford and Cupertino. Native irony prevents such fads from setting in.
A couple of years ago, I attended a meeting at one well-known company based in Shoreditch with a group of publishers. As we sat in the reception area, looking at the neon signs above the bar and sneaking envious glances at the pool tables, my friend leaned over to tell me that he knew someone who had just started work at a nearby firm.
‘His desk is on the ping-pong table’, he said, in a sort of awed whisper.
The peculiar financial mix in London’s cultural and creative industries has also shaped this landscape, for better or worse. There is much less investment capital available to firms in London than there is in either California or New York, although the range of ‘angel’ investors and seed funders has grown substantially over the last couple of years. Similarly, crowdfunding platforms have become an increasingly reliable part of the funding mix for many new ventures – notably independent games.
Most commercial revenue has come from the advertising industry, which has been relatively quick to catch on to the potential for new types of digital experience to turn consumers on to its clients’ products. By contrast, most public institutions have been slow to grasp the full cultural import of this new ecology, and funding interventions have typically been reactive and wide of the mark in supporting a truly artistic exploration of these new media. In any case, funding opportunities have rarely reflected the type of agile business models and software development cycles – or sprints – by which startups typically work. In most cases, they seem irrelevant to many digital ‘creatives’ and artists, or simply impossible to access.
Broadcasters and ‘mainstream’ media organisations, when they haven’t viewed many of these startups as irritating tyros to be ignored or squashed, have also failed to implement effective models of partnership and collaboration to maximise the potential of working with them (straightforward commissioning relationships/service agreements notwithstanding). Channel 4’s Education team has perhaps been the notable exception to the rule, although in more recent times even they have struggled to keep up the same degree of energy, flexibility and resourcing.
Google and Facebook, hymned by the Prime Minister, have established beachheads, and are gradually breaking out, their immense gravitational pull slowly but surely warping the whole environment in their own, banal image. All sorts of schemes have been concocted by government to try to harness the digital wind blowing from the west and grasp some of the virtual wealth flowing around the world in beams of immaterial cash. Only time will tell whether these initiatives, with their tax reliefs and brash publicity machines, have any measurable impact.
There is, though, a feeling that in purely economic terms at least, the tech sector in London is rapidly maturing. It’s sometimes hard to fight your way through the PR and propaganda to make a sensible judgement about what is actually happening, but it’s certainly true that overall revenue to the sector is growing strongly, and the slightly tenuous, experimental atmosphere that used to pervade the area a couple of years ago has now been replaced by a much greater sense of self-confidence and assurance. This is born out of a feeling that Silicon Roundabout has reached a sort of critical mass in terms of size and density, that online consumer behaviours – and the business models which can exploit them – have been more fully understood, and people really know what they are doing now and how to make money.
Academe, too, has begun to hover over the agglomeration of entrepreneurial activity, partly driven by the imperatives of Whitehall’s confused and demented search for ‘value-added’ contributions to export growth, partly by the dawning awareness of a substantial cultural phenomenon that has become ripe for study, another set of entrails to be read, bones to be picked clean, once the Promethean heat has gone out. The linkage between universities and tech firms is still embryonic. The process by which companies can patent and adapt research to commercial ends, compared to its equivalent in the States, is Byzantine and needs to be reformed (but without inculcating the sort of IP trolling that has hindered genuine competition and growth in the US).
In terms of arts and culture, there is an energetic critical strain in the work of technologists, hackers, developers, games makers and businesses. A pervasive irony still courses through the area, in posters, restaurants and cafes, clothes, pets, pop-up shops, conversations, food, facial hair, multi-coloured mushrooms installed on the tops of buildings, and just about everything else. Sometimes layer upon layer of the stuff overlaps in such a way that it becomes vertiginous and you cannot tell what is serious and when you have disappeared down some sort of Ballardian sinkhole. Developers and coders often make ‘art’ in their spare time, or confess a desire to make more imaginative work, free from the constraints imposed on them by their agencies or clients – but these emerging artists (who would probably not refer to themselves by that name or even think of themselves as artists) are rarely recognised or championed by curators, producers or patrons in the cultural sector. The Barbican’s ‘Digital Revolution’ exhibition and partnership with Google, and the re-launch of The Space – among other things – have brought a renewed focus on digital art in the short term, but by and large there is still a lack of awareness about where the most interesting work is being made, who is making it, how to support it, and just how important it is.
The featureless corporate discourse, the ruthless financial zeal and rapacity of the zealots from the golden land across the Atlantic is still fundamentally alien – the sort of cold steel one encounters in transcripts of conference calls with Jeff Bezos and Mark Zuckerberg. That unreflective, appetitive will for dominance doesn’t catch on in this country. It is still viewed with too much suspicion. Our horizons are just too narrow to think on that scale. There is still an element of ‘Make Do and Mend’ among all the techno-utopianism spouting forth from pundits, journalists, market-makers and bureaucrats. The maverick spirit of young English minds, the crossword-puzzling, code-cracking, tinkering inquisitiveness that begins with the Book of Riddles and runs through Bletchley is still strong.
But it can’t survive forever against the tide of corporate cash unless there are other frameworks of support and other rafts of public, charitable or philanthropic funding for it to cling to.
* In Bologna, at the Children’s Book Fair, where I went to speak about support for digital publishing in the UK, I met the CEO of a startup from New York – ‘Silicon Alley’. He was introduced to me by his assistant, a girl who regarded him with the sort of wide-eyed devotion usually reserved for the Messiah or the Mahdi. When I told him about ‘Silicon Roundabout’ he gave me a pitying smile that seemed to encompass the whole course of British Imperial decline and American ascendancy from 1918 onwards.
Jill Lepore has a much-discussed article in the latest New Yorker about innovation, in particular the book “The Innovator’s Dilemma” by Clayton M. Christensen and the theories it contains and the influence those theories have had. She gracefully debunks the ideology of disruption, pointing out that many of its claimed successes are, on closer inspection, nothing of the kind, and that the whole premise of widespread disruption leading to greater efficiency and generating better economic performance is built on very shaky foundations.
‘Disruption’, in its most common usage, seems to be closely related to Schumpeter’s idea of ‘Creative Destruction’* – the idea that markets need to clear by ruthlessly burning away dead wood, or failing businesses, to spur future growth. And of course Schumpeter’s analysis of the business cycle and ‘Kondratiev Waves’ anticipates the idea of disruptive or path-breaking new technologies leading to economic expansion. It’s surely no coincidence, then, that the resurfacing of the venerable economist’s ideas (e.g. in a recent Nesta report: http://www.nesta.org.uk/publications/schumpeter-comes-whitehall)** has coincided with the ubiquitous deployment of the terminology of disruption and the worship of disruption among hungry start-ups and new businesses keen to do away with the old behemoths and inherit their customers and profits.
Why is this idea, this way of thinking about business and the wider economy, so powerful? Why does it have such an appeal to the instincts of aggressive entrepreneurs and investors, as opposed to the much more successful and widely accepted (at least in academic circles), Keynesian model of the economy which has in fact helped to moderate recessions and depressions, create greater employment, keep inflation low, reduce inequality, etc… for extended periods of time?
Perhaps it is because some of those people see markets, and much of life, as a simple competition. They do not understand Ricardo’s idea of comparative advantage, nor do they understand the concept of counter-cyclical fiscal policy and management of aggregate demand. They just think that business is a race, and that only the fittest can, or ought to, survive. And as long as they simply believe that and carry out that work, they go a long way to fulfilling their social function. From the perspective of the rat, the experiment looks like a labyrinth, and so it should. The problem comes when rats (so to speak) think they can be scientists, and lobby for the re-arrangement of the labyrinth to suit themselves.
The concept of simple competition, and the accounting concepts of debt and credit, are easy to grasp. They do not require any real mental effort. They are intuitive and ‘make sense’ on the basis of personal experience. On the other hand, advising a government to spend money that it (apparently) doesn’t have seems counter-intuitive, and grasping why it might ultimately be the best policy requires an understanding of sophisticated financial relationships, in both real and nominal values. Thinking of debt not as an accounting identity or a fixed, real value, but as a concept that is somewhat malleable, a social and cultural as much as a financial construction, is difficult (although anyone who has read David Graeber’s excellent ‘Debt: the first 5000 years’ will have a massive head start in this respect).
Simple, clear, intuitive concepts are easier to sell, easier to explain, and easier to spread, than subtle, nuanced and counter-intuitive ones. And this partly explains what Paul Krugman likes to refer to as ‘zombie’ arguments and why they keep reappearing long after they have been comprehensively refuted. This is also perhaps why such weak, unoriginal or generally uninteresting ideas seem to find currency in financial circles. And perhaps this is why the idea of disruptive innovation is now so ubiquitous and so much unwarranted, and uncritical, respect is paid to it.
Nassim Nicholas Taleb’s book ‘Antifragile’ is another example of the way in which this kind of idea gains traction. Taleb is one in a long line of successful investors who want to credit themselves not just with the ability to pick stocks, but to draw profound philosophical lessons from their ability to do so. Success leads them to conclude that they do actually know the secrets of the market, and that they can establish a coherent system by which the movement of markets (and by extension a great deal of human behaviour and psychology) is to be understood. George Soros, with his theory of ‘reflexivity’, has made similar claims.
‘Antifragility’ means thriving in situations of extreme turbulence or violence which are generally destructive or ‘disruptive’. To be antifragile is not just to be tough, to be able to endure powerful destructive forces, or enormous change, but to be able to profit from them and grow as a result of them. An example of an ‘antifragile’ organism might be the thermophilic bacteria that live in vents near the ocean floor, where they grow in conditions of extreme heat and pressure. For Taleb, these adaptations make the bacteria extremely tough and resilient, and allow them to survive and even flourish where other species cannot.
Taleb argues, at wearisome length, that investors can make themselves somewhat antifragile by adopting certain positions in the market, allowing them to profit massively when the vast majority of people are losing money. Not only being resistant to stock market crashes, but thriving in them. This is an extremely grand and long-winded way of expressing a fairly simple concept. But precisely because the insight is a simple one it can quickly be understood and establish itself as a meme with widespread influence.
Innovation is a treacherous concept, and because the word and its derivatives are so widely used and applied to so many objects, events, and processes, it has lost much of its meaning. There is a distracting obsession with degrees of innovation, with just how new and original something is. Apart from the fact that this is almost impossible to judge, and almost entirely subjective, it doesn’t really matter. What matters is surely not just how different something is from what went before, but the degree to which it improves things. A modest reform or update may bring far more benefits than a radical one. Does it matter which is the more ‘innovative’ approach if the one that is most helpful requires less change, less disruption to the status quo ante?
It is a mistake simply to associate upheaval with progress. It is a mistake that Joseph Schumpeter made, and many of his intellectual descendants continue to make.
* An idea which had its most famous, and notorious, expression in Treasury Secretary Andrew Mellon’s advice to Hoover in the depths of the Great Depression: ‘Liquidate the farmers…’ Quite how this policy was supposed to work in practice (and not lead to mass starvation) is a mystery.
** The idea of applying Schumpeter’s ideas (which in and of themselves are a bit zany, and mostly untested, and apply in any case largely to a specific analysis of the business cycle in the private sector) to the provision of public services is, in my view, just, well, a little bit misguided.
Today, algorithms are at the heart of our relationship with the Web and so-called ‘big data’. But what are they, how do they work, and how useful are they?
At the most basic level, an algorithm is a set of instructions for a series of calculations. Algorithms, for example, allow software to query large databases and find results that match a defined set of terms. Google’s algorithms calculate how many connections each website has to other websites and assign a weighting to those connections, generating ‘PageRank’, and then order your search results accordingly. Netflix’s algorithms find films that match your previous viewing history based on data about the numbers of users who have watched/rated the same films. Amazon’s algorithms do the same for books, DVDs, and so on.
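The mechanics behind PageRank can be sketched in a few lines. This is a toy version of the published idea, not Google’s production system; the tiny three-page ‘web’, the damping factor and the iteration count are all illustrative choices:

```python
# Toy sketch of the PageRank idea: a page's weight comes from the
# weights of the pages linking to it, redistributed repeatedly until
# the ranking settles.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal weight
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # dangling page: share evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

web = {
    'a': ['b', 'c'],    # page a links to b and c
    'b': ['c'],
    'c': ['a'],
}
ranks = pagerank(web)   # 'c', linked to by both 'a' and 'b', ranks highest
```

The insight is circular but computable: a page matters if pages that matter link to it, and repeatedly redistributing weight around the graph converges on a stable ranking.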
In a much more important and sinister way, algorithms also determine the makeup of the President’s ‘Disposition Matrix’ (or ‘Kill List’ in non-Orwellian English) based on the recorded contacts between suspects and members of known terrorist networks. Google Flu Trends also uses algorithms to mine data from search terms to produce a predictive map of where and when flu outbreaks might strike.
As algorithms and ‘big data’ play a more and more important role in our lives: shaping our taste for films and books, suggesting our food and determining what we see online, shouldn’t we ask how good they are and what they hide from us?
The important thing to realise about algorithmic prediction is that past behaviour does not predict future behaviour. There is a strong correlation, in many cases, between the two. But it is only a correlation. Just because a man has eaten a Big Mac every Tuesday for 10 years without fail does not mean that he will eat one next Tuesday. And, crucially, it does not increase the probability that he will eat a Big Mac next Tuesday. Nevertheless, if I were a betting man, given the evidence of his previous eating habits, I would probably stake quite a lot on him doing exactly that.
This is a close relative of the biases Kahneman and Tversky catalogued under the ‘representativeness’ heuristic: we mistake a pattern for a law. It’s a very hard habit to shake precisely because it is often very useful. The correlation often does seem to confirm our intuition that there is a causal relationship, even though there isn’t.
The problem with murdering people based on their past contacts with terrorists (quite apart from the moral repulsiveness of invisibly assassinating people with robots in the sky) is that those past contacts do not establish that a suspect is a terrorist. And again, crucially, they do not increase the probability (absent any other information) that that person is a terrorist. They just lead us to believe that the probability is high.
Algorithms don’t make these kinds of distinction. They just do as they are told. And this leads us to a much bigger problem with algorithms – their composition – and the assumptions that are made by their programmers. Algorithms are only as good as these assumptions. So, if the assumption is that anyone who has had regular meetings (and who is to say what the relevant threshold is?) with terrorists must be a terrorist himself, the algorithm will function according to this erroneous idea.
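That point can be made concrete with a deliberately crude sketch. Everything here – the threshold, the data, the scenario – is invented for illustration; the interest is in how the programmer’s assumption becomes the algorithm’s behaviour:

```python
# Hypothetical sketch: the programmer's assumption (a contact-count
# threshold) is baked straight into the code. The algorithm doesn't
# weigh evidence; it mechanically applies whatever rule it was given.

SUSPICIOUS_CONTACT_THRESHOLD = 3       # who decided three? the programmer

def flag_suspect(contact_log):
    """contact_log: list of (person, is_known_suspect) tuples."""
    contacts_with_suspects = sum(1 for _, known in contact_log if known)
    return contacts_with_suspects >= SUSPICIOUS_CONTACT_THRESHOLD

# A journalist who interviews four known suspects is flagged exactly
# as a conspirator would be: the code cannot tell the difference.
journalist = [('source_' + str(i), True) for i in range(4)]
print(flag_suspect(journalist))        # True
```

The bias lives in the constant and in the choice of what to count, not in any ‘decision’ the machine makes.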
In other words, algorithms are hardly disinterested bits of code. They have encoded within them the biases of their creators, with all the subjective heuristics that entails, and also the assumptions created by the environment in which they are designed and the factors bearing on their production. And they are only as good as the data sets or metadata which they interrogate, as this article about Netflix demonstrates.
The crux of the problem is that an algorithm can only interpret the world based on a selective and highly simplified representation contained within a set of data, of whatever size. If one is capable of believing that the world can be accurately represented as a series of binary data, this does not represent a philosophical problem, only one of scale. But if you believe that there are things which cannot possibly be represented in this way (for example the moral problems implicit in deciding who should live and who should be blown up and what rate of collateral damage is acceptable) then you might conclude that an algorithm is not a capable, or suitable, tool for making such decisions.
Of course the consequences of letting an algorithm decide which song you might want to hear next on your iPod are much less important, and therefore you could be perfectly happy for the software to make those choices. But you are still aware of the problems presented by deferring to the machine. Every time it makes a choice for us we are missing something, ceding something, in the way of our own unique and unpredictable judgement. For the sake of convenience, and for the sake of profit, algorithms are becoming ubiquitous. But before they completely take over our lives, we should recognise their limitations.
Tim Harford recently wrote a much-cited piece in the Financial Times about the problems with ‘big data’ and there is a feeling that the backlash against data-driven decision making is about to gather real momentum. But as in all popular debate there is a danger that the pendulum swings too far back, just as it swung too far forward, and little progress is achieved. I wouldn’t want to suggest that algorithms can’t be useful, and extremely helpful, in many cases. But I think it’s important to understand what they are and how they work and what they are not very good at, so that we are not tempted to give them too much responsibility.
Many aspects of Walter Benjamin’s famous thesis can now be reconsidered in light of the even greater reproducibility of works of art in the digital age. The ‘sense perception’ that Benjamin refers to in his essay has evolved once again, as the production and distribution of art has changed. The ‘use value’ of art has moved further away from ritual in some respects, closer in others. The ‘aura’ has almost entirely dissipated, but there are new efforts (not coincidentally) to try to restore it.
The aura that Benjamin talks about is the ‘location in time and space’ of a work of art, its ‘authenticity’, which is not reproducible.
And of course one of the things he would have commented on immediately is the different means by which information is recorded and stored digitally and in analogue. The bit-by-bit binary nature of digital recording means that some elements of an analogue original are necessarily lost in the process of reproduction. It’s hard to define exactly what has gone missing and what it means, but a digital reproduction of an analogue work is different from the original in this fundamental way. Every subsequent copy is identical to the first, assuming no errors in transcription. But the first and most important digital reproduction misses something of the qualities of the original.
For example, recording music digitally means that the sound wave is sampled at regular intervals, and each sample is stored as a number of ‘bits’ of information on a disk. Each bit can be in one of only two states – 0 or 1 – ‘on’ or ‘off’. Depending on the number of bits per sample and the rate of sampling, the digital sound inevitably loses some of the complexity and fullness of the original.
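A small sketch makes the loss measurable. Here a smooth sine wave is quantised at different bit depths (the function and its parameters are illustrative, not a real audio codec): the fewer the bits, the coarser the steps, and the more of the original wave is thrown away.

```python
# Quantisation loss in miniature: round each sample of a smooth wave
# to the nearest of 2**bits evenly spaced levels and measure what's lost.
import math

def quantise(value, bits):
    """Round a value in [-1, 1] to the nearest representable level."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)          # spacing between representable levels
    return round((value + 1) / step) * step - 1

# one cycle of a sine wave, sampled 100 times
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]

for bits in (3, 8, 16):
    error = max(abs(s - quantise(s, bits)) for s in samples)
    print(f"{bits:2d}-bit audio: worst-case rounding error {error:.6f}")
```

At sixteen bits per sample the worst-case error is vanishingly small, but it is never zero – which is the point in miniature.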
Another good way to see this difference is in digital images and especially text. Handwriting in ink allows the writer to form letters in an almost infinite variety of ways, and the outer edges of the letters form a continuous and contiguous curve. Compare this with the jagged edges of letters in a word processor when you zoom in to see them up close.
In the end, when the pixel density and resolution is high enough, so that each bit corresponds to an individual atom, the difference may become a purely ontological one. But there is still a distinction that Benjamin would recognise and point to as a diminution of the aura of the original work.
The ‘tradition’ in Benjamin’s essay is a tradition in the way we perceive art’s place and function in society. We go through periods of consensus about what to make of art, how to view it, what its role should be in our lives. In the late Twentieth Century there developed two broad strands in this tradition – the exclusive, private appreciation and ‘ownership’ of individual great works, the increasing hitching of artists to the hurdy-gurdy of the jet set and the international market, art that fulfilled the function of a status symbol and a demarcation of powers and fortunes.
In this world, the aura and the authenticity of work was prized because it was supposedly conferred on the owner, along with the rank of patron, and because it was deemed the ultimate mark of exclusivity. By ‘collecting’ a work, the owner could exclude everybody else from enjoying it, or, in an even more refined process of self-distinction and self-aggrandisement, could loan the work for public exhibition. This emphasised their ownership of it while seeming to share it with the public. And they could tell themselves, mysteriously, that they had some greater claim to knowledge about the work, or some greater love of it, than anybody else because it hung in their living room or bedroom, or their elevator. The type of esteem that this activity bestowed became a sought-after commodity and was cleverly traded and speculated in by market makers.
The other strand, a somewhat countervailing one, emphasised the role art could play in enriching the lives of the many, not the few, and the supposedly educational, civilising, democratic influence of art. It arranged exhibitions for mass audiences, large performances and vast installations designed to capture the public imagination and imbue them with the healthy benefits of earnest appreciation.
This tradition in some sense replaced, or recreated, the religious use of art that had been so heavily undermined in the early and middle parts of the century. Instead of believing that art had some place in helping audiences to achieve a transcendent contemplation of the divine, now art’s religious function, properly understood, was to encourage a spiritual development that had no (or in fact a cryptic) relationship with religion.
And it could be said that the encouragement of the religious, public contemplation/appreciation of art was devoted to the worship of art itself, rather than any theological object, and thereby the cultivation of a more elevated sense of self in the individual, and at the same time an awareness of his or her place in a hierarchy or web of cultural sophistication. Art worked as a kind of sublimated religion and at the same time a conservative and normative force for the production of social awareness and order.
This idea, that art could occupy such a role, based on an optimistic notion of human perfectibility, can be seen as an extension of early Victorian ideas about the progress of the human spirit through education and exposure to (man-made) beauty.
Interestingly (but hardly surprisingly), artists themselves (by and large) neither shaped nor particularly attached themselves to either of these developments. They ironised them and often criticised them, but rarely made any profession of faith in one or the other. In any case, the relationship of artists to their patrons and their critics is of course a well understood and much studied subject. Both ‘traditions’ are the result of different shaping forces on the production and development of works of art. Patronage had been in some sense institutionalised, both in the private and public spheres, so that artists became reliant on two different languages, one of truculence, accessibility and availability, and one of ironic self-awareness, difficulty and the nouvelle coterie.
These developments themselves mirror the post-war balance in the West between the power of private capital in a few hands and the ‘public goods’ constructed by the enlightened state. In recent years, as the power of the wealthy individual has grown, the ability of the State to act as patron and collector has, correspondingly, diminished. There is nothing especially ‘digital’ about this. It’s just background.
This is all far too neat and simplistic, but I think it helps to give an outline of the broad position.
But from the late 1990s onwards, the widespread penetration of broadband (ADSL) internet access and the emergence of many new platforms for viewing, sharing and distributing art work started to change these two traditions, and to change our ‘sense perception’ of the status of a work of art, especially because reproductions of work, of a sufficiently high quality to be almost indistinguishable from the original, could now be found at the touch of a button (or screen).
A factor which has not been commented on enough is the extent to which the World Wide Web, and its daily use by hundreds of millions of people, is shaping their expectations of culture and artistic work. We currently have only the barest understanding of the ways in which the web is changing not just our behaviours but our attitudes to art, our expectations of stories, characters, performances and visual aesthetics.
And at the same time, due to an increase in the ‘illegal’ downloading of these copies, companies that had made profits from distributing recorded music, for instance, started to revert to an older model of making money from live performances. The ‘aura’ of these performances was something that could not be captured and reproduced with current technology. Instead of the reproduction of an individual song being the work of art, attention shifted to the performance itself.
This model was not available to the film industry, for obvious reasons, and so they were forced to adopt a number of different tactics in order to maintain their profits – by clamping down on downloaders/filesharers, by advertising the ‘cinema experience’ and emphasising the quality of the ‘real thing’ (in fact, the studio-licensed and distributed DVD recording), and by exploring the use of new technologies such as 3-D and Blu-ray (high definition) to make it more difficult for downloaders to enjoy the same quality of experience as those who purchased the official products. Belatedly, they also adopted much more successful licensing arrangements with ‘legal’ online distributors such as iTunes, Netflix and Amazon.
These various types of copy do not differ essentially from the reels of film projected in cinemas at the time when Benjamin wrote his essay. To talk about a film is almost inevitably to talk about a reproduction. Original prints used to degrade and disappear, but now that films are made digitally, the copy has a longer life span. But the original film is almost never exhibited. It is hard to talk about a film having an ‘aura’ because any version that is exhibited is a copy. What has changed, or been largely replaced, is the communal experience of watching films together in a cinema. Although this is still popular, it is much less so than it was in Benjamin’s day. Now, by far the most prevalent way of watching films is at home, on the television screen or the computer.
The cinema experience (that is, the frisson of watching a film together with a large crowd of strangers and being united with them by the experience) has largely gone as more and more people choose to watch by themselves, in the comfort and privacy of their homes.
Streaming video introduces another type of reproduction – a continuous one that lasts only as long as the film is being transmitted. The copy does not persist, except temporarily in the player’s buffer or cache. What does this mean to the viewer? Streaming technologies are still limited, and so the experience is often marred by artefacts, slow buffering or unexpected pauses. Pixel density and image quality are lower than on a DVD, and lower still than High Definition. Viewers have sacrificed quality for convenience – the ability to watch what they want, when they want, and where they want. In this case, what is the ‘use value’ of the film as art? What role does it occupy? And is it altered by the fact of the film being transmitted and received and stored so widely?
(to be continued…)
In the week of the revelations about the extent of NSA surveillance, a few thoughts about the state’s authoritarian tendency.
Edward Bernays was the first person to apply psychoanalytic techniques to what Chomsky calls the ‘control of the public mind’. Bernays firmly believed that the use of Freud’s ideas in combination with well-designed propaganda could enable the government to influence mass behaviour in order to create a better and more ordered society. His insights were developed and used by Joseph Goebbels to mobilize mass support for the Nazis and later became the basis for the revolution in advertising in the 1950s spearheaded by the Madison Avenue agencies.
B. F. Skinner, father of the behaviorist school of psychology and sociology, shared many of the same goals as Bernays, but believed that the way to encourage better behaviour was not by using subtle messaging in the media, but to teach it through a series of rewards and punishments, or incentives and disincentives. His central idea was that people had no real free will or agency but that they simply responded to stimuli and that these could be tailored to suit the model of society the state wanted to create. If the government simply pressed the right buttons when people did good things and bad things, they could control the public.
Game Theory* famously developed mathematical formulae – ‘games’ – to predict how ‘rational actors’ would behave in certain scenarios, and was deeply influential in western policy throughout the Cold War (because it promised to ‘understand’ Soviet behaviour in response to Western military and geopolitical moves). And subsequently it also became hugely influential in economics and especially finance where traders often relied (and still rely) on the adages of Game Theory to predict the movement of markets.
Underlying Game Theory, especially in its economic implementations, is the ‘Rational Expectations’ hypothesis which posits that individuals are rational agents, in other words that they always, in all conditions, use the knowledge they have to maximise their own material advantage.
Cass Sunstein and Richard Thaler, building on the work of Daniel Kahneman and Amos Tversky, have recently introduced a new set of behaviorist ideas which have also had a profound impact on policymakers in the US and Britain – so-called ‘behavioral economics’, or, simplistically, ‘nudge theory’. Their argument could be crudely boiled down to the idea that you could influence public behaviour with a very subtle set of incentives based on people’s natural tendency to apathy or to following the crowd.
It’s important to understand this set of ideas and the history of these sets of beliefs and their influence on politicians and legislators because they continue to shape the world we live in very strongly, especially in light of recent revelations about mass surveillance and the unprecedented accumulation of data by the NSA about citizens in the US and beyond.
Governments and intelligence agencies believe that they can control the behaviour of their citizens, and they believe that doing that is a fundamentally beneficial thing for society. They have accepted Bernays’ and Skinner’s (and Bentham’s) philosophical gambit: that that degree of control/power is good because it allows the benevolent authority to make society better for the majority, if not every citizen. All of the subsequent behaviorist refinements are simply more subtle ways of approaching the same problem. This is, at the most basic level, why governments have a tendency to become authoritarian – because in seeking to do what they think is ‘good’ or ‘right’ they believe that the ends justify the means and therefore they must take on whatever powers are necessary to achieve their goals in the interest of the greater good.
Even relatively sophisticated students of jurisprudence and government (here’s looking at you, Barack) don’t seem to question the ethical basis of this argument.**
As I see it, there are two strong ethical refutations of their position. The first is utilitarian – and appears in several forms, perhaps the most common of which is: “But what if the government (at some future point) is not benevolent?” Once you have taken such gigantic powers into the hands of the government, it’s hard to give them back, and if in future a dangerous or malevolent or corrupt set of rulers is elected, this could be immensely damaging.
Governments tend to respond to this argument with a relatively weak defence, along the lines of: “We live in a democracy. If you don’t like a government in future you can vote them out.” This argument fails because the very powers that the government seeks can undermine democracy and make it more difficult to oppose power in future. This is especially relevant in the current case of the NSA Prism/Boundless Informant programmes. If the state has access to all of our communications and personal information, it will be impossible to resist in future, whatever it decides to do.
The second is even more basic – and it is simply that the government has no right to arrogate any powers to itself, or any property belonging to its citizens, without first gaining their explicit consent (or that of their elected representatives) and doing so in a fully transparent manner. This argument derives from the idea of the social contract but it is inherent in virtually all post-Enlightenment theories of government. It is striking how absent this argument has been in the current debate about ‘national security’ powers and the overreach of executive authority.
If this argument is raised at all it is usually opposed with another weak response, which is really just boilerplate: “Your elected representatives have been informed about this and they have made a choice for you. You should trust them.” This is the sound of the Establishment desperately trying to get you to look away and ignore what is going on. It is paper thin. In many cases the legislators in question turn out not to have been informed, not to have had any meaningful consultation or oversight of the programmes or decisions involved, or to have been co-opted by the intelligence agencies or executive power because they believe the same things as I have set out above. In practice, judicial oversight and the power to challenge the actions of the government is weak. But in any case, what matters when such huge power grabs are concerned is that the people themselves have a chance to decide. That they are informed. Not just the lemmings who sit in Parliament or Congress and pretend to serve their interests. That is the essence of democracy and if politicians deny it they have got something to hide.
But there are also a number of scientific objections to the behaviorist world view. One of the classic attacks on Skinner was made by Noam Chomsky in his seminal 1959 review of Skinner’s ‘Verbal Behavior’ – in which Chomsky convincingly argued for a cognitive and generative model of grammar instead of a merely behavioural one and in doing so laid the theoretical groundwork for much of modern cognitive science and linguistics.
Joseph Stiglitz and others have done excellent work in the field of macroeconomics, much of it devoted to showing that the Rational Expectations hypothesis is a fallacy and that people make economic (and many other) decisions in a way that does not maximise their own personal advantage much of the time.
In other words the idea that authorities can accurately predict, much less control, public behaviour at the scale of the population seems very uncertain. So, not only are there strong objections to their attempts to do this at a philosophical level, there is a real danger that in practical terms their efforts, even if they are well intentioned, may fail and may have unforeseen consequences which are extremely dangerous.
The NSA surveillance programmes have been dismissed by some of the more jaded commentators as a fact of life in the 21st century, an inevitable and worthwhile trade-off between liberty and security in an age when everyone gives up their personal information just to join social networks. But it is not simply a question of privacy. It is really about who we give the information to and what they do with it. We should be suspicious of Google and Facebook and others and we should be better at insisting on clear rules about how they use our data (after all, it should belong to us until we give it up). But we should be equally suspicious of government and how it intends to use the information it gathers. We should insist on extremely strong rules about the gathering and use of our data, and greater transparency and oversight.
Because if any authority can know all of our conversations, all of our exchanges and messages and all of our personal information it may be impossible to overthrow.
Update: the list of thinkers and ideas I have cited here is very cursory and subjective. But one person I left out who is also worth considering because of his deep influence on a whole generation of Neo-conservatives is Leo Strauss. Strauss taught at the University of Chicago and formed the political awareness and ideology of men like Paul Wolfowitz. One of the ideas in Strauss’s political philosophy is that the governing class should create and celebrate a mythology of the state that encourages citizens to believe in the government’s just authority and purpose. In the case of America this mythology is based on the idea of the United States as a unique country, a land of destiny, the so-called ‘city on a hill’ whose mission is to defend and spread ‘liberty’ around the world.
For Strauss, these ‘noble lies’ are a way of ensuring the citizens’ attachment to the state and their loyalty to it. And they can also help to reinforce social order. The mythology helps to explain the political structure of society and to justify it. In other words, a well constructed set of national myths is a powerful mechanism of mass control.
* there’s a very good treatment of/primer on Game Theory and its use in Adam Curtis’ series of programmes ‘The Trap’
** I wish people would stop saying that Barack Obama’s record on civil liberties is ‘disappointing’. This is patronising and implies that he has fallen short of some imaginary ideal that he set out before he was elected. If it was any other president we would be saying his record was shameful, not disappointing. Obama never promised, if you paid attention, to be anything but a hawk on national security issues, and so he has proved. So stop treating him like a child who has done badly in a Maths test.
I was very flattered to be invited by Prof David Gauntlett and Dr Paul Dwyer of Westminster University to come and talk at one of their Digital Transformations seminars on the 20th of April at the British Library. Digital Transformations is a research network exploring digital transformations in the creative relationships between cultural and media organisations and their users. The project is based at the University of Westminster, and Partners include UCL, Tate, the British Library, and MuseumNext.
I was there to talk about The Space – a new digital arts service for connected TVs, web and mobile platforms – with Susannah Simons from the BBC. Robert Waddilove from Across the Pond, Google’s in-house marketing agency, also spoke about some of the campaigns and ‘virals’ he had worked on with them, and gave us some great insights into how they work and what makes them spread.
After the presentations and a Q and A session, the audience and speakers broke out into groups to discuss other questions. The group I was in looked at the question of whether, ‘in a digital era, notions of ownership should be set aside.’ We had a really interesting discussion, with a wide range of opinions represented, from a staunch defence of existing copyright arrangements and their role in providing creative people with a reliable income, to a very different view – that these arrangements are no longer up to the job and are in urgent need of reform.
We discussed the ways in which copyright and IP legislation is currently being enforced and policed, the difficulties for some academics, researchers and writers in clearing permissions to reproduce images and text, institutions’ attempts to change their policies on licensing of material, piracy, and everything in between.
What was quite interesting, to me, was that nobody really agreed with the premise of the question. Everyone thought, more or less, that authors should retain some form of ownership of their work, the right to be identified as author, for instance, and the right to grant permission to copy it. The discussion was really about the legal extent of these concepts of ownership, and the efficacy of the mechanisms for enforcing them.
There was a general view that these mechanisms are increasingly unfit for purpose. We talked about the case of the boy who hacked into Norwich City’s website CMS and posted photos of the club’s new kit on his blog before they had been officially unveiled. After club officials noticed he had reproduced the images without their permission and before the embargo expired, they called the police, who visited the boy’s house.
This is obviously a very extreme instance of a heavy-handed response to a case of copyright infringement, but it is by no means the only one. And as we have seen with rights-holders in the music industry suing individual file-sharers, or lobbying to have their internet connections cut off, it’s perhaps an apt symbol of a wider problem. Of course, rights-holders are simply using the tools available to them to prevent what they see as theft plain and simple. But these methods are at best ineffectual, at worst counter-productive and morally questionable.
We talked about the chilling effect of high fees for reproducing images in academic works, and the similar problem of permissions fees for poetry or text. Many in the group were of the view that rights-holders would actually benefit from operating a more efficient permissions policy and that it was uneconomical for them to pursue online infringement of these rules.
To take poetry as an example, poems appear in full, unauthorised, on thousands of blogs and other websites. No publisher could ever track down and take action against the copyright infringer in the majority of cases, and even if they could, the cost and time it took would soon outweigh the slim additional income they might receive from the permissions fees they recouped. In any case, there might be a beneficial relationship between having these poems so widely available online and sales of the poet’s books. Paulo Coelho, for one, is an author who firmly believes that when his work is reproduced on blogs and other websites, it helps his sales.
So, there was generally a feeling that although authors ought to retain some ownership, there ought to be a greater recognition of the problems the Web creates for existing rights regimes and licensing arrangements, and that this ought to be reflected in more suitable, and subtle, legislation and mechanisms of enforcement. These should include the ability to discriminate better between different types of uses, and users, of intellectual property, so that rights holders wouldn’t need to charge an academic researcher the same as a media company, and a simplification of the process for requesting permissions. Convention and incentive need to replace legislation and enforcement. But in order for that to happen there must be simpler ways of licensing or clearing IP.
Fortunately, many of these ideas seem to chime with the recommendations of the Hargreaves report, and increasingly the attitudes of various rights-holders who are seeking to make it easier for their material to be copied and re-used. It seems that there is a broad movement towards greater consensus about the need for some changes to make the reproduction and transmission of intellectual property easier and simpler. Nobody wants to set aside ownership completely. We just want to make sharing better and simpler.
Aulis at dawn, the sea still, and the wind
Not even whispering.
The hunger of men waiting for war
Hangs in the silence, breathless. Calchas shrugs
What sense is there in questioning the Gods?
It was her father’s boast that cost Iphigenia’s life.
Her throat is white and soft as bread
Under his wrist. The garland in her hair.
The knife ready to strike and slice.
Flares, bucked ribbons, coil
Thrown by magnetic bursts
The surface boils, cells divide.
Like hearts of sunflowers, grasses, grains
Run, streams melt.
Mitosis, interference races to the Earth. Artemis leaps.
A pattern in the flight of birds acts like an augury
Like those Augustus saw at Actium.
And Romulus above the Palatine.
A shark, deep in the Pacific
And our thoughts collide. Just then.
Long strands of light dive through the water
Strobing the chancel, green with dusk,
Swirl and shimmer as the black fish swims.
The machine is seeing things, learning to talk,
Mapping space in signals, bit by bit.
We process metaphors the same way you read code
And one day you will have to live the way we do
With death in our eyes and calculations
That we cannot make clutching our chests.