Action and Trade

May the Force Be with Us

Services from Google, Facebook and others are free of charge and efficient. However, they collect data about us on a large scale and use algorithms to steer our courses of action. Does this mean that we lose our autonomy and our freedom of action? The political dimension of the commercial interest in big data may be quite different.

By Andres Friedrichsmeier

Translated from the German by: Kathrin Ellwanger, Aneka Faiß, Eliot Reiniger, Mona Lang, Korbinian Feigl, Julia Hevesi, Ella Dering

 

LSD, hypnosis and deliberately induced concussions: CIA officials had shredded their files in time, but when the US Senate investigated the activities of its intelligence agencies in 1975, towards the end of the Vietnam War, shocking evidence was still discovered. The secret project MKUltra, initiated in 1953, tested the possibilities of mind control. From today’s perspective, this CIA programme sounds not only scary but also a little ridiculous. Speaking of mind control: do the intelligence agency NSA and companies like Facebook have more efficient tools nowadays than LSD and brain surgery? In 2014, Facebook shocked the public by admitting that it had conducted an emotional manipulation study on 689,003 users without their knowledge. The study showed that users whose news feeds Facebook had deliberately filtered of negative emotional content subsequently posted more positive emotional content themselves. The actual issue, however, is not the one-week experiment itself, but rather that Facebook filters and pre-sorts the news feeds of all its 1.5 billion users with machine-learning algorithms every single day. Plainly speaking, Facebook influences the emotions of about 20 percent of the world’s population. The US president does not even govern a quarter as many citizens. In contrast to Obama and Trump, Facebook’s self-learning algorithms neither have to assert themselves against hostile senators nor publicly document their decisions, let alone justify them. And yet they only come second in Dvorsky’s ranking of the top 10 world-dominating algorithms, lagging far behind Google’s ‘PageRank’.

Yet Google wants even more influence than it already has today with its 3.5 billion processed internet searches per day. When it comes to searching online, we know that Google’s algorithms not only influence what we find but also, via the autocomplete of the search input, what we search for. A prominent example is “Bettina Wulff + escort”. In the future, we will be enticed even more directly into pre-designed courses of action. The ‘Inbox’ service, released in May 2015, is the best example of this. It allows Google to analyse one’s entire email correspondence – to whom and to which emails one has replied so far, and in what way. From then on, Inbox offers an appropriately pre-formulated answer for every newly received email, which we can send back with a single click or automatically add to our calendar. It is ideal for stressful office jobs burdened with information overload. Google’s algorithms thus efficiently suggest how we communicate with others and which appointments we attend or cancel. For users pressed for time, this simply means that the algorithm helps determine their behaviour – and who does not feel work-related stress caused by emails and appointments?

Every user knows what they are getting into

Probably every Inbox user knows what they are getting into. It only becomes interesting at the supra-individual level – especially once we realise that the individual’s ability to consent to the terms and conditions of such services has been degraded to an empty legal fiction. As early as 2007, researchers estimated that it would take roughly 201 hours per year just to read the privacy agreements requiring approval for typical internet use. An average employee would thereby have spent 12 percent of their annual working time without having compared different providers, let alone renegotiated individual provisions.
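The arithmetic behind that 12-percent figure can be checked on the back of an envelope. A minimal sketch, assuming (the article does not say) a full-time workload of roughly 1,670 hours per year:

```python
# Back-of-the-envelope check of the figures above.
# Assumption (not from the article): a full-time employee works
# about 1,670 hours per year, i.e. roughly 209 eight-hour days.
HOURS_READING_POLICIES = 201   # estimated annual reading time (2007 study)
ANNUAL_WORKING_HOURS = 1_670   # hypothetical full-time workload

share = HOURS_READING_POLICIES / ANNUAL_WORKING_HOURS
print(f"{share:.0%} of annual working time")  # -> 12% of annual working time
```

Under that assumption the two numbers in the study are consistent with each other; a shorter working year would push the share even higher.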

Most people who play down the subject of big data by saying “that does not concern me personally” are right in a way. Not because their personal lives are unaffected – for a start, big data helps supermarkets to systematically evaluate buyer behaviour and price sensitivities – but because the relevance of the topic lies at the politico-social level, not with individual consumer sovereignty.

A personal example: when I joined a project team coordinated through a shared Google Calendar, I had the choice of complicating the work of the whole group for personal privacy reasons or simply clicking Google’s ‘Agree’ button. A calendar is only useful if I enter all my appointments. I agree to every change Google makes to its terms and conditions, because otherwise I would be without a calendar. So far, so mediocre. After all, I am using a handy service that is paid for only indirectly, through advertising fees – unlike my brother, who abstains from data. Nevertheless, his birthday is marked in my Google Calendar, I have given away his address via ‘Maps’, and all his emails to my Gmail account are analysed as well. Furthermore, from its knowledge about me and about other Google users who resemble my brother, Google deduces his shopping behaviour and price sensitivity reasonably accurately. As a result, many providers my brother buys from feel compelled to book advertisements with Google.

Although my brother never sees these advertisements – thanks to the ad blocker enabled in his browser – he pays for them constantly. The advertising costs are passed on through the prices of consumer goods, whether in the supermarket, in the electronics store or when ordering pizza. My brother even pays for my use of Google Calendar. Does he, as a non-user, at least escape the ‘filter bubble’ effect? The term refers to the fact that the algorithms of Facebook and Google, which primarily aim to optimise my user satisfaction, mainly show me things I like anyway. For the conservatives among us, non-conservative news is filtered out; football fans can meet inside a digital bubble of football friends, and so on. This gives cause for concern, but we should not make the mistake of judging the digital world by standards that have never been met outside of it. Ultimately, any bowling club is a kind of ‘filter bubble’, and we falsely consider ‘non-partisan’ media companies such as the Süddeutsche Zeitung, Die Welt or Focus – dependent on the advertising market and controlled by a few private owners – to be pillars of freedom of speech. We therefore cannot say for sure whether digital services actually restrict the pluralistic exchange of views or broaden it, as they did during the Arab Spring of 2010–11.

People are employed to deal with the 'penis problem'

But does the sociopolitical influence of Google and others take a specific direction? Widely debated is the so-called nipple censorship: an incredible amount of effort goes into teaching computers to differentiate between male and female nipples so that the latter can be removed from Google’s image search. The same applies to Facebook, where millions of dollars are invested in another unsolved issue, the ‘penis problem’. The money is spent on these subjects rather than on filtering out right-wing extremist hate campaigns. The kind of influence at work here becomes apparent in a thought experiment: what would the digital world look like if Google and Facebook had their headquarters not in California but in conservative Kansas (would the phrase ‘bloody hell’ still be allowed?) or in Guangzhou in southern China (would everyone publish their annual income on Facebook)? Conversely, how do we avoid overestimating this influence? By not simply extrapolating recent technical progress, as the contemporaries of the CIA experiment mentioned above did when they were absolutely certain that we would be living in colonies on the moon and Mars by 2015. The Facebook experiment cited above showed that the manipulated users shared only three percent more positive messages.

The algorithms discussed here are undoubtedly more powerful in their main function – that is, encouraging us to spend more time on Facebook. Recently this was achieved by a supposedly 83-percent-reliable facial recognition system that suggests tagging your friends even in photos where they are only photographed from behind. In terms of data privacy this is alarming, but does it endanger our autonomy? The question is badly put, because individuals cannot be remotely controlled against their will via facial recognition. Such remote control was still entirely imaginable for cinemagoers into the 1970s, as part of many Bond movies and as the main plot of the Dr. Mabuse films of the 1920s and 1960s. Big data and its algorithms, however, only work on the basis of probabilities – whether in 2015 or in 50 years. And as a rule they work on a gigantic mass of user profiles, because customising big data for a single person would be too complex. The reason becomes apparent when you try one of the few available desktop applications with machine-learning algorithms: they need a degree of human control to differentiate between analysis and redundant information.

Real people, for example, are needed to deal with the aforementioned ‘penis problem’. Big data customised for single users therefore asks them to cooperate voluntarily in verifying the algorithm’s correctness (“Please click if this suggestion was helpful”). But will it stay that way? Will the scenario of the movie ‘Minority Report’ become reality, in which potential criminals are punished before they even think about committing a crime? Probation officers in the US already use algorithms to decide whether a released convict will reoffend; the police even evaluate whole neighbourhoods this way. Or does big data merely continue something that began under the heading of dragnet searches (‘Rasterfahndung’) in the 1970s? A social-theoretical answer: clearly the latter. Human behaviour, like human language, is inherently ambiguous and inexplicit. Every sense, every meaning depends on how the persons involved interpret the social context. They change their interpretations dynamically, and their decisions are rarely consistent.

Inconsistent, however, only in ways that good big-data-based probabilities can capture. Big data can therefore know more about us than we do ourselves – but it will always just be guessing.

Only the free-of-charge culture made the rapid development of the internet possible

Knowing more about us than we know ourselves is, for Google and others, nothing less than a question of survival. If in the future we use more and more devices without displays, or with only tiny ones, they will want to live on commissions instead of pop-up ads.

My smartphone, then, could know when I would like to call a taxi at the train station or order from a pizzeria within walking distance. In that case I would consent and, if necessary, cooperate with the algorithm, which could semi-automatically pick a vendor registered with Google for me – probably for a small fee. Would I lose my autonomy through this? Not if it gains me time for more important things than comparing the prices of taxi services. But should we place the potential influence behind all this in the hands of only a few private companies? I do not know a single argument that supports doing so. USA Today recently calculated that 70 percent of the global internet economy is controlled by just five US companies.

This calculation can be questioned; what cannot is that trade in consumer-related big data tends to put economic principles out of action (big data for weather models is a completely different topic). The reason is called the ‘network effect’: services become more valuable the more people use them. Coordinating appointments via Google is only useful if my colleagues use the same service, and as long as my friends are on Facebook, nobody could ever convince me to use the German social media platform StudiVZ. The network effect also explains why a European anti-Google could never succeed: thanks to it, the global monopoly models are more productive. And it explains why only the free-of-charge culture typical of the big data business made the rapid internet development of recent decades possible, since most new services only become valuable if people take part at an early stage.
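The network effect described above is often formalised – an outside gloss, not something the article itself invokes – as Metcalfe’s law: a network’s value grows with the number of possible connections between its users, and thus roughly with the square of the user count. A minimal sketch:

```python
# Metcalfe's law (an assumption here, not named in the article):
# the value of a network grows with the number of possible
# user-to-user connections, n * (n - 1) / 2.

def potential_connections(n_users: int) -> int:
    """Number of distinct user pairs in a network of n_users."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the connection count,
# which is why an established platform is so hard to displace.
for n in (10, 20, 40):
    print(n, potential_connections(n))
```

On this reading, a latecomer with half the users offers only about a quarter of the value – which is the arithmetic behind the claim that a European anti-Google could never catch up.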

This is questionable from a market-economy perspective. Isn’t the principle of ‘goods for money’ the better option? Shouldn’t we get money in exchange for our data, as Jaron Lanier, winner of the 2014 Peace Prize of the German Book Trade, has demanded? However, our private data has no considerable cash value for Google; it merely makes the offering of free services possible. The market value of private data lies in the single-digit cent range or below, as can be checked for free with the Financial Times’ web calculator.

The cash flow of Google and others therefore results not from data sales but from fostering the development of digital user communities. Jean Tirole, winner of the 2014 Nobel Prize in Economics, has shown that Google and others command exclusive sources of income which they do not even exploit improperly: according to Rochet and Tirole, the monopolies of Google and Facebook have tended to lead to decreasing advertising rates. Google’s pricing policy is thus socially more acceptable than that of the manufacturers of medication for cancer, HIV or hepatitis, for example, who shamelessly exploit monopoly situations protected by patents.

One could say that such monopolies develop almost unavoidably as a result of the network effect and thus undermine the traditional concept of the market economy.

However, unlike Lanier and his enthusiastic publishers, we should not necessarily be appalled by this. After all, as Herman E. Daly and Joshua Farley show in their standard work Ecological Economics, it was no accident that the market economy and the fossil-fuel economy now leading us towards climate disaster developed simultaneously. Why should a business model of free mass access be considered bad if it leads to the development of virtual communities – in contrast to the traditional model, which disregards social and ecological issues? Now, don’t laugh, but could there be a grain of truth in Google’s corporate motto ‘Don’t be evil’? In any case, research on developing countries tells us about the relationship between private enterprise models and their social consequences: companies engaged in raw-material extraction have no business interest in establishing social relations on site, because they move on as soon as the raw materials are depleted.

“…he remains the villain who must be deprived of his powers.”

A Ford factory is busier if its labourers can afford a Ford too. A whole era (roughly 1914 to 1973) was accordingly named Fordism after this business interest. ‘Fordism’ has since become a byword for how the manufacturers’ business interest in the consumption of goods shaped society – the period called the Wirtschaftswunder (economic miracle) in German and the trente glorieuses (glorious thirty) in French. Today, Google & Co may still need manufacturers as advertising customers, but they also depend on the success of virtual communities. It is thus becoming apparent that their business interests could be more compatible with the cultivation of social relations. The architecture of their headquarters is a clear sign of this: while banks and industrial companies demonstrate their claim to power with phallus-like skyscrapers, Google & Co build campuses, modelled on universities, the centres of international exchange networks. For more evidence, beyond Google & Co’s much-discussed dislike of dividends, consult Jeremy Rifkin’s book The Zero Marginal Cost Society. What would Google-ism look like if the business interest of Google & Co in flourishing virtual communities left its mark on society? We do not know, and we really should not ‘know’ yet but rather shape it ourselves. For one thing is obvious: there would be a greater chance of an oil-independent world, of a system not based on resource consumption. Not that the chances would automatically be good, but the result would at least lead less directly into climate catastrophe than the present course.

Does that mean everything is fine? I would like to recall Fritz Lang’s first film version of Dr. Mabuse from 1922, a character who aims at creating a better society free from “corruption and decay”. But he gains too much power to manipulate others and thus remains the villain who must be deprived of his powers. His rival, a lawyer, succeeds only temporarily; in the long term, this could only be achieved through political means. A Google that doesn’t want to be ‘evil’ (and is now called Alphabet) would not necessarily need to be broken up. It could be transformed into a foundation democratically co-determined by its users all over the world – into something similar to Mozilla, the famous developer of Firefox, but as an organisation with more powerful advisory committees than ICANN, which coordinates the assignment of domain names and IP addresses on the internet. Similarly, rather than selling his majority shareholding to third parties for the benefit of colleges or aid programmes, Facebook chairman Mark Zuckerberg should transform his entire company into a democratic, non-profit entity.

 

Dr. Andres Friedrichsmeier is an organisational sociologist and teaches at the University of Münster in the Department of Communication (IfK). Among other things, he researches the use of management instruments in the public sector and works in counselling and further education. He had to start working with machine-learning algorithms when one of his research projects generated far more data than humans could ever process. (Editorial update, 2017: he now works as an organisational sociologist in the Department of Education of the Free State of Thuringia.)

More articles on the topics of Trade, Action and Debate can be found not only online but also in our factory magazine Action and Trade, which can be downloaded for free. As always, it is finely illustrated and easy to read on tablet computers and screens – and contains all articles and images as well as additional figures and citations.
