Rodrigo Ochigame & James Holston, Filtering Dissent, NLR 99, May–June 2016

https://doi.org/10.64590/vrb


Public discourse is increasingly mediated by proprietary software systems owned by a handful of major corporations. Google, Facebook, Twitter and YouTube claim billions of active users for their social media platforms, which automatically run filtering algorithms to determine what information is displayed to those users on their feeds. A feed is typically organized as an ordered list of items. Filtering algorithms select which items to include and how to order them. Far from being neutral or objective, these algorithms are powerful intermediaries that prioritize certain voices over others. An algorithm that controls what information rises to the top and what gets suppressed is a kind of gatekeeper that manages the flow of data according to whatever values are written into its code. In the vast majority of cases, platforms do not inform users about the filtering logics they employ—still less offer them control over those filters. As a ubiquitous, automated, powerful, and yet largely secret and unexamined form of information control, this filtering process deserves more critical attention.footnote1 Its implications for euphoric predictions about political mobilization in a new information age—exemplified by talk of ‘Facebook revolutions’ in the Arab world—have yet to be fully explored.
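To make the mechanism concrete, the following is a minimal sketch in Python of what the paragraph above describes: a feed as an ordered list of items, with a filtering step that decides which items are included and in what order. It illustrates the general logic only; the item fields, threshold and scores are hypothetical assumptions, not any platform's actual code.

# Illustrative sketch only: a feed as an ordered list of items, with a
# filtering step that selects which items appear and in what order.
# Fields, threshold and scores are hypothetical, not any platform's code.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    author: str
    score: float  # assigned by some opaque scoring procedure

def build_feed(candidates: list[Item], max_length: int = 50) -> list[Item]:
    # Selection: exclude items whose score falls below an arbitrary threshold.
    included = [item for item in candidates if item.score > 0.1]
    # Ordering: highest-scoring items occupy the top, most visible positions.
    included.sort(key=lambda item: item.score, reverse=True)
    return included[:max_length]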

There is little doubt that filtering algorithms can serve political purposes effectively. In 2013, Facebook researchers conducted experiments to test whether manipulations of its algorithm could change user moods and voting behaviour, varying the number of posts containing positive or negative emotional words in the feeds of 689,003 users.footnote2 They claimed to have found evidence of ‘massive-scale emotional contagion’; that is, people who saw posts with either more positive or more negative words were more likely to write posts with the same emotional bias. In another experiment, during the 2010 US congressional elections, Facebook inserted an item into the feeds of 60 million users that encouraged them to vote.footnote3 Its researchers then cross-referenced the names of users with actual voting records and concluded that users with manipulated feeds were more likely to vote: they even claimed that the manipulation had increased turnout by 340,000. If such manipulation was directed towards specific social and political groups, which is already possible through the paid sponsorship of filtering, it could determine the outcome of an election.footnote4 Significant attempts to sway elections in several Latin American countries through more straightforwardly criminal abuses of social media have already been documented.footnote5

Some platforms employ a combination of algorithmic filters and human curators. The latter are typically low-wage contractors, whose involvement recently became the subject of a major controversy: in May 2016, former Facebook ‘news curators’—young American journalists subcontracted through Accenture—anonymously accused the platform of routinely suppressing right-wing content in its ‘trending topics’, which appear as a list of news items separate from the main feed, and which supposedly prioritize the most ‘popular’ news topics of the day.footnote6 American conservatives jumped on the accusations, claiming that Facebook has a liberal bias, and prompting an inquiry from the Republican chair of the Senate Commerce Committee. In its defence, Facebook responded that the human curators merely ‘review’ stories that are ‘surfaced by an algorithm’—as if algorithmic filtering automatically assures neutrality—while claiming to stand ‘for a global community . . . giving all people a voice, for a free flow of ideas and culture across nations’.footnote7

There are deep flaws with both the conservatives’ charge and Facebook’s response. The official list of ‘1,000 trusted sources’ for trending topics actually includes many right-wing news outlets, but very few on the left.footnote8 Moreover, there have been more serious and better-documented cases of censorship by Facebook ‘content moderators’ that have been largely neglected by the mainstream press. In 2012, for example, a former moderator leaked Facebook’s list of abuse standards, whose ‘international compliance’ section prohibited any content critical of the Turkish government or Kemal Atatürk, or in support of the Kurdistan Workers’ Party.footnote9 This censorship occurred not in the small box of trending topics, but in the main feed. Thus, in comparison with the suppression of leftist dissent, the conservatives’ charge is weak in both substance and evidence. It is also striking that the issue of Facebook’s non-neutrality in the selection of news topics was raised by the revelation that humans are involved in the editorial process; the implication throughout the controversy has often been that the use of filtering algorithms is unbiased and objective. As we will demonstrate, algorithmic filtering routinely suppresses some political perspectives and promotes others, independently of human ‘editorial’ intervention.

In other words, overt censorship of the internet—for example, server takedown, seizure of domain names, denial of service and editorial manipulation—is not necessary to control the flow of information for political purposes. Algorithmic filtering can accomplish the same end implicitly and continuously through its logics of promotion and suppression.footnote10 In the algorithmic control of information, there are no clearly identifiable censors or explicit acts of censorship: the filtering is automated and inconspicuous, with a tangled chain of actors (computer scientists, lines of code, private corporations and user preferences). This complex process systematically limits the diversity of voices online and in many cases suppresses certain kinds of speech. Although the outcome may be viewed as tantamount to censorship, we need to broaden our conceptual framework to take account of the specific logics that are built into the selection, distribution and display of information online.

In what follows, we will describe how filtering algorithms work on the leading social media platforms, before going on to explain why those platforms have adopted particular filtering logics, and how those logics structure a political economy of information control based primarily on advertising and selling consumer products. Political activists regularly use such platforms for outreach and mobilization. What are the consequences of relying on commercial logics to manage political speech? We show the impact of algorithmic filtering on a contemporary social conflict, the land disputes between agribusiness and the Guarani and Kaiowá peoples in Mato Grosso do Sul, Brazil. The predominant filtering logics result in various forms of information promotion and suppression that negatively affect indigenous activists and benefit the agribusiness lobby—but we also show how activists can sometimes strengthen their voices by circumventing those logics in creative ways. In conclusion, we will propose a number of strategies to subvert the predominant logics of information control and to nurture alternatives that would enable a more democratic circulation of information online. Given the overwhelming importance of online mediation for social and political life, this is an urgent task.

How does algorithmic filtering work? What are its predominant logics today?footnote11 Filtering algorithms typically determine a selection and order of items in a feed by calculating numerical scores for each item in a database based on user actions. If an item has a high score, its position will be higher and therefore more visible. A recent Facebook study demonstrates that items in top positions are more likely to be clicked on.footnote12 Platforms gather data for the calculation of feed positions from the surveillance of user actions. The constant tracking of clicks, browsing histories and communication patterns provides the data on which algorithms operate. Some data consist of direct user input such as clicks of buttons, including ‘likes’ on Facebook and ‘retweets’ on Twitter. Other data involve sophisticated tracking of involuntary input, such as how much time a user spends viewing each item before scrolling down. In some cases, surveillance reaches beyond the platform itself. Installed on many websites as promotional tools, Facebook’s ‘like’ and Twitter’s ‘tweet’ buttons also run background operations to track all visitors to those sites. Both companies use this surreptitiously obtained information for profiling, advertising, filtering and other purposes. Algorithmic filtering makes such surveillance profitable.
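The score calculation itself can be sketched in the same spirit: a hypothetical function that combines tracked user actions (‘likes’, shares, viewing time) and paid sponsorship into a single number by which items are then ranked. The signals and weights below are assumptions chosen for illustration, not a reconstruction of any platform's formula.

# Illustrative sketch only: a hypothetical scoring function combining
# surveillance-derived engagement signals into the number that determines
# an item's feed position. Signals and weights are invented for illustration.

def score_item(likes: int, shares: int, avg_view_seconds: float,
               sponsored: bool) -> float:
    score = 1.0 * likes + 2.0 * shares + 0.5 * avg_view_seconds
    if sponsored:
        # Paid sponsorship can lift an item's position regardless of
        # organic engagement.
        score *= 1.5
    return score

posts = [
    {"id": "a", "likes": 120, "shares": 10, "view": 4.2, "sponsored": False},
    {"id": "b", "likes": 15, "shares": 1, "view": 9.0, "sponsored": True},
]
# Whatever values are written into the weights decide which items rise to the top.
ranked = sorted(
    posts,
    key=lambda p: score_item(p["likes"], p["shares"], p["view"], p["sponsored"]),
    reverse=True,
)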