Tactical disinformation, while hardly a new phenomenon, has never been this calculated, algorithmic, or organized in the history of mass communication. Today’s media environment is severely impaired by the strategic dissemination of fake news, by trolls herding masses on social media platforms, and by filter bubbles reinforcing already homogenous zones of consensus.
The liberal promise of the Internet as a distributed mass communication tool thinning the line between producer and audience has once and for all turned out to be so much hot air. After all, the Internet, especially in recent years, has rather been prone to the broadcast effect, in an atmosphere where the line between fact and fiction has been critically blurred. This televisionization of the Internet laid the groundwork for the Putins, Erdogans, and Trumps of the world to effectively exhaust our attention spans. What’s more, with central powers having managed to decentralize their reach in the manufactured noise, we now live in an era of “post-truth,” in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.
A small amount of “fake news” is better described as disinformatzya. Its goal is not to persuade readers of its truth so much as to raise doubt in the reader that anything is true. We’re not used to disinformatzya in the US, but it’s been quite common not only in Russia but also in Turkey, where Erdogan has manufactured fake news designed to reduce Turkish trust in Twitter, trying to disable it as a vehicle for organized opposition to his leadership. The long-term effect of disinformatzya is reduced faith in institutions of all sorts: the press in particular, but also government, banks, NGOs, etc. Who benefits from this doubt? People who already have power benefit from a population that’s disempowered, frustrated, and confused. And highly charismatic leaders who promise guidance away from failed institutions benefit personally from this mistrust.
Disinformation offers content that is not necessarily fake, but that presents biased or incomplete views of events in order to have a persuasive effect on the reader. When systematically produced, disinformation causes continuous doubt about facts, and over time such doubt grows into mistrust of institutions. Zuckerman explains the larger effects of propaganda:
The medium term effect of propaganda is polarization, as we stop seeing our political opponents as reasonable people we disagree with, but as people who are so wrong and misguided that we couldn’t possibly find common ground with them. In the long term, propaganda destroys democracy, because it silences dissent and calcifies the parties currently in power.
Fact checking as a countermeasure to disinformation is practiced by both mainstream and alternative media outlets. Hundreds of fact checks were published during the Brexit referendum and the US elections this year; however, none of them seemed to matter significantly. As widely discussed, one reason might be that our polarized social networks increasingly constrain us to live in our own constructed realities, reinforced by recommender systems and news feeds curated by black-box algorithms. Filter bubbles on social media are more real than ever, as demonstrated by Gilad Lotan’s meticulous analysis of how bubbles develop on Twitter, and by the MIT Media Lab’s Lab for Social Machines mapping of how Trump and Clinton supporters live in their own Twitter worlds. Moreover, even when facts gather enough momentum to leave an echo chamber, moving from one isolated social cluster to another, they evaporate rapidly as a result of organized trolling.
Tim O'Reilly’s article “Media in the Age of Algorithms” clarifies the problems of curating in a world of infinite information and limited attention, arguing that the “truth signal” is in the metadata, not the data. It explains how Google filters disinformation not by looking at the content of web pages, but by utilizing metadata signals harvested from links between pages (via PageRank) and from the behavior of users, including those trying to game the system. Meanwhile, the algorithm Facebook uses to curate news feeds seems to prioritize engagement over truth. The challenge, O’Reilly concludes, is how to build algorithms that take into account “truth” as well as popularity. Indeed, Facebook and, increasingly, Twitter prefer algorithmically curated feeds over chronological timelines in order to filter the signal out of the noise. This allows them to save space in a user’s attention span, making room for new advertising real estate. And this is only half the problem. Social media platforms are public spaces as much as shopping malls are. Thus, there have been many reported cases of platforms censoring unpleasant facts and shutting down the accounts of activists and journalists, especially when a platform complies with oppressive governments.
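The link-based signal O’Reilly describes can be illustrated with a toy PageRank computation. This is a minimal sketch of the general idea, not Google’s production algorithm; the page names and link graph below are invented for illustration.

```python
# Toy link graph: each page maps to the pages it links out to.
# A hypothetical spam farm links out but receives no inbound links.
links = {
    "news-site": ["wire-service", "fact-checker"],
    "fact-checker": ["wire-service"],
    "wire-service": ["news-site"],
    "spam-farm": ["news-site"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Rank pages by inbound links (power iteration), ignoring page content."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank(links)
# The spam farm earns no rank from other pages and ends up lowest,
# regardless of what its pages actually say.
```

The point of the sketch is that the ranking never inspects page content: a page’s rank depends only on who links to it, which is exactly the kind of metadata signal O’Reilly contrasts with the data itself.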
Reg Chua of Thomson Reuters reminds us of Google’s new feature to promote fact checking in search results by labeling fact-check articles. Google asks fact-checkers to use Schema.org’s “ClaimReview” markup in their article pages so that their sites can be reviewed and promoted accordingly. Asking publishers to use markup is a good direction to take, because it would allow other media platforms to also harvest such markup, getting fact checks in front of even more people.
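In practice, this markup is a small block of structured data embedded in the fact-check article’s page. The sketch below shows the general shape of Schema.org’s ClaimReview vocabulary in JSON-LD; all names, dates, and URLs are placeholders, and the exact required properties are specified by the Schema.org and Google documentation.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example-factchecker.org/checks/example-claim",
  "datePublished": "2016-11-20",
  "author": { "@type": "Organization", "name": "Example Fact-Checker" },
  "claimReviewed": "An example claim being checked",
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Person", "name": "Example Speaker" }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 1,
    "bestRating": 5,
    "worstRating": 1,
    "alternateName": "False"
  }
}
</script>
```

Because the verdict, the claim, and the checking organization are machine-readable fields rather than free text, any platform that crawls the page can surface the fact check without parsing the article itself.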
However, Chua rightly argues that this would barely make a dent in the torrent of misleading or highly partisan news that isn’t susceptible to classic fact checking. Certainly, we have broader problems than platform biases or fact checkability. How can self-censorship be detected by an algorithm or be treated by a platform policy? How does a curation algorithm judge an established news channel that acts as the voice of the state when its parent company has a large contract with the government? How does an algorithm curate facts that cannot even be published because the journalists are jailed and the media office shut down? These are extreme cases that we see today, for example in Turkey and other oppressive states, where the daily news agenda is forcefully monopolized by top-down disinformation. In fact, such extreme conditions are starting to emerge elsewhere; Jay Rosen of NYU just wrote that winter is coming for journalism under Trump.
These circumstances described as post-truth are here to stay, because those who already hold power benefit immensely from them. Our capacity for shock will remain constant, with today’s breaking political event shocking us as much as last week’s, even as these events’ cumulative effects on society escalate persistently. Furthermore, our moments of shock will become almost routine, hypernormalized as a result of ever-deepening doubt, frustration, and confusion. This situation will only advance to the profit of the status quo, unless we confront it now and take action to overcome such systematic disinformation.
Reclaiming our attention span
Established news organizations such as the Washington Post, the Guardian, and Reuters, as well as dedicated fact-checkers such as PolitiFact, FullFact, and Teyit, have been publishing hundreds of fact checks about news and political statements, but their work is not necessarily organized as semantically structured data. The same is true of most of the work done by investigative outlets such as ProPublica, The Intercept, Carbon Brief, Correctiv, ICIJ, and many others. Civic data initiatives around the world, on the other hand, already diverge from these traditional organizations in their output: they don’t just publish their research as a report, but open it to the public as a database.
However, these efforts are not as coordinated as they should be. There are no bridges between such efforts, no systematic feedback loops to burst the media bubbles. Overcoming such organized disinformation will not be easy, but we should start from wherever we are. We can reclaim our attention grid if we invest in three actions, and incorporate them into our daily habits:
1. As experts of our own pools of knowledge, partial as they may be, we each hold isolated bits and pieces of data, within our reach and of our own making. We should systematically link these blocks of information to one another at scale, and together create a bigger picture of the pressing issues we struggle to understand. This will allow us to gain positions of influence.
2. We should use interfaces that enable us to continuously make sense of how convoluted power relations work and explore complex issues that impact us and our communities.
3. We should have the ability to connect facts to action whenever possible, so that we help close the feedback loops between the Internet and the city.
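The first of these actions, linking isolated blocks of information, can be sketched in a few lines of code: two hypothetical researchers each hold a partial edge list, and only the merged graph reveals connections that neither dataset shows alone. All entities and relations below are invented for illustration.

```python
# Two hypothetical, partial datasets of (source, relation, target) facts.
media_research = [
    ("Holding A", "owns", "TV Channel X"),
    ("Holding A", "owns", "Newspaper Y"),
]
tender_research = [
    ("Government", "awarded tender to", "Holding A"),
]

def build_graph(*datasets):
    """Merge edge lists into one adjacency map of the combined graph."""
    graph = {}
    for dataset in datasets:
        for source, relation, target in dataset:
            graph.setdefault(source, []).append((relation, target))
    return graph

def reachable(graph, start):
    """Collect every node reachable from `start` by following edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for _, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen

graph = build_graph(media_research, tender_research)
# Only the merged graph connects the government, via Holding A,
# to the media outlets it indirectly reaches.
```

Neither researcher’s dataset, taken alone, links the government to the media outlets; the connection only becomes visible once the two edge lists are joined, which is the bigger picture that linking data at scale is meant to produce.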
For this cause, we’ve been building Graph Commons, a collaborative platform for connecting data points and making, analyzing, and publishing networked data maps. Our platform has mostly been used by journalists, activists, advocates, lawyers, artists, designers, curators, technologists, academics, cultural institutions, and civil society organizations. Though the platform is in its early days, we already share a network with communities creating their own graph commons as they organize information against organized disinformation. We want to share some of the information hubs created on Graph Commons, legitimate steps toward overcoming the struggles of the post-truth era. Here are some of the graphs published on pressing issues around the globe:
Mapping Media Ownership
Thanks to their lobbying efforts, owners of established media outlets have been granted the concession to receive public tenders from the government. This map exposes media patrons as profit makers in the construction and energy sectors, a dire conflict of interest, given that the press should be independent from market forces. The data research and mapping is part of the Networks of Dispossession project, maintained by a collective of volunteers in Turkey since June 2013.
ISIS Network in Turkey
Journalists at the Birgun Newspaper developed the most complete picture of the ISIS network operating in Turkey. Despite the Turkish government’s effort to cover up any information about the ISIS militants, the Birgun team meticulously researched and linked the isolated bits and pieces of data in the police records and court documents about the ISIS bombings. Their map “ISIS Antep Network” systematically confronts the organized disinformation of the pro-government mainstream media.
Syrian Refugees and NGOs
Journalist Melih Cilga built the first-ever map of the local and international NGOs that support Syrian refugees in Turkey. Neither the government nor the UN provides such detailed information, so Cilga started a map and solicited data contributions from organizations such as Doctors Without Borders and the Helsinki Citizens’ Assembly about their own support activities in the field. The map has become a reference for those who want to reach the supporting organizations or study refugee conditions.
Lobbyists in the Swiss Parliament
LobbyWatch.ch is a platform for transparent politics that has been monitoring and mapping lobbying activities in the Swiss Parliament, profiling members of parliament by their relationships to various industries, from defense to pharmaceuticals, as well as to international organizations. They research MPs’ direct ties and uncover interconnections through indirect relationships. Their findings, along with their research data, are periodically published in their magazine, LobbyWatch.ch.
These are the early steps of linking our data research to one another in order to defeat organized disinformation, gain positions of influence, and reclaim our attention grid. You can view more maps and build your own data projects on the Graph Commons platform.