Talks in Europe on Quello Center’s Search and Politics Project

I had a fascinating and challenging week in Europe speaking about the Quello Center’s work on search and politics. The findings of our project, called ‘The Part Played by Search in Shaping Public Opinion’, suggested that concerns over fake news, echo chambers, and filter bubbles are ‘overhyped and underresearched’. The project was supported by Google, and the findings and methodology are publicly available online (see the references below), along with the slides, which I adapted for each talk and have posted here:

In Paris, on the 10th and 11th, I spoke at a UNESCO Knowledge Café, a seminar for UNESCO staff chaired by Guy Berger, the Director for Freedom of Expression and Media Development, and attended by UNESCO’s Xianhong Hu. I then met with members of the French audiovisual regulator, the Conseil Supérieur de l’Audiovisuel (CSA), and with members of the Ministère de la Culture (Ministry of Culture), and gave a lecture at Sciences Po, jointly organized by Thierry Vedel for the MediaLab and CEVIPOF. I was also able to meet over lunch with a former colleague in the President’s office at the French National Commission on Informatics and Liberty (CNIL), which is central to data protection in France.

On the 12th, I was in Rome, where I first spoke at a roundtable over a wonderful lunch at the Centro Studi Americani – the Center for American Studies. That evening, I spoke on the Terrazza dei Cesari with members of YouTrend, an organization of political communicators in Italy, in a talk that was picked up by over a thousand viewers on a Facebook Live video stream. The talk was sandwiched between an aperitif and dinner, and consecutively interpreted.

Centro Studi Americani

My last stop was in Berlin, where I met at the Ministry for Culture with representatives of the state media authorities, representing the German Länder. I finished my talks with a roundtable at the Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG – Germany’s first Internet institute), chaired by Professor Dr. Wolfgang Schulz and joined by Professor Dr. Dr. Ingolf Pernice. As a member of HIIG’s Advisory Committee, I was pleased to end my trip with a sense of the quality and diversity of the faculty, fellows, and visitors at the Institute.

This week was an incredible opportunity for me to convey the results of our research. I want to thank all of those who helped organize these events or attended them; thank my colleagues on the project, including Grant Blank, Elizabeth Dubois, and Bibi Reisdorf, along with our graduate assistants, Sabrina Ahmed and Craig Robertson; and thank our colleagues at Google for their confidence in our project.

I must say that I was unable to convince many of those involved in these talks that the panics over fake news, filter bubbles, and echo chambers have been overhyped. Despite evidence of the many ways Internet users are likely to mitigate these problems, such as by consulting multiple sources of information about politics, many politicians, regulators, and scholars remain very concerned.

With each group, I discussed the ways evidence can fail to change views on these issues, itself an example of how many divisions in society are due not to filtered or biased information but to real differences of opinion. These panics are powerful for several reasons, including the attraction of technologically deterministic perspectives, the confirmatory self-selection or dismissal of evidence, and the third-person effect – ‘I’m okay, but others are likely to be fooled.’


Dutton, W. H. Talking Points that Formed the Basis for the Talks in Europe:

Dutton, W.H., Reisdorf, B.C., Dubois, E., and Blank, G. (2017), Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States, Quello Center Working Paper available on SSRN:

Dutton, W.H. (2017), ‘Fake News, Echo Chambers, and Filter Bubbles: Underresearched and Overhyped’:

Dutton, W. H. (2017), ‘Bubblebusters’, NESTA.


Don’t Panic over Fake News

Fake News is a Wonderful Headline but Not a Reason to Panic

I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposely misleading news stories, like the story of the Pope endorsing Donald Trump, engage so many people and have such an intuitive appeal, I should be riding this bandwagon.[1] It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it may be useful to look back at previous moral panics over the quality of information online for some lessons learned.

Fake News

Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor or interest. Nearly all journalism tries to do the same, particularly as journalism moves further toward embracing the advocacy of particular points of view rather than trying to present the facts of an event, such as a decision or an accident. In the case of fake news, the facts themselves are often manufactured to fit the argument, so fact-checking is often part of identifying fake news. And a story whose facts are made up is likely to be more interesting than the reality, which is one reason for the popularity of some fake news stories.

It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.

Problems with Responding

There are several problems with attacking fake news.

First, except in the more egregious cases, it is often difficult to know the facts of a case definitively, not to mention what counts as ‘news’. Many fake news stories focus on one or another conspiracy theory and are therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July of 2014 over a rebel-controlled area of eastern Ukraine. In such cases, where there is a war over information, it is extremely difficult to sort out the facts immediately. In the heat of an election campaign, it is difficult as well. Imagine governments or Internet companies making these decisions in any liberal democratic nation.

Second, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms, like Facebook, to act as newspapers and edit and censor stories online would undermine all news, along with the evolving democratic processes of news production and consumption that are thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, the Web, or social media.

Lessons from the Internet and Web’s Short History

But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)

Anyone who used the Internet and Web in the 1990s (the Web having been introduced in 1991) will recall that it was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenge of finding good information in this huge garbage heap, users circulated the URLs (Web addresses) of pages that were worth reading.

A few key researchers developed what were called recommender systems, such as what Paul Resnick and his colleagues called the Platform for Internet Content Selection (PICS), which labeled sites to describe their content, such as ‘educational’ or ‘obscene’.[2] PICS could be used to censor or filter content, but its promoters saw it primarily as a way to positively recommend content, such as that labeled ‘educational’ or ‘news’, rather than to negatively censor it: positive recommendation of what to read, versus censorship of what a central provider determined unfit to be read.
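To make that distinction concrete, here is a minimal sketch in Python, using hypothetical sites and labels rather than the actual PICS label syntax, of how label-based recommendation differs from centralized blocking: content is selected because the user asked for a label, not removed because a provider disapproved of it.

```python
# A minimal sketch of PICS-style positive recommendation, assuming
# hypothetical sites and labels (this is not the actual PICS syntax).

# Labels a rating service might assign to each site.
LABELED_SITES = {
    "http://example.edu/history-lessons": {"educational"},
    "http://example.com/celebrity-gossip": {"entertainment"},
    "http://example.org/daily-wire-report": {"news"},
}

def recommend(sites: dict[str, set[str]], wanted_label: str) -> list[str]:
    """Return the sites that positively carry the requested label.

    The point of the design: content is surfaced because the user asked
    for a label, not blocked because a central provider disapproved.
    """
    return [url for url, labels in sites.items() if wanted_label in labels]

print(recommend(LABELED_SITES, "educational"))
# -> ['http://example.edu/history-lessons']
```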

Of course, organized lists of valuable Web sites evolved into some of the earliest search engines, and very rapidly some brilliant ones were invented that we now use effortlessly to find whatever we want to know online, such as news about an election.

The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines already try to address this by recommending the best sites related to what a person is searching for, for example by analyzing the search terms in relation to the words and images on a page of content.
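As a toy illustration of that idea (my own sketch, not any engine’s actual ranking algorithm), the following Python snippet scores hypothetical pages simply by how often the query terms appear in their text and returns them in descending order of relevance.

```python
# A toy illustration of term-based relevance ranking (hypothetical
# pages; real engines weigh far more signals than raw term counts).
from collections import Counter

PAGES = {
    "election-results": "official election results reported by district",
    "celebrity-gossip": "celebrity news and gossip of the week",
    "election-analysis": "analysis of the election and election turnout",
}

def rank(pages: dict[str, str], query: str) -> list[tuple[str, int]]:
    """Score each page by how often the query terms occur in its text."""
    terms = query.lower().split()
    scores = {}
    for name, text in pages.items():
        counts = Counter(text.lower().split())
        scores[name] = sum(counts[term] for term in terms)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank(PAGES, "election news"))
# -> [('election-analysis', 2), ('election-results', 1), ('celebrity-gossip', 1)]
```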

Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques for optimizing a site, such as embedding popular search terms in the background of a page, invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites that try to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites in search engine results.
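To suggest what such an algorithmic penalty might look like in the simplest possible terms, here is a deliberately naive Python sketch (again my own illustration, not any search engine’s actual method) that discounts pages dominated by a single repeated term:

```python
# A deliberately naive sketch of a keyword-stuffing penalty: if one
# term accounts for too large a share of a page's words, discount its
# relevance score. Real ranking systems use far richer signals.
from collections import Counter

def stuffing_penalty(text: str, threshold: float = 0.2) -> float:
    """Return a score multiplier in (0, 1]; 1.0 means no penalty."""
    words = text.lower().split()
    if not words:
        return 1.0
    top_share = Counter(words).most_common(1)[0][1] / len(words)
    # Halve the relevance of pages dominated by one repeated term.
    return 0.5 if top_share > threshold else 1.0

print(stuffing_penalty("election election election election results"))  # 0.5
print(stuffing_penalty("official election results by district"))        # 1.0
```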

New Social Media News Feeds

But what can we do about fake news circulated on social media, mainly on platforms such as Facebook, but also by email? The problems are largely concentrated here, since news provision on social media is less public, newer, and not as fully developed as it is on the more mature search engines. Email is even less public. These interpersonal social networks might pose the most difficult problems, and they are where fake news is likely to be least visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search engines that social media use for the provision of news get better, some problems will be solved, and the platforms are working on it.[3] But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.

Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther.[4] His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify them.

The Risks

My concern is that many of the purported remedies for fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them is well suited to the Internet. You cannot regulate social media as if they were broadcasters; think of the time broadcast regulators spend considering a single complaint from viewers. You cannot hold social media liable for stories, as if they were edited newspapers; this would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.

In the case of email and social media, the equivalent of recommender systems would be ways of supporting users in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information,[5] and these could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, the left, and everywhere in between, from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative, and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on its face.

But I am trained as an academic and skeptical by nature. So while it might sound like a damp squib, one of the few useful approaches that does not threaten the democratic value of social media and email is educating users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely, and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.


[1] List of Fake News Sites:

[2] Resnick, P., and Miller, J. (1996), ‘PICS: Internet Access Controls without Censorship’, Communications of the ACM, 39(10): 87-93.

[3] Walters, R. (2016), ‘Mark Zuckerberg responds to criticism over fake news on Facebook’, Financial Times:

[4] Phil Howard:

[5] B.S. Detector: