As I noted in my endorsement of this book: “James W. Cortada and William Aspray’s brilliantly selected and crafted case studies are must-reads because they bring historical insight to issues of fake news, disinformation, and conspiracy theories of our digital age.”
There seems to be a pattern evolving around concerns over fake news – one that runs counter to more conventional expectations. Most people expect that raising concerns over fake news might lead to improvements in search, platforms, regulation, or consumer behavior that improve the quality and diversity of news. However, the opposite might be unfolding.
The story begins with the panic over fake news. It is a panic because most research on the actual use of online news suggests that people consult multiple sources and most often check news they see as questionable but important, such as by using search. This panic has been fueled by a focus on the production of fake news. Fake news is indeed produced, although the phenomenon is not new – that is one reason why search engines were invented. Far less attention has been directed at its consumption. When you look systematically at how Internet users consume news, such as information about politics, it is clear that the impacts of fake news are largely mitigated.
However, the mainstream media continue to promote the idea of fake news, casting mainstream news as the source of truth and fact, to the degree that politicians, regulators, and the public have become increasingly concerned, pressing online platforms to ‘do something’ about it. Internet platforms have done so by quite dramatically raising the prominence of mainstream news sources when people search for news online.
As a consequence, when you go online for news about what is going on in the world, you are increasingly likely to be steered to the headlines of the mainstream news media. If you wish to go beyond the headlines, you find yourself asked to pay for a subscription and go behind a paywall. This has already proven so effective that even academics are beginning to think subscription services are enjoying a renaissance of sorts. However, this increase is being driven by platforms and news aggregators prioritizing mainstream headlines to avoid the charge of promoting fake news. Thus, the concern over fake news is essentially creating advertising for subscription news services, with more providers moving to paywalls and existing subscription services raising their rates – doubling them in some cases.
So the Internet is becoming less of a source for diverse news, as stories in the long tail are pushed behind the headlines, and more of a source for the most popular headline news – the same news you hear on radio and TV. Will this undermine online-first news outlets? I believe it already has.
Therefore, I am worried that the panic over fake news is leading us toward no news beyond the major headline stories, leaving so much news uncovered. The thrust of actual research on the use of online news should undermine the panic over fake news, filter bubbles, and echo chambers, but journalists don’t read social science, and the story of fake news serves their interests.
Of course, I am simplifying a complex set of developments, but I believe this captures a pattern that is not being identified in the current fake news narrative. I am a news fan, subscribing to multiple print newspapers and an avid consumer of online news, which has been so complementary to the print news. If you recognize this tendency, you can hack through the headlines and search for specific topics and information, but don’t be surprised if you find yourself walled off from more information by pay services.
If you think this is wrong, fake, or exaggerated, let me know. I fear I am right about this, but I am open to being proven wrong, and think systematic research on this trend would be of value.
I had a fascinating and challenging week in Europe speaking about the Quello Center’s work on search and politics. The findings of our project, called ‘The Part Played by Search in Shaping Public Opinion’, suggested that concerns over fake news, echo chambers, and filter bubbles are ‘overhyped and underresearched’. The project was supported by Google, and the findings and methodology are publicly available online (see references), along with the slides I adapted for each of the particular talks. The slides are posted here: https://www.slideshare.net/WHDutton/search-and-politics-fake-news-echo-chambers-and-filter-bubbles-july2017
In Paris, on the 10th and 11th, I was able to speak at a UNESCO Knowledge Café for a seminar chaired by the Director for Freedom of Expression and Media Development, Guy Berger, for UNESCO staff, which included UNESCO’s Xianhong Hu. I then met with members of the French Audio Visual Regulator, the Conseil Supérieur de l’Audiovisuel (CSA); and then members of the Ministère de la Culture (Ministry of Culture); and gave a lecture at Sciences Po, which was jointly organized by Thierry Vedel for the MediaLab and CEVIPOF. I was also able to meet over lunch with a former colleague in the President’s office at the French National Commission on Informatics and Liberty (CNIL), which is central to data protection in France.
On the 12th, I was in Rome, where I first spoke at a roundtable over a wonderful lunch at the Centro Studi Americani – the Center for American Studies. That evening, I spoke on the Terrazza dei Cesari with members of YouTrend, an organization of political communicators in Italy, which was picked up by over a thousand on a Facebook Live video stream. The talk was sandwiched by an aperitif and dinner, and sequentially translated.
My last stop was in Berlin, where I met at the Ministry for Culture with representatives of the state media authorities, representing the German Länder. I finished my talks with a roundtable at the Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG – Germany’s first Internet institute), chaired by Professor Dr. Wolfgang Schulz and joined by Professor Dr. Dr. Ingolf Pernice. As a member of HIIG’s Advisory Committee, I was pleased to end my trip with a sense of the quality and diversity of the faculty, fellows, and visitors at the Institute.
This week was an incredible opportunity for me to convey the results of our research. I want to thank all of those who helped organize and attended these events; thank my colleagues on the project, including Grant Blank, Elizabeth Dubois, and Bibi Reisdorf, along with our graduate assistants, Sabrina Ahmed and Craig Robertson; and thank our colleagues at Google for their confidence in our project.
I must say that I was unable to convince many of those involved in these talks that the panics over fake news, filter bubbles and echo chambers have been overhyped. Despite evidence on the many ways that Internet users are likely to mitigate these problems, such as in consulting multiple sources of information about politics, many politicians, regulators and scholars remain very concerned.
I spoke to each group about the ways evidence can fail to change views on these issues – itself an example of how many divisions in society are due not to filtered or biased information, but to real differences of opinion. These panics are powerful for several reasons, including the attraction of technologically deterministic perspectives, the role of confirmatory self-selection or dismissal of evidence, and the role of the third-person effect – I’m okay, but others are likely to be fooled.
Dutton, W.H., Reisdorf, B.C., Dubois, E., and Blank, G. (2017), Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States, Quello Center Working Paper available on SSRN: http://ssrn.com/abstract=2960697
These concerns are much discussed, but have not yet been thoroughly studied. What research does exist has typically been limited to a single platform, such as Twitter or Facebook. Our study of search and politics in seven nations – which surveyed the United States, Britain, France, Germany, Italy, Poland and Spain in January 2017 – found these concerns to be overstated, if not wrong. In fact, many internet users trust search to help them find the best information, check other sources and discover new information in ways that can burst filter bubbles and open echo chambers.
We found that the fears surrounding search algorithms and social media are not irrelevant – there are problems for some users some of the time. However, they are exaggerated, creating unwarranted fears that could lead to inappropriate responses by users, regulators and policymakers.
The importance of searching
The survey findings demonstrate the importance of search results relative to other ways of getting information. When people are looking for information, they very often search the internet. Nearly two-thirds of users across our seven nations said they use a search engine to look for news online at least once a day. They view search results as no less accurate and reliable than other key sources, such as television news.
In line with that general finding, a search engine is the first place internet users go online for information about politics. Moreover, those internet users who are very interested in politics, and who participate in political activities online, are the most likely to use a search engine like Bing or Google to find information online about politics.
But crucially, those same users engaged in search are also very likely to get information about politics on other media, exposing themselves to diverse sources of information, which makes them more likely to encounter diverse viewpoints. Further, we found that people who are interested and involved in politics online are more likely to double-check questionable information they find on the internet and social media, including by searching online for additional sources in ways that will pop filter bubbles and break out of echo chambers.
Internet-savvy or not?
It’s not just politically interested people who have these helpful search habits: People who use the internet more often and have more practice searching online do so as well.
That leaves the least politically interested people and the least skilled internet users as most susceptible to fake news, filter bubbles and echo chambers online. These individuals could benefit from support and training in digital literacy.
However, for most people, internet searches are critical for checking the reliability and validity of information they come across, whether online, on social media, on traditional media or in everyday conversation. Our research shows that these internet users find search engines useful for checking facts, discovering new information, understanding others’ views on issues, exploring their own views and deciding how to vote.
We found that people in different countries do vary in how much they trust and rely on the internet and searches for information. For example, internet users in Germany, and to a lesser extent those in France and the United Kingdom, are more trusting in TV and radio news, and more skeptical of searches and online information. Internet users in Germany rate the reliability of search engines lower than those in all the other nations, with 44 percent saying search engines are reliable, compared with 50 to 57 percent across the other six countries.
In Poland, Italy and Spain, people trust traditional broadcast media less and are more reliant on, and trusting of, the internet and searching. Americans are in the middle; there were greater differences among the European countries than between Europe as a whole and the U.S. American internet users were so much more likely to consult multiple sources of information that we called them “media omnivores.”
Internet users generally rely on a diverse array of sources for political information. And they display a healthy skepticism, leading them to question information and check facts. Regulating the internet, as some have proposed, could undermine existing trust and introduce new questions about accuracy and bias in search results.
But panic over fake news, echo chambers and filter bubbles is exaggerated, and not supported by the evidence from users across seven countries.
I recently posted a short overview of the findings of one of our projects on fake news, filter bubbles, and echo chambers in The Conversation. All three are foci of panic over the potential political implications of new technologies, such as search algorithms and social media friending and de-friending mechanisms. Given the comments received and the worries expressed in those comments, the fake news panic trumps all the others – no question.
One reason is that it is so new. The public debate over fake news only began to arise during the 2016 elections in the US, though it quickly spread internationally. I’m sure I could be corrected on that, but I believe that is roughly the case.
Secondly, the definition – to the degree it is fair to speak of one for this concept – is being constantly enlarged and blurred by pundits and politicians referring to more and more ‘news’ as fake. In fact, ‘fake’ is becoming an almost viral term. There are many ways to characterize much of the news: some of it is patriotic journalism, some partisan, some misinformation, some just poor reporting, and so on. Yet more and more of the whole journalistic enterprise is being labelled as fake. Journalists are not so much the victims as among the major users of this term, increasingly characterizing mainstream media as real news and blogging and social media as the sources of fake news. In such ways, it has become a pejorative used to discredit the butt of the insult.
These are a few of the reasons why we did not use the term ‘fake news’ in our survey of Internet users. We asked other questions, such as how often they found wrong information on different media. That said, we found that a surprisingly large proportion of people tend to check information they believe to be suspect, such as by using a search engine or consulting other sources.
So despite the rising panic over fake news, I still believe it is under-researched and over-hyped.
A report we just completed for the Quello Center on ‘Search and Politics’ concluded that most people are not fooled by fake news, or trapped by filter bubbles or echo chambers. For example, those interested in politics and with some ability in using the Internet and search generally consult multiple sources for political information, and very often use search to check information they suspect to be wrong. It is a detailed report, so I hope you can read it and draw your own conclusions. But the responses I’ve received from readers are very appreciative of the report, yet go on to suggest that people remain in somewhat of a panic. Our findings have not assuaged their fears.
First, these threats tied to the Internet and social media appeal to common fears about technology being out of control. Langdon Winner’s book comes to mind. This is an enduring theme of technology studies, and you can see it being played out in this area. And it is coupled with underestimating the role users actually play online. You really can’t fool most Internet users most of the time, but most people worry that way too many are fooled.
This suggests that there might also be a role played by a third person effect, with many people believing that they themselves are not fooled by these threats, but that others are. I’m not fooled by fake news, for example, but others are. This may lead people to over-estimate the impact of these problems.
And, finally, there is a tendency for communication and technology scholars to believe that political conflicts can be solved simply by improving information and communication. I remember a quote from Ambassador Walter Annenberg at the Annenberg School, where I taught, to the effect that all problems can be solved by communication. However, many political conflicts result from real differences of opinion and interest, which will not be resolved by better communication. In fact, communication can sometimes clarify the deep differences and divisions that are at the heart of conflicts. So perhaps it is telling that many of those focused on filter bubbles, echo chambers and fake news come from the communication and technical communities rather than, say, political science. If only technologies of communication could be improved, we would all agree on … That is the myth.
Today’s New York Times provided a clear illustration of an impact of the rise of online news and associated cable and satellite news coverage around the clock. Could it be true that newspapers have given up on trying to report breaking news?
Maybe this was a bad news day, but the front page of today’s Sunday New York Times (19 March 2017) had virtually no ‘news’ – only essays or stories on conservatives trying to change the judiciary, the risks associated with SWAT teams serving search warrants, the perks of Uber versus taxi services, healthcare, the damage done by Boko Haram, and an obituary for Chuck Berry. All are interesting and valuable stories, but not one was what I would call hard or breaking news as I understand it. The closest was Chuck Berry’s obituary. For example, there was no coverage of the US Secretary of State’s visits in East Asia – only an essay on page 10 about the dangerous options available vis-à-vis North Korea.
Most studies of the impact of online news focus on the declining revenues and advertising in the newspaper industry, and the decline of print newspapers as more move only online. However, the greatest impact might well be on what editors believe is fit to print in the newspaper. If they are inevitably scooped by online news, then why publish news that is a day old? So the editors shift increasingly to analysis and opinion pieces on the news, rather than even try to surface new news.
In academia, a similar impact is apparent in book publishing, where I have long argued that while more books are published year by year, it is important to look at the content of books to see the real impact. In my own case, why would I put material in a book that is already available online, or for which more up-to-date information will be online before any book goes into print? So, I think about what would have a longer shelf-life as a book, and focus on key arguments, and the potential to send readers online for more facts on a particular case or event.
Interestingly, while so much angst in the US and worldwide is focused on the rise of fake news, which I have argued is not that new, the real problem might be the more basic demise of hard news reporting. Television news coverage is shifting more and more toward entertaining debates about the news, with less and less investment in coverage of breaking developments. Now print newspapers seem to be moving away from the reporting of real news to analysis of known developments, perhaps with some investigative reporting, but essentially the discussion of what is already known.
Of course, a valuable role of the reporter is to put facts into a larger and more meaningful context, and this is an aspect of what we see more of in the newspaper. But my worry is that newspapers are moving closer to the role of news magazines, which are themselves challenged by the pace of online news developments.
I would like to learn of more systematic research on any changes in the content of the news. But with increasing worry about trust in the authenticity of the news, it strikes me as worrisome that newspapers might well be retreating from their traditional role of sourcing original news and putting it into a broader context for their readers. Hopefully, my fears are not warranted. Instead of the threat of fake news, we may be facing the threat of less news, if not no news, from the sources we have relied on for decades.
Fake News is a Wonderful Headline but Not a Reason to Panic
I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposively misleading news stories, like the Pope endorsing Donald Trump, engage so many people, and have such an intuitive appeal, I should be riding this bandwagon. It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it may be useful to look back for some useful lessons learned from previous moral panics over the quality of information online.
Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor or interest. Nearly all journalism tries to do the same, particularly as journalism is moving further towards embracing the advocacy of particular points of view, versus trying to present the facts of an event, such as a decision or accident. In the case of fake news, facts are often manufactured to fit the argument, so fact checking is often an aspect of identifying fake news. And if you can make up the facts, it is likely to be more interesting than the reality. This is one reason for the popularity of some fake news stories.
It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.
Problems with Responding
There are several problems with responding to fake news.
First, except in the more egregious cases, it is often difficult to definitively know the facts of the case, not to mention what is ‘news’. Many fake news stories are focused on one or another conspiracy theory, and therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July of 2014 over a rebel controlled area of eastern Ukraine. In such cases in which there is a war on information, it is extremely difficult to immediately sort out the facts of the case. In the heat of election campaigns, it is also difficult. Imagine governments or Internet companies making these decisions in any liberal democratic nation.
Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms like Facebook to act as newspapers – editing and censoring stories online – would undermine all news, and the evolving democratic processes of news production and consumption that are thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web, and social media.
Lessons from the Internet and Web’s Short History
But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)
Anyone who used the Internet and the Web (which went public in 1991) in the 1990s will recall that the Web was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenges of finding good information in this huge garbage heap, users circulated URLs (web addresses) of pages that were worth reading.
A few key researchers developed what were called recommender sites, such as what Paul Resnick called the Platform for Internet Content Selection (PICS), which labeled sites to describe their content, such as ‘educational’ or ‘obscene’. PICS labels could be used to censor or filter content, but the promoters of PICS saw them primarily as a way to positively recommend rather than negatively censor content – for example, by highlighting sites labeled ‘educational’ or ‘news’. The idea was positive recommendation of what to read, rather than censorship of what a central provider deemed unfit to be read.
Of course, organized lists of valuable web sites evolved into some of the earliest search engines, and very rapidly, some brilliant search engines were invented that we use effortlessly now to find whatever we want to know online, such as news about an election.
The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines try to do this by recommending the best sites related to what a person is searching for, such as by analysis of the search terms in relation to the words and images on a page of content.
Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques for optimizing a site, such as embedding popular search terms in the background of a page – invisible to the reader but visible to a search engine – worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites trying to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites in the results of search engines.
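The hidden-keyword trick, and how an engine might penalize it, can be illustrated with a toy sketch. This is a hypothetical illustration of the general idea, not any real search engine’s algorithm: it simply flags pages whose machine-readable text contains many words a human reader never sees.

```python
# Toy sketch of detecting hidden-keyword stuffing (hypothetical logic,
# not any real search engine's algorithm). A gamed page embeds popular
# search terms that are invisible to readers but visible to crawlers.

from collections import Counter

def keyword_stuffing_score(visible_text: str, raw_text: str) -> float:
    """Fraction of words in the raw (crawled) page text that never
    appear in the text a human reader actually sees."""
    visible = Counter(visible_text.lower().split())
    raw = Counter(raw_text.lower().split())
    hidden = sum(n for word, n in raw.items() if word not in visible)
    total = sum(raw.values())
    return hidden / total if total else 0.0

# An honest page: the crawled text matches what the reader sees.
honest = keyword_stuffing_score("election results analysis",
                                "election results analysis")

# A gamed page: popular search terms stuffed into the background.
gamed = keyword_stuffing_score(
    "buy widgets here",
    "buy widgets here election trump news election news")

print(honest, gamed)  # honest is 0.0; the gamed page scores 0.625
```

A real engine would weigh many more signals, but the cat-and-mouse dynamic is the same: once a signal like this is penalized, site operators move on to subtler techniques, and the algorithms are refined again.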
New Social Media News Feeds
But what can we do about fake news circulated on social media – mainly platforms such as Facebook, but also email? The problems are largely concentrated here, since social media news provision is relatively less public, newer, and not as fully developed as the more mature search engines. Email is even less public. These interpersonal social networks might pose the most difficult problems, and are where fake news is likely to be less visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search engines used by social media for the provision of news get better, some problems will be solved; social media platforms are working on it. But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.
Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther. His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.
My concern is that many of the purported remedies to fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them is well suited to the Internet. You cannot regulate social media as if they were broadcasters – think of the time broadcast regulators spend considering a single viewer complaint. You cannot hold social media liable for stories, as if they were edited newspapers; this would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.
In the case of email and social media, the equivalent of recommender sites would be ways of supporting users in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information, and these could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, the left, and everywhere in between, from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on its face.
But I am trained as an academic and, by nature, skeptical. So while it might sound like a limp squid, one of the only useful approaches that does not threaten the democratic value of social media and email is to educate users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely – and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.