Talk on the politics of the Fifth Estate at University Institute of Lisbon, March 2017

I had a quick but engaging trip to Portugal to speak with students and faculty at CIES at the University Institute of Lisbon. I have given a number of talks on my concept of the Fifth Estate, but there are always new issues emerging that enable me to help students see the transformations around the Internet in light of current developments. In this case, they were most interested in the election of Donald Trump and the implications for Europe of his Presidency. I will post a link to the slides for my talk.

It was so rewarding to speak with the students, who were most appreciative. I don’t think students realize how much people like myself value hearing from students who have read their work. So, many thanks to my colleagues and the students of the University Institute of Lisbon for their feedback. You made my long trip even more worthwhile.

I also had the opportunity to meet with my wonderful colleague, Gustavo Cardoso, a Professor of Media, Technology and Society at ISCTE – Lisbon University Institute. I met Gustavo when he was the adviser on information society policies to the Presidency of the Portuguese Republic from 1996-2006, and have continued to work with him through the World Internet Project and more, such as his contribution to the Oxford Handbook of Internet Studies (OUP 2014).

Professor Gustavo Cardoso and Bill
Gustavo Cardoso, 2017

Orwell’s 1984: Must Reading for the Digital Age

I have not taught an undergraduate course on the Internet and society for quite some time, but when I did, at USC, I had George Orwell’s Nineteen Eighty-Four on the required reading list. I remember one of the last classes I taught, in 1998. It is memorable because my students – after questioning why they should read a book written in 1948, and published in 1949 (how could it be relevant?) – came into class after seeing Enemy of the State, starring Will Smith. The plot follows Smith’s character as he is chased by the bad guys, who are aided at every turn by satellite surveillance technologies tracking a sensor planted on him. The students’ reaction was: “Professor Dutton. This is exactly like 1984!”

Even in 1998, I had learned the sad news that 1984 had been removed from most required reading lists in US high schools. That was one of the reasons I put it on my own list: I was worried that my students might never have read this book, and I was right.

So it is very heartening to me that 1984, along with other dystopian novels, is making a strong comeback.* They are indeed still relevant. Some attribute the rise of dystopian novels like 1984 to the election of President Donald Trump, but I believe it goes well beyond any single individual, and is tied to the information revealed by Edward Snowden, particularly around mass surveillance. The technologies envisioned by Orwell, like the telescreen, have been surpassed, but the idea of trying to sense what people are thinking, and not just what they are doing, from their location, movements, and associates, remains central to understanding contemporary debates over surveillance in the digital age. Even Enemy of the State was concerned with mere surveillance – tracking and capturing Will Smith’s character. Orwell saw the ultimate objective as discerning what a person was thinking, and whether they were about to commit a thoughtcrime.

I first read 1984 in high school, and recall wondering if I would even be alive in 1984 to see if Orwell was a futurist. Long past 1984, I still wonder if Orwell will be proven right in my lifetime, if he has not already captured today’s threat better than any other novelist. It should be must reading for anyone living in today’s digital age.

*http://www.pbs.org/newshour/art/george-orwells-1984-best-seller-heres-resonates-now/

Russian Hacking and the Certainty Trough

Views on Russian Hacking: In a Certainty Trough?

I have been amazed by the level of consensus – among politicians, the press, and the directors of security agencies – over the origins and motivations behind the Russian hacking of the 2016 presidential election. Seldom are security agencies willing to confirm or deny security allegations, much less promote them*, even when cyber security experts vary in their certainty over the exact details. Of course there are many interpretations of what we are seeing, including arguments that this is simply a responsible press, partisan politics, reactions to the President-elect, or a clear demonstration of what has been called, in a study of a strand of Israeli journalism, ‘patriotic’ journalism.** For example, you can hear journalists and politicians not only demonizing WikiLeaks founder Julian Assange, the messenger, but also arguing that those who do not accept the consensus are virtually enemies of the state.

One useful theoretical perspective that might help make sense of this unfolding display of consensus is the concept of the ‘certainty trough’, anchored in Donald MacKenzie’s research*** on missile systems and those who had different levels of certainty about their performance, such as their accuracy in hitting the targets they were designed to strike. He was trying to explain how the generals, for example, could be so certain of the missiles’ performance, when those most directly involved in developing the systems were less certain of how well they would perform.

The figure applies MacKenzie’s framework to the hacking case. My contention is that you can see aspects of the certainty trough in accounts of the Russian hacking of John Podesta’s emails, which led to damaging revelations about the Democratic National Committee (DNC) and the Clinton Foundation during the election, such as the resignation of Representative Debbie Wasserman Schultz from her DNC post. On the one hand, the security experts most directly involved in, and knowledgeable about, these issues have less certainty than the politicians and journalists about how sophisticated these hacks of an email account were, and whether clear intentions can be attributed to an ecology of multiple actors. At the other extreme, the public is the least knowledgeable about cyber security, and likely to have less certainty over what happened (see Figure). Put simply, it is not the case that the more you know, the more certain you are about the facts of the case.

The upshot of this possibility is that the journalists and politicians involved in this issue should not demonize those who are less certain about who did what to whom in this case. The critics of the skeptics might well be sitting in the certainty trough.

References

*ICA (2017), ‘Intelligence Community Assessment: Assessing Russian Activities and Intentions in Recent US Elections’, Intelligence Community Assessment, 01D, 6 January: https://www.dni.gov/files/documents/ICA_2017_01.pdf

**Avashalom Ginosar, ‘Understanding Patriotic Journalism: Culture, Ideology and Professional Behavior’, see: https://www.academia.edu/20610610/Understanding_Patriotic_Journalism_Culture_Ideology_and_Professional_Behavior

***For Donald MacKenzie’s work on the certainty trough, see: http://modeldiscussion.blogspot.com/2007/01/mackenzies-certainty-trough-nuclear.html or his summary of this work in Dutton, W. H. (1999), Society on the Line (Oxford: OUP), pages 43-46.

Twitter Foreign Policy and the Rise of Digital Diplomacy

Recent Chinese concerns over ‘Twitter Foreign Policy’ are just the tip of the iceberg of the ways in which the Internet has been enabling diplomacy to be reconfigured, for better or worse. Over a decade ago, Richard Grant, a diplomat from New Zealand, addressed these issues in a paper I helped him with at the OII.[1] Drawing from Richard’s paper, there are at least five ways in which the Internet and social media are reconfiguring diplomacy:

  1. Changing who participates in diplomacy, creating a degree of openness and transparency, for example through leaks and whistleblowers like Edward Snowden, that puts diplomacy in the public eye, establishing an entire field of “public diplomacy”;
  2. Creating new sources of information for diplomacy, such as when mobile Internet videos become key to what is known about an event of international significance;
  3. Speeding up diplomatic processes in response to the immediacy of news about events in the online world that require more rapid responses in order to be more effective, such as in challenging misinformation;
  4. Pushing diplomacy to be more event-led, when the world knows about events that diplomats cannot ignore; and
  5. Eroding borders, such as by enabling diplomats to communicate locally or globally from anywhere at any time.

These transformations do not diminish the need for diplomats to serve a critical role as intermediaries. If anything, the Internet makes it possible for diplomats to be where they need to be to facilitate face-to-face interpersonal communication, making the geography of diplomacy more, rather than less, important. However, it poses serious challenges for adapting diplomacy to a global digital village, such as how to adapt the hierarchical bureaucracies of diplomacy to respond to more agile networks, and how best to ‘join the conversation’ on social media.

[1] Richard Grant (2004), “The Democratization of Diplomacy: Negotiating with the Internet,” OII Research Report No. 5. Oxford, UK: Oxford Internet Institute, University of Oxford. See http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1325241  Also discussed in a talk I gave last year on Mexico in the New Internet World, see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2788392

Email Disrupting Life at Home?

Email Disrupting Life at Home? Careful What You Ask For

In France and other nations there is discussion of somehow banning email after 6pm or outside of working hours (see the BBC coverage linked below). Perhaps this could help provide a better work-life balance, or prevent email from competing with family for attention at home. But this raises not only problems of implementation, but also the reverse: shall we start policing the personal use of communication and information technologies like email in the office?

Implementation would be impossible. You could get email at home or outside of work hours, but also work-related tweets, texts, messages, calls, video calls, WeChat messages, social media posts, and more. Email is only one avenue into the household, and it is declining in use relative to social media and other newer media. Implementation would also amount to regulatory overreach, with public regulation reaching into the use of media within households, private companies, NGOs, and so on.

But the greatest threat is that this will go both ways. Companies, government departments, NGOs and others will want their employees and managers to stop using electronic media for personal reasons while at work, or during the work day, whether checking on your children, making reservations, or reading personal email.

The first dissertation I supervised on corporate email was in 1980. One of the key issues in those early days, when email was beginning to be used in business instead of telegrams or faxes, was a worry that employees would use email for personal reasons that had nothing to do with work. My response then and now has always been that this should not be a worry. Personal use of email at work is helpful for the morale and time management of people in the workplace, and – it goes both ways – email enables employees to handle some business at home. Especially in the early days of email, personal use helped bring business people online, as then and now, many resisted the use of online media for business purposes. There is a positive synergy (sorry to use that word) between the use of communication technologies at home and at work – a win-win.

Encourage and teach individuals to manage their time and self-regulate their engagement with work from home and vice versa, but don’t try to regulate something for which no one size fits all.

BBC news coverage: http://www.bbc.com/news/magazine-26958079

Forthcoming Ukrainian Publication on Distributed Intelligence

Aspects of my work on the role of distributed intelligence in problem solving, what I have called distributed collaborative networks, were published in English as Dutton, W. H. (2015), ‘Lend Me Your Expertise: Citizen Sourcing Advice to Government’, pp. 247-63 in Johnston, E. W. (ed.), Governance in the Information Era: Theory and Practice of Policy Informatics. Abingdon, UK: Taylor and Francis Routledge. I am delighted to see a revision translated for a Ukrainian publication, entitled Advertising and Public Relations of the XXI Century: Reviews and Researchers. Collective monograph. Edited by Bezchotnikova S.V. (Mariupol: Mariupol State University, 2016): 82-103.


Don’t Panic over Fake News

Fake News is a Wonderful Headline but Not a Reason to Panic

I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 presidential election. And because purposively misleading news stories, like the Pope endorsing Donald Trump, engage so many people, and have such an intuitive appeal, I should be riding this bandwagon.[1] It could be good for my own research area of Internet studies. But I can’t. We have been here before, and it may be useful to look back for some lessons learned from previous moral panics over the quality of information online.

Fake News

Fake news typically uses catchy headlines to lure readers into a story that is made up to serve a particular actor or interest. Nearly all journalism tries to do the same, particularly as journalism moves further towards embracing the advocacy of particular points of view, versus trying to present the facts of an event, such as a decision or accident. In the case of fake news, facts are often manufactured to fit the argument, so fact checking is often part of identifying fake news. And if you can make up the facts, the story is likely to be more interesting than the reality. This is one reason for the popularity of some fake news stories.

It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.

Problems with Responding

There are problems with attacking the problem of fake news.

First, except in the more egregious cases, it is often difficult to definitively know the facts of the case, not to mention what counts as ‘news’. Many fake news stories are focused on one or another conspiracy theory, and are therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July 2014 over a rebel-controlled area of eastern Ukraine. In such cases, where there is a war over information, it is extremely difficult to immediately sort out the facts. It is also difficult in the heat of an election campaign. Imagine governments or Internet companies making these decisions in any liberal democratic nation.

Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms, like Facebook, to become newspapers that edit and censor stories online would undermine all news, along with the evolving democratic processes of news production and consumption, which are thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web, and social media.

Lessons from the Internet and Web’s Short History

But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)

Anyone who used the Internet and the Web (invented in 1991) during the 1990s will recall that the Web was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenges of finding good information in this huge garbage heap, users circulated the URLs (web addresses) of web pages that were worth reading.

A few key researchers developed what were called recommender sites, such as what Paul Resnick and his colleagues called the Platform for Internet Content Selection (PICS), which labeled sites to describe their content, such as ‘educational’ or ‘obscene’.[2] PICS could be used to censor or filter content, but its promoters saw it primarily as a way to positively recommend rather than negatively censor content, such as that labeled ‘educational’ or ‘news’ – positive recommendations of what to read, versus censorship of what a central provider determined unfit to be read.
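The recommendation-rather-than-censorship logic can be illustrated with a toy sketch. To be clear, the label vocabulary and function below are my own invention for illustration; real PICS labels followed a formal W3C rating syntax, not Python sets:

```python
# Toy illustration of PICS-style content labels used for positive
# recommendation rather than censorship. The label names here
# ("educational", "news", "entertainment") are invented examples.

def recommend(sites, wanted_labels):
    """Return only the sites carrying at least one desired label."""
    return [url for url, labels in sites.items()
            if labels & wanted_labels]  # set intersection is non-empty

labeled_sites = {
    "http://example.edu/physics": {"educational"},
    "http://example.com/tabloid": {"entertainment"},
    "http://example.org/daily":   {"news", "educational"},
}

picks = recommend(labeled_sites, {"educational", "news"})
```

The point of the sketch is the direction of the filter: unlabeled or unwanted sites are simply not surfaced, rather than being centrally blocked.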

Of course, organized lists of valuable web sites evolved into some of the earliest search engines, and very rapidly, some brilliant search engines were invented that we use effortlessly now to find whatever we want to know online, such as news about an election.

The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines try to do this by recommending the best sites related to what a person is searching for, such as by analysis of the search terms in relation to the words and images on a page of content.
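That matching of search terms to page content can be sketched crudely. A real engine combines many more signals (link analysis, freshness, user behavior); the scoring function below is purely illustrative and of my own devising:

```python
# Toy model of ranking pages by the overlap between query terms and
# the words on each page. Real search engines use far richer signals;
# this only illustrates the basic term-matching idea.

def score(query, page_text):
    """Fraction of query terms that appear in the page text."""
    terms = set(query.lower().split())
    words = set(page_text.lower().split())
    return len(terms & words) / len(terms)

pages = {
    "a": "election results and news about the presidential election",
    "b": "recipes for a quick weeknight dinner",
}

# Rank page ids from best to worst match for the query.
ranked = sorted(pages, key=lambda p: score("election news", pages[p]),
                reverse=True)
```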

Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques to optimize a site, such as embedding popular search terms in the background of a page, invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites trying to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites in the results of search engines.

New Social Media News Feeds

But what can we do about fake news circulated on social media, mainly on platforms such as Facebook, but also by email? The problems are largely focused here, since social media news provision is relatively less public, newer, and not as fully developed as the more mature search engines. And email is even less public. These interpersonal social networks might pose the most difficult problems, and they are where fake news is likely to be less visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search tools used by social media for the provision of news get better, some problems will be solved. Social media platforms are working on it.[3] But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.

Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther.[4] His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.

The Risks

My concern is that many of the purported remedies for fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them is well suited to the Internet. You cannot regulate social media as if they were broadcasters; think of the time spent by broadcast regulators considering a single complaint from viewers. You cannot hold social media liable for stories, as if they were edited newspapers; this would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.

In the case of email and social media, the equivalent of recommender sites are ways in which users might be supported in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information.[5] These could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, the left, and everywhere in between, from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative, and healthy, even though I accept very little of it as hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on its face.

But I am trained as an academic and, by nature, skeptical. So while it might sound like a damp squib, one of the only useful approaches that does not threaten the democratic value of social media and email is to educate users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely – and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.

Notes

[1] List of Fake News Sites: http://nymag.com/selectall/2016/11/fake-facebook-news-sites-to-avoid.html

[2] Resnick, P., and Miller, J. (1996), ‘PICS: Internet Access Controls without Censorship’, Communications of the ACM, 39(10): 87-93.

[3] Walters, R. (2016), ‘Mark Zuckerberg responds to criticism over fake news on Facebook’, Financial Times: https://www.ft.com/content/80aacd2e-ae79-11e6-a37c-f4a01f1b0fa1?sectionid=home

[4] Phil Howard: https://www.oii.ox.ac.uk/is-social-media-killing-democracy/

[5] B.S. Detector: http://lifehacker.com/b-s-detector-lets-you-know-when-youre-reading-a-fake-n-1789084038