Should Tweeting Politicians be able to Block Users?

An interesting debate has been opened up by lawyers who have argued that President Trump should not be able to block Twitter users from following and replying to his tweets. I assume this issue concerns his personal account @realDonaldTrump (32M followers), but the same issue would arise over his newer official account as President, @POTUS (almost 19M followers).


Apparently, the President has blocked users who may have made rude or critical comments on one or more of his Twitter posts. Regardless of the specifics of Donald Trump’s tweets, and of the specific individuals blocked, the general question is: should any American politician who tweets be able to block any user without violating that user’s First Amendment rights? I would say yes, but others, including the lawyers posing this question, would disagree.

I would think that any user has a right to block any other user, particularly one who appears to be malicious, a bot, or simply obnoxious. I’d argue this on the basis that these are the affordances of Twitter, and that the rules of the site are – or should be – known by its users. Moreover, the potential for blocking is a means of maintaining some level of civility on one’s social media. Having rude or obnoxious users posting harassing comments could frighten other users off the site, and thereby undermine a space for dialogue and the provision of information. If there is no way for a social media site to moderate its users, its very survival is at risk.

I actually argued this in the mid-1990s, when the issue surrounded electronic bulletin boards and some of the first public forums, such as Santa Monica, California’s Public Electronic Network (PEN).* Essentially, I maintained that any democratic forum is governed by rules, such as Robert’s Rules of Order for many face-to-face meetings. Such rules evolved in response to the difficulties of conducting meetings without rules. Some people will speak too long and not take turns. Some will insult or talk over the speaker. Democratic communication requires some rules, even though this may sound somewhat ironic. As long as participants know the rules in advance, rules of order seem a legitimate means of enabling expression. Any rule suppresses some expression in order to enable more equitable, democratic access to a meeting. Obviously, limiting a tweet to 140 characters is a restriction on speech, but it has fostered a rich medium for political communication.

In this sense, blocking a Twitter user is a means of moderation, and if it is known in advance, and not used in an arbitrary or discriminatory way, it should be permitted. That said, I will post a Twitter poll and let you know what respondents believe. Bryan M. Sullivan (2017), an attorney, argues a very different position in his Forbes article.** I respectfully disagree, but wonder what the Twitter community thinks, though it is easy to guess that it will be on the side of not being blocked. But please think about it before you decide.

Reference

*Dutton, W. H. (1996), ‘Network Rules of Order: Regulating Speech in Public Electronic Fora,’ Media, Culture, and Society, 18 (2), 269-90. Reprinted in David, M., and Millward, P. (2014) (eds), Researching Society Online. (London: Sage), pp. 269-90.

**Sullivan, B. (2017), ‘Blocked by the President: Are Trump’s Twitter Practices Violating Free Speech?’, Forbes, available here: https://www.forbes.com/sites/legalentertainment/2017/06/08/blocked-by-the-president-are-trumps-twitter-practices-violating-free-speech/#40fe73043d57

A New Internet Institute to Rise in Berlin: Congratulations

Delighted to hear the announcement that the German Ministry for Education and Research will support a German Internet Institute. It will be based in Berlin and be called the Internet Institute for the Networked Society, or the Internet-Institut für die vernetzte Gesellschaft. The ministry has committed 50 million euros over five years, on a scheme roughly similar to the UK government’s initial funding for the Oxford Internet Institute (OII) at Oxford University, which was matched by a major gift from Dame Stephanie Shirley.

The OII was founded in 2001 as the first department at a major university focused on multi-disciplinary studies of the Internet. It complemented Harvard’s Berkman Center, which was focused on law and policy in its early years. 2001 was a time when the Internet was still dismissed by some academics as a fad. Since the OII’s founding, the study of the Internet has become one of the most burgeoning fields in the social sciences (Dutton 2013). I am pleased to see that the name of the new institute suggests it will be, like the OII, firmly planted in the social sciences, with many opportunities for collaboration across all relevant fields. I am also pleased that the new institute appears to build on the Alexander von Humboldt Institute for Internet and Society (HIIG), which spearheaded the development of a network of Internet research centers. Clearly, the new institute could make Berlin the center for Internet studies.

A Map of Internet Research Centers from NoC

I am certain that many groups of academics competed for this grant, and that many will have been disappointed with the outcome. However, adding a major new center for Internet studies is going to lift the growing number of centers and academics focused on the economic, societal and political shaping and implications of the Internet. And all of the scholars who put their efforts into competing proposals are likely to have many great ideas to continue pursuing.

So, my colleagues and I welcome the leaders and academics of the Internet Institute for the Networked Society to the world of Internet studies. The social and economic implications of the Internet are raising many technical, policy, and governance issues, from inequalities to fake news and more. Quite seriously, the world needs your institute along with many others to help shape responses to these issues in ways that ensure that the Internet continues to play a positive role in society.

I, along with others, am only now learning about this development. I look forward to hearing more in due course, and welcome any comments or corrections to this information – but the news is too great to hold back.

Reference

Dutton, W. H. (ed.) (2013), The Oxford Handbook of Internet Studies. Oxford: Oxford University Press (paperback edition 2014).

More information: an Announcement from AoIR: https://aoir.org/welcome_gii/ 

Also: https://www.bmbf.de/de/aufbau-eines-deutschen-internet-institut-2934.html 

Fake News May Trump Other Current Panics over the Internet and Social Media

I recently posted in The Conversation a short overview of the findings of one of our projects on fake news, filter bubbles, and echo chambers. All three are foci of panic over the potential political implications of new technologies, such as search algorithms and social media friending and de-friending mechanisms. Given the comments received and the worries expressed in them, the fake news panic trumps all the others – no question.

Why?

One reason is that it is so new. The public debate over fake news only began to arise during the 2016 elections in the US, though it quickly spread internationally. I’m sure I could be corrected on that, but I believe that is roughly the case.

Secondly, the definition – to the degree that it is fair to speak of a definition of this concept – is constantly being enlarged and blurred by pundits and politicians referring to more and more ‘news’ as fake. In fact, ‘fake’ is becoming an almost viral term. There are many ways to characterize much of the news: some of it is patriotic journalism, some partisan, some misinformation, some just poor reporting, and so on. But more and more of the whole journalistic enterprise is being labelled as fake. Journalists, however, are not so much the victims as among the major users of this term, increasingly characterizing mainstream media as real news and blogging and social media as the sources of fake news. In such ways, it has become a pejorative term used to discredit the butt of the insult.

These are a few of the reasons why we did not use the term ‘fake news’ in our survey of Internet users. We asked other questions, such as how often they found wrong information on different media. That said, we found that a surprisingly large proportion of people tend to check information they believe to be suspect, such as by using a search engine or consulting other sources.

So despite the rising panic over fake news, I still believe it is under-researched and over-hyped.

Notes

Short note on our study is here.

The full report of our study is here.

Are Newspapers Surrendering News Coverage? The Big Impact of Online News

Today’s New York Times provided a clear illustration of one impact of the rise of online news and of round-the-clock cable and satellite news coverage. Could it be true that newspapers have given up on trying to report breaking news?


Maybe this was a bad news day, but the front page of the Sunday New York Times of 19 March 2017 had virtually no ‘news’ – only essays or stories on conservatives trying to change the judiciary, the risks associated with SWAT teams serving search warrants, the perks of Uber versus taxi services, healthcare, the damage done by Boko Haram, and an obituary for Chuck Berry. All are interesting and valuable stories, but not one was what I would call hard or breaking news. The closest was Chuck Berry’s obituary. For example, there was no coverage of the US Secretary of State’s visits in East Asia, only an essay on page 10 about the dangerous options available vis-à-vis North Korea.

Most studies of the impact of online news focus on the declining revenues and advertising in the newspaper industry, and on the decline of print newspapers as more move online only. However, the greatest impact might well be on what editors believe is fit to print in the newspaper. If they are inevitably scooped by online news, then why publish news that is a day old? So editors shift increasingly to analysis and opinion pieces on the news, rather than even trying to surface new news.

In academia, a similar impact is apparent in book publishing, where I have long argued that while more books are published year by year, it is important to look at the content of books to see the real impact. In my own case, why would I put material in a book that is already available online, or for which more up-to-date information will be online before any book goes into print? So, I think about what would have a longer shelf-life as a book, and focus on key arguments, and the potential to send readers online for more facts on a particular case or event.

Interestingly, while so much angst in the US and worldwide is focused on the rise of fake news, which I have argued is not that new, the real problem might be the more basic demise of hard news reporting. Television news coverage is shifting more and more towards entertaining debates about the news, and less and less towards investment in coverage of breaking developments. Now print newspapers seem to be moving away from the reporting of real news towards analysis of known developments, perhaps with some investigative reporting, but essentially the discussion of what is already known.

Of course, a valuable role of the reporter is to put facts into a larger and more meaningful context, and this is an aspect of what we see more of in newspapers. But my worry is that they are moving closer to the role of news magazines, which are themselves challenged by the pace of online news developments.

I would like to learn of more systematic research on any changes in the content of the news, but with increasing worry about trust in the authenticity of the news, it strikes me as worrisome that newspapers might well be retreating from their traditional role of sourcing original news and putting it into a broader context for their readers. Hopefully, my fears are not warranted. Beyond the threat of fake news, we may be facing the threat of less news, if not no news, from the sources we have relied on for decades.

 

Talk on the politics of the Fifth Estate at University Institute of Lisbon, March 2017

I had a quick but engaging trip to Portugal to speak with students and faculty at CIES at the University Institute of Lisbon. I have given a number of talks on my concept of the Fifth Estate, but there are always new issues emerging that enable me to help students see the transformations around the Internet in light of current developments. In this case, they were most interested in the election of Donald Trump and the implications for Europe of his Presidency. I will post a link to the slides for my talk.

It was so rewarding to speak with the students, who were most appreciative. I don’t think students realize how much people like myself value hearing from students who have read their work. So, many thanks to my colleagues and the students of the University Institute of Lisbon for their feedback. You made my long trip even more worthwhile.

I also had the opportunity to meet with my wonderful colleague Gustavo Cardoso, a Professor of Media, Technology and Society at ISCTE – Lisbon University Institute. I met Gustavo when he was the adviser on information society policies for the Presidency of the Portuguese Republic from 1996-2006, and have continued to work with him through the World Internet Project and more, such as his contribution to the Oxford Handbook of Internet Studies (OUP 2014).

Professor Gustavo Cardoso and Bill, 2017

Don’t Panic over Fake News

Fake News is a Wonderful Headline but Not a Reason to Panic

I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposively misleading news stories, like the Pope endorsing Donald Trump, engage so many people, and have such an intuitive appeal, I should be riding this bandwagon.[1] It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it may be useful to look back for some lessons learned from previous moral panics over the quality of information online.

Fake News

Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor or interest. Nearly all journalism tries to do the same, particularly as it moves further towards embracing the advocacy of particular points of view rather than trying to present the facts of an event, such as a decision or accident. In the case of fake news, the facts are often manufactured to fit the argument, so fact checking is often an aspect of identifying fake news. And if you can make up the facts, the story is likely to be more interesting than the reality. This is one reason for the popularity of some fake news stories.

It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.

Problems with Responding

There are problems with any effort to attack fake news.

First, except in the more egregious cases, it is often difficult to definitively know the facts of the case, not to mention what counts as ‘news’. Many fake news stories are focused on one or another conspiracy theory, and are therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July 2014 over a rebel-controlled area of eastern Ukraine. In such cases, where there is a war over information, it is extremely difficult to immediately sort out the facts. It is also difficult in the heat of election campaigns. Imagine governments or Internet companies making these decisions in any liberal democratic nation.

Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms, like Facebook, to act as newspapers and edit and censor stories online would undermine all news, and the evolving democratic processes of news production and consumption that are thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web and social media.

Lessons from the Internet and Web’s Short History

But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)

Anyone who used the Internet and Web (invented in 1991) in the 1990s will recall that it was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenges of finding good information in this huge garbage heap, users circulated the URLs (web addresses) of web pages that were worth reading.

A few key researchers developed what were called recommender sites, such as what Paul Resnick and colleagues called the Platform for Internet Content Selection (PICS), which labelled sites to describe their content, such as ‘educational’ or ‘obscene’.[2] PICS could be used to censor or filter content, but its promoters saw it primarily as a way to positively recommend rather than negatively censor content, such as content labelled ‘educational’ or ‘news’. The emphasis was on positive recommendations of what to read, rather than censorship of what a central provider determined was not fit to be read.
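To make that distinction concrete, here is a minimal sketch in Python of label-based recommendation in the spirit of PICS. The site addresses, labels, and the recommend function are hypothetical illustrations of the idea, not the actual PICS label syntax or any real service: the point is simply that a reader’s software can put forward content carrying labels the reader asked for, without blocking anything.

```python
# Toy sketch of label-based recommendation (illustrative only, not real PICS syntax).
# Sites carry descriptive labels; the reader's software recommends rather than blocks.

SITE_LABELS = {
    "https://example.edu/civics": {"educational", "news"},
    "https://example.com/gossip": {"entertainment"},
    "https://example.net/hoax": {"unverified"},
}

def recommend(sites, wanted_labels):
    """Return sites carrying at least one of the labels the reader asked for."""
    return [url for url, labels in sites.items() if labels & wanted_labels]

# A reader asks for 'educational' or 'news' content; nothing is censored,
# the other sites simply are not put forward.
print(recommend(SITE_LABELS, {"educational", "news"}))
```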

Of course, organized lists of valuable web sites evolved into some of the earliest search engines, and very rapidly, some brilliant search engines were invented that we use effortlessly now to find whatever we want to know online, such as news about an election.

The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines try to do this by recommending the best sites related to what a person is searching for, such as by analysis of the search terms in relation to the words and images on a page of content.

Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques for optimizing a site, such as embedding popular search terms in the background of a page so that they are invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites that try to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites, for example, in the results of search engines.
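As a rough illustration of that kind of refinement, here is a toy sketch in Python. It is not any real search engine’s algorithm; the function name, weights, and example pages are hypothetical. It simply shows how a ranking score might reward query terms that readers can actually see while discounting terms stuffed invisibly into a page.

```python
# Toy illustration (not a real search engine's algorithm) of a ranking score
# that discounts pages stuffing invisible keywords.

from collections import Counter

def relevance_score(query_terms, visible_text, hidden_text):
    """Score a page for a query, discounting query terms hidden from readers."""
    visible_counts = Counter(visible_text.lower().split())
    hidden_counts = Counter(hidden_text.lower().split())
    score = 0.0
    for term in query_terms:
        term = term.lower()
        score += visible_counts[term]        # reward terms readers can actually see
        score -= 2.0 * hidden_counts[term]   # penalize invisible keyword stuffing
    return max(score, 0.0)

# An honest page mentioning the query visibly outscores a page stuffing it invisibly.
print(relevance_score(["election"], "analysis of the election results", ""))
print(relevance_score(["election"], "buy cheap widgets", "election election election"))
```

On this hypothetical scoring, the keyword-stuffed page drops to zero, which is the general pattern the paragraph above describes: gaming techniques work until the ranking is refined to penalize them.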

New Social Media News Feeds

But what can we do about fake news circulated on social media, mainly on platforms such as Facebook, but also by email? The problems are largely focused here, since social media news provision is relatively less public, newer, and not as fully developed as the more mature search engines. And email is even less public. These interpersonal social networks might pose the most difficult problems, and here fake news is likely to be less visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search engines used by social media for the provision of news get better, some problems will be solved. Social media platforms are working on it.[3] But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.

Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther.[4] His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.

The Risks

My concern is that many of the purported remedies for fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them are well suited to the Internet. You cannot regulate social media as if they were broadcasters; think of the time spent by broadcast regulators considering a single complaint from viewers. You cannot hold social media liable for stories, as if they were an edited newspaper; this would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.

In the case of email and social media, the equivalent of recommender sites are ways in which users might be supported in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information.[5] These could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, the left and everywhere in between, and from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on its face.

But I am trained as an academic and am, by nature, skeptical. So while it might sound like a limp squid, one of the only useful approaches that does not threaten the democratic value of social media and email is to educate users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely, and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.

Notes

[1] List of Fake News Sites: http://nymag.com/selectall/2016/11/fake-facebook-news-sites-to-avoid.html

[2] Resnick, P., and Miller, J. (1996), ‘PICS: Internet Access Controls without Censorship’, Communications of the ACM, 39(10): 87-93.

[3] Walters, R. (2016), ‘Mark Zuckerberg responds to criticism over fake news on Facebook’, Financial Times: https://www.ft.com/content/80aacd2e-ae79-11e6-a37c-f4a01f1b0fa1?sectionid=home

[4] Phil Howard: https://www.oii.ox.ac.uk/is-social-media-killing-democracy/

[5] B.S. Detector: http://lifehacker.com/b-s-detector-lets-you-know-when-youre-reading-a-fake-n-1789084038

 

10th Anniversary of OII’s DPhil in Information, Communication & the Social Sciences

It was a real honour today to speak with some of the alumni (a new word for Oxford) of the Oxford Internet Institute’s DPhil programme. A number came together to celebrate the 10th anniversary of the DPhil. It began four seemingly long years after I became the OII’s founding director in 2002. So while I have retired from Oxford, it was wonderful to return virtually to congratulate these graduates on their degrees.

The programme, like the OII itself, was hatched through four years of discussions around how the Institute (which is a department at Oxford University) should move into teaching. Immediately after my arrival we began organizing the OII’s Summer Doctoral Programme (SDP), which was an instant success and continues to draw top doctoral students from across the world who want to hone their thesis through an intensive summer programme with other doctoral students focused on Internet studies. The positive experience we had with this programme led us to move quickly to set up the DPhil – and four years is relatively quick in Oxford time.

As I told our alumni, the quality of our doctoral students has been largely responsible for the esteem the OII was able to gain across the university and colleges of Oxford. That and the international visibility of the OII enabled the department to later develop our Masters programme, and continue to attract excellent faculty and students from around the world.

I am certain the OII DPhil programme has progressed and will continue to progress since I left Oxford in 2014, such as by adding strong faculty like Phil Howard and Gina Neff. However, I believe its early success was supported by four key principles that were part of our founding mission:

First, it was anchored in the social sciences. The OII is a department within the Division of Social Sciences at Oxford, which includes the Law Faculty. In 2002, and even since, this made us relatively unusual, given that so many universities, particularly in the USA, viewed study of the Internet as an aspect of computer science and engineering. It is increasingly clear that Internet issues are multidisciplinary, and need a strong social science component that the social sciences should be well equipped to contribute. Many social science faculty are moving into Internet studies, which has become a burgeoning field, but the OII planted Internet studies squarely in the social sciences.

Secondly, our DPhil emphasized methods from the beginning. We needed to focus on methods to be respected across the social sciences in Oxford. But we also knew that the OII could actually move the social sciences forward in such areas as online research, later digital social science, and big data analytics as applied to the study of society. The OII did indeed help move the methods of the social sciences at Oxford into the digital age, such as through its work on e-Science and digital social research.

Thirdly, while it is somewhat of a cliché that research and teaching can complement each other, this was always the vision for the OII DPhil programme. And it happened in ways more valuable than we anticipated.

Finally, because Oxford was a green field in the areas of media, information and communication studies, with no legacy departments vying to own Internet studies, we could innovate around Internet studies from a multi-disciplinary perspective. And we found that many of the best students applying to the OII were multidisciplinary in their training even before they arrived, and understood the value of multidisciplinary, problem-focused research and teaching.

As you can see, I found the discussion today to be very stimulating. My 12 years at Oxford remain one of the highlights of my career, and that experience is so much enhanced by seeing our alumni continue to be engaged with the Institute. So many thanks to Dame Stephanie Shirley for endowing the OII, and to the many scholars across Oxford University and its Colleges, such as Andrew Graham and Colin Lucas, for their confidence and vision in establishing the OII and making the DPhil programme possible.

Remember, the OII was founded in 2001, shortly after the dotcom bubble burst, and at a university that is inherently skeptical of new fields. Today the Internet faces a new wave of criticisms, ranging from online bullying to global cyber security, including heightened threats to freedom of expression and privacy online. With politicians worldwide ratcheting up attacks on whistleblowers and on social media, which they claim exert undue political influence, threats to the Internet are escalating. This new wave of panic around the Internet and social media will make the OII and other departments focused on Internet studies even more critical in the coming years.