Recent Chinese concerns over ‘Twitter foreign policy’ are just the tip of the iceberg of the ways in which the Internet has been enabling diplomacy to be reconfigured, for better or worse. Over a decade ago, Richard Grant, a diplomat from New Zealand, addressed these issues in a paper I helped him with at the OII. Drawing from Richard’s paper, there are at least five ways in which the Internet and social media are reconfiguring diplomacy:
Changing who participates in diplomacy, and creating a degree of openness and transparency, for example through leaks and whistleblowers like Edward Snowden, that puts diplomacy in the public eye and has helped establish an entire field of ‘public diplomacy’;
Creating new sources of information for diplomacy, such as when mobile Internet videos become key to what is known about an event of international significance;
Speeding up diplomatic processes, as the immediacy of news about events in the online world demands more rapid responses in order to be effective, such as in challenging misinformation;
Pushing diplomacy to be more event-led, when the world knows about events that diplomats cannot ignore; and
Eroding borders, such as enabling diplomats to communicate locally or globally from anywhere at any time.
These transformations do not diminish the need for diplomats to serve a critical role as intermediaries. If anything, the Internet makes it possible for diplomats to be where they need to be to facilitate face-to-face interpersonal communication, making the geography of diplomacy more, rather than less, important. However, it poses serious challenges for adapting diplomacy to a globally digital village, such as how to adapt hierarchical bureaucracies of diplomacy to respond to more agile networks, and how to best ‘join the conversation’ on social media.
Email Disrupting Life at Home? Careful What You Ask For
In France and other nations there has been discussion of somehow banning email after 6pm or outside of working hours. Perhaps this could help provide a better work-life balance, or keep email from competing with family for attention at home. But this raises not only problems of implementation, but also the reverse: shall we start policing the personal use of communication and information technologies, like email, in the office?
Implementation would be impossible. You could still receive work-related tweets, texts, messages, calls, video calls, WeChat messages, social media posts, and more at home or outside of work hours. Email is only one avenue into the household, and it is declining in use relative to other social and newer media. Implementation would also be a regulatory overreach, with public regulation reaching into the use of media within households, private companies, NGOs, and so on.
But the greatest threat is that this will go both ways. Companies, government departments, NGOs and others will want their employees and managers to stop using electronic media for personal reasons while at work, or during the work day, such as checking on their children, making reservations, or reading personal email.
The first dissertation I supervised on corporate email was in 1980. One of the key issues in those early days, when businesses were beginning to use email instead of telegrams or faxes, was a worry that employees would use email for personal reasons that had nothing to do with work. My response, then and now, has been that this should not be a worry. Personal use of email at work helps the morale and time management of people in the workplace, and – it goes both ways – email enables employees to handle some business from home. Especially in the early days of email, personal use helped bring business people online, as then, as now, many resisted the use of online media for business purposes. There is a positive synergy (sorry to use that word) between the use of communication technologies at home and at work – a win-win.
Encourage and teach individuals to manage their time and self-regulate their engagement with work from home and vice versa, but don’t try to regulate something for which no one size fits all.
Aspects of my work on the role of distributed intelligence in problem solving, what I have called distributed collaborative networks, were published in English as Dutton, W. H. (2015), ‘Lend Me Your Expertise: Citizen Sourcing Advice to Government’, pp. 247–63 in Johnston, E. W. (ed.), Governance in the Information Era: Theory and Practice of Policy Informatics. Abingdon, UK: Taylor and Francis Routledge. I was delighted to see a revised version translated for a Ukrainian publication: Advertising and Public Relations of the XXI Century: Reviews and Researchers, a collective monograph edited by Bezchotnikova, S. V. (Mariupol: Mariupol State University, 2016), pp. 82–103.
Fake News is a Wonderful Headline but Not a Reason to Panic
I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposely misleading news stories, like the Pope endorsing Donald Trump, engage so many people, and have such an intuitive appeal, I should be riding this bandwagon. It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it may be useful to look back at the lessons learned from previous moral panics over the quality of information online.
Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor or interest. Nearly all journalism tries to do the same, particularly as journalism moves further towards embracing the advocacy of particular points of view, versus trying to present the facts of an event, such as a decision or accident. In the case of fake news, the facts are often manufactured to fit the argument, so fact checking is often one aspect of identifying fake news. And made-up facts are likely to be more interesting than reality, which is one reason for the popularity of some fake news stories.
It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.
Problems with Responding
There are serious problems with attacking fake news.
First, except in the more egregious cases, it is often difficult to definitively know the facts of the case, not to mention what is ‘news’. Many fake news stories are focused on one or another conspiracy theory, and therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July of 2014 over a rebel controlled area of eastern Ukraine. In such cases in which there is a war on information, it is extremely difficult to immediately sort out the facts of the case. In the heat of election campaigns, it is also difficult. Imagine governments or Internet companies making these decisions in any liberal democratic nation.
Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms, like Facebook, to become newspapers and edit and censor stories online would undermine all news, and the evolving democratic processes of news production and consumption, such as those thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web and social media.
Lessons from the Internet and Web’s Short History
But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)
Anyone who used the Internet and Web (invented in 1991) in the 1990s will recall that it was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenges of finding good information in this huge garbage heap, users circulated the URLs (web addresses) of web pages that were worth reading.
A few key researchers developed what were called recommender sites, such as the Platform for Internet Content Selection (PICS) championed by Paul Resnick, which labeled sites to describe their content, such as ‘educational’ or ‘obscene’. PICS labels could be used to censor or filter content, but their promoters saw them primarily as a way to positively recommend rather than negatively censor content, such as sites labeled ‘educational’ or ‘news’: positive recommendation of what to read, versus censorship of what a central provider determined unfit to be read.
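The distinction between positive recommendation and negative filtering can be sketched in a few lines of Python. This is only an illustration of the idea, not actual PICS syntax; the site names and labels below are entirely hypothetical.

```python
# A minimal, hypothetical sketch of label-based recommendation versus
# filtering, in the spirit of (but not conforming to) PICS.
# Site names and labels are invented for illustration.

LABELS = {
    "example-news.org": {"news", "educational"},
    "example-games.com": {"entertainment"},
    "example-shock.net": {"obscene"},
}

def recommend(sites, wanted):
    """Positively recommend: keep sites carrying any wanted label."""
    return [s for s in sites if LABELS.get(s, set()) & wanted]

def censor(sites, blocked):
    """Negatively filter: drop sites carrying any blocked label."""
    return [s for s in sites if not (LABELS.get(s, set()) & blocked)]

sites = list(LABELS)
print(recommend(sites, {"news"}))   # -> ['example-news.org']
print(censor(sites, {"obscene"}))   # -> ['example-news.org', 'example-games.com']
```

The same labels support both uses: the only difference is whether the user asks for what to keep or what to exclude, which is why the promoters of labeling could emphasize recommendation over censorship.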
Of course, organized lists of valuable web sites evolved into some of the earliest search engines, and very rapidly, some brilliant search engines were invented that we use effortlessly now to find whatever we want to know online, such as news about an election.
The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines try to do this by recommending the best sites related to what a person is searching for, such as by analysis of the search terms in relation to the words and images on a page of content.
Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques for optimizing a site, such as embedding popular search terms in the background of a page, invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites trying to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites, for example, in the results of search engines.
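This cat-and-mouse dynamic can be illustrated with a toy ranker. Everything here is invented for illustration: the pages, the query, and the penalty threshold bear no relation to how any real search engine actually works.

```python
# A toy illustration of the cat-and-mouse game: a naive ranker that
# counts raw query-term matches is gamed by keyword stuffing, so a
# refined ranker penalizes pages where the term is suspiciously
# frequent. The 0.3 threshold is arbitrary, chosen for the example.

def naive_score(page: str, term: str) -> int:
    """Score a page by raw query-term matches."""
    return page.lower().split().count(term)

def refined_score(page: str, term: str) -> int:
    """Zero out pages where the term dominates the text (stuffing)."""
    words = page.lower().split()
    if not words:
        return 0
    if words.count(term) / len(words) > 0.3:  # suspiciously repetitive
        return 0
    return words.count(term)

honest = "latest news and analysis of the election results from around the country"
stuffed = "election election election election election buy pills now"

print(naive_score(stuffed, "election"))    # 5 -- stuffing wins under the naive ranker
print(refined_score(stuffed, "election"))  # 0 -- and loses under the refined one
print(refined_score(honest, "election"))   # 1
```

Of course, once such a penalty is known, content providers adapt their techniques, and the ranking algorithms must be refined again, which is the continuing game described above.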
New Social Media News Feeds
But what can we do about fake news being circulated on social media – mainly platforms such as Facebook, but also email? The problems are largely focused here, since social media news provision is relatively less public, newer, and not as fully developed as the more mature search engines. And email is even less public. These interpersonal social networks might pose the most difficult problems, and are where fake news is likely to be less visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search engines used by social media for the provision of news get better, some problems will be solved, and social media platforms are working on it. But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.
Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther. His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.
My concern is that many of the purported remedies to fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them are well suited to the Internet. You cannot regulate social media as if they were broadcasters: think of the time spent by broadcast regulators considering a single complaint from viewers. You cannot hold social media liable for stories, as if they were an edited newspaper; this would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.
In the case of email and social media, the equivalent of recommender sites are ways in which users might be supported in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information. These could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, left and everywhere in between, and from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on their face.
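This user-centred approach, treating senders differently according to one's own trust in them rather than relying on a central censor, might be sketched as follows. The addresses and trust categories are entirely hypothetical.

```python
# A sketch of user-side triage: the reader, not a central authority,
# decides how to treat messages based on trust in the sender.
# Addresses and categories are hypothetical.

TRUST = {
    "colleague@university.example": "trusted",
    "pundit@advocacy.example": "opinion",
}

ACTIONS = {
    "trusted": "read",
    "opinion": "read sceptically, as rhetoric rather than hard news",
    "unknown": "verify before trusting",
}

def triage(sender: str) -> str:
    """Map a sender to a handling rule from the user's own trust list."""
    return ACTIONS[TRUST.get(sender, "unknown")]

print(triage("colleague@university.example"))  # read
print(triage("stranger@random.example"))       # verify before trusting
```

The key design choice is that the trust list belongs to the individual reader, so the same message can reasonably be read, discounted, or verified by different people, preserving the plurality of the medium.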
But I am trained as an academic and, by nature, skeptical. So while it might sound like a damp squib, one of the few useful approaches that does not threaten the democratic value of social media and email is educating users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely, and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.
It was a real honour today to speak with some of the alumni (a new word for Oxford) of the Oxford Internet Institute’s DPhil programme. A number came together to celebrate the 10th anniversary of the DPhil. It began four seemingly long years after I became the OII’s founding director in 2002. So while I have retired from Oxford, it was wonderful to return virtually to congratulate these graduates on their degrees.
The programme, like the OII itself, was hatched through four years of discussions around how the Institute (which is a department at Oxford University) should move into teaching. Immediately after my arrival we began organizing the OII’s Summer Doctoral Programme (SDP), which was an instant success and continues to draw top doctoral students from across the world who want to hone their thesis through an intensive summer programme with other doctoral students focused on Internet studies. The positive experience we had with this programme led us to move quickly to set up the DPhil – and four years is relatively quick in Oxford time.
As I told our alumni, the quality of our doctoral students has been largely responsible for the esteem the OII was able to gain across the university and colleges of Oxford. That and the international visibility of the OII enabled the department to later develop our Masters programme, and continue to attract excellent faculty and students from around the world.
I am certain the OII DPhil programme has progressed and will continue to progress since I left Oxford in 2014, such as through the addition of such strong faculty as Phil Howard and Gina Neff. However, I believe its early success was supported by four key principles that were part of our founding mission:
First, it was anchored in the social sciences. The OII is a department within the Division of Social Sciences at Oxford, which includes the Law Faculty. In 2002, and even since, this made us relatively distinctive, given that so many universities, particularly in the USA, viewed study of the Internet as an aspect of computer science and engineering. It is increasingly clear that Internet issues are multidisciplinary, and need a strong social science component that the social sciences should be well equipped to contribute. Many social science faculty are moving into Internet studies, which has become a burgeoning field, but the OII planted Internet studies squarely in the social sciences.
Secondly, our DPhil emphasized methods from the beginning. We needed to focus on methods to be respected across the social sciences in Oxford. But also we knew that the OII could actually move the social sciences forward in such areas as online research, later digital social science, and big data analytics as applied to the study of society. The OII did indeed help move the methods in the social sciences at Oxford into the digital age, such as through its work on e-Science and digital social research.
Thirdly, while it is somewhat of a cliché that research and teaching can complement each other, this was always the vision for the OII DPhil programme. And it happened in ways more valuable than we anticipated.
Finally, because Oxford was a green field in the areas of media, information and communication studies, with no legacy departments vying to own Internet studies, we could innovate around Internet studies from a multi-disciplinary perspective. And we found that many of the best students applying to the OII were multidisciplinary in their training even before they arrived, and understood the value of multidisciplinary, problem-focused research and teaching.
As you can see, I found the discussion today to be very stimulating. My 12 years at Oxford remains one of the highlights of my career, but it is so much enhanced by seeing our alumni continue to be engaged with the institute. So many thanks to Dame Stephanie Shirley for endowing the OII, and the many scholars across Oxford University and its Colleges, such as Andrew Graham and Colin Lucas, for their confidence and vision in establishing the OII and making the DPhil programme possible.
Remember, the OII was founded in 2001, shortly after the dotcom bubble burst and at a university that is inherently skeptical of new fields. Today the Internet faces a new wave of criticisms ranging from online bullying to global cyber security, including heightened threats to freedom of expression and privacy online. With politicians worldwide ratcheting up attacks on whistleblowers and social media, claiming undue political influence, threats to the Internet are escalating. This new wave of panic around the Internet and social media will make the OII and other departments focused on Internet studies even more critical in the coming years.
One of the classic works on the governance of England is Walter Bagehot’s (1867) The English Constitution. He observed that the evolution of its unwritten constitution entailed two critical but separate components: the ‘dignified’ and the ‘efficient’. The former exercised symbolic power and was represented by the monarch, who did not have effective power but could capture the imagination and support of the public. The efficient component was represented by Parliament and the Prime Minister, who had the power to effect change. The modern British Prime Minister in the 21st century retains this role in getting the work of government done, but has also become more ‘presidential’ in the American sense, embodying more of the symbolic roles of the state. Nevertheless, despite contention, a far more educated public, and access to information about anything from anywhere, Queen Elizabeth remains the major symbolic head of state, helping to maintain the legitimacy of the government.
In the US, the founders combined these dignified and efficient components in the Office of the President. The US President represents the state in formal international ceremonies, such as laying wreaths with the Queen or her representative, as well as being the chief executive and Commander in Chief of the nation’s military.
For decades, the preservation of the dignified role of the President as head of state has been a matter of debate. Television news was said to reveal so much about the President that it was impossible to maintain any myths about a President’s leadership (Meyrowitz 1985). Every foible, stumble, or illness of a President is in the news for all to know. This transparency has a very positive role, such as undermining the potential for a president to become too powerful if shielded from public accountability. However, it may also undermine the ability of the government to maintain the public support and trust that was delivered by the symbolic head of state.
It is obvious where I am going with this rendition of Bagehot’s perspective on the components of governance. Whomever you support for President, there must be some concern over whether the institution of the Presidency will be dramatically diminished by the revelations of the 2016 primaries and presidential campaigns. Will we, or have we, lost the dignified role of the Presidency? Maybe this is good and appropriate in a modern democratic state, but these trends are likely to generate far more discussion in the wake of the 2016 elections, whoever is elected. In whom will we entrust the dignified role of the Presidency? Perhaps this dignified role has been made antiquated by modern forms of democratic governance, but the lack of trust in government, and in the candidates for office, is likely to keep this debate alive.
Bagehot, W. (1867), The English Constitution.
Meyrowitz, J. (1985), No Sense of Place: The Impact of Electronic Media on Social Behavior. Oxford University Press.
The Department of Media and Information at Michigan State University had one of its (now) annual retreats on a beautiful Friday in the clubhouse of a local golf course. One of our faculty members, Professor Carrie Heeter, was in San Francisco, but she worked with colleagues to create a means for her to participate virtually. Her explanation of the approach and how she experienced the day might be very useful for others experimenting with blending virtual participation into real meetings.
They used Zoom, a video service like Skype or Google Hangouts, to connect Carrie in San Francisco to an iPad mounted on a portable stand, and to a laptop, both present in the retreat room. Essentially, the iPad on the stand became Carrie’s virtual presence in the room.
As Carrie wrote, when the retreat moved into about 6 breakout groups, someone in Carrie’s group ‘agreed to “take care of” Carrie’. As Carrie put it: ‘When Jeremy [Bond] took care of me, he actively turned the iPad to face whoever was speaking. It was amazing. It felt like I was right there at the table, but also weird to not be turning my physical head, while I was virtually looking all around. I also felt bad that he was working so hard thinking about what I was seeing.’
They planned to use a Mini Jambox speaker/microphone to enable Carrie to be heard by the larger group, but it did not work on the day. So it was hard to hear Carrie speaking when we were assembled as the whole group. However, she could hear others very well, even in the big group. Carrie notes: ‘we used the Zoom chat and I would type, then my caretaker would speak for me. A few times I wrote on a piece of paper and held it up to the camera. When I went to lunch I used the share screen function of Zoom to show a Word document with big letters saying GONE TO LUNCH BE BACK SOON. I also occasionally texted room participants. … I used the spotlight function of Zoom to control which of the three windows was the main one on the iPad.’
Professor Robby Ratan took the tablet and stand to the flip chart when discussing the notes from his breakout group. Carrie noted: ‘When Robby took notes for our breakout session, he went to Share My Screen mode, which meant I couldn’t use my computer. But I could see really well.’
Carrie joined the retreat at 6am California time, and was “at the retreat” for 7 hours.
The departmental secretary, Heather Brown, carried the portable stand and tablet downstairs and outdoors for a photo of the retreat participants. I’ll post the photo here. As Carrie describes it: ‘When Heather carried me down the stairs and out onto the lawn, there was a visceral feeling of being carried.’ You can see Professor Heeter on the tablet in the front row of the photo, though in another use of virtualization, Carrie had to Photoshop her picture onto the tablet’s screen. Nevertheless, the WiFi was quite good at the retreat center, even out on the grass, letting Carrie virtually participate in the photo session, even if she was invisible (due to the bright sun) in the photo without the touch-up in Photoshop.
Carrie’s evaluation of the experience is also useful. She argued: ‘That it “worked” is due in part to the good will, tolerance, and helpfulness of physically present folks, and to the resolve of all of us to make it work. The iPad on the stand was much better than being on someone’s laptop. It was more like having my own place at the table and in the room.
Connecting through both the laptop and the iPad provided continuity (when the iPad turned off or needed to be recharged) as well as providing a second window on the meeting.’
Carrie concludes with a fascinating observation: ‘I was very much in people’s hands — they would raise and lower me to choose the height.’