Portulans Institute Discussion of the Global Innovation Index 2020

The Portulans Institute is hosting, in a partnership with the Information Technology & Innovation Foundation (ITIF), a virtual event focused on how America can strengthen and revitalize its innovation to ensure continued global competitiveness. Experts will discuss the state of innovation in America in the context of the Global Innovation Index (GII) 2020 report.

This USA virtual event will take place on September 15th, 2020, from 12 pm to 1:30 pm EST. It will follow the global launch of the GII, which will take place on September 2nd.


The 2020 edition of the GII, co-published by WIPO, INSEAD, and Cornell University, is dedicated to the theme of ‘Who Will Finance Innovation?’ The 13th edition of the GII sheds light on the state of innovation financing by investigating the evolution of financing mechanisms for entrepreneurs and other innovators, and by pointing to progress and remaining challenges—including in the context of the economic slowdown induced by the COVID-19 pandemic. More information on the 2020 GII is available here.


Program at: https://portulansinstitute.org/wp-content/uploads/GII-USA-Launch-Flyer-FINAL.pdf and registration at: https://zoom.us/meeting/register/tJAsc-muqDIjGNzHa0IFXXTcc81aWyORfWAn


Self-Preservation of Your Work


For decades I have been concerned about the fragility of information and whether ephemerality – the transitory nature of information and communication – is simply an inevitable feature of the digital age. I therefore frequently look back at a talk about the Internet that I gave to a conference of historians held in Oxford in the early 2000s. Given that I was speaking to historians, at a time when I was the founding director of the Oxford Internet Institute, one key theme of my talk concerned the major ways in which content on the Web was unlikely to be preserved. The Internet community did not have adequate plans and strategies for preserving the Internet, the Web, and related online content. I thought the historians would be engaged – if not frightened – by a shift of content to online media when it might mean losing much of our history with respect to data, documents, letters, and more. 

My audience seemed interested but unmoved. A historian from the audience chatted with me after the talk to explain that this was nothing new. Historians have always pieced together history from letters found in a shoebox stored in an attic, tombstones, and so on – not from systematically recorded archives, even though fragments of such records exist in many libraries, museums, and archives. Fragility is nothing new to efforts aimed at writing or reconstructing history. 

This attitude frightened me even more. From my perspective, perhaps the historians had not seen anything yet. And I am continually reminded of this problem. Of course, there have been brilliant efforts to preserve online, digital content, such as the ‘Wayback Machine’, an initiative of the Internet Archive,[i] which indicates it has saved over 446 billion web pages. Yet the archive and its Wayback Machine have become a subscription service and have dropped out of the limelight they shared in the early days of the Web. The archive is also being limited by concerns over copyright that are leading it to reduce valuable services, such as its digital library.[ii]

But a recent and more personal experience brought all of this to the forefront of my thinking. I always print and save a hard copy of anything of significance (to me) that I write. That may seem quaint, but time and again it has saved me from losing work that was stored on out-of-date media, such as floppy discs, or published in failing journals. I recently wanted to share a copy of a piece I wrote in 1994 for a journal of the UK’s Economic and Social Research Council (ESRC), when I was director of an ESRC programme. This time my system failed me, and I could not find it in my files. 

This was a short piece that the ESRC published in one of its journals, called Social Sciences. As a social scientist, I focused my article on the problematic mindset of social scientists regarding outreach (Dutton 1994). Too often, I argued, a (social) scientist thought they were through with outreach once they published an article. The way I put it was that many social scientists believed in a sort of ‘trickle-down’ theory of outreach: once their work was published, the findings and their implications would eventually trickle down to those who might benefit from their insights.

Today, all disciplines of the sciences are far more focused on outreach and the impact of research. Many research assessment exercises require evidence of the impact of research as a basis for assessment. And individual academics, research units, departments and universities are becoming almost too focused on getting the word out to the world about their research and related achievements. Outreach has become a major aspect of contemporary academic and not-for-profit research enterprises. There is even an Association for Academic Outreach.[iii] One only needs to reflect on the innovative and competitive race to a vaccine for COVID-19, where at least 75 candidate vaccines are in preclinical or clinical evaluation[iv], to see how robust and important outreach has become. Nevertheless, outreach does not necessarily translate into preservation of academic work.

So – lo and behold – I could not find a copy of my piece on ‘Trickle-Down Social Science’. I recall seeing it in my files, but given moves back and forth across the Atlantic, it had vanished without a trace. I searched online and found my books and articles that referenced it, but no copy of the article. I tried the Wayback Machine, but it was not on the Web, as the journal Social Sciences in those days did not put its publications online. I wrote to the ESRC, as they might have an archive of their journal. They kindly replied that not only did they not have a copy of the article (from that far back), but, more surprisingly, they did not even have a copy of Social Sciences in their archives. So, 1994 is such ancient history that even revered institutions like the ESRC do not keep copies of their publications. [A former student read this blog and sent me a photocopy, which I used to create a new version of my little viewpoint piece from a quarter-century earlier.]

Well, this little personal experience reminded me of the value of keeping copies and reinforced the obvious conclusion that I need to preserve my own work, as I had tried to do – and do a more consistent job of it! The toppling of real, analogue statues across the world selfishly reminded me of the need to preserve my own far less significant – if not insignificant – historical record and not to count on anyone else doing this for me. 

So, preserve your own work, and don’t rely on the Internet, the Web, big data, or anyone else to save it. Take it from C. Wright Mills (1952): any academic should devote considerable time to their files. While Mills argued that maintaining one’s files was a central aspect of ‘intellectual craftsmanship’, even he did not focus on their preservation.

That said, if anyone has a copy of ‘Trickle-Down Social Science’, name your price. 😉

References

Dutton, W. (1994), ‘Trickle-Down Social Science: A Personal Perspective,’ Social Sciences, 22, 2.

Mills, C. Wright (1980), ‘On Intellectual Craftsmanship (1952)’, Society, Jan/Feb: 63–70. https://link.springer.com/content/pdf/10.1007/BF02700062.pdf


[i] http://web.archive.org

[ii] https://www.inputmag.com/culture/internet-archive-kills-its-free-digital-library-over-copyright-concerns

[iii] https://www.afao.ac.uk

[iv] https://www.who.int/blueprint/priority-diseases/key-action/novel-coronavirus-landscape-ncov.pdf?ua=

Publication of A Research Agenda for Digital Politics

A Research Agenda for Digital Politics 

My most recent edited book, A Research Agenda for Digital Politics, is now available in hardback and electronic form at: https://www.e-elgar.com/shop/gbp/a-research-agenda-for-digital-politics-9781789903089.html. From this site you can look inside the book to review the preface, list of contributors, table of contents, and my introduction, which includes an outline of the book. In addition, the first chapter, by Professor Andrew Chadwick, entitled ‘Four Challenges for the Future of Digital Politics Research’, is free to read on the digital platform Elgaronline, where you will also find the book’s DOI: https://www.elgaronline.com/view/edcoll/9781789903089/9781789903089.xml

Finally, a short leaflet is available on the site, with comments on the book from Professors W. Lance Bennett, Michael X. Delli Carpini, and Laura DeNardis. I was not aware of these comments, with one exception, until today – so I am truly grateful to such stellar figures in the field for contributing their views on this volume.  

Digital politics has been a burgeoning field for years, but with the approach of elections in the US and around the world – in the context of a pandemic, Brexit, and breaking cold wars – it could not be more pertinent. If you are considering texts for your (online) courses in political communication, media and politics, Internet studies, or digital politics, do take a look at the range and quality of perspectives offered by the contributors to this new book. Provide yourself and your students with valuable insights on issues framed for high-quality research. 

List of Contributors:

Nick Anstead, London School of Economics and Political Science; Jay G. Blumler, University of Leeds and University of Maryland; Andrew Chadwick, Loughborough University; Stephen Coleman, University of Leeds; Alexi Drew, King’s College London and Charles University, Prague; Elizabeth Dubois, University of Ottawa; Laleah Fernandez, Michigan State University; Heather Ford, University of Technology Sydney; M. I. Franklin, Goldsmiths, University of London; Paolo Gerbaudo, King’s College London; Dave Karpf, George Washington University;  Leah Lievrouw, University of California, Los Angeles; Wan-Ying Lin, City University of Hong Kong; Florian Martin-Bariteau, University of Ottawa; Declan McDowell-Naylor, Cardiff University; Giles Moss, University of Leeds; Ben O’Loughlin, Royal Holloway, University of London; Patrícia Rossini, University of Liverpool; Volker Schneider, University of Konstanz; Lone Sorensen, University of Huddersfield; Scott Wright, University of Melbourne; Xinzhi Zhang, Hong Kong Baptist University. 

How People Look for Information about Politics

The following lists papers and works in progress flowing from our research on how people get access to information about politics, which began at MSU and was funded by Google Inc. It was launched when I was director of the Quello Center at Michigan State University, but continues with me and colleagues at Quello and other universities in the US, UK, and Canada. Funding covered the cost of the surveys – online surveys of 14,000 Internet users in seven nations – but yielded a broad set of outputs. Your comments and criticisms are welcome. It was called the Quello Search Project.

Quello Search Project Papers

6 May 2020

Opinion and Outreach Papers to Wider Audiences

Dutton, W. H. (2017), ‘Fake news, echo chambers and filter bubbles: Underresearched and overhyped’, The Conversation, 5 May: https://theconversation.com/fake-news-echo-chambers-and-filter-bubbles-underresearched-and-overhyped-76688

This post was republished on a variety of platforms, including Salon, Inforrm.org, mediablasfactcheck, BillDutton.me, Observer.com, Quello.msu.com, USAToday.com, Techniamerica, pubexec

Dutton, W. H. (2017), Bubblebusters: Countering Fake News, Filter Bubbles and Echo Chambers, NESTA.org.uk, 15 June. 

This post was republished on the Nesta site and readie.eu. Bill plans to update and repost this blog on his own site.  

Dubois, E., and Blank, G. (2018), The Myth of the Echo Chamber, The Conversation, March: https://theconversation.com/the-myth-of-the-echo-chamber-92544

Presentations of the Project Report

The project report has been presented at a wide variety of venues. A blog about Bill’s presentations is available here: http://quello.msu.edu/the-director-presents-in-europe-on-our-quello-search-project/ Presentations include:

  • Summaries of our report/project were presented to academic, industry and policy communities in Britain (London, Oxford); Germany (Hamburg, Berlin, Munich); Italy (Rome); Belgium (Brussels); Spain (Madrid); China (Beijing); and the US (Arlington, Boston), and most recently in Mexico (Mexico City).
  • An overview of our Report was part of a three-hour workshop on research around echo chambers, filter bubbles and social media organized for a preconference workshop for the Social Media and Society Conference, Toronto, Canada https://socialmediaandsociety.org/ July 28-30, 2017. It included Bill, Elizabeth, and Craig.  

Papers Completed or in Progress

The following is a list of papers that further develop and deepen particular themes and issues of our project report. They have been completed or are in progress, categorized here by the indicative list of paper topics promised by the team: 

  • Overview: A critical overview of the project findings for a policy journal, such as the Internet Policy Review, or Information, Communication and Society

Dutton, W. H., Reisdorf, B. C., Blank, G., Dubois, E., and Fernandez, L. (2019), ‘The Internet and Access to Information About Politics: Searching Through Filter Bubbles, Echo Chambers, and Disinformation’, pp. 228-247 in Graham, M., and Dutton, W. H. (eds), Society and the Internet: How Networks of Information and Communication are Changing our Lives, 2nd Edition. Oxford: Oxford University Press. 

Earlier version: Dutton, W.H., Reisdorf, B.C., Blank, G., and Dubois, E. (2017), ‘Search and Politics: A Cross-National Survey’, paper presented at the TPRC #45 held at George Mason University in Arlington Virginia, September 7-9, 2017.

Dubois, E., and Blank, G. (2018). ‘The echo chamber is overstated: the moderating effect of political interest and diverse media’. Information, Communication & Society, 21(5), 729-745. 

Dutton, W. H. (2018), ‘Networked Publics: Multi-Disciplinary Perspectives on Big Policy Issues’, Internet Policy Review, 15 May: https://policyreview.info/articles/analysis/networked-publics-multi-disciplinary-perspectives-big-policy-issues   

  • Vulnerables: Work identifying the Internet users most vulnerable to fake news and echo chambers. This paper would build on the findings to suggest interventions, such as around digital media literacy to address these risks.

Dutton, W. H., and Fernandez, L. (2018/19), ‘How Susceptible Are Internet Users?’, InterMEDIA, December/January 2018/19 46(4): 36-40. 

Earlier version: Dutton, W. H., and Fernandez, L. (2018), ‘Fake News, Echo Chambers, and Filter Bubbles: Nudging the Vulnerable’, presentation at the International Communication Association meeting in Prague, Czech Republic on 24 May 2018.

Reisdorf, B. presented work on ‘Skills, Usage Types and political opinion formation’, an invited talk at Harvard Kennedy School, Oct 19, 2017 [Bibi (presenting) work with Grant]

Blank, G., and Reisdorf, B. (2018), ‘Internet Activity, Skills, and Political Opinion Formation: A New Public Sphere?’, presentation at the International Communication Association meeting in Prague, Czech Republic on 24 May 2018.

  • Trust: A study focused on trust in different sources of information about politics and policy for a political communication journal, such as the International Journal of Communication.

Cotter, K. & Reisdorf, B.C. (2020). Algorithmic knowledge gaps: Education and experience as co-determinants. International Journal of Communication, 14(1). Online First.

Dubois, E., Minaeian, S., Paquet-Labelle, A. and Beaudry, S. (2020), Who to Trust on Social Media: How Opinion Leaders and Seekers Avoid Disinformation and Echo Chambers, Social Media + Society, April-June: 1-13. 

Reisdorf, B.C. & Blank, G. (forthcoming). Algorithmic Literacy and Platform Trust, pp. forthcoming in Hargittai, E. (Ed.). Handbook of Digital Inequality. Edward Elgar Publishing.

Previously presented: Reisdorf, B.C. & Blank, G. (2018), ‘Algorithmic literacy and platform trust’, paper presented at the 2018 American Sociological Association annual meeting, Philadelphia, Pennsylvania, 11 August.

  • Cross-national Comparison: A cross-national comparative analysis of search, seeking to explain cross-national differences, for an Internet and society journal, such as Information, Communication and Society (iCS), or New Media and Society

Blank, G., Dubois, E., Dutton, W.H., Fernandez, L., and Reisdorf, B.C. presented a panel entitled ‘Personalization, Politics, and Policy: Cross-National Perspectives’ at ICA Conference 2018 in Prague, Czech Republic.

Dubois, E. (forthcoming), ‘Spiral of Silence/Two Step Flow: How Social Support/Pressure and Political Opinion’, under preparation for a journal.

  • Search: A study of the role of search in our evolving media ecology. One of the unique strengths of this project is that it contextualized search in the environment of the entire range of media. The dataset asks respondents about activity on six offline and seven online media, including search, plus nine social media. What is the role of search in this broad ecology of online and offline media? Are people who have complex media habits less likely to fall into echo chambers? 

Robertson, C. (2017), ‘Are all search results created equal? An exploration of filter bubbles and source diversity in Google search results’, presented at a symposium entitled Journalism and the Search for Truth in an Age of Social Media at Boston University, April 23-25.

Blank, G. (2017), ‘Search and politics: The uses and impacts of search in Britain, France, Germany, Italy, Poland, Spain and the United States’. Presentation at the Google display at the Almedalen conference in Sweden on 3 July.

Blank, G. and Dubois, E. (2017), ‘Echo chambers and media engagement with politics’, presentation at the Social Informatics 2017 conference in Oxford on 13 September.

Blank, G. and Dubois, E. (2018), ‘Echo Chambers and the Impact of Media Diversity: Political Opinion Formation and Government Policy’, paper presented at the General Online Research Conference, Düsseldorf, Germany on 1 March.

Blank, G., and Dubois, E. (2018), ‘Is the echo chamber overstated? Findings from seven countries’, presentation at the Düsseldorf University, Institute for Internet and Democracy Conference, Düsseldorf, Germany on 5 July. 

  • Populism: An analysis of the role of search and the Internet in populist attitudes. How is populism related to search? Is the Internet and search supporting the rise of individuals with more confidence in their knowledge of policy, and supportive of more popular control? Are populists more likely to be in an echo chamber than those less in line with populist viewpoints?

Dutton, W. H. and Robertson, C. T. (forthcoming), ‘The Role of Filter Bubbles and Echo Chambers in the Rise of Populism: Disentangling Polarization and Civic Empowerment in the Digital Age’ in Howard Tumber and Silvio Waisbord (eds), The Routledge Companion to Media Misinformation and Populism. New York: Routledge, pp. forthcoming.

  • Fact Checking: Checking Information via Search: Who, When, Why? Between 41 percent (UK) and 57 percent (Italy) of respondents say they check information using search “often” or “very often”. Who are those who double-check sources?

Robertson, C.T. (under review). Who checks? Identifying predictors of online verification behaviors in the United States and Europe.

  • Democracy: An analysis of democratic digital inequalities that would examine how education and motivation are related to searching for and sharing political news. Is there a gap in the way that people from different educational backgrounds search for and share political news, and if so, does this affect how they shape their political opinions?

Dutton, W. H. (2020 forthcoming) (ed), Digital Politics. Cheltenham, UK: Edward Elgar Publishing.

Dutton, W. H. (In Progress), The Fifth Estate: The Political Dynamics of Empowering Networked Individuals. Book under contract with OUP, New York: Oxford University Press, with 1-2 chapters on QSP. 

Blank, G. (2018), ‘Democracy and Technology’, a talk at the Google display at the SuomiAreena conference on 16 July in Pori, Finland.

Reisdorf, B. C., Blank, G., and Dutton, W. H. (2019), ‘Internet Cultures and Digital Inequalities’, pp. 80-95 in Graham, M., and Dutton, W. H. (eds), Society and the Internet: How Networks of Information and Communication are Changing our Lives, 2nd Edition. Oxford: Oxford University Press. 

Previously presented: Blank, G., Reisdorf, B., and Dutton, W. H. (2018), ‘Internet Cultures and Digital Inequalities’, presentation at the Digital Inclusion Policy and Research Conference, London, 21-22 June.

How the #Infodemic is being Tackled

The fight against conspiracy theories and other fake news about the coronavirus crisis is receiving more help from social media and other tech platforms, as a number of thought leaders have argued.[1] However, in my opinion, a more important factor has been more successful outreach by governmental, industry, and academic researchers. Too often, the research community has been too complacent about getting the results of its research to opinion leaders and the broader public. Years ago, I argued that too many scientists held a ‘trickle-down’ theory of information dissemination.[2] Once they publish their research, their job is done, and others will read and disseminate their findings. 

Even today, too many researchers and scientists are complacent about outreach. They are too focused on publication and communication with their peers, and see outreach as someone else’s job. The coronavirus crisis has demonstrated that governments and mainstream, leading researchers can get their messages out if they work hard to do so. In the UK, the Prime Minister’s TV address and multiple press conferences have been very useful – the address reaching 27 million viewers in the UK, becoming one of the ‘most watched TV programmes ever’, according to The Guardian.[3] In addition, the government distributed a text message to people across the UK about its rules during the crisis. And leading scientists have been explaining their findings, research, and models to the public, with the support of broadcasters and social media. 

If scientists and other researchers are complacent, they can surrender the conversation to creative and motivated conspiracy theorists and fake news outlets. In the case of Covid-19, it seems that a major push by the scientific community of researchers, governmental experts, and politicians has shown that reputable sources can be heard over and amongst the crowd of rumors and less authoritative information. Rather than trying to censor or suppress social media, we need to step up efforts by mainstream scientific communities to reach out to the public and political opinion leaders. No more complacency. It should not take a global pandemic to see that this can and should be done.


[1] Marietje Schaake (2020), ‘Now we know Big Tech can tackle the ‘infodemic’ of fake news’, Financial Times, 25 March: p. 23. 

[2] Dutton, W. (1994), ‘Trickle-Down Social Science: A Personal Perspective,’ Social Sciences, 22, 2.

[3] Jim Waterson (2020), ‘Boris Johnson’s Covid-19 address is one of the most-watched TV programmes ever’, The Guardian, 24 March: https://www.theguardian.com/tv-and-radio/2020/mar/24/boris-johnsons-covid-19-address-is-one-of-most-watched-tv-programmes-ever

Women and the Web

News reports today cite one of the inventors of the Web, Sir Tim Berners-Lee, as arguing that the Web “is not working for women and girls”. Tim Berners-Lee is a hero to all of us involved in the study and use of the Internet, Web, and related information and communication technologies. Clearly, many women and girls might well ‘experience violence online, including sexual harassment and threatening messages’. This is a serious problem, but it should not go unnoticed that the Internet and Web have been remarkably gender neutral with respect to access. 

In fact, women and girls access and use the Internet, Web and social media at about the same level as men and boys. There are some nations in which the use of the Internet and related ICTs is dramatically lower for women and girls, but in Britain, the US, and most high-income nations, digital divides are less related to gender than to such factors as income, education, and age.  This speaks volumes about the value of these media to women and girls, and this should not be lost in focusing on problematic and sometimes harmful aspects of access to content on the Web and related media. 

Below is one example of use of the Internet by gender in Britain in 2019, which shows that women are more likely to be next generation users (using three or more devices, one of which is mobile) and less likely to be non-users:

The full report from which this is drawn is available online here.

Jettison the Digital Nanny State: Digitally Augment Users

My last blog argued that the UK should stop moving along the road of a duty of care regime, as this will lead Britain to become what might be called a ‘Digital Nanny State’, undermining the privacy and freedom of expression of all users. A promising number of readers agreed with my concerns, but some asked whether there was an alternative solution.

Before offering my suggestions, I must say that I do not see any solutions outlined by the duty of care regime. Essentially, a ‘duty of care’ approach,[1] as outlined in the Cyber Harms White Paper, would delegate solutions to the big tech companies, threatening top executives with huge fines or criminal charges if they fail to stop or address the harms.[2] That said, I assume that any ‘solutions’ would involve major breaches of the privacy and freedom of expression of Internet users across Britain, given that surveillance and content controls would be the most likely necessities of their approach. The remedy would be draconian and worse than the problems to be addressed.[3]

Nevertheless, it is fair to ask how the problems raised by the lists of cyber harms could be addressed. Let me outline elements of a more viable approach. 

Move Away from the Concept of Cyber Harms

Under the umbrella of cyber harms is lumped a wide range of problems that have little in common beyond being potential problems for some Internet users. Looked at with any care, it is impossible to see them as similar in origin or solution. For example, disinformation is quite different from sexting. They involve different kinds of problems, affecting different people, imposed by different actors. Trolling is a fundamentally different set of issues from the promotion of female genital mutilation (FGM). The only common denominator is that any of these actions might result in some harm at some level for some individuals or groups – but they are so different that it violates common sense and logic to put them into the same scheme. 

Moreover, many of these problems are not harms per se, but actions that could be harmful – maybe even leading to many harms at many different levels, from the psychological to the physical. Step one in any reasonable approach would be to decompose this list of cyber harms into specific problems in order to think through how each problem could be addressed. Graham Smith captures this problem in noting that the mishmash of cyber harms might be better labelled ‘users behaving badly’.[4] The authors of the White Paper did not want a ‘fragmented’ array of problems, but the reality is that there are distinctly different problems that need to be addressed in different ways, in different contexts, by different people. Others have argued, for example, for looking at cyber harms from the perspective of human rights law. But each problem needs to be addressed on its own terms.

Remember that Technologies have Dual Effects

Ithiel de Sola Pool pointed out how almost any negative impact of the telephone could be said to have exactly the opposite impact as well – ‘dual effects’.[5] For example, a telephone in one’s home could undermine your privacy by interrupting the peace and quiet of the household, but it could also provide more privacy compared to people coming to your door. A computer could be used to enhance the efficiency of an organization, but if poorly designed and implemented, the same technology could undermine its efficiency. In short, technologies do not have inherent, deterministic effects, as their implications can be shaped by how we design, use and govern them in particular contexts. 

This is important here because the discussion of cyber harms is occurring in a dystopian climate of opinion. Journalists, politicians, and academics are jumping on a dystopian bandwagon that is as misleading as the utopian bandwagon of the Arab Spring, when many thought the Internet would democratize the world. Both the utopian and dystopian perspectives are misleading, deterministic viewpoints that are unhelpful for policy and practice. 

Recognise: Cyber Space is not the Wild West

Many of the cyber harms listed in the White Paper are activities that are illegal. It seems silly to remind the Home Office that what is illegal in the physical world is also illegal online, in so-called cyber space or our virtual world. Given that financial fraud or selling drugs is illegal, it is illegal online and is a matter for law enforcement. The difference is that activities online do not always respect the same boundaries as activities in the real world of jurisdictions, law enforcement, and the courts. But this does not make the activities any less illegal, only more jurisdictionally complex to police and enforce. This does not require new law, but better approaches to connecting and coordinating law enforcement across the geography of spaces and places. Law enforcement agencies can request information from Internet platforms, but they probably should not outsource law enforcement, as suggested by the cyber harms framework. Cyber space is not the “Wild West” and never was.

Legal, but Potentially Harmful, Activities Can be Managed

The White Paper lists many activities that are not necessarily illegal – some actions are not illegal but potentially harmful. Cyberbullying is one example. Someone bullying another person is potentially, but not necessarily, harmful. It is sometimes possible to ignore or stand up to a bully and find that this actually raises one’s self-esteem and sense of efficacy. A bully on the playground can be stopped by a person standing up to him or her, by another person intervening, or by a supervisor on the playground calling a stop to it. If an individual repeatedly bullies, or actually harms, another person, then they face penalties in the context of that activity, such as the school or workplace. In many ways, the record left by cyberbullying can be useful in proving that a particular actor bullied another person. 

Many other examples could be developed to show how each problem has unique aspects and requires different networks of actors to be involved in managing or mitigating any harms. Many problems do not involve malicious actors, but some do. Many occur in households, others in schools, and workplaces, and anywhere at any time. The actors, problems, and contexts matter, and need to be considered in addressing these issues. 

Augment User Intelligence to Move Regulation Closer to Home

Many are beginning to address the hype surrounding artificial intelligence (AI) as a technological fix.[6] But in the spirit of Douglas Engelbart in the 1950s, computers and the Internet can be designed to ‘augment’ human intelligence, and AI along with other tools has the potential to augment the choices of Internet users, as so widely experienced in the use of search. While technically and socially challenging, it is possible – and an innovative challenge – to develop approaches that use digital technology to move regulation closer to the users: with content regulation, for example, being enabled by networked individuals, households, schools, businesses, and governmental organizations, as opposed to moving regulation up to big tech companies or governmental regulators. 

Efforts in the 1990s to develop a violence-chip (V-chip) for televisions provide an early example of this approach. It was designed to allow parents to set controls to prevent young children from watching adult programming. It would move content controls closer to the viewers and, theoretically, to parents. [Children were often the only members of the household who knew how to use the V-chip.] The idea was good; its implementation was limited. 

Cable television services often provide a child lock to reduce children’s access to adult programming. Video streaming services and age verification systems have had problems but remain ways of enabling a household to make its services safer for children. Mobile Internet and video streaming services have apps for kids. Increasingly, it should be possible to design more ways for users and households to control access to content, addressing many of the problems raised by the cyber harms framework, such as access to violent content, which users themselves can filter.

With emerging approaches to AI, for example, it could be possible to provide not simply warning flags, but information that users could act on in deciding whether to block or filter online content, such as unfriending a social media user. With respect to email, while such tools are in their infancy, there is the potential for AI to be used to identify emails that reflect bullying behavior. Internet users will thus be increasingly able to detect individuals or messages that are toxic or malicious before they even see them, much as spam and junk mail can disappear before ever being seen by the user.[7] Mobile apps, digital media, intelligent home hubs and routers, and computer software generally could be designed and used to enable users to address their personal and household concerns. 

One drawback might be the ways in which digital divides in access and skills could give the most digitally empowered households more sophisticated control over content and services. This will create a need for public services to help households without the skills ‘in-house’ to grapple with emerging technology. However, this could be a major aspect of the educational and awareness training that is one valuable recommendation of the Cyber Harms White Paper. Some households might create a personalized and unique set of controls over content, while others might simply choose from a number of set profiles that can be constantly updated, much like anti-virus software and spam filters that permit users to adjust the severity of filtering. In the future, it may be as easy to avoid unwanted content as it now is to avoid spam and junk mail. 
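The idea of set profiles with adjustable filtering severity can be illustrated with a toy sketch. Everything here is hypothetical – the profile names, word lists, and function are illustrative only, and a real system would rely on trained classifiers rather than keyword lists – but it shows the shape of user-side filtering: messages are screened against a user-chosen profile before they are ever displayed.

```python
# Toy sketch of user-side content filtering: the user picks a severity
# profile, and messages containing blocked terms never reach the screen,
# much as a spam filter hides junk mail before it is seen.

# Hypothetical severity profiles, analogous to the adjustable settings
# of anti-virus software and spam filters.
BLOCKLISTS = {
    "strict": {"insult", "threat", "slur"},
    "moderate": {"threat", "slur"},
    "off": set(),
}

def filter_messages(messages, severity="moderate"):
    """Return only the messages that contain no blocked terms
    under the chosen severity profile."""
    blocked_terms = BLOCKLISTS[severity]
    visible = []
    for msg in messages:
        words = set(msg.lower().split())
        if words.isdisjoint(blocked_terms):
            visible.append(msg)
    return visible

inbox = ["hello there", "you are an insult magnet", "lunch at noon?"]
print(filter_messages(inbox, severity="strict"))
# prints ['hello there', 'lunch at noon?']
```

The design point is that the blocklists and the severity choice sit with the user or household, not with a central regulator or platform – the same content can be visible under one profile and hidden under another.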

Disinformation provides another example of a problem that can be addressed by existing technologies, like the use of multiple media sources and search technologies. Our own research found that most Internet users consulted four or more sources of information about politics, for example; even online alone, they consulted an average of four different sources.[8] These patterns of search mean that very few users are likely to be trapped in a filter bubble or echo chamber, albeit still subject to the selective perception bias that no technology can cure. 


My basic argument is not to panic in this dystopian climate of opinion, and to consider the following:

  • Jettison the duty of care regime. It will create problems that are disproportionately greater than the problems to be addressed.
  • Jettison the artificial category of cyber harms. It puts apples and oranges in the same basket in very unhelpful ways, mixing legal and illegal activities, and activities that are inherently harmful, such as the promotion of FGM, with activities whose harms can be mitigated by a variety of actors and actions. 
  • Augment the intelligence of users. Push regulation down to users – enable them to regulate the content seen by themselves and their children. 

If we get rid of this cyber harm umbrella and look at each ‘harm’ as a unique problem, with different actors, contexts, and solutions, then they can each be dealt with through more uniquely appropriate mechanisms. 

That would be my suggestion. It is not as simple as asking others to just ‘take care of this’ or ‘stop this’, but there simply is no magic wand or silver bullet that the big tech companies have at their command. Sooner or later, each problem needs to be addressed by different but appropriate sets of actors, ranging from children, parents, and Internet users to schools, businesses, governmental organizations, law enforcement, and Internet platforms. The silver lining might be that, as the Internet and its benefits become ever more embedded in everyday life and work, and as digital media become more critical, we will routinely consider the potential problems as well as the benefits of every innovation in the design, use, and governance of the Internet. All should aim to further empower users to use and control the Internet and related digital media, and to network with others in doing so, rather than be controlled by a nanny state.  

Further Reading

Useful and broad overviews of the problems with the cyber harms White Paper are available by Gian Volpicelli in Wired[9] and Graham Smith[10] along with many contributions to the Cyber Harms White Paper consultation.


[1] A solicitor, Graham Smith, has argued quite authoritatively that the White Paper actually “abandons the principles underpinning existing duties of care”, see his paper, ‘Online Harms White Paper Consultation – Response to Consultation’, 28 June 2019, posted on his Cyberleagle blog: https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html

[2] https://www.bmmagazine.co.uk/news/tech-bosses-could-face-criminal-proceedings-if-they-fail-to-protect-users/

[3] Here I found agreement with the views of Paul Bernal’s blog, ‘Response to Online Harms White Paper’, 3 July 2019: https://paulbernal.wordpress.com/2019/07/03/response-to-online-harms-white-paper/ Also, see his book, The Internet, Warts and All. Cambridge: Cambridge University Press, 2018.

[4] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

[5] Ithiel de Sola Pool (1983), Forecasting the Telephone: A Retrospective Technology Assessment. Norwood, NJ: Ablex. 

[6] See, for example, Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’, October 2019, forthcoming in the European Journal of Risk Regulation, available at: https://osf.io/preprints/lawarxiv/dvx4f/

[7] https://www.theguardian.com/technology/2020/jan/03/metoobots-scientists-develop-ai-detect-harassment

[8] See Dutton, W. H. and Fernandez, L., ‘How Susceptible are Internet Users?’, Intermedia, Vol. 46, No. 4, December/January 2019.

[9] https://www.wired.co.uk/article/online-harms-white-paper-uk-analysis

[10] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

Britain’s Digital Nanny State

The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. Perhaps because discussion of a new approach to Internet regulation occurred in the midst of the public’s focus on Brexit, this initiative has not really been exposed to critical scrutiny. Ironically, its implementation would do incredible harm to the human rights of the public at large, albeit in the name of curbing the use of the Internet by malicious users, such as terrorists and pedophiles. Hopefully, it is not too late to reconsider this cyber harms framework. 

The problems with the government’s approach were covered well by Gian Volpicelli in an article in Wired UK. I presented my own concerns in a summary to the consumer forum for communications in June of 2019.[1] The problems with this approach were so apparent that I could not imagine this idea making its way into the Queen’s Speech as part of the legislative programme for the newly elected Conservative Government. It has, so let me briefly outline my concerns. 

Robert Huntington, The Nanny State, book cover

The aim has been to find a way to stop illegal or ‘unacceptable’ content and activity online. The problem has been finding a way to regulate the Internet and social media in ways that could accomplish this aim without violating the privacy and freedom of all digital citizens – networked individuals, such as yourself. The big idea has been to apply a duty of care responsibility to the social media companies, the intermediaries between those who use the Internet. In the past, Internet companies, like telephone companies, would generally not be held responsible for what their users do; their liability was very limited. Imagine a phone company sued because a pedophile used the phone. The phone company would have to surveil all telephone use to catch offenses. Likewise, Internet intermediaries will need to know what everyone is using the Internet and social media for in order to stop illegal or ‘unacceptable’ behavior. This is one reason why many commentators have referred to this as a draconian initiative. 

So, what are the possible harms? Before enumerating the harms the initiative does consider, note that it does not deal with harms covered by other legislation or regulators, such as privacy, which is the responsibility of the Information Commissioner’s Office (ICO). Ironically, one of the major harms of this initiative will be to the privacy of individual Internet users. Where is the ICO?

The harms cited as within the scope of this cyber harms initiative included: child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; harassment and cyberstalking; hate crime; encouraging and assisting suicide; incitement to violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18). This is only a start, as there are cyber harms with ‘less clear’ definitions, including: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM); and underage exposure to legal content, such as children accessing pornography, and spending excessive time online – screen time. Clearly, this is a huge range of possible harms, and the list can be expanded over time, as new harms are discovered. 

Take one harm, for example, disinformation. Seriously, do you want the regulator, or the social media companies, to judge what is disinformation? This would be ludicrous. Internet companies are not public service broadcasters, even though many would like them to behave as if they were. 

The idea is that those companies that allow users to share or discover ‘user-generated content or interact with each other online’ will have ‘a statutory duty of care’ to be responsible for the safety of their users and prevent them from suffering these harms. If they fail, the regulator can take action against the companies, such as fining the social media executives, or threatening them with criminal prosecution.[2]

The White Paper also recommended several technical initiatives, such as to flag suspicious content, and educational initiatives, such as in online media literacy. But the duty of care responsibility is the key and most problematic issue. 

Specifically, the cyber harms initiative poses the following risks: 

  1. Covering an overly broad and open-ended range of cyber harms;
  2. Requiring surveillance in order to police this duty that could undermine privacy of all users;
  3. Incentivizing companies to over-regulate content & activity, resulting in more restrictions on anonymity, speech, and chilling effects on freedom of expression;
  4. Generating more fear and panic among the general public, undermining adoption and use of the Internet and widening digital divides;
  5. Necessitating an invasive monitoring of content, facing a volume of instances that is an order of magnitude beyond traditional media and telecom, such as 300 hours of video posted on YouTube every minute;
  6. Essentially targeting American tech giants (no British companies), and even suggesting subsidies for British companies, which will be viewed as protectionist, leaving Britain as a virtual backwater of a more global Internet; 
  7. Increasing the fragmentation of Internet regulators: a new regulator, Ofcom, new consumer ‘champion’, ICO, or more?

Notwithstanding these risks, this push is finding support for a variety of reasons. One general driver has been the rise of a dystopian climate of opinion about the Internet and social media over the last decade. This has been exacerbated by concerns over child protection and elections in the US, across Europe, such as with Cambridge Analytica, and with Brexit that created the spectre of foreign interference. Also, Europe and the UK have not developed Internet and social media companies comparable to the so-called big nine of the US and China. (While the UK has a strong online game industry, this industry is not mentioned at all in the White Paper, except as a target of subsidies.) The Internet and social media companies are viewed as foreign, and primarily American, companies that are politically popular to target. In this context, the platformization of the Internet and social media has been a gift to regulators — the potential for companies to police a large proportion of traffic, providing a way forward for politicians and regulators to ‘do something’. But at what costs? 

The public has valid complaints and concerns over instances of online harms. Politicians have not known what to do, but now have been led to believe they can simply turn to the companies and command them to stop cyber harms from occurring, or they will suffer the consequences in the way of executives facing steep fines or criminal penalties. But this carries huge risks, primarily in leading to over-regulation and inappropriate curtailing of the privacy and freedom of expression of all digital citizens across the UK. 

You only need to look at China to see how this model works. In China, an Internet or social media company could lose its license overnight if it allowed users to cross red lines determined by the government. This fear has unsurprisingly led to over-regulation by these companies; thus, the central government of China can count on private firms to strictly regulate Internet content and use. A similar outcome will occur in Britain, making it not the safest place to be online, but a place you would not want to be online, with your content and even your screen time under surveillance. User-generated content will be dangerous. Broadcast news and entertainment will be safe. Let the public watch movies. 

In conclusion, while I am an American, I don’t think this is simply an American obsession with freedom of expression. That right is not absolute even in the USA. Internet users across the world value their ability to ask questions, voice concerns, and use online digital media to access information, people, and services they like without fear of surveillance.[3] The Internet can be a technology of freedom, as Ithiel de Sola Pool argued, in countries that support freedom of expression and personal privacy. If Britons decide to ask the government and regulators to restrict their use of the Internet and social media – for their own good – then they should support this framework for an e-nanny, or digital-nanny, state. But they should recognize that real cyber harms for Britain will result from this duty of care framework. 


[1] A link to my slides for this presentation is here: https://www.slideshare.net/WHDutton/online-harms-white-paper-april-2019-bill-dutton?qid=5ea724d0-7b80-4e27-bfe0-545bdbd13b93&v=&b=&from_search=1

[2] https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7

[3] Dutton, W. H., Law, G., Bolsover, G., and Dutta, S. (2013, released 2014) The Internet Trust Bubble: Global Values, Beliefs and Practices. NY: World Economic Forum. 

Managing the Shift to Next Generation Television

Columbia University’s Professor Eli Noam was in Oxford yesterday, 17 October 2019, speaking at Green Templeton College about two of his most recent books, entitled ‘Managing Media and Digital Organizations’ and ‘Digital and Media Management’: https://www.palgrave.com/gp/book/9783319712871. The title of his talk was ‘Does Digital Management Exist? Challenges for the Next Generation of TV’. Several departments collaborated with Green Templeton College in supporting this event, including the Oxford Internet Institute, the Saïd Business School, the Blavatnik School of Government, and Voices from Oxford. 

Green Templeton College Lecture Hall

Professor Noam has focused attention on what seems like a benign and economically rational technical shift from linear TV to online video. Most people have some experience with streaming video services, for example. But the longer term prospects of this shift could be major (we haven’t seen anything yet) and have serious social implications that drive regulatory change, and also challenge those charged with managing the media. What is the next generation of digital television? Can it be managed? Are the principles of business management applicable to new digital organizations? 

The Principal of Green Templeton College, Professor Denise Lievesley, opened the session and introduced the speaker and two discussants: Professor Mari Sako, from the Saïd Business School, and Damian Tambini, from the Department of Media and Communications at LSE and a former director of Oxford’s Programme in Comparative Media Law and Policy (PCMLP). Following Eli Noam’s overview of several of the key themes developed in his books, and the responses of the discussants, the speakers fielded a strong set of questions from other participants. Overall, the talk and discussion focused less on the management issues and more on the potential social implications of this shift and the concerns they raise. 

Roland Rosner, Eli Noam, Bill Dutton, Mari Sako, Damian Tambini

The social implications are wide ranging, including a shift towards more individualized, active, immersive, and global media. There will be some of the ‘same old same old’, but also ‘much more’ that brings many perspectives on the future of television into households. The concerns raised by these shifts range from threats to privacy and security to even shorter attention spans – can real life compete with sensational immersion in online video? Perhaps the central concern of the discussion was media concentration, not only in cloud services, such as those offered by the big tech companies, but also in national infrastructures, content, and devices. 

This led to a discussion of the policy implications arising from such concerns, particularly in the aftermath of the 2016 elections, mainly around efforts to introduce governmental regulation of the global online companies and governmental pressures on platforms to censor their own content. This surfaced some debate over cross-national and regional differences in approaches to freedom of expression and media regulation. While there were differences of opinion on the need for and nature of greater regulation, there seemed to be little disagreement with Eli’s argument that many academics have moved from being cheerleaders to fear mongers, when we should all seek to be ‘thought leaders’ in this space, given that academics should have independence from government and the media, and an understanding informed by systematic research rather than conventional wisdom. 

Eli Noam presenting his lecture on Digital Media Management

Eli is one of the world’s leading scholars on digital media and management, and his latest books demonstrate his command of this area. One of the speakers referred to his latest tome as ‘an MBA in a box’. The text comes in versions for undergraduate and graduate courses, and every serious university library should have them in its collection. 

Bill and Eli with Susanne, a former Columbia University student of Eli’s, now at the OII and Green Templeton College, holding Eli’s new books

Notes: 

Eli Noam has been Professor of Economics and Finance at the Columbia Business School since 1976, where he is also the Garrett Professor of Public Policy and Business Responsibility. He has been Director of the Columbia Institute for Tele-Information, and was one of the key advisors to the Oxford Internet Institute, serving on its Advisory Board from its founding in 2001 through the Institute’s first decade. 

His new books on digital media and organizations have been praised by a range of digital and media luminaries, from Vint Cerf, one of the fathers of the Internet, to Gerald Levin, former CEO of Time Warner, and Robert Zitter, former CTO of HBO. 

An interview with Eli Noam will be available soon via Voices from Oxford.

Fake News Nation – a new book by Aspray and Cortada is out!

I’d like to recommend to you a new book, entitled Fake News Nation: The Long History of Lies and Misinterpretations in America (Rowman & Littlefield, 2019). Information about the book is at: https://rowman.com/ISBN/9781538131107/Fake-News-Nation-The-Long-History-of-Lies-and-Misinterpretations-in-America

As I noted in my endorsement of this book: “James W. Cortada and William Aspray’s brilliantly selected and crafted case studies are must-reads because they bring historical insight to issues of fake news, disinformation, and conspiracy theories of our digital age.”