Private Emails Are Not (Yet) a Thought Crime

Private Emails? A Personal Perspective on Politicizing Norms of Communication

In Orwell’s 1984, Winston Smith opens himself up to accusations of thought crimes simply by walking down a street with a shop where he could buy pen and paper. In 2021, politicians and even the UK’s Information Commissioner wonder if ministers are guilty of some criminal offense for using private email.[1] The ICO, charged with protecting our privacy, does not want to lose data critical to its surveillance of public officials – all in the name of ‘transparency’.

Increasingly, accusations seem to fly around such issues as the security of public officials using personal email. While security, legal, and privacy issues are embedded in these criticisms of the practices of others, my concern is over the degree to which they lack common sense and any historical perspective, and politicize what is fundamentally a cultural difference that has arisen over the decades across different kinds of Internet users. Moreover, technical advances are diminishing the distinctions being drawn. Let me explain on the basis of my experiences.

Winston Smith, 1984

I began using email around 1974, when I had to call colleagues to tell them to look for an email sent from me. Otherwise, they would not check their inbox. Those were early days, when academics could get an email address from their university if it was one of the institutions that were early nodes on the ARPANET.

At that time, in the early 1970s, I wrote most of my correspondence by hand, and it was typed up by a pool of typists. I would revise a draft and someone in the pool would retype it for me to mail or fax. A carbon copy of all my letters was (I discovered) put in a chronological file of all correspondence going out of our academic research unit and studiously read by one of our managers. He knew what was going on across the organization by reading all of our outgoing correspondence. This was part of a culture of administrative control, which I accepted, but did not like and was surprised to discover. That said, I was an employee of an organization, and in that role it is arguable that I did not have a true right to privacy within the organization.

Presumably, even in those early days, an archive of all incoming and outgoing emails existed in the university, so our manager might have had even better intelligence about our work, but most administrators were not email users. If a malicious user sent hate mail, for example, I would imagine it could be found in the archive, but then again, it is likely to have been sent under another user’s name. (Yes, this was a problem in the very early days of email.)

By the early 1980s, one amusing (to me) concern in business and industry around email was its use for social purposes. Before email, most electronic communication was costly for organizations. The telegraph created a mindset in government and industry of every letter and word costing money, so electronic communication, reinforced by the fax machine, was considered costly compared to regular, physical mail – later called ‘snail mail’.

So when employees in organizations began using email, managers were concerned about the cost and the potential waste of money if it were used for social purposes. Academics used university email for anything – teaching, research, or personal reasons – and lived in a sort of free culture, meaning free of control as well as cost. But this was not the case in business and government, where the legacy of telegrams, faxes, and costly phone calls created a sense of email being expensive.

One of my students in the early 1980s studied an aerospace company in Los Angeles and found the managers very concerned over employees using email for personal or social purposes. Rather than counting every letter and word, employees would embellish their business correspondence with a joke, questions, or pleasantries about the family. Even then, we defended the social uses of email at work, as doing so would undoubtedly help executives and other employees adopt this new communication system. Moreover, communication in the workplace has always been a blend of social and business uses, such as over the proverbial watercooler. Nevertheless, an administrative control structure still pervaded the use of communication at work.

It was only when private email services arose – such as through CompuServe, from 1978, and one of the first commercial email services, MCI Mail, launched in 1983 – that this mindset began to change. Google Mail (Gmail) was launched in its beta version in 2004, about the time MCI Mail folded. Private email services like Gmail made it possible to escape this administrative control structure and the control culture of communication in organizations.

In my own case, having changed universities many times, one of the only steady email addresses I have maintained has been my Gmail account, established with the beta version. I’ve never sensed it being any less secure than my university accounts, and I don’t have the feeling that an administrator is looking over my shoulder. It is free of charge and free of administrative surveillance – I pay with my data. My main concern is not burdening colleagues with unnecessary or overly frivolous email messages. The last thing I want to do is audit myself to determine whether my message to a particular person about a particular topic requires me to use my personal email or one of my academic email accounts.

Moreover, today, more individuals are moving to private conferencing (e.g., Zoom, Teams, Skype) and private messaging services (e.g., WhatsApp, WeChat, Telegram, Signal, Slack, or others) rather than email for interpersonal communication. If you are in government or business or academia, you want your colleagues to be exploring and innovating and using those information and communication services that support their work. Don’t dictate what those are. Let them decide in the spirit of bottom-up innovation within your organization. But this is exactly the worry of the ICO and politicians who fear they will not have access to every word written by a public servant. 

But will private services undermine security? Increasingly, public organizations from universities to governments are moving more of their services, such as email, to the cloud. That is, they are not running their own home-grown institutional services, but outsourcing to private cloud service providers, which offer pretty good security protection. This is how private gmail is provided as well. So, no, it will not undermine security.

To me, those who discuss email use from such an administrative control perspective are simply administrative control types – in a pejorative sense of that term. I for one do not want to be told what email account or what information or communication services to use for each and every purpose. I am not at the extreme of the ‘free software’ movement of Richard Stallman, but I am sufficiently supportive of civil liberties that I find these almost Orwellian efforts to police our communication to be a huge mistake.

Some politicians and administrators live in a control culture rather than a free digital culture. However, interpersonal communication is good to support, particularly in these times of incivility and toxic politics. Let’s encourage it and not politicize email or the use of private messaging on any account. 

Reference

Richard M. Stallman (2002, 2015), Free Software, Free Society, Third Edition. Boston, MA: Free Software Foundation. 


[1] https://news.yahoo.com/information-watchdog-launches-investigation-health-194714162.html

Flawed Economics Behind Online Harms Regulation

The Flawed Economics of Online Harms Regulation

I am not an economist, but even I can see the huge flaws in a recently published “cost/benefit analysis of the UK’s online safety bill”.[1] My immediate reactions:

The author, Sam Wood, of ‘The Economics of Online Harms Regulation’ in InterMEDIA, begins with an argument that the pandemic ‘[fuelled] concerns about harmful content and behaviour encountered online’. Quite the contrary: I think it is arguable that the Internet and related online media became a lifeline for households in the UK and across the world during this period of lockdowns and working from home. The impetus for the online harms [now called ‘safety’] bill was fueled by the demonization of social media in the years before the pandemic. So, from the very introduction to this piece, I worried about the credibility of the economic analysis it promises.

I was not disappointed. It is jaw-dropping. Even after enumerating some of the many online harms to be addressed and outlining some of the ‘challenges’ in quantifying them, the piece proceeds to do exactly that. Using Department for Digital, Culture, Media & Sport (DCMS) estimates, the author argues that the estimated costs of seven types of harm – the greatest being cyberstalking, at £2,176m over ten years beginning in 2023 – are much greater than the costs of implementing the bill, the greatest of which is ‘content moderation’, estimated at £1,700m over the same ten years.

Pulling the costs of regulation out of a hat?

Of course, the costs of implementing this bill are not simply captured by the activities enumerated: awareness campaigns, creating reporting mechanisms, updating terms of service, producing risk assessments, content moderation, and transparency reports. What has the analysis left out?

Well, what about reductions in freedom of expression and commensurate reductions in the value and use of the Internet and related social media for the public good? This will be the major impact of the disproportionate incentivisation of censorship by tech platforms seeking to avoid the potentially huge costs to be imposed by government regulators.

The duty of care ‘solution’ is itself the problem, one that will have major negative impacts on what has long been a lifeline for households, educators, healthcare professionals, and government departments – a role made so visible by the pandemic. The duty of care mechanism will incentivise censorship and surveillance of users and push the tech platforms to act like newspapers in performing ever stronger editorial roles, such as determining what is ‘disinformation’.

One could ask: Would not an economist need to list all the benefits of social media and related activities, and not just the costs?

This is not a neutral, critical analysis, but seems to be a political stitch-up to support the proposed regulation. That said, such a flawed analysis might well make a better case for opposing than supporting this bill. Read it, consider its flaws, and oppose this misguided effort to address particular grievances by introducing a terrible policy. The proposed bill will do unmeasured damage to one of the most critical infrastructures available here and now for communication and information for every age group in the UK and worldwide.

With apologies to the journal editors, but if the BBC or public service broadcasting were subject to such a flawed analysis, I am sure that InterMEDIA would not have even considered publishing such a piece. Then again, this bill seems to enjoy widespread support and specifically advertises its intent to protect freedom of expression. Yet how often do the intentions of regulation end in failure, with unintended collateral damage overwhelming any positive outcomes? I’m afraid we are about to see this happen as this bill undermines an open and global Internet and free expression, and privacy is further eroded in order to enforce tech’s duty of care.

Progress on the pandemic is allowing the UK to talk about moving back to a new normal. In that spirit, may the UK apply a level of common sense and closer parliamentary and public scrutiny to the online safety bill – a level of care that such an important piece of media regulation would normally receive.


[1] Sam Wood, ‘The Economics of Online Harms Regulation’, InterMEDIA, 49(2): 31-34. 

Value Tradeoffs for a Cashless Society

A recent news story (Sunday Times, 6 June 2021) highlighted the potential for Sweden to lead the way to a ‘cashless’ future.[1] This is not surprising in the context of so many observable trends moving in this direction. However, it reminded me of the early forecasts of a cashless society that were debated in the 1970s, and in particular of the work of my former colleague and pioneer of social informatics, the late Rob Kling, who died in 2003.[2]

PPRO Colleagues 1979

Early in my research on the social aspects of information and communication technologies, I had the opportunity to collaborate with Professor Rob Kling at UC Irvine, when we were both involved with the Public Policy Research Organization (PPRO), directed by Professor Kenneth Kraemer. I joined this team, which also included John Leslie King, Jim Danziger, Alana Northrop, and others, in 1974 to work on the URBIS Project. Supported by the US National Science Foundation, URBIS was one of the first systematic evaluations of the role and impact of computing in American local governments.[3]

In 1976, Rob published one of his early critiques of what were then called ‘electronic funds transfer systems’, a piece that pioneered in raising some of the social and ethical issues they posed for society, notably around privacy. Here is the abstract of this piece, entitled ‘Passing the Digital Buck: Unresolved Social and Technical Issues in Electronic Funds Transfer Systems’:

“Over the last decade, plans for using computer-based systems to automate the transfer of debits and credits have moved from a technologist’s pipe dream to an emerging reality. During the last few years, several components of this technology have been developed in prototype form and have begun to be implemented on a large scale. While such systems promise financial benefits for the institutions that exploit them, they also raise significant social, legal, and technical questions that must be resolved if full-scale Electronic Funds Transfer Systems (EFTS) are not to cause more problems for the larger public than they solve. Few of these problems have been systematically articulated. This paper describes the mechanics of EFTS, and the benefits it should provide its promoters. But it emphasizes a variety of the problems that EFTS raises and places them in context.”[4]

Like many others, I’ve followed the development of electronic payment systems over the decades. Three simple but notable reflections repeatedly come to mind from this work. 

One is the degree to which some thoughtful thinkers really can provide valuable forecasts of future developments. I most often find myself marveling at how wrong forecasting can be, but yes, there are some clear examples of individuals, like James Martin, clarifying the social and technical dynamics of likely trends and their future development. Rob’s discussion of the social and value tradeoffs of EFTS is one that we are seeing played out today – four decades later. The trick is to sort out the forecasts that are truly prescient – those with a sound empirical basis in the history and underlying dynamics of a development – from those that are silly, simply technologically deterministic extrapolations, or based on a limited and possibly misleading example. Of course, even the best of forecasts need to be understood as problematic given the many factors shaping the use and impacts of technical innovations.

The second is that everyone needs to be skeptical of forecasts, as long-range expectations about the future are most often overly optimistic or pessimistic. Even forecasts that are on target are often a decade or two further in the future than originally forecast. Video telephony was forecast in the 1960s and marketed in the early 1970s but is only recently flourishing.

The third is the unpredictable fluctuations in these trends. Development is not just a straightforward linear, non-linear, or slower curve; it often entails major perturbations over time. For example, in the case of digital payments, the automated teller machine was an early development that seemed to be a gift enabling a return to privacy. Rather than paying for everything electronically, people tended to get cash from distributed teller machines and were therefore able to make a larger proportion of their purchases privately – using cash. So, surprise – digital systems were enhancing privacy – but only for a time.

Of course, it became clear that cash withdrawals could be so well tracked that individuals could be followed with considerable accuracy. And today, given the many ways payments and clicks are analyzed online for marketing and advertising, the concept of ‘surveillance capitalism’ has become widely accepted.[5] Moreover, in the context of the global pandemic, individuals have been incentivized to use electronic payments for everything and not to use cash. That brings us full speed ahead into a more truly cashless society with all of the social and political tradeoffs that Rob warned us about in the 1970s. While even Rob could not have foreseen the pandemic and its pressure on moving to a cashless society, his forecasts of the value tradeoffs remain valuable to this day. However, far more empirical research needs to be conducted on the actual development and impacts of our cashless society.

Further Reading

‘The Social Construction of Rob Kling’, The Information Society, 2003, 19: 195-196. https://tisj.sitehost.iu.edu/contact/rltork.pdf

Rob Kling, ‘The Social and Institutional Meanings of Electronic Funds Transfer Systems’, Chapter 15 (pp. 183-195) in Kent Colton and Kenneth Kraemer (eds), Computers and Banking. New York: Plenum Press, 1980.


[1] https://www.thetimes.co.uk/article/sweden-leads-way-to-a-cashless-future-5kqj75mb9

[2] ‘The Social Construction of Rob Kling’, The Information Society, 2003, 19: 195-196. https://tisj.sitehost.iu.edu/contact/rltork.pdf

[3] This research was reported widely, but captured in two books, including Kraemer, K. L., Dutton, W. H., and Northrop, A. (1981), The Management of Information Systems, New York: Columbia University Press, and Danziger, J. N., Dutton, W. H., Kling, R., and Kraemer, K. L. (1982; 1983 paperback), Computers and Politics: High Technology in American Local Governments, New York: Columbia University Press.

[4] https://www.semanticscholar.org/paper/Passing-the-digital-buck-%3A-unresolved-social-and-in-Kling/83643a73b2c0400d0a680d4fd5e6a72f5e81e145#paper-header

[5] Shoshana Zuboff, The Age of Surveillance Capitalism. London: Profile Books. 

Six Benefits of Academics Working with Government

The Value of Academics Working with Government: Lessons from Collaboration on Cybersecurity 

William H. Dutton with Carolin Weisser Harris 

Six of the benefits of academics collaborating with government include realising the value of: 1) complementary perspectives and knowledge sets; 2) different communication skills and styles; 3) distributing the load; 4) different time scales; 5) generating impact; and 6) tackling multifaceted problems.

Our Global Cybersecurity Capacity Centre (GCSCC) at Oxford University recently completed a short but intense period of working with a UK Government team focused on cybersecurity capacity building with foreign governments. In one of our last meetings around our final reports, we had a side discussion – not part of the report – about the differences between academic researchers and our colleagues working in government departments. Of course, some academics end up in government and vice versa, but individuals quickly adapt to the different cultures and working patterns of government or academia if they choose to stay. 

For example, the differences in our time horizons were not controversial, as some of us on the academic team have been working on particular issues for decades while our government colleagues are focused on starting and finishing a project over a short, finite time, such as one year or even less. These different time horizons are only one of the many challenges tied to the very different ways of working, but what about the benefits?

Drawing courtesy of Arthur Asa Berger

What is the value of fostering more academic-government collaboration? Here we were not as quick to come up with clear answers. But collaboration between academia and government is more difficult than working within one’s own institutional context. There must be benefits to justify the greater commitments of time and effort to collaborate. On reflection, and from our experience, a number of real benefits and taken-for-granted assumptions come to mind. They are all ways of realising the value of:

1. Complementary Perspectives and Knowledge Sets

Our focus on cybersecurity, for example, is inherently tied to both academic research and policy and practice. By bringing actors together across academia and government, there is less risk of working in a way that is blind to the perspectives of other sectors. It might be impossible to shape policy and practice if the academic research is not alert to the issues most pertinent to government. Likewise, governments cannot establish credible policy or regulatory initiatives without an awareness of the academic controversies and consensus around relevant concepts and issues. 

2. Different Communication Skills and Styles

Academic research can get lost in translation if academics are not confronted with what resonates well with governmental staff and leadership. What is understood and misunderstood in moving across academic and government divides? Think of the acronyms used in government versus academia. How can assumptions and work be better translated to each set of participants? Working together forces a confrontation with these communication issues, as well as the different styles in the two groups. Comparing the slides prepared by academics with those of government staff can provide a sense of people coming from different planets, not just different sectors.  

3. Distributing the Load – Time to Read Everything?

My academic colleagues noticed that many in government simply did not have the time to read extremely long and often dense academic papers or books, much less to write a blog about collaborative research! It was far better to have brief, executive-oriented briefing papers. Better yet would be a short 10-minute oral explanation of any research or a discussion in the form of a webinar. Do they need to know the finest details of a methodology, or simply to have a basic understanding of the method and trust that the specific methodology followed was state of the practice, done professionally, or peer reviewed? Can they quickly move to: What did they find? Being able to trust the methods of the academics saved an enormous amount of time for the governmental participants.

Likewise, did the academics want to take the time to read very long and detailed administrative reports and government documents? Clearly, they also appreciated a brief summary or distillation of any texts that were not central to the study. Unless academics are focused on organizational politics and management, they often do not need to know why the government has chosen to support or not support particular work, but can trust that there is a green light to go ahead and that their colleagues in government will try to keep the work going.

So, the two groups read and were interested in reading and hearing different kinds of reports and documentation, about different issues, and at different levels. Working together, they could then cover more ground in the time of the project and better understand each other’s needs and what each could contribute to the collaboration.  

4. Different Time Scales

As mentioned above, another aspect of time was the different time scales of academic research versus governmental studies. One of our colleagues had been working on Internet studies for over four decades, but a short governmental study could easily draw on that investment in time. Not everyone needed to spend decades on research.

Academics can’t change the focus of their research too rapidly without losing their basis of expertise. The cycle of attention in government may move towards the interests of an academic from time to time, and when it does it is important to connect governmental staff with the right researchers to take advantage of their different time scales.

The different time scales do not undermine collaboration, but they put a premium on being able to connect governmental research with relevant academic research that is at a level and at a time at which the findings can be valuable to policy or practice. Academics cannot chase policy issues as they will always be late to the debate. But governmental researchers can find researchers doing relevant work that is sufficiently mature to inform the questions faced by the government. 

5. Generating Impact

Academics are increasingly interested in having an impact, which has been defined as ‘having an effect, benefit, or contribution to economic, social, cultural, and other aspects of the lives of citizens and society beyond contributions to academic research’ (Hutchinson 2019). Is their research read, understood, or acted upon? Does it make a difference to the sector of relevance to their research? Working directly with government can enhance the likelihood of governmental actors being aware of and reactive to academic research. Collaboration does not guarantee greater productivity (Lee and Bozeman 2005). However, it has the potential to support the greater dissemination of the research across government and create greater awareness of the evidence behind the policy advice of academic researchers.

Of course, governments do not simply write reports to tick boxes. They also wish to have an impact on policy or practice. Working with academics can help gain insights and credibility that can make reports more novel, interesting, and meaningful for enacting change in policy and practice. They can also gain a better sense of the limits of academic research as researchers explain the lack of evidence in some areas and the needs for additional work. 

6. Tackling Multifaceted Problems

Cybersecurity is not only tied to academia and government. Many other actors are involved. We found that our partners in government had different contacts with different networks of actors than we had and vice versa. Putting together these networks of actors enabled us to better embed the multiplicity of actors – other governments, civil society, non-governmental organizations, business and industry, and experts in cybersecurity – in our joint work. 

#

The potential benefits are many, but there are risks. Participants need to care a great deal about the common work and be committed to the area in order to overcome the challenges. That said, the different time frames, communication styles, and more that confront collaboration between government and academia not only can be addressed but also bring some benefits to the collaboration. 

Cybersecurity is one of many policy areas that requires engagement with various stakeholders, and for meaningful engagement to develop you need to build trustful relationships. Projects like ours where partners from different stakeholder groups (in this case academia and government) work together can enable building those trustful relationships and strengthen the potential for others to trust the outputs of joint projects.

References

Hutchinson, A. (2019), ‘Metrics and Research Impact’, pp. 91-103 in Science Libraries in the Self-Service Age. https://doi.org/10.1016/B978-0-08-102033-3.00008-8

Lee, S., and Bozeman, B. (2005), ‘The Impact of Research Collaboration on Scientific Productivity’, Social Studies of Science, 35(5): 673-702. DOI: 10.1177/0306312705052359. Online at: http://sss.sagepub.com/cgi/content/abstract/35/5/673

Online Micro-Choices in Remote Seminars, Teaching, and Learning

Online Micro-Choices Shaping Remote Seminars, Teaching, and Learning

The move to online education has been a huge shift, dramatically hastened by the COVID-19 pandemic and the existence of technical options, such as online meeting platforms like Zoom and Teams. Decades of handwringing and resistance over moves toward more online instruction, seminars, and lectures have collapsed as universities not only accept this shift but are supporting if not requiring it. In many respects, the move online has saved many educational institutions, and the new normal – whatever that ends up being – is almost certain to incorporate more online teaching and learning.

However, after participating in many online seminars, lectures, and conferences, I sense that it is time to focus far more attention on the micro-choices being made about the conduct of online teaching and learning. The focus should not be on whether to go online or offline, but on how to do online teaching and learning well.

There are books on teaching tips for graduate students and instructors, but fewer for the online world. That said, I imagine that most academics tend to follow the examples set by their own best teachers. Unfortunately, in the online world of education, there are fewer great examples on which developing teachers can model themselves. Moreover, I believe I am seeing so many problematic examples and trends emerging that the micro-choices underpinning them merit more critical discussion. 

Take, for example, the decision on whether or not to mute the audio and turn off the video of the audience – whether students or fellow colleagues. The convenor of an online session, such as over Zoom, can mute everyone but the speaker and turn off everyone’s video but the speaker’s, or they can simply ask everyone but the speaker to mute their own audio and turn off their video while the speaker or teacher is presenting. Who has permission to share their screen is another micro-choice of a convenor.

Screen sharing enables people to show a slide, a graph, or any image or text that they can put on their own screen to the group. For a small seminar with known participants, everyone can be enabled to share their screen. If a session is open to the public or brings together a larger group, screen sharing needs to be restricted to avoid problems such as Zoombombing – a malicious user sharing a vulgar image, for instance. But it is easy to keep the meeting link to those invited, use passwords to join, and restrict screen sharing to avoid such possible problems.
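To make these micro-choices concrete, here is a minimal sketch in Python – using hypothetical setting names of my own, not any platform’s actual API – of how a convenor might record their defaults and vary them between a small, known seminar and a larger or public session:

```python
from dataclasses import dataclass

@dataclass
class SeminarSettings:
    """Hypothetical record of a convenor's micro-choices (illustrative labels only)."""
    mute_attendees_on_entry: bool = True        # avoid barking dogs and crying babies
    attendees_can_unmute: bool = True           # raise a hand, then speak
    attendee_video_on_by_default: bool = True   # keep the audience visible to the speaker
    screen_sharing: str = "host_only"           # "everyone" only for small, known groups
    passcode_required: bool = True              # keep gate-crashers out of public sessions

def settings_for(session_type: str) -> SeminarSettings:
    """Choose defaults for a small, known seminar versus a larger or public session."""
    if session_type == "small_seminar":
        return SeminarSettings(mute_attendees_on_entry=False, screen_sharing="everyone")
    return SeminarSettings()  # conservative defaults otherwise

print(settings_for("small_seminar"))
```

The point is not the code but the checklist: each field is a deliberate choice by the convenor, not a default to be accepted unthinkingly.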

Muting everyone’s audio during a presentation seems to be good practice as well. You keep unplanned household sounds, like barking dogs and crying babies, from interrupting a seminar. And individuals normally have a means to raise their hand to ask a question or make a comment, so they can be unmuted when speaking. That said, if it is a small group discussion, such as one following a lecture, I think individuals should decide on their own whether to mute, such as if their dog starts barking, but generally remain unmuted to be as interactive as possible during the discussion. When education is being socially distanced in so many ways by going online, any opportunities to enhance sociality and interaction online should be seriously considered.

In contrast, in my opinion, stopping everyone’s video is not a good practice. Unfortunately, I see this becoming a trend. In the earliest weeks and months of the pandemic and online meetings, people tended to be visible online all the time, even when their audio was muted. With my video on, you could see whether I was on the call, whether I was listening, or whether I was multitasking. If I had to leave or take a break, I could switch to a still photo of me or my initials until I was ready to engage again. More importantly, the speakers would know that they were speaking to real, live human beings, rather than talking to themselves in a dark room.

Doing it Right: Video ON

Over time, it is clear that more universities and conferences are moving to shut off the video of the audiences, and only have video streaming on for the speaker or the panelists. Often this means that no one is visible as the speaker is presenting slides – such as when talking behind the slides occupying center stage. Once a critical proportion of the audience starts shutting off their video, then others feel pressured to as well, lest they be accused of perceiving themselves as too self-important. But it is for others, not for yourself, that it is good to be seen.  

I have taken issue with this minimalist approach to limiting video on the basis that it takes social distancing to an unacceptable and unjustifiable limit. Of course, I’ve heard justifications, such as maintaining the focus on the material on the slides and keeping people from being distracted by the images of audience members. Protecting the privacy of individuals and households is another. There are many ways to protect privacy of the listeners, such as by using a virtual background or sitting in front of a blank wall. Nevertheless, I find such justifications to be weak rationales for avoiding social interaction.

Teaching or lecturing is not simply about transferring information. If that were so, a reading or video recording would be superior to a seminar. Most importantly, teaching or lecturing is about motivating the audience – students or colleagues – to see your topic as important and interesting and worthy of reading and learning more about. That means you need to engage them in the presentation and make sure they are engaged. In the classroom, you can tell if students are not engaged, even if – as was the case in many in-person classes – many are pressed against the back row of seats. You can see if the audience is engaged online as well, but only if you keep the video going both ways. 

Also, you need to motivate the lecturer. Unless you are very shy or nervous about public speaking, I can’t think of what could be more deflating than speaking to a set of initials, a blank screen, or simply reading your own slides. Cut off the video and you risk disengaging the speaker as well as the audience.

Obviously, I am a cranky, old colleague, easily annoyed, and opinionated. Fine if you disagree with my suggestions, but you should really think through these many micro-choices you make in presenting and speaking and listening online. Discuss them with those convening any seminar where you are presenting. 

I accept and defend the right of teachers to present material to their classes in the ways they choose – assuming they are within the growing set of rules and guidelines set by educational institutions. Similarly, lecturers or speakers should be free to present in ways in which they are comfortable. But be careful that you don’t undermine your ability to engage, educate, and entertain your audience simply by following bad practices set by colleagues who are too cautious or conservative about the issues that might arise from social interaction. Don’t handicap yourself by speaking to an invisible audience, or accept the idea that invisibility makes for engaging online teaching or learning.

Engaging Academia in Cybersecurity Research

Engaging Academia in Cybersecurity Research 

Across most academic fields, researchers are increasingly focused on outreach to relevant practitioner and policy communities. It can sharpen their sense of the key questions but also enable their research to have greater application and impact. In contrast, within the field of cybersecurity, policy and practitioner communities – from government, non-governmental organizations (NGOs) like the World Bank, and business and industry – are more dominant in the production of research. Academic researchers play a relatively less active role. That said, research on cybersecurity could be greatly enhanced if a larger and more multidisciplinary collection of academic researchers could be engaged to focus on issues of cybersecurity and build collaborative relationships with the policy and practitioner communities.

Why is this the case, and what could be done to correct it? 

Courtesy Arthur Berger

The Dynamics Limiting Academia’s Role in Cybersecurity

I am but one of a growing set of multidisciplinary researchers with a focus on cybersecurity. The field is clearly engaging some top researchers and scholars from a variety of fields, evidenced by colleagues and centers at prominent universities, a growing number of journals and publications, and a dizzying number of events and conferences on topics within the field. Stellar academics, such as Professor David Clark at MIT, Professor Sadie Creese at Oxford University, and Bruce Schneier, a Fellow at the Berkman Center at Harvard, are strong examples. I would add Gabriella Coleman, a chaired professor at McGill University, and Professor Patrick Burkart at Texas A&M, to the list, even though they might not identify themselves as cybersecurity researchers. Many others could be added.  

Nevertheless, compared with other fields, cybersecurity research appears to be dominated more by the practitioner and policy communities. Cybersecurity is not a discipline but a multidisciplinary field of study. But it remains less multidisciplinary and more anchored within the computer sciences than some related fields, such as Internet studies as one comparator with which I am familiar. A number of possible explanations for the different multidisciplinary balance of this field come to mind. 

First, it is a relatively new field of academic research. It was preceded by studies of computer security, which were more computer science centric, as they were more focused on technical advances in security systems. The development of shared computing systems, and the Internet in particular, has greatly expanded the range of users and devices linked to computer systems, reaching over 4 billion users in 2020. In many respects, the Internet drove the transition from computer security to cybersecurity research, which is therefore understandably young in relation to other academic fields of study.

Secondly, the concept of cybersecurity carries some of the baggage of its early stages. While the characterisations evoked by concepts are often crude, the term often conjures up images of men in suits employed by large institutions trying to keep young boys out of their systems. My MSU colleague, Ruth Shillair, reminded me of the 1983 movie War Games. It is based around a young hacker getting into the backdoor of a major military computer system in ways that threatened to launch a world war, but which left the audience cheering for the young hacker.

Today, big mainframe computers are less central than are the billions of devices in households and business and industry and governments across the world. Malicious users, rather than a child accidentally entering the backdoor of a military complex, are the norm. Yet cybersecurity carries some of this off-putting imagery from its early days into the present. 

Thirdly, it is an incredibly important field of research for which there is great demand. Many rising academics in the field of cybersecurity are snapped up by business, industry and governmental headhunters for lucrative positions rather than by academia. 

These are only a few of many reasons for the relative lack of a stronger multidisciplinary research community. Whatever initiatives might enhance its multidisciplinary make-up might also bring more academics as well as more academic disciplines into the study of cybersecurity. How could this be changed?

What Needs to Be Done?

First, academics involved with research on cybersecurity need to do more to network among themselves. This is somewhat of a chicken-and-egg problem: when there are relatively few academics in a field, it seems less important to network with one another. However, until researchers come together to better define the field and its priorities for research, it is harder for it to flourish. Similarly, there are so many pulls to work with practitioners and the policy communities in this area that academic collaboration may seem like a distraction. It is not, as it is essential for the field to mature as an academic field of study.

Secondly, the field needs to identify and promote academic research on cybersecurity that addresses big questions with major implications for policy and practice. On this point, some of the research at Oxford’s Global Cyber Security Capacity Centre (GCSCC) has made a difference for nations across the world. For example, the research demonstrates that nations that have enhanced their cybersecurity capacity building efforts have made a serious improvement in the experiences of their Internet users.[1] But this is only one of many examples of work that is meeting needs in this new area of technological and organizational advances.

Thirdly, national governments need to place a greater priority on building this field of academia along with building their own cybersecurity capacities. Arguably, in the long run, a stronger academic field in cybersecurity will help nations advance cybersecurity capacity, such as by creating a larger pool of expertise and thought leadership in this area. 

This would be possible through a number of initiatives, from simply taking a leadership role in identifying the importance of the field to encouraging the public research councils and other funding bodies to consider the development of grant support for multidisciplinary research on cybersecurity.

For example, the UK’s Economic and Social Research Council (ESRC) generated early funding for what became the Programme on Information and Communication Technologies (PICT). The establishment of PICT helped to draw leading researchers, such as the late Roger Silverstone, into the study of the social aspects of information and communication technologies. Such pump-priming helped put the UK in an early strategic international position in research on the societal aspects of the Internet and related digital media. 

What factors are constraining the more rapid and widespread development of this field? What could be done to accelerate and deepen its development?

There are a host of other issues around whether policy makers and practitioners would value collaboration with academics, given that their time scales and methodologies can be so dramatically different.[2] That is for another blog, but in the interim, I’d value your thoughts on whether you agree on the need and approaches to further develop the multidisciplinary study of cybersecurity within academia.

Notes


[1] See: Creese, S., Shillair, R., Bada, M., Reisdorf, B.C., Roberts, T., and Dutton, W. H. (2019), ‘The Cybersecurity Capacity of Nations’, pp. 165-179 in Graham, M., and Dutton, W. H. (eds), Society and the Internet: How Networks of Information and Communication are Changing our Lives, 2nd Edition. Oxford: Oxford University Press.

[2] My thanks to Carolin Weisser Harris for suggesting a focus on this question of why practitioners and policy makers might or might not value collaboration with academia.

Publication of A Research Agenda for Digital Politics

A Research Agenda for Digital Politics 

My most recent edited book, A Research Agenda for Digital Politics, has been published and is available in hardback and electronic forms at: https://www.e-elgar.com/shop/gbp/a-research-agenda-for-digital-politics-9781789903089.html From this site you can look inside the book to review the preface, list of contributors, the table of contents, and my introduction, which includes an outline of the book. In addition, the first chapter by Professor Andrew Chadwick, entitled ‘Four Challenges for the Future of Digital Politics Research’, is free to read on the digital platform Elgaronline, where you will also find the book’s DOI: https://www.elgaronline.com/view/edcoll/9781789903089/9781789903089.xml

Finally, a short leaflet is available on the site, with comments on the book from Professors W. Lance Bennett, Michael X. Delli Carpini, and Laura DeNardis. I was not aware of these comments, with one exception, until today – so I am truly grateful to such stellar figures in the field for contributing their views on this volume.  

Digital politics has been a burgeoning field for years, but with the approach of elections in the US and around the world in the context of a pandemic, Brexit, and breaking cold wars, it could not be more pertinent than today. If you are considering texts for your (online) courses in political communication, media and politics, Internet studies, or digital politics, do take a look at the range and quality of perspectives offered by the contributors to this new book. Provide yourself and your students with valuable insights on issues framed for high quality research. 

List of Contributors:

Nick Anstead, London School of Economics and Political Science; Jay G. Blumler, University of Leeds and University of Maryland; Andrew Chadwick, Loughborough University; Stephen Coleman, University of Leeds; Alexi Drew, King’s College London and Charles University, Prague; Elizabeth Dubois, University of Ottawa; Laleah Fernandez, Michigan State University; Heather Ford, University of Technology Sydney; M. I. Franklin, Goldsmiths, University of London; Paolo Gerbaudo, King’s College London; Dave Karpf, George Washington University;  Leah Lievrouw, University of California, Los Angeles; Wan-Ying Lin, City University of Hong Kong; Florian Martin-Bariteau, University of Ottawa; Declan McDowell-Naylor, Cardiff University; Giles Moss, University of Leeds; Ben O’Loughlin, Royal Holloway, University of London; Patrícia Rossini, University of Liverpool; Volker Schneider, University of Konstanz; Lone Sorensen, University of Huddersfield; Scott Wright, University of Melbourne; Xinzhi Zhang, Hong Kong Baptist University. 

Zoom-bombing the Future of Education

Zoom-bombing the Future of Education

by Bill Dutton and Arnau Erola based on their discussions with Louise Axon, Mary Bispham, Patricia Esteve-Gonzalez, and Marcel Stolz

In the wake of the Coronavirus pandemic, schools and universities across the globe have moved to online education as a substitute rather than a complement for campus-based instruction. While this mode of online learning may be time-limited and is expected to return to campuses and classroom settings once the Covid-19 outbreak subsides, this period could also be an important watershed for the future of education. Put simply, with thousands of courses and classrooms going online, this could usher in key innovations in the technologies and practices of teaching and learning online in ways that change the future of education. 

However, the success of this venture in online learning could be undermined by a variety of challenges. With dramatic moves to online education and a greater reliance on audio, video, and Web conferencing systems, like Zoom, Webex, and Skype, have come unexpected challenges. One particular challenge that has risen in prominence is the effort of malicious users to sabotage classrooms and discussions through what has been called Zoom-bombing (Zoombombing). Some have defined it as ‘gate-crashing tactics during public video conference calls’ that often entail the ‘flooding of Zoom calls with disturbing images’. There are a growing number of examples of courses and meetings that have been bombed in such ways. It seems that most ‘Zoombombers’ join illegitimately, by somehow gaining access to the meeting or classroom details. But a student who is actually enrolled in a class could create similar problems. In either case, it is clear that Zoom-bombing has become an issue for schools and universities, threatening to undermine the vitality of their teaching and their relationships with faculty, students, and alumni.

TheQuint.com

We are involved in research on cybersecurity, and see this as one example, in the educational domain, of how central cybersecurity initiatives can be to successfully using the Internet and related social media. We also believe that this problem of the digital gate-crasher and related issues of malicious users can be addressed effectively by a number of actors. As you will see, it is in part, but not only, a cybersecurity problem. It involves training in the use of online media, awareness of risks, and a respect for the civility of discussion in the classroom, meetings, and online discussions. Unfortunately, given how abrupt the shift to online learning has been, driven by efforts to protect the health of students, staff, faculty, and their networks, there has not been sufficient time to inform and train all faculty and students in the use of what is, to many, a new medium. Nor has there been time to explain the benefits as well as the risks, intended and unintended, such as is the case with digital gate-crashers.

Not a New Phenomenon

From the earliest years of computer-based conferencing systems, issues have arisen over productively managing and leading discussion online. One-to-many lectures by instructors have been refined dramatically over the years, enabling even commercially viable initiatives in online education, such as TED Talks, which actually began in the early 1980s and have been refined since, as well as the live lectures provided by many schools for at-home students.

But the larger promise of online learning is the technical facility for interaction one-to-one, one-to-many, many-to-one, and many-to-many. An early, pioneering computer-mediated conferencing system, called ‘The Emergency Management Information System and Reference Index’ (EMISARI), led to one of the first academic studies of the issues involved in what was called ‘computerized conferencing’ in the mid-1970s (Hiltz and Turoff 1978). Since the 1970s, many have studied the effective use of the Internet and related social and digital media in online learning. It would be impossible to review this work here, but suffice it to say, problems with the classroom and online learning have a long and studied history that can inform and address the issues raised by these new digital gate-crashers.

Actors and Actions

This is not simply a problem for an administrator or a teacher, as online courses and meetings involve a wide array of actors, each of which has particular as well as some shared responsibilities. Here we identify some of the most central actors and some of the actions they can take to address malicious actors in education’s cyberspace.

Recommendations 

There are different issues facing different actors in online education. Initially, we focus on the faculty (generally the conference host) side, providing guidance on essential actions that can be taken to diminish the risks of zoom-bombing the future of education. We will then turn to other actors, including students and administrators.

  • Authentication: as far as possible, limit the connection to specific users by only allowing users authenticated with specific credentials, having a valid and unique link, or possessing an access code. Ideally, many want courses to be open to visitors, but the risks of this are apparent unless the moderator is able to eject malicious users, as discussed below. A pre-registration process for attendees (e.g. via an online ticketing system) could help limit the risk of “trolls” joining while keeping an event open to visitors. 
  • Authorization: limit the technical facilities to which the students or participants in any meeting have access. Keep to the minimum required for the class session. That is, in most circumstances, the instructor should restrict file sharing, chat access, mic holding or video broadcasting if they do not need to use these in the session. This does not prevent students from using chat (interacting with other students) over other media, but it limits disruption of the class. The need to access these resources varies greatly depending on the type of classroom, and it is the responsibility of the instructor or host to grant the permissions required.
  • Monitoring: careful monitoring of the connected participants can help avoid unauthorized connections – the gatecrashers – so the course lead should have access to the list of participants and monitor it routinely. In some cases, virtual classrooms can be locked when no more participants are allowed. (See the last bullet point with respect to stolen accounts.)
  • Moderation: in the same way that participants are monitored, their participation in the form of text, voice, video or shared links or files, should be reviewed. This can be a tedious task, particularly with a large class, but it is an advantage of online courses that instructors can monitor student participation, comments, and gain a better sense of their engagement. That said, it can take some time and it might not be possible during the class. 
  • Policies: Each institution should have adequate policies and reporting mechanisms to deal with offensive, violent and threatening behaviour in the classroom, real or virtual. Actions or words that are judged offensive, or otherwise toxic language, should not necessarily exclude a student’s opinions from a class discussion, but the students should be aware of and try to abide by the institution’s standards and policies. It is also helpful if student participants have the facility to report offensive posts, which instructors can then review, delete or discuss with the individual(s) posting them. 
  • Procedures: procedures need to be in place to deal in a timely manner (quickly) with stolen credentials and participants behaving irresponsibly. That could involve removing classroom access for an offending user and revoking the authorization of the specific credentials, as well as processes for generating new ones in case they are needed.

The above recommendations provide general guidance in securing online classrooms without any specifics on the technology used. Some platforms, such as Zoom, have published their own guidelines for the administrators of online educational initiatives. But here it is useful to identify some of the responsibilities of other actors.
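Before turning to those other actors, a brief illustration may help. The sketch below – a hedged example, not a definitive implementation – shows how a host or administrator might apply several of the recommendations above (authentication, a waiting room, muted entry) when scheduling a class programmatically through Zoom’s REST API. The endpoint and setting names reflect our reading of Zoom’s public API documentation and should be verified against the current reference; restrictions on screen sharing and chat are, in our experience, applied through in-meeting host controls or account-level policies rather than at scheduling time.

```python
import requests  # third-party HTTP client

ZOOM_API = "https://api.zoom.us/v2"
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # obtained via your institution's Zoom OAuth app

def create_secure_class(topic: str, start_time: str) -> dict:
    """Schedule a class with restrictive defaults: authenticated participants only,
    a waiting room, muted entry, and no joining before the host arrives.
    Setting names follow Zoom's create-meeting API as we understand it; verify them
    against the current API reference before relying on this sketch."""
    body = {
        "topic": topic,
        "type": 2,                           # a scheduled (non-recurring) meeting
        "start_time": start_time,            # e.g. "2021-10-01T10:00:00Z"
        "settings": {
            "meeting_authentication": True,  # only signed-in, authorised users can join
            "waiting_room": True,            # host admits participants individually
            "join_before_host": False,
            "mute_upon_entry": True,
        },
    }
    resp = requests.post(
        f"{ZOOM_API}/users/me/meetings",
        json=body,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()  # includes the join URL, to be shared only with enrolled students
```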

Students need to understand how the principles of behaviour in the classroom translate into the online, virtual classroom. The Internet is not a ‘Wild West’, and the rules and etiquette of the classroom need to be followed for effective and productive use of everyone’s time. Students should have the ability to express their opinions and interpretations of course material, but this would be impossible without following rules of appropriate behaviour and what might be called ‘rules of order’, such as raising your hand, which can be done in the virtual classroom (Dutton 1996). Also, just as it would be wrong to give one’s library card to another person, when credentials or links are provided to enable authentic students to join a class, it is the student’s responsibility to keep these links to themselves and not share them with individuals not legitimately enrolled. These issues need to be discussed with students and possibly linked to the syllabus of any online course.

Administrators and top managers also have a responsibility to ensure that faculty and students have access to training on the technologies and best practices of online learning. It is still the case that some students are better equipped in the online setting than their instructors, but instructors can no longer simply avoid the Internet. It is their responsibility to learn how to manage their classroom, and not blame the technology, but it is the institution’s responsibility to ensure that appropriate training is available to those who need it. Finally, administrations need to ensure that IT staff expertise is as accessible as possible to any instructor who needs assistance with managing their online offerings.

Points of Conclusion and Discussion

On Zoom, and other online learning platforms, instructors may well have more rather than less control of participation in the classroom, even if virtual, such as in easily excluding or muting a participant, but that has its added responsibilities. For example, the classroom is generally viewed as a private space for the instructors and students to interact and learn through candid and open communication about the topics of a course. Some level of toxicity, for example, should not justify expelling a participant. However, this is a serious judgement call for the instructor. Balancing the concerns over freedom of expression, ethical conduct, and a healthy learning environment is a challenge for administrators, students and teachers, but approaches such as those highlighted above are available to manage lectures and discussions in the online environment. Zoom-bombing can be addressed without diminishing online educational initiatives. 

We would greatly welcome your comments or criticisms in addressing this problem. 

References

Dutton, W. H. (1996), ‘Network Rules of Order: Regulating Speech in Public Electronic Fora,’ Media, Culture, and Society, 18 (2), 269-90.

Hiltz, S. R., and Turoff, M. (1978), The Network Nation: Human Communication via Computer. Reading, Massachusetts: Addison-Wesley Publishing. 

Jettison the Digital Nanny State: Digitally Augment Users

My last blog argued that the UK should stop moving down the road towards a duty of care regime, as this will lead Britain to become what might be called a ‘Digital Nanny State’, undermining the privacy and freedom of expression of all users. An encouraging number of readers agreed with my concerns, but some asked whether there was an alternative solution.

Before offering my suggestions, I must say that I do not see any solutions outlined by the duty of care regime. Essentially, a ‘duty of care’ approach[1], as outlined in the Cyber Harms White Paper, would delegate solutions to the big tech companies, threatening top executives with huge fines or criminal charges if they fail to stop or address the harms.[2] That said, I assume that any ‘solutions’ would involve major breaches of the privacy and freedom of expression of Internet users across Britain, given that surveillance and content controls would almost certainly be necessary to this approach. The remedy would be draconian and worse than the problems to be addressed.[3]

Nevertheless, it is fair to ask how the problems raised by the lists of cyber harms could be addressed. Let me outline elements of a more viable approach. 

Move Away from the Concept of Cyber Harms

Under the umbrella of cyber harms are lumped a wide range of problems that have little in common beyond being potential problems for some Internet users. Looked at with any care, it is impossible to see them as similar in origin or solution. For example, disinformation is quite different from sexting. They involve different kinds of problems, for different people, imposed by different actors. Trolling is a fundamentally different set of issues from the promotion of female genital mutilation (FGM). The only common denominator is that any of these actions might result in some harm at some level for some individuals or groups – but they are so different that it violates common sense and logic to put them into the same scheme. 

Moreover, many of these problems are not harms per se, but actions that could be harmful – maybe even leading to many harms at many different levels, from psychological to physical. Step one in any reasonable approach would be to decompose this list of cyber harms into specific problems in order to think through how each problem could be addressed. Graham Smith captures this problem in noting that the mishmash of cyber harms might be better labelled ‘users behaving badly’.[4] The authors of the White Paper did not want a ‘fragmented’ array of problems, but the reality is that there are distinctly different problems that need to be addressed in different ways, in different contexts, by different people. For example, others have argued for looking at cyber harms from the perspective of human rights law. But each problem needs to be addressed on its own terms.

Remember that Technologies have Dual Effects

Ithiel de Sola Pool pointed out how almost any negative impact of the telephone could be said to have exactly the opposite impact as well – ‘dual effects’.[5] For example, a telephone in one’s home could undermine your privacy by interrupting the peace and quiet of the household, but it could also provide more privacy compared to people coming to your door. A computer could be used to enhance the efficiency of an organization, but if poorly designed and implemented, the same technology could undermine its efficiency. In short, technologies do not have inherent, deterministic effects, as their implications can be shaped by how we design, use and govern them in particular contexts. 

This is important here because the discussion of cyber harms is occurring in a dystopian climate of opinion. Journalists, politicians, and academics are jumping on a dystopian bandwagon that is as misleading as the utopian bandwagon of the Arab Spring, when all thought the Internet would democratize the world. Both the utopian and dystopian perspectives are misleading, deterministic viewpoints that are unhelpful for policy and practice. 

Recognise: Cyber Space is not the Wild West

Many of the cyber harms listed in the White Paper are activities that are illegal. It seems silly to remind the Home Office in the UK that what is illegal in the physical world is also illegal online in so-called cyber space or our virtual world. Given that financial fraud or selling drugs is illegal, then it is illegal online, and is a matter for law enforcement. The difference is that activities online do not always respect the same boundaries as activities in the real world of jurisdictions, law enforcement, and the courts. But this does not make the activities any less illegal, only more jurisdictionally complex to police and enforce. This does not require new law but better approaches to connecting and coordinating law enforcement across geographical spaces and places. Law enforcement agencies can request information from Internet platforms, but they probably should not outsource law enforcement, as suggested by the cyber harms framework. Cyber space is not the “Wild West” and never was.

Legal, but Potentially Harmful, Activities Can be Managed

The White Paper lists many activities that are not necessarily illegal – in fact, some actions are not illegal but potentially harmful. Cyberbullying is one example. Someone bullying another person is potentially harmful, but not necessarily so. It is sometimes possible to ignore or stand up to a bully and find that this actually raises one’s self-esteem and sense of efficacy. A bully on the playground can be stopped by a person standing up to him or her, by another person intervening, or by a supervisor on the playground calling a stop to it. If an individual repeatedly bullies, or actually harms another person, then they face penalties in the context of that activity, such as the school or workplace. In many ways, the record left by cyberbullying can be useful in proving that a particular actor bullied another person. 

Many other examples could be developed to show how each problem has unique aspects and requires different networks of actors to be involved in managing or mitigating any harms. Many problems do not involve malicious actors, but some do. Many occur in households, others in schools, and workplaces, and anywhere at any time. The actors, problems, and contexts matter, and need to be considered in addressing these issues. 

Augment User Intelligence to Move Regulation Closer to Home

Many are beginning to address the hype surrounding artificial intelligence (AI) as a technological fix.[6] But in the spirit of Douglas Engelbart in the 1950s, computers and the Internet can be designed to ‘augment’ human intelligence, and AI along with other tools has the potential to augment the choices of Internet users, as so widely experienced in the use of search. While technically and socially challenging, it is possible, and an innovative challenge, to develop approaches that use digital technology to move regulation closer to the users: with content regulation, for example, being enabled by networked individuals, households, schools, businesses, and governmental organizations, as opposed to moving regulation up to big tech companies or governmental regulators. 

Efforts in the 1990s to develop a violence-chip (V-chip) for televisions provide an early example of this approach. It was designed to allow parents to set controls to prevent young children from watching adult programming, moving content controls closer to the viewers and, theoretically, the parents. (Children were often the only members of the household who knew how to use the V-chip.) The idea was good, its implementation limited. 

Cable television services often enable the use of a child lock to reduce children’s access to adult programming. Video streaming services and age verification systems have had problems but remain potential ways for a household to make services safer for children. Mobile Internet and video streaming services have apps for kids. Increasingly, it should be possible to design more ways for users and households to control access to content, addressing many of the problems raised by the cyber harms framework, such as access to violent content, which users themselves can filter.

With emerging approaches to AI, for example, it could be possible to provide not simply warning flags, but information that users could act on to decide whether to block or filter online content, such as unfriending a social media user. With respect to email, while such tools are in their infancy, there is the potential for AI to be used to identify emails that reflect bullying behavior. So Internet users will be increasingly able to detect individuals or messages that are toxic or malicious before they even see them, much as SPAM and junk mail can disappear before ever being seen by the user.[7] Mobile apps, digital media, intelligent home hubs and routers, and computer software generally could be designed and used to enable users to address their personal and household concerns. 
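To make that idea concrete, here is a minimal, purely illustrative sketch of user-side filtering, in which messages judged toxic are held back before the user sees them, much as spam is. The crude keyword heuristic stands in for the kind of trained classifier or AI service described above; the blocklist, function names, and threshold are all hypothetical.

```python
# A minimal sketch of user-side filtering: messages judged toxic are held
# back before the user sees them, much as spam is. The keyword heuristic
# below is a stand-in for a trained classifier; all names are hypothetical.
BLOCKLIST = {"idiot", "loser", "nobody likes you"}  # hypothetical examples

def toxicity_score(message: str) -> float:
    """Return a rough 0-1 score; a real system would use a trained model."""
    text = message.lower()
    hits = sum(1 for phrase in BLOCKLIST if phrase in text)
    return min(1.0, hits / len(BLOCKLIST))

def filter_inbox(messages: list[str], threshold: float = 0.3) -> tuple[list[str], list[str]]:
    """Split messages into those shown to the user and those held back."""
    shown, held = [], []
    for message in messages:
        (held if toxicity_score(message) >= threshold else shown).append(message)
    return shown, held

shown, held = filter_inbox(["See you at the seminar.", "You are an idiot."])
```

The point of the sketch is simply that the filtering decision sits with the user’s own software, which can be tuned or switched off, rather than with a central regulator.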

One drawback might be the ways in which digital divides and differences in skills could enable the most digitally empowered households to have more sophisticated control over content and services. This will create a need for public services to help households without the skills ‘in-house’ to grapple with emerging technology. However, this could be a major aspect of the educational and awareness training that is one valuable recommendation of the Cyber Harms White Paper. Some households might create a personalized and unique set of controls over content, while others might simply choose from a number of set profiles that can be constantly updated, much like anti-virus software and SPAM filters that permit users to adjust the severity of filtering. In the future, it may be as easy to avoid unwanted content as it now is to avoid SPAM and junk mail. 
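Again purely as an illustration of what such ‘set profiles’ might look like, and not a description of any existing product, a household might either tune its own thresholds or pick a preset, much as one adjusts the severity of a spam filter. The category names, presets, and values below are hypothetical.

```python
# A sketch of the 'set profiles' idea: a household tunes its own thresholds
# or picks a preset, much as one adjusts the severity of a spam filter.
# Category names, presets, and values are all hypothetical.
from dataclasses import dataclass

@dataclass
class ContentProfile:
    violence: float  # block an item once its score reaches this threshold
    adult: float
    bullying: float

PRESETS = {
    "open":     ContentProfile(violence=0.9, adult=0.9, bullying=0.9),
    "moderate": ContentProfile(violence=0.6, adult=0.5, bullying=0.5),
    "strict":   ContentProfile(violence=0.3, adult=0.2, bullying=0.3),
}

def should_block(scores: dict[str, float], profile: ContentProfile) -> bool:
    """Block an item if any category score meets or exceeds the profile's threshold."""
    return (
        scores.get("violence", 0.0) >= profile.violence
        or scores.get("adult", 0.0) >= profile.adult
        or scores.get("bullying", 0.0) >= profile.bullying
    )

household = PRESETS["moderate"]
blocked = should_block({"violence": 0.7, "adult": 0.1, "bullying": 0.0}, household)
```

A household choosing the ‘strict’ preset would have more content held back than one choosing ‘open’, and the presets themselves could be updated over time in the way anti-virus definitions are.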

Disinformation provides another example of a problem that can be addressed by existing technologies, like the use of multiple media sources and search technologies. Our own research found that most Internet users consulted four or more sources of information about politics, for example, and that, online alone, they would consult an average of four different sources.[8] These patterns of search meant that very few users are likely to be trapped in a filter bubble or echo chamber, albeit still subject to the selective perception bias that no technology can cure. 


My basic argument is not to panic in this dystopian climate of opinion, and to consider the following:

  • Jettison the duty of care regime. It will create problems that are disproportionately greater than the problems to be addressed.
  • Jettison the artificial category of cyber harms. It puts apples and oranges in the same basket in very unhelpful ways, mixing legal and illegal activities, and activities that are inherently harmful, such as the promotion of FGM, with activities that can be handled by a variety of actors and mitigating actions. 
  • Augment the intelligence of users. Push regulation down to users – enable them to regulate content seen by themselves or for their children. 

If we get rid of this cyber harm umbrella and look at each ‘harm’ as a unique problem, with different actors, contexts, and solutions, then they can each be dealt with through more uniquely appropriate mechanisms. 

That would be my suggestion. It is not as simple as asking others to just ‘take care of this’ or ‘stop this’, but there simply is no magic wand or silver bullet that the big tech companies have at their command to accomplish this. Sooner or later, each problem needs to be addressed by often different but appropriate sets of actors, ranging from children, parents, and Internet users to schools, businesses and governmental organizations, law enforcement, and Internet platforms. The silver lining might be that, as the Internet and its benefits become ever more embedded in everyday life and work, and as digital media become ever more critical, we will routinely consider the potential problems as well as the benefits of every innovation made in the design, use, and governance of the Internet in our lives and work. All should aim to further empower users to use, control, and network with others to control the Internet and related digital media, and not to be controlled by a nanny state.  

Further Reading

Useful and broad overviews of the problems with the cyber harms White Paper are provided by Gian Volpicelli in Wired[9] and Graham Smith[10], along with many contributions to the Cyber Harms White Paper consultation.


[1] A solicitor, Graham Smith, has argued quite authoritatively that the White Paper actually “abandons the principles underpinning existing duties of care”, see his paper, ‘Online Harms White Paper Consultation – Response to Consultation’, 28 June 2019, posted on his Twitter feed:  https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html

[2] https://www.bmmagazine.co.uk/news/tech-bosses-could-face-criminal-proceedings-if-they-fail-to-protect-users/

[3] Here I found agreement with the views of Paul Bernal’s blog, ‘Response to Online Harms White Paper’, 3 July 2019: https://paulbernal.wordpress.com/2019/07/03/response-to-online-harms-white-paper/ Also, see his book, The Internet, Warts and All, Cambridge: Cambridge University Press, 2018.

[4] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

[5] Ithiel de Sola Pool (1983), Forecasting the Telephone: A Retrospective Technology Assessment. Norwood, NJ: Ablex. 

[6] See, for example, Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’, October 2019, forthcoming in the European Journal of Risk Regulation, available at: https://osf.io/preprints/lawarxiv/dvx4f/

[7] https://www.theguardian.com/technology/2020/jan/03/metoobots-scientists-develop-ai-detect-harassment

[8] See Dutton, W. H. and Fernandez, L., ‘How Susceptible are Internet Users’, Intermedia, Vol. 46, No. 4, December/January 2019.

[9] https://www.wired.co.uk/article/online-harms-white-paper-uk-analysis

[10] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

Britain’s Digital Nanny State

The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. Perhaps because discussion of this new approach to Internet regulation occurred in the midst of the public’s focus on Brexit, the initiative has not really been exposed to critical scrutiny. Ironically, its implementation would do incredible harm to the human rights of the public at large, albeit in the name of curbing the use of the Internet by malicious users, such as terrorists and pedophiles. Hopefully, it is not too late to reconsider this cyber harms framework. 

The problems with the government’s approach were covered well by Gian Volpicelli in an article in Wired UK. I presented my own concerns in a summary to the consumer forum for communications in June of 2019.[1] The problems with this approach were so apparent that I could not imagine this idea making its way into the Queen’s Speech as part of the legislative programme for the newly elected Conservative Government. It has, so let me briefly outline my concerns. 

Robert Huntington, The Nanny State, book cover

The aim has been to find a way to stop illegal or ‘unacceptable’ content and activity online. The problem has been finding a way to regulate the Internet and social media that could accomplish this aim without violating the privacy and freedom of all digital citizens – networked individuals, such as yourself. The big idea has been to place a duty of care responsibility on the social media companies, the intermediaries between those who use the Internet. Generally, Internet companies, like telephone companies in the past, would not be held responsible for what their users do. Their liability would be very limited. Imagine a phone company being sued because a pedophile used the phone. The phone company would have to surveil all telephone use to catch offenses. Likewise, Internet intermediaries would need to know what everyone is using the Internet and social media for in order to stop illegal or ‘unacceptable’ behavior. This is one reason why many commentators have referred to this as a draconian initiative. 

So, what are the possible harms? Before enumerating the harms it does consider, note that the initiative does not deal with harms covered by other legislation or regulators, such as privacy, which is the responsibility of the Information Commissioner’s Office (ICO). Ironically, one of the major harms of this initiative will be to the privacy of individual Internet users. Where is the ICO?

The harms cited as within the scope of this cyber harms initiative included: child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; harassment and cyberstalking;  hate crime; encouraging and assisting suicide; incitement to violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18). This is only a start, as there are cyber harms with ‘less clear’ definitions, including: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM); and underage exposure to legal content, such as children accessing pornography, and spending excessive time online – screen time.  Clearly, this is a huge range of possible harms, and the list can be expanded over time, as new harms are discovered. 

Take one harm, for example, disinformation. Seriously, do you want the regulator, or the social media companies to judge what is disinformation? This would be ludicrous. Internet companies are not public service broadcasters, even though many would like them to behave as if they were. 

The idea is that those companies that allow users to share or discover ‘user-generated content or interact with each other online’ will have ‘a statutory duty of care’ to be responsible for the safety of their users and prevent them from suffering these harms. If they fail, the regulator can take action against the companies, such as fining the social media executives, or threatening them with criminal prosecution.[2]

The White Paper also recommended several technical initiatives, such as to flag suspicious content, and educational initiatives, such as in online media literacy. But the duty of care responsibility is the key and most problematic issue. 

Specifically, the cyber harms initiative poses the following risks: 

  1. Covering an overly broad and open-ended range of cyber harms;
  2. Requiring surveillance in order to police this duty that could undermine privacy of all users;
  3. Incentivizing companies to over-regulate content & activity, resulting in more restrictions on anonymity, speech, and chilling effects on freedom of expression;
  4. Generating more fear, and panic among the general public, undermining adoption & use of the Internet and widening digital divides;
  5. Necessitating an invasive monitoring of content, facing a volume of instances that is an order of magnitude beyond traditional media and telecom, such as 300 hours of video posted on YouTube every minute;
  6. Essentially targeting American tech giants (no British companies), and even suggesting subsidies for British companies, which will be viewed as protectionist, leaving Britain as a virtual backwater of a more global Internet; 
  7. Increasing the fragmentation of Internet regulators: a new regulator, Ofcom, new consumer ‘champion’, ICO, or more?

Notwithstanding these risks, this push is finding support for a variety of reasons. One general driver has been the rise of a dystopian climate of opinion about the Internet and social media over the last decade. This has been exacerbated by concerns over child protection and elections in the US, across Europe, such as with Cambridge Analytica, and with Brexit that created the spectre of foreign interference. Also, Europe and the UK have not developed Internet and social media companies comparable to the so-called big nine of the US and China. (While the UK has a strong online game industry, this industry is not mentioned at all in the White Paper, except as a target of subsidies.) The Internet and social media companies are viewed as foreign, and primarily American, companies that are politically popular to target. In this context, the platformization of the Internet and social media has been a gift to regulators — the potential for companies to police a large proportion of traffic, providing a way forward for politicians and regulators to ‘do something’. But at what costs? 

The public has valid complaints and concerns over instances of online harms. Politicians have not known what to do, but now have been led to believe they can simply turn to the companies and command them to stop cyber harms from occurring, or they will suffer the consequences in the way of executives facing steep fines or criminal penalties. But this carries huge risks, primarily in leading to over-regulation and inappropriate curtailing of the privacy and freedom of expression of all digital citizens across the UK. 

You only need to look at China to see how this model works. In China, an Internet or social media company could lose its license overnight if it allowed users to cross red lines determined by the government. And this fear has unsurprisingly led to over-regulation by these companies. Thus, the central government of China can count on private firms to strictly regulate Internet content and use. A similar outcome will occur in Britain, making it not the safest place to be online, but a place you would not want to be online, with your content and even your screen time under surveillance. User-generated content will be dangerous. Broadcast news and entertainment will be safe. Let the public watch movies. 

In conclusion, while I am an American, I don’t think this is simply an American obsession with freedom of expression. This right is not absolute even in the USA. Internet users across the world value their ability to ask questions, voice concerns, and use online digital media to access information, people, and services they like without fear of surveillance.[3] The Internet can be a technology of freedom, as Ithiel de Sola Pool argued, in countries that support freedom of expression and personal privacy. If Britons decide to ask the government and regulators to restrict their use of the Internet and social media – for their own good – then they should support this framework for an e-nanny, or digital-nanny, state. But the implications for Britain will be real cyber harms resulting from this duty of care framework. 


[1] A link to my slides for this presentation is here: https://www.slideshare.net/WHDutton/online-harms-white-paper-april-2019-bill-dutton?qid=5ea724d0-7b80-4e27-bfe0-545bdbd13b93&v=&b=&from_search=1

[2] https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7

[3] Dutton, W. H., Law, G., Bolsover, G., and Dutta, S. (2013, released 2014) The Internet Trust Bubble: Global Values, Beliefs and Practices. NY: World Economic Forum.