Zoom-bombing the Future of Education

by Bill Dutton and Arnau Erola based on their discussions with Louise Axon, Mary Bispham, Patricia Esteve-Gonzalez, and Marcel Stolz

In the wake of the Coronavirus pandemic, schools and universities across the globe have moved to online education as a substitute rather than a complement for campus-based instruction. While this shift may be time-limited, with instruction expected to return to campuses and classroom settings once the Covid-19 outbreak subsides, this period could also be an important watershed for the future of education. Put simply, with thousands of courses and classrooms going online, this could usher in key innovations in the technologies and practices of teaching and learning online in ways that change the future of education.

However, the success of this venture in online learning could be undermined by a variety of challenges. With dramatic moves to online education and a greater reliance on audio, video and Web conferencing systems, like Zoom, Webex and Skype, have come unexpected challenges. One challenge that has risen in prominence is the effort of malicious users to sabotage classrooms and discussions, notably what has been called Zoom-bombing (or Zoombombing). Some have defined it as ‘gate-crashing tactics during public video conference calls’ that often entail the ‘flooding of Zoom calls with disturbing images’. There are a growing number of examples of courses and meetings that have been bombed in such ways. It seems that most ‘Zoombombers’ join illegitimately, by somehow gaining access to the meeting or classroom details. But a student who is actually enrolled in a class could create similar problems. In either case, it is clear that Zoom-bombing has become an issue for schools and universities, threatening to undermine the vitality of their teaching and their relationships with faculty, students, and alumni.

TheQuint.com

We are involved in research on cybersecurity, and see this as one example, in the educational domain, of how central cybersecurity initiatives can be to successfully using the Internet and related social media. We also believe that this problem of the digital gate-crasher and related issues of malicious users can be addressed effectively by a number of actors. As you will see, it is in part, but not only, a cybersecurity problem. It involves training in the use of online media, awareness of risks, and a respect for the civility of discussion in the classroom, meetings, and online discussions. Unfortunately, given how abrupt the shift to online learning has been, driven by efforts to protect the health of students, staff, faculty, and their networks, there has not been sufficient time to inform and train all faculty and students in the use of what is, to many, a new medium. Nor has there been time to explain the benefits as well as the risks, intended and unintended, such as the digital gate-crasher.

Not a New Phenomenon

From the earliest years of computer-based conferencing systems, issues have arisen over productively managing and leading discussion online. One-to-many lectures by instructors have been refined dramatically over the years, enabling even commercially viable initiatives in online education, such as TED Talks, which actually began in the early 1980s and have been refined since, as well as live lectures, provided by many schools for at-home students.

But the larger promise of online learning is the technical facility for interaction one-to-one, one-to-many, many-to-one, and many-to-many. An early, pioneering computer-mediated conferencing system, called ‘The Emergency Management Information System and Reference Index’ (EMISARI), led to one of the first academic studies of the issues involved in what was called ‘computerized conferencing’ in the mid-1970s (Hiltz and Turoff 1978). Since the 1970s, many have studied the effective use of the Internet and related social and digital media in online learning. It would be impossible to review this work here, but suffice it to say, problems with managing the classroom and online learning have a long and studied history that can inform and address the issues raised by these new digital gate-crashers.

Actors and Actions

This is not simply a problem for an administrator or a teacher, as online courses and meetings involve a wide array of actors, each of which has particular as well as some shared responsibilities. Here we identify some of the most central actors and some of the actions they can take to address malicious actors in education’s cyberspace.

Recommendations 

There are different issues facing different actors in online education. Initially, we focus on the faculty (generally the conference host) side, providing guidance on essential actions that can be taken to diminish the risks of Zoom-bombing the future of education; a brief illustrative sketch follows the list. We will then turn to other actors, including students and administrators.

  • Authentication: as far as possible, restrict the connection to specific users by only allowing those authenticated with specific credentials, holding a valid and unique link, or possessing an access code. Ideally, many want courses to be open to visitors, but the risks of this are apparent unless the moderator is able to eject malicious users, as discussed below. A pre-registration process for attendees (e.g. via an online ticketing system) could help limit the risk of “trolls” joining while keeping an event open to visitors.
  • Authorization: limit the technical facilities to which students or participants in any meeting have access, keeping to the minimum required for the class session. That is, in most circumstances, the instructor should restrict file sharing, chat access, microphone use or video broadcasting if these are not needed in the session. This does not prevent students from using chat (interacting with other students) over other media, but it limits disruption of the class. The need to access these resources varies greatly depending on the type of classroom, and it is the responsibility of the instructor or host to grant the permissions required.
  • Monitoring: careful monitoring of the connected participants can help avoid unauthorized connections – the gate-crashers – so the course lead should have access to the list of participants and monitor it routinely. In some cases, virtual classrooms can be locked once no more participants are expected. (See the last bullet point with respect to stolen accounts.)
  • Moderation: in the same way that participants are monitored, their participation in the form of text, voice, video or shared links or files should be reviewed. This can be a tedious task, particularly with a large class, but it is an advantage of online courses that instructors can monitor student participation and comments, and gain a better sense of their engagement. That said, it can take some time and it might not be possible during the class.
  • Policies: each institution should have adequate policies and reporting mechanisms to deal with offensive, violent and threatening behaviour in the classroom, real or virtual. Actions or words judged offensive, or otherwise toxic language, should not necessarily exclude a student’s opinions from a class discussion, but students should be aware of and try to abide by the institution’s standards and policies. It is also helpful if student participants have the facility to report offensive posts, which instructors can then review, delete or discuss with the individual(s) posting them.
  • Procedures: procedures need to be in place to deal in a timely manner with stolen credentials and participants behaving irresponsibly. That could involve removing an offending user’s classroom access and revoking the compromised credentials, as well as processes for generating new ones when needed.
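To make these recommendations concrete, here is a minimal, platform-agnostic sketch in Python of how a host’s session settings and procedures might be modelled. It is purely illustrative: the class, fields, and method names are our own inventions, not the API of Zoom, Webex, or any other platform, where the equivalent toggles are exposed through settings menus and administrative consoles.

    from dataclasses import dataclass, field

    @dataclass
    class ClassroomSession:
        """Hypothetical session settings mirroring the recommendations above."""
        require_authentication: bool = True   # Authentication: credentialed users only
        allow_file_sharing: bool = False      # Authorization: grant only the
        allow_chat: bool = False              # facilities the session needs
        allow_unmute: bool = False
        locked: bool = False                  # Monitoring: lock once all have arrived
        roster: set = field(default_factory=set)        # enrolled students
        participants: set = field(default_factory=set)  # currently connected

        def admit(self, user_id: str) -> bool:
            """Admit a participant only if the session is open and they are enrolled."""
            if self.locked or user_id not in self.roster:
                return False
            self.participants.add(user_id)
            return True

        def eject(self, user_id: str) -> None:
            """Procedures: remove an offending user and revoke their credentials."""
            self.participants.discard(user_id)
            self.roster.discard(user_id)  # new credentials can be issued if needed

The design point is simply that admission, permissions, and removal are explicit decisions of the host, not defaults left to the platform.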

The above recommendations provide general guidance on securing online classrooms without depending on the specifics of the technology used. Some platforms, such as Zoom, have published their own guidelines for the administrators of online educational initiatives. But here it is useful to identify some of the responsibilities of other actors.

Students need to understand how the principles of behaviour in the classroom translate into the online, virtual classroom. The Internet is not a ‘Wild West’, and the rules and etiquette of the classroom need to be followed for effective and productive use of everyone’s time. Students should have the ability to express their opinions and interpretations of course material, but this would be impossible without following rules of appropriate behaviour and what might be called ‘rules of order’, such as raising your hand, which can be done in the virtual classroom (Dutton 1996). Also, just as it would be wrong to give one’s library card to another person, when credentials or links are provided to enable authentic students to join a class, it is the student’s responsibility to keep them to themselves, and not share them with individuals not legitimately enrolled. These issues need to be discussed with students and possibly linked to the syllabus of any online course.

Administrators and top managers also have a responsibility to ensure that faculty and students have access to training on the technologies and best practices of online learning. It is still the case that some students are better equipped in the online setting than their instructors, but instructors can no longer simply avoid the Internet. It is their responsibility to learn how to manage their classroom, and not to blame the technology, but it is the institution’s responsibility to ensure that appropriate training is available to those who need it. Finally, administrations need to ensure that IT staff expertise is as accessible as possible to any instructor who needs assistance with managing their online offerings.

Points of Conclusion and Discussion

On Zoom and other online learning platforms, instructors may well have more rather than less control of participation in the classroom, even if virtual, such as in easily excluding or muting a participant, but that control brings added responsibilities. For example, the classroom is generally viewed as a private space for instructors and students to interact and learn through candid and open communication about the topics of a course. Some level of toxicity, for example, should not by itself justify expelling a participant; this is a serious judgement call for the instructor. Balancing concerns over freedom of expression, ethical conduct, and a healthy learning environment is a challenge for administrators, students and teachers, but approaches such as those highlighted above are available to manage lectures and discussions in the online environment. Zoom-bombing can be addressed without diminishing online educational initiatives.

We would greatly welcome your comments or criticisms in addressing this problem. 

References

Dutton, W. H. (1996), ‘Network Rules of Order: Regulating Speech in Public Electronic Fora,’ Media, Culture, and Society, 18 (2), 269-90.

Hiltz, S. R., and Turoff, M. (1978), The Network Nation: Human Communication via Computer. Reading, Massachusetts: Addison-Wesley Publishing.

How the #Infodemic is being Tackled

The fight against conspiracy theories and other fake news about the coronavirus crisis is receiving more help from social media and other tech platforms, as a number of thought leaders have argued.[1] However, in my opinion, a more important factor has been more successful outreach by governmental, industry, and academic researchers. Too often, the research community has been complacent about getting the results of its research to opinion leaders and the broader public. Years ago, I argued that too many scientists held a ‘trickle down’ theory of information dissemination:[2] once they publish their research, their job is done, and others will read and disseminate their findings.

Even today, too many researchers and scientists are complacent about outreach. They are too focused on publication and communication with their peers and see outreach as someone else’s job. The coronavirus crisis has demonstrated that governments and mainstream, leading researchers can get their messages out if they work hard to do so. In the UK, the Prime Minister’s TV address and multiple press conferences have been very useful – the address reaching 27 million viewers in the UK, becoming one of the ‘most watched TV programmes ever’, according to The Guardian.[3] In addition, the government distributed a text message to people across the UK about its rules during the crisis. And leading scientists have been explaining their findings, research, and models to the public, with the support of broadcasters and social media.

If scientists and other researchers are complacent, they can surrender the conversation to creative and motivated conspiracy theorists and fake news outlets. In the case of Covid-19, it seems that a major push by the scientific community of researchers, governmental experts, and politicians has shown that reputable sources can be heard over and amongst the crowd of rumors and less authoritative information. Rather than try to censor or suppress social media, we need to step up efforts by mainstream scientific communities to reach out to the public and political opinion leaders. No more complacency. It should not take a global pandemic to see that this can and should be done.


[1] Marietje Schaake (2020), ‘Now we know Big Tech can tackle the ‘infodemic’ of fake news’, Financial Times, 25 March: p. 23.

[2] Dutton, W. (1994), ‘Trickle-Down Social Science: A Personal Perspective,’ Social Sciences, 22, 2.

[3] Jim Waterson (2020), ‘Boris Johnson’s Covid-19 address is one of the most-watched TV programmes ever’, The Guardian, 24 March: https://www.theguardian.com/tv-and-radio/2020/mar/24/boris-johnsons-covid-19-address-is-one-of-most-watched-tv-programmes-ever

Women and the Web

News reports today cite the inventor of the Web, Sir Tim Berners-Lee, as arguing that the Web “is not working for women and girls”. Tim Berners-Lee is a hero to all of us involved in the study and use of the Internet, Web, and related information and communication technologies. Clearly, many women and girls might well ‘experience violence online, including sexual harassment and threatening messages’. This is a serious problem, but it should not go unnoticed that the Internet and Web have been remarkably gender neutral with respect to access.

In fact, women and girls access and use the Internet, Web and social media at about the same level as men and boys. There are some nations in which the use of the Internet and related ICTs is dramatically lower for women and girls, but in Britain, the US, and most high-income nations, digital divides are less related to gender than to such factors as income, education, and age.  This speaks volumes about the value of these media to women and girls, and this should not be lost in focusing on problematic and sometimes harmful aspects of access to content on the Web and related media. 

Below is one example of use of the Internet by gender in Britain in 2019, which shows that women are more likely to be next generation users (using three or more devices, one of which is mobile) and less likely to be non-users:

The full report from which this is drawn is available online here.

Poster-first Presentations: The Rise of Poster Sessions in Academic Research

Times have changed. In the early years of my career as an academic, the poster session was a sort of second-class offering for presenting at an academic conference. That is no longer the case. Newer generations of academics are trained and attuned to creating posters and infographics to explain and communicate their work. In many cases, it seems that the poster and poster sessions are the preferred mode of presentation, compared to sitting on a panel or making a traditional presentation of an academic paper, which is often a set of slides that could be incorporated into a poster.

Courtesy of Forbes.com

Anecdotally, I have seen the rising prominence of poster sessions across a wide range of academic conferences I’ve attended over the years, in communication, political science, computer science, and communication policy, such as TPRC. For example, it is increasingly common for a conference time slot to be devoted to poster sessions that do not compete with other presentations. I can also see a leap in the sophistication and quality of visualization evident in poster sessions. More software, templates, training, and guidelines are being developed to refine posters in an increasingly competitive field.

Younger academics are more attuned to the creation of posters, and I am sure they will continue to develop them as they rise in the academic ranks. I think it is more of a cohort issue than a status issue in academia. But think of the added value of poster sessions to the presenters and their audiences.

From the presenter’s perspective, rather than have one shot to stand in front of a large audience to formally present a paper, they can have multiple opportunities to present the same material to smaller groups or even a single individual. All presentations help you refine your ideas and the logic of your argument, so multiple iterations are even more beneficial. And aware presenters can tailor their presentation to the particular interests and questions of the specific audience they have at the moment. It is wonderful when a member of the audience introduces themselves to you after a panel, but you can introduce yourself to many more individuals and network in more effective ways in smaller sessions.

From the audience’s perspective, everyone has been in an academic presentation that did not meet their expectations. Perhaps they misunderstood the title, or came for another paper, and were polite enough to listen to others. But in a poster session, audiences stroll through rows of posters and are able to locate particular topics and presentations of genuine interest. Moreover, some serendipity, finding interest in a topic you had not previously considered, is far more likely. Attendees can spend a few or many minutes not only listening but discussing the topic with the presenter. It is truly an efficient as well as an effective presentational style.

Shame on me for not proposing a poster yet in my career. But I am not so blind that I cannot see that the poster has risen as a medium for academic communication and increasingly as a preferred rather than a second choice for leading academics. Universities and research institutes need to support students and faculty who choose this option. 

Here is a nice example of a useful, infographic-packed poster via Chris Bode’s Twitter:

Courtesy of Chris Bode

A Research Agenda for Digital Politics

My edited book within the Elgar Research Agendas Series will be out shortly. It is entitled A Research Agenda for Digital Politics, and aims to stimulate innovative research on the role of digital media and communication in the study of politics.

“This Elgar Research Agenda showcases insights from leading researchers on the charged issues and questions that lie ahead in the multidisciplinary field of digital politics. Covering the political implications of the Internet, social media, datafication and computational analytics, it looks to the future of how research might address the political challenges of the digital age and maps the key emerging trends in this field.”

I hope you can recommend the book to your librarian or research unit and consider this volume for your courses. Those with a serious interest in the political implications of digital and social media will find it valuable in considering their own directions for future research.

Contributors include Nick Anstead at the LSE, Jay G. Blumler at Leeds, Andrew Chadwick at Loughborough, Stephen Coleman at Leeds, Alexi Drew at King’s College London, Elizabeth Dubois at Ottawa, Leah Fernandez at Michigan State University, Heather Ford at UT Sydney, M. I. Franklin at Goldsmiths, Paolo Gerbaudo at King’s College London, Dave Karpf at George Washington University, Leah Lievrouw at UCLA, Wan-Ying LIN at City University of Hong Kong, Florian Martin-Bariteau at Ottawa, Declan McDowell-Naylor at Cardiff University, Giles Moss at Leeds, Ben O’Loughlin at Royal Holloway, Patricia Rossini at the University of Liverpool, Volker Schneider at the University of Konstanz, Lone Sorensen at Huddersfield, Scott Wright at the University of Melbourne, and Xinzhi ZHANG at Hong Kong Baptist University.

The Fragile Beauty of Democracy: The Iowa Caucuses

I watched the Iowa caucuses on Monday, February 3, 2020, from the UK. Good coverage came from a remote caucus in Florida – one of Iowa’s 87 satellite caucuses, in addition to its 1,678 precinct caucuses. In that particular satellite caucus, Iowa voters – snowbirds residing in Florida during the winter months – gathered in what appeared to be a gymnasium. Each participant moved to a particular corner or location depending on the candidate they wished to support. If their candidate did not attract a sufficient percentage of supporters, they could move to one of the groups that did. Those in the more populated groups could not move, but there was obviously much discussion and toing and froing among the voters as they were urged to join with others.

npr.org

Journalists were singing the praises of the Iowa caucuses as nothing less than democracy in action. Watching volunteers and citizens debating and sorting themselves by their preferred candidate was inspiring. It was beautiful. Citizens were not simply rolling over on their couch to vote for a candidate but committing themselves in public and debating the choice before them. Of course, not everyone can come to a caucus, but more caucuses were held, some after working hours, to maximize access. Likewise, the satellites enabled citizens to vote even if not currently in Iowa.

But suddenly, just as the preliminary tallies were expected to be shown, and with media pundits anxious to discuss the meaning of the early returns, it was not to happen. Unexplained delays were followed by notification of problems reconciling the numbers across the different methods used to tally all the caucuses, problems with the new app being used to support these tallies, and finally partial returns.

Days later, the votes were counted and despite a close race between Bernie Sanders, the popular vote winner, and Pete Buttigieg, who edged out as the delegate winner (13-12), criticism focused on the delays and not on the overall shape of the final results. 

But a torrent of criticism focused on the early discrepancies and errors in the tallies, which led to the delay in reporting, and on the process, which was slammed as unacceptable. Nathan Robinson in The Guardian called it a mess, a debacle, and a ‘blow to American faith in democracy.’[1] These problems have been the focus of much good journalism, but did journalists forget the major story? Instantly, a beautiful display of democratic practice was turned into an American debacle.

Commentators — as soon as the very night of the caucuses — were proposing sanctions on Iowa’s Democratic Party, saying that it should not be allowed to hold caucuses again, and that Iowa should no longer hold the first contest of the election season. Iowa was going to pay for this screw-up, as far as the candidates and the media were concerned.

Personally, I have not seen many, if any, candidates or media pundits go to the defense of Iowa. This is a shame. Discussion focused on whom to blame for the problems: should it be the state party, the national party apparatus, the app developer, the app itself, the volunteers who couldn’t use it effectively, or some conspiracy? I did find a wonderful letter from Julie Riggs, a contributor to Iowa View in the Des Moines Register, days after the crisis, which claimed that the ‘Iowa caucuses are an American treasure’.[2] She exclaimed: ‘Don’t take our caucus away!’. I completely agree.

Somehow, the media lost the plot when their expectations were not met and they were left stammering in front of the camera with nothing to report. Flipping the story to the cause of the delays could do real damage to the democratic process. Democracy should be more important than efficiency, and real democrats should not surrender control to the media. As Julie Riggs said in her piece, through participating in the cut and thrust of debate in the caucus, she had a ‘palpable feeling’ that ‘the people hold the power’. You do. So don’t give that up. The media and the candidates can just wait a few hours or days for the volunteers and citizens to make sure they get this right. Democracy is inefficient. The media need to get over it.


[1] See: https://www.theguardian.com/commentisfree/2020/feb/07/the-iowa-caucuses-only-reinforced-the-idea-that-democracy-is-a-joke

[2] https://eu.desmoinesregister.com/story/opinion/columnists/iowa-view/2020/02/16/iowa-caucuses-american-treasure-republican-first-time-experience/4754985002/

Downing of Ukrainian Flight #PS752 on 8 January 2020: An Information Disaster?

In 1994, I helped organize a forum for the Programme on Information and Communication Technologies (PICT) on what we called ‘ICT disasters’. We described and compared three cases, including the shooting down of Iran Air Flight 655 by the USS Vincennes, a US warship patrolling the Persian Gulf, killing all 290 people on the plane. The aircraft was incorrectly identified as an Iranian F-14 fighter descending towards the ship, when in fact it was a civilian flight that was ascending. We concluded it was an information disaster, in that available information was not correctly communicated, interpreted and acted on in time to prevent the accident, and we developed explanations for how this was able to occur.

Image from Business Insider

I dare not suggest whether or not the downing of Ukrainian Flight #PS752 is comparable to the disaster of Iran Flight 655 or other civilian flight disasters. The dynamics leading to these two events are undoubtedly very different indeed. However, I do believe this latest disaster in Iran is a case that is worthy of study in ways that will go well beyond assigning blame to particular nations or actors. 

There is a need to understand the social, organizational, and technological dynamics of these disasters as a means of improving policy and practice in the specific areas in which they are embedded, but also to learn lessons that could be informative for areas far removed from airline safety and defense. For example, our case of Iran Flight 655 was examined alongside two other case studies in entirely different areas, one involving London ambulance dispatching, and another the London Stock Exchange. The common denominator was the extreme circumstances involved in each incident that led them to be perceived as disasters.

By studying extreme cases of information disasters, it might be possible to provide new insights and lessons that are applicable to more routine information failures in organizations and society. Such studies might help shape safer digital socio-technical outcomes.

If you are interested, let me suggest these readings on the Flight 655 disaster case study and on how such cases can be explored for broader lessons:

Rochlin, G. I. (1991), ‘Iran Flight 655 and the USS Vincennes: Complex, Large-Scale Military Systems and the Failure of Control’, pp. 99-125 in La Porte, T. (ed.), Social Responses to Large Technical Systems (Dordrecht: Kluwer Academic Publishers).

Dutton, W. H., MacKenzie, D., Shapiro, S., and Peltu, M. (1995), Computer Power and Human Limits: Learning from IT and Telecommunications Disasters. Policy Research Paper No. 33 (Uxbridge: PICT, Brunel University). 

Peltu, M., MacKenzie, D., Shapiro, S., and Dutton, W. H. (1996), ‘Computer Power and Human Limits’, pp. 177-95 in Dutton, W. H. (ed.), Information and Communication Technologies: Visions and Realities. Oxford: Oxford University Press.

Jettison the Digital Nanny State: Digitally Augment Users

My last blog argued that the UK should stop moving along the road of a duty of care regime, as this will lead Britain to become what might be called a ‘Digital Nanny State’, undermining the privacy and freedom of expression of all users. An encouraging number of readers agreed with my concerns, but some asked whether there was an alternative solution.

Before offering my suggestions, I must say that I do not see any solutions outlined by the duty of care regime. Essentially, a ‘duty of care’ approach[1], as outlined in the Cyber Harms White Paper, would delegate solutions to the big tech companies, threatening top executives with huge fines or criminal charges if they fail to stop or address the harms.[2] That said, I assume that any ‘solutions’ would involve major breaches of the privacy and freedom of expression of Internet users across Britain, given that surveillance and content controls would be the most likely necessities of this approach. The remedy would be draconian and worse than the problems to be addressed.[3]

Nevertheless, it is fair to ask how the problems raised by the lists of cyber harms could be addressed. Let me outline elements of a more viable approach. 

Move Away from the Concept of Cyber Harms

Under the umbrella of cyber harms are lumped a wide range of problems that have little in common beyond being potential problems for some Internet users. Looked at with any care, it is impossible to see them as similar in origin or solution. For example, disinformation is quite different from sexting. They involve different kinds of problems, affecting different people, imposed by different actors. Trolling is a fundamentally different set of issues from the promotion of female genital mutilation (FGM). The only common denominator is that any of these actions might result in some harm at some level for some individuals or groups – but they are so different that it violates common sense and logic to put them into the same scheme.

Moreover, many of these problems are not harms per se, but actions that could be harmful – maybe even leading to many harms at many different levels, from psychological to physical. Step one in any reasonable approach would be to decompose this list of cyber harms into specific problems in order to think through how each problem could be addressed. Graham Smith captures this problem in noting that the mishmash of cyber harms might be better labelled ‘users behaving badly’.[4] The authors of the White Paper did not want a ‘fragmented’ array of problems, but the reality is that there are distinctly different problems that need to be addressed in different ways, in different contexts, by different people. For example, others have argued for looking at cyber harms from the perspective of human rights law. But each problem needs to be addressed on its own terms.

Remember that Technologies have Dual Effects

Ithiel de Sola Pool pointed out how almost any negative impact of the telephone could be said to have exactly the opposite impact as well – ‘dual effects’.[5] For example, a telephone in one’s home could undermine your privacy by interrupting the peace and quiet of the household, but it could also provide more privacy compared to people coming to your door. A computer could be used to enhance the efficiency of an organization, but if poorly designed and implemented, the same technology could undermine its efficiency. In short, technologies do not have inherent, deterministic effects, as their implications can be shaped by how we design, use and govern them in particular contexts. 

This is important here because the discussion of cyber harms is occurring in a dystopian climate of opinion. Journalists, politicians, and academics are jumping on a dystopian bandwagon that is as misleading as the utopian bandwagon of the Arab Spring, when all thought the Internet would democratize the world. Both the utopian and dystopian perspectives are misleading, deterministic viewpoints that are unhelpful for policy and practice.

Recognise: Cyber Space is not the Wild West

Many of the cyber harms listed in the White Paper are activities that are illegal. It seems silly to remind the Home Office that what is illegal in the physical world is also illegal online, in so-called cyber space or our virtual world. Given that financial fraud or selling drugs is illegal, it is illegal online, and is a matter for law enforcement. The difference is that activities online do not always respect the same boundaries as activities in the real world of jurisdictions, law enforcement, and the courts. But this does not make the activities any less illegal, only more jurisdictionally complex to police and enforce. This does not require new law but better approaches to connecting and coordinating law enforcement across jurisdictions, spaces, and places. Law enforcement agencies can request information from Internet platforms, but they probably should not outsource law enforcement, as suggested by the cyber harms framework. Cyber space is not the “Wild West” and never was.

Legal, but Potentially Harmful, Activities Can be Managed

The White Paper lists many activities that are not necessarily illegal – in fact, some actions are not illegal, but potentially harmful. Cyberbullying is one example. Someone bullying another person is potentially harmful, but not necessarily so. It is sometimes possible to ignore or stand up to a bully and find that this actually raises one’s self-esteem and sense of efficacy. A bully on the playground can be stopped by a person standing up to him or her, by another person intervening, or by a supervisor on the playground calling a stop to it. If an individual repeatedly bullies, or actually harms another person, then they face penalties in the context of that activity, such as the school or workplace. In many ways, the digital record of cyberbullying can even be useful in proving that a particular actor bullied another person.

Many other examples could be developed to show how each problem has unique aspects and requires different networks of actors to be involved in managing or mitigating any harms. Many problems do not involve malicious actors, but some do. Many occur in households, others in schools, and workplaces, and anywhere at any time. The actors, problems, and contexts matter, and need to be considered in addressing these issues. 

Augment User Intelligence to Move Regulation Closer to Home

Many are beginning to address the hype surrounding artificial intelligence (AI) as a technological fix.[6] But in the spirit of Douglas Engelbart in the 1950s, computers and the Internet can be designed to ‘augment’ human intelligence, and AI along with other tools has the potential to augment the choices of Internet users, as so widely experienced in the use of search. While technically and socially challenging, it is possible, and an innovative challenge, to develop approaches to using digital technology to move regulation closer to the users: with content regulation, for example, being enabled by networked individuals, households, schools, businesses, and governmental organizations, as opposed to moving regulation up to big tech companies or governmental regulators.

Efforts in the 1990s to develop a violence-chip (V-chip) for televisions provide an early example of this approach. It was designed to allow parents to set controls to prevent young children from watching adult programming, moving content controls closer to the viewers and, theoretically, the parents (though children were often the only members of the household who knew how to use the V-chip). The idea was good, its implementation limited.

Cable television services often enable the use of a child lock to reduce access by children to adult programming. Video streaming services and age verification systems have had problems but remain ways to potentially enable a household to make services safer for children. Mobile Internet and video streaming services have apps for kids. Increasingly, it should be possible to design more ways for users and households to control access to content in ways that address many of the problems raised by the cyber harms framework, such as access to violent content, which can be filtered by users.

With emerging approaches to AI, for example, it could be possible to provide not simply warning flags, but information that users could act on to decide whether to block or filter online content, such as unfriending a social media user. With respect to email, while such tools are in their infancy, there is the potential for AI to be used to identify emails that reflect bullying behavior. So Internet users will be increasingly able to detect individuals or messages that are toxic or malicious before they even see them, much as SPAM and junk mail can disappear before ever being seen by the user.[7] Mobile apps, digital media, intelligent home hubs and routers, and computer software generally could be designed and used to enable users to address their personal and household concerns (a minimal sketch of such a user-side filter follows the next paragraph).

One drawback might be the ways in which digital divides and skills could enable the most digitally empowered households to have more sophisticated control over content and services. This will create a need for public services to help households without the skills in-house to grapple with emerging technology. However, this could be a major aspect of the educational and awareness training that is one valuable recommendation of the Cyber Harms White Paper. Some households might create a personalized and unique set of controls over content, while others might simply choose from a number of set profiles that can be constantly updated, much like anti-virus software and SPAM filters that permit users to adjust the severity of filtering. In the future, it may be as easy to avoid unwanted content as it now is to avoid SPAM and junk mail.
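As a thought experiment, the following sketch in Python illustrates the shape of such a user-side filter with an adjustable severity threshold. Everything here is hypothetical: the scoring function, names, and threshold are illustrative assumptions, and in practice the classifier itself would be the hard part.

    from dataclasses import dataclass
    from typing import Callable, Iterable, List

    @dataclass
    class UserFilter:
        """A user-side content filter: regulation pushed down to the user."""
        toxicity_score: Callable[[str], float]  # assumed classifier returning 0.0-1.0
        threshold: float = 0.8                  # user-adjustable severity setting

        def screen(self, messages: Iterable[str]) -> List[str]:
            """Return only the messages that fall below the user's threshold."""
            return [m for m in messages if self.toxicity_score(m) < self.threshold]

    # Illustrative use with a naive stand-in scorer; a household could tighten
    # or loosen the threshold just as one adjusts a SPAM filter's severity.
    naive_score = lambda text: 1.0 if "insult" in text.lower() else 0.0
    inbox = UserFilter(toxicity_score=naive_score, threshold=0.5)
    print(inbox.screen(["hello", "you insult everyone"]))  # -> ['hello']

The point of the design is that the threshold and the choice of scorer sit with the user or household, not with a central regulator or platform.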

Disinformation provides another example of a problem that can be addressed by existing technologies, like the use of multiple media sources and search technologies. Our own research found that most Internet users consulted four or more sources of information about politics, for example, of which the Internet was only one source; and online, they would consult an average of four different sources.[8] These patterns of search mean that very few users are likely to be trapped in a filter bubble or echo chamber, albeit still subject to the selective perception bias that no technology can cure.


My basic argument is not to panic in this dystopian climate of opinion, and to consider the following:

  • Jettison the duty of care regime. It will create problems that are disproportionately greater than the problems to be addressed.
  • Jettison the artificial category of cyber harms. It puts apples and oranges in the same basket in very unhelpful ways, mixing legal and illegal activities, and activities that are inherently harmful, such as the promotion of FGM, with activities that can be handled by a variety of actors and mitigating actions.
  • Augment the intelligence of users. Push regulation down to users – enable them to regulate the content seen by themselves or their children.

If we get rid of this cyber harm umbrella and look at each ‘harm’ as a unique problem, with different actors, contexts, and solutions, then they can each be dealt with through more uniquely appropriate mechanisms. 

That would be my suggestion. It is not as simple as asking others to just ‘take care of this’ or ‘stop this’, but there simply is no magic wand or silver bullet that the big tech companies have at their command. Sooner or later, each problem needs to be addressed by different but appropriate sets of actors, ranging from children, parents, and Internet users to schools, businesses, governmental organizations, law enforcement, and Internet platforms. The silver lining might be that, as the Internet and its benefits become ever more embedded in everyday life and work, and as digital media become more critical, we will routinely consider the potential problems as well as the benefits of every innovation made in the design, use, and governance of the Internet in our lives and work. All should aim to further empower users to use, control, and network with others to control the Internet and related digital media, and not to be controlled by a nanny state.

Further Reading

Useful and broad overviews of the problems with the cyber harms White Paper are available from Gian Volpicelli in Wired[9] and from Graham Smith[10], along with many contributions to the Cyber Harms White Paper consultation.


[1] A solicitor, Graham Smith, has argued quite authoritatively that the White Paper actually “abandons the principles underpinning existing duties of care”, see his paper, ‘Online Harms White Paper Consultation – Response to Consultation’, 28 June 2019, posted on his Twitter feed:  https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html

[2] https://www.bmmagazine.co.uk/news/tech-bosses-could-face-criminal-proceedings-if-they-fail-to-protect-users/

[3] Here I found agreement with the views of Paul Bernal’s blog, ‘Response to Online Harms White Paper’, 3 July 2019: https://paulbernal.wordpress.com/2019/07/03/response-to-online-harms-white-paper/ Also, see his book, The Internet, Warts and All (Cambridge: Cambridge University Press, 2018).

[4] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

[5] Ithiel de Sola Pool (1983), Forecasting the Telephone: A Retrospective Technology Assessment. Norwood, NJ: Ablex. 

[6] See, for example, Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’, October 2019, forthcoming in the European Journal of Risk Regulation, available at: https://osf.io/preprints/lawarxiv/dvx4f/

[7] https://www.theguardian.com/technology/2020/jan/03/metoobots-scientists-develop-ai-detect-harassment

[8] See Dutton, W. H. and Fernandez, L., ‘How Susceptible are Internet Users‘, Intermedia, Vol 46 No 4 December/January 2019

[9] https://www.wired.co.uk/article/online-harms-white-paper-uk-analysis

[10] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

Britain’s Digital Nanny State

The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. Perhaps because discussion of a new approach to Internet regulation occurred in the midst of the public’s focus on Brexit, this initiative has not really been exposed to critical scrutiny. Ironically, its implementation would do incredible harm to the human rights of the public at large, albeit in the name of curbing the use of the Internet by malicious users, such as terrorists and pedophiles. Hopefully, it is not too late to reconsider this cyber harms framework.

The problems with the government’s approach were covered well by Gian Volpicelli in an article in Wired UK. I presented my own concerns in a summary to the Consumer Forum for Communications in June of 2019.[1] The problems with this approach were so apparent that I could not imagine this idea making its way into the Queen’s Speech as part of the legislative programme for the newly elected Conservative Government. It has, so let me briefly outline my concerns.

Robert Huntington, The Nanny State, book cover

The aim has been to find a way to stop illegal or ‘unacceptable’ content and activity online. The problem has been finding a way to regulate the Internet and social media that could accomplish this aim without violating the privacy and freedom of all digital citizens – networked individuals, such as yourself. The big idea has been to apply a duty of care responsibility to the social media companies, the intermediaries between those who use the Internet. Generally, Internet companies, like telephone companies in the past, would not be held responsible for what their users do. Their liability would be very limited. Imagine a phone company sued because a pedophile used the phone: the phone company would have to surveil all telephone use to catch offenses. Likewise, Internet intermediaries would need to know what everyone is using the Internet and social media for in order to stop illegal or ‘unacceptable’ behavior. This is one reason why many commentators have referred to this as a draconian initiative.

So, what are the possible harms? Before enumerating the harms it does consider, note that the White Paper does not deal with harms covered by other legislation or regulators, such as privacy, which is the responsibility of the Information Commissioner’s Office (ICO). Ironically, one of the major harms of this initiative will be to the privacy of individual Internet users. Where is the ICO?

The harms cited as within the scope of this cyber harms initiative included: child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; harassment and cyberstalking; hate crime; encouraging and assisting suicide; incitement to violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18). This is only a start, as there are cyber harms with ‘less clear’ definitions, including: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM); and underage exposure to legal content, such as children accessing pornography, and spending excessive time online – screen time. Clearly, this is a huge range of possible harms, and the list can be expanded over time, as new harms are discovered.

Take one harm, for example: disinformation. Seriously, do you want the regulator, or the social media companies, to judge what is disinformation? This would be ludicrous. Internet companies are not public service broadcasters, even though many would like them to behave as if they were.

The idea is that those companies that allow users to share or discover ‘user-generated content or interact with each other online’ will have ‘a statutory duty of care’ to be responsible for the safety of their users and prevent them from suffering these harms. If they fail, the regulator can take action against the companies, such as fining the social media executives, or threatening them with criminal prosecution.[2]

The White Paper also recommended several technical initiatives, such as to flag suspicious content, and educational initiatives, such as in online media literacy. But the duty of care responsibility is the key and most problematic issue. 

Specifically, the cyber harms initiative poses the following risks: 

  1. Covering an overly broad and open-ended range of cyber harms;
  2. Requiring surveillance in order to police this duty that could undermine privacy of all users;
  3. Incentivizing companies to over-regulate content & activity, resulting in more restrictions on anonymity, speech, and chilling effects on freedom of expression;
  4. Generating more fear and panic among the general public, undermining adoption & use of the Internet and widening digital divides;
  5. Necessitating an invasive monitoring of content, facing a volume of instances that is an order of magnitude beyond traditional media and telecom, such as 300 hours of video posted on YouTube every minute;
  6. Essentially targeting American tech giants (no British companies), and even suggesting subsidies for British companies, which will be viewed as protectionist, leaving Britain as a virtual backwater of a more global Internet; 
  7. Increasing the fragmentation of Internet regulators: a new regulator, Ofcom, new consumer ‘champion’, ICO, or more?

Notwithstanding these risks, this push is finding support for a variety of reasons. One general driver has been the rise of a dystopian climate of opinion about the Internet and social media over the last decade. This has been exacerbated by concerns over child protection and elections in the US and across Europe, such as with Cambridge Analytica, and with Brexit, which created the spectre of foreign interference. Also, Europe and the UK have not developed Internet and social media companies comparable to the so-called big nine of the US and China. (While the UK has a strong online game industry, this industry is not mentioned at all in the White Paper, except as a target of subsidies.) The Internet and social media companies are viewed as foreign, and primarily American, companies that are politically popular to target. In this context, the platformization of the Internet and social media has been a gift to regulators — the potential for companies to police a large proportion of traffic provides a way forward for politicians and regulators to ‘do something’. But at what cost?

The public has valid complaints and concerns over instances of online harms. Politicians have not known what to do, but now have been led to believe they can simply turn to the companies and command them to stop cyber harms from occurring, or they will suffer the consequences in the way of executives facing steep fines or criminal penalties. But this carries huge risks, primarily in leading to over-regulation and inappropriate curtailing of the privacy and freedom of expression of all digital citizens across the UK. 

You only need to look at China to see how this model works. In China, an Internet or social media company could lose its license overnight if it allowed users to cross red lines determined by the government. And this fear has unsurprisingly led to over-regulation by these companies. Thus, the central government of China can count on private firms to strictly regulate Internet content and use. A similar outcome would occur in Britain, making it not the safest place to be online, but a place you would not want to be online, with your content and even your screen time under surveillance. User-generated content will be dangerous. Broadcast news and entertainment will be safe. Let the public watch movies.

In conclusion, while I am an American, I don’t think this is simply an American obsession with freedom of expression. This right is not absolute even in the USA. Internet users across the world value their ability to ask questions, voice concerns, and use online digital media to access information, people, and services they like without fear of surveillance.[3] The Internet can be a technology of freedom, as Ithiel de Sola Pool argued, in countries that support freedom of expression and personal privacy. If Britons decide to ask the government and regulators to restrict their use of the Internet and social media – for their own good – then they should support this framework for an e-nanny, or digital-nanny state. But the real cyber harms for Britain will be those that result from this duty of care framework.


[1] A link to my slides for this presentation is here: https://www.slideshare.net/WHDutton/online-harms-white-paper-april-2019-bill-dutton?qid=5ea724d0-7b80-4e27-bfe0-545bdbd13b93&v=&b=&from_search=1

[2] https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7

[3] Dutton, W. H., Law, G., Bolsover, G., and Dutta, S. (2013, released 2014) The Internet Trust Bubble: Global Values, Beliefs and Practices. NY: World Economic Forum. 

Addressing the Quality of Broadcast Coverage of Politics in Britain

As an American living in the UK, who is not a journalist, I’ve long looked at broadcast journalism in Britain as a model for the US to emulate. Over time, however, my confidence in the UK’s coverage has declined. Rather than simply complain, let me offer a few observations and suggestions. Most recently, weeks of watching broadcast coverage of the 2019 election in the UK reinforced my concern over the state of ‘quality journalism’.

Partisan Coverage

A common rant over highly partisan news coverage is one aspect of the problem, as illustrated by Fox News and CNN in the US. But through much of this last UK election, it seemed both Conservative and Labour Party supporters, along with politicians from the minor parties, were accusing broadcasters of overly partisan favoritism. For example, Channel 4’s Jon Snow has been accused of having a liberal bias in anchoring its news coverage.[1] But partisan bias aside, which is even more evident in the US, partisan coverage is not my primary concern in the British case.

More importantly, broadcast news in the UK seems to be facing problems of quality coverage in several related ways that cumulatively contribute to polarizing the political process and undermining the civility of political discourse. Let me describe a few problematic patterns.

Over-Simplify and Over-Exaggerate

First, we increasingly hear less from the mouths of politicians and candidates for office and more from journalists and members of the public at large. While not a bad turn in itself, it has had negative consequences.

When journalists provide their summary synthesis of a candidate or campaign, it is inevitably very brief and dramatic. One could say this has long been guidance even to quality print news reporters: simplify, then exaggerate. This surely distorts news coverage, but broadcast journalism is particularly vulnerable due to the tremendous pressure to be exceedingly brief and conclusive – ending with a catchy theme. So leading journalists are led to over-simplify and over-exaggerate and, in the process, seldom allow politicians and candidates to speak for themselves. Perhaps news producers see politicians as too nuanced and long-winded for live television news coverage, and more difficult to access and interview than their journalistic surrogates. But the resulting simplification and exaggeration can be misleading and polarizing.

Dramatically Contrasting Competing Points of View

A popular format for 2019 election coverage was moving a broadcasting crew across the UK to visit cities, towns, and villages that ‘represented’ leave, remain, or divided opinions on Britain’s future in the EU. During each stop, the team would broadcast short snippets of interviews with people on the street, in the pub, or in their homes. The idea of getting views from the street was good, but these interviews sought out diverse, colorful, and often caustic viewpoints. One person would call a candidate for office a liar, another would call a candidate a racist, and so on. Often, the interviews would end by concluding that the voters were forced to choose the lesser of various evils.

But of course, choosing four or five caustic or colorful interviewees off the street of any city is not truly representative, much less a systematic or scientific sampling of opinion. Rather than sampling opinions, the broadcasts showcase entertaining or jaw-dropping insults, which convey a clear message: it is okay to insult the candidates for public office. This is cheap, quick, and possibly entertaining, but it contributes to the toxicity and polarization of politics. Perhaps it is too costly to actually sample public opinion, but journalists should refrain from suggesting they are genuinely sampling opinion.

The Leading Question with No Such Thing as a Non-Opinion

An added element of interviews with the public is the prevalence of leading questions, with the journalist asking: “So you can’t really trust any of these candidates, can you?” What can you expect Joe or Joan Public to say? Maybe the journalist discussed their views ahead of time and simply wants to push the interviewee to get to the point, but while going on air with a leading question might speed things up, it also leads journalists to over-simplify and exaggerate the public’s views. It may even create opinions where there are none. It is very common for members of the public not to have an opinion about many issues, and asking leading questions forces them to make up an opinion on the spot. This is a well-studied problem in survey research, arising when respondents are asked about a question on which they have no opinion. It is also a problem for journalism that needs more study.

The Proverbial Horse Race

Finally, the public’s love of a horse race encourages broadcasters to make any election into a horse race if at all possible. The weeks leading up to the 2019 UK election consistently showed a gap in voting intentions in favor of the Conservative Party. But in the week and days before the election, pundits nervously claimed that there were signs of a narrowing of the polls, and a very real possibility of an upset. Not only did this not happen but, post hoc, there seemed to be little sign of this narrowing. Yet the losers were more crushed than they might otherwise have been, and the winners were pleasantly surprised.

These are just a few examples of the ways in which journalistic practices might have gone wrong in ways that could well contribute to the toxicity of public discourse and the polarization of public life. Perhaps this is not new. The old adage is: if it bleeds, it leads. Notwithstanding, we are not just talking about car crashes but about the coverage of candidates and elections for public office. People love to joke about politics and politicians. But journalistic coverage has gone beyond jokes to publicly cutting and insulting remarks that would come close to hate speech in another context.

Why?

This may well be a symptom of a decline of broadcast journalism driven by a raft of factors. More competition for the attention of viewers? Declining revenues and financing relative to demands? More focus on street reporting and immediacy than on thoughtful synthesis? Efforts to entertain rather than to report? Whatever the causes, the problems need to be recognized and agreed upon — that a problem exists and that there is a need to focus attention on higher quality journalism. There is a looming debate ahead over the future of public service broadcasting, and that debate needs to address perceived risks to high quality broadcast journalism, and not become another example of sensational or exaggerated coverage.

[1] https://www.telegraph.co.uk/news/2017/06/28/jon-snow-criticised-mid-interview-panelist-tells-not-everyone/