Jettison the Digital Nanny State: Digitally Augment Users

My last blog argued that the UK should stop moving down the road towards a duty of care regime, as this would lead Britain to become what might be called a ‘Digital Nanny State’, undermining the privacy and freedom of expression of all users. A promising number of readers agreed with my concerns, but some asked whether there was an alternative solution.

Before offering my suggestions, I must say that I do not see any solutions outlined by the duty of care regime. Essentially, a ‘duty of care’ approach[1], as outlined in the Online Harms White Paper, would delegate solutions to the big tech companies, threatening top executives with huge fines or criminal charges if they fail to stop or address the harms.[2] That said, I assume that any ‘solutions’ would involve major breaches of the privacy and freedom of expression of Internet users across Britain, given that surveillance and content controls would almost certainly be central to such an approach. The remedy would be draconian and worse than the problems it is meant to address.[3]

Nevertheless, it is fair to ask how the problems raised by the lists of cyber harms could be addressed. Let me outline elements of a more viable approach. 

Move Away from the Concept of Cyber Harms

Under the umbrella of cyber harms is lumped a wide range of problems that have little in common beyond being potential problems for some Internet users. Looked at with any care, it is impossible to see them as similar in origin or solution. For example, disinformation is quite different from sexting. They involve different kinds of problems, affecting different people, imposed by different actors. Trolling is a fundamentally different set of issues from the promotion of female genital mutilation (FGM). The only common denominator is that any of these actions might result in some harm, at some level, for some individuals or groups – but they are so different that it violates common sense and logic to put them into the same scheme.

Moreover, these problems are not harms per se, but actions that could be harmful – that might even lead to many harms at many different levels, from psychological to physical. Step one in any reasonable approach would be to decompose this list of cyber harms into specific problems in order to think through how each could be addressed. Graham Smith captures this point in noting that the mishmash of cyber harms might be better labelled ‘users behaving badly’.[4] The authors of the White Paper did not want a ‘fragmented’ array of problems, but the reality is that these are distinctly different problems that need to be addressed in different ways, in different contexts, by different people. Others have argued, for example, for looking at cyber harms from the perspective of human rights law. Whatever the lens, each problem needs to be addressed on its own terms.

Remember that Technologies have Dual Effects

Ithiel de Sola Pool pointed out that almost any negative impact of the telephone could be matched by an exactly opposite impact as well – what he called ‘dual effects’.[5] For example, a telephone in one’s home could undermine privacy by interrupting the peace and quiet of the household, but it could also enhance privacy compared with people coming to your door. A computer could be used to improve the efficiency of an organization, but, if poorly designed and implemented, the same technology could undermine that efficiency. In short, technologies do not have inherent, deterministic effects; their implications are shaped by how we design, use, and govern them in particular contexts.

This matters here because the discussion of cyber harms is occurring in a dystopian climate of opinion. Journalists, politicians, and academics are jumping on a dystopian bandwagon that is as misleading as the utopian bandwagon of the Arab Spring, when many assumed the Internet would democratize the world. Both the utopian and dystopian perspectives are misleading, deterministic viewpoints that are unhelpful for policy and practice.

Recognise: Cyber Space is not the Wild West

Many of the cyber harms listed in the White Paper are activities that are already illegal. It seems silly to remind the Home Office that what is illegal in the physical world is also illegal online, in so-called cyber space or our virtual world. Given that financial fraud and selling drugs are illegal, they are illegal online, and are matters for law enforcement. The difference is that activities online do not always respect the boundaries of real-world jurisdictions, law enforcement, and the courts. But this does not make the activities any less illegal, only more jurisdictionally complex to police and prosecute. This requires not new law but better approaches to connecting and coordinating law enforcement across jurisdictions. Law enforcement agencies can request information from Internet platforms, but they probably should not outsource law enforcement to them, as suggested by the cyber harms framework. Cyber space is not the “Wild West” and never was.

Legal, but Potentially Harmful, Activities Can be Managed

The White Paper lists many activities that are not necessarily illegal – actions that are legal but potentially harmful. Cyberbullying is one example. Someone bullying another person is potentially, but not necessarily, harmful. It is sometimes possible to ignore or stand up to a bully, and doing so can actually raise one’s self-esteem and sense of efficacy. A bully on the playground can be stopped by the target standing up to him or her, by another person intervening, or by a supervisor calling a stop to it. Individuals who repeatedly bully, or actually harm another person, face penalties in the context of that activity, such as the school or workplace. Indeed, the digital record left by cyberbullying can be useful in proving that a particular actor bullied another person.

Many other examples could be developed to show how each problem has unique aspects and requires a different network of actors to be involved in managing or mitigating any harms. Many problems do not involve malicious actors, but some do. Many occur in households, others in schools or workplaces, and some anywhere at any time. The actors, problems, and contexts matter, and need to be considered in addressing these issues.

Augment User Intelligence to Move Regulation Closer to Home

Many are beginning to address the hype surrounding artificial intelligence (AI) as a technological fix.[6] But in the spirit of Douglas Engelbart’s work from the 1950s onward, computers and the Internet can be designed to ‘augment’ human intelligence, and AI, along with other tools, has the potential to augment the choices of Internet users, as so widely experienced in the use of search. While technically and socially challenging, it is possible to develop approaches that use digital technology to move regulation closer to the users: content regulation, for example, enabled by networked individuals, households, schools, businesses, and governmental organizations, as opposed to moving regulation up to big tech companies or governmental regulators.

Efforts in the 1990s to develop a violence chip (V-chip) for televisions provide an early example of this approach. It was designed to allow parents to set controls to prevent young children from watching adult programming, moving content controls closer to the viewers and, theoretically, to parents. (In practice, children were often the only members of the household who knew how to use the V-chip.) The idea was good; its implementation was limited.

Cable television services often provide a child lock for restricting children’s access to adult programming. Video streaming services and age verification systems have had problems, but they remain potential ways for a household to make services safer for children. Mobile Internet and video streaming services offer apps for kids. Increasingly, it should be possible to design more ways for users and households to control access to content – such as filtering violent content – in ways that address many of the problems raised by the cyber harms framework.

With emerging approaches to AI, for example, it could be possible to provide not simply warning flags, but information that users could act on in deciding whether to block or filter online content, such as unfriending a social media user. With respect to email, while such tools are in their infancy, there is the potential for AI to be used to identify messages that reflect bullying behavior. Internet users will be increasingly able to detect individuals or messages that are toxic or malicious before they even see them, much as SPAM and junk mail can disappear before ever being seen by the user.[7] Mobile apps, digital media, intelligent home hubs and routers, and computer software generally could be designed and used to enable users to address their personal and household concerns.
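To make this idea concrete, here is a minimal sketch, in Python, of what such user-side filtering might look like. The `toxicity_score` function below is a crude keyword heuristic standing in for a trained classifier; the marker words, names, and threshold are all hypothetical. The point is only the logic: the user, not a platform or regulator, decides what gets held back.

```python
from dataclasses import dataclass

# Illustrative keyword heuristic; a real system would use a trained
# classifier, but the user-facing logic would be much the same.
BULLYING_MARKERS = {"loser", "worthless", "nobody likes you"}

@dataclass
class Message:
    sender: str
    body: str

def toxicity_score(message: Message) -> float:
    """Return a score in [0, 1]; a stand-in for an AI model."""
    text = message.body.lower()
    hits = sum(1 for marker in BULLYING_MARKERS if marker in text)
    return min(1.0, hits / 2)

def filter_inbox(messages, threshold: float):
    """Split messages into shown and held-back, with the threshold
    chosen by the user rather than imposed centrally."""
    shown, held = [], []
    for m in messages:
        (held if toxicity_score(m) >= threshold else shown).append(m)
    return shown, held

inbox = [Message("friend", "See you at lunch?"),
         Message("troll", "You are a worthless loser")]
shown, held = filter_inbox(inbox, threshold=0.5)
print([m.body for m in shown])          # ['See you at lunch?']
print(len(held), "message(s) held back")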

One drawback might be the way digital divides in access and skills could give the most digitally empowered households more sophisticated control over content and services. This creates a need for public services to help households without the skills ‘in house’ to grapple with emerging technology – a need that could be met by the educational and awareness training that is one valuable recommendation of the Online Harms White Paper. Some households might create a personalized and unique set of controls over content, while others might simply choose from a number of preset profiles that are constantly updated, much like anti-virus software and SPAM filters that let users adjust the severity of filtering, as sketched below. In the future, it may be as easy to avoid unwanted content as it now is to avoid SPAM and junk mail.
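By analogy with spam-filter severity settings, such preset profiles might look something like the following sketch. The profile names, content categories, and thresholds are all hypothetical; what matters is that a household can pick a preset or tune its own, with updates distributed much like anti-virus definitions.

```python
# Hypothetical preset profiles, analogous to spam-filter severity levels.
# Category names and thresholds are illustrative only.
PROFILES = {
    "open":     {"violence": 1.0, "harassment": 1.0, "adult": 1.0},
    "standard": {"violence": 0.7, "harassment": 0.5, "adult": 0.6},
    "strict":   {"violence": 0.3, "harassment": 0.2, "adult": 0.0},
}

def allowed(content_scores: dict, profile_name: str = "standard") -> bool:
    """Allow content only if every category score is at or below the
    profile's threshold for that category."""
    thresholds = PROFILES[profile_name]
    return all(content_scores.get(cat, 0.0) <= limit
               for cat, limit in thresholds.items())

# A household could start from a preset and override a single category:
PROFILES["our_house"] = dict(PROFILES["standard"], violence=0.2)

print(allowed({"violence": 0.4}, "standard"))   # True
print(allowed({"violence": 0.4}, "our_house"))  # False
```

The design choice worth noting is that the thresholds live with the household, not with a central authority, so the same underlying scoring could serve a strict household and a permissive one alike.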

Disinformation provides another example of a problem that can be addressed with existing technologies, like the use of multiple media sources and search. Our own research found that most Internet users consulted four or more sources of information about politics and, online alone, consulted an average of four different sources.[8] These patterns of search mean that very few users are likely to be trapped in a filter bubble or echo chamber, albeit still subject to the selective perception bias that no technology can cure.


My basic argument is not to panic in this dystopian climate of opinion, and to consider the following:

  • Jettison the duty of care regime. It will create problems that are disproportionately greater than the problems to be addressed.
  • Jettison the artificial category of cyber harms. It puts apples and oranges in the same basket in very unhelpful ways, mixing legal and illegal activities, and activities that are inherently harmful, such as the promotion of FGM, with activities that can be handled by a variety of actors and mitigating actions.
  • Augment the intelligence of users. Push regulation down to users – enable them to regulate content seen by themselves or for their children. 

If we get rid of this cyber harm umbrella and look at each ‘harm’ as a unique problem, with different actors, contexts, and solutions, then each can be dealt with through more appropriate mechanisms.

That would be my suggestion. It is not as simple as asking others to just ‘take care of this’ or ‘stop this’, but there simply is no magic wand or silver bullet at the command of the big tech companies. Sooner or later, each problem needs to be addressed by a different but appropriate set of actors, ranging from children, parents, and Internet users to schools, businesses, governmental organizations, law enforcement, and Internet platforms. The silver lining might be that, as the Internet and its benefits become ever more embedded in everyday life and work, and as digital media become ever more critical, we will routinely consider the potential problems as well as the benefits of every innovation in the design, use, and governance of the Internet. All should aim to further empower users to use, control, and network with others to control the Internet and related digital media, and not to be controlled by a nanny state.

Further Reading

Useful and broad overviews of the problems with the Online Harms White Paper are available from Gian Volpicelli in Wired[9] and Graham Smith,[10] along with many contributions to the White Paper consultation.


[1] A solicitor, Graham Smith, has argued quite authoritatively that the White Paper actually “abandons the principles underpinning existing duties of care”; see his response, ‘Online Harms White Paper Consultation – Response to Consultation’, 28 June 2019, available on his Cyberleagle blog: https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html

[2] https://www.bmmagazine.co.uk/news/tech-bosses-could-face-criminal-proceedings-if-they-fail-to-protect-users/

[3] Here I found agreement with the views in Paul Bernal’s blog post, ‘Response to Online Harms White Paper’, 3 July 2019: https://paulbernal.wordpress.com/2019/07/03/response-to-online-harms-white-paper/ Also see his book, The Internet, Warts and All (Cambridge: Cambridge University Press, 2018).

[4] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

[5] Ithiel de Sola Pool (1983), Forecasting the Telephone: A Retrospective Technology Assessment. Norwood, NJ: Ablex. 

[6] See, for example, Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’, October 2019, forthcoming in the European Journal of Risk Regulation, available at: https://osf.io/preprints/lawarxiv/dvx4f/

[7] https://www.theguardian.com/technology/2020/jan/03/metoobots-scientists-develop-ai-detect-harassment

[8] See Dutton, W. H. and Fernandez, L., ‘How Susceptible are Internet Users’, Intermedia, Vol. 46, No. 4, December/January 2019.

[9] https://www.wired.co.uk/article/online-harms-white-paper-uk-analysis

[10] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

Comments are most welcome.