The Flawed Economics of Online Harms Regulation

I am not an economist, but even I can see the huge flaws in a recently published “cost/benefit analysis of the UK’s online safety bill”.[1] My immediate reactions:

The author, Sam Wood, of ‘The Economics of Online Harms Regulation’ in InterMEDIA, begins with an argument that the pandemic ‘[fuelled] concerns about harmful content and behaviour encountered online’. Quite the contrary: I think it is arguable that the Internet and related online media became a lifeline for households in the UK and across the world during this period of lockdowns and working from home. The impetus for the online harms [now called ‘safety’] bill was fueled by the demonization of social media in the years before the pandemic. So, from the very introduction to this piece, I worried about the credibility of the economic analysis it promises.

I was not disappointed. It is jaw-dropping. Even after enumerating some of the many online harms to be addressed and outlining some of the ‘challenges’ in quantifying them, the piece proceeds to do exactly that. Using Department for Digital, Culture, Media & Sport (DCMS) estimates, the author argues that the estimated costs of seven types of harm – the greatest being cyberstalking, at £2,176m over ten years beginning in 2023 – are much greater than the costs of implementing the bill, the largest of which is ‘content moderation’, estimated at £1,700m over the same ten years.

Pulling the costs of regulation out of a hat?

Of course, the costs of implementing this bill are not simply captured by the activities enumerated: awareness campaigns, creating reporting mechanisms, updating terms of service, producing risk assessments, content moderation, and transparency reports. What has the analysis left out?

Well, what about reductions in freedom of expression and commensurate reductions in the value and use of the Internet and related social media for the public good? This will be the major impact of the bill’s disproportionate incentivisation of censorship, as the tech platforms seek to avoid the potentially huge costs to be imposed by government regulators.

The duty of care ‘solution’ is itself the problem, one that will have major negative impacts on what has been a lifeline for households, educators, healthcare professionals and government departments at all times – a role made so visible by the pandemic. The duty of care mechanism will incentivise censorship and surveillance of users and push the tech platforms to act like newspapers in performing ever stronger editorial roles, such as determining what is ‘disinformation’.

One could ask: Would not an economist need to list all the benefits of social media and related activities, and not just the costs?

This is not a neutral, critical analysis, but seems to be a political stitch-up to support the proposed regulation. That said, such a flawed analysis might well make a better case for opposing than supporting this bill. Read it, consider its flaws, and oppose this misguided effort to address particular grievances by introducing a terrible policy. The proposed bill will do unmeasured damage to one of the most critical infrastructures available here and now for enhancing communication and information for every age group in the UK and worldwide.

With apologies to the journal editors, if the BBC or public service broadcasting were subject to such a flawed analysis, I am sure that InterMEDIA would not have even considered publishing such a piece. Then again, this bill seems to enjoy widespread support and specifically advertises its intent to protect freedom of expression. Yet how often do the intentions of regulation end in failure, with unintended collateral damage overwhelming any positive outcomes? I’m afraid we are about to see this happen as this bill undermines an open and global Internet and free expression, and privacy is further eroded in order to enforce tech’s duty of care.

Progress on the pandemic is allowing the UK to talk about moving back to a new normal. In that spirit, may the UK apply a level of common sense and closer parliamentary and public scrutiny to the online safety bill – a level of care that such an important piece of media regulation would normally receive.


[1] Sam Wood, ‘The Economics of Online Harms Regulation’, InterMEDIA, 49(2): 31-34. 

Britain’s Digital Nanny State

The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. Perhaps because discussion of this new approach to Internet regulation occurred in the midst of the public’s focus on Brexit, the initiative has not really been exposed to critical scrutiny. Ironically, its implementation would do incredible harm to the human rights of the public at large, albeit in the name of curbing the use of the Internet by malicious users, such as terrorists and pedophiles. Hopefully, it is not too late to reconsider this cyber harms framework.

The problems with the government’s approach were covered well by Gian Volpicelli in an article in Wired UK. I presented my own concerns in a summary to the Consumer Forum for Communications in June 2019.[1] The problems with this approach were so apparent that I could not imagine this idea making its way into the Queen’s Speech as part of the legislative programme for the newly elected Conservative Government. It has, so let me briefly outline my concerns.

Robert Huntington, The Nanny State, book cover

The aim has been to find a way to stop illegal or ‘unacceptable’ content and activity online. The problem has been finding a way to regulate the Internet and social media that could accomplish this aim without violating the privacy and freedom of all digital citizens – networked individuals, such as yourself. The big idea has been to impose a duty of care on the social media companies, the intermediaries between those who use the Internet. Generally, Internet companies, like telephone companies before them, have not been held responsible for what their users do, and their liability has been very limited. Imagine a phone company being sued because a pedophile used the phone: the phone company would have to surveil all telephone use to catch offenses. Likewise, Internet intermediaries would need to know what everyone is using the Internet and social media for in order to stop illegal or ‘unacceptable’ behavior. This is one reason why many commentators have referred to this as a draconian initiative.

So, what are the possible harms? Before enumerating the harms the White Paper does consider, note that it does not deal with harms covered by other legislation or regulators, such as privacy, which is the responsibility of the Information Commissioner’s Office (ICO). Ironically, one of the major harms of this initiative will be to the privacy of individual Internet users. Where is the ICO?

The harms cited as within the scope of this cyber harms initiative included: child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; harassment and cyberstalking; hate crime; encouraging and assisting suicide; incitement to violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18). This is only a start, as there are cyber harms with ‘less clear’ definitions, including: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM); and underage exposure to legal content, such as children accessing pornography, and spending excessive time online – screen time. Clearly, this is a huge range of possible harms, and the list can be expanded over time, as new harms are discovered.

Take one harm, for example: disinformation. Seriously, do you want the regulator or the social media companies to judge what is disinformation? This would be ludicrous. Internet companies are not public service broadcasters, even though many would like them to behave as if they were.

The idea is that those companies that allow users to share or discover ‘user-generated content or interact with each other online’ will have ‘a statutory duty of care’ to be responsible for the safety of their users and prevent them from suffering these harms. If they fail, the regulator can take action against the companies, such as fining the social media executives, or threatening them with criminal prosecution.[2]

The White Paper also recommended several technical initiatives, such as flagging suspicious content, and educational initiatives, such as online media literacy. But the duty of care responsibility is the key and most problematic issue.

Specifically, the cyber harms initiative poses the following risks: 

  1. Covering an overly broad and open-ended range of cyber harms;
  2. Requiring surveillance in order to police this duty that could undermine privacy of all users;
  3. Incentivizing companies to over-regulate content & activity, resulting in more restrictions on anonymity, speech, and chilling effects on freedom of expression;
  4. Generating more fear and panic among the general public, undermining adoption & use of the Internet and widening digital divides;
  5. Necessitating an invasive monitoring of content, facing a volume of instances that is an order of magnitude beyond traditional media and telecom, such as 300 hours of video posted on YouTube every minute (see the back-of-envelope sketch after this list);
  6. Essentially targeting American tech giants (no British companies), and even suggesting subsidies for British companies, which will be viewed as protectionist, leaving Britain as a virtual backwater of a more global Internet; 
  7. Increasing the fragmentation of Internet regulators: a new regulator, Ofcom, a new consumer ‘champion’, the ICO, or more?
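To give a sense of the scale behind point 5, here is a back-of-envelope sketch. Only the 300 hours per minute figure comes from the text above; the reviewer capacity figures (8 hours of footage per working day, 250 days a year) are my own illustrative assumptions.

```python
# Back-of-envelope: the scale of content moderation implied by
# 300 hours of video uploaded to YouTube every minute (figure cited above).
HOURS_UPLOADED_PER_MINUTE = 300

hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24   # 432,000 hours/day
hours_per_year = hours_per_day * 365                  # ~157.7 million hours/year

# Illustrative assumption: one full-time reviewer can watch 8 hours of
# footage per working day, 250 working days per year.
reviewer_hours_per_year = 8 * 250                     # 2,000 hours/reviewer/year
reviewers_needed = hours_per_year / reviewer_hours_per_year

print(f"Uploaded per day:  {hours_per_day:,} hours")
print(f"Uploaded per year: {hours_per_year:,} hours")
print(f"Full-time reviewers just to watch everything once: {reviewers_needed:,.0f}")
```

On these assumptions, roughly 79,000 full-time reviewers would be needed simply to watch each new upload once, before making any judgment about harm – a volume far beyond anything traditional media or telecom regulators have faced, and a hint of why a duty of care pushes platforms toward automated filtering.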

Notwithstanding these risks, this push is finding support for a variety of reasons. One general driver has been the rise of a dystopian climate of opinion about the Internet and social media over the last decade. This has been exacerbated by concerns over child protection, and by elections in the US and across Europe – such as the Cambridge Analytica scandal and Brexit – that created the spectre of foreign interference. Also, Europe and the UK have not developed Internet and social media companies comparable to the so-called big nine of the US and China. (While the UK has a strong online game industry, this industry is not mentioned at all in the White Paper, except as a target of subsidies.) The Internet and social media companies are viewed as foreign, and primarily American, companies that are politically popular to target. In this context, the platformization of the Internet and social media has been a gift to regulators – the potential for companies to police a large proportion of traffic provides a way forward for politicians and regulators to ‘do something’. But at what cost?

The public has valid complaints and concerns over instances of online harms. Politicians have not known what to do, but have now been led to believe they can simply turn to the companies and command them to stop cyber harms from occurring, or suffer the consequences in the form of executives facing steep fines or criminal penalties. But this carries huge risks, primarily of over-regulation and an inappropriate curtailing of the privacy and freedom of expression of all digital citizens across the UK.

You only need to look at China to see how this model works. In China, an Internet or social media company could lose its license overnight if it allowed users to cross red lines determined by the government. This fear has unsurprisingly led to over-regulation by these companies, and thus the central government of China can count on private firms to strictly regulate Internet content and use. A similar outcome will occur in Britain, making it not the safest place to be online, but a place you would not want to be online – with your content and even your screen time under surveillance. User-generated content will be dangerous. Broadcast news and entertainment will be safe. Let the public watch movies.

In conclusion, while I am an American, I don’t think this is simply an American obsession with freedom of expression. This right is not absolute even in the USA. Internet users across the world value their ability to ask questions, voice concerns, and use online digital media to access information, people, and services they like without fear of surveillance.[3] The Internet can be a technology of freedom, as Ithiel de Sola Pool argued, in countries that support freedom of expression and personal privacy. If Britons decide to ask the government and regulators to restrict their use of the Internet and social media – for their own good – then they should support this framework for an e-nanny, or digital-nanny, state. But they should recognize that real cyber harms for Britain will result from this duty of care framework.


[1] A link to my slides for this presentation is here: https://www.slideshare.net/WHDutton/online-harms-white-paper-april-2019-bill-dutton?qid=5ea724d0-7b80-4e27-bfe0-545bdbd13b93&v=&b=&from_search=1

[2] https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7

[3] Dutton, W. H., Law, G., Bolsover, G., and Dutta, S. (2013, released 2014) The Internet Trust Bubble: Global Values, Beliefs and Practices. NY: World Economic Forum. 

Should Tweeting Politicians be able to Block Users?

An interesting debate has been opened up by lawyers who have argued that President Trump should not block Twitter users from posting on Twitter. I assume this issue concerns his personal account @realDonaldTrump (32M followers), but the same issue would arise over his newer official account as President, @POTUS (almost 19M followers).


Apparently, the President has blocked users who may have made rude or critical comments on one or more of his Twitter posts. Regardless of the specifics of Donald Trump’s tweets, and the specific individuals blocked, the general question is: should any American politician who tweets be able to block any user without violating the user’s First Amendment rights? I would say yes, but others, including the lawyers posing this question, would disagree.

I would think that any user has a right to block any other user, particularly if they appear to be a malicious user, bot, or simply obnoxious. I’d argue this on the basis that these are the affordances of Twitter, and the rules of the site are – or should be – known by users. Moreover, the potential for blocking is a means of maintaining some level of civility on one’s social media. Having rude or obnoxious users posting harassing comments could frighten other users off the site, and thereby undermine a space for dialogue and the provision of information. If there is no way for a social media site to moderate its users, its very survival is at risk.

I actually argued this in the mid-1990s, when the issue surrounded electronic bulletin boards and some of the first public forums, such as Santa Monica, California’s Public Electronic Network (PEN).* Essentially, I maintained that any democratic forum is governed by rules, such as Robert’s Rules of Order for many face-to-face meetings. Such rules evolved in response to the difficulties of conducting meetings without rules. Some people will speak too long and not take turns. Some will insult or talk over the speaker. Democratic communication requires some rules, even though this may sound somewhat ironic. As long as participants know the rules in advance, rules of order seem a legitimate means of enabling expression. Any rule suppresses some expression in order to enable more equitable, democratic access to a meeting. Obviously, limiting a tweet to 140 characters is a restriction on speech, but it has fostered a rich medium for political communication.

In this sense, blocking a Twitter user is a means of moderation, and if known in advance, and not used in an arbitrary or discriminatory way, it should be permitted. That said, I will post a Twitter poll and let you know what respondents believe. Bryan M. Sullivan (2017), an attorney, argues a very different position in his Forbes article.** I respectfully disagree, but wonder what the Twitter community thinks, though it is easy to guess that they will be on the side of not being blocked. But please think about it before you decide.

Reference

*Dutton, W. H. (1996), ‘Network Rules of Order: Regulating Speech in Public Electronic Fora,’ Media, Culture, and Society, 18 (2), 269-90. Reprinted in David, M., and Millward, P. (2014) (eds), Researching Society Online. (London: Sage), pp. 269-90.

**Sullivan, B. (2017), ‘Blocked by the President: Are Trump’s Twitter Practices Violating Free Speech?’, Forbes, available here: https://www.forbes.com/sites/legalentertainment/2017/06/08/blocked-by-the-president-are-trumps-twitter-practices-violating-free-speech/#40fe73043d57

Wonderful Student Team on Study of Whiteboards at MSU

I am working with two of my master’s students on a study of the issues that arose over whiteboards in the dormitories at MSU. The students presented their conclusions yesterday, and today they finish their paper. I’ll then work with their paper to develop a working paper that we might blog or disseminate in various ways. It was a fascinating and fun project in several ways. It was for a course on media and information policy, so this led us to quickly see the whiteboard as a medium for communication and information. It is simple – everyone understands it – but it raises many of the same issues that are raised by social media and the Internet on college campuses. It also fits into the rising debate over speech on college campuses. I can’t wait to share our findings, which I believe demonstrate the value of research in contrast to journalistic coverage of events such as the whiteboard controversy at MSU. It also really does speak to the issues of freedom of communication and civility in the university context.

Most importantly, it was a delight working with Irem Gokce Yildirim, an international student from Turkey, and Bingzhe Li, an international student from China, on this study of communication on an American campus. This is the kind of experience that makes teaching so enjoyable and rewarding.

[We are all laughing about my clumsy efforts to take this with my selfie stick.]

Irem, Bill, and Bingzhe

UNESCO’s Connecting the Dots: Options for Future Action, 3-4 March 2015

UNESCO’s CONNECTing the Dots conference will reflect on the report of UNESCO’s Internet study, entitled ‘Keystones to foster inclusive Knowledge Societies: Access to information and knowledge, Freedom of Expression, Privacy, and Ethics on a Global Internet’. Representatives from 180 Member States will attend to present and discuss the major themes of this report. It will be held at the headquarters of UNESCO at 7, place de Fontenoy, Paris, 75007, France. As a contributor to this study and the report, I will be there to help moderate, report, and summarize the conclusions of the two-day meeting.

My policy class at the Quello Center at MSU is reading the report and will join the live stream of the conference. I hope you will do the same. Information about live streaming of the event will be on the conference Web site, so consider joining the conversation. UNESCO is doing all it can to ensure that this is truly a multistakeholder consultation on how UNESCO can contribute to fostering an inclusive, global, open and secure Internet in the coming years.

Notes:

Report available at: http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/internet_draft_study.pdf

Conference Web Site at: http://en.unesco.org/events/connecting-dots-options-future-action