Jettison the Digital Nanny State: Digitally Augment Users

My last blog argued that the UK should stop moving down the road to a duty of care regime, which would turn Britain into what might be called a ‘Digital Nanny State’, undermining the privacy and freedom of expression of all users. An encouraging number of readers agreed with my concerns, but some asked whether there was an alternative solution.

Before offering my suggestions, I should say that I do not see any solutions outlined in the duty of care regime itself. Essentially, a ‘duty of care’ approach[1], as outlined in the Cyber Harms White Paper, would delegate solutions to the big tech companies, threatening top executives with huge fines or criminal charges if they fail to stop or address the harms.[2] That said, I assume that any ‘solutions’ would involve major breaches of the privacy and freedom of expression of Internet users across Britain, given that surveillance and content controls would be the most likely necessities of this approach. The remedy would be draconian and worse than the problems to be addressed.[3]

Nevertheless, it is fair to ask how the problems raised by the lists of cyber harms could be addressed. Let me outline elements of a more viable approach. 

Move Away from the Concept of Cyber Harms

Under the umbrella of cyber harms is lumped a wide range of problems that have little in common beyond being potential problems for some Internet users. Looked at with any care, it is impossible to see them as similar in origin or solution. For example, disinformation is quite different from sexting. They involve different kinds of problems, affecting different people, imposed by different actors. Trolling is a fundamentally different set of issues from the promotion of female genital mutilation (FGM). The only common denominator is that any of these actions might result in some harm, at some level, for some individuals or groups – but they are so different that it violates common sense and logic to put them into the same scheme.

Moreover, many of the problems are not harms per se, but actions that could be harmful – that might even lead to many harms at many different levels, from psychological to physical. Step one in any reasonable approach would be to decompose this list of cyber harms into specific problems in order to think through how each problem could be addressed. Graham Smith captures this problem in noting that the mishmash of cyber harms might be better labelled ‘users behaving badly’.[4] The authors of the White Paper did not want a ‘fragmented’ array of problems, but the reality is that there are distinctly different problems that need to be addressed in different ways, in different contexts, by different people. Others, for example, have argued for looking at cyber harms from the perspective of human rights law. But each problem needs to be addressed on its own terms.

Remember that Technologies have Dual Effects

Ithiel de Sola Pool pointed out how almost any negative impact of the telephone could be said to have exactly the opposite impact as well – ‘dual effects’.[5] For example, a telephone in one’s home could undermine your privacy by interrupting the peace and quiet of the household, but it could also provide more privacy compared to people coming to your door. A computer could be used to enhance the efficiency of an organization, but if poorly designed and implemented, the same technology could undermine its efficiency. In short, technologies do not have inherent, deterministic effects, as their implications can be shaped by how we design, use and govern them in particular contexts. 

This is important here because the discussion of cyber harms is occurring in a dystopian climate of opinion. Journalists, politicians, and academics are jumping on a dystopian bandwagon that is as misleading as the utopian bandwagon of the Arab Spring, when all thought the Internet would democratize the world. Both the utopian and dystopian perspectives are misleading, deterministic viewpoints that are unhelpful for policy and practice.

Recognise: Cyber Space is not the Wild West

Many of the cyber harms listed in the White Paper are activities that are already illegal. It seems silly to remind the Home Office that what is illegal in the physical world is also illegal online, in so-called cyber space or our virtual world. Given that financial fraud or selling drugs is illegal, it is illegal online, and is a matter for law enforcement. The difference is that activities online do not always respect the boundaries of the real world of jurisdictions, law enforcement, and the courts. But this does not make the activities any less illegal, only more jurisdictionally complex to police and enforce. This does not require new law, but better approaches to connecting and coordinating law enforcement across geographic jurisdictions. Law enforcement agencies can request information from Internet platforms, but they probably should not outsource law enforcement, as suggested by the cyber harms framework. Cyber space is not the “Wild West” and never was.

Legal, but Potentially Harmful, Activities Can be Managed

The White Paper lists many activities that are not necessarily illegal – in fact, some actions are not illegal at all, merely potentially harmful. Cyberbullying is one example. Someone bullying another person is potentially harmful, but not necessarily so. It is sometimes possible to ignore or stand up to a bully and find that this actually raises one’s self-esteem and sense of efficacy. A bully on the playground can be stopped by a person standing up to him or her, by another person intervening, or by a supervisor on the playground calling a stop to it. If an individual repeatedly bullies, or actually harms, another person, then they face penalties in the context of that activity, such as the school or workplace. In many ways, the digital record left by cyberbullying can be useful in proving that a particular actor bullied another person.

Many other examples could be developed to show how each problem has unique aspects and requires different networks of actors to be involved in managing or mitigating any harms. Many problems do not involve malicious actors, but some do. Many occur in households, others in schools, and workplaces, and anywhere at any time. The actors, problems, and contexts matter, and need to be considered in addressing these issues. 

Augment User Intelligence to Move Regulation Closer to Home

Many are beginning to address the hype surrounding artificial intelligence (AI) as a technological fix.[6] But in the spirit of Douglas Engelbart’s early work, computers and the Internet can be designed to ‘augment’ human intelligence, and AI, along with other tools, has the potential to augment the choices of Internet users, as so widely experienced in the use of search. While technically and socially challenging, it is possible, and an innovative challenge, to develop approaches that use digital technology to move regulation closer to the users: with content regulation, for example, being enabled by networked individuals, households, schools, businesses, and governmental organizations, as opposed to moving regulation up to big tech companies or governmental regulators.

Efforts in the 1990s to develop a violence-chip (V-chip) for televisions provide an early example of this approach. The V-chip was designed to allow parents to set controls to prevent young children from watching adult programming, moving content controls closer to the viewers and, theoretically, the parents. (In practice, children were often the only members of the household who knew how to use it.) The idea was good; its implementation was limited.

Cable television services often enable the use of a child lock to reduce children’s access to adult programming. Video streaming services and age verification systems have had problems, but they remain ways to potentially make a household’s services safer for children. Mobile Internet and video streaming services have apps for kids. Increasingly, it should be possible to design more ways for users and households to control access to content, addressing many of the problems raised by the cyber harms framework, such as access to violent content, which users themselves can filter.

With emerging AI approaches, for example, it could be possible to offer not simply warning flags, but information that users could act on in deciding whether to block or filter online content, such as unfriending a social media user. With respect to email, while such tools are in their infancy, there is the potential for AI to be used to identify messages that reflect bullying behavior. Internet users will thus be increasingly able to detect individuals or messages that are toxic or malicious before they even see them, much as SPAM and junk mail can disappear before ever being seen by the user.[7] Mobile apps, digital media, intelligent home hubs and routers, and computer software generally could be designed and used to enable users to address their personal and household concerns.
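To make the SPAM-filter analogy concrete, here is a deliberately simple sketch in Python of what user-side filtering could look like. Everything in it is hypothetical: the terms, weights, and threshold are made up for illustration, and a real system would use a trained model rather than a keyword list. The point it illustrates is that the severity threshold sits with the user or household, not with a regulator or platform.

```python
# A toy, user-side message filter. The blocklist terms and weights below are
# invented for illustration only; real tools would rely on trained classifiers.
HYPOTHETICAL_BLOCKLIST = {"idiot": 2, "loser": 2, "hate you": 3}


def score_message(text: str, blocklist: dict) -> int:
    """Sum the weights of any blocklisted terms found in the message."""
    lowered = text.lower()
    return sum(weight for term, weight in blocklist.items() if term in lowered)


def filter_inbox(messages: list, threshold: int, blocklist: dict) -> list:
    """Keep only messages whose score stays below the user's chosen threshold."""
    return [m for m in messages if score_message(m, blocklist) < threshold]


inbox = ["See you at the meeting", "You are such a loser, I hate you"]

# A strict household profile (low threshold) hides the abusive message before
# it is ever seen; a permissive profile (high threshold) lets everything through.
strict = filter_inbox(inbox, threshold=2, blocklist=HYPOTHETICAL_BLOCKLIST)
permissive = filter_inbox(inbox, threshold=10, blocklist=HYPOTHETICAL_BLOCKLIST)
```

In this sketch, the ‘set profiles’ mentioned above would simply be preconfigured threshold-and-blocklist bundles that a household could choose and adjust, much as users today adjust the aggressiveness of a SPAM filter.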

One drawback might be the ways in which digital divides in access and skills could enable the most digitally empowered households to have more sophisticated control over content and services. This will create a need for public services to help households lacking the in-house skills to grapple with emerging technology. However, this could be a major aspect of the educational and awareness training that is one valuable recommendation of the Cyber Harms White Paper. Some households might create a personalized and unique set of controls over content, while others might simply choose from a number of set profiles that can be constantly updated, much like anti-virus software and SPAM filters that permit users to adjust the severity of filtering. In the future, it may be as easy to avoid unwanted content as it now is to avoid SPAM and junk mail.

Disinformation provides another example of a problem that can be addressed by existing technologies, like the use of multiple media sources and search technologies. Our own research found that most Internet users consulted four or more sources of information about politics, for example; online, they would consult an average of four different sources.[8] These patterns of search mean that very few users are likely to be trapped in a filter bubble or echo chamber, albeit still subject to the selective perception bias that no technology can cure.


My basic argument is not to panic in this dystopian climate of opinion, and to consider the following:

  • Jettison the duty of care regime. It will create problems that are disproportionately greater than the problems to be addressed.
  • Jettison the artificial category of cyber harms. It puts apples and oranges in the same basket in very unhelpful ways, mixing legal and illegal activities, and activities that are inherently harmful, such as the promotion of FGM, with activities that can be handled by a variety of actors and mitigating actions. 
  • Augment the intelligence of users. Push regulation down to users – enable them to regulate content seen by themselves or for their children. 

If we get rid of this cyber harm umbrella and look at each ‘harm’ as a unique problem, with different actors, contexts, and solutions, then they can each be dealt with through more uniquely appropriate mechanisms. 

That would be my suggestion. It is not as simple as asking others to just ‘take care of this’ or ‘stop this’, but there simply is no magic wand or silver bullet that the big tech companies have at their command to accomplish this. Sooner or later, each problem needs to be addressed by different but appropriate sets of actors, ranging from children, parents, and Internet users to schools, businesses, governmental organizations, law enforcement, and Internet platforms. The silver lining might be that, as the Internet and its benefits become ever more embedded in everyday life and work, and as digital media become more critical, we will routinely consider the potential problems as well as the benefits of every innovation in the design, use, and governance of the Internet. All should aim to further empower users to use, control, and network with others to shape the Internet and related digital media, and not to be controlled by a nanny state.

Further Reading

Useful and broad overviews of the problems with the cyber harms White Paper are available from Gian Volpicelli in Wired[9] and Graham Smith,[10] along with many contributions to the Cyber Harms White Paper consultation.


[1] A solicitor, Graham Smith, has argued quite authoritatively that the White Paper actually “abandons the principles underpinning existing duties of care”; see his paper, ‘Online Harms White Paper Consultation – Response to Consultation’, 28 June 2019, available at: https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html

[2] https://www.bmmagazine.co.uk/news/tech-bosses-could-face-criminal-proceedings-if-they-fail-to-protect-users/

[3] Here I found agreement with the views of Paul Bernal’s blog, ‘Response to Online Harms White Paper’, 3 July 2019: https://paulbernal.wordpress.com/2019/07/03/response-to-online-harms-white-paper/. Also see his book, The Internet, Warts and All, Cambridge: Cambridge University Press, 2018.

[4] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

[5] Ithiel de Sola Pool (1983), Forecasting the Telephone: A Retrospective Technology Assessment. Norwood, NJ: Ablex. 

[6] See, for example, Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’, October 2019, forthcoming in the European Journal of Risk Regulation, available at: https://osf.io/preprints/lawarxiv/dvx4f/

[7] https://www.theguardian.com/technology/2020/jan/03/metoobots-scientists-develop-ai-detect-harassment

[8] See Dutton, W. H. and Fernandez, L., ‘How Susceptible are Internet Users‘, Intermedia, Vol 46 No 4 December/January 2019

[9] https://www.wired.co.uk/article/online-harms-white-paper-uk-analysis

[10] https://inforrm.org/2019/04/30/users-behaving-badly-the-online-harms-white-paper-graham-smith/

Britain’s Digital Nanny State

The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. Perhaps because discussion of a new approach to Internet regulation occurred in the midst of the public’s focus on Brexit, this initiative has not really been exposed to critical scrutiny. Ironically, its implementation would do incredible harm to the human rights of the public at large, albeit in the name of curbing the use of the Internet by malicious users, such as terrorists and pedophiles. Hopefully, it is not too late to reconsider this cyber harms framework. 

The problems with the government’s approach were covered well by Gian Volpicelli in an article in Wired UK. I presented my own concerns in a summary to the Consumer Forum for Communications in June of 2019.[1] The problems with this approach were so apparent that I could not imagine this idea making its way into the Queen’s Speech as part of the legislative programme for the newly elected Conservative Government. It has, so let me briefly outline my concerns. 

Robert Huntington, The Nanny State, book cover

The aim has been to find a way to stop illegal or ‘unacceptable’ content and activity online. The problem has been finding a way to regulate the Internet and social media that could accomplish this aim without violating the privacy and freedom of all digital citizens – networked individuals, such as yourself. The big idea has been to apply a duty of care responsibility to the social media companies, the intermediaries between those who use the Internet. Generally, Internet companies, like telephone companies in the past, would not be held responsible for what their users do; their liability would be very limited. Imagine a phone company being sued because a pedophile used the phone. The phone company would have to surveil all telephone use to catch offenses. Likewise, Internet intermediaries would need to know what everyone is using the Internet and social media for in order to stop illegal or ‘unacceptable’ behavior. This is one reason why many commentators have referred to this as a draconian initiative. 

So, what are the possible harms? Before enumerating the harms it does consider, note that the White Paper does not deal with harms covered by other legislation or regulators, such as privacy, which is the responsibility of the Information Commissioner’s Office (ICO). Ironically, one of the major harms of this initiative will be to the privacy of individual Internet users. Where is the ICO?

The harms cited as within the scope of this cyber harms initiative included: child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; harassment and cyberstalking; hate crime; encouraging and assisting suicide; incitement to violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18). This is only a start, as there are cyber harms with ‘less clear’ definitions, including: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM); and underage exposure to legal content, such as children accessing pornography, and spending excessive time online – screen time. Clearly, this is a huge range of possible harms, and the list can be expanded over time, as new harms are discovered. 

Take one harm, for example, disinformation. Seriously, do you want the regulator, or the social media companies to judge what is disinformation? This would be ludicrous. Internet companies are not public service broadcasters, even though many would like them to behave as if they were. 

The idea is that those companies that allow users to share or discover ‘user-generated content or interact with each other online’ will have ‘a statutory duty of care’ to be responsible for the safety of their users and prevent them from suffering these harms. If they fail, the regulator can take action against the companies, such as fining the social media executives, or threatening them with criminal prosecution.[2]

The White Paper also recommended several technical initiatives, such as to flag suspicious content, and educational initiatives, such as in online media literacy. But the duty of care responsibility is the key and most problematic issue. 

Specifically, the cyber harms initiative poses the following risks: 

  1. Covering an overly broad and open-ended range of cyber harms;
  2. Requiring surveillance in order to police this duty that could undermine privacy of all users;
  3. Incentivizing companies to over-regulate content & activity, resulting in more restrictions on anonymity, speech, and chilling effects on freedom of expression;
  4. Generating more fear and panic among the general public, undermining adoption & use of the Internet and widening digital divides;
  5. Necessitating an invasive monitoring of content, facing a volume of instances that is an order of magnitude beyond traditional media and telecom, such as 300 hours of video posted on YouTube every minute;
  6. Essentially targeting American tech giants (no British companies), and even suggesting subsidies for British companies, which will be viewed as protectionist, leaving Britain as a virtual backwater of a more global Internet; 
  7. Increasing the fragmentation of Internet regulators: a new regulator, Ofcom, new consumer ‘champion’, ICO, or more?

Notwithstanding these risks, this push is finding support for a variety of reasons. One general driver has been the rise of a dystopian climate of opinion about the Internet and social media over the last decade. This has been exacerbated by concerns over child protection and over elections in the US and across Europe, such as with Cambridge Analytica, and with Brexit, which created the spectre of foreign interference. Also, Europe and the UK have not developed Internet and social media companies comparable to the so-called big nine of the US and China. (While the UK has a strong online game industry, this industry is not mentioned at all in the White Paper, except as a target of subsidies.) The Internet and social media companies are viewed as foreign, and primarily American, companies that are politically popular to target. In this context, the platformization of the Internet and social media has been a gift to regulators – the potential for companies to police a large proportion of traffic provides a way forward for politicians and regulators to ‘do something’. But at what cost? 

The public has valid complaints and concerns over instances of online harms. Politicians have not known what to do, but now have been led to believe they can simply turn to the companies and command them to stop cyber harms from occurring, or they will suffer the consequences in the way of executives facing steep fines or criminal penalties. But this carries huge risks, primarily in leading to over-regulation and inappropriate curtailing of the privacy and freedom of expression of all digital citizens across the UK. 

You only need to look at China to see how this model works. In China, an Internet or social media company could lose its license overnight if it allowed users to cross red lines determined by the government. And this fear has unsurprisingly led to over-regulation by these companies. Thus, the central government of China can count on private firms to strictly regulate Internet content and use. A similar outcome will occur in Britain, making it not the safest place to be online, but a place you would not want to be online, with your content and even your screen time under surveillance. User-generated content will be dangerous. Broadcast news and entertainment will be safe. Let the public watch movies. 

In conclusion, while I am an American, I don’t think this is simply an American obsession with freedom of expression. This right is not absolute even in the USA. Internet users across the world value their ability to ask questions, voice concerns, and use online digital media to access information, people, and services they like without fear of surveillance.[3] The Internet can be a technology of freedom, as Ithiel de Sola Pool argued, in countries that support freedom of expression and personal privacy. If Britons decide to ask the government and regulators to restrict their use of the Internet and social media – for their own good – then they should support this framework for an e-nanny, or digital-nanny state. But the implications for Britain are the real cyber harms that will result from this duty of care framework. 


[1] A link to my slides for this presentation is here: https://www.slideshare.net/WHDutton/online-harms-white-paper-april-2019-bill-dutton?qid=5ea724d0-7b80-4e27-bfe0-545bdbd13b93&v=&b=&from_search=1

[2] https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7

[3] Dutton, W. H., Law, G., Bolsover, G., and Dutta, S. (2013, released 2014) The Internet Trust Bubble: Global Values, Beliefs and Practices. NY: World Economic Forum. 

Society and the Internet’s 2nd Edition

The 2nd Edition of Society and the Internet should be out in July 2019. You can access information about the book from OUP here: https://global.oup.com/academic/product/society-and-the-internet-9780198843504?lang=en&cc=de

With the academic year fast approaching, we are hoping that the book will be useful for many courses around Internet studies, new media, and media and society. If you are teaching in this area, Mark and I hope you might consider this reader for your courses, and let your colleagues know about its availability. Authors of our chapters range from senior luminaries in our field, such as Professor Manuel Castells, who has written a brilliant foreword, to promising graduate students.

Society and the Internet
2nd Edition.

How is society being reshaped by the continued diffusion and increasing centrality of the Internet in everyday life and work? Society and the Internet provides key readings for students, scholars, and those interested in understanding the interactions of the Internet and society. This multidisciplinary collection of theoretically and empirically anchored chapters addresses the big questions about one of the most significant technological transformations of this century, through a diversity of data, methods, theories, and approaches. 

Drawing from a range of disciplinary perspectives, Internet research can address core questions about equality, voice, knowledge, participation, and power. By learning from the past and continuing to look toward the future, it can provide a better understanding of what the ever-changing configurations of technology and society mean, both for the everyday life of individuals and for the continued development of society at large. 

This second edition presents new and original contributions examining the escalating concerns around social media, disinformation, big data, and privacy. Following a foreword by Manuel Castells, the editors introduce some of the key issues in Internet Studies. The chapters then offer the latest research in five focused sections: The Internet in Everyday Life; Digital Rights and Human Rights; Networked Ideas, Politics, and Governance; Networked Businesses, Industries, and Economics; and Technological and Regulatory Histories and Futures. This book will be a valuable resource not only for students and researchers, but for anyone seeking a critical examination of the economic, social, and political factors shaping the Internet and its impact on society.

Available for Courses in 2019

Nominate an Early Career Researcher to Become a TPRC Junior Fellow

The TPRC is seeking to select up to six TPRC Junior Fellows – early-career researchers engaged in research on the Internet, telecommunication, and media policy in the digital age. Please nominate individuals whom you think might make outstanding Fellows. Those who have won student paper awards at the TPRC conference, as well as Benton Award winners, could be candidates, but we are open to anyone you feel has the potential to do outstanding research on key issues for the TPRC and to engage other early-career researchers in our activities.

The TPRC Junior Fellows Program was designed in part to reward excellence, but also to bring new members into the TPRC community. Those appointed will be honoured and will serve as ambassadors for TPRC, working pro bono and appointed to two-year terms by the Board. Junior Fellows will be emerging scholars with good connections to their peers, including but not limited to successful TPRC paper presenters and alumni of the Graduate Student Consortium and Benton Award.

TPRC hopes that Junior Fellows will help broaden the TPRC community and improve the participation of underrepresented groups: young academics; those from disciplines not traditionally involved in telecom research but engaged in new media and digital policy; those engaged in new research areas; and those who bring greater diversity to our community, including women and minorities.

The TPRC Board anticipates that Fellows will disseminate information about TPRC through their personal networks, identify and engage one-to-one with prospective attendees, and encourage them to participate in TPRC. In return, TPRC will recognize Fellows on the TPRC web site, publicly welcome new appointees during the conference, and provide material and mentoring to support their outreach mission. Of course, the Junior Fellows will be able to list this service on their resumes. Each Fellow will have a designated Board liaison, who will check in periodically to discuss support needed and progress made. TPRC will aim to support your career.

Desiderata

We’re looking for people who meet as many of the following criteria as possible. None of these is a required qualification; we don’t expect that anyone will check all the boxes.

  • From under-represented groups, including women and minorities
  • Working in new research areas and those under-represented at TPRC
  • Academic talent and promise
  • Good network of contacts, e.g. active on social media
  • Able and willing to advocate for TPRC

For information about the TPRC, see: http://www.tprcweb.com/

If you have ideas, you may contact me on this site, or by email at william.dutton@gmail.com

Cybersecurity and the Rationale for Capacity Building: Notes on a Conference

The fifth annual conference of Oxford’s Global Cyber Security Capacity Centre (GCSCC) was held in late February 2019 at the University of Oxford’s Martin School. It engaged over 120 individuals from the capacity building community in a full day of conference sessions, preceded and followed by several days of more specialized meetings.*

The focus of the conference was on taking stock of the last five years of the Centre’s work, and on looking ahead to the next five years in what is an incredibly fast-moving area of Internet studies. It was thus an ideal setting for reflecting on current themes within the cybersecurity and capacity building community. The presentations and discussions at this meeting provided a basis for reflections on the major themes of contemporary discussions of cybersecurity, and on how they come together in ways that reinforce the need for capacity building in this area.

The major themes I took away from the day concerned: 1) the changing nature of threats and technologies; 2) the large and heterogeneous ecology of actors involved in cybersecurity capacity building; 3) the prominence of cross-national and regional differences; and 4) the range and prevalence of communication issues. These themes gave rise to a general sense of what could be done. Essentially, there was agreement that there is no technical fix for security, and that fear campaigns are ineffective, particularly unless Internet users are given instructions on how to respond. However, there was also a clear recommendation not to throw up your hands in despair, as ‘cybersecurity capacity building works’ – nations need to see capacity building as a direction for their own strategies and actions.

Bill courtesy of Voices from Oxford (VOX)

I’ll try to further develop each of these points, although I cannot hope to do justice to the discussion throughout the day. Voices from Oxford (VOX) has helped capture the day in a short clip that I will soon post. But here, briefly, are my major takeaways from the day.

Changing Threats and Technologies

The threats to cybersecurity are extremely wide ranging across contexts and technologies, and the technologies are constantly and rapidly changing. Contrast the potential threats to national infrastructures from cyberwarfare with the threats to privacy from the Internet of Things, such as a baby with a toy that is online. The number of permutations of contexts and technologies is great.

The Complex Ecology of Actors

There is a huge and diverse set of actors and institutions involved in cybersecurity capacity building. There are: cybersecurity professionals, IT professionals, IT, software, and Internet industries; non-governmental organizations; donors; researchers; managers of governments and organizations; national and regional agencies; and global bodies, such as the World Economic Forum and the Internet Governance Forum. Each has many separate but overlapping roles and areas of focus, and each has a stake in global cybersecurity given the risks posed by malicious actors that can take advantage of global weaknesses.

One theme of our national cybersecurity reviews was that the multitude of actors within one country that were involved with cybersecurity often came together in one room for the very first time to speak with our research team. Cybersecurity simply involves a diverse range of actors at all levels of nations and organizations, and with a diverse array of relationships to the Internet and information and communication technologies, from professional IT teams and cybersecurity response teams to users. Developing a more coherent perspective on this ecology of actors is a key need in this area.

National and Regional Differences

Another clear theme of the day was the differences across the various nations and regions, including obvious differences in the scale of efforts between smaller and larger nations, but also between low- and high-income nations. We heard cases from Somalia juxtaposed with examples from the UK and Iceland. And the range and nature of actors across these nations often differed dramatically, such as in the relevance of global facilitating organisations like the World Bank.

Communication in So Many Words

Given this ecology of actors in a global arena, it might not be surprising that communication emerged as a dominant theme. It arose through many presentations and discussions of the need for awareness, coordination, and collaboration (across areas and levels within nations, across countries, and across regions), as well as the need to prioritize efforts and to provide instruction and training, all of which work through communication. Of course, the conference itself was an opportunity for communication and networking that seemed to be highly valued.

What Can Be Done? Capacity Building

However, despite these technical, individual, and national differences, which require intensive efforts to communicate, coordinate, and collaborate nationally, regionally, and globally, there were some common thoughts on what needs to be done. Time and again, speakers stressed that there is no technical fix – no ‘silver bullet’, as one participant put it – for cybersecurity. And there was a general consensus that awareness campaigns that were basically fear campaigns did not work. Internet users, whether in households or major organizations, need instructions on what to do to improve their security. But doing nothing was not an option, and, given the conference, it may not be surprising that there was general acceptance of cybersecurity capacity building as the way forward. Our own research has provided empirical evidence that capacity building works, and that it is in the interest of every nation.**

A short video of the conference will give you a more personal sense of the international ecology of stakeholders and issues: https://vimeo.com/voicesfromoxford/review/322632731/ec0d5e5f9f 

Notes

*An overview of the first five years of the centre is available here: https://www.sbs.ox.ac.uk/cybersecurity-capacity/system/files/GCSCC%20booklet%20WEB.pdf 

**An early working paper is available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2938078


Society and the Internet: a new reader for courses

A new book edited by Mark Graham and myself is in print and available for courses: Society and the Internet: How Networks of Information and Communication are Changing Our Lives. It is published by Oxford University Press, and material about the book is available on their website at: http://ukcatalogue.oup.com/product/9780199662005.do

How is society being shaped by the diffusion and increasing centrality of the Internet in everyday life and work? By bringing together leading research that addresses some of the most significant cultural, economic, and political roles of the Internet, this volume introduces students to a core set of readings that address this question in specific social and institutional contexts.

Internet Studies is a burgeoning new field, which has been central to the Oxford Internet Institute (OII), an innovative multi-disciplinary department at the University of Oxford. Society and the Internet builds on the OII’s evolving series of lectures on society and the Internet. The series has been edited to create a reader to supplement upper-division undergraduate and graduate courses that seek to introduce students to scholarship focused on the implications of the Internet for networked societies around the world.

The chapters of the reader are rooted in a variety of disciplines, but all directly tackle the powerful ways in which the Internet is linked to political, social, cultural, and economic transformations in society. This book will be a starting point for anyone with a serious interest in the factors shaping the Internet and its impact on society.  The book begins with an introduction by the editors, which provides a brief history of the Internet and Web and its study from multi-disciplinary perspectives. The chapters are grouped into five focused sections: (I) Internet Studies of Everyday Life, (II) Information and Culture on the Line, (III) Networked Politics and Government, (IV) Networked Businesses, Industries, and Economies, and (V) Technological and Regulatory Histories and Futures.

A full table of contents is below:

Society and the Internet: How Networks of Information and Communication are Changing Our Lives

Manuel Castells: Foreword

Mark Graham and William H. Dutton: Introduction

Part I. Internet Studies of Everyday Life

1: Aleks Krotoski: Inventing the Internet: Scapegoat, Sin Eater, and Trickster

2: Grant Blank and William Dutton: Next Generation Internet Users: A New Digital Divide

3: Bernie Hogan and Barry Wellman: The Conceptual Foundations of Social Network Sites and the Emergence of the Relational Self-Portrait

4: Victoria Nash: The Politics of Children’s Internet Use

5: Lisa Nakamura: Gender and Race Online

Part II. Information and Culture on the Line

6: Mark Graham: Internet Geographies: Data Shadows and Digital Divisions of Labour

7: Gillian Bolsover, William H. Dutton, Ginette Law, and Soumitra Dutta: China and the US in the New Internet World: A Comparative Perspective

8: Nic Newman, William H. Dutton, and Grant Blank: Social Media and the News: Implications for the Press and Society

9: Sung Wook Ji And David Waterman: The Impact of the Internet on Media Industries: An Economic Perspective

10: Ralph Schroeder: Big Data: Towards a More Scientific Social Science and Humanities?

Part III. Networked Politics and Governments

11: Miriam Lips: Transforming Government by Default?

12: Stephen Coleman and Jay Blumler: The Wisdom of Which Crowd? On the Pathology of a Digital Democracy Initiative for a Listening Government

13: Sandra Gonzalez-Bailon: Online Social Networks and Bottom-up Politics

14: Helen Margetts, Scott A. Hale, Taha Yasseri: Big Data and Collective Action

15: Elizabeth Dubois and William H. Dutton: Empowering Citizens of the Internet Age: The Role of a Fifth Estate

Part IV. Networked Businesses, Industries, and Economies

16: Greg Taylor: Scarcity of Attention for a Medium of Abundance: An Economic Perspective

17: Richard Susskind: The Internet in the Law: Transforming Problem-Solving and Education

18: Laura Mann: The Digital Divide and Employment: The Case of the Sudanese Labour Market

19: Mark Graham: A Critical Perspective on the Potential of the Internet at the Margins of the Global Economy

Part V. Technological and Regulatory Histories and Futures

20: Eli M. Noam: Next-Generation Content for Next-Generation Networks

21: Christopher Millard: Data Privacy in the Clouds

22: Laura DeNardis: The Social Media Challenge to Internet Governance

23: Yorick Wilks: Beyond the Internet and Web

Let us know what you think of our reader, and thanks for your interest.

Identifying centres of cybersecurity research expertise – results to date

We have volunteered to help CDEC find expertise in areas key to its work. One of the first areas we’ve considered is cybersecurity. Where does expertise in cybersecurity research lie, both in the UK and internationally? We asked six cybersecurity researchers in the UK to indicate the locus of the most important contemporary work. While we would not claim to have done a comprehensive study, we found a good deal of convergence through this reputational review of the field.
The top five sites that these experts identified (not in order of priority) were:

•    Cambridge University’s Security Group in the Computer Laboratory: one of the longest running security programmes in UK universities.
Contact: Ross Anderson at Ross.Anderson@cl.cam.ac.uk

•    Oxford University’s Cyber Security Centre, which brings together relevant Oxford departments, and associated centres beyond Oxford, such as in the Cybersecurity Capacity Building Project.
Contact: sadie.creese@cs.ox.ac.uk

•    Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast, founded in 2008 in the Institute of Electronics, Communications and Information Technology, and claimed to be the UK’s largest university cyber security research lab.
Contact: Professor John McCanny, Principal Investigator info@ecit.qub.ac.uk

•    Royal Holloway’s Information Security Group, University of London
Contact: ISG Administrator isg@rhul.ac.uk

•    UCL’s Academic Centre of Excellence for Cyber Security Research, set up in 2012, by GCHQ in partnership with the Research Councils’ Global Uncertainties Programme (RCUK) and the Department for Business Innovation and Skills (BIS).
Contact: Professor Angela Sasse  a.sasse@cs.ucl.ac.uk

Other UK programmes that were mentioned, but not by multiple experts, were:

•    Bristol Security Centre, University of Bristol
•    Institute for Security Science and Technology, Imperial College London
•    Security Lancaster, Lancaster University
•    Academic Centre of Excellence in Cybersecurity, University of Southampton

All of the above centres have been awarded Centre of Excellence status in cyber security research under the BIS/RCUK/EPSRC scheme. While they were not mentioned by our sample of experts, two other centres are among those awarded Centre of Excellence status in cybersecurity research: Centre for Cybercrime and Computer Security, Newcastle University and the School of Computer Science, University of Birmingham.

When asked about international programmes, our reviewers all nominated US programmes as the most significant, including:

•    Belfer Center for Science and International Affairs in the Harvard Kennedy School. This centre has launched a Cyber Security Initiative as part of a project known as Project Minerva, a joint effort of the Department of Defense, Massachusetts Institute of Technology, and Harvard University.

•    CyLab at Carnegie Mellon University, perhaps the largest cyber security group in the US, joining researchers across more than six departments.

•    Cornell University’s Department of Computer Science, which lists security as one of the major strengths of the department

•    Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University

•    The Institute for Security, Technology, and Society (ISTS), Dartmouth

•    Cyber Security Policy and Research Institute (CSPRI) at The George Washington University

•    Stanford Security Laboratory, Stanford University

•    Pacific Northwest National Laboratory (PNNL) National Security Directorate, Cybersecurity

We hope this list stimulates discussion about where relevant expertise on cyber security for the CDEC lies in the UK and abroad. This represents work in progress, and any feedback on our list to date would be very welcome. If there are centres omitted or where you wish to provide information about specific areas of strengths or contacts, please comment or email.

Thanks to our students Elizabeth Dubois, Gillian Bolsover and Heather Ford, who helped conduct, review and collate this research, and to the experts in the field for their supporting input in this area.

Bill Dutton and Bill Imlah
Oxford

Web Science Conference 23-26 June 2014 at Indiana University

I have agreed to co-chair the next Web Science Conference, Web Science 2014, which will be held 23-26 June 2014 at Indiana University. The lead chairs are Fil Menczer and his group at Indiana University, and Jim Hendler at Rensselaer Polytechnic Institute, one of the originators of the Semantic Web.

My mission is to help bring social scientists and humanities scholars to this conference to ensure that it is truly multi-disciplinary, and to help encourage a more global set of participants, attracting academics from Europe and worldwide.

For those who are not quite sure of the scope and methods of Web Science, let me recommend a chapter in my handbook by Kieron O’Hara and Wendy Hall, entitled ‘Web Science’, pp. 48-68 in Dutton, W. H. (2013) (ed.), The Oxford Handbook of Internet Studies. Oxford: Oxford University Press. The core of the Web Science community sometimes views this as a field or discipline in its own right, while I would define it as a topic or focus within a broader, multidisciplinary field of Internet Studies.

In any case, I will be adding to this blog over the coming months as the conference planning progresses, but please consider participating. Information about the conference is posted at: http://websci14.org/#


The EU’s Right to be Forgotten and Why it is Wrong

The Guardian today featured two articles that bring home the risks of governmental policies and directives seeking to enforce a ‘right to be forgotten’. One was about Britain (wisely) seeking to opt out of the EU’s data protection regulation that would give people the right to delete information from the Internet, such as an embarrassing photo. The other was about the British Library archiving the Web, in collaboration with the other main copyright libraries. With one hand, many governments are seeking ways to enable libraries to overcome restrictions, such as copyright, in order to capture our cultural heritage, while with the other hand, they are imposing regulations that will make it easier to erase that history. In the name of privacy and data protection, governments are legitimizing their role in censoring the Internet and Web, and creating new threats to freedom of expression.

Erasing history is not only Orwellian and unfeasible, given the scale of the Web, but it will have a chilling effect on freedom of expression – ushering in a legitimate government role in censorship, even in liberal democratic societies. It is clearly an issue of Internet governance that no advocate of freedom of expression should ignore. It will also create a legal swamp by expanding law and regulation in a privacy and data protection area that is already fraught with uncertainties, and that arguably already covers any abuse of personal privacy targeted by right-to-be-forgotten rules.

My apologies for this brief position statement, but I have written more about this threat to expression in a UNESCO publication and a review in Science. If you think I may wish to forget that I wrote these words at some future date, you may want to save it on your computer.

References

Dutton, W. (2010), ‘Programming to Forget’, a review of Delete: The Virtue of Forgetting in the Digital Age by Viktor Mayer-Schönberger in Science, Vol. 327, 19 March: 1456. http://www.sciencemag.org/cgi/content/summary/327/5972/1456-a

William H. Dutton, Anna Dopatka, Michael Hills, Ginette Law, and Victoria Nash (2011), Freedom of Connection – Freedom of Expression: The Changing Legal and Regulatory Ecology Shaping the Internet. Paris: UNESCO, Division for Freedom of Expression, Democracy and Peace. Reprinted in 2013; translated into French and Arabic.


Independence of the Press is Key to Any Leveson Reform

It is heartening to read Alan Rusbridger’s editorial in The Guardian of 25 March 2013, as he seems to have become more aware of some of the serious weaknesses in the proposed press regulation, which has changed in ways that may have undermined his early support. See: http://www.tandfonline.com/toc/rics20/current He calls attention to the private meetings with Hacked Off, the imposition of punitive damages on those who do not sign up to the regulator, and the power of the regulator to direct papers to print apologies – even where to place them. Hardly an independent press, nor an independent regulator. He notes: “The advocates of reform – including the Guardian – should be unenthusiastic about endorsing a messy compromise with unintended consequences and with the prospect of years of stalemate in the courts and with the regulator itself.”

Mr Rusbridger does complain that few people raised concerns over freedom of the press during early private meetings among editors, but I should hope that all of the stakeholders see the value of public debate on issues that threaten the independence of the press and freedom of expression online. Perhaps there is hope that politicians will get off this escalator towards inappropriate press regulation and take the time to find a resolution that does not threaten the independence of the press or impose governmental controls on bloggers and expression online.

I’ve expressed my own worries online: http://people.oii.ox.ac.uk/dutton/2013/03/20/how-politicians-can-endorse-a-statutory-press-regulator-and-what-can-be-done/