I had the opportunity to work with Merit, Michigan’s research and education network, and the Quello Center at MSU, which have teamed up on a comment to the US NTIA on how to enhance indicators of broadband access. The comment proposes an innovative approach to consumer sourcing of broadband availability data that builds on the FCC’s crowdsourcing initiatives, while also leveraging the strategic advantages of Merit as a research and education network covering the State of Michigan. If successful, this approach has the potential to be scaled nationally. The comment provides an overview of current approaches, the potential of consumer-sourced data, and an outline of the proposed approach.
The seven papers in the special issue span topics concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the papers reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.
This special issue is the first major release of Internet Policy Review in its fifth anniversary year. The open access journal on internet regulation is a high-quality publication put out by four leading European internet research institutions: The Humboldt Institute for Internet and Society (HIIG), Berlin; the Centre for Creativity, Regulation, Enterprise and Technology (CREATe), Glasgow; the Institut des sciences de la communication (ISCC-CNRS), Paris; the Internet Interdisciplinary Institute (IN3), Barcelona.
The release of this special issue officially kicks off the Internet Policy Review anniversary series of activities, including an Open Access Minigolf during the Long Night of the Sciences (Berlin) and the IAMCR conference (Eugene, Oregon) in June, a grand anniversary celebration (Berlin) in September, and participation in the AoIR 2018 conference (Montreal) in October. For up-to-date information on planned activities, please visit: https://policyreview.info/5years
Papers in this Special Issue of Internet Policy Review
Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University
Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Utrecht University
Daniela van Geenen, University of Applied Sciences Utrecht
Mirko Tobias Schäfer, Utrecht University
Ludo Gorzeman, Utrecht University
Big crisis data: generality-singularity tensions
Karolin Eva Kappler, University of Hagen
Cryptographic imaginaries and the networked public
Sarah Myers West, University of Southern California
Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, KU Leuven
Alejandro Gonzalez Arreaza, KU Leuven
Elsemiek Apers, Conseil International du Notariat Belge
What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner, University of Hamburg
Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson, Lund University
Standard form contracts and a smart contract future
Kristin B. Cornelius, University of California, Los Angeles
Can We Make the Chatham House Rule the Exception?
It is common to debate the definition and correct implementation of the Chatham House Rule. My issue is with its overuse. The Rule should be reserved for exceptional cases, rather than routinized as a norm for managing communication about meetings.
To be clear, the Chatham House Rule (singular) is: “When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.”*
One of the central rationales of this rule was to enable more transparency by freeing governmental and other officials to speak without attribution.** Clearly, there are cases in which individuals cannot speak publicly about an issue given their position. Think of the many cases in which news sources do not wish to be identified by journalists. Similar situations arise in meetings, and it is good that the Chatham House Rule exists for just such occasions to promote greater transparency.
However, the Chatham House Rule is arguably often used in ways that do not promote transparency. For example, it is frequently misunderstood and invoked to prevent participants from conveying any information provided at a meeting. Yet the original rule left participants ‘free to use the information’, just without identifying the source. This expansion of the Rule runs counter to the aim behind its establishment.
In addition, all too often the Rule is invoked not because the content of a meeting is particularly sensitive, but because it creates a sense of tradition and an aura of importance. It conveys the message that something important will be discussed at this meeting. However, this functions more to market a meeting than to create a safe setting for revealing secret, confidential, or new information.
A related rationale is that it is simply ‘the way we do things’ – the tradition. In such cases, there is likely no genuine need for confidentiality; the Rule is being invoked out of habit, with the result that information is inadvertently suppressed.
In many ways, the times are making The Chatham House Rule more problematic.
First, history is pushing us toward more transparency, not less. The spirit of the Rule should lead us to apply it only when necessary to open communication, such as around a sensitive issue, not to routinely regulate discussion of what was said in a meeting.
Secondly, the authenticity of information that comes out of a meeting is often enhanced by knowing more information about its source. If a new idea or piece of information is attributed to an individual, that individual can become a first source for authenticating what was said, and for follow up questions.
Thirdly, technical advances are making it less and less realistic to keep the source of information confidential. Leaks, recordings, live blogging and more are making transparency the norm of nearly every meeting. That is, it is better to assume that any meeting is public than to assume any meeting is confidential.
Over a decade ago, I organized and chaired a meeting that included the UK’s Information Commissioner (the privacy commissioner, if you will), and it was conducted under the Chatham House Rule. At the break, I checked with my IT group about how the recording was going, as we were recording the meeting to prepare a discussion paper to follow. Lo and behold, the meeting was being webcast! The Commissioner and all had a good laugh when we reconvened, but it also reminded me that everyone should assume the default of a meeting in the digital world is that all is public rather than private.
Finally, there are better ways to handle information in today’s technical and political contexts. Personally, I usually record meetings that are about academic or applied matters, as opposed to meetings about personnel issues, for example. So if we convene a group to discuss a substantive issue, such as a digital policy issue like net neutrality, we let all participants know that presentations and discussions will be recorded. We do not promise that anything will be confidential, as it is not completely under our control, but we promise that our recording will be used primarily for writing up notes of the meeting, and that if anyone is quoted, they will be asked to approve the quote before it is distributed publicly.
Of course, when individuals request that something remain confidential, or confined to those present, then we do everything we can to ensure that confidentiality. (As with the Chatham House Rule, much relies on trust among the participants in a meeting.) But this restriction is the exception, rather than the rule. This process tends to ensure more accurate reports of meetings, enables us to quote individuals who should get credit or attribution, and supports transparency.
The Chatham House Rule was established in 1927 at Chatham House, home of the UK’s Royal Institute of International Affairs. The worries at that time were more often about encouraging government officials to participate in discussions of sensitive international concerns by assuring them anonymity. Today there are still likely to be occasions when this rule could be useful in bringing people around the table, but these are likely to be the exception and not the rule in the era of the internet, distributed electronic conferencing, and live tweeting.
** As noted by Chatham House: “The Chatham House Rule originated at Chatham House with the aim of providing anonymity to speakers and to encourage openness and the sharing of information. It is now used throughout the world as an aid to free discussion.” https://www.chathamhouse.org/about/chatham-house-rule
Global debate over alternative approaches to governing the Internet has been wide ranging, but increasingly has pivoted around the wisdom of “multistakeholder governance.” This paper takes controversy around a multistakeholder versus an alternative multilateral approach as a focus for clarifying the changing context and significance of Internet governance. A critical perspective on this debate challenges some of the conventional wisdom marshaled around positions on the history and future of Internet governance. By providing an understanding of the dynamics of Internet governance, this paper seeks to illuminate and engage with issues that are of rising importance to the vitality of a global infrastructure that is becoming more central to economic and social development around the world. Based on the perspective developed in this paper, a multistakeholder process appears best suited for helping a widening array of actors, including multilateral organizations, to connect a worldwide ecology of choices that are governing the Internet.
My paper is being posted on SSRN and I’ll be speaking at the Digital Futures Conference at Shanghai Jiao Tong University this week.
Modernizing and Inspiring a “Startup Mentality” in Legacy Information Technology Organizations
Speakers: David A. Bray, Oxford Martin Associate and CIO of the U.S. FCC, Yorick Wilks, and Greg Taylor
19 June 2014 from 4-5 pm
OII Seminar Room, 1 St Giles’, Oxford
By some estimates, 70% of IT organization budgets are spent on maintaining legacy systems. These costs delay needed transitions to newer technologies. Moreover, this estimate only captures the legacy processes automated by IT; many paper-based, manual processes remain and result in additional hidden, human-intensive costs that could benefit from modern IT automation.
This interactive session will explore the opportunities and challenges of inspiring a “startup mentality” in legacy information technology organizations. Dr. David Bray will discuss his own experiences with inspiring a “startup mentality” in legacy IT organizations, as well as future directions for legacy organizations confronted with modernization requirements. The discussion will be chaired by the OII’s Dr. Greg Taylor. Yorick Wilks, an OII Research Associate and Professor of Artificial Intelligence in the Department of Computer Science at the University of Sheffield, will offer his comments and responses to David’s ideas before the discussion is opened to the audience.
Information about the speakers:
David A. Bray: http://www.oxfordmartin.ox.ac.uk/cybersecurity/people/575
Yorick Wilks: http://www.oii.ox.ac.uk/people/?id=31
Greg Taylor: http://www.oii.ox.ac.uk/people/?id=166
After 12 great years at Oxford, I am delighted to be joining MSU as their new Quello Professor. Not sure how my former USC Trojan colleagues will react to me joining the Spartans! The current Director of the Quello Center, Professor Steve Wildman, a recent Chief Economist at the FCC, posted a much-appreciated announcement of the appointment. I’ll be joining MSU in August 2014 and look forward to staying in touch with you over this and related blogs in the future. One of my goals will be to put the Internet and Web at the center of a forward strategy for building the Quello Center’s role in the new digital world of communication research, policy and regulation. My work as a co-principal on the Global Cyber Security Capacity Centre will continue at MSU, as will my work on the Fifth Estate, partly through the support of a project on collaboration at the DTU (Danmarks Tekniske Universitet) as well as through support of the Quello Center. At MSU, I will hold the James H. Quello Chair of Media and Information Policy.
I have been quite interested in the Internet of Things since participating in a ‘roadmapping’ workshop organized by the TSB SIG on the topic. I chaired a group focused on the social science aspects of the IoT, which led to a working paper that is available online, entitled ‘A Roadmap for Interdisciplinary Research on the Internet of Things: Social Sciences’.
This eventually evolved into a published article in info, an Emerald journal: William Dutton (2014), “Putting things to work: social and policy challenges for the internet of things”, info, Vol. 16, Iss. 3. Available soon at: http://www.emeraldinsight.com/journals.htm?issn=1463-6697&volume=16&issue=3&articleid=17108501&show=pdf
I’ve also spoken about the IoT in a short video produced by VOX (Voices from Oxford), focused on my book edited with Mark Graham, Society and the Internet (OUP 2014). The interview is conducted by Prof. Christine Borgman, Professor and Presidential Chair in Information Studies, UCLA. It is primarily about the edited book, with an example drawn from the Internet of Things. You can see the video at: http://www.voicesfromoxford.org/video/society-and-the-internet-of-things/423