Could it be that the digerati are beginning to wonder about the origins of such ‘innovations’ as video communication, AI, remote work, and more? Are they discovering that all these innovations have a long history in the development of information and communication technologies (ICTs)?
These questions arose as I’ve become aware of a variety of initiatives to better document the history of communication and information technologies and the people associated with the communication revolution. It is arguable that most individuals focused on new advances in media and ICTs have no historical perspective at all. I’ve called it ‘innovation amnesia’. Some think video is new, for example, but have little or no knowledge of the many efforts to launch video communication since the late 1960s.
Most recently I was interviewed by the individuals behind the development of Archives of IT. These developers are realizing that many of those associated with the emergence of information technologies have either passed away or may not be around many more years. The Archives are collecting oral histories of those closely associated with IT and the IT industry in the UK and worldwide. As they began to look at those studying the societal implications of IT, they interviewed me, as the founding director of the OII, among a number of others to begin tracking its study. See: https://archivesit.org.uk/interviews/professor-bill-dutton/
This experience reminded me of my own work in archiving the papers of James H. Quello, one of the longest-serving members of the US Federal Communications Commission (FCC). When I was Director of the Quello Center at MSU, I put together the James H. Quello Archives, which is still supported and updated by the Quello Center.
Similarly, an old colleague from my USC days (A. Michael Noll) has assembled an archive of William O. ‘Bill’ Baker, who was the vice president for research at Bell Telephone Laboratories from 1955 to 1973, retiring as Chairman in 1980. Bell Labs was critical to the revolution in communication technologies.
Teaching and research could be supported by new materials such as these. Might these be traces of a new interest in the history of ICTs and their implications for society? Possibly, and for two basic reasons.
First, there is an increasingly interesting and cumulative history to document.
Secondly, the gathering of information and conduct of interviews, for example, are increasingly possible anywhere in the world. ICTs have democratized the process of archiving, so we no longer have to rely only on special collections in libraries. Individuals and civic-minded associations have the wherewithal to archive.
So, as we see people talking about old enduring topics as if they are genuinely new, more of us can see the value of better documenting and preserving the social dynamics of past successes and failures – and we have the means to do it – archiving.
Having created and served on advisory boards in a number of organisations and countries, I’ve begun to see some principles that can guide others serving on an advisory board. I am not a management consultant nor an expert on advisory boards, but as I try to think through my own experiences on boards, I thought it would be fun to write about my views on what could be key principles. These have been learned the hard way, by seeing the reactions of organisations and other members of boards to my interventions – efforts to give advice and support organisations, mainly those involved in academic research.
Any organization, such as an academic unit, can become too insulated from, or too loosely connected to, a multitude of important stakeholders, ranging from other academics to policy and practitioner communities and any audiences it seeks to reach. Its members may ask themselves: Is our work meeting the high expectations set for the organisation? Are we doing our work in ways that are recognised as best practice in relevant communities? How can we excel further on any number of criteria? Are we missing important topics or areas of work? Are there new and promising sources of funding? To answer such questions, it can be helpful to set up a group of individuals who are trusted to be constructive but who also have a critical perspective that can inform the unit moving forward.
Given such questions, the organization often sets up an advisory board to review the unit’s work on a periodic basis and give feedback on notable strengths and any weaknesses that could be addressed. A report or multiple documents are assembled for the board members to review and provide feedback on during a short but substantively rich meeting of the board. So what principles might help board members contribute to their next board meeting? I apologise in advance for keeping these simple, but I often forget them in the course of a meeting.
1. The organization knows far more than the board about its activities and practices.
One positive role of a board meeting is that it should force, or at least incentivise, the organisation to pull together a clear overview of its activities and the issues it is facing. In the process of pulling this information together and communicating it to the board, a large proportion of the work of the advisory board is accomplished. The managers and leadership of the organisation update their sense of who has done what, and with what impact, over the last period of time. In the course of doing so, the organisation develops a better understanding of its strengths and weaknesses, and how they can or cannot be addressed, before the board even meets.
An obvious corollary of this point is that outside advisory boards really can’t possibly understand internal personnel and management issues. They might need to know they exist but without knowing the individuals and circumstances in detail, they have no basic grounding for advising an organisation. Keep the board focused on the work of the organisation and its implications. At the same time, I’ve been impressed when an organisation does not hesitate to note that it is facing some interpersonal, management, or leadership issues as one aspect of conveying the factors facilitating or limiting its work.
2. Advice is not likely to be the only – or even primary – objective of meeting with the board.
An advisory board can help progress a number of objectives, with advice being only one, and not necessarily the primary, reason for its existence. As noted above, it creates an occasion for self-reflection by the organisation. In addition, it can help the unit reach out to other stakeholders and constituencies – by incorporating influential individuals across these different targets for outreach and providing them with information about the organisation. It can provide support to the organisation, endorsing its activities and practices. The status and diversity of individuals on the board can communicate something about the importance and diversity of the organisation. The board is a reflection of the organisation.
3. There is limited time for advice.
It seems inevitable that there is limited time a board can be expected to spend reading material before a meeting, and meetings are generally limited to one or at most a few hours. Once board members reintroduce themselves to one another, and the organisation presents information to remind the board of its activities, accomplishments, and any new developments, little time is left for real feedback or discussion. Organisations should, and usually do, try to ensure there is ample time for discussion, but they often over-program meetings in ways that leave little time for feedback. Nor does it help to send a questionnaire or email soliciting further feedback afterwards, as the organisation will mainly hear what there is time to communicate during the meeting.
This is one reason why online meetings do not work nearly as well as in-person, face-to-face meetings of a board. Recent experience during the pandemic suggests that more advisors can attend an online meeting, which is one of the best features of meeting online. However, most in-person meetings can embed meaningful but informal communication around the event, such as a dinner or site visits. These occasions give individuals the chance to clarify their assessments, time to get over their differences of opinion and ‘make up’, and the opportunity for the group to gain a better sense of its value to, and support from, the organisation.
4. Advice is difficult to give and to receive.
It is common for board members to provide very general feedback that recognises the accomplishments documented in the material communicated to members and validates the challenges the organisation has identified. In 1995, I put together a document for the Programme on Information and Communication Technologies (PICT), which I directed, entitled “A Profile of Research and Publications 1995”. My key aim in compiling this was to communicate the incredible range and quality of research projects and publications that the PICT centres had completed. I was delighted when the board noted that we had done a great deal over the span of the project – they were impressed, as they had not seen this pulled together until this report. It was 120 pages jam-packed with information about our work and its impact. So the members simply acknowledging the productivity and quality of the programme was exactly the feedback I had hoped for. Very simple.
Too often, as a member of a board, I can get carried away with a perceived need to provide advice, partly, I am sure, as a reflection of commonly being asked to review books, articles, or proposals, when critical comment is genuinely requested. But an organisation probably does not want a review of its report to the board and most advice we could give is already known by the organisation. As above, they know more than the board about the strengths and weaknesses of their organisation. So I try to prioritise what I have to offer in case I have a very limited time to speak – what would be my one idea.
Nevertheless, organisations need to listen and accept that, in creating an advisory board, they have asked for advice. So do not be surprised if you get advice you don’t want to hear. There is no need to take the advice. More than likely it is something that should have been considered before, but it is always worth understanding what the advisor is seeing and saying, asking why particular advice was given, and deciding whether it is an idea for the leadership to kill, discuss further, develop, or perhaps address better in communications about the organisation’s project(s).
5. Advise and forget.
Finally, despite all I have said above, it is entirely fair and appropriate for any member of an advisory board to give any feedback that seems useful for the board member to convey. In my opinion, as a board member, you really should not worry about how it is received or whether it will be well received. Some may regard your advice as simplistic, wrong, old-fashioned, patronising, ill-informed, or in any other way, unhelpful. But that is not your problem. You are simply responding to what you’ve read and heard and think important to communicate. That is what you volunteered your time to do, so board members really can’t afford to second guess whether to communicate what they’ve gathered from the material. It is the option of the organisation to take or leave your advice. If your feedback is unhelpful, such as in misunderstanding what the organisation has done, then they need to do a better job in communicating their work or in selecting advisors.
In conclusion, and to be fair, the aim of any member of an advisory board is not simply to give advice. People join an advisory board because they have been asked, or because they want to keep up with the field, support an organisation, or meet other members of the board – network, or you name it. In commenting on this blog, a colleague put it this way: “In addition to giving advice, I see the board’s role as providing a web of professional networks that create an additional resource for the organisation. Advisory board members should use their networks for a variety of functions, such as raising visibility, distributing information about outputs or vacancies, and helping organizational leaders establish contacts.”
Given these potential payoffs, I’ve found every advisory board I’ve served on to have been beneficial in many ways, both personally and professionally.
Is there another principle I should add to this list?
The Value of Academics Working with Government: Lessons from Collaboration on Cybersecurity
William H. Dutton with Carolin Weisser Harris
Six of the benefits of academics collaborating with government include realising the value of: 1) complementary perspectives and knowledge sets; 2) different communication skills and styles; 3) distributing the load; 4) different time scales; 5) generating impact; and 6) tackling multifaceted problems.
Our Global Cybersecurity Capacity Centre (GCSCC) at Oxford University recently completed a short but intense period of working with a UK Government team focused on cybersecurity capacity building with foreign governments. In one of our last meetings around our final reports, we had a side discussion – not part of the report – about the differences between academic researchers and our colleagues working in government departments. Of course, some academics end up in government and vice versa, but individuals quickly adapt to the different cultures and working patterns of government or academia if they choose to stay.
For example, the differences in our time horizons were not controversial: some of us on the academic team have been working on particular issues for decades, while our government colleagues are focused on starting and finishing a project within a short, finite period, such as a year or even less. These different time horizons are only one of many challenges tied to the very different ways of working, but what about the benefits?
What is the value of fostering more academic-government collaboration? Here we were not as quick to come up with clear answers. Collaboration between academia and government is more difficult than working within one’s own institutional context, so there must be benefits to justify the greater commitment of time and effort. On reflection, and from our experience, a number of real benefits and taken-for-granted assumptions come to mind. These are all ways to realise the benefits of:
1. Complementary Perspectives and Knowledge Sets
Our focus on cybersecurity, for example, is inherently tied to both academic research and policy and practice. By bringing actors together across academia and government, there is less risk of working in a way that is blind to the perspectives of other sectors. It might be impossible to shape policy and practice if the academic research is not alert to the issues most pertinent to government. Likewise, governments cannot establish credible policy or regulatory initiatives without an awareness of the academic controversies and consensus around relevant concepts and issues.
2. Different Communication Skills and Styles
Academic research can get lost in translation if academics are not confronted with what resonates well with governmental staff and leadership. What is understood and misunderstood in moving across academic and government divides? Think of the acronyms used in government versus academia. How can assumptions and work be better translated to each set of participants? Working together forces a confrontation with these communication issues, as well as the different styles in the two groups. Comparing the slides prepared by academics with those of government staff can provide a sense of people coming from different planets, not just different sectors.
3. Distributing the Load – Time to Read Everything?
My academic colleagues noticed that many in government simply did not have the time to read extremely long and often dense academic papers or books, much less to write a blog about collaborative research! It was far better to provide brief, executive-oriented briefing papers. Better yet was a short, 10-minute oral explanation of any research, or a discussion in the form of a webinar. Do they need to know the finest details of a methodology, or simply to have a basic understanding of the method and trust that the methodology followed was state of the practice, done professionally, or peer reviewed? Can they move quickly to: what did they find? Being able to trust the methods of the academics saved an enormous amount of time for the governmental participants.
Likewise, did the academics want to take the time to read very long and detailed administrative reports and government documents? Clearly, they also appreciated a brief summary or distillation of any texts that were not central to the study. Unless academics were focused on organizational politics and management, they did not need to know why the government had chosen to support or not support particular work; they could trust that there was a green light to go ahead and that their colleagues in government would try to keep the work going.
So, the two groups read and were interested in reading and hearing different kinds of reports and documentation, about different issues, and at different levels. Working together, they could then cover more ground in the time of the project and better understand each other’s needs and what each could contribute to the collaboration.
4. Different Time Scales
As mentioned above, another aspect of time was the different time scales of academic research versus governmental studies. One of our colleagues had been working on Internet studies for over four decades, and a short governmental study could easily draw on that investment in time. Not everyone needed to spend decades on research.
Academics can’t change the focus of their research too rapidly without losing their basis of expertise. The cycle of attention in government may move towards the interests of an academic from time to time, and when it does, it is important to connect governmental staff with the right researchers to take advantage of their different time scales.
The different time scales do not undermine collaboration, but they put a premium on being able to connect governmental research with relevant academic research that is at a level and at a time at which the findings can be valuable to policy or practice. Academics cannot chase policy issues as they will always be late to the debate. But governmental researchers can find researchers doing relevant work that is sufficiently mature to inform the questions faced by the government.
5. Generating Impact
Academics are increasingly interested in having an impact, which has been defined as ‘having an effect, benefit, or contribution to economic, social, cultural, and other aspects of the lives of citizens and society beyond contributions to academic research’ (Hutchinson 2019). Is their research read, understood, or acted upon? Does it make a difference to the sector of relevance to their research? Working directly with government can enhance the likelihood of governmental actors being aware of and reactive to academic research. Collaboration does not guarantee greater productivity (Lee and Bozeman 2005). However, it has the potential to support the greater dissemination of the research across government and create greater awareness of the evidence behind the policy advice of academic researchers.
Of course, governments do not simply write reports to tick boxes. They also wish to have an impact on policy or practice. Working with academics can help gain insights and credibility that can make reports more novel, interesting, and meaningful for enacting change in policy and practice. They can also gain a better sense of the limits of academic research as researchers explain the lack of evidence in some areas and the needs for additional work.
6. Tackling Multifaceted Problems
Cybersecurity is not only tied to academia and government. Many other actors are involved. We found that our partners in government had different contacts with different networks of actors than we had and vice versa. Putting together these networks of actors enabled us to better embed the multiplicity of actors – other governments, civil society, non-governmental organizations, business and industry, and experts in cybersecurity – in our joint work.
The potential benefits are many, but there are risks. Participants need to care a great deal about the common work and be committed to the area in order to overcome the challenges. That said, the different time frames, communication styles, and more that confront collaboration between government and academia not only can be addressed but also bring some benefits to the collaboration.
Cybersecurity is one of many policy areas that require engagement with various stakeholders, and for meaningful engagement to develop you need to build trusting relationships. Projects like ours, where partners from different stakeholder groups (in this case academia and government) work together, can help build those relationships and strengthen the potential for others to trust the outputs of joint projects.
Looking into one of my College’s hallway recycling bins, as one does, I found a fourth edition paperback of Strunk and White’s The Elements of Style. Arguably, for my generation, as Strunk died the year before I was born, this has been one of the most useful and inspiring books for any young writer or anyone seriously interested in writing.
Online Micro-Choices Shaping Remote Seminars, Teaching, and Learning
The move to online education has been a huge shift, dramatically hastened by the COVID-19 pandemic and the existence of technical options, such as online meeting platforms like Zoom and Teams. Decades of handwringing and resistance over moves toward more online instruction, seminars, and lectures have collapsed as universities not only accept this shift but support, if not require, it. In many respects, the move online has saved many educational institutions, and the new normal – whatever that ends up being – is almost certain to incorporate more online teaching and learning.
However, after participating in many online seminars, lectures, and conferences, I sense that it is time to focus far more attention on the micro-choices being made in the conduct of online teaching and learning. The question is no longer whether to be online or offline, but how to do online teaching and learning well.
There are books on teaching tips for graduate students and instructors, but fewer for the online world. That said, I imagine that most academics tend to follow the examples set by their own best teachers. Unfortunately, in the online world of education, there are fewer great examples on which developing teachers can model themselves. Moreover, I believe I am seeing so many problematic examples and trends emerging that the micro-choices underpinning them merit more critical discussion.
Take for example, the decision on whether or not to mute the audio and turn off the video of the audience – whether students or fellow colleagues. The convenor of an online session, such as over Zoom, can mute everyone but the speaker and turn off everyone’s video but the speaker’s video, or they can simply ask everyone but the speaker to mute their own audio and turn off their video while the speaker or teacher is presenting. Who has permission to share their screen is another micro-choice of a convenor.
Screen sharing enables people to show a slide, a graph, or any image or text they can put on their own screen to the group. For a small seminar with known participants, everyone can be enabled to share their screen. If the session is open to the public, or a larger group is brought together, screen sharing needs to be restricted to avoid problems such as Zoombombing, in which a malicious user shares a vulgar image. But it is easy to limit the meeting link to those invited, require a password to join, and restrict screen sharing to avoid such problems.
Muting everyone’s audio during a presentation seems to be good practice as well. It keeps unplanned household sounds, like barking dogs and crying babies, from interrupting a seminar. And individuals normally have a means to raise their hand to ask a question or make a comment, so they can be unmuted when speaking. That said, in a small group discussion, such as one following a lecture, I think individuals should decide on their own whether to mute, such as when their dog starts barking, but generally remain unmuted to be as interactive as possible during the discussion. When education is being socially distanced in so many ways by going online, any opportunity to enhance sociality and interaction online should be seriously considered.
In contrast, in my opinion, stopping everyone’s video is not a good practice. Unfortunately, I see this becoming a trend. In the earliest weeks and months of the pandemic and online meetings, people tended to be visible online all the time, even when their audio was muted. With my video on, you could see whether I was on the call and listening, or whether I was multitasking. If I had to leave or take a break, I could switch to a still photo of me or my initials until I was ready to engage again. More importantly, the speakers would know that they were speaking to real, live human beings, rather than talking to themselves in a dark room.
Over time, it is clear that more universities and conferences are moving to shut off the video of audiences, with video streaming on only for the speaker or the panelists. Often this means that no one is visible as the speaker is presenting slides – such as when talking behind the slides occupying center stage. Once a critical proportion of the audience starts shutting off their video, others feel pressured to do so as well, lest they appear too self-important. But it is for others, not for yourself, that it is good to be seen.
I have taken issue with this minimalist approach to video on the basis that it takes social distancing to an unacceptable and unjustifiable limit. Of course, I’ve heard justifications, such as maintaining focus on the material on the slides and keeping people from being distracted by images of audience members. Protecting the privacy of individuals and households is another. But there are many ways to protect the privacy of listeners, such as using a virtual background or sitting in front of a blank wall. I find such justifications to be weak rationales for avoiding social interaction.
Teaching or lecturing is not simply about transferring information. If that were so, a reading or video recording would be superior to a seminar. Most importantly, teaching or lecturing is about motivating the audience – students or colleagues – to see your topic as important and interesting and worthy of reading and learning more about. That means you need to engage them in the presentation and make sure they are engaged. In the classroom, you can tell if students are not engaged, even if – as was the case in many in-person classes – many are pressed against the back row of seats. You can see if the audience is engaged online as well, but only if you keep the video going both ways.
Also, you need to motivate the lecturer. Unless you are very shy or nervous about public speaking, I can’t think of anything more deflating than speaking to a set of initials or a blank screen, or simply reading your own slides. Cut off the video and you risk disengaging the speaker as well as the audience.
Obviously, I am a cranky, old colleague, easily annoyed, and opinionated. Fine if you disagree with my suggestions, but you should really think through these many micro-choices you make in presenting and speaking and listening online. Discuss them with those convening any seminar where you are presenting.
I accept and defend the right of teachers to present material to their classes in the ways they choose – assuming they are within the growing set of rules and guidelines set by educational institutions. Similarly, lecturers and speakers should be free to present in ways in which they are comfortable. But be careful that you don’t undermine your ability to engage, educate, and entertain your audience simply by following bad practices set by colleagues who are too cautious or conservative about the issues that might arise from social interaction. Don’t handicap yourself by speaking to an invisible audience, or by supporting the idea that invisibility has any place in engaging online teaching and learning.
Across most academic fields, researchers are increasingly focused on outreach to relevant practitioner and policy communities. It can sharpen their sense of the key questions but also enable their research to have greater application and impact. In contrast, within the field of cybersecurity, policymakers and practitioners from government, non-governmental organizations (NGOs) like the World Bank, and business and industry are more dominant in the production of research. Academic researchers play a relatively less active role. That said, research on cybersecurity could be greatly enhanced if a larger and more multidisciplinary collection of academic researchers could be engaged to focus on issues of cybersecurity and build collaborative relationships with the policy and practitioner communities.
Why is this the case, and what could be done to correct it?
The Dynamics Limiting Academia’s Role in Cybersecurity
I am but one of a growing set of multidisciplinary researchers with a focus on cybersecurity. The field is clearly engaging some top researchers and scholars from a variety of fields, evidenced by colleagues and centers at prominent universities, a growing number of journals and publications, and a dizzying number of events and conferences on topics within the field. Stellar academics, such as Professor David Clark at MIT, Professor Sadie Creese at Oxford University, and Bruce Schneier, a Fellow at the Berkman Center at Harvard, are strong examples. I would add Gabriella Coleman, a chaired professor at McGill University, and Professor Patrick Burkart at Texas A&M, to the list, even though they might not identify themselves as cybersecurity researchers. Many others could be added.
Nevertheless, compared with other fields, cybersecurity research appears to be dominated more by the practitioner and policy communities. Cybersecurity is not a discipline but a multidisciplinary field of study. But it remains less multidisciplinary and more anchored within the computer sciences than some related fields, such as Internet studies as one comparator with which I am familiar. A number of possible explanations for the different multidisciplinary balance of this field come to mind.
First, it is a relatively new field of academic research. It was preceded by studies of computer security, which were more computer-science centric, as they focused on technical advances in security systems. The development of shared computing systems, and the Internet in particular, has greatly expanded the range of users and devices linked to computer systems, reaching over 4 billion users in 2020. In many respects, the Internet drove the transition from computer security to cybersecurity research, and the field is therefore understandably young in relation to other academic fields of study.
Secondly, the concept of cybersecurity carries some of the baggage of its early stages. While the characterisations evoked by concepts are often crude, the term often conjures up images of men in suits employed by large institutions trying to keep young boys out of their systems. My MSU colleague, Ruth Shillair, reminded me of the 1983 movie War Games, based around a young hacker getting into the backdoor of a major military computer system in ways that threatened to launch a world war, but which left the audience cheering for the young hacker.
Today, big mainframe computers are less central than the billions of devices in households, businesses, industry, and governments across the world. Malicious users, rather than a child accidentally entering the backdoor of a military complex, are the norm. Yet cybersecurity carries some of this off-putting imagery from its early days into the present.
Thirdly, it is an incredibly important field of research for which there is great demand. Many rising academics in cybersecurity are snapped up for lucrative positions by business, industry, and government headhunters rather than by academia.
These are only a few of the many reasons for the relative lack of a stronger multidisciplinary research community. Initiatives that enhance the field’s multidisciplinary make-up could also bring more academics, and more academic disciplines, into the study of cybersecurity. How could this be changed?
What Needs to Be Done?
First, academics involved with research on cybersecurity need to do more to network among themselves. This is somewhat of a chicken-and-egg problem: when there are relatively few academics in a field, networking with one another seems less important. However, until researchers come together to better define the field and its priorities for research, it is harder for it to flourish. Similarly, there are so many pulls to work with practitioners and the policy communities in this area that academic collaboration may seem like a distraction. It is not; it is essential for cybersecurity to mature as an academic field of study.
Secondly, the field needs to identify and promote academic research on cybersecurity that addresses big questions with major implications for policy and practice. On this point, some of the research at Oxford’s Global Cyber Security Capacity Centre (GCSCC) has made a difference for nations across the world. For example, the research demonstrates that nations that have enhanced their cybersecurity capacity-building efforts have seriously improved the experiences of their nations’ Internet users. This is but one of many examples of work that is meeting needs in this new area of technological and organizational advances.
Thirdly, national governments need to place a greater priority on building this field of academia along with building their own cybersecurity capacities. Arguably, in the long run, a stronger academic field in cybersecurity will help nations advance cybersecurity capacity, such as by creating a larger pool of expertise and thought leadership in this area.
This would be possible through a number of initiatives, from simply taking a leadership role in identifying the importance of the field to encouraging the public research councils and other funding bodies to consider the development of grant support for multidisciplinary research on cybersecurity.
For example, the UK’s Economic and Social Research Council (ESRC) generated early funding for what became the Programme on Information and Communication Technologies (PICT). The establishment of PICT helped to draw leading researchers, such as the late Roger Silverstone, into the study of the social aspects of information and communication technologies. Such pump-priming helped put the UK in an early strategic international position in research on the societal aspects of the Internet and related digital media.
What factors are constraining the more rapid and widespread development of this field? What could be done to accelerate and deepen its development?
There are a host of other issues around whether policy makers and practitioners would value collaboration with academics, given that their time scales and methodologies can be so dramatically different. That is for another blog, but in the interim, I’d value your thoughts on whether you agree on the need and approaches to further develop the multidisciplinary study of cybersecurity within academia.
 See: Creese, S., Shillair, R., Bada, M., Reisdorf, B.C., Roberts, T., and Dutton, W. H. (2019), ‘The Cybersecurity Capacity of Nations’, pp. 165-179 in Graham, M., and Dutton, W. H. (eds), Society and the Internet: How Networks of Information and Communication are Changing our Lives, 2nd Edition. Oxford: Oxford University Press.
 My thanks to Caroline Weisser Harris for suggesting a focus on this question of why practitioners and policy makers might or might not value collaboration with academia.
Way too much talk, research, and handwringing are about how to stop people from seeing or believing disinformation, such as the latest conspiracy theories. But pushing governments, platforms, or anyone else to censor information is not only ineffective in the digital age but also likely to be dysfunctional – such as by activating the proverbial Barbra Streisand effect: you will only generate more interest in the information you want to censor. Moreover, you will not communicate the facts, narrative, or truth, as you see it.
Alternatively, think about two other ways to grapple with misinformation.
First, place greater trust in people – Internet users, for example – to be more intelligent and discerning. Almost every empirical study of how people actually use the Internet and related digital technologies like social media indicates that most people who are interested in a topic will look at multiple sources of information.* If they are uncertain or suspicious of one source, they will double- or triple-check the information, such as by using search or going to a trusted source like Wikipedia or an official Web site. Most theories that frighten us about being caught in an echo chamber or filter bubble of false information are technologically deterministic and do not look carefully at how people actually look for and use information. It is clear that the proponents of censorship almost always assume that people are stupid. Only they know how to find the correct information!
Secondly, and perhaps most importantly, put more effort into communicating the right news, information, or facts, rather than trying to block other information. It seems increasingly clear to me that too many government agencies and academic institutions – as two examples – are too complacent about reaching their audiences. They might set up a Web site and post a report online, but not really put major effort into reaching out to ensure that a larger audience is aware of the work, can access it, and can understand its message. Think about popular conspiracy theories, like QAnon. They have an evolving narrative and a distributed network of people sharing and helping to spread their messages. They are motivated and creative in getting this information out. Legitimate and more authoritative sources of information need to be just as clever, if not cleverer, and just as motivated and ingenious in figuring out how a narrative and various outlets can help them reach their audiences in ways that are not only digestible but compelling.
In the case of QAnon, I agree with a recent post by Abby Ohlheiser that it’s ‘too late to stop QAnon with fact checks and account bans’.** But it is not too late to stop being complacent about how you and your colleagues and organization communicate in this digital world. You need to be creative, smart and motivated to reach audiences. You may be an authority in your own eyes, but few people will come to you as a source of information. Putting something online won’t suffice. If you or your unit has important information, such as about protecting yourself in a pandemic, then you need to reach out to audiences that matter using all the tools available on Twitter, WordPress, Facebook, Instagram, TikTok, LinkedIn, and via the press.
As the hypocrite-in-chief here, at least I am writing this blog. But far more would need to be done to communicate this message. Agree?
* For example, see: Dutton, W. H., Reisdorf, B. C., Blank, G., Dubois, E., and Fernandez, L. (2019), ‘The Internet and Access to Information About Politics: Searching Through Filter Bubbles, Echo Chambers, and Disinformation’, pp. 228-247 in Graham, M., and Dutton, W. H. (eds), Society and the Internet: How Networks of Information and Communication are Changing our Lives, 2nd Edition. Oxford: Oxford University Press. An earlier version of this paper is online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2960697
For decades I have been concerned over the fragility of information and whether ephemerality or the transitory nature of information and communication is just an inevitable feature of the digital age. I therefore frequently look back at a talk I gave on the Internet to a conference of historians held in Oxford in the early 2000s. Given that I was speaking to historians, at a time when I was the founding director of the Oxford Internet Institute, one key theme of my talk concerned the major ways in which content on the Web was unlikely to be preserved. The Internet community did not have adequate plans and strategies for preserving the Internet, Web and related online content. I thought they would be engaged – if not frightened – by a shift of content to online media when it might mean losing much of our history with respect to data, documents, letters, and more.
My audience seemed interested but unmoved. A historian from the audience chatted with me after the talk to explain that this is not new. Historians have always worked by piecing together history from letters found in a shoebox stored in an attic, from tombstones, and so on – not from systematically recorded archives, even though fragments of such records exist in many libraries, museums, and archives. This is nothing new to efforts aimed at writing or reconstructing history.
This attitude frightened me even more. From my perspective, perhaps the historians had not seen anything yet. And I am continually reminded of this problem. Of course, there have been brilliant efforts to preserve online, digital content, such as the ‘Wayback Machine’, an initiative of the Internet Archive,[i] which indicates it has saved over 446 billion web pages. Yet the archive and its Wayback Machine have become a subscription service and have dropped out of the limelight they shared in the early days of the Web. The archive is also being limited by concerns over copyright that are leading it to reduce valuable services, such as its digital library.[ii]
But a recent and more personal experience brought all of this to the forefront of my thinking. I always print and save a hard copy of anything of significance (to me) that I write. That may seem quaint, but time and again it has saved me from losing work that was stored on out-of-date media, such as floppy discs, or in failing journals. I recently wanted to share a copy of a piece I did for a journal of the UK’s Economic and Social Research Council (ESRC), written in 1994, when I was director of an ESRC programme. This time my system failed me, and I could not find it in my files.
This was a short piece that the ESRC published in one of its journals, called Social Sciences. Being a social scientist, I focused the article on the problematic mindset of social scientists regarding outreach (Dutton 1994). Too often, I argued, a (social) scientist thought they were through with outreach once they published an article. The way I put it was that many social scientists believed in a sort of ‘trickle-down’ theory of outreach: once their work was published, the findings and their implications would eventually trickle down to those who might benefit from their insights.
Today, all disciplines of the sciences are far more focused on outreach and the impact of research. Many research assessment exercises require evidence of the impact of research as a basis for assessment. And individual academics, research units, departments and universities are becoming almost too focused on getting the word out to the world about their research and related achievements. Outreach has become a major aspect of contemporary academic and not-for-profit research enterprises. There is even an Association for Academic Outreach.[iii] One only needs to reflect on the innovative and competitive race to a vaccine for COVID-19, where at least 75 candidate vaccines are in preclinical or clinical evaluation[iv], to see how robust and important outreach has become. Nevertheless, outreach does not necessarily translate into preservation of academic work.
So – lo and behold – I could not find a copy of my piece on ‘Trickle-Down Social Science’. I recall seeing it in my files, but given moves back and forth across the Atlantic, it had vanished without a trace. I searched online for it, and found my books and articles that referenced it, but no copy of the article. I tried the Wayback Machine, but it was not on the Web, as the journal Social Sciences in those days did not put its publications online. I wrote to the ESRC, as they might have an archive of their journal. They kindly replied that not only did they not have a copy of the article (from that far back), but, more surprisingly, they did not even have a copy of Social Sciences in their archives. So, 1994 is such ancient history that even revered institutions like the ESRC do not keep copies of their publications. [A former student read this blog and sent me a photocopy, which I used to create a new version of my little viewpoint piece from a quarter-century earlier.]
Well, this little personal experience reminded me of my practice of keeping copies and reinforced the obvious conclusion that I need to preserve my own work, as I had tried to do, and do a more consistent job of it in the process! The toppling of real, analogue statues across the world selfishly reminded me of the need to preserve my own far less significant – if not insignificant – historical record and not to count on anyone else doing this for me.
So, preserve your own work and don’t rely on the Internet, the Web, big data, or any other person to save it. Take it from C. Wright Mills (1952): any academic should devote considerable time to their files. While Mills argued that maintaining one’s files was a central aspect of ‘intellectual craftsmanship’, even he did not focus on their preservation.
That said, if anyone has a copy of ‘Trickle-Down Social Science’, name your price. 😉
Finally, a short leaflet is available on the site, with comments on the book from Professors W. Lance Bennett, Michael X. Delli Carpini, and Laura DeNardis. I was not aware of these comments, with one exception, until today – so I am truly grateful to such stellar figures in the field for contributing their views on this volume.
Digital politics has been a burgeoning field for years, but with the approach of elections in the US and around the world – in the context of a pandemic, Brexit, and breaking cold wars – it could not be more pertinent. If you are considering texts for your (online) courses in political communication, media and politics, Internet studies, or digital politics, do take a look at the range and quality of perspectives offered by the contributors to this new book. Provide yourself and your students with valuable insights on issues framed for high-quality research.
List of Contributors:
Nick Anstead, London School of Economics and Political Science; Jay G. Blumler, University of Leeds and University of Maryland; Andrew Chadwick, Loughborough University; Stephen Coleman, University of Leeds; Alexi Drew, King’s College London and Charles University, Prague; Elizabeth Dubois, University of Ottawa; Laleah Fernandez, Michigan State University; Heather Ford, University of Technology Sydney; M. I. Franklin, Goldsmiths, University of London; Paolo Gerbaudo, King’s College London; Dave Karpf, George Washington University; Leah Lievrouw, University of California, Los Angeles; Wan-Ying Lin, City University of Hong Kong; Florian Martin-Bariteau, University of Ottawa; Declan McDowell-Naylor, Cardiff University; Giles Moss, University of Leeds; Ben O’Loughlin, Royal Holloway, University of London; Patrícia Rossini, University of Liverpool; Volker Schneider, University of Konstanz; Lone Sorensen, University of Huddersfield; Scott Wright, University of Melbourne; Xinzhi Zhang, Hong Kong Baptist University.
In the wake of the Coronavirus pandemic, with so many organizations and activities moving online, I’ve seen a remarkable push to ‘professionalize’ [for want of a better word] everything online. You might think that is a good thing, but to me it is undermining, if not destroying, the free and open culture of the Internet. For example, I can sit down, draft a blog, and post it in seconds, without fear, in the hope that a few people besides myself might enjoy it. It’s fun to share ideas and issues.
Increasingly I hear colleagues talking about doing an event online in a more ‘professional’ way. They want high production value, even though they are shooting a talk, not a major motion picture, or an interview for a major news channel. They need all the organisational trappings, corporate logos, and branding down to the right font.
Of course, I whine, protest, and argue that it is okay to relax a bit online – it can be more ‘Internety’, and that is fine, for that is what is special about the Internet and social media. But that message does not translate well for those trying to move their professional organizations, meetings, marketing, outreach, courses, and more onto the Internet – and they are bulldozing the culture of the Internet as they do.
I also see the consequences of this transition in my inbox. Email is increasingly dominated by messages from institutions, organizations, campaigns, candidates, and news organizations dressed in all their corporate style guides. Instead of a serious letter sent by snail mail on corporate letterhead, I get more emails with the image of a serious letter on corporate letterhead attached. It is like telemarketing has moved onto the Internet big time, giving me so much to delete before reading.
This invasion of professionalism into all the nooks and crannies of the Internet brings to mind the late John Perry Barlow. Every year I gain more respect for the vision of his 1996 ‘A Declaration of the Independence of Cyberspace’, which you can read here: https://www.eff.org/cyberspace-independence. If he were alive today, he would be so disappointed.